Please answer the below

I'm trying to study for my Statistics course and need some help understanding this question.

Use the Excel file General-Electric and import it into SPSS. This file contains GE's daily stock market data covering the period 12/13/2010 to 12/11/2018. The file contains a total of 2013 daily transaction records, including the date, the opening price of the GE stock for the day, the highest price, the lowest price, the closing price, the closing price adjusted for dividends, and the number of shares traded (volume).

1- Use the Explore command in SPSS and explain whether the trading volume of the stock is normally distributed. Make sure to discuss skewness, kurtosis, and the results of the tests of normality, as well as the Q-Q plots.
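A minimal SPSS syntax sketch for this step, assuming the volume variable is named Volume (adjust to your file):

```
* Explore the distribution of daily trading volume.
* NPPLOT requests the normality tests and normal Q-Q plots;
* DESCRIPTIVES includes skewness and kurtosis.
EXAMINE VARIABLES=Volume
  /PLOT BOXPLOT HISTOGRAM NPPLOT
  /STATISTICS DESCRIPTIVES.
```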

2- Select a random sample of exactly 125 observations, then run the Descriptives command and calculate the mean and standard deviation of the sample. Repeat this process (i.e., selecting a random sample and running Descriptives) exactly 50 times. Hint: use SPSS syntax to repeat the command. List both values (the mean and the standard deviation) in a new Excel file with proper column headings.
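One way to script the 50 repetitions is with the SPSS macro facility. This is a sketch only: the dataset name ge and the variable name Volume are assumptions, and the 50 means and standard deviations are then copied from the output into the new Excel file.

```
* Draw 50 independent random samples of exactly 125 of the
* 2013 cases and print the mean and SD of each draw.
DATASET NAME ge.
DEFINE !samples ()
!DO !i = 1 !TO 50
DATASET ACTIVATE ge.
DATASET COPY temp.
DATASET ACTIVATE temp.
SAMPLE 125 FROM 2013.
DESCRIPTIVES VARIABLES=Volume /STATISTICS=MEAN STDDEV.
DATASET CLOSE temp.
!DOEND
!ENDDEFINE.
!samples.
```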

3- Import the newly created Excel file into SPSS and create a histogram of both the calculated means and the calculated standard deviations.
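A sketch of the histogram step, assuming the imported columns are named SampleMean and SampleSD:

```
GRAPH /HISTOGRAM=SampleMean.
GRAPH /HISTOGRAM=SampleSD.
```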

4- Run the Explore command, as you did in step 1, for both variables and record your observations. Does the Central Limit Theorem (CLT) apply to both measures?
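The corresponding explore step for the two sampled statistics (same assumed column names as above):

```
EXAMINE VARIABLES=SampleMean SampleSD
  /PLOT HISTOGRAM NPPLOT
  /STATISTICS DESCRIPTIVES.
```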

5- Suppose you believe that the true average daily trade volume for General Electric stock is 49,829,719 shares. Based on a recent sample, you have also calculated a standard deviation of 21,059,637 shares. Considering a 95% confidence level, what is the minimum required sample size if you want the sampling error limited to 10,000,000 shares? What sample size would offer a sampling error of not more than 20,000,000 shares? Assuming N=2013 represents the total population size, how will your calculations change for the finite population?
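A sketch of the standard calculation, using the usual normal-approximation margin-of-error formula with z = 1.96 for 95% confidence (round n up):

```latex
n = \left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^{2}
  = \left(\frac{1.96 \times 21{,}059{,}637}{10{,}000{,}000}\right)^{2}
  \approx 17.04 \;\Rightarrow\; n = 18
```

For E = 20,000,000 the same formula gives about 4.26, so n = 5. With the finite population correction n' = n / (1 + (n-1)/N) and N = 2013, the corrected sizes are 17.85 → 18 and 4.99 → 5, i.e., essentially unchanged, because both samples are tiny relative to the population.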

6- Is there a statistically significant difference between the average trading volume in 2017 and 2018? Hint: while technically this could be carried out as a paired-samples t-test, since volume data are reported for the same stock, we will treat the two years as independent samples. Complete your calculations by hand, assuming M2017 = 46,108,055, S2017 = 34,099,055, n2017 = 251, M2018 = 87,241,844, S2018 = 50,977,722, and n2018 = 238.
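A sketch of the hand calculation in the unequal-variances (Welch) form; the pooled-variance form leads to the same conclusion here:

```latex
t = \frac{M_{2018} - M_{2017}}{\sqrt{\frac{S_{2017}^{2}}{n_{2017}} + \frac{S_{2018}^{2}}{n_{2018}}}}
  = \frac{87{,}241{,}844 - 46{,}108{,}055}{\sqrt{\frac{34{,}099{,}055^{2}}{251} + \frac{50{,}977{,}722^{2}}{238}}}
  \approx \frac{41{,}133{,}789}{3{,}943{,}540} \approx 10.4
```

With several hundred degrees of freedom, a |t| this large corresponds to p < .001, so the difference in average trading volume is statistically significant.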

Repeat the test, this time using SPSS. Hint: create a new grouping variable for 2017 and 2018 and use it to run your test.
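A sketch of the SPSS version, assuming the date and volume variables are named Date and Volume:

```
* Build the grouping variable from the trade date and
* run the independent-samples t-test.
COMPUTE Year = XDATE.YEAR(Date).
EXECUTE.
T-TEST GROUPS=Year(2017 2018)
  /VARIABLES=Volume.
```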


Please answer the below

Select a subset of user stories and create a sprint backlog document. Break each user story down into multiple tasks, and allocate the time spent on each task over the duration of the sprint (typically for each day in the sprint). Normally the user stories chosen for a sprint would be the functionality of the system to be implemented, but since we are not implementing the system, take half of your user stories, break them down into tasks, and assign how much time is spent on each task (because we only have 2 sprints left, choose half the user stories for this sprint).
In addition to the backlog, create a burndown chart to show the rate of progress, either via story points completed or hours burned.
Please find attached the Sprint 1 backlog example document for reference.
Please find attached the online payment system document, which contains the users and user stories we need to work from to prepare the sprint backlog and burndown chart.
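A minimal illustration of the backlog layout, with a hypothetical user story and hours; the attached Sprint 1 example shows the full format:

```
User story          Task                     Day 1  Day 2  Day 3  Day 4  Day 5
US-3 Pay by card    Design payment form          4      2      -      -      -
US-3 Pay by card    Validate card details        -      3      3      -      -
US-3 Pay by card    Write acceptance tests       -      -      2      3      1
```

The burndown chart then plots, for each day of the sprint, the total hours (or story points) remaining after that day's work is subtracted.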
There are four total initial posts and four additional responses required. All the files mentioned in the discussion posts are in the attached zip file. I have also included a Word document titled "Criteria for Discussions," which outlines each discussion, which files can be used, and the due date.

Computer Science Question

Build the DOCTIME Knowledge Engine. Purpose: to reinforce the knowledge you have gained so far about databases, specifically by developing a database from business rules.
Instructions
Required items:
1. The business rules from the Produce an ER Diagram assignment
2. The ERD you constructed as a result of the Produce an ER Diagram assignment
3. The 3NF dependency diagram you produced as part of the Produce a 3NF Dependency Diagram assignment
4. The final result of the ETL: SQL Statements exercise

Project deliverables:
1. Use the steps below to build the database in MariaDB.
2. Produce 5 PDF documents (use the Save As or Export command to save as PDF). The submission in BB should contain: BusinessRules.pdf, ERD.pdf, 3NF Dependency Diagram.pdf, SQL Documentation.pdf, and Database Construction Screen Captures.pdf.
3. The submission in BB should look similar to the example below.


Computer Science Question

The topics of reporting and of expert witness testimony are critical aspects of digital forensics. In 500 words, and using your own words, explain one thing that makes such reports and testimony more compelling. Then explain one way in which failing to write effectively could negatively impact your report or testimony.

