I am interested in learning more deeply about the number 1.96 used in 95% confidence intervals with a normal distribution.
More specifically, could someone provide a numerical example of this, showing how 1.96 is calculated as the 97.5th percentile, or point me to somewhere it is shown in more detail?
Any help would be really appreciated.
Best,
Andrew
1 Answer
$$X \sim N(\mu,\sigma^2)$$
$$P \bigg( \mu - 1.96\sigma < X < \mu + 1.96\sigma\bigg) = 0.95$$
$$P\bigg(X < \mu - 1.96\sigma\bigg) = 0.025$$
$$P\bigg(X < \mu + 1.96\sigma\bigg) = 0.975$$
In English, if you go 1.96 standard deviations from the mean in both directions, you account for 95% of the density. By symmetry, you end up with 2.5% of the density in either tail that is further than 1.96 standard deviations from the mean.
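For a concrete calculation: 1.96 is just the standard normal quantile function (inverse CDF) evaluated at 0.975. Here is a minimal Python sketch using SciPy; any normal table or library with an inverse normal CDF gives the same number.

```python
from scipy.stats import norm

# 1.96 is the 97.5th percentile of the standard normal distribution:
# the point z with P(Z < z) = 0.975, leaving 2.5% in the upper tail.
z = norm.ppf(0.975)
print(z)                           # 1.959963984540054

# Sanity check: the density between -z and +z is 95%,
# with 2.5% in each tail by symmetry.
print(norm.cdf(z) - norm.cdf(-z))  # 0.95
```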
What this means in statistics is that, when you have the sampling distribution of the mean of a normal variable, the standard deviation of that sampling distribution is the standard error of the estimate, $\sigma/\sqrt{n}$. So to get a 95% confidence interval, you go 1.96 standard errors in either direction from the estimate.
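To see this numerically, here is a simulation sketch (the population values $\mu = 10$, $\sigma = 2$ and sample size $n = 25$ are arbitrary choices for illustration): intervals of the form $\bar{x} \pm 1.96\,\sigma/\sqrt{n}$ should contain the true mean about 95% of the time.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 2.0, 25       # illustrative population parameters
se = sigma / np.sqrt(n)            # standard error of the sample mean

# Draw many samples of size n and compute each sample mean.
trials = 100_000
x_bar = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)

# Fraction of intervals x_bar +/- 1.96*se that cover the true mean.
covered = (x_bar - 1.96 * se < mu) & (mu < x_bar + 1.96 * se)
print(covered.mean())              # close to 0.95
```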