- Source: Standard score
In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores.
It is calculated by subtracting the population mean from an individual raw score and then dividing the difference by the population standard deviation. This process of converting a raw score into a standard score is called standardizing or normalizing (however, "normalizing" can refer to many types of ratios; see Normalization for more).
Standard scores are most commonly called z-scores; the two terms may be used interchangeably, as they are in this article. Other equivalent terms in use include z-value, z-statistic, normal score, standardized variable and pull in high energy physics.
Computing a z-score requires knowledge of the mean and standard deviation of the complete population to which a data point belongs; if one only has a sample of observations from the population, then the analogous computation using the sample mean and sample standard deviation yields the t-statistic.
Calculation
If the population mean and population standard deviation are known, a raw score
x is converted into a standard score by
{\displaystyle z={x-\mu \over \sigma }}
where:
μ is the mean of the population,
σ is the standard deviation of the population.
The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above.
Calculating z using this formula requires use of the population mean and the population standard deviation, not the sample mean or sample deviation. However, knowing the true mean and standard deviation of a population is often an unrealistic expectation, except in cases such as standardized testing, where the entire population is measured.
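As a quick illustration of the formula above, a minimal sketch in Python (the function name z_score is ours, not a standard library routine):

```python
def z_score(x, mu, sigma):
    """Standard score: distance of raw score x from the population mean mu,
    in units of the population standard deviation sigma."""
    return (x - mu) / sigma

# A raw score one standard deviation above the mean has z = 1,
# and one standard deviation below has z = -1.
print(z_score(110, 100, 10))  # 1.0
print(z_score(90, 100, 10))   # -1.0
```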
When the population mean and the population standard deviation are unknown, the standard score may be estimated by using the sample mean and sample standard deviation as estimates of the population values.
In these cases, the z-score is given by
{\displaystyle z={x-{\bar {x}} \over S}}
where:
{\displaystyle {\bar {x}}} is the mean of the sample,
S is the standard deviation of the sample.
Though it should always be stated, the distinction between use of the population and sample statistics often is not made. In either case, the numerator and denominator of the equations have the same units of measure so that the units cancel out through division and z is left as a dimensionless quantity.
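When only a sample is available, the same computation with sample estimates can be sketched as follows (illustrative; statistics.stdev applies the usual n − 1 correction):

```python
import statistics

def sample_z_score(x, sample):
    """Estimated standard score of x, using the sample mean and the
    sample standard deviation S (Bessel-corrected, i.e. divides by n - 1)."""
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)
    return (x - x_bar) / s

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(sample_z_score(5, data))  # 0.0 -- x equals the sample mean
print(sample_z_score(9, data))  # about 1.87
```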
Applications
= Z-test =
The z-score is often used in the z-test in standardized testing – the analog of the Student's t-test for a population whose parameters are known, rather than estimated. As it is very unusual to know the entire population, the t-test is much more widely used.
= Prediction intervals =
The standard score can be used in the calculation of prediction intervals. A prediction interval [L, U], consisting of a lower endpoint designated L and an upper endpoint designated U, is an interval such that a future observation X will lie in the interval with high probability
{\displaystyle \gamma }, i.e.
{\displaystyle P(L<X<U)=\gamma ,}
For the standard score Z of X, this gives:
{\displaystyle P\left({\frac {L-\mu }{\sigma }}<Z<{\frac {U-\mu }{\sigma }}\right)=\gamma .}
By determining the quantile z such that
{\displaystyle P\left(-z<Z<z\right)=\gamma }
it follows:
{\displaystyle L=\mu -z\sigma ,\ U=\mu +z\sigma }
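Under a normality assumption, the relations above translate directly into code; a sketch using the standard library's NormalDist (Python 3.8+) for the quantile:

```python
from statistics import NormalDist

def prediction_interval(mu, sigma, gamma):
    """Return (L, U) with P(L < X < U) = gamma for a normal X ~ N(mu, sigma)."""
    # Quantile z such that P(-z < Z < z) = gamma for standard normal Z:
    z = NormalDist().inv_cdf((1 + gamma) / 2)
    return mu - z * sigma, mu + z * sigma

low, high = prediction_interval(mu=100, sigma=15, gamma=0.95)
print(low, high)  # roughly 70.6 and 129.4 (z is about 1.96)
```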
= Process control =
In process control applications, the Z value provides an assessment of the degree to which a process is operating off-target.
= Comparison of scores measured on different scales: ACT and SAT =
When scores are measured on different scales, they may be converted to z-scores to aid comparison. Dietz et al. give the following example, comparing student scores on the (old) SAT and ACT high school tests. The table below shows the mean and standard deviation for total scores on the SAT and ACT. Suppose that student A scored 1800 on the SAT, and student B scored 24 on the ACT. Which student performed better relative to other test-takers?

Test  Mean  Standard deviation
SAT   1500  300
ACT     21    5
The z-score for student A is
{\displaystyle z={x-\mu \over \sigma }={1800-1500 \over 300}=1}
The z-score for student B is
{\displaystyle z={x-\mu \over \sigma }={24-21 \over 5}=0.6}
Because student A has a higher z-score than student B, student A performed better compared to other test-takers than did student B.
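The two computations above can be checked in a few lines (the figures are those from the Dietz et al. example):

```python
def z_score(x, mu, sigma):
    return (x - mu) / sigma

z_a = z_score(1800, mu=1500, sigma=300)  # student A, SAT
z_b = z_score(24, mu=21, sigma=5)        # student B, ACT
print(z_a)        # 1.0
print(z_b)        # 0.6
print(z_a > z_b)  # True: A did better relative to other test-takers
```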
= Percentage of observations below a z-score =
Continuing the example of ACT and SAT scores, if it can be further assumed that both ACT and SAT scores are normally distributed (which is approximately correct), then the z-scores may be used to calculate the percentage of test-takers who received lower scores than students A and B.
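Under that normality assumption, the fraction below a given z-score is the standard normal CDF, which can be computed with math.erf alone (a sketch):

```python
from math import erf, sqrt

def percent_below(z):
    """Percentage of a normal population scoring below z standard deviations
    above the mean: 100 * Phi(z), where Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 100 * (1 + erf(z / sqrt(2))) / 2

print(round(percent_below(1.0), 1))  # 84.1 -- student A outscored about 84% of test-takers
print(round(percent_below(0.6), 1))  # 72.6 -- student B, about 73%
```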
= Cluster analysis and multidimensional scaling =
"For some multivariate techniques such as multidimensional scaling and cluster analysis, the concept of distance between the units in the data is often of considerable interest and importance… When the variables in a multivariate data set are on different scales, it makes more sense to calculate the distances after some form of standardization."
= Principal components analysis =
In principal components analysis, "Variables measured on different scales or on a common scale with widely differing ranges are often standardized."
= Relative importance of variables in multiple regression: standardized regression coefficients =
Standardization of variables prior to multiple regression analysis is sometimes used as an aid to interpretation.
One source (page 95) states the following:
"The standardized regression slope is the slope in the regression equation if X and Y are standardized … Standardization of X and Y is done by subtracting the respective means from each set of observations and dividing by the respective standard deviations … In multiple regression, where several X variables are used, the standardized regression coefficients quantify the relative contribution of each X variable."
However, Kutner et al. (p 278) give the following caveat: "… one must be cautious about interpreting any regression coefficients, whether standardized or not. The reason is that when the predictor variables are correlated among themselves, … the regression coefficients are affected by the other predictor variables in the model … The magnitudes of the standardized regression coefficients are affected not only by the presence of correlations among the predictor variables but also by the spacings of the observations on each of these variables. Sometimes these spacings may be quite arbitrary. Hence, it is ordinarily not wise to interpret the magnitudes of standardized regression coefficients as reflecting the comparative importance of the predictor variables."
Standardizing in mathematical statistics
In mathematical statistics, a random variable X is standardized by subtracting its expected value
{\displaystyle \operatorname {E} [X]}
and dividing the difference by its standard deviation
{\displaystyle \sigma (X)={\sqrt {\operatorname {Var} (X)}}:}
{\displaystyle Z={X-\operatorname {E} [X] \over \sigma (X)}}
If the random variable under consideration is the sample mean of a random sample
{\displaystyle X_{1},\dots ,X_{n}}
of X:
{\displaystyle {\bar {X}}={1 \over n}\sum _{i=1}^{n}X_{i}}
then the standardized version is
{\displaystyle Z={\frac {{\bar {X}}-\operatorname {E} [{\bar {X}}]}{\sigma (X)/{\sqrt {n}}}}}
where the variance of the sample mean is calculated as follows (using the independence of the observations in the first step):
{\displaystyle {\begin{array}{l}\operatorname {Var} \left(\sum x_{i}\right)=\sum \operatorname {Var} (x_{i})=n\operatorname {Var} (x_{i})=n\sigma ^{2}\\\operatorname {Var} ({\overline {X}})=\operatorname {Var} \left({\frac {\sum x_{i}}{n}}\right)={\frac {1}{n^{2}}}\operatorname {Var} \left(\sum x_{i}\right)={\frac {n\sigma ^{2}}{n^{2}}}={\frac {\sigma ^{2}}{n}}\end{array}}}
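The σ²/n result can be checked empirically; a small simulation sketch with i.i.d. uniform draws (Uniform(0, 1) has variance 1/12):

```python
import random
import statistics

random.seed(0)
n, trials = 25, 20000
# Sample means of n i.i.d. Uniform(0, 1) draws; since Var(X) = 1/12, the
# variance of the sample mean should be close to (1/12) / 25 = 1/300.
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]
print(statistics.variance(means))  # close to 1/300, i.e. about 0.0033
```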
T-score
In educational assessment, T-score is a standard score Z shifted and scaled to have a mean of 50 and a standard deviation of 10.
In bone density measurements, the T-score is the standard score of the measurement compared to the population of healthy 30-year-old adults, and has the usual mean of 0 and standard deviation of 1.
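The educational T-score is just an affine rescaling of the standard score; a one-line sketch:

```python
def t_score(z):
    """Educational-assessment T-score: standard score shifted and scaled
    to mean 50 and standard deviation 10."""
    return 50 + 10 * z

print(t_score(0.0))   # 50.0 -- exactly average
print(t_score(-1.5))  # 35.0 -- 1.5 SDs below the mean
```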
See also
Coefficient of variation
Error function
Mahalanobis distance
Normalization (statistics)
Omega ratio
Standard normal deviate
Studentized residual
References
Further reading
Carroll, Susan Rovezzi; Carroll, David J. (2002). Statistics Made Simple for School Leaders (illustrated ed.). Rowman & Littlefield. ISBN 978-0-8108-4322-6. Retrieved 7 June 2009.
Larsen, Richard J.; Marx, Morris L. (2000). An Introduction to Mathematical Statistics and Its Applications (Third ed.). p. 282. ISBN 0-13-922303-7.
External links
z-score calculator