Power transform
In statistics, a power transform is a family of functions applied to create a monotonic transformation of data using power functions. It is a data transformation technique used to stabilize variance, make the data more normal distribution-like, improve the validity of measures of association (such as the Pearson correlation between variables), and for other data stabilization procedures.
Power transforms are used in multiple fields, including multi-resolution and wavelet analysis, statistical data analysis, medical research, modeling of physical processes, geochemical data analysis, epidemiology and many other clinical, environmental and social research areas.
Definition
The power transformation is defined as a continuous function of power parameter λ, typically given in piece-wise form that makes it continuous at the point of singularity (λ = 0). For data vectors (y1,..., yn) in which each yi > 0, the power transform is
{\displaystyle y_{i}^{(\lambda )}={\begin{cases}{\dfrac {y_{i}^{\lambda }-1}{\lambda (\operatorname {GM} (y))^{\lambda -1}}},&{\text{if }}\lambda \neq 0\\[12pt]\operatorname {GM} (y)\ln {y_{i}},&{\text{if }}\lambda =0\end{cases}}}
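The definition above can be checked numerically. The following is a minimal NumPy sketch (the function names are invented for this illustration); it implements the rescaled transform and verifies that a very small λ reproduces the λ = 0 branch, confirming the continuity built into the piecewise definition:

```python
import numpy as np

def gm(y):
    """Geometric mean of a positive data vector."""
    return np.exp(np.mean(np.log(y)))

def scaled_power_transform(y, lam):
    """Power transform rescaled by GM(y)^(lambda - 1), per the definition above."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return gm(y) * np.log(y)
    return (y**lam - 1) / (lam * gm(y)**(lam - 1))

y = np.array([1.0, 2.0, 5.0, 10.0])
# Continuity at lambda = 0: a tiny lambda should match the log branch
near0 = scaled_power_transform(y, 1e-6)
at0 = scaled_power_transform(y, 0)
print(np.max(np.abs(near0 - at0)))  # difference is negligible
```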
where
{\displaystyle \operatorname {GM} (y)=\left(\prod _{i=1}^{n}y_{i}\right)^{\frac {1}{n}}={\sqrt[{n}]{y_{1}y_{2}\cdots y_{n}}}\,}
is the geometric mean of the observations y1, ..., yn. The case λ = 0 is the limit as λ approaches 0. To see this, note that
{\displaystyle y_{i}^{\lambda }=\exp({\lambda \ln(y_{i})})=1+\lambda \ln(y_{i})+O((\lambda \ln(y_{i}))^{2})}
using a Taylor series. Then
{\displaystyle {\dfrac {y_{i}^{\lambda }-1}{\lambda }}=\ln(y_{i})+O(\lambda )}
and everything but ln(y_i) becomes negligible for λ sufficiently small.
The inclusion of the (λ − 1)th power of the geometric mean in the denominator simplifies the scientific interpretation of any equation involving y_i^{(λ)}, because the units of measurement do not change as λ changes.
Box and Cox (1964) introduced the geometric mean into this transformation by first including the Jacobian of the rescaled power transformation (y^λ − 1)/λ with the likelihood. This Jacobian is as follows:
{\displaystyle J(\lambda ;y_{1},\ldots ,y_{n})=\prod _{i=1}^{n}|dy_{i}^{(\lambda )}/dy|=\prod _{i=1}^{n}y_{i}^{\lambda -1}=\operatorname {GM} (y)^{n(\lambda -1)}}
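The closing identity, ∏ y_i^{λ−1} = GM(y)^{n(λ−1)}, follows from the definition of the geometric mean and can be confirmed numerically (a small illustrative check, with arbitrary simulated data):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.uniform(0.5, 4.0, size=10)  # arbitrary positive observations
lam = 0.7
gm = np.exp(np.mean(np.log(y)))     # geometric mean GM(y)

jacobian = np.prod(y**(lam - 1))            # product of derivatives y_i^(lambda-1)
identity = gm**(len(y) * (lam - 1))         # GM(y)^(n(lambda-1))
print(jacobian, identity)                   # the two agree
```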
This allows the normal log likelihood at its maximum to be written as follows:
{\displaystyle {\begin{aligned}\log({\mathcal {L}}({\hat {\mu }},{\hat {\sigma }}))&=(-n/2)(\log(2\pi {\hat {\sigma }}^{2})+1)+n(\lambda -1)\log(\operatorname {GM} (y))\\[5pt]&=(-n/2)(\log(2\pi {\hat {\sigma }}^{2}/\operatorname {GM} (y)^{2(\lambda -1)})+1).\end{aligned}}}
From here, absorbing GM(y)^{2(λ − 1)} into the expression for σ̂² produces an expression that establishes that minimizing the sum of squares of residuals from y_i^{(λ)} is equivalent to maximizing the sum of the normal log likelihood of deviations from (y^λ − 1)/λ and the log of the Jacobian of the transformation.
The value at Y = 1 for any λ is 0, and the derivative with respect to Y there is 1 for any λ. Sometimes Y is a version of some other variable scaled to give Y = 1 at some sort of average value.
The transformation is a power transformation, but done in such a way as to make it continuous with the parameter λ at λ = 0. It has proved popular in regression analysis, including econometrics.
Box and Cox also proposed a more general form of the transformation that incorporates a shift parameter:
{\displaystyle \tau (y_{i};\lambda ,\alpha )={\begin{cases}{\dfrac {(y_{i}+\alpha )^{\lambda }-1}{\lambda (\operatorname {GM} (y+\alpha ))^{\lambda -1}}}&{\text{if }}\lambda \neq 0,\\\\\operatorname {GM} (y+\alpha )\ln(y_{i}+\alpha )&{\text{if }}\lambda =0,\end{cases}}}
which holds if yi + α > 0 for all i. If τ(Y, λ, α) follows a truncated normal distribution, then Y is said to follow a Box–Cox distribution.
Bickel and Doksum eliminated the need to use a truncated distribution by extending the range of the transformation to all y, as follows:
{\displaystyle \tau (y_{i};\lambda ,\alpha )={\begin{cases}{\dfrac {\operatorname {sgn} (y_{i}+\alpha )|y_{i}+\alpha |^{\lambda }-1}{\lambda (\operatorname {GM} (y+\alpha ))^{\lambda -1}}}&{\text{if }}\lambda \neq 0,\\\\\operatorname {GM} (y+\alpha )\operatorname {sgn} (y+\alpha )\ln(y_{i}+\alpha )&{\text{if }}\lambda =0,\end{cases}}}
where sgn(·) is the sign function. This change in definition has little practical import as long as α is less than min(y_i), which it usually is.
Bickel and Doksum also proved that the parameter estimates are consistent and asymptotically normal under appropriate regularity conditions, though the standard Cramér–Rao lower bound can substantially underestimate the variance when parameter values are small relative to the noise variance. However, this problem of underestimating the variance may not be a substantive problem in many applications.
Box–Cox transformation
The one-parameter Box–Cox transformations are defined as
{\displaystyle y_{i}^{(\lambda )}={\begin{cases}{\dfrac {y_{i}^{\lambda }-1}{\lambda }}&{\text{if }}\lambda \neq 0,\\\ln y_{i}&{\text{if }}\lambda =0,\end{cases}}}
and the two-parameter Box–Cox transformations as
{\displaystyle y_{i}^{({\boldsymbol {\lambda }})}={\begin{cases}{\dfrac {(y_{i}+\lambda _{2})^{\lambda _{1}}-1}{\lambda _{1}}}&{\text{if }}\lambda _{1}\neq 0,\\\ln(y_{i}+\lambda _{2})&{\text{if }}\lambda _{1}=0,\end{cases}}}
as described in the original article. Moreover, the first transformation holds for y_i > 0, and the second for y_i > −λ_2.
The parameter λ is estimated using the profile likelihood function and goodness-of-fit tests.
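In practice, this estimation is available off the shelf: SciPy's `scipy.stats.boxcox` maximizes the profile likelihood when no λ is supplied. A brief sketch on simulated lognormal data (for which the true optimal λ is 0, since the log transform normalizes lognormal data exactly):

```python
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.6, size=500)  # skewed, strictly positive data

# With lmbda=None (the default), boxcox estimates lambda by profile likelihood
yt, lam_hat = boxcox(y)
print(f"estimated lambda: {lam_hat:.3f}")  # close to 0 for lognormal data

# The transformed sample is much closer to symmetric than the raw one
print(f"skewness before: {skew(y):.2f}, after: {skew(yt):.2f}")
```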
Confidence interval
Confidence intervals for the Box–Cox transformation can be constructed asymptotically using Wilks's theorem on the profile likelihood function, by finding all values of λ that satisfy the following restriction:
{\displaystyle \ln {\big (}L(\lambda ){\big )}\geq \ln {\big (}L({\hat {\lambda }}){\big )}-{\frac {1}{2}}{\chi ^{2}}_{1,1-\alpha }.}
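This restriction can be applied directly by scanning a grid of λ values against the profile log-likelihood threshold, a sketch using SciPy's `boxcox_llf` (the Box–Cox log-likelihood) and the χ² quantile; the data here are simulated:

```python
import numpy as np
from scipy.stats import boxcox, boxcox_llf, chi2

rng = np.random.default_rng(1)
y = rng.lognormal(0.0, 0.5, 200)  # positive, right-skewed sample

_, lam_hat = boxcox(y)  # MLE of lambda via profile likelihood
# Wilks threshold: llf must stay within chi2_{1,0.95}/2 of the maximum
cutoff = boxcox_llf(lam_hat, y) - 0.5 * chi2.ppf(0.95, df=1)

# Every lambda on the grid whose log-likelihood clears the cutoff is in the CI
grid = np.linspace(-2, 2, 801)
llf = np.array([boxcox_llf(l, y) for l in grid])
inside = grid[llf >= cutoff]
print(f"lambda_hat = {lam_hat:.3f}, "
      f"approx. 95% CI = [{inside.min():.3f}, {inside.max():.3f}]")
```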
Example
The BUPA liver data set contains data on liver enzymes ALT and γGT. Suppose we are interested in using log(γGT) to predict ALT. A plot of the data appears in panel (a) of the figure. There appears to be non-constant variance, and a Box–Cox transformation might help.
The log-likelihood of the power parameter appears in panel (b). The horizontal reference line is at a distance of χ²₁/2 from the maximum and can be used to read off an approximate 95% confidence interval for λ. It appears as though a value close to zero would be good, so we take logs.
Possibly, the transformation could be improved by adding a shift parameter to the log transformation. Panel (c) of the figure shows the log-likelihood. In this case, the maximum of the likelihood is close to zero suggesting that a shift parameter is not needed. The final panel shows the transformed data with a superimposed regression line.
Note that although Box–Cox transformations can make big improvements in model fit, there are some issues that the transformation cannot help with. In the current example, the data are rather heavy-tailed so that the assumption of normality is not realistic and a robust regression approach leads to a more precise model.
Econometric application
Economists often characterize production relationships by some variant of the Box–Cox transformation.
Consider a common representation of production Q as dependent on services provided by a capital stock K and by labor hours N:
{\displaystyle \tau (Q)=\alpha \tau (K)+(1-\alpha )\tau (N).\,}
Solving for Q by inverting the Box–Cox transformation we find
{\displaystyle Q={\big (}\alpha K^{\lambda }+(1-\alpha )N^{\lambda }{\big )}^{1/\lambda },\,}
which is known as the constant elasticity of substitution (CES) production function.
The CES production function is a homogeneous function of degree one.
When λ = 1, this produces the linear production function:
{\displaystyle Q=\alpha K+(1-\alpha )N.\,}
When λ → 0 this produces the famous Cobb–Douglas production function:
{\displaystyle Q=K^{\alpha }N^{1-\alpha }.\,}
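Both special cases can be verified numerically; the following sketch (with arbitrary illustrative values of K, N, and α) evaluates the CES form at λ = 1 and at a λ near zero:

```python
import numpy as np

def ces(K, N, alpha, lam):
    """CES production function, obtained by inverting the Box-Cox transform."""
    return (alpha * K**lam + (1 - alpha) * N**lam) ** (1 / lam)

K, N, alpha = 4.0, 9.0, 0.3

# lambda = 1 recovers the linear production function alpha*K + (1-alpha)*N
print(ces(K, N, alpha, 1.0))  # 0.3*4 + 0.7*9 = 7.5

# lambda -> 0 approaches the Cobb-Douglas form K^alpha * N^(1-alpha)
print(ces(K, N, alpha, 1e-6), K**alpha * N**(1 - alpha))
```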
Activities and demonstrations
The SOCR resource pages contain a number of hands-on interactive activities demonstrating the Box–Cox (power) transformation using Java applets and charts. These directly illustrate the effects of this transform on Q–Q plots, X–Y scatterplots, time-series plots and histograms.
Yeo–Johnson transformation
The Yeo–Johnson transformation also allows for zero and negative values of y. The parameter λ can be any real number, with λ = 1 producing the identity transformation. The transformation law reads:
{\displaystyle y_{i}^{(\lambda )}={\begin{cases}((y_{i}+1)^{\lambda }-1)/\lambda &{\text{if }}\lambda \neq 0,y\geq 0\\[4pt]\ln(y_{i}+1)&{\text{if }}\lambda =0,y\geq 0\\[4pt]-((-y_{i}+1)^{(2-\lambda )}-1)/(2-\lambda )&{\text{if }}\lambda \neq 2,y<0\\[4pt]-\ln(-y_{i}+1)&{\text{if }}\lambda =2,y<0\end{cases}}}
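The four branches translate directly into a vectorized implementation. A minimal NumPy sketch (the function name is invented; SciPy and scikit-learn ship production versions), checking that λ = 1 is indeed the identity:

```python
import numpy as np

def yeo_johnson(y, lam):
    """Yeo-Johnson transform, defined for all real y; mirrors the four cases."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    pos, neg = y >= 0, y < 0
    if lam != 0:
        out[pos] = ((y[pos] + 1) ** lam - 1) / lam
    else:
        out[pos] = np.log1p(y[pos])
    if lam != 2:
        out[neg] = -((1 - y[neg]) ** (2 - lam) - 1) / (2 - lam)
    else:
        out[neg] = -np.log1p(-y[neg])
    return out

y = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(yeo_johnson(y, 1.0))  # lambda = 1 returns the input unchanged
```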
Box-Tidwell transformation
The Box-Tidwell transformation is a statistical technique used to assess and correct non-linearity between predictor variables and the logit in a generalized linear model, particularly in logistic regression. This transformation is useful when the relationship between the independent variables and the outcome is non-linear and cannot be adequately captured by the standard model.
Overview
The Box-Tidwell transformation was developed by George E. P. Box and Paul W. Tidwell in 1962 as an extension of the Box–Cox transformation, which is applied to the dependent variable. Unlike the Box–Cox transformation, however, the Box-Tidwell transformation is applied to the independent variables in regression models. It is often used when the assumption of linearity between the predictors and the outcome is violated.
Method
The general idea behind the Box-Tidwell transformation is to apply a power transformation to each independent variable Xi in the regression model:
{\displaystyle X_{i}'=X_{i}^{\lambda }}
where λ is a parameter estimated from the data. If the estimated λ is significantly different from 1, this indicates a non-linear relationship between Xi and the logit, and the transformation improves the model fit.
The Box-Tidwell test is typically performed by augmenting the regression model with terms of the form X_i log(X_i) and testing the significance of the coefficients. If significant, this suggests that a transformation should be applied to achieve a linear relationship between the predictor and the logit.
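The augmented-model test can be sketched end to end. This example is a self-contained illustration on simulated data (in practice one would use a package such as statsmodels); it fits a logistic regression by Newton–Raphson and computes a Wald statistic for the X·log(X) term:

```python
import numpy as np

def logit_irls(X, y, iters=30):
    """Logistic regression via Newton-Raphson; returns coefficients and SEs."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30, 30)       # guard against overflow in exp
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1 - p)
        H = X.T @ (W[:, None] * X)             # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return beta, se

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(0.5, 5.0, n)                   # positive predictor (as required)
# True logit is linear in log(x), hence non-linear in x itself
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * np.log(x)))))

# Box-Tidwell augmentation: intercept, x, and the x*log(x) interaction term
X = np.column_stack([np.ones(n), x, x * np.log(x)])
beta, se = logit_irls(X, y)
z = beta[2] / se[2]
print(f"Wald z for x*log(x): {z:.2f}")  # a large |z| flags non-linearity in x
```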
Applications
Stabilizing Continuous Predictors
The transformation is beneficial in logistic regression or proportional hazards models where non-linearity in continuous predictors can distort the relationship with the dependent variable. It is a flexible tool that allows the researcher to fit a more appropriate model to the data without guessing the relationship's functional form in advance.
Verifying Linearity in Logistic Regression
In logistic regression, a key assumption is that continuous independent variables exhibit a linear relationship with the logit of the dependent variable. Violations of this assumption can lead to biased estimates and reduced model performance. The Box-Tidwell transformation is a method used to assess and correct such violations by determining whether a continuous predictor requires transformation to achieve linearity with the logit.
Method for Verifying Linearity
The Box-Tidwell transformation introduces an interaction term between each continuous variable Xi and its natural logarithm:
{\displaystyle X_{i}\log(X_{i})}
This term is included in the logistic regression model to test whether the relationship between Xi and the logit is non-linear. A statistically significant coefficient for this interaction term indicates a violation of the linearity assumption, suggesting the need for a transformation of the predictor. The Box-Tidwell transformation then provides an appropriate power transformation to linearize the relationship, thereby improving model accuracy and validity. Conversely, non-significant results support the assumption of linearity.
Limitations
One limitation of the Box-Tidwell transformation is that it only works for positive values of the independent variables. If the data contain negative values, the transformation cannot be applied directly without first modifying the variables (e.g., adding a constant).
References
Box, George E. P.; Cox, D. R. (1964). "An analysis of transformations". Journal of the Royal Statistical Society, Series B. 26 (2): 211–252. JSTOR 2984418. MR 0192611.
Carroll, R. J.; Ruppert, D. (1981). "On prediction and the power transformation family" (PDF). Biometrika. 68 (3): 609–615. doi:10.1093/biomet/68.3.609.
DeGroot, M. H. (1987). "A Conversation with George Box" (PDF). Statistical Science. 2 (3): 239–258. doi:10.1214/ss/1177013223.
Handelsman, D. J. (2002). "Optimal Power Transformations for Analysis of Sperm Concentration and Other Semen Variables". Journal of Andrology. 23 (5).
Gluzman, S.; Yukalov, V. I. (2006). "Self-similar power transforms in extrapolation problems". Journal of Mathematical Chemistry. 39 (1): 47–56. arXiv:cond-mat/0606104. Bibcode:2006cond.mat..6104G. doi:10.1007/s10910-005-9003-7. S2CID 118965098.
Howarth, R. J.; Earle, S. A. M. (1979). "Application of a generalized power transformation to geochemical data". Journal of the International Association for Mathematical Geology. 11 (1): 45–62. doi:10.1007/BF01043245. S2CID 121582755.
Box, G. E. P.; Tidwell, P. W. (1962). "Transformation of the Independent Variables". Technometrics. 4 (4): 531–550. doi:10.1080/00401706.1962.10490038.
External links
Nishii, R. (2001) [1994], "Box–Cox transformation", Encyclopedia of Mathematics, EMS Press
Sanford Weisberg, Yeo-Johnson Power Transformations