Local regression
Local regression or local polynomial regression, also known as moving regression, is a generalization of the moving average and polynomial regression.
Its most common methods, initially developed for scatterplot smoothing, are LOESS (locally estimated scatterplot smoothing) and LOWESS (locally weighted scatterplot smoothing), both pronounced LOH-ess. They are two strongly related non-parametric regression methods that combine multiple regression models in a k-nearest-neighbor-based meta-model.
In some fields, LOESS is known and commonly referred to as the Savitzky–Golay filter, which was proposed 15 years before LOESS.
LOESS and LOWESS thus build on "classical" methods, such as linear and nonlinear least squares regression. They address situations in which the classical procedures do not perform well or cannot be effectively applied without undue labor. LOESS combines much of the simplicity of linear least squares regression with the flexibility of nonlinear regression. It does this by fitting simple models to localized subsets of the data to build up a function that describes the deterministic part of the variation in the data, point by point. In fact, one of the chief attractions of this method is that the data analyst is not required to specify a global function of any form to fit a model to the data, only to fit segments of the data.
The trade-off for these features is increased computation. Because it is so computationally intensive, LOESS would have been practically impossible to use in the era when least squares regression was being developed. Most other modern methods for process modeling are similar to LOESS in this respect. These methods have been consciously designed to use our current computational ability to the fullest possible advantage to achieve goals not easily achieved by traditional approaches.
A smooth curve through a set of data points obtained with this statistical technique is called a loess curve, particularly when each smoothed value is given by a weighted quadratic least squares regression over the span of values of the y-axis scattergram criterion variable. When each smoothed value is given by a weighted linear least squares regression over the span, this is known as a lowess curve; however, some authorities treat lowess and loess as synonyms.
History
Local regression and closely related procedures have a long and rich history, having been discovered and rediscovered in different fields on multiple occasions. An early work by Robert Henderson studying the problem of graduation (a term for smoothing used in Actuarial literature) introduced local regression using cubic polynomials, and showed how earlier graduation methods could be interpreted as local polynomial fitting. William S. Cleveland and Catherine Loader (1995); and Lori Murray and David Bellhouse (2019) discuss more of the historical work on graduation.
The Savitzky-Golay filter, introduced by Abraham Savitzky and Marcel J. E. Golay (1964) significantly expanded the method. Like the earlier graduation work, the focus was on data with an equally-spaced predictor variable, where (excluding boundary effects) local regression can be represented as a convolution. Savitzky and Golay published extensive sets of convolution coefficients for different orders of polynomial and smoothing window widths.
Local regression methods started to appear extensively in statistics literature in the 1970s; for example, Charles J. Stone (1977), Vladimir Katkovnik (1979) and William S. Cleveland (1979). Katkovnik (1985) is the earliest book devoted primarily to local regression methods.
Extensive theoretical work continued to appear throughout the 1990s. Important contributions include Jianqing Fan and Irène Gijbels (1992) studying efficiency properties, and David Ruppert and Matthew P. Wand (1994) developing an asymptotic distribution theory for multivariate local regression.
An important extension of local regression is Local Likelihood Estimation, formulated by Robert Tibshirani and Trevor Hastie (1987). This replaces the local least-squares criterion with a likelihood-based criterion, thereby extending the local regression method to the Generalized linear model setting, covering, for example, binary data, count data, and censored data.
Practical implementations of local regression began appearing in statistical software in the 1980s. Cleveland (1981) introduces the LOWESS routines, intended for smoothing scatterplots. This implements local linear fitting with a single predictor variable, and also introduces robustness downweighting to make the procedure resistant to outliers. An entirely new implementation, LOESS, is described in Cleveland and Susan J. Devlin (1988). LOESS is a multivariate smoother, able to handle spatial data with two (or more) predictor variables, and uses (by default) local quadratic fitting. Both LOWESS and LOESS are implemented in the S and R programming languages. See also Cleveland's Local Fitting Software.
While the terms local regression, LOWESS, and LOESS are sometimes used interchangeably, this usage should be considered incorrect: local regression is a general term for the fitting procedure, while LOWESS and LOESS are two distinct implementations.
Model definition
Local regression uses a data set consisting of observations of one or more ‘independent’ or ‘predictor’ variables, and a ‘dependent’ or ‘response’ variable. The data set consists of $n$ observations. The observations of the predictor variable can be denoted $x_1, \ldots, x_n$, and the corresponding observations of the response variable by $Y_1, \ldots, Y_n$.
For ease of presentation, the development below assumes a single predictor variable; the extension to multiple predictors (when the $x_i$ are vectors) is conceptually straightforward. A functional relationship between the predictor and response variables is assumed:
$$Y_i = \mu(x_i) + \epsilon_i,$$
where $\mu(x)$ is the unknown ‘smooth’ regression function to be estimated, and represents the conditional expectation of the response, given a value of the predictor variables. In theoretical work, the ‘smoothness’ of this function can be formally characterized by placing bounds on higher order derivatives. The $\epsilon_i$ represent random error; for estimation purposes these are assumed to have mean zero. Stronger assumptions (e.g., independence and equal variance) may be made when assessing properties of the estimates.
Local regression then estimates the function $\mu(x)$, for one value of $x$ at a time. Since the function is assumed to be smooth, the most informative data points are those whose $x_i$ values are close to $x$. This is formalized with a bandwidth $h$ and a kernel or weight function $W(\cdot)$, with observations assigned weights
$$w_i(x) = W\left(\frac{x_i - x}{h}\right).$$
A typical choice of $W$, used by Cleveland in LOWESS, is $W(u) = (1 - |u|^3)^3$ for $|u| < 1$, although any similar function (peaked at $u = 0$ and small or zero for large values of $u$) can be used. Questions of bandwidth selection and specification (how large should $h$ be, and should it vary depending upon the fitting point $x$?) are deferred for now.
A local model (usually a low-order polynomial with degree $p \leq 3$), expressed as
$$\mu(x_i) \approx \beta_0 + \beta_1 (x_i - x) + \ldots + \beta_p (x_i - x)^p,$$
is then fitted by weighted least squares: choose regression coefficients $(\hat{\beta}_0, \ldots, \hat{\beta}_p)$ to minimize
$$\sum_{i=1}^{n} w_i(x) \left( Y_i - \beta_0 - \beta_1 (x_i - x) - \ldots - \beta_p (x_i - x)^p \right)^2.$$
The local regression estimate of $\mu(x)$ is then simply the intercept estimate:
$$\hat{\mu}(x) = \hat{\beta}_0,$$
while the remaining coefficients can be interpreted (up to a factor of $p!$) as derivative estimates.
It is to be emphasized that the above procedure produces the estimate $\hat{\mu}(x)$ for one value of $x$. When considering a new value of $x$, a new set of weights $w_i(x)$ must be computed, and the regression coefficients estimated afresh.
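As a concrete illustration, the following minimal Python/numpy sketch carries out this procedure at a single fitting point: it computes tricube weights, builds the local polynomial design matrix, and solves the weighted least squares problem, returning the intercept as the estimate of $\mu(x)$. The function names, the fixed bandwidth, and the simulated data are illustrative choices, not part of any standard implementation.

```python
import numpy as np

def tricube(u):
    """Tricube weight function: W(u) = (1 - |u|^3)^3 for |u| < 1, and 0 otherwise."""
    u = np.abs(u)
    return np.where(u < 1, (1 - u**3)**3, 0.0)

def local_fit(x0, x, y, h, degree=2):
    """Estimate mu(x0) by locally weighted polynomial regression.

    x, y   : 1-D arrays of predictors and responses
    h      : bandwidth
    degree : degree p of the local polynomial
    """
    w = tricube((x - x0) / h)                              # smoothing weights w_i(x0)
    X = np.vander(x - x0, N=degree + 1, increasing=True)   # columns (x_i - x0)^j
    sw = np.sqrt(w)                                        # weighted LS via sqrt-weighted rows
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                         # intercept = hat{mu}(x0)

# The fit is repeated at every point where an estimate is wanted.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
grid = np.linspace(0.0, 10.0, 101)
mu_hat = np.array([local_fit(x0, x, y, h=1.5) for x0 in grid])
```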
= Matrix Representation of the Local Regression Estimate =
As with all least squares estimates, the estimated regression coefficients can be expressed in closed form (see Weighted least squares for details):
$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}} \mathbf{W} \mathbf{X})^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{W} \mathbf{y},$$
where $\hat{\boldsymbol{\beta}}$ is the vector of local regression coefficients; $\mathbf{X}$ is the $n \times (p+1)$ design matrix with entries $(x_i - x)^j$; $\mathbf{W}$ is the diagonal matrix of smoothing weights $w_i(x)$; and $\mathbf{y}$ is the vector of responses $Y_i$.
This matrix representation is crucial for studying the theoretical properties of local regression estimates. With appropriate definitions of the design and weight matrices, it immediately generalizes to the multiple-predictor setting.
Selection Issues: Bandwidth, local model, fitting criteria
Implementation of local regression requires specification and selection of several components:
The bandwidth, and more generally the localized subsets of the data.
The degree of local polynomial, or more generally, the form of the local model.
The choice of weight function $W(\cdot)$.
The choice of fitting criterion (least squares or something else).
Each of these components has been the subject of extensive study; a summary is provided below.
= Localized subsets of data; Bandwidth =
The bandwidth $h$ controls the resolution of the local regression estimate. If $h$ is too small, the estimate may show high-resolution features that represent noise in the data rather than any real structure in the mean function. Conversely, if $h$ is too large, the estimate will show only low-resolution features, and important structure may be lost. This is the bias-variance tradeoff: at small $h$ the estimate exhibits large variance, while at large $h$ it exhibits large bias.
Careful choice of bandwidth is therefore crucial when applying local regression. Mathematical methods for bandwidth selection require, firstly, formal criteria to assess the performance of an estimate. One such criterion is prediction error: if a new observation is made at $\tilde{x}$, how well does the estimate $\hat{\mu}(\tilde{x})$ predict the new response $\tilde{Y}$?
Performance is often assessed using a squared-error loss function. The mean squared prediction error is
$$E\left(\tilde{Y} - \hat{\mu}(\tilde{x})\right)^2 = E\left(\tilde{Y} - \mu(\tilde{x}) + \mu(\tilde{x}) - \hat{\mu}(\tilde{x})\right)^2 = E\left(\tilde{Y} - \mu(\tilde{x})\right)^2 + E\left(\mu(\tilde{x}) - \hat{\mu}(\tilde{x})\right)^2,$$
where the cross term vanishes because $\tilde{Y} - \mu(\tilde{x})$ has mean zero and is independent of the estimate. The first term, $E\left(\tilde{Y} - \mu(\tilde{x})\right)^2$, is the random variation of the new observation; it is entirely independent of the local regression estimate. The second term, $E\left(\mu(\tilde{x}) - \hat{\mu}(\tilde{x})\right)^2$, is the mean squared estimation error. This relation shows that, for squared error loss, minimizing prediction error and estimation error are equivalent problems.
In global bandwidth selection, these measures can be integrated over the $x$ space ("mean integrated squared error", often used in theoretical work), or averaged over the actual $x_i$ (more useful for practical implementations). Some standard techniques from model selection can be readily adapted to local regression:
Cross-validation, which estimates the mean squared prediction error.
Mallows's Cp and the Akaike information criterion, which estimate the mean squared estimation error.
Other methods which attempt to estimate the bias and variance components of the estimation error directly.
Any of these criteria can be minimized to produce an automatic bandwidth selector. Cleveland and Devlin prefer a graphical method (the M-plot) to visually display the bias-variance trade-off and guide bandwidth choice.
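For illustration, a simple global bandwidth selector can be obtained by minimizing a leave-one-out cross-validation score over a grid of candidate bandwidths. The sketch below reuses the hypothetical local_fit helper from the earlier example; it is a naive $O(n^2)$ computation, not how production implementations work.

```python
import numpy as np

def loocv_score(x, y, h, degree=2):
    """Leave-one-out cross-validation estimate of mean squared prediction error."""
    n = len(x)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                  # drop the i-th observation
        pred = local_fit(x[i], x[mask], y[mask], h, degree)
        errs[i] = (y[i] - pred) ** 2
    return errs.mean()

candidates = [0.5, 1.0, 1.5, 2.0, 3.0]
best_h = min(candidates, key=lambda h: loocv_score(x, y, h))
```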
One question not addressed above is: how should the bandwidth depend upon the fitting point $x$? Often a constant bandwidth is used, while LOWESS and LOESS prefer a nearest-neighbor bandwidth, meaning $h$ is smaller in regions with many data points. Formally, the smoothing parameter $\alpha$ is the fraction of the total number $n$ of data points that are used in each local fit. The subset of data used in each weighted least squares fit thus comprises the $n\alpha$ points (rounded to the next largest integer) whose explanatory variables' values are closest to the point at which the response is being estimated.
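A nearest-neighbor bandwidth can be sketched as follows: for smoothing parameter $\alpha$, the bandwidth at a fitting point is taken to be the distance to the $\lceil n\alpha \rceil$-th nearest observation. This is one common convention; actual implementations differ in details such as rounding and boundary handling. The example reuses local_fit and the simulated data from the earlier sketch.

```python
import math
import numpy as np

def nn_bandwidth(x0, x, alpha):
    """Distance from x0 to its ceil(n * alpha)-th nearest observation."""
    n = len(x)
    k = min(n, math.ceil(alpha * n))
    return np.sort(np.abs(x - x0))[k - 1]

alpha = 0.3   # use roughly 30% of the data in each local fit
mu_hat_nn = np.array(
    [local_fit(x0, x, y, h=nn_bandwidth(x0, x, alpha)) for x0 in grid]
)
```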
More sophisticated methods attempt to choose the bandwidth adaptively; that is, to choose a bandwidth at each fitting point $x$ by applying criteria such as cross-validation locally within the smoothing window. An early example of this is Jerome H. Friedman's "supersmoother", which uses cross-validation to choose among local linear fits at different bandwidths.
= Degree of local polynomials =
Most sources, in both theoretical and computational work, use low-order polynomials as the local model, with polynomial degree ranging from 0 to 3.
The degree 0 (local constant) model is equivalent to a kernel smoother; usually credited to Èlizbar Nadaraya (1964) and G. S. Watson (1964). This is the simplest model to use, but can suffer from bias when fitting near boundaries of the dataset.
Local linear (degree 1) fitting can substantially reduce the boundary bias.
Local quadratic (degree 2) and local cubic (degree 3) fitting can result in improved fits, particularly when the underlying mean function $\mu(x)$ has substantial curvature, or equivalently a large second derivative.
In theory, higher orders of polynomial can lead to faster convergence of the estimate $\hat{\mu}(x)$ to the true mean $\mu(x)$, provided that $\mu(x)$ has a sufficient number of derivatives; see C. J. Stone (1980). Generally, it takes a large sample size for this faster convergence to be realized. There are also computational and stability issues that arise, particularly for multivariate smoothing. It is generally not recommended to use local polynomials with degree greater than 3.
As with bandwidth selection, methods such as cross-validation can be used to compare the fits obtained with different degrees of polynomial.
= Weight function =
As mentioned above, the weight function gives the most weight to the data points nearest the point of estimation and the least weight to the data points that are furthest away. The use of weights is based on the idea that points near each other in the explanatory-variable space are more likely to be related to each other in a simple way than points that are further apart. Following this logic, points likely to follow the local model well influence the local parameter estimates the most, while points less likely to conform to the local model have less influence.
Cleveland (1979) sets out four requirements for the weight function:
Non-negative: $W(x) > 0$ for $|x| < 1$.
Symmetry: $W(-x) = W(x)$.
Monotone: $W(x)$ is a nonincreasing function for $x \geq 0$.
Bounded support: $W(x) = 0$ for $|x| \geq 1$.
Asymptotic efficiency of weight functions has been considered by V. A. Epanechnikov (1969) in the context of kernel density estimation; J. Fan (1993) has derived similar results for local regression. They conclude that the quadratic kernel, $W(x) = 1 - x^2$ for $|x| \leq 1$, has the greatest efficiency under a mean-squared-error loss function. See "kernel functions in common use" for more discussion of different kernels and their efficiencies.
Considerations other than MSE are also relevant to the choice of weight function. Smoothness properties of $W(x)$ directly affect the smoothness of the estimate $\hat{\mu}(x)$. In particular, the quadratic kernel is not differentiable at $x = \pm 1$, and $\hat{\mu}(x)$ is not differentiable as a result.
The tri-cube weight function, $W(x) = (1 - |x|^3)^3$ for $|x| < 1$, has been used in LOWESS and other local regression software; it combines higher-order differentiability with high MSE efficiency.
One criticism of weight functions with bounded support is that they can lead to numerical problems (i.e. an unstable or singular design matrix) when fitting in regions with sparse data. For this reason, some authors choose to use the Gaussian kernel, or others with unbounded support.
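For concreteness, the weight functions discussed in this section can be written as follows (a sketch; the unnormalized Gaussian is one common choice among kernels with unbounded support):

```python
import numpy as np

def tricube(u):
    u = np.abs(u)
    return np.where(u < 1, (1 - u**3)**3, 0.0)       # bounded support, smooth at +/-1

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 1 - u**2, 0.0)   # MSE-optimal, not differentiable at +/-1

def gaussian(u):
    return np.exp(-0.5 * u**2)                       # unbounded support; avoids empty windows
```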
= Choice of Fitting Criterion =
As described above, local regression uses a locally weighted least squares criterion to estimate the regression parameters. This inherits many of the advantages (ease of implementation and interpretation; good properties when errors are normally distributed) and disadvantages (sensitivity to extreme values and outliers; inefficiency when errors have unequal variance or are not normally distributed) usually associated with least squares regression.
These disadvantages can be addressed by replacing the local least-squares estimation by something else. Two such ideas are presented here: Local likelihood estimation, which applies local estimation to the Generalized linear model, and Robust local regression, which localizes methods from robust regression.
Local Likelihood Estimation
In local likelihood estimation, developed in Tibshirani and Hastie (1987), the observations $Y_i$ are assumed to come from a parametric family of distributions, with a known probability density function (or mass function, for discrete data),
$$Y_i \sim f(y, \theta(x_i)),$$
where the parameter function $\theta(x)$ is the unknown quantity to be estimated. To estimate $\theta(x)$ at a particular point $x$, the local likelihood criterion is
$$\sum_{i=1}^{n} w_i(x) \log f\left(Y_i,\; \beta_0 + \beta_1 (x_i - x) + \ldots + \beta_p (x_i - x)^p\right).$$
Estimates of the regression coefficients (in particular, $\hat{\beta}_0$) are obtained by maximizing the local likelihood criterion, and the local likelihood estimate is
$$\hat{\theta}(x) = \hat{\beta}_0.$$
When $f(y, \theta(x))$ is the normal distribution and $\theta(x)$ is the mean function, the local likelihood method reduces to the standard local least-squares regression. For other likelihood families, there is (usually) no closed-form solution for the local likelihood estimate, and iterative procedures such as iteratively reweighted least squares must be used to compute the estimate.
Example (local logistic regression). All response observations are 0 or 1, and the mean function is the "success" probability, $\mu(x_i) = \Pr(Y_i = 1 \mid x_i)$. Since $\mu(x_i)$ must lie between 0 and 1, a local polynomial model should not be used for $\mu(x)$ directly. Instead, the logistic transformation
$$\theta(x) = \log\left(\frac{\mu(x)}{1 - \mu(x)}\right)$$
can be used; equivalently,
$$1 - \mu(x) = \frac{1}{1 + e^{\theta(x)}}, \qquad \mu(x) = \frac{e^{\theta(x)}}{1 + e^{\theta(x)}},$$
and the mass function is
$$f(Y_i, \theta(x_i)) = \frac{e^{Y_i \theta(x_i)}}{1 + e^{\theta(x_i)}}.$$
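A minimal sketch of local logistic regression is shown below, maximizing the local likelihood criterion numerically with scipy; the optimizer choice, starting values, and local linear (degree 1) model are illustrative assumptions, and the tricube helper from the earlier sketch is reused.

```python
import numpy as np
from scipy.optimize import minimize

def local_logistic(x0, x, y, h, degree=1):
    """Local likelihood estimate of mu(x0) = Pr(Y = 1 | x0) for 0/1 responses."""
    w = tricube((x - x0) / h)                      # smoothing weights w_i(x0)
    X = np.vander(x - x0, N=degree + 1, increasing=True)

    def neg_local_loglik(beta):
        theta = X @ beta                           # local polynomial for theta(x_i)
        # log f(Y_i, theta_i) = Y_i * theta_i - log(1 + exp(theta_i))
        return -np.sum(w * (y * theta - np.logaddexp(0.0, theta)))

    res = minimize(neg_local_loglik, np.zeros(degree + 1), method="BFGS")
    theta_hat = res.x[0]                           # hat{theta}(x0) = hat{beta}_0
    return 1.0 / (1.0 + np.exp(-theta_hat))        # back-transform to the probability scale
```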
An asymptotic theory for local likelihood estimation is developed in J. Fan, Nancy E. Heckman and M. P. Wand (1995); the book Loader (1999) discusses many more applications of local likelihood.
Robust Local Regression
To address the sensitivity to outliers, techniques from robust regression can be employed. In local M-estimation, the local least-squares criterion is replaced by a criterion of the form
$$\sum_{i=1}^{n} w_i(x)\, \rho\left(\frac{Y_i - \beta_0 - \ldots - \beta_p (x_i - x)^p}{s}\right),$$
where $\rho(\cdot)$ is a robustness function and $s$ is a scale parameter. Discussion of the merits of different choices of robustness function is best left to the robust regression literature. The scale parameter $s$ must also be estimated. References for local M-estimation include Katkovnik (1985) and Alexandre Tsybakov (1986).
The robustness iterations in LOWESS and LOESS correspond to the robustness function defined by
$$\rho'(u) = u\left(1 - u^2/6\right)^2 \quad \text{for } |u| < 1,$$
and a robust global estimate of the scale parameter.
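The robustness iteration can be sketched as follows, in the spirit of Cleveland (1979): fit, compute residuals, form bisquare robustness weights using a median-based scale, and refit with the smoothing weights multiplied by the robustness weights. The scale constant of 6 and the number of iterations are conventional but illustrative here, and the tricube helper from the earlier sketch is reused.

```python
import numpy as np

def local_fit_robust_weights(x0, x, y, h, degree, delta):
    """As local_fit above, but multiplying the smoothing weights by robustness weights delta."""
    w = delta * tricube((x - x0) / h)
    X = np.vander(x - x0, N=degree + 1, increasing=True)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]

def robust_local_regression(x, y, grid, h, degree=1, iterations=3):
    """Local regression with bisquare robustness reweighting of the observations."""
    delta = np.ones_like(y)                        # robustness weights, initially 1
    for _ in range(iterations):
        fitted = np.array([local_fit_robust_weights(xi, x, y, h, degree, delta) for xi in x])
        r = y - fitted                             # residuals at the data points
        s = np.median(np.abs(r))                   # robust global scale estimate
        u = r / (6.0 * s)
        delta = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)   # bisquare weights
    return np.array([local_fit_robust_weights(x0, x, y, h, degree, delta) for x0 in grid])
```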
If $\rho(u) = |u|$, the local $L_1$ criterion
$$\sum_{i=1}^{n} w_i(x) \left| Y_i - \beta_0 - \ldots - \beta_p (x_i - x)^p \right|$$
results; this does not require a scale parameter. When $p = 0$, this criterion is minimized by a locally weighted median; local $L_1$ regression can be interpreted as estimating the median, rather than the mean, response. If the loss function is skewed, this becomes local quantile regression. See Keming Yu and M. C. Jones (1998).
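For the $p = 0$ case, the locally weighted median can be computed directly, as in the following sketch; conventions for ties and interpolation vary between implementations, and the tricube helper from the earlier sketch is reused.

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half of the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def local_l1_constant(x0, x, y, h):
    """Local L1 estimate with p = 0: a locally weighted median of the responses."""
    w = tricube((x - x0) / h)
    keep = w > 0                                   # only points inside the smoothing window
    return weighted_median(y[keep], w[keep])
```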
Advantages
As discussed above, the biggest advantage LOESS has over many other methods is that the process of fitting a model to the sample data does not begin with the specification of a function. Instead, the analyst only has to provide a smoothing parameter value and the degree of the local polynomial. In addition, LOESS is very flexible, making it ideal for modeling complex processes for which no theoretical models exist. These two advantages, combined with the simplicity of the method, make LOESS one of the most attractive of the modern regression methods for applications that fit the general framework of least squares regression but which have a complex deterministic structure.
Although it is less obvious than for some of the other methods related to linear least squares regression, LOESS also accrues most of the benefits typically shared by those procedures. The most important of these is the theory for computing uncertainties for prediction and calibration. Many other tests and procedures used for validation of least squares models can also be extended to LOESS models.
Disadvantages
LOESS makes less efficient use of data than other least squares methods. It requires fairly large, densely sampled data sets in order to produce good models. This is because LOESS relies on the local data structure when performing the local fitting. Thus, LOESS provides less complex data analysis in exchange for greater experimental costs.
Another disadvantage of LOESS is the fact that it does not produce a regression function that is easily represented by a mathematical formula. This can make it difficult to transfer the results of an analysis to other people. In order to transfer the regression function to another person, they would need the data set and software for LOESS calculations. In nonlinear regression, on the other hand, it is only necessary to write down a functional form in order to provide estimates of the unknown parameters and the estimated uncertainty. Depending on the application, this could be either a major or a minor drawback to using LOESS. In particular, the simple form of LOESS can not be used for mechanistic modelling where fitted parameters specify particular physical properties of a system.
Finally, as discussed above, LOESS is a computationally intensive method (with the exception of evenly spaced data, where the regression can then be phrased as a non-causal finite impulse response filter). LOESS is also prone to the effects of outliers in the data set, like other least squares methods. There is an iterative, robust version of LOESS [Cleveland (1979)] that can be used to reduce LOESS' sensitivity to outliers, but too many extreme outliers can still overcome even the robust method.
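For equally spaced data fitted with uniform (rather than tricube) weights, the local polynomial fit reduces to a fixed convolution, i.e. a Savitzky–Golay filter; the scipy-based sketch below illustrates this, with the window length and polynomial order chosen arbitrarily.

```python
import numpy as np
from scipy.signal import savgol_filter

# Equally spaced predictor: local quadratic fitting with uniform weights over an
# 11-point window is a single convolution pass, i.e. a Savitzky-Golay filter.
t = np.linspace(0.0, 10.0, 200)
z = np.sin(t) + np.random.default_rng(1).normal(scale=0.3, size=t.size)
smoothed = savgol_filter(z, window_length=11, polyorder=2)
```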
See also
Degrees of freedom (statistics)#In non-standard regression
Kernel regression
Moving least squares
Moving average
Multivariate adaptive regression splines
Non-parametric statistics
Savitzky–Golay filter
Segmented regression
References
External links
NIST Engineering Statistics Handbook Section on LOESS
R: Local Polynomial Regression Fitting The Loess function in R
R: Scatter Plot Smoothing The Lowess function in R
The supsmu function (Friedman's SuperSmoother) in R (https://stat.ethz.ch/R-manual/R-devel/library/stats/html/supsmu.html)
Quantile LOESS – A method to perform local regression on a quantile moving window (with R code)
Nate Silver, How Opinion on Same-Sex Marriage Is Changing, and What It Means – sample of LOESS versus linear regression
This article incorporates public domain material from the National Institute of Standards and Technology