- Source: Logistic function
A logistic function or logistic curve is a common S-shaped curve (sigmoid curve) with the equation
{\displaystyle f(x)={\frac {L}{1+e^{-k(x-x_{0})}}}}
where {\displaystyle x_{0}} is the x value of the function's midpoint, {\displaystyle L} is the supremum of the values of the function, and {\displaystyle k} is the logistic growth rate or steepness of the curve.
The logistic function has domain the real numbers, the limit as {\displaystyle x\to -\infty } is 0, and the limit as {\displaystyle x\to +\infty } is {\displaystyle L}.
The standard logistic function, depicted at right, where
{\displaystyle L=1,k=1,x_{0}=0}, has the equation
{\displaystyle f(x)={\frac {1}{1+e^{-x}}}}
and is sometimes simply called the sigmoid. It is also sometimes called the expit, being the inverse function of the logit.
The logistic function finds applications in a range of fields, including biology (especially ecology), biomathematics, chemistry, demography, economics, geoscience, mathematical psychology, probability, sociology, political science, linguistics, statistics, and artificial neural networks. There are various generalizations, depending on the field.
History
The logistic function was introduced in a series of three papers by Pierre François Verhulst between 1838 and 1847, who devised it as a model of population growth by adjusting the exponential growth model, under the guidance of Adolphe Quetelet. Verhulst first devised the function in the mid 1830s, publishing a brief note in 1838, then presented an expanded analysis and named the function in 1844 (published 1845); the third paper adjusted the correction term in his model of Belgian population growth.
The initial stage of growth is approximately exponential (geometric); then, as saturation begins, the growth slows to linear (arithmetic), and at maturity, growth approaches the limit with an exponentially decaying gap, like the initial stage in reverse.
Verhulst did not explain the choice of the term "logistic" (French: logistique), but it is presumably in contrast to the logarithmic curve, and by analogy with arithmetic and geometric. His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus "logistic growth" is presumably named by analogy, logistic being from Ancient Greek: λογῐστῐκός, romanized: logistikós, a traditional division of Greek mathematics.
As a word derived from ancient Greek mathematical terms,
the name of this function is unrelated to the military and management term logistics, which is instead from French: logis "lodgings", though some believe the Greek term also influenced logistics; see Logistics § Origin for details.
Mathematical properties
The standard logistic function is the logistic function with parameters
{\displaystyle k=1}, {\displaystyle x_{0}=0}, {\displaystyle L=1}, which yields
{\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}={\frac {e^{x/2}}{e^{x/2}+e^{-x/2}}}.}
In practice, due to the nature of the exponential function {\displaystyle e^{-x}}, it is often sufficient to compute the standard logistic function for {\displaystyle x} over a small range of real numbers, such as a range contained in [−6, +6], as it quickly converges very close to its saturation values of 0 and 1.
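The saturation behaviour described above is easy to confirm numerically. The following Python sketch is illustrative only (the function name and two-branch structure are our own); the branch on the sign of x is a standard trick to avoid overflow of e^(-x) for large negative arguments, not something mandated by the definition:

```python
import math

def logistic(x: float) -> float:
    """Standard logistic function 1 / (1 + e^(-x)), in a numerically
    stable form: for x < 0 the algebraically equivalent e^x / (1 + e^x)
    is used, so e^(-x) never overflows."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

# Outside roughly [-6, +6] the function is already very close to 0 or 1.
print(logistic(0.0))    # 0.5
print(logistic(6.0))    # ~0.9975
print(logistic(-800.0)) # ~0.0, no overflow
```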
= Symmetries =
The logistic function has the symmetry property that
{\displaystyle 1-f(x)=f(-x).}
This reflects that the growth from 0 when {\displaystyle x} is small is symmetric with the decay of the gap to the limit (1) when {\displaystyle x} is large.
Further, {\displaystyle x\mapsto f(x)-1/2} is an odd function.
The sum of the logistic function and its reflection about the vertical axis, {\displaystyle f(-x)}, is
{\displaystyle {\frac {1}{1+e^{-x}}}+{\frac {1}{1+e^{-(-x)}}}={\frac {e^{x}}{e^{x}+1}}+{\frac {1}{e^{x}+1}}=1.}
The logistic function is thus rotationally symmetrical about the point (0, 1/2).
= Inverse function =
The logistic function is the inverse of the natural logit function
{\displaystyle \operatorname {logit} p=\log {\frac {p}{1-p}}{\text{ for }}0<p<1,}
and so converts the logarithm of odds into a probability. The conversion from the log-likelihood ratio of two alternatives also takes the form of a logistic curve.
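The inverse relationship can be checked directly. This is a small illustrative sketch of ours (the names `logit` and `expit` follow the terms used in the text); each function undoes the other:

```python
import math

def logit(p: float) -> float:
    """Natural logit: log-odds of probability p, for 0 < p < 1."""
    return math.log(p / (1.0 - p))

def expit(x: float) -> float:
    """Standard logistic function, the inverse of logit."""
    return 1.0 / (1.0 + math.exp(-x))

# Round-trip in both directions recovers the input.
for p in (0.1, 0.25, 0.5, 0.9):
    assert abs(expit(logit(p)) - p) < 1e-12
for x in (-3.0, 0.0, 2.0):
    assert abs(logit(expit(x)) - x) < 1e-12
```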
= Hyperbolic tangent =
The logistic function is an offset and scaled hyperbolic tangent function:
{\displaystyle f(x)={\frac {1}{2}}+{\frac {1}{2}}\tanh \left({\frac {x}{2}}\right),}
or
{\displaystyle \tanh(x)=2f(2x)-1.}
This follows from
{\displaystyle {\begin{aligned}\tanh(x)&={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}={\frac {e^{x}\cdot \left(1-e^{-2x}\right)}{e^{x}\cdot \left(1+e^{-2x}\right)}}\\&=f(2x)-{\frac {e^{-2x}}{1+e^{-2x}}}=f(2x)-{\frac {e^{-2x}+1-1}{1+e^{-2x}}}=2f(2x)-1.\end{aligned}}}
The hyperbolic-tangent relationship leads to another form for the logistic function's derivative:
{\displaystyle {\frac {d}{dx}}f(x)={\frac {1}{4}}\operatorname {sech} ^{2}\left({\frac {x}{2}}\right),}
which ties the logistic function into the logistic distribution.
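The tanh identities above, and the sech² form of the derivative, can be verified numerically. A small illustrative check of ours (the derivative is approximated by a central difference, an assumption of this sketch rather than anything in the text):

```python
import math

def f(x: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

# f(x) = 1/2 + (1/2) tanh(x/2)   and   tanh(x) = 2 f(2x) - 1
for x in (-3.0, -0.5, 0.0, 1.0, 4.0):
    assert abs(f(x) - (0.5 + 0.5 * math.tanh(x / 2))) < 1e-12
    assert abs(math.tanh(x) - (2 * f(2 * x) - 1)) < 1e-12

# Derivative form f'(x) = (1/4) sech^2(x/2), checked by central difference.
h, x = 1e-6, 0.7
sech2 = 1.0 / math.cosh(x / 2) ** 2
assert abs((f(x + h) - f(x - h)) / (2 * h) - 0.25 * sech2) < 1e-6
```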
Geometrically, the hyperbolic tangent function is the hyperbolic angle on the unit hyperbola {\displaystyle x^{2}-y^{2}=1}, which factors as {\displaystyle (x+y)(x-y)=1}, and thus has asymptotes the lines through the origin with slope {\displaystyle -1} and with slope {\displaystyle 1}, and vertex at {\displaystyle (1,0)}, corresponding to the range (−1, 1) and midpoint (0) of tanh. Analogously, the logistic function can be viewed as the hyperbolic angle on the hyperbola {\displaystyle xy-y^{2}=1}, which factors as {\displaystyle y(x-y)=1}, and thus has asymptotes the lines through the origin with slope {\displaystyle 0} and with slope {\displaystyle 1}, and vertex at {\displaystyle (2,1)}, corresponding to the range (0, 1) and midpoint ({\displaystyle 1/2}) of the logistic function.
Parametrically, hyperbolic cosine and hyperbolic sine give coordinates on the unit hyperbola: {\displaystyle \left((e^{t}+e^{-t})/2,(e^{t}-e^{-t})/2\right)}, with quotient the hyperbolic tangent. Similarly, {\displaystyle {\bigl (}e^{t/2}+e^{-t/2},e^{t/2}{\bigr )}} parametrizes the hyperbola {\displaystyle xy-y^{2}=1}, with quotient the logistic function. These correspond to linear transformations (and rescaling the parametrization) of the hyperbola {\displaystyle xy=1}, with parametrization {\displaystyle (e^{-t},e^{t})}: the parametrization of the hyperbola for the logistic function corresponds to {\displaystyle t/2} and the linear transformation {\displaystyle {\bigl (}{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}{\bigr )}}, while the parametrization of the unit hyperbola (for the hyperbolic tangent) corresponds to the linear transformation {\displaystyle {\tfrac {1}{2}}{\bigl (}{\begin{smallmatrix}1&1\\-1&1\end{smallmatrix}}{\bigr )}}.
= Derivative =
The standard logistic function has an easily calculated derivative. The derivative is known as the density of the logistic distribution:
{\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}},}
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} x}}f(x)&={\frac {e^{x}\cdot (1+e^{x})-e^{x}\cdot e^{x}}{(1+e^{x})^{2}}}\\&={\frac {e^{x}}{(1+e^{x})^{2}}}\\&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left({\frac {1}{1+e^{x}}}\right)\\&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left(1-{\frac {e^{x}}{1+e^{x}}}\right)\\&=f(x)\left(1-f(x)\right)\end{aligned}}}
from which all higher derivatives can be derived algebraically. For example,
{\displaystyle f''=(1-2f)(1-f)f}.
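Both derivative identities can be sanity-checked against finite differences. This is an illustrative sketch of ours; the central-difference approximations and tolerances are our own choices:

```python
import math

def f(x: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

# Check f' = f (1 - f) and f'' = (1 - 2f)(1 - f) f numerically.
h = 1e-5
for x in (-2.0, 0.0, 1.5):
    fx = f(x)
    d1 = (f(x + h) - f(x - h)) / (2 * h)          # central difference f'
    d2 = (f(x + h) - 2 * fx + f(x - h)) / h ** 2  # central difference f''
    assert abs(d1 - fx * (1 - fx)) < 1e-8
    assert abs(d2 - (1 - 2 * fx) * (1 - fx) * fx) < 1e-4
```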
The logistic distribution is a location–scale family, which corresponds to parameters of the logistic function. If {\displaystyle L=1} is fixed, then the midpoint {\displaystyle x_{0}} is the location and the slope {\displaystyle k} is the scale.
= Integral =
Conversely, its antiderivative can be computed by the substitution {\displaystyle u=1+e^{x}}, since
{\displaystyle f(x)={\frac {e^{x}}{1+e^{x}}}={\frac {u'}{u}},}
so (dropping the constant of integration)
{\displaystyle \int {\frac {e^{x}}{1+e^{x}}}\,dx=\int {\frac {1}{u}}\,du=\ln u=\ln(1+e^{x}).}
In artificial neural networks, this is known as the softplus function and (with scaling) is a smooth approximation of the ramp function, just as the logistic function (with scaling) is a smooth approximation of the Heaviside step function.
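A short illustrative softplus sketch (ours, not from the source): the stable branch for large x is a standard numerical trick, and the check that its derivative is the logistic function uses a central difference of our choosing:

```python
import math

def softplus(x: float) -> float:
    """ln(1 + e^x), the antiderivative of the logistic function, written
    stably: for large x, ln(1 + e^x) = x + ln(1 + e^(-x))."""
    if x > 0:
        return x + math.log1p(math.exp(-x))
    return math.log1p(math.exp(x))

# Smooth approximation of the ramp: ~0 for very negative x, ~x for large x.
assert softplus(-50.0) < 1e-20
assert abs(softplus(50.0) - 50.0) < 1e-12

# Its derivative is the logistic function (central-difference check).
h, x = 1e-6, 1.3
logistic = 1.0 / (1.0 + math.exp(-x))
assert abs((softplus(x + h) - softplus(x - h)) / (2 * h) - logistic) < 1e-6
```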
= Logistic differential equation =
The standard logistic function is the unique solution of the simple first-order non-linear ordinary differential equation
{\displaystyle {\frac {d}{dx}}f(x)=f(x){\big (}1-f(x){\big )}}
with boundary condition {\displaystyle f(0)=1/2}. This equation is the continuous version of the logistic map. Note that the reciprocal logistic function is a solution of a simple first-order linear ordinary differential equation.
The qualitative behavior is easily understood in terms of the phase line: the derivative is 0 when the function is 0 or 1; it is positive for {\displaystyle f} between 0 and 1, and negative for {\displaystyle f} above 1 or less than 0 (though negative populations do not generally accord with a physical model). This yields an unstable equilibrium at 0 and a stable equilibrium at 1, and thus any function value greater than 0 and less than 1 grows to 1.
The logistic equation is a special case of the Bernoulli differential equation and has the following solution:
{\displaystyle f(x)={\frac {e^{x}}{e^{x}+C}}.}
Choosing the constant of integration {\displaystyle C=1} gives the other well-known form of the definition of the logistic curve:
{\displaystyle f(x)={\frac {e^{x}}{e^{x}+1}}={\frac {1}{1+e^{-x}}}.}
More quantitatively, as can be seen from the analytical solution, the logistic curve shows early exponential growth for negative argument, which transitions to approximately linear growth of slope 1/4 for an argument near 0, then approaches 1 with an exponentially decaying gap.
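The differential-equation characterization can be illustrated by direct numerical integration. The sketch below is ours: a plain forward-Euler march (step size and interval are arbitrary choices) starting from the boundary condition f(0) = 1/2, compared against the closed-form logistic curve:

```python
import math

def closed_form(x: float) -> float:
    """Closed-form standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

# Forward-Euler integration of f' = f (1 - f) with f(0) = 1/2.
dx = 1e-3
f, x = 0.5, 0.0
while x < 6.0:
    f += dx * f * (1 - f)
    x += dx

# The numerical solution tracks the analytic curve and saturates toward 1.
assert abs(f - closed_form(6.0)) < 1e-3
assert 0.99 < f < 1.0
```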
The differential equation derived above is a special case of a general differential equation that only models the sigmoid function for {\displaystyle x>0}. In many modeling applications, the more general form
{\displaystyle {\frac {df(x)}{dx}}={\frac {k}{a}}f(x){\big (}a-f(x){\big )},\quad f(0)={\frac {a}{1+e^{kr}}}}
can be desirable. Its solution is the shifted and scaled sigmoid {\displaystyle aS{\big (}k(x-r){\big )}}.
Probabilistic interpretation
When the capacity {\displaystyle L=1}, the value of the logistic function is in the range {\displaystyle (0,1)} and can be interpreted as a probability p. In more detail, p can be interpreted as the probability of one of two alternatives (the parameter of a Bernoulli distribution); the two alternatives are complementary, so the probability of the other alternative is {\displaystyle q=1-p} and {\displaystyle p+q=1}. The two alternatives are coded as 1 and 0, corresponding to the limiting values as {\displaystyle x\to \pm \infty }.
In this interpretation the input x is the log-odds for the first alternative (relative to the other alternative), measured in "logistic units" (or logits), and {\displaystyle e^{x}} is the odds for the first alternative (relative to the second). Recalling that odds of {\displaystyle O=O:1} mean {\displaystyle O} for against 1, and that the probability is the ratio of for over (for plus against), {\displaystyle O/(O+1)}, we see that {\displaystyle e^{x}/(e^{x}+1)=1/(1+e^{-x})=p} is the probability of the first alternative. Conversely, x is the log-odds against the second alternative, {\displaystyle -x} is the log-odds for the second alternative, {\displaystyle e^{-x}} is the odds for the second alternative, and {\displaystyle e^{-x}/(e^{-x}+1)=1/(1+e^{x})=q} is the probability of the second alternative.
This can be framed more symmetrically in terms of two inputs, {\displaystyle x_{0}} and {\displaystyle x_{1}}, which then generalizes naturally to more than two alternatives. Given two real number inputs, {\displaystyle x_{0}} and {\displaystyle x_{1}}, interpreted as logits, their difference {\displaystyle x_{1}-x_{0}} is the log-odds for option 1 (the log-odds against option 0), {\displaystyle e^{x_{1}-x_{0}}} is the odds, {\displaystyle e^{x_{1}-x_{0}}/(e^{x_{1}-x_{0}}+1)=1/\left(1+e^{-(x_{1}-x_{0})}\right)=e^{x_{1}}/(e^{x_{0}}+e^{x_{1}})} is the probability of option 1, and similarly {\displaystyle e^{x_{0}}/(e^{x_{0}}+e^{x_{1}})} is the probability of option 0.
This form immediately generalizes to more alternatives as the softmax function, which is a vector-valued function whose i-th coordinate is {\textstyle e^{x_{i}}/\sum _{j=0}^{n}e^{x_{j}}}.
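A minimal softmax sketch of ours, illustrating the two facts used in this section: with two logits it reduces to the logistic of their difference, and it is invariant under adding a constant to all logits. Subtracting the maximum logit before exponentiating is a standard numerical-stability trick, not something required by the definition:

```python
import math

def softmax(xs):
    """Vector of probabilities e^{x_i} / sum_j e^{x_j}. The maximum logit
    is subtracted first for numerical stability; the result is unchanged
    because softmax is shift-invariant."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

p = softmax([2.0, 1.0, 0.0])
assert abs(sum(p) - 1.0) < 1e-12

# With two logits, softmax reduces to the logistic of their difference.
x0, x1 = 0.3, 1.8
assert abs(softmax([x0, x1])[1] - 1.0 / (1.0 + math.exp(-(x1 - x0)))) < 1e-12

# Shift invariance: adding a constant to every logit changes nothing.
q = softmax([102.0, 101.0, 100.0])
assert all(abs(a - b) < 1e-12 for a, b in zip(p, q))
```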
More subtly, the symmetric form emphasizes interpreting the input x as {\displaystyle x_{1}-x_{0}} and thus relative to some reference point, implicitly {\displaystyle x_{0}=0}. Notably, the softmax function is invariant under adding a constant to all the logits {\displaystyle x_{i}}, which corresponds to the difference {\displaystyle x_{j}-x_{i}} being the log-odds for option j against option i, but the individual logits {\displaystyle x_{i}} not being log-odds on their own. Often one of the options is used as a reference ("pivot"), and its value fixed as 0, so the other logits are interpreted as log-odds versus this reference. This is generally done with the first alternative, hence the choice of numbering: {\displaystyle x_{0}=0}, and then {\displaystyle x_{i}=x_{i}-x_{0}} is the log-odds for option i against option 0. Since {\displaystyle e^{0}=1}, this yields the {\displaystyle +1} term in many expressions for the logistic function and generalizations.
Generalizations
In growth modeling, numerous generalizations exist, including the generalized logistic curve, the Gompertz function, the cumulative distribution function of the shifted Gompertz distribution, and the hyperbolastic function of type I.
In statistics, where the logistic function is interpreted as the probability of one of two alternatives, the generalization to three or more alternatives is the softmax function, which is vector-valued, as it gives the probability of each alternative.
Applications
= In ecology: modeling population growth =
A typical application of the logistic equation is a common model of population growth (see also population dynamics), originally due to Pierre-François Verhulst in 1838, where the rate of reproduction is proportional to both the existing population and the amount of available resources, all else being equal. The Verhulst equation was published after Verhulst had read Thomas Malthus' An Essay on the Principle of Population, which describes the Malthusian growth model of simple (unconstrained) exponential growth. Verhulst derived his logistic equation to describe the self-limiting growth of a biological population. The equation was rediscovered in 1911 by A. G. McKendrick for the growth of bacteria in broth and experimentally tested using a technique for nonlinear parameter estimation. The equation is also sometimes called the Verhulst–Pearl equation following its rediscovery in 1920 by Raymond Pearl (1879–1940) and Lowell Reed (1888–1966) of the Johns Hopkins University. Alfred J. Lotka derived the equation again in 1925, calling it the law of population growth.
Letting {\displaystyle P} represent population size ({\displaystyle N} is often used in ecology instead) and {\displaystyle t} represent time, this model is formalized by the differential equation:
{\displaystyle {\frac {dP}{dt}}=rP\left(1-{\frac {P}{K}}\right),}
where the constant {\displaystyle r} defines the growth rate and {\displaystyle K} is the carrying capacity.
In the equation, the early, unimpeded growth rate is modeled by the first term {\displaystyle +rP}. The value of the rate {\displaystyle r} represents the proportional increase of the population {\displaystyle P} in one unit of time. Later, as the population grows, the modulus of the second term (which multiplied out is {\displaystyle -rP^{2}/K}) becomes almost as large as the first, as some members of the population {\displaystyle P} interfere with each other by competing for some critical resource, such as food or living space. This antagonistic effect is called the bottleneck, and is modeled by the value of the parameter {\displaystyle K}. The competition diminishes the combined growth rate, until the value of {\displaystyle P} ceases to grow (this is called maturity of the population).
The solution to the equation (with {\displaystyle P_{0}} being the initial population) is
{\displaystyle P(t)={\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}={\frac {K}{1+\left({\frac {K-P_{0}}{P_{0}}}\right)e^{-rt}}},}
with
{\displaystyle \lim _{t\to \infty }P(t)=K,}
where {\displaystyle K} is the limiting value of {\displaystyle P}, the highest value that the population can reach given infinite time (or come close to reaching in finite time). The carrying capacity is asymptotically reached independently of the initial value {\displaystyle P(0)>0}, and also in the case that {\displaystyle P(0)>K}.
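The closed-form Verhulst solution can be cross-checked against direct integration of the differential equation. This sketch is ours; the parameter values (K = 1000, r = 0.5, the initial populations, and the Euler step) are purely illustrative:

```python
import math

def verhulst(t, P0, K, r):
    """Closed-form solution of dP/dt = r P (1 - P/K) with P(0) = P0."""
    return K * P0 * math.exp(r * t) / (K + P0 * (math.exp(r * t) - 1.0))

K, r = 1000.0, 0.5

# The carrying capacity is approached from below (P0 < K) and above (P0 > K).
assert abs(verhulst(40.0, 10.0, K, r) - K) < 1e-3
assert abs(verhulst(40.0, 5000.0, K, r) - K) < 1e-3

# Cross-check against forward-Euler integration of the ODE up to t = 10.
P, dt = 10.0, 1e-3
for _ in range(int(10.0 / dt)):
    P += dt * r * P * (1 - P / K)
assert abs(P - verhulst(10.0, 10.0, K, r)) < 1.0
```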
In ecology, species are sometimes referred to as {\displaystyle r}-strategist or {\displaystyle K}-strategist depending upon the selective processes that have shaped their life history strategies.
Choosing the variable dimensions so that {\displaystyle n} measures the population in units of carrying capacity, and {\displaystyle \tau } measures time in units of {\displaystyle 1/r}, gives the dimensionless differential equation
{\displaystyle {\frac {dn}{d\tau }}=n(1-n).}
Integral
The antiderivative of the ecological form of the logistic function can be computed by the substitution {\displaystyle u=K+P_{0}\left(e^{rt}-1\right)}, since {\displaystyle du=rP_{0}e^{rt}\,dt}:
{\displaystyle \int {\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}\,dt=\int {\frac {K}{r}}{\frac {1}{u}}\,du={\frac {K}{r}}\ln u+C={\frac {K}{r}}\ln \left(K+P_{0}(e^{rt}-1)\right)+C}
Time-varying carrying capacity
Since the environmental conditions influence the carrying capacity, it can be time-varying, with {\displaystyle K(t)>0}, leading to the following mathematical model:
{\displaystyle {\frac {dP}{dt}}=rP\cdot \left(1-{\frac {P}{K(t)}}\right).}
A particularly important case is that of a carrying capacity that varies periodically with period {\displaystyle T}:
{\displaystyle K(t+T)=K(t).}
It can be shown that in such a case, independently from the initial value {\displaystyle P(0)>0}, {\displaystyle P(t)} will tend to a unique periodic solution {\displaystyle P_{*}(t)}, whose period is {\displaystyle T}. A typical value of {\displaystyle T} is one year: in that case, {\displaystyle K(t)} may reflect periodical variations of weather conditions.
Another interesting generalization is to consider that the carrying capacity {\displaystyle K(t)} is a function of the population at an earlier time, capturing a delay in the way the population modifies its environment. This leads to a logistic delay equation, which has a very rich behavior: bistability in some parameter ranges, as well as monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustained oscillations, finite-time singularities, and finite-time death.
= In statistics and machine learning =
Logistic functions are used in several roles in statistics. For example, they are the cumulative distribution function of the logistic family of distributions, and, in a simplified form, they model the chance a chess player has to beat their opponent in the Elo rating system. More specific examples follow.
Logistic regression
Logistic functions are used in logistic regression to model how the probability {\displaystyle p} of an event may be affected by one or more explanatory variables: an example would be to have the model
{\displaystyle p=f(a+bx),}
where {\displaystyle x} is the explanatory variable, {\displaystyle a} and {\displaystyle b} are model parameters to be fitted, and {\displaystyle f} is the standard logistic function.
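The model p = f(a + bx) can be fitted as sketched below. This is a minimal illustration of ours, not a reference implementation: the tiny data set is invented purely for the example, and plain full-batch gradient descent on the negative log-likelihood stands in for the solvers a real statistics package would use:

```python
import math

def f(z: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: the event becomes likelier as x grows (made up for
# illustration; deliberately not linearly separable).
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [0,   0,   0,   1,   0,   1,   1,   1]

# Gradient descent on the negative log-likelihood of p = f(a + b x).
a, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    ga = gb = 0.0
    for x, y in zip(xs, ys):
        err = f(a + b * x) - y   # gradient of the NLL w.r.t. a + b x
        ga += err
        gb += err * x
    a -= lr * ga
    b -= lr * gb

assert b > 0                 # fitted probability increases with x
assert f(a + b * 0.5) < 0.5  # low x: event unlikely
assert f(a + b * 4.0) > 0.5  # high x: event likely
```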
Logistic regression and other log-linear models are also commonly used in machine learning. A generalization of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression.
Another application of the logistic function is in the Rasch model, used in item response theory. In particular, the Rasch model forms a basis for maximum likelihood estimation of the locations of objects or persons on a continuum, based on collections of categorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect.
Neural networks
Logistic functions are often used in artificial neural networks to introduce nonlinearity in the model or to clamp signals to within a specified interval. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function as the activation function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron.
A common choice for the activation or "squashing" functions, used to clip large magnitudes to keep the response of the neural network bounded, is
{\displaystyle g(h)={\frac {1}{1+e^{-2\beta h}}},}
which is a logistic function.
These relationships result in simplified implementations of artificial neural networks with artificial neurons. Practitioners note that sigmoidal functions which are antisymmetric about the origin (e.g. the hyperbolic tangent) lead to faster convergence when training networks with backpropagation.
The logistic function is itself the derivative of another proposed activation function, the softplus.
= In medicine: modeling of growth of tumors =
Another application of the logistic curve is in medicine, where the logistic differential equation is used to model the growth of tumors. This application can be considered an extension of the above-mentioned use in the framework of ecology (see also the generalized logistic curve, allowing for more parameters). Denoting with {\displaystyle X(t)} the size of the tumor at time {\displaystyle t}, its dynamics are governed by
{\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X,}
which is of the type
{\displaystyle X'=F(X)X,\quad F'(X)\leq 0,}
where {\displaystyle F(X)} is the proliferation rate of the tumor.
If a chemotherapy is started with a log-kill effect, the equation may be revised to be
{\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X-c(t)X,}
where {\displaystyle c(t)} is the therapy-induced death rate. In the idealized case of very long therapy, {\displaystyle c(t)} can be modeled as a periodic function (of period {\displaystyle T}) or (in case of continuous infusion therapy) as a constant function, and one has that
{\displaystyle {\frac {1}{T}}\int _{0}^{T}c(t)\,dt>r\implies \lim _{t\to +\infty }X(t)=0,}
i.e. if the average therapy-induced death rate is greater than the baseline proliferation rate, then the disease is eventually eradicated. Of course, this is an oversimplified model of both the growth and the therapy (e.g. it does not take into account the phenomenon of clonal resistance).
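The eradication condition can be illustrated numerically for the constant-infusion case c(t) = c0, where the average death rate is just c0. This is a sketch of ours; the values of r, K, c0 and the initial size are invented for the example, and forward Euler stands in for a proper ODE solver:

```python
# Forward-Euler sketch of the log-kill model X' = r (1 - X/K) X - c0 X
# with a constant infusion c(t) = c0. All parameter values are
# illustrative only.
r, K, X0 = 0.3, 100.0, 20.0

def simulate(c0: float, t_end: float = 200.0, dt: float = 1e-3) -> float:
    X = X0
    for _ in range(int(t_end / dt)):
        X += dt * (r * (1 - X / K) * X - c0 * X)
    return X

# Average death rate above r: the tumor is eradicated.
assert simulate(c0=0.5) < 1e-6

# Below r: the tumor settles at the reduced equilibrium K (1 - c0/r).
X_eq = simulate(c0=0.15)
assert abs(X_eq - K * (1 - 0.15 / r)) < 0.1
```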
= In medicine: modeling of a pandemic =
A novel infectious pathogen to which a population has no immunity will generally spread exponentially in the early stages, while the supply of susceptible individuals is plentiful. The SARS-CoV-2 virus that causes COVID-19 exhibited exponential growth early in the course of infection in several countries in early 2020. Factors including a lack of susceptible hosts (through the continued spread of infection until it passes the threshold for herd immunity) or reduction in the accessibility of potential hosts through physical distancing measures may result in exponential-looking epidemic curves first linearizing (replicating the "logarithmic" to "logistic" transition first noted by Pierre-François Verhulst, as noted above) and then reaching a maximal limit.
A logistic function, or related functions (e.g. the Gompertz function), are usually used in a descriptive or phenomenological manner because they fit well not only the early exponential rise, but also the eventual levelling off of the pandemic as the population develops herd immunity. This is in contrast to actual models of pandemics which attempt to formulate a description based on the dynamics of the pandemic (e.g. contact rates, incubation times, social distancing, etc.). Some simple models have been developed, however, which yield a logistic solution.
Modeling early COVID-19 cases
A generalized logistic function, also called the Richards growth curve, has been applied to model the early phase of the COVID-19 outbreak. The authors fit the generalized logistic function to the cumulative number of infected cases, here referred to as the infection trajectory. There are different parameterizations of the generalized logistic function in the literature. One frequently used form is
{\displaystyle f(t;\theta _{1},\theta _{2},\theta _{3},\xi )={\frac {\theta _{1}}{[1+\xi \exp(-\theta _{2}\cdot (t-\theta _{3}))]^{1/\xi }}}}
where {\displaystyle \theta _{1},\theta _{2},\theta _{3}} are real numbers, and {\displaystyle \xi } is a positive real number. The flexibility of the curve {\displaystyle f} is due to the parameter {\displaystyle \xi }: (i) if {\displaystyle \xi =1} then the curve reduces to the logistic function, and (ii) as {\displaystyle \xi } approaches zero, the curve converges to the Gompertz function. In epidemiological modeling, {\displaystyle \theta _{1}}, {\displaystyle \theta _{2}}, and {\displaystyle \theta _{3}} represent the final epidemic size, infection rate, and lag phase, respectively. See the right panel for an example infection trajectory when {\displaystyle (\theta _{1},\theta _{2},\theta _{3})} is set to {\displaystyle (10000,0.2,40)}.
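The Richards parameterization above is easy to code up; the sketch below is illustrative only, using the example parameter values from the text, and checks the ξ = 1 reduction to a plain logistic curve and the saturation at the final size θ1:

```python
import math

def richards(t, theta1, theta2, theta3, xi):
    """Generalized logistic (Richards) growth curve
    f(t) = theta1 / (1 + xi * exp(-theta2 (t - theta3)))^(1/xi)."""
    return theta1 / (1.0 + xi * math.exp(-theta2 * (t - theta3))) ** (1.0 / xi)

theta1, theta2, theta3 = 10000.0, 0.2, 40.0  # final size, rate, lag phase

# With xi = 1 the curve reduces to a plain logistic function.
t = 55.0
logistic = theta1 / (1.0 + math.exp(-theta2 * (t - theta3)))
assert abs(richards(t, theta1, theta2, theta3, 1.0) - logistic) < 1e-9

# The trajectory saturates at the final epidemic size theta1.
assert abs(richards(400.0, theta1, theta2, theta3, 0.5) - theta1) < 1e-6
```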
One of the benefits of using a growth function such as the generalized logistic function in epidemiological modeling is its relatively easy application to the multilevel model framework, where information from different geographic regions can be pooled together.
= In chemistry: reaction models =
The concentration of reactants and products in autocatalytic reactions follows the logistic function.
The degradation of platinum group metal-free (PGM-free) oxygen reduction reaction (ORR) catalysts in fuel cell cathodes follows the logistic decay function, suggesting an autocatalytic degradation mechanism.
= In physics: Fermi–Dirac distribution =
The logistic function determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In particular, it is the distribution of the probabilities that each possible energy level is occupied by a fermion, according to Fermi–Dirac statistics.
= In optics: mirage =
The logistic function also finds applications in optics, particularly in modelling phenomena such as mirages. Under certain conditions, such as the presence of a temperature or concentration gradient due to diffusion and balancing with gravity, logistic curve behaviours can emerge.
A mirage, resulting from a temperature gradient that modifies the refractive index related to the density/concentration of the material over distance, can be modelled using a fluid with a refractive index gradient due to the concentration gradient. This mechanism can be equated to a limiting population growth model, where the concentrated region attempts to diffuse into the lower concentration region, while seeking equilibrium with gravity, thus yielding a logistic function curve.
= In material science: Phase diagrams =
See Diffusion bonding.
= In linguistics: language change =
In linguistics, the logistic function can be used to model language change: an innovation that is at first marginal begins to spread more quickly with time, and then more slowly as it becomes more universally adopted.
= In agriculture: modeling crop response =
The logistic S-curve can be used for modeling the crop response to changes in growth factors. There are two types of response functions: positive and negative growth curves. For example, the crop yield may increase with increasing value of the growth factor up to a certain level (positive function), or it may decrease with increasing growth factor values (negative function owing to a negative growth factor), a situation which requires an inverted S-curve.
= In economics and sociology: diffusion of innovations =
The logistic function can be used to illustrate the progress of the diffusion of an innovation through its life cycle.
In The Laws of Imitation (1890), Gabriel Tarde describes the rise and spread of new ideas through imitative chains. In particular, Tarde identifies three main stages through which innovations spread: the first one corresponds to the difficult beginnings, during which the idea has to struggle within a hostile environment full of opposing habits and beliefs; the second one corresponds to the properly exponential take-off of the idea, with {\displaystyle f(x)=2^{x}}; finally, the third stage is logarithmic, with {\displaystyle f(x)=\log(x)}, and corresponds to the time when the impulse of the idea gradually slows down while, simultaneously, new opposing ideas appear. The ensuing situation halts or stabilizes the progress of the innovation, which approaches an asymptote.
In a sovereign state, subnational units (constituent states or cities) may use loans to finance their projects. However, this funding source is usually subject to strict legal rules as well as to economic scarcity constraints, especially regarding the resources banks can lend (limited by their equity or by Basel requirements). These restrictions, which represent a saturation level, together with an exponential rush in economic competition for money, create a public-finance diffusion of credit pleas, and the aggregate national response is a sigmoid curve.
Historically, when new products are introduced, there is an intense amount of research and development, which leads to dramatic improvements in quality and reductions in cost. This leads to a period of rapid industry growth. Some of the more famous examples are railroads, incandescent light bulbs, electrification, cars, and air travel. Eventually, dramatic improvement and cost-reduction opportunities are exhausted, the product or process is in widespread use with few remaining potential new customers, and markets become saturated.
Logistic analysis was used in papers by several researchers at the International Institute for Applied Systems Analysis (IIASA). These papers deal with the diffusion of various innovations, infrastructures, and energy-source substitutions, and with the role of work in the economy as well as the long economic cycle. Long economic cycles were investigated by Robert Ayres (1989). Cesare Marchetti published on long economic cycles and on diffusion of innovations. Arnulf Grübler's book (1990) gives a detailed account of the diffusion of infrastructures, including canals, railroads, highways, and airlines, showing that their diffusion followed logistic-shaped curves.
Carlota Perez used a logistic curve to illustrate the long (Kondratiev) business cycle with the following labels: the beginning of a technological era as irruption, the ascent as frenzy, the rapid build-out as synergy, and the completion as maturity.
Inflection point determination in logistic growth regression
Logistic growth regressions carry significant uncertainty when data are available only up to around the inflection point of the growth process. Under these conditions, the estimated height at which the inflection point will occur may carry uncertainties comparable in size to the carrying capacity (K) of the system.
A method to mitigate this uncertainty involves using the carrying capacity from a surrogate logistic growth process as a reference point. By incorporating this constraint, even if K is only an estimate within a factor of two, the regression is stabilized, which improves accuracy and reduces uncertainty in the prediction parameters. This approach can be applied in fields such as economics and biology, where analogous surrogate systems or populations are available to inform the analysis.
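The effect of fixing K can be sketched with a minimal pure-Python regression: once K is pinned to a surrogate value, the logistic linearizes under the logit transform and the remaining parameters follow from ordinary least squares. This is an illustrative construction with synthetic data, not the procedure of any particular study; the function name and parameter values are hypothetical.

```python
import math

def fit_logistic_fixed_K(xs, ys, K):
    """Fit y = K / (1 + exp(-k (x - x0))) with K fixed in advance
    (e.g. borrowed from a surrogate process).

    Linearisation: ln(y / (K - y)) = k*x - k*x0, an ordinary
    least-squares problem in x. Returns (k, x0).
    """
    zs = [math.log(y / (K - y)) for y in ys]
    n = len(xs)
    mx = sum(xs) / n
    mz = sum(zs) / n
    k = (sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
         / sum((x - mx) ** 2 for x in xs))
    x0 = mx - mz / k
    return k, x0

# Synthetic data from a known curve (K=100, k=0.8, x0=5), observed
# only up to the inflection point at x = 5:
K_true, k_true, x0_true = 100.0, 0.8, 5.0
xs = [0, 1, 2, 3, 4, 5]
ys = [K_true / (1 + math.exp(-k_true * (x - x0_true))) for x in xs]

k_hat, x0_hat = fit_logistic_fixed_K(xs, ys, K=100.0)
print(round(k_hat, 3), round(x0_hat, 3))  # recovers 0.8 and 5.0
```

With K left free, the same pre-inflection data would admit a wide range of (K, k, x0) combinations of similar fit quality; constraining K, even to a rough surrogate estimate, removes that degeneracy.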
Sequential analysis
Link created an extension of Wald's theory of sequential analysis to a distribution-free accumulation of random variables until either a positive or negative bound is first equaled or exceeded. Link derives the probability of first equaling or exceeding the positive boundary as 1/(1 + e^{-θA}), the logistic function. This is the first proof that the logistic function may have a stochastic process as its basis. Link provides a century of examples of "logistic" experimental results and a newly derived relation between this probability and the time of absorption at the boundaries.
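The boundary-crossing probability can be checked numerically in the simplest special case, a ±1 random walk between absorbing bounds at ±A (the classic gambler's ruin), where θ = ln(p/(1−p)) satisfies Wald's identity E[e^{−θX}] = 1. This is an illustrative sketch of that one case, not Link's general distribution-free construction; the parameter values are arbitrary.

```python
import math
import random

def hit_prob_simulated(p, A, trials=20000, seed=1):
    """Monte Carlo estimate of P(a +/-1 random walk from 0 hits +A before -A),
    where each step is +1 with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 0
        while -A < s < A:
            s += 1 if rng.random() < p else -1
        hits += (s == A)
    return hits / trials

p, A = 0.6, 3
theta = math.log(p / (1 - p))               # solves E[e^{-theta*X}] = 1
predicted = 1 / (1 + math.exp(-theta * A))  # logistic form: here 27/35 ~ 0.771
print(round(predicted, 3), round(hit_prob_simulated(p, A), 3))
```

For this walk the gambler's-ruin formula (1 − r^A)/(1 − r^{2A}) with r = (1−p)/p = e^{−θ} reduces exactly to 1/(1 + e^{−θA}), so the simulated frequency should track the logistic prediction.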
External links
L.J. Linacre, Why logistic ogive and not autocatalytic curve?, accessed 2009-09-12.
https://web.archive.org/web/20060914155939/http://luna.cas.usf.edu/~mbrannic/files/regression/Logistic.html
Weisstein, Eric W. "Sigmoid Function". MathWorld.
Online experiments with JSXGraph
Esses are everywhere.
Seeing the s-curve in everything.
Restricted Logarithmic Growth with Injection