- Source: List of logarithmic identities
In mathematics, many logarithmic identities exist. The following is a compilation of notable ones, many of which are used for computational purposes.
Trivial identities
Trivial mathematical identities are relatively simple (for an experienced mathematician), though not necessarily unimportant. Trivial logarithmic identities are:
= Explanations =
By definition, we know that:
$$\log_b(y) = x \iff b^x = y,$$
where $b \neq 0$ and $b \neq 1$.

Setting $x = 0$, we can see that:
$$b^x = y \iff b^{(0)} = y \iff 1 = y \iff y = 1.$$
So, substituting these values into the formula, we see that:
$$\log_b(y) = x \iff \log_b(1) = 0,$$
which gets us the first property.

Setting $x = 1$, we can see that:
$$b^x = y \iff b^{(1)} = y \iff b = y \iff y = b.$$
So, substituting these values into the formula, we see that:
$$\log_b(y) = x \iff \log_b(b) = 1,$$
which gets us the second property.
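As a quick illustration, the following minimal Python sketch (not part of the original article) spot-checks both trivial identities numerically; math.log(y, b) is the standard library's two-argument logarithm.

```python
# Minimal sketch: spot-check log_b(1) = 0 and log_b(b) = 1 for a few bases.
import math

for b in (2, 10, 0.5, 7.3):  # assorted valid bases: positive and not equal to 1
    assert math.isclose(math.log(1, b), 0.0, abs_tol=1e-12)  # log_b(1) = 0
    assert math.isclose(math.log(b, b), 1.0, rel_tol=1e-12)  # log_b(b) = 1
print("log_b(1) = 0 and log_b(b) = 1 hold for the sampled bases")
```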
Cancelling exponentials
Logarithms and exponentials with the same base cancel each other. This is true because logarithms and exponentials are inverse operations, in the same way that multiplication and division are inverse operations, and addition and subtraction are inverse operations.
$$b^{\log_b(x)} = x \text{ because } \operatorname{antilog}_b(\log_b(x)) = x$$
$$\log_b(b^x) = x \text{ because } \log_b(\operatorname{antilog}_b(x)) = x$$
Both of the above are derived from the following two equations that define a logarithm (note that in this explanation, the occurrences of $x$ may not be referring to the same number):
$$\log_b(y) = x \iff b^x = y$$
Looking at the equation $b^x = y$, and substituting the value for $x$ of $\log_b(y) = x$, we get the following equation:
$$b^x = y \iff b^{\log_b(y)} = y,$$
which gets us the first equation.
A rougher way to think about it is that $b^{\text{something}} = y$, and that the "something" is $\log_b(y)$.
Looking at the equation $\log_b(y) = x$, and substituting the value for $y$ of $b^x = y$, we get the following equation:
$$\log_b(y) = x \iff \log_b(b^x) = x,$$
which gets us the second equation.
A rougher way to think about it is that $\log_b(\text{something}) = x$, and that the "something" is $b^x$.
Using simpler operations
Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. These are often known as logarithmic properties, which are documented below. The first three operations below assume that $x = b^c$ and/or $y = b^d$, so that $\log_b(x) = c$ and $\log_b(y) = d$. Derivations also use the log definitions $x = b^{\log_b(x)}$ and $x = \log_b(b^x)$.
Where $b$, $x$, and $y$ are positive real numbers and $b \neq 1$, and $c$ and $d$ are real numbers.
The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law:
$$xy = b^{\log_b(x)}\, b^{\log_b(y)} = b^{\log_b(x) + \log_b(y)} \Rightarrow \log_b(xy) = \log_b\!\left(b^{\log_b(x) + \log_b(y)}\right) = \log_b(x) + \log_b(y)$$
The law for powers exploits another of the laws of indices:
$$x^y = \left(b^{\log_b(x)}\right)^{y} = b^{y\log_b(x)} \Rightarrow \log_b(x^y) = y\log_b(x)$$
The law relating to quotients then follows:
$$\log_b\!\left(\frac{x}{y}\right) = \log_b(xy^{-1}) = \log_b(x) + \log_b(y^{-1}) = \log_b(x) - \log_b(y)$$
$$\log_b\!\left(\frac{1}{y}\right) = \log_b(y^{-1}) = -\log_b(y)$$
Similarly, the root law is derived by rewriting the root as a reciprocal power:
$$\log_b(\sqrt[y]{x}) = \log_b\!\left(x^{\frac{1}{y}}\right) = \frac{1}{y}\log_b(x)$$
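The product, quotient, power, and root laws above can be sanity-checked numerically. The following Python sketch uses arbitrary sample values; it is an illustration, not a proof.

```python
# Illustrative check of the product, quotient, power, and root laws
# with arbitrary sample values.
import math

b, x, y = 3.0, 7.5, 2.4
log = lambda v: math.log(v, b)  # logarithm to base b

assert math.isclose(log(x * y),        log(x) + log(y))   # product
assert math.isclose(log(x / y),        log(x) - log(y))   # quotient
assert math.isclose(log(x ** y),       y * log(x))        # power
assert math.isclose(log(x ** (1 / y)), log(x) / y)        # root
print("product, quotient, power, and root laws hold for the sample values")
```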
= Derivations of product, quotient, and power rules =
These are the three main logarithm laws/rules/principles, from which the other properties listed above can be proven. Each of these logarithm properties corresponds to its respective exponent law, and their derivations/proofs hinge on those facts. There are multiple ways to derive/prove each logarithm law; the following is just one possible method.
Logarithm of a product
To state the logarithm of a product law formally:
$$\forall b \in \mathbb{R}_+,\ b \neq 1,\ \forall x, y \in \mathbb{R}_+,\quad \log_b(xy) = \log_b(x) + \log_b(y)$$
Derivation:
Let $b \in \mathbb{R}_+$, where $b \neq 1$, and let $x, y \in \mathbb{R}_+$. We want to relate the expressions $\log_b(x)$ and $\log_b(y)$. This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to $\log_b(x)$ and $\log_b(y)$ quite often, we will give them some variable names to make working with them easier: Let $m = \log_b(x)$, and let $n = \log_b(y)$.
Rewriting these as exponentials, we see that
$$\begin{aligned} m &= \log_b(x) \iff b^m = x, \\ n &= \log_b(y) \iff b^n = y. \end{aligned}$$
From here, we can relate $b^m$ (i.e. $x$) and $b^n$ (i.e. $y$) using exponent laws as
$$xy = (b^m)(b^n) = b^m \cdot b^n = b^{m+n}$$
To recover the logarithms, we apply $\log_b$ to both sides of the equality.
$$\log_b(xy) = \log_b(b^{m+n})$$
The right side may be simplified using one of the logarithm properties from before: we know that $\log_b(b^{m+n}) = m + n$, giving
$$\log_b(xy) = m + n$$
We now resubstitute the values for $m$ and $n$ into our equation, so our final expression is only in terms of $x$, $y$, and $b$:
$$\log_b(xy) = \log_b(x) + \log_b(y)$$
This completes the derivation.
Logarithm of a quotient
To state the logarithm of a quotient law formally:
$$\forall b \in \mathbb{R}_+,\ b \neq 1,\ \forall x, y \in \mathbb{R}_+,\quad \log_b\!\left(\frac{x}{y}\right) = \log_b(x) - \log_b(y)$$
Derivation:
Let $b \in \mathbb{R}_+$, where $b \neq 1$, and let $x, y \in \mathbb{R}_+$.
We want to relate the expressions $\log_b(x)$ and $\log_b(y)$. This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to $\log_b(x)$ and $\log_b(y)$ quite often, we will give them some variable names to make working with them easier: Let $m = \log_b(x)$, and let $n = \log_b(y)$.
Rewriting these as exponentials, we see that:
$$\begin{aligned} m &= \log_b(x) \iff b^m = x, \\ n &= \log_b(y) \iff b^n = y. \end{aligned}$$
From here, we can relate $b^m$ (i.e. $x$) and $b^n$ (i.e. $y$) using exponent laws as
$$\frac{x}{y} = \frac{(b^m)}{(b^n)} = \frac{b^m}{b^n} = b^{m-n}$$
To recover the logarithms, we apply $\log_b$ to both sides of the equality.
$$\log_b\!\left(\frac{x}{y}\right) = \log_b\!\left(b^{m-n}\right)$$
The right side may be simplified using one of the logarithm properties from before: we know that $\log_b(b^{m-n}) = m - n$, giving
$$\log_b\!\left(\frac{x}{y}\right) = m - n$$
We now resubstitute the values for $m$ and $n$ into our equation, so our final expression is only in terms of $x$, $y$, and $b$:
$$\log_b\!\left(\frac{x}{y}\right) = \log_b(x) - \log_b(y)$$
This completes the derivation.
Logarithm of a power
To state the logarithm of a power law formally:
$$\forall b \in \mathbb{R}_+,\ b \neq 1,\ \forall x \in \mathbb{R}_+,\ \forall r \in \mathbb{R},\quad \log_b(x^r) = r\log_b(x)$$
Derivation:
Let $b \in \mathbb{R}_+$, where $b \neq 1$, let $x \in \mathbb{R}_+$, and let $r \in \mathbb{R}$. For this derivation, we want to simplify the expression $\log_b(x^r)$. To do this, we begin with the simpler expression $\log_b(x)$. Since we will be using $\log_b(x)$ often, we define it as a new variable: Let $m = \log_b(x)$.
To more easily manipulate the expression, we rewrite it as an exponential. By definition, $m = \log_b(x) \iff b^m = x$, so we have
$$b^m = x$$
Similar to the derivations above, we take advantage of another exponent law. In order to have $x^r$ in our final expression, we raise both sides of the equality to the power of $r$:
$$\begin{aligned} (b^m)^r &= (x)^r \\ b^{mr} &= x^r \end{aligned}$$
where we used the exponent law $(b^m)^r = b^{mr}$.
To recover the logarithms, we apply $\log_b$ to both sides of the equality.
$$\log_b(b^{mr}) = \log_b(x^r)$$
The left side of the equality can be simplified using a logarithm law, which states that $\log_b(b^{mr}) = mr$.
$$mr = \log_b(x^r)$$
Substituting in the original value for $m$, rearranging, and simplifying gives
$$\begin{aligned} \left(\log_b(x)\right) r &= \log_b(x^r) \\ r\log_b(x) &= \log_b(x^r) \\ \log_b(x^r) &= r\log_b(x) \end{aligned}$$
This completes the derivation.
Changing the base
To state the change of base logarithm formula formally:
$$\forall a, b \in \mathbb{R}_+,\ a, b \neq 1,\ \forall x \in \mathbb{R}_+,\quad \log_b(x) = \frac{\log_a(x)}{\log_a(b)}$$
This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base.
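For illustration, a minimal Python sketch (not from the original article) of the change-of-base formula in practice: the ratio of natural logs, the ratio of base-10 logs, and the two-argument math.log all agree.

```python
# Change of base in practice: log base 7 of 1000, computed three ways.
import math

x, b = 1000.0, 7.0
via_ln    = math.log(x) / math.log(b)      # ratio of natural logarithms
via_log10 = math.log10(x) / math.log10(b)  # ratio of base-10 logarithms
direct    = math.log(x, b)                 # two-argument form

print(via_ln, via_log10, direct)           # all three agree up to rounding
assert math.isclose(via_ln, via_log10) and math.isclose(via_ln, direct)
```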
= Proof/derivation =
Let $a, b \in \mathbb{R}_+$, where $a, b \neq 1$, and let $x \in \mathbb{R}_+$. Here, $a$ and $b$ are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1. The number $x$ will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term $\log_b(x)$ quite frequently, we define it as a new variable: Let $m = \log_b(x)$.
To more easily manipulate the expression, it can be rewritten as an exponential.
$$b^m = x$$
Applying $\log_a$ to both sides of the equality,
$$\log_a(b^m) = \log_a(x)$$
Now, using the logarithm of a power property, which states that $\log_a(b^m) = m\log_a(b)$,
$$m\log_a(b) = \log_a(x)$$
Isolating $m$, we get the following:
$$m = \frac{\log_a(x)}{\log_a(b)}$$
Resubstituting $m = \log_b(x)$ back into the equation,
$$\log_b(x) = \frac{\log_a(x)}{\log_a(b)}$$
This completes the proof that $\log_b(x) = \frac{\log_a(x)}{\log_a(b)}$.
This formula has several consequences:
$$\log_b a = \frac{1}{\log_a b}$$
$$\log_{b^n} a = \frac{\log_b a}{n}$$
$$b^{\log_a d} = d^{\log_a b}$$
$$-\log_b a = \log_b\!\left(\frac{1}{a}\right) = \log_{1/b} a$$
$$\log_{b_1} a_1 \cdots \log_{b_n} a_n = \log_{b_{\pi(1)}} a_1 \cdots \log_{b_{\pi(n)}} a_n,$$
where $\pi$ is any permutation of the subscripts $1, \ldots, n$. For example
$$\log_b w \cdot \log_a x \cdot \log_d c \cdot \log_d z = \log_d w \cdot \log_b x \cdot \log_a c \cdot \log_d z.$$
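A quick numerical spot check of two of these consequences, the reciprocal identity and the base-permutation identity, using arbitrary sample values (an illustrative sketch only):

```python
# Illustrative spot check (arbitrary sample values): the reciprocal identity
# and the base-permutation identity from above.
import math

a, b, c, d, w, x, z = 2.3, 4.1, 5.7, 9.2, 1.8, 6.6, 3.3
log = math.log  # log(value, base)

assert math.isclose(log(a, b), 1 / log(b, a))          # log_b(a) = 1 / log_a(b)

lhs = log(w, b) * log(x, a) * log(c, d) * log(z, d)    # original bases
rhs = log(w, d) * log(x, b) * log(c, a) * log(z, d)    # permuted bases
assert math.isclose(lhs, rhs)
print("reciprocal and permutation identities hold for the sample values")
```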
= Summation/subtraction =
The following summation/subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities:
$$\log_b(a + c) = \log_b a + \log_b\!\left(1 + \frac{c}{a}\right)$$
$$\log_b(a - c) = \log_b a + \log_b\!\left(1 - \frac{c}{a}\right)$$
Note that the subtraction identity is not defined if $a = c$, since the logarithm of zero is not defined. Also note that, when programming, $a$ and $c$ may have to be switched on the right hand side of the equations if $c \gg a$, to avoid losing the "1 +" due to rounding errors. Many programming languages have a specific log1p(x) function that calculates $\log_e(1+x)$ without underflow (when $x$ is small).
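A short Python illustration of why log1p exists: for very small $x$ the quantity $1 + x$ rounds to exactly $1$, so the naive form loses the answer entirely, while math.log1p keeps it.

```python
# For tiny x, 1 + x rounds to exactly 1 in floating point, so log(1 + x)
# returns 0; math.log1p avoids that intermediate rounding.
import math

x = 1e-18
print(math.log(1 + x))  # 0.0: the "1 +" absorbed x entirely
print(math.log1p(x))    # 1e-18: accurate result
```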
More generally:
$$\log_b \sum_{i=0}^{N} a_i = \log_b a_0 + \log_b\!\left(1 + \sum_{i=1}^{N} \frac{a_i}{a_0}\right) = \log_b a_0 + \log_b\!\left(1 + \sum_{i=1}^{N} b^{\left(\log_b a_i - \log_b a_0\right)}\right)$$
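The following Python sketch applies this rule to a list of log-probabilities, factoring out the largest term so every ratio is at most 1 (the usual way the identity is used in practice). The helper name log_sum is illustrative, not a standard library function.

```python
# Illustrative sketch of the summation rule applied to log-probabilities:
# compute log(sum_i a_i) from the individual log(a_i) without leaving log space.
import math

def log_sum(log_terms, base=math.e):
    """Return log_base(sum_i base**t_i) for the given log-terms t_i."""
    m = max(log_terms)                                 # plays the role of log_b(a_0) above
    total = sum(base ** (t - m) for t in log_terms)    # = 1 + sum of the ratios a_i / a_0
    return m + math.log(total, base)

log_probs = [math.log(p) for p in (0.05, 0.2, 0.7, 0.05)]
print(math.exp(log_sum(log_probs)))  # ~1.0, since the probabilities sum to 1
```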
= Exponents =
A useful identity involving exponents:
$$x^{\frac{\log(\log(x))}{\log(x)}} = \log(x)$$
or more universally:
$$x^{\frac{\log(a)}{\log(x)}} = a$$
= Other/resulting identities =
$$\frac{1}{\frac{1}{\log_x(a)} + \frac{1}{\log_y(a)}} = \log_{xy}(a)$$
$$\frac{1}{\frac{1}{\log_x(a)} - \frac{1}{\log_y(a)}} = \log_{\frac{x}{y}}(a)$$
Inequalities
$$\frac{x}{1+x} \leq \ln(1+x) \leq \frac{x(6+x)}{6+4x} \leq x \quad \text{for all } -1 < x$$
$$\begin{aligned} \frac{2x}{2+x} &\leq 3 - \sqrt{\frac{27}{3+2x}} \leq \frac{x}{\sqrt{1 + x + x^2/12}} \\ &\leq \ln(1+x) \leq \frac{x}{\sqrt{1+x}} \leq \frac{x}{2}\,\frac{2+x}{1+x} \\ &\quad \text{for } 0 \leq x \text{, reverse for } -1 < x \leq 0 \end{aligned}$$
All are accurate around $x = 0$, but not for large numbers.
Calculus identities
= Limits =
$$\lim_{x \to 0^+} \log_a(x) = -\infty \quad \text{if } a > 1$$
$$\lim_{x \to 0^+} \log_a(x) = \infty \quad \text{if } 0 < a < 1$$
$$\lim_{x \to \infty} \log_a(x) = \infty \quad \text{if } a > 1$$
$$\lim_{x \to \infty} \log_a(x) = -\infty \quad \text{if } 0 < a < 1$$
$$\lim_{x \to \infty} x^b \log_a(x) = \infty \quad \text{if } b > 0$$
$$\lim_{x \to \infty} \frac{\log_a(x)}{x^b} = 0 \quad \text{if } b > 0$$
The last limit is often summarized as "logarithms grow more slowly than any power or root of x".
= Derivatives of logarithmic functions =
$$\frac{d}{dx}\ln x = \frac{1}{x}, \quad x > 0$$
$$\frac{d}{dx}\ln |x| = \frac{1}{x}, \quad x \neq 0$$
$$\frac{d}{dx}\log_a x = \frac{1}{x \ln a}, \quad x > 0,\ a > 0,\ \text{and } a \neq 1$$
= Integral definition =
$$\ln x = \int_1^x \frac{1}{t}\, dt$$
To modify the limits of integration to run from $x$ to $1$, we change the order of integration, which changes the sign of the integral:
$$-\int_1^x \frac{1}{t}\, dt = \int_x^1 \frac{1}{t}\, dt$$
Therefore:
$$\ln \frac{1}{x} = \int_x^1 \frac{1}{t}\, dt$$
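As an illustration of the integral definition, the following Python sketch approximates the integral of $1/t$ with a crude midpoint Riemann sum and compares it with math.log; the helper name ln_by_integral is illustrative.

```python
# Illustrative sketch: approximate ln(x) by a midpoint Riemann sum of 1/t on [1, x].
import math

def ln_by_integral(x, steps=100_000):
    dt = (x - 1) / steps
    return sum(1.0 / (1 + (i + 0.5) * dt) for i in range(steps)) * dt

print(ln_by_integral(5.0), math.log(5.0))  # agree to several decimal places
```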
= Riemann Sum =
$$\ln(n+1) = \lim_{k \to \infty} \sum_{i=1}^{k} \frac{1}{x_i}\,\Delta x = \lim_{k \to \infty} \sum_{i=1}^{k} \frac{1}{1 + \frac{i-1}{k}n} \cdot \frac{n}{k} = \lim_{k \to \infty} \sum_{x=1}^{k \cdot n} \frac{1}{1 + \frac{x}{k}} \cdot \frac{1}{k} = \lim_{k \to \infty} \sum_{x=1}^{k \cdot n} \frac{1}{k + x} = \lim_{k \to \infty} \sum_{x=k+1}^{k \cdot n + k} \frac{1}{x} = \lim_{k \to \infty} \sum_{x=k+1}^{k(n+1)} \frac{1}{x}$$
for $\Delta x = \frac{n}{k}$, where $x_i$ is a sample point in each interval.
= Series representation =
The natural logarithm $\ln(1+x)$ has a well-known Taylor series expansion that converges for $x$ in the open-closed interval $(-1, 1]$:
$$\ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \frac{x^6}{6} + \cdots.$$
Within this interval, for $x = 1$ the series is conditionally convergent, and for all other values it is absolutely convergent. For $x > 1$ or $x \leq -1$, the series does not converge to $\ln(1+x)$. In these cases, different representations or methods must be used to evaluate the logarithm.
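For illustration, the following Python sketch compares partial sums of this series with math.log1p; convergence is fast well inside the interval and very slow near $x = 1$. The helper name ln1p_series is illustrative.

```python
# Illustrative sketch: partial sums of the Taylor series versus math.log1p.
import math

def ln1p_series(x, terms):
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

for x in (0.1, 0.5, 1.0):
    print(x, ln1p_series(x, 50), math.log1p(x))
```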
= Harmonic number difference =
It is not uncommon in advanced mathematics, particularly in analytic number theory and asymptotic analysis, to encounter expressions involving differences or ratios of harmonic numbers at scaled indices. The identity involving the limiting difference between harmonic numbers at scaled indices and its relationship to the logarithmic function provides an intriguing example of how discrete sequences can asymptotically relate to continuous functions. This identity is expressed as
$$\lim_{k \to \infty}\left(H_{k(n+1)} - H_k\right) = \ln(n+1)$$
which characterizes the behavior of harmonic numbers as they grow large. This approximation (which precisely equals $\ln(n+1)$ in the limit) reflects how summation over increasing segments of the harmonic series exhibits integral properties, giving insight into the interplay between discrete and continuous analysis. It also illustrates how understanding the behavior of sums and series at large scales can lead to insightful conclusions about their properties. Here $H_k$ denotes the $k$-th harmonic number, defined as
$$H_k = \sum_{j=1}^{k} \frac{1}{j}$$
The harmonic numbers are a fundamental sequence in number theory and analysis, known for their logarithmic growth. This result leverages the fact that the sum of the inverses of integers (i.e., harmonic numbers) can be closely approximated by the natural logarithm function, plus a constant, especially when extended over large intervals. As $k$ tends towards infinity, the difference between the harmonic numbers $H_{k(n+1)}$ and $H_k$ converges to a non-zero value. This persistent non-zero difference, $\ln(n+1)$, precludes the possibility of the harmonic series approaching a finite limit, thus providing a clear mathematical articulation of its divergence. The technique of approximating sums by integrals (specifically using the integral test or by direct integral approximation) is fundamental in deriving such results. This specific identity can be a consequence of these approximations, considering:
$$\sum_{j=k+1}^{k(n+1)} \frac{1}{j} \approx \int_k^{k(n+1)} \frac{dx}{x}$$
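A small numerical illustration of the limiting difference (a sketch using plain Python sums; the helper H is illustrative):

```python
# Illustrative sketch: H_{k(n+1)} - H_k approaches ln(n+1) as k grows.
import math

def H(k):
    """k-th harmonic number by direct summation (fine for these sizes)."""
    return sum(1.0 / j for j in range(1, k + 1))

n = 3
for k in (10, 100, 10_000):
    print(k, H(k * (n + 1)) - H(k), math.log(n + 1))
```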
Harmonic limit derivation
The limit explores the growth of the harmonic numbers when indices are multiplied by a scaling factor and then differenced. It specifically captures the sum from $k+1$ to $k(n+1)$:
$$H_{k(n+1)} - H_k = \sum_{j=k+1}^{k(n+1)} \frac{1}{j}$$
This can be estimated using the integral test for convergence, or more directly by comparing it to the integral of $1/x$ from $k$ to $k(n+1)$:
$$\lim_{k \to \infty} \sum_{j=k+1}^{k(n+1)} \frac{1}{j} = \int_k^{k(n+1)} \frac{dx}{x} = \ln(k(n+1)) - \ln(k) = \ln\!\left(\frac{k(n+1)}{k}\right) = \ln(n+1)$$
As the window's lower bound begins at $k+1$ and the upper bound extends to $k(n+1)$, both of which tend toward infinity as $k \to \infty$, the summation window encompasses an increasingly vast portion of the smallest possible terms of the harmonic series (those with astronomically large denominators), creating a discrete sum that stretches towards infinity. This mirrors how continuous integrals accumulate value across an infinitesimally fine partitioning of the domain. In the limit, the interval is effectively from $1$ to $n+1$, where the onset $k$ implies this minimally discrete region.
= Double series formula =
The harmonic number difference formula for $\ln(m)$ is an extension of the classic alternating identity for $\ln(2)$:
$$\ln(2) = \lim_{k \to \infty} \sum_{n=1}^{k} \left(\frac{1}{2n-1} - \frac{1}{2n}\right)$$
which can be generalized as the double series over the residues of $m$:
$$\ln(m) = \sum_{x \in \langle m \rangle \cap \mathbb{N}}\ \sum_{r \in \mathbb{Z}_m \cap \mathbb{N}} \left(\frac{1}{x - r} - \frac{1}{x}\right) = \sum_{x \in \langle m \rangle \cap \mathbb{N}}\ \sum_{r \in \mathbb{Z}_m \cap \mathbb{N}} \frac{r}{x(x - r)}$$
where $\langle m \rangle$ is the principal ideal generated by $m$. Subtracting $\frac{1}{x}$ from each term $\frac{1}{x-r}$ (i.e., balancing each term with the modulus) reduces the magnitude of each term's contribution, ensuring convergence by controlling the series' tendency toward divergence as $m$ increases. For example:
$$\ln(4) = \lim_{k \to \infty} \sum_{n=1}^{k} \left(\frac{1}{4n-3} - \frac{1}{4n}\right) + \left(\frac{1}{4n-2} - \frac{1}{4n}\right) + \left(\frac{1}{4n-1} - \frac{1}{4n}\right)$$
This method leverages the fine differences between closely related terms to stabilize the series. The sum over all residues $r \in \mathbb{N}$ ensures that adjustments are uniformly applied across all possible offsets within each block of $m$ terms. This uniform distribution of the "correction" across different intervals defined by $x - r$ functions similarly to telescoping over a very large sequence. It helps to flatten out the discrepancies that might otherwise lead to divergent behavior in a straightforward harmonic series. Note that the structure of the summands of this formula matches those of the interpolated harmonic number $H_x$ when both the domain and range are negated (i.e., $-H_{-x}$). However, the interpretation and roles of the variables differ.
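As a numerical illustration of the $\ln(4)$ example above, the following Python sketch sums the first $k$ blocks of the series and compares the result with math.log(4); the helper name ln4_partial is illustrative.

```python
# Illustrative sketch: partial sums of the double-series example for ln(4).
# Each block n contributes three "balanced" terms with denominators
# 4n-3, 4n-2, 4n-1, each corrected by -1/(4n).
import math

def ln4_partial(k):
    total = 0.0
    for n in range(1, k + 1):
        total += (1 / (4*n - 3) - 1 / (4*n)) \
               + (1 / (4*n - 2) - 1 / (4*n)) \
               + (1 / (4*n - 1) - 1 / (4*n))
    return total

print(ln4_partial(100_000), math.log(4))  # close; the error shrinks roughly like 1/k
```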
Deveci's Proof
A fundamental feature of the proof is the accumulation of the subtrahends $\frac{1}{x}$ into a unit fraction, that is, $\frac{m}{x} = \frac{1}{n}$ for $m \mid x$, thus $m = \omega + 1$ rather than $m = |\mathbb{Z}_m \cap \mathbb{N}|$, where the extrema of $\mathbb{Z}_m \cap \mathbb{N}$ are $[0, \omega]$ if $\mathbb{N} = \mathbb{N}_0$ and $[1, \omega]$ otherwise, with the minimum of $0$ being implicit in the latter case due to the structural requirements of the proof. Since the cardinality of $\mathbb{Z}_m \cap \mathbb{N}$ depends on the selection of one of two possible minima, the integral $\int \frac{1}{t}\,dt$, as a set-theoretic procedure, is a function of the maximum $\omega$ (which remains consistent across both interpretations) plus $1$, not the cardinality (which is ambiguous due to varying definitions of the minimum). Whereas the harmonic number difference computes the integral in a global sliding window, the double series, in parallel, computes the sum in a local sliding window (a shifting $m$-tuple) over the harmonic series, advancing the window by $m$ positions to select the next $m$-tuple, and offsetting each element of each tuple by $\frac{1}{m}$ relative to the window's absolute position. The sum $\sum_{n=1}^{k}\sum \frac{1}{x-r}$ corresponds to $H_{km}$, which scales $H_m$ without bound. The sum $\sum_{n=1}^{k} -\frac{1}{n}$ corresponds to the prefix $H_k$ trimmed from the series to establish the window's moving lower bound $k+1$, and $\ln(m)$ is the limit of the sliding window (the scaled, truncated series):
$$\sum_{n=1}^{k}\sum_{r=1}^{\omega}\left(\frac{1}{mn - r} - \frac{1}{mn}\right) = \sum_{n=1}^{k}\sum_{r=0}^{\omega}\left(\frac{1}{mn - r} - \frac{1}{mn}\right)$$
$$= \sum_{n=1}^{k}\left(-\frac{1}{n} + \sum_{r=0}^{\omega}\frac{1}{mn - r}\right)$$
$$= -H_k + \sum_{n=1}^{k}\sum_{r=0}^{\omega}\frac{1}{mn - r}$$
$$= -H_k + \sum_{n=1}^{k}\sum_{r=0}^{\omega}\frac{1}{(n-1)m + m - r}$$
$$= -H_k + \sum_{n=1}^{k}\sum_{j=1}^{m}\frac{1}{(n-1)m + j}$$
$$= -H_k + \sum_{n=1}^{k}\left(H_{nm} - H_{m(n-1)}\right)$$
$$= -H_k + H_{mk}$$
$$\lim_{k \to \infty} H_{km} - H_k = \sum_{x \in \langle m \rangle \cap \mathbb{N}}\ \sum_{r \in \mathbb{Z}_m \cap \mathbb{N}} \left(\frac{1}{x - r} - \frac{1}{x}\right) = \ln(\omega + 1) = \ln(m)$$
= Integrals of logarithmic functions =
$$\int \ln x\, dx = x\ln x - x + C = x(\ln x - 1) + C$$
$$\int \log_a x\, dx = x\log_a x - \frac{x}{\ln a} + C = \frac{x(\ln x - 1)}{\ln a} + C$$
To remember higher integrals, it is convenient to define
$$x^{[n]} = x^n\left(\log(x) - H_n\right)$$
where $H_n$ is the $n$th harmonic number:
$$x^{[0]} = \log x$$
$$x^{[1]} = x\log(x) - x$$
$$x^{[2]} = x^2\log(x) - \tfrac{3}{2}x^2$$
$$x^{[3]} = x^3\log(x) - \tfrac{11}{6}x^3$$
Then
$$\frac{d}{dx}\,x^{[n]} = nx^{[n-1]}$$
$$\int x^{[n]}\, dx = \frac{x^{[n+1]}}{n+1} + C$$
Approximating large numbers
The identities of logarithms can be used to approximate large numbers. Note that $\log_b(a) + \log_b(c) = \log_b(ac)$, where $a$, $b$, and $c$ are arbitrary constants. Suppose that one wants to approximate the 44th Mersenne prime, $2^{32{,}582{,}657} - 1$. To get the base-10 logarithm, we would multiply 32,582,657 by $\log_{10}(2)$, getting $9{,}808{,}357.09543 = 9{,}808{,}357 + 0.09543$. We can then get $10^{9{,}808{,}357} \times 10^{0.09543} \approx 1.25 \times 10^{9{,}808{,}357}$.
Similarly, factorials can be approximated by summing the logarithms of the terms.
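For illustration, a minimal Python sketch of both approximations: the leading digits and digit count of the 44th Mersenne prime via $\log_{10}$, and the number of digits of 1000! via a sum of logarithms.

```python
# Illustrative sketch: base-10 logarithms turn huge products and powers into sums.
import math

e = 32_582_657
log10_val = e * math.log10(2)            # log10 of 2**e; subtracting 1 is negligible here
digits, frac = divmod(log10_val, 1)
print(f"2**{e} - 1 is about {10**frac:.2f} x 10^{int(digits)}")  # ~1.25 x 10^9,808,357

log10_fact = sum(math.log10(i) for i in range(1, 1001))  # log10(1000!)
print(f"1000! has {int(log10_fact) + 1} digits")
```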
Complex logarithm identities
The complex logarithm is the complex number analogue of the logarithm function. No single valued function on the complex plane can satisfy the normal rules for logarithms. However, a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single valued version, called the principal value of the logarithm, can be defined which is discontinuous on the negative x axis, and is equal to the multivalued version on a single branch cut.
= Definitions =
In what follows, a capital first letter is used for the principal value of functions, and the lower case version is used for the multivalued function. The single valued version of definitions and identities is always given first, followed by a separate section for the multiple valued versions.
ln(r) is the standard natural logarithm of the real number r.
Arg(z) is the principal value of the arg function; its value is restricted to (−π, π]. It can be computed using Arg(x + iy) = atan2(y, x).
Log(z) is the principal value of the complex logarithm function and has imaginary part in the range (−π, π].
$$\operatorname{Log}(z) = \ln(|z|) + i\operatorname{Arg}(z)$$
$$e^{\operatorname{Log}(z)} = z$$
The multiple valued version of log(z) is a set, but it is easier to write it without braces and using it in formulas follows obvious rules.
log(z) is the set of complex numbers v which satisfy ev = z
arg(z) is the set of possible values of the arg function applied to z.
When k is any integer:
$$\log(z) = \ln(|z|) + i\arg(z)$$
$$\log(z) = \operatorname{Log}(z) + 2\pi i k$$
$$e^{\log(z)} = z$$
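For illustration, Python's cmath module implements the principal value: cmath.log(z) returns $\ln|z| + i\operatorname{Arg}(z)$ with imaginary part in $(-\pi, \pi]$, and the other branches differ by $2\pi i k$. A minimal sketch:

```python
# Principal value of the complex logarithm via the standard cmath module.
import cmath
import math

z = -1 + 1j
principal = cmath.log(z)                               # Log(z)
rebuilt = complex(math.log(abs(z)), cmath.phase(z))    # ln|z| + i*Arg(z)
print(principal, rebuilt)                              # the two agree
print(cmath.exp(principal))                            # recovers z (up to rounding)

k = 1                                                  # any integer picks another branch
other_branch = principal + 2j * math.pi * k
print(cmath.exp(other_branch))                         # also recovers z
```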
= Constants =
Principal value forms:
$$\operatorname{Log}(1) = 0$$
$$\operatorname{Log}(e) = 1$$
Multiple value forms, for any $k$ an integer:
$$\log(1) = 0 + 2\pi i k$$
$$\log(e) = 1 + 2\pi i k$$
= Summation =
Principal value forms:
$$\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2) \pmod{2\pi i}$$
$$\operatorname{Log}(z_1) + \operatorname{Log}(z_2) = \operatorname{Log}(z_1 z_2) \quad (-\pi < \operatorname{Arg}(z_1) + \operatorname{Arg}(z_2) \leq \pi;\ \text{e.g., } \operatorname{Re} z_1 \geq 0 \text{ and } \operatorname{Re} z_2 > 0)$$
$$\operatorname{Log}(z_1) - \operatorname{Log}(z_2) = \operatorname{Log}(z_1/z_2) \pmod{2\pi i}$$
$$\operatorname{Log}(z_1) - \operatorname{Log}(z_2) = \operatorname{Log}(z_1/z_2) \quad (-\pi < \operatorname{Arg}(z_1) - \operatorname{Arg}(z_2) \leq \pi;\ \text{e.g., } \operatorname{Re} z_1 \geq 0 \text{ and } \operatorname{Re} z_2 > 0)$$
Multiple value forms:
$$\log(z_1) + \log(z_2) = \log(z_1 z_2)$$
$$\log(z_1) - \log(z_2) = \log(z_1/z_2)$$
= Powers =
A complex power of a complex number can have many possible values.
Principal value form:
$${z_1}^{z_2} = e^{z_2 \operatorname{Log}(z_1)}$$
$$\operatorname{Log}\!\left({z_1}^{z_2}\right) = z_2 \operatorname{Log}(z_1) \pmod{2\pi i}$$
Multiple value forms:
$${z_1}^{z_2} = e^{z_2 \log(z_1)}$$
Where $k_1$, $k_2$ are any integers:
$$\log\!\left({z_1}^{z_2}\right) = z_2 \log(z_1) + 2\pi i k_2$$
$$\log\!\left({z_1}^{z_2}\right) = z_2 \operatorname{Log}(z_1) + z_2 2\pi i k_1 + 2\pi i k_2$$
Asymptotic identities
= Pronic numbers =
As a consequence of the harmonic number difference, the natural logarithm is asymptotically approximated by a finite series difference, representing a truncation of the integral at $k = n$:
$$H_{2T[n]} - H_n \sim \ln(n+1)$$
where $T[n]$ is the $n$th triangular number, and $2T[n]$ is the sum of the first $n$ even integers. Since the $n$th pronic number is asymptotically equivalent to the $n$th perfect square, it follows that:
$$H_{n^2} - H_n \sim \ln(n+1)$$
= Prime number theorem =
The prime number theorem provides the following asymptotic equivalence:
$$\frac{n}{\pi(n)} \sim \ln n$$
where $\pi(n)$ is the prime counting function. This relationship is equal to:
$$\frac{n}{H(1, 2, \ldots, x_n)} \sim \ln n$$
where $H(x_1, x_2, \ldots, x_n)$ is the harmonic mean of $x_1, x_2, \ldots, x_n$. This is derived from the fact that the difference between the $n$th harmonic number and $\ln n$ asymptotically approaches a small constant, resulting in $H_{n^2} - H_n \sim H_n$.
This behavior can also be derived from the properties of logarithms: $\ln n$ is half of $\ln n^2$, and this "first half" is the natural log of the root of $n^2$, which corresponds roughly to the first $\frac{1}{n}$th of the sum $H_{n^2}$, or $H_n$. The asymptotic equivalence of the first $\frac{1}{n}$th of $H_{n^2}$ to the latter $\frac{n-1}{n}$th of the series is expressed as follows:
$$\frac{H_n}{H_{n^2}} \sim \frac{\ln\sqrt{n}}{\ln n} = \frac{1}{2}$$
which generalizes to:
$$\frac{H_n}{H_{n^k}} \sim \frac{\ln\sqrt[k]{n}}{\ln n} = \frac{1}{k}$$
$$kH_n \sim H_{n^k}$$
and:
$$kH_n - H_n \sim (k-1)\ln(n+1)$$
$$H_{n^k} - H_n \sim (k-1)\ln(n+1)$$
$$kH_n - H_{n^k} \sim (k-1)\gamma$$
for fixed $k$. The correspondence sets $H_n$ as a unit magnitude that partitions $H_{n^k}$ across powers, where each interval $\frac{1}{n}$ to $\frac{1}{n^2}$, $\frac{1}{n^2}$ to $\frac{1}{n^3}$, etc., corresponds to one $H_n$ unit, illustrating that $H_{n^k}$ forms a divergent series as $k \to \infty$.
= Real Arguments =
These approximations extend to the real-valued domain through the interpolated harmonic number. For example, where $x \in \mathbb{R}$:
$$H_{x^2} - H_x \sim \ln x$$
= Stirling numbers =
The natural logarithm is asymptotically related to the harmonic numbers by the Stirling numbers and the Gregory coefficients. By representing $H_n$ in terms of Stirling numbers of the first kind, the harmonic number difference is alternatively expressed as follows, for fixed $k$:
$$\frac{s(n^k + 1, 2)}{(n^k)!} - \frac{s(n+1, 2)}{n!} \sim (k-1)\ln(n+1)$$
See also
List of formulae involving π
List of integrals of logarithmic functions
List of mathematical identities
Lists of mathematics topics
List of trigonometric identities
External links
A lesson on logarithms can be found on Wikiversity
Logarithm in Mathwords