In mathematics, an exact differential equation or total differential equation is a certain kind of ordinary differential equation which is widely used in physics and engineering.
Definition
Given a simply connected and open subset D of ℝ² and two functions I and J which are continuous on D, an implicit first-order ordinary differential equation of the form
{\displaystyle I(x,y)\,dx+J(x,y)\,dy=0,}
is called an exact differential equation if there exists a continuously differentiable function F, called the potential function, so that
{\displaystyle {\frac {\partial F}{\partial x}}=I}
and
{\displaystyle {\frac {\partial F}{\partial y}}=J.}
An exact equation may also be presented in the following form:
{\displaystyle I(x,y)+J(x,y)\,y'(x)=0}
where the same constraints on I and J apply for the differential equation to be exact.
The nomenclature of "exact differential equation" refers to the exact differential of a function. For a function
{\displaystyle F(x_{0},x_{1},...,x_{n-1},x_{n})}, the exact or total derivative with respect to x₀ is given by
{\displaystyle {\frac {dF}{dx_{0}}}={\frac {\partial F}{\partial x_{0}}}+\sum _{i=1}^{n}{\frac {\partial F}{\partial x_{i}}}{\frac {dx_{i}}{dx_{0}}}.}
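To illustrate the formula, the following SymPy sketch declares x₁ as a function of x₀, so that differentiation automatically produces the total derivative; the sample F is a hypothetical choice, not one used elsewhere in this article.

```python
import sympy as sp

x0 = sp.symbols('x0')
x1 = sp.Function('x1')(x0)

# Hypothetical example: F(x0, x1) = x0*x1 + x0**2
F = x0*x1 + x0**2

# Because x1 is declared as a function of x0, diff returns the total derivative
#   dF/dx0 = ∂F/∂x0 + (∂F/∂x1)*(dx1/dx0)
print(sp.diff(F, x0))   # e.g. x0*Derivative(x1(x0), x0) + 2*x0 + x1(x0)
```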
Example
The function F: ℝ² → ℝ given by
{\displaystyle F(x,y)={\frac {1}{2}}(x^{2}+y^{2})+c}
is a potential function for the differential equation
{\displaystyle x\,dx+y\,dy=0.}
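This can be verified directly: the partial derivatives of F recover the coefficients of dx and dy. A quick check, assuming SymPy is available:

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
F = sp.Rational(1, 2)*(x**2 + y**2) + c

# Potential-function conditions for x dx + y dy = 0: F_x = x and F_y = y.
print(sp.diff(F, x) == x)   # True
print(sp.diff(F, y) == y)   # True
```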
First-order exact differential equations
Identifying first-order exact differential equations
Let the functions M, N, M_y, and N_x, where the subscripts denote the partial derivative with respect to the respective variable, be continuous in the region R: α < x < β, γ < y < δ. Then the differential equation
{\displaystyle M(x,y)+N(x,y){\frac {dy}{dx}}=0}
is exact if and only if
{\displaystyle M_{y}(x,y)=N_{x}(x,y)}
That is, there exists a function ψ(x, y), called a potential function, such that
{\displaystyle \psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)}
So, in general:
{\displaystyle M_{y}(x,y)=N_{x}(x,y)\iff {\begin{cases}\exists \psi (x,y)\\\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}}
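The criterion M_y = N_x is easy to check by machine. Below is a minimal SymPy sketch; the sample pairs (M, N) are hypothetical and not taken from this article.

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(M, N):
    """First-order exactness test: True when M_y - N_x simplifies to zero."""
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Hypothetical sample pairs
print(is_exact(2*x*y + 1, x**2 + 3*y**2))   # True  (M_y = N_x = 2x)
print(is_exact(y**2, x**2))                 # False (M_y = 2y, N_x = 2x)
```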
Proof
The proof has two parts.
First, suppose there is a function ψ(x, y) such that
{\displaystyle \psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)}
It then follows that
{\displaystyle M_{y}(x,y)=\psi _{xy}(x,y){\text{ and }}N_{x}(x,y)=\psi _{yx}(x,y)}
Since M_y and N_x are continuous, ψ_xy and ψ_yx are also continuous, which guarantees their equality.
The second part of the proof involves the construction of ψ(x, y) and can also be used as a procedure for solving first-order exact differential equations. Suppose that
{\displaystyle M_{y}(x,y)=N_{x}(x,y)}
and let there be a function ψ(x, y) for which
{\displaystyle \psi _{x}(x,y)=M(x,y){\text{ and }}\psi _{y}(x,y)=N(x,y)}
Begin by integrating the first equation with respect to x. In practice, it doesn't matter if you integrate the first or the second equation, so long as the integration is done with respect to the appropriate variable.
{\displaystyle {\frac {\partial \psi }{\partial x}}(x,y)=M(x,y)}
{\displaystyle \psi (x,y)=\int M(x,y)\,dx+h(y)}
{\displaystyle \psi (x,y)=Q(x,y)+h(y)}
where Q(x, y) is any differentiable function such that Q_x = M. The function h(y) plays the role of a constant of integration, but instead of just a constant, it is a function of y, since M is a function of both x and y and we are only integrating with respect to x.
Now to show that it is always possible to find an h(y) such that ψ_y = N:
{\displaystyle \psi (x,y)=Q(x,y)+h(y)}
Differentiate both sides with respect to y:
{\displaystyle {\frac {\partial \psi }{\partial y}}(x,y)={\frac {\partial Q}{\partial y}}(x,y)+h'(y)}
Set the result equal to N and solve for h'(y):
{\displaystyle h'(y)=N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)}
In order to determine h'(y) from this equation, the right-hand side must depend only on y. This can be proven by showing that its derivative with respect to x is always zero, so differentiate the right-hand side with respect to x:
{\displaystyle {\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial }{\partial x}}{\frac {\partial Q}{\partial y}}(x,y)\iff {\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial }{\partial y}}{\frac {\partial Q}{\partial x}}(x,y)}
Since Q_x = M,
{\displaystyle {\frac {\partial N}{\partial x}}(x,y)-{\frac {\partial M}{\partial y}}(x,y)}
Now, this is zero based on our initial supposition that
{\displaystyle M_{y}(x,y)=N_{x}(x,y)}
Therefore,
{\displaystyle h'(y)=N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)}
{\displaystyle h(y)=\int {\left(N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)\right)dy}}
{\displaystyle \psi (x,y)=Q(x,y)+\int \left(N(x,y)-{\frac {\partial Q}{\partial y}}(x,y)\right)\,dy+C}
And this completes the proof.
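The construction used in the second half of the proof translates directly into code. The sketch below (assuming SymPy; the trial pair M, N is hypothetical) builds ψ as Q(x, y) + h(y):

```python
import sympy as sp

x, y = sp.symbols('x y')

def potential(M, N):
    """Mirror the proof: Q = ∫M dx, h'(y) = N - Q_y, psi = Q + ∫h'(y) dy."""
    Q = sp.integrate(M, x)                    # any Q with Q_x = M
    h_prime = sp.simplify(N - sp.diff(Q, y))  # depends only on y when M_y = N_x
    return Q + sp.integrate(h_prime, y)

# Hypothetical exact pair: M = 2*x*y + 1, N = x**2 + 3*y**2
print(potential(2*x*y + 1, x**2 + 3*y**2))    # x**2*y + x + y**3
```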
Solutions to first-order exact differential equations
First-order exact differential equations of the form
{\displaystyle M(x,y)+N(x,y){\frac {dy}{dx}}=0}
can be written in terms of the potential function ψ(x, y):
{\displaystyle {\frac {\partial \psi }{\partial x}}+{\frac {\partial \psi }{\partial y}}{\frac {dy}{dx}}=0}
where
{\displaystyle {\begin{cases}\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}}
This is equivalent to taking the total derivative of ψ(x, y):
{\displaystyle {\frac {\partial \psi }{\partial x}}+{\frac {\partial \psi }{\partial y}}{\frac {dy}{dx}}=0\iff {\frac {d}{dx}}\psi (x,y(x))=0}
The solutions to an exact differential equation are then given by
{\displaystyle \psi (x,y(x))=c}
and the problem reduces to finding ψ(x, y).
This can be done by integrating the two expressions M(x, y) dx and N(x, y) dy and then writing down each term in the resulting expressions only once and summing them up in order to get ψ(x, y).
The reasoning behind this is the following. Since
{\displaystyle {\begin{cases}\psi _{x}(x,y)=M(x,y)\\\psi _{y}(x,y)=N(x,y)\end{cases}}}
it follows, by integrating both sides, that
{\displaystyle {\begin{cases}\psi (x,y)=\int M(x,y)\,dx+h(y)=Q(x,y)+h(y)\\\psi (x,y)=\int N(x,y)\,dy+g(x)=P(x,y)+g(x)\end{cases}}}
Therefore,
{\displaystyle Q(x,y)+h(y)=P(x,y)+g(x)}
where Q(x, y) and P(x, y) are differentiable functions such that Q_x = M and P_y = N.
In order for this to be true and for both sides to result in the exact same expression, namely ψ(x, y), h(y) must be contained within the expression for P(x, y), because it cannot be contained within g(x): it is entirely a function of y and not of x and is therefore not allowed to have anything to do with x. By analogy, g(x) must be contained within the expression Q(x, y).
Ergo,
{\displaystyle Q(x,y)=g(x)+f(x,y){\text{ and }}P(x,y)=h(y)+d(x,y)}
for some expressions f(x, y) and d(x, y).
Plugging this into the above equation, we find that
{\displaystyle g(x)+f(x,y)+h(y)=h(y)+d(x,y)+g(x)\Rightarrow f(x,y)=d(x,y)}
and so f(x, y) and d(x, y) turn out to be the same function. Therefore,
{\displaystyle Q(x,y)=g(x)+f(x,y){\text{ and }}P(x,y)=h(y)+f(x,y)}
Since we already showed that
{\displaystyle {\begin{cases}\psi (x,y)=Q(x,y)+h(y)\\\psi (x,y)=P(x,y)+g(x)\end{cases}}}
it follows that
{\displaystyle \psi (x,y)=g(x)+f(x,y)+h(y)}
So, we can construct ψ(x, y) by computing ∫M(x, y) dx and ∫N(x, y) dy, taking the common terms found within the two resulting expressions (that would be f(x, y)), and then adding the terms which are uniquely found in either one of them – g(x) and h(y).
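The recipe above can be sketched mechanically and cross-checked against SymPy's built-in exact-equation solver. The sketch below assumes SymPy; the pair M = 2xy + 1, N = x² + 3y² is a hypothetical example, and the term-union step relies on the shared part f(x, y) appearing in syntactically identical form in both antiderivatives, which it does here.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical exact pair: M_y = N_x = 2x
M, N = 2*x*y + 1, x**2 + 3*y**2

Q = sp.expand(sp.integrate(M, x))   # x**2*y + x      -> f(x, y) + g(x)
P = sp.expand(sp.integrate(N, y))   # x**2*y + y**3   -> f(x, y) + h(y)

# "Write down each term only once": the common term x**2*y is kept once,
# and the terms unique to either antiderivative are added.
psi = sp.Add(*(set(sp.Add.make_args(Q)) | set(sp.Add.make_args(P))))
print(psi)                          # x**2*y + x + y**3

# Cross-check with dsolve's exact-equation hint: it returns psi(x, y(x)) = C1.
yf = sp.Function('y')
ode = sp.Eq(M.subs(y, yf(x)) + N.subs(y, yf(x))*yf(x).diff(x), 0)
print(sp.dsolve(ode, yf(x), hint='1st_exact'))   # e.g. Eq(x**2*y(x) + x + y(x)**3, C1)
```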
Second-order exact differential equations
The concept of exact differential equations can be extended to second-order equations. Consider starting with the first-order exact equation:
{\displaystyle I(x,y)+J(x,y){dy \over dx}=0}
Since both functions I(x, y) and J(x, y) are functions of two variables, implicitly differentiating the multivariate function yields
{\displaystyle {dI \over dx}+\left({dJ \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
Expanding the total derivatives gives that
{\displaystyle {dI \over dx}={\partial I \over \partial x}+{\partial I \over \partial y}{dy \over dx}}
and that
{\displaystyle {dJ \over dx}={\partial J \over \partial x}+{\partial J \over \partial y}{dy \over dx}}
Combining the dy/dx terms gives
{\displaystyle {\partial I \over \partial x}+{dy \over dx}\left({\partial I \over \partial y}+{\partial J \over \partial x}+{\partial J \over \partial y}{dy \over dx}\right)+{d^{2}y \over dx^{2}}(J(x,y))=0}
If the equation is exact, then ∂J/∂x = ∂I/∂y. Additionally, the total derivative of J(x, y) is equal to its implicit ordinary derivative dJ/dx. This leads to the rewritten equation
{\displaystyle {\partial I \over \partial x}+{dy \over dx}\left({\partial J \over \partial x}+{dJ \over dx}\right)+{d^{2}y \over dx^{2}}(J(x,y))=0}
Now, let there be some second-order differential equation
{\displaystyle f(x,y)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
If ∂J/∂x = ∂I/∂y for exact differential equations, then
{\displaystyle \int \left({\partial I \over \partial y}\right)\,dy=\int \left({\partial J \over \partial x}\right)\,dy=I(x,y)-h(x)}
where h(x) is some arbitrary function only of x that was differentiated away to zero upon taking the partial derivative of I(x, y) with respect to y. Although the sign on h(x) could be positive, it is more intuitive to think of the integral's result as I(x, y) missing some original extra function h(x) that was partially differentiated to zero.
Next, if
{\displaystyle {dI \over dx}={\partial I \over \partial x}+{\partial I \over \partial y}{dy \over dx}}
then the term ∂I/∂x should be a function only of x and y, since partial differentiation with respect to x will hold y constant and not produce any derivatives of y. In the second-order equation
{\displaystyle f(x,y)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
only the term f(x, y) is purely a function of x and y. Let ∂I/∂x = f(x, y). Then
{\displaystyle f(x,y)={dI \over dx}-{\partial I \over \partial y}{dy \over dx}}
Since the total derivative of I(x, y) with respect to x is equivalent to the implicit ordinary derivative dI/dx, then
{\displaystyle f(x,y)+{\partial I \over \partial y}{dy \over dx}={dI \over dx}={d \over dx}(I(x,y)-h(x))+{dh(x) \over dx}}
So,
{\displaystyle {dh(x) \over dx}=f(x,y)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}(I(x,y)-h(x))}
and
{\displaystyle h(x)=\int \left(f(x,y)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}(I(x,y)-h(x))\right)\,dx}
Thus, the second-order differential equation
{\displaystyle f(x,y)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^{2}y \over dx^{2}}(J(x,y))=0}
is exact only if
{\displaystyle g\left(x,y,{dy \over dx}\right)={dJ \over dx}+{\partial J \over \partial x}}
and only if the below expression
{\displaystyle \int \left(f(x,y)+{\partial I \over \partial y}{dy \over dx}-{d \over dx}(I(x,y)-h(x))\right)\,dx=\int \left(f(x,y)-{\partial \left(I(x,y)-h(x)\right) \over \partial x}\right)\,dx}
is a function solely of x. Once h(x) is calculated with its arbitrary constant, it is added to I(x, y) − h(x) to make I(x, y). If the equation is exact, then we can reduce to the first-order exact form
{\displaystyle I(x,y)+J(x,y){dy \over dx}=0}
which is solvable by the usual method for first-order exact equations. Now, however, the final implicit solution will contain a C₁x term from integrating h(x) with respect to x twice, as well as a C₂: two arbitrary constants, as expected from a second-order equation.
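The two checks just described (the y′ coefficient equals dJ/dx + ∂J/∂x, and the candidate h′(x) depends on x alone) can be scripted. The sketch below uses a hypothetical exact equation 2xy·y″ + (2x·y′ + 4y)·y′ + 2x = 0, i.e. J = 2xy, g = 2x·y′ + 4y, f = 2x, none of which come from this article; partial derivatives are taken by temporarily replacing y(x) with a plain symbol.

```python
import sympy as sp

x, Y = sp.symbols('x Y')
y = sp.Function('y')(x)
yp = sp.diff(y, x)

# Hypothetical exact second-order equation: f + g*y' + J*y'' = 0
J, g, f = 2*x*y, 2*x*yp + 4*y, 2*x

dJ_total = sp.diff(J, x)                          # total derivative, y treated as y(x)
dJ_partial = sp.diff(J.subs(y, Y), x).subs(Y, y)  # partial derivative in x

# Check 1: the y' coefficient must equal dJ/dx + ∂J/∂x.
print(sp.simplify(g - (dJ_total + dJ_partial)))   # 0

# Recover I(x, y) - h(x) = ∫(∂J/∂x) dy, then check that h'(x) is a function of x only.
I_minus_h = sp.integrate(dJ_partial.subs(y, Y), Y)             # Y**2
h_prime = f + sp.diff(I_minus_h, Y).subs(Y, y)*yp \
            - sp.diff(I_minus_h.subs(Y, y), x)
print(sp.simplify(h_prime))                       # 2*x, so the equation is exact
```

For this hypothetical equation, h(x) = x² + C₁, and the reduced first-order exact form is x² + y² + C₁ + 2xy·y′ = 0.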
Example
Given the differential equation
{\displaystyle (1-x^{2})y''-4xy'-2y=0}
one can always easily check for exactness by examining the y″ term. In this case, both the partial and total derivative of 1 − x² with respect to x are −2x, so their sum is −4x, which is exactly the term in front of y′. With one of the conditions for exactness met, one can calculate that
{\displaystyle \int (-2x)\,dy=I(x,y)-h(x)=-2xy}
Letting f(x, y) = −2y, then
{\displaystyle \int \left(-2y-2xy'-{d \over dx}(-2xy)\right)\,dx=\int (-2y-2xy'+2xy'+2y)\,dx=\int (0)\,dx=h(x)}
So, h(x) is indeed a function only of x and the second-order differential equation is exact. Therefore, h(x) = C₁ and I(x, y) = −2xy + C₁. Reduction to a first-order exact equation yields
{\displaystyle -2xy+C_{1}+(1-x^{2})y'=0}
Integrating I(x, y) with respect to x yields
{\displaystyle -x^{2}y+C_{1}x+i(y)=0}
where i(y) is some arbitrary function of y. Differentiating with respect to y gives an equation correlating the derivative and the y′ term:
{\displaystyle -x^{2}+i'(y)=1-x^{2}}
So, i(y) = y + C₂ and the full implicit solution becomes
{\displaystyle C_{1}x+C_{2}+y-x^{2}y=0}
Solving explicitly for y yields
{\displaystyle y={\frac {C_{1}x+C_{2}}{1-x^{2}}}}
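As a verification sketch (assuming SymPy), substituting the explicit solution back into the original second-order equation should leave a residual that simplifies to zero:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = (C1*x + C2) / (1 - x**2)

# Residual of (1 - x**2)*y'' - 4*x*y' - 2*y = 0 for the explicit solution.
residual = (1 - x**2)*sp.diff(y, x, 2) - 4*x*sp.diff(y, x) - 2*y
print(sp.simplify(residual))   # 0
```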
Higher-order exact differential equations
The concepts of exact differential equations can be extended to any order. Starting with the exact second-order equation
{\displaystyle {d^{2}y \over dx^{2}}(J(x,y))+{dy \over dx}\left({dJ \over dx}+{\partial J \over \partial x}\right)+f(x,y)=0}
it was previously shown that the equation is defined such that
{\displaystyle f(x,y)={dh(x) \over dx}+{d \over dx}(I(x,y)-h(x))-{\partial J \over \partial x}{dy \over dx}}
Implicit differentiation of the exact second-order equation n times will yield an (n + 2)th-order differential equation with new conditions for exactness that can be readily deduced from the form of the equation produced. For example, differentiating the above second-order differential equation once to yield a third-order exact equation gives the following form
{\displaystyle {d^{3}y \over dx^{3}}(J(x,y))+{d^{2}y \over dx^{2}}{dJ \over dx}+{d^{2}y \over dx^{2}}\left({dJ \over dx}+{\partial J \over \partial x}\right)+{dy \over dx}\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)+{df(x,y) \over dx}=0}
where
{\displaystyle {df(x,y) \over dx}={d^{2}h(x) \over dx^{2}}+{d^{2} \over dx^{2}}(I(x,y)-h(x))-{d^{2}y \over dx^{2}}{\partial J \over \partial x}-{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)=F\left(x,y,{dy \over dx}\right)}
and where F(x, y, dy/dx) is a function only of x, y and dy/dx. Combining all dy/dx and d²y/dx² terms not coming from F(x, y, dy/dx) gives
{\displaystyle {d^{3}y \over dx^{3}}(J(x,y))+{d^{2}y \over dx^{2}}\left(2{dJ \over dx}+{\partial J \over \partial x}\right)+{dy \over dx}\left({d^{2}J \over dx^{2}}+{d \over dx}\left({\partial J \over \partial x}\right)\right)+F\left(x,y,{dy \over dx}\right)=0}
Thus, the three conditions for exactness for a third-order differential equation are: the d²y/dx² term must be 2dJ/dx + ∂J/∂x, the dy/dx term must be d²J/dx² + d/dx(∂J/∂x), and
{\displaystyle F\left(x,y,{dy \over dx}\right)-{d^{2} \over dx^{2}}(I(x,y)-h(x))+{d^{2}y \over dx^{2}}{\partial J \over \partial x}+{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)}
must be a function solely of x.
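The first two of these conditions can be evaluated mechanically. The sketch below (assuming SymPy) does so for the choice J(x, y) = y, which is the case used in the example that follows; partial derivatives are again taken via a temporary plain symbol.

```python
import sympy as sp

x, Y = sp.symbols('x Y')
y = sp.Function('y')(x)
yp, ypp = sp.diff(y, x), sp.diff(y, x, 2)

J = y                                             # the choice used in the example below
dJ_total = sp.diff(J, x)                          # y'
dJ_partial = sp.diff(J.subs(y, Y), x).subs(Y, y)  # 0

coeff_ypp = 2*dJ_total + dJ_partial                    # required y'' coefficient: 2*y'
coeff_yp = sp.diff(J, x, 2) + sp.diff(dJ_partial, x)   # required y'  coefficient: y''

# Their contributions are 2*y'*y'' + y''*y' = 3*y'*y'',
# matching the equation y*y''' + 3*y'*y'' + 12*x**2 = 0 treated next.
print(sp.simplify(coeff_ypp*ypp + coeff_yp*yp - 3*yp*ypp))   # 0
```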
Example
Consider the nonlinear third-order differential equation
{\displaystyle yy'''+3y'y''+12x^{2}=0}
If J(x, y) = y, then y″(2dJ/dx + ∂J/∂x) is 2y′y″ and y′(d²J/dx² + d/dx(∂J/∂x)) = y′y″, which together sum to 3y′y″. Fortunately, this appears in our equation. For the last condition of exactness,
{\displaystyle F\left(x,y,{dy \over dx}\right)-{d^{2} \over dx^{2}}\left(I(x,y)-h(x)\right)+{d^{2}y \over dx^{2}}{\partial J \over \partial x}+{dy \over dx}{d \over dx}\left({\partial J \over \partial x}\right)=12x^{2}-0+0+0=12x^{2}}
which is indeed a function only of x. So, the differential equation is exact. Integrating twice yields that
{\displaystyle h(x)=x^{4}+C_{1}x+C_{2}=I(x,y)}
Rewriting the equation as a first-order exact differential equation yields
{\displaystyle x^{4}+C_{1}x+C_{2}+yy'=0}
Integrating I(x, y) with respect to x gives that
{\displaystyle {x^{5} \over 5}+C_{1}x^{2}+C_{2}x+i(y)=0}
Differentiating with respect to y and equating that to the term in front of y′ in the first-order equation gives that i′(y) = y and that i(y) = y²/2 + C₃. The full implicit solution becomes
{\displaystyle {x^{5} \over 5}+C_{1}x^{2}+C_{2}x+C_{3}+{y^{2} \over 2}=0}
The explicit solution, then, is
{\displaystyle y=\pm {\sqrt {C_{1}x^{2}+C_{2}x+C_{3}-{\frac {2x^{5}}{5}}}}}
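Two quick SymPy checks of this example (a verification sketch, not part of the original text): the combination y·y‴ + 3y′·y″ is exactly the third derivative of y²/2, which is the identity behind the exactness, and the explicit solution satisfies the equation because y²/2 then reduces to a polynomial whose third derivative is −12x².

```python
import sympy as sp

x, C1, C2, C3 = sp.symbols('x C1 C2 C3')
yf = sp.Function('y')(x)

# Identity behind the exactness: y*y''' + 3*y'*y'' == d^3/dx^3 (y**2/2)
lhs = yf*sp.diff(yf, x, 3) + 3*sp.diff(yf, x)*sp.diff(yf, x, 2)
print(sp.simplify(lhs - sp.diff(yf**2/2, x, 3)))      # 0

# For the explicit solution, y**2/2 reduces to a polynomial in x, so the
# equation y*y''' + 3*y'*y'' + 12*x**2 = 0 holds identically.
y = sp.sqrt(C1*x**2 + C2*x + C3 - sp.Rational(2, 5)*x**5)
print(sp.simplify(sp.diff(y**2/2, x, 3) + 12*x**2))   # 0
```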
See also
Exact differential
Inexact differential equation
Further reading
Boyce, William E.; DiPrima, Richard C. (1986). Elementary Differential Equations (4th ed.). New York: John Wiley & Sons, Inc. ISBN 0-471-07894-8