3D Face Morphable Model
In computer vision and computer graphics, the 3D Face Morphable Model (3DFMM) is a generative technique for modeling textured 3D faces. The generation of new faces is based on a pre-existing database of example faces acquired through a 3D scanning procedure. All these faces are in dense point-to-point correspondence, which enables the generation of a new realistic face (morph) by combining the acquired faces. A new 3D face can be inferred from one or multiple existing images of a face or by arbitrarily combining the example faces. 3DFMM provides a way to represent face shape and texture disentangled from external factors, such as camera parameters and illumination.
The 3D Morphable Model (3DMM) is a general framework that has been applied to various objects other than faces, e.g., the whole human body, specific body parts, and animals. 3DMMs were first developed to solve vision tasks by representing objects in terms of the prior knowledge that can be gathered from that object class. The prior knowledge is statistically extracted from a database of 3D examples and used as a basis to represent or generate new plausible objects of that class. Its effectiveness lies in the ability to efficiently encode this prior information, enabling the solution of otherwise ill-posed problems (such as single-view 3D object reconstruction).
Historically, face models were the first example of morphable models, and 3DFMM remains a very active field of research today. In fact, 3DFMMs have been successfully employed in face recognition, the entertainment industry (gaming and extended reality, virtual try-on, face replacement, face reenactment), digital forensics, and medical applications.
Modeling
In general, 3D faces can be modeled by three variational components extracted from the face dataset:
shape model - model of the distribution of geometrical shape across different subjects
expression model - model of the distribution of geometrical shape across different facial expressions
appearance model - model of the distribution of surface textures (color and illumination)
Shape modeling
The 3DFMM uses statistical analysis to define a statistical shape space, a vector space equipped with a probability distribution, or prior. To extract the prior from the example dataset, all the 3D faces must be in dense point-to-point correspondence. This means that each point has the same semantic meaning on each face (e.g., the tip of the nose, the corner of the eye). In this way, by fixing a point, we can, for example, derive the probability distribution of the texture's red channel values over all the faces. A face shape $S$ of $n$ vertices is defined as the vector containing the 3D coordinates of the $n$ vertices in a specified order, that is, $S \in \mathbb{R}^{3n}$. A shape space is regarded as a $d$-dimensional space that generates plausible 3D faces by performing a lower-dimensional ($d \ll n$) parametrization of the database. Thus, a shape $S$ can be represented through a generator function $\mathbf{c}\colon \mathbb{R}^{d} \to \mathbb{R}^{3n}$ by the parameters $\mathbf{w} \in \mathbb{R}^{d}$, so that $\mathbf{c}(\mathbf{w}) = S \in \mathbb{R}^{3n}$. The most common statistical technique used in 3DFMM to generate the shape space is Principal Component Analysis (PCA), which produces a basis that maximizes the variance of the data. With PCA, the generator function is linear and defined as
$$\mathbf{c}(\mathbf{w}) = \bar{\mathbf{c}} + \mathbf{E}\mathbf{w}$$
where $\bar{\mathbf{c}}$ is the mean over the training data and $\mathbf{E} \in \mathbb{R}^{3n \times d}$ is the matrix that contains the $d$ most dominant eigenvectors.
Using a single generator function for the whole face leads to an imperfect representation of finer details. A solution is to use local models of the face, obtained by segmenting important parts such as the eyes, mouth, and nose.
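As an illustration of the PCA-based shape space, the following sketch builds a linear generator from a matrix of registered face shapes. It is a minimal sketch, not any particular 3DFMM implementation: the random `face_shapes` array stands in for real scan data, and the values of `m`, `n`, and `d` are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed input: m example faces in dense correspondence, each flattened
# to a 3n-vector (x1, y1, z1, ..., xn, yn, zn). Random data as placeholder.
m, n = 200, 5000
face_shapes = np.random.rand(m, 3 * n)

d = 50                               # dimension of the shape space (d << 3n)
pca = PCA(n_components=d).fit(face_shapes)

c_bar = pca.mean_                    # mean shape, c_bar in R^{3n}
E = pca.components_.T                # d most dominant eigenvectors, E in R^{3n x d}

def generate_shape(w):
    """Linear generator c(w) = c_bar + E w, mapping R^d -> R^{3n}."""
    return c_bar + E @ w

# Sample a plausible new face by drawing coefficients scaled by the
# standard deviation of each principal component.
w = np.random.randn(d) * np.sqrt(pca.explained_variance_)
S = generate_shape(w).reshape(n, 3)  # n vertices with (x, y, z) coordinates
```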
Expression modeling
Expression modeling is performed by explicitly separating the representation of identity from that of facial expression. Depending on how identity and expression are combined, these methods can be classified as additive, multiplicative, and nonlinear.
The additive model is a linear model in which the expression is an additive offset with respect to the identity:
$$\mathbf{c}(\mathbf{w}^{s}, \mathbf{w}^{e}) = \bar{\mathbf{c}} + \mathbf{E}^{s}\mathbf{w}^{s} + \mathbf{E}^{e}\mathbf{w}^{e}$$
where $\mathbf{E}^{s}$, $\mathbf{E}^{e}$ and $\mathbf{w}^{s}$, $\mathbf{w}^{e}$ are the basis matrices and coefficient vectors of the shape and expression spaces, respectively. With this model, given the 3D shape of a subject in a neutral expression $\mathbf{c}^{ne}$ and in a particular expression $\mathbf{c}^{exp}$, we can transfer the expression to a different subject by adding the offset $\Delta\mathbf{c} = \mathbf{c}^{exp} - \mathbf{c}^{ne}$. Two PCAs can be performed to learn two separate spaces for shape and expression.
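A minimal sketch of expression transfer under the additive model: the offset between one subject's expressive and neutral shapes is added to another subject's neutral shape. All names are illustrative, and the shapes are assumed to be flattened 3n-vectors in dense correspondence.

```python
import numpy as np

n = 5000                               # hypothetical vertex count
# Placeholder shapes; in practice these come from registered 3D scans.
alice_neutral = np.random.rand(3 * n)  # c^{ne} of subject A
alice_smile = np.random.rand(3 * n)    # c^{exp} of subject A
bob_neutral = np.random.rand(3 * n)    # c^{ne} of subject B

# Because the expression is an additive offset independent of identity,
# it can be transferred between subjects by simple vector arithmetic.
delta_c = alice_smile - alice_neutral  # delta_c = c^{exp} - c^{ne}
bob_smile = bob_neutral + delta_c      # subject B now wears A's expression
```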
In a multiplicative model, shape and expression can be combined in different ways. For example, by exploiting $d_{e}$ operators $\mathbf{T}_{j}\colon \mathbb{R}^{3n} \to \mathbb{R}^{3n}$ that transform a neutral expression into a target blendshape, we can write
$$\mathbf{c}(\mathbf{w}^{s}, \mathbf{w}^{e}) = \sum_{j=1}^{d_{e}} w_{j}^{e}\, \mathbf{T}_{j}\!\left(\mathbf{c}(\mathbf{w}^{s}) + \boldsymbol{\delta}^{s}\right) + \boldsymbol{\delta}_{j}^{e}$$
where $\boldsymbol{\delta}^{s}$ and $\boldsymbol{\delta}_{j}^{e}$ are correction vectors toward the target expression.
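The following sketch evaluates the multiplicative formula above under the simplifying assumption that each operator $\mathbf{T}_{j}$ is a precomputed linear map stored as a matrix; in practice these operators are derived from example blendshapes, and all names and sizes here are hypothetical.

```python
import numpy as np

n, d_e = 100, 4           # hypothetical vertex count and number of blendshapes
dim = 3 * n

# Assumed inputs: an identity shape c(w^s) from the shape model, correction
# vectors delta^s and delta_j^e, and one operator T_j per target blendshape.
c_ws = np.random.rand(dim)
delta_s = np.random.rand(dim)
delta_e = np.random.rand(d_e, dim)
T = [np.eye(dim) for _ in range(d_e)]  # placeholder linear operators
w_e = np.array([0.1, 0.6, 0.2, 0.1])   # expression weights w_j^e

# c(w^s, w^e) = sum_j w_j^e * T_j(c(w^s) + delta^s) + delta_j^e
c = sum(w_e[j] * (T[j] @ (c_ws + delta_s)) + delta_e[j] for j in range(d_e))
```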
The nonlinear model uses nonlinear transformations to represent an expression.
Appearance modeling
Color information is often associated with each vertex of the 3D shape. This one-to-one correspondence allows appearance to be represented analogously to the linear shape model:
$$\mathbf{d}(\mathbf{w}^{t}) = \bar{\mathbf{d}} + \mathbf{E}^{t}\mathbf{w}^{t}$$
where $\mathbf{w}^{t}$ is the coefficient vector defined over the basis matrix $\mathbf{E}^{t}$. PCA can again be used to learn the appearance space.
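Since the appearance model mirrors the linear shape model, it can be built with the same PCA recipe, this time over per-vertex colors. A minimal sketch, again with placeholder data and assumed sizes:

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed input: per-vertex RGB textures flattened to 3n-vectors
# (r1, g1, b1, ..., rn, gn, bn), one row per example face.
m, n = 200, 5000
face_textures = np.random.rand(m, 3 * n)

d_t = 50                                    # dimension of the appearance space
pca_t = PCA(n_components=d_t).fit(face_textures)

# d(w^t) = d_bar + E^t w^t, exactly as in the linear shape model.
w_t = np.random.randn(d_t) * np.sqrt(pca_t.explained_variance_)
texture = pca_t.mean_ + pca_t.components_.T @ w_t
rgb = texture.reshape(n, 3).clip(0.0, 1.0)  # per-vertex RGB in [0, 1]
```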
History
Facial recognition can be considered the field that originated the concepts that later converged into the formalization of morphable models. The eigenface approach used in face recognition represented faces in a vector space and used principal component analysis to identify the main modes of variation. However, this method had limitations: it was constrained to fixed poses and illumination and lacked an effective representation of shape differences. As a result, changes in the eigenvectors did not accurately represent shifts in facial structures but caused structures to fade in and out. To address these limitations, researchers added an eigendecomposition of 2D shape variations between faces. The original eigenface approach aligned images based on a single point, while newer methods established correspondences on many points. Landmark-based face warping was introduced by Craw and Cameron (1991), and the first statistical shape model, the Active Shape Model, was proposed by Cootes et al. (1995). This model used shape alone, but the Active Appearance Model by Cootes et al. (1998) combined shape and appearance. Since these 2D methods were effective only for fixed poses and illumination, they were extended by Vetter and Poggio (1997) to handle more diverse settings. Even though separating shape and texture was effective for face representation, handling pose and illumination variations required many separate models. On the other hand, advances in 3D computer graphics showed that simulating pose and illumination variations was straightforward. The combination of graphics methods with face modeling led to the first formulation of 3DMMs by Blanz and Vetter (1999). Their analysis-by-synthesis approach enabled the mapping between the 3D and 2D domains and a new representation of 3D shape and appearance. Their work was the first to introduce a statistical model for faces that enabled 3D reconstruction from 2D images and a parametric face space for controlled manipulation.
In the original definition of Blanz and Vetter, the shape of a face is represented as the vector $S = (X_{1}, Y_{1}, Z_{1}, \ldots, X_{n}, Y_{n}, Z_{n})^{T} \in \mathbb{R}^{3n}$ that contains the 3D coordinates of the $n$ vertices. Similarly, the texture is represented as the vector $T = (R_{1}, G_{1}, B_{1}, \ldots, R_{n}, G_{n}, B_{n})^{T} \in \mathbb{R}^{3n}$ that contains the three RGB color channels associated with each corresponding vertex. Due to the full correspondence between the exemplar 3D faces, new shapes $\mathbf{S}_{model}$ and textures $\mathbf{T}_{model}$ can be defined as a linear combination of the $m$ example faces:
$$\mathbf{S}_{model} = \sum_{i=1}^{m} a_{i}\mathbf{S}_{i} \qquad \mathbf{T}_{model} = \sum_{i=1}^{m} b_{i}\mathbf{T}_{i} \qquad \text{with} \; \sum_{i=1}^{m} a_{i} = \sum_{i=1}^{m} b_{i} = 1$$
Thus, a new face shape and texture is parametrized by the shape coefficients $\mathbf{a} = (a_{1}, a_{2}, \ldots, a_{m})^{T}$ and the texture coefficients $\mathbf{b} = (b_{1}, b_{2}, \ldots, b_{m})^{T}$. To extract the statistics from the dataset, they performed PCA to generate a shape space of dimension $d$ and used a linear model for shape and appearance. In this case, a new model can be generated in the orthogonal basis using the shape and texture eigenvectors $\mathbf{s}_{i}$ and $\mathbf{t}_{i}$, respectively:
$$\mathbf{S}_{model} = \bar{\mathbf{S}} + \sum_{i=1}^{m} a_{i}\mathbf{s}_{i} \qquad \mathbf{T}_{model} = \bar{\mathbf{T}} + \sum_{i=1}^{m} b_{i}\mathbf{t}_{i}$$
where $\bar{\mathbf{S}}$ and $\bar{\mathbf{T}}$ are the mean shape and texture of the dataset.
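To make the original formulation concrete, the sketch below morphs a new face as a convex combination of example shapes and textures, normalizing random coefficients so that they sum to one as the model requires; the data arrays are placeholders for a registered scan database.

```python
import numpy as np

m, n = 100, 5000                       # hypothetical example and vertex counts
S_examples = np.random.rand(m, 3 * n)  # rows: (X1, Y1, Z1, ..., Xn, Yn, Zn)
T_examples = np.random.rand(m, 3 * n)  # rows: (R1, G1, B1, ..., Rn, Gn, Bn)

# Coefficients constrained to sum to 1, as in Blanz and Vetter's definition.
a = np.random.rand(m)
a /= a.sum()                           # shape coefficients a_i
b = np.random.rand(m)
b /= b.sum()                           # texture coefficients b_i

S_model = a @ S_examples               # S_model = sum_i a_i S_i
T_model = b @ T_examples               # T_model = sum_i b_i T_i
```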
Publicly available databases
In the following table, we list the publicly available databases of human faces that can be used for the 3DFMM.
See also
Eigenface
3D modeling
Principal component analysis
Statistical shape analysis
References
External links
"3D Morphable Models". Retrieved 2024-07-08.
"Curated List of 3D Morphable Model Software and Data". GitHub. Retrieved 2024-07-11.
"Tutorial on 3D Morphable Models @ Symposium on Geometry Processing 2022". YouTube. 28 June 2022. Retrieved 2024-07-12.
"What is a Linear 3D Morphable Face Model?". YouTube. 5 April 2020. Retrieved 2024-07-12.