Tensor product model transformation
In mathematics, the tensor product (TP) model transformation was proposed by Baranyi and Yam as a key concept for the higher-order singular value decomposition of functions. It transforms a function (which can be given via closed formulas, neural networks, fuzzy logic, etc.) into TP function form if such a transformation is possible. If an exact transformation is not possible, the method determines a TP function that approximates the given function. Hence, the TP model transformation can provide a trade-off between approximation accuracy and complexity.
A free MATLAB implementation of the TP model transformation can be downloaded at [1], and an older version of the toolbox is available at MATLAB Central [2]. A key underpinning of the transformation is the higher-order singular value decomposition.
Besides being a transformation of functions, the TP model transformation is also a new concept in qLPV-based control, providing a valuable means of bridging identification and polytopic systems theories. The TP model transformation is uniquely effective in manipulating the convex hull of polytopic forms and, as a result, has shown that convex hull manipulation is a necessary and crucial step in achieving optimal solutions and decreasing conservativeness in modern LMI-based control theory. Thus, although it is a transformation in a mathematical sense, it has established a conceptually new direction in control theory and has laid the ground for further new approaches towards optimality. Further details on the control-theoretical aspects of the TP model transformation can be found at TP model transformation in control theory.
The TP model transformation motivated the definition of the "HOSVD canonical form of TP functions". It has been proved that the TP model transformation is capable of numerically reconstructing this HOSVD-based canonical form. Thus, the TP model transformation can be viewed as a numerical method to compute the HOSVD of functions, which provides exact results if the given function has a TP function structure and approximate results otherwise.
The TP model transformation has recently been extended in order to derive various types of convex TP functions and to manipulate them. This feature has led to new optimization approaches in qLPV system analysis and design, as described at TP model transformation in control theory.
Definitions
Finite element TP function
A given function $f(\mathbf{x})$, where $\mathbf{x} \in R^{N}$, is a TP function if it has the structure:
$$f(\mathbf{x}) = \sum_{i_{1}=1}^{I_{1}} \sum_{i_{2}=1}^{I_{2}} \ldots \sum_{i_{N}=1}^{I_{N}} \prod_{n=1}^{N} w_{n,i_{n}}(x_{n}) \, s_{i_{1},i_{2},\ldots,i_{N}},$$
that is, using compact tensor notation (with the tensor product operation $\otimes$):
$$f(\mathbf{x}) = \mathcal{S} \mathop{\otimes}_{n=1}^{N} \mathbf{w}_{n}(x_{n}),$$
where the core tensor $\mathcal{S} \in \mathcal{R}^{I_{1} \times I_{2} \times \ldots \times I_{N}}$ is constructed from the elements $s_{i_{1} i_{2} \ldots i_{N}}$, and the row vector $\mathbf{w}_{n}(x_{n})$, $n = 1 \ldots N$, contains the continuous univariate weighting functions $w_{n,i_{n}}(x_{n})$, $i_{n} = 1 \ldots I_{n}$. The function $w_{n,i_{n}}(x_{n})$ is the $i_{n}$-th weighting function defined on the $n$-th dimension, and $x_{n}$ is the $n$-th element of vector $\mathbf{x}$. "Finite element" means that $I_{n}$ is bounded for all $n$. For qLPV modelling and control applications, a higher structure of TP functions is used, referred to as a TP model.
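To make the definition concrete, the following is a minimal NumPy sketch (an illustrative example, not the TP Tool toolbox) that evaluates a two-variable finite element TP function from a given core tensor. The triangular (piecewise-linear) weighting functions and the grid values are hypothetical choices.

```python
# Illustrative sketch: evaluating a finite element TP function
# f(x) = sum_{i1,i2} w_{1,i1}(x_1) w_{2,i2}(x_2) s_{i1,i2} for N = 2.
import numpy as np

def triangular_weights(x, grid):
    """Hypothetical piecewise-linear weighting functions over a 1-D grid;
    they are non-negative and sum to one at every x."""
    w = np.zeros(len(grid))
    if x <= grid[0]:
        w[0] = 1.0
    elif x >= grid[-1]:
        w[-1] = 1.0
    else:
        j = np.searchsorted(grid, x) - 1              # interval [grid[j], grid[j+1]]
        t = (x - grid[j]) / (grid[j + 1] - grid[j])
        w[j], w[j + 1] = 1.0 - t, t
    return w

def tp_function(x, S, grids):
    """Contract the core tensor S with one weight vector per dimension."""
    out = S
    for xn, grid in zip(x, grids):
        w = triangular_weights(xn, grid)
        out = np.tensordot(w, out, axes=([0], [0]))   # removes one leading mode
    return out                                        # scalar for a TP function

# usage: I1 = I2 = 3 weighting functions per dimension
grids = [np.linspace(-1.0, 1.0, 3), np.linspace(0.0, 2.0, 3)]
S = np.arange(9.0).reshape(3, 3)                      # core tensor, size I1 x I2
print(tp_function([0.3, 1.2], S, grids))
```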
Finite element TP model (TP model for short)
This is a higher structure of TP function:
$$\mathcal{F}(\mathbf{x}) = \mathcal{S} \boxtimes_{n=1}^{N} \mathbf{w}_{n}(x_{n}).$$
Here $\mathcal{Y} = \mathcal{F}(\mathbf{x})$ is a tensor, $\mathcal{Y} \in \mathcal{R}^{L_{1} \times L_{2} \times \ldots \times L_{O}}$, thus the size of the core tensor is $\mathcal{S} \in \mathcal{R}^{I_{1} \times I_{2} \times \ldots \times I_{N} \times L_{1} \times L_{2} \times \ldots \times L_{O}}$. The product operator $\boxtimes$ has the same role as $\otimes$, but it expresses that the tensor product is applied to the $L_{1} \times L_{2} \times \ldots \times L_{O}$-sized tensor elements of the core tensor $\mathcal{S}$. Vector $\mathbf{x}$ is an element of the closed hypercube $\Omega = [a_{1},b_{1}] \times [a_{2},b_{2}] \times \ldots \times [a_{N},b_{N}] \subset R^{N}$.
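As an illustration of the role of $\boxtimes$, the sketch below (hypothetical names and sizes, not the TP Tool toolbox) evaluates a TP model whose core tensor stores $L_{1} \times L_{2}$ matrix-valued elements, e.g. the system matrices of a qLPV model: only the first $N$ modes of the core tensor are contracted with the weight vectors.

```python
# Illustrative sketch: evaluating a TP model F(x) = S boxtimes_n w_n(x_n),
# where the core tensor stores matrix-valued (L1 x L2) elements.
import numpy as np

def tp_model(x, S, weight_fns):
    """Contract only the first N modes of S with the weight vectors;
    the trailing L1 x ... x LO modes are left untouched."""
    out = S
    for xn, wfn in zip(x, weight_fns):
        w = wfn(xn)                                   # row vector w_n(x_n)
        out = np.tensordot(w, out, axes=([0], [0]))   # removes one leading mode
    return out                                        # shape (L1, ..., LO)

# usage: N = 2 parameters, I1 = I2 = 2 weighting functions, 2 x 3 elements
S = np.random.rand(2, 2, 2, 3)                        # size I1 x I2 x L1 x L2
w1 = lambda x1: np.array([1.0 - x1, x1])              # hypothetical convex weights on [0, 1]
w2 = lambda x2: np.array([1.0 - x2, x2])
print(tp_model([0.25, 0.7], S, [w1, w2]).shape)       # -> (2, 3)
```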
Finite element convex TP function or model
A TP function or model is convex if the weighting functions satisfy:
$$\forall n: \sum_{i_{n}=1}^{I_{n}} w_{n,i_{n}}(x_{n}) = 1 \quad \text{and} \quad w_{n,i_{n}}(x_{n}) \in [0,1].$$
This means that $f(\mathbf{x})$ lies inside the convex hull defined by the core tensor for all $\mathbf{x} \in \Omega$.
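A quick numerical check of these two conditions on sampled weighting functions might look as follows (a hypothetical helper for illustration, not part of the TP Tool toolbox).

```python
# Illustrative check: sampled weighting functions define a convex TP
# function/model if every row sums to one and all values lie in [0, 1].
import numpy as np

def is_convex_weighting(W, tol=1e-9):
    """W has shape (M, I_n): M samples of x_n, I_n weighting functions."""
    sums_to_one = np.allclose(W.sum(axis=1), 1.0, atol=tol)
    in_unit_interval = np.all((W >= -tol) & (W <= 1.0 + tol))
    return sums_to_one and in_unit_interval

# usage: piecewise-linear weights sampled on a grid satisfy both conditions
xs = np.linspace(0.0, 1.0, 101)
W = np.column_stack([1.0 - xs, xs])
print(is_convex_weighting(W))                         # True
```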
TP model transformation
Assume a given TP model $\mathcal{Y} = \mathcal{F}(\mathbf{x})$, where $\mathbf{x} \in \Omega \subset R^{N}$, whose TP structure may be unknown (e.g. it is given by neural networks). The TP model transformation determines its TP structure as
$$\mathcal{F}(\mathbf{x}) = \mathcal{S} \boxtimes_{n=1}^{N} \mathbf{w}_{n}(x_{n}),$$
namely, it generates the core tensor $\mathcal{S}$ and the weighting functions $\mathbf{w}_{n}(x_{n})$ for all $n = 1 \ldots N$. Its free MATLAB implementation is downloadable at [3] or at MATLAB Central [4].
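The numerical core of the transformation is to sample the given function over a hyper-rectangular grid in $\Omega$ and to apply the HOSVD to the sampled tensor; the columns of the mode-wise singular-vector matrices then act as discretized weighting functions. The sketch below shows this idea for a scalar function of two variables (the test function, grids and tolerance are illustrative assumptions; the TP Tool toolbox additionally handles tensor-valued qLPV models and convex hull manipulation).

```python
# Minimal sketch of the core step: sampling + (compact) HOSVD.
import numpy as np

def tp_transform_2d(f, x1_grid, x2_grid, tol=1e-10):
    """Sample f over a rectangular grid and factor the sampled matrix as
    F ~= U1 @ S @ U2.T, where the columns of U1, U2 are discretized
    weighting functions and S is the core tensor."""
    F = np.array([[f(a, b) for b in x2_grid] for a in x1_grid])
    U1, s1, _ = np.linalg.svd(F, full_matrices=False)     # mode-1 unfolding
    U2, s2, _ = np.linalg.svd(F.T, full_matrices=False)   # mode-2 unfolding
    r1 = int(np.sum(s1 > tol * s1[0]))                    # keep nonzero singular values
    r2 = int(np.sum(s2 > tol * s2[0]))
    U1, U2 = U1[:, :r1], U2[:, :r2]
    S = U1.T @ F @ U2                                     # core tensor
    return S, U1, U2

# usage: f(x1, x2) = cos(x1) * x2 + sin(x1) is a TP function (rank 2 per
# dimension), so the transformation reconstructs it exactly (up to rounding)
f = lambda x1, x2: np.cos(x1) * x2 + np.sin(x1)
x1g, x2g = np.linspace(0.0, np.pi, 50), np.linspace(-1.0, 1.0, 40)
S, U1, U2 = tp_transform_2d(f, x1g, x2g)
print(S.shape)                                            # (2, 2) for this f
F_rebuilt = U1 @ S @ U2.T
print(np.allclose(F_rebuilt, [[f(a, b) for b in x2g] for a in x1g]))  # True
```

Continuous weighting functions can then be obtained by interpolating the columns of U1 and U2 over the sampling grids; in the full method, additional steps transform these into the various convex forms mentioned below.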
If the given $\mathcal{F}(\mathbf{x})$ does not have a TP structure (i.e. it is not in the class of TP models), then the TP model transformation determines its approximation:
$$\mathcal{F}(\mathbf{x}) \approx \mathcal{S} \boxtimes_{n=1}^{N} \mathbf{w}_{n}(x_{n}),$$
where the TP model transformation offers a trade-off between complexity (the number of components in the core tensor, or the number of weighting functions) and approximation accuracy. The TP model can be generated according to various constraints. Typical TP models generated by the TP model transformation are:
HOSVD canonical form of TP functions or TP models (qLPV models);
Various kinds of TP-type polytopic forms or convex TP model forms (this advantage is used in qLPV system analysis and design).
Properties of the TP model transformation
It is a non-heuristic and tractable numerical method, first proposed in control theory.
It transforms the given function into a finite element TP structure. If such a structure does not exist, the transformation gives an approximation, subject to a constraint on the number of elements.
It can be executed uniformly, irrespective of whether the model is given in the form of analytical equations resulting from physical considerations, as an outcome of soft computing based identification techniques (such as neural networks or fuzzy logic based methods), or as a result of black-box identification, without analytical interaction and within a reasonable amount of time. Thus, the transformation replaces analytical and, in many cases, complex and non-obvious conversions with numerical, tractable, straightforward operations.
It generates the HOSVD-based canonical form of TP functions, which is a unique representation. It was proven by Szeidl that the TP model transformation numerically reconstructs the HOSVD of functions. This form extracts the unique structure of a given TP function in the same sense as the HOSVD does for tensors and matrices, in a way such that:
the number of weighting functions is minimized per dimension (and hence the size of the core tensor);
the weighting functions are univariate functions of the elements of the parameter vector and form an orthonormal system for each parameter (singular functions);
the sub-tensors of the core tensor are also in orthogonal position with respect to each other;
the core tensor and the weighting functions are ordered according to the higher-order singular values of the parameter vector;
it has a unique form (except for some special cases, such as when there are equal singular values);
it introduces and defines the rank of the TP function by the dimensions of the parameter vector.
The above points can be extended to TP models (qLPV models), i.e. to determine the HOSVD-based canonical form of a qLPV model and to order the main components of the qLPV model. Since the core tensor is $(N+O)$-dimensional, but the weighting functions are determined only for dimensions $n = 1 \ldots N$ (namely, the core tensor is constructed from $O$-dimensional elements), the resulting TP form is not unique.
The core step of the TP model transformation was extended to generate different types of convex TP functions or TP models (TP-type polytopic qLPV models), in order to focus on the systematic (numerical and automatic) modification of the convex hull instead of developing new LMI equations for feasible controller design (which is the widely adopted approach). It is worth noting that the TP model transformation and LMI-based control design methods can be executed numerically one after the other, which makes the resolution of a wide class of problems possible in a straightforward, tractable and numerical way.
The TP model transformation is capable of performing a trade-off between the complexity and the accuracy of TP functions by discarding the higher-order singular values, in the same manner as the tensor HOSVD is used for complexity reduction.
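A small numerical illustration of this trade-off (with a hypothetical test function, not taken from the TP Tool toolbox): keeping only the $r$ largest singular values per dimension reduces the number of weighting functions, and the approximation error decreases as more singular values are retained.

```python
# Illustrative sketch of the complexity/accuracy trade-off via truncation.
import numpy as np

f = lambda x1, x2: np.exp(-x1 * x2) * np.sin(3.0 * x1 + x2)   # not a (finite) TP function
x1g, x2g = np.linspace(0.0, 1.0, 60), np.linspace(0.0, 1.0, 60)
F = np.array([[f(a, b) for b in x2g] for a in x1g])

U, s, Vt = np.linalg.svd(F, full_matrices=False)
for r in (1, 2, 3, 5):
    F_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]               # keep r weighting functions
    print(f"rank {r}: max abs error {np.max(np.abs(F - F_r)):.2e}")
```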
References
Baranyi, P. (2018). Extension of the Multi-TP Model Transformation to Functions with Different Numbers of Variables. Complexity, 2018.
External links
TPtoolBoxMATLAB