- Source: Siamese neural network
A Siamese neural network (sometimes called a twin neural network) is an artificial neural network that uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. Often one of the output vectors is precomputed, thus forming a baseline against which the other output vector is compared. This is similar to comparing fingerprints but can be described more technically as a distance function for locality-sensitive hashing.
It is possible to build an architecture that is functionally similar to a twin network but implements a slightly different function. This is typically used for comparing similar instances that come from different types of sets.
Typical uses of such similarity measures include recognizing handwritten checks, automatically detecting faces in camera images, and matching queries with indexed documents. Perhaps the best-known application of twin networks is face recognition, where known images of people are precomputed and compared to an image from a turnstile or similar source. It is not obvious at first, but there are two slightly different problems. One is face recognition: identifying a person among a large number of other people; DeepFace is an example of such a system, and in its most extreme form this means picking out a single person at a train station or airport. The other is face verification: confirming that the photo in a passport matches the person claiming to be its holder. The twin network may be the same in both cases, but the implementation can be quite different.
Learning
Learning in twin networks can be done with triplet loss or contrastive loss. For learning by triplet loss, a baseline vector (the anchor image) is compared against a positive vector (a truthy image) and a negative vector (a falsy image). The negative vector forces learning in the network, while the positive vector acts like a regularizer. For learning by contrastive loss there must be a weight decay to regularize the weights, or some similar operation such as a normalization.
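As an illustration, here is a minimal sketch of both losses; PyTorch is an assumed framework choice, and the margin value, encoder architecture, and tensor sizes are placeholders rather than part of the original description.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: pull the anchor toward the positive, push it away from the negative."""
    d_pos = F.pairwise_distance(anchor, positive)   # distance anchor <-> positive
    d_neg = F.pairwise_distance(anchor, negative)   # distance anchor <-> negative
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

def contrastive_loss(out1, out2, same_label, margin=1.0):
    """Contrastive loss: small distance for matching pairs, at least `margin` otherwise."""
    d = F.pairwise_distance(out1, out2)
    loss_same = same_label * d.pow(2)
    loss_diff = (1 - same_label) * torch.clamp(margin - d, min=0.0).pow(2)
    return (loss_same + loss_diff).mean()

# Example usage with a shared (twin) encoder applied to both inputs:
encoder = torch.nn.Sequential(torch.nn.Linear(128, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
x1, x2 = torch.randn(4, 128), torch.randn(4, 128)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = same identity, 0 = different
loss = contrastive_loss(encoder(x1), encoder(x2), labels)
```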
A distance metric for a loss function may have the following properties:
Non-negativity:
δ
(
x
,
y
)
≥
0
{\displaystyle \delta (x,y)\geq 0}
Identity of indiscernibles: $\delta(x, y) = 0 \iff x = y$
Symmetry: $\delta(x, y) = \delta(y, x)$
Triangle inequality: $\delta(x, z) \leq \delta(x, y) + \delta(y, z)$
In particular, the triplet loss algorithm is often defined with the squared Euclidean distance at its core, which, unlike the Euclidean distance, does not satisfy the triangle inequality.
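A one-dimensional worked example shows the failure: for the points $0$, $1$, and $2$ with $\delta(x, y) = (x - y)^{2}$,

$$\delta(0, 2) = 4 > \delta(0, 1) + \delta(1, 2) = 1 + 1 = 2.$$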
= Predefined metrics, Euclidean distance metric =
The common learning goal is to minimize a distance metric for similar objects and to maximize it for distinct ones. This gives a loss function like
$$\delta\left(x^{(i)}, x^{(j)}\right) = \begin{cases} \min \left\| f\left(x^{(i)}\right) - f\left(x^{(j)}\right) \right\|, & i = j \\ \max \left\| f\left(x^{(i)}\right) - f\left(x^{(j)}\right) \right\|, & i \neq j \end{cases}$$
where $i, j$ are indices into a set of vectors and $f(\cdot)$ is the function implemented by the twin network.
The most commonly used distance metric is the Euclidean distance, in which case the loss function can be rewritten in matrix form as
$$\delta\left(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}\right) \approx \left(\mathbf{x}^{(i)} - \mathbf{x}^{(j)}\right)^{T} \left(\mathbf{x}^{(i)} - \mathbf{x}^{(j)}\right)$$
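For a concrete check, the quadratic form above is simply the squared Euclidean distance between the two vectors being compared (typically the twin network's outputs); a minimal sketch, assuming PyTorch:

```python
import torch

def squared_euclidean(xi, xj):
    """Quadratic form (x_i - x_j)^T (x_i - x_j), i.e. the squared Euclidean distance."""
    d = xi - xj
    return d @ d  # dot product of the difference with itself

xi, xj = torch.randn(16), torch.randn(16)
assert torch.allclose(squared_euclidean(xi, xj), torch.norm(xi - xj) ** 2)
```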
= Learned metrics, nonlinear distance metric =
A more general case is one in which the output vectors from the twin network are passed through additional network layers that implement a non-linear distance metric.
$$\begin{aligned} \text{if } i = j \text{ then } &\ \delta\left[f\left(x^{(i)}\right),\, f\left(x^{(j)}\right)\right] \text{ is small} \\ \text{otherwise } &\ \delta\left[f\left(x^{(i)}\right),\, f\left(x^{(j)}\right)\right] \text{ is large} \end{aligned}$$
where $i, j$ are indices into a set of vectors, $f(\cdot)$ is the function implemented by the twin network, and $\delta(\cdot)$ is the function implemented by the network joining the outputs of the twin network.
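One way to realize such a learned, non-linear $\delta$ is a small feed-forward head on top of the twin embeddings. The sketch below assumes PyTorch; the layer sizes and the choice of feeding the head the element-wise absolute difference are illustrative assumptions, not prescribed by the source.

```python
import torch
import torch.nn as nn

class TwinWithLearnedMetric(nn.Module):
    def __init__(self, in_dim=128, emb_dim=32):
        super().__init__()
        # f(.): shared (twin) encoder applied to both inputs
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
        # delta(.): learned non-linear "distance" head joining the two embeddings
        self.head = nn.Sequential(nn.Linear(emb_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x_i, x_j):
        f_i, f_j = self.encoder(x_i), self.encoder(x_j)
        # Feed the element-wise absolute difference to the head (one common choice).
        return self.head(torch.abs(f_i - f_j)).squeeze(-1)

model = TwinWithLearnedMetric()
score = model(torch.randn(4, 128), torch.randn(4, 128))  # one "distance" score per pair
```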
In matrix form, the learned metric is often approximated as a Mahalanobis distance over a linear space:
$$\delta\left(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}\right) \approx \left(\mathbf{x}^{(i)} - \mathbf{x}^{(j)}\right)^{T} \mathbf{M} \left(\mathbf{x}^{(i)} - \mathbf{x}^{(j)}\right)$$
This can be further subdivided into at least unsupervised learning and supervised learning.
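As an illustration of the Mahalanobis form, $\mathbf{M}$ can be learned through a factor $\mathbf{L}$ with $\mathbf{M} = \mathbf{L}^{T}\mathbf{L}$, which keeps it positive semi-definite; a minimal sketch, assuming PyTorch (the factorization choice and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class MahalanobisMetric(nn.Module):
    """delta(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j) with M = L^T L (positive semi-definite)."""
    def __init__(self, dim=16):
        super().__init__()
        self.L = nn.Parameter(torch.eye(dim))  # learned factor of M

    def forward(self, x_i, x_j):
        d = x_i - x_j                  # (batch, dim)
        Ld = d @ self.L.t()            # each row becomes L applied to that difference
        return (Ld * Ld).sum(dim=-1)   # (L d)^T (L d) = d^T L^T L d = d^T M d

metric = MahalanobisMetric(dim=16)
dist = metric(torch.randn(8, 16), torch.randn(8, 16))  # one distance per pair
```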
= Learned metrics, half-twin networks =
This form also allows the twin network to be more of a half-twin, implementing slightly different functions:
$$\begin{aligned} \text{if } i = j \text{ then } &\ \delta\left[f\left(x^{(i)}\right),\, g\left(x^{(j)}\right)\right] \text{ is small} \\ \text{otherwise } &\ \delta\left[f\left(x^{(i)}\right),\, g\left(x^{(j)}\right)\right] \text{ is large} \end{aligned}$$
where $i, j$ are indices into a set of vectors, $f(\cdot), g(\cdot)$ are the functions implemented by the half-twin network, and $\delta(\cdot)$ is the function implemented by the network joining their outputs.
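A half-twin can be sketched as two different encoders $f$ and $g$ that map inputs of different types into a shared embedding space, joined by a simple distance. PyTorch, the encoder architectures, and the input dimensions below are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HalfTwin(nn.Module):
    """Half-twin: two different encoders f and g mapping into a shared embedding space."""
    def __init__(self, dim_a=128, dim_b=300, emb_dim=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim_a, 64), nn.ReLU(), nn.Linear(64, emb_dim))  # e.g. image features
        self.g = nn.Sequential(nn.Linear(dim_b, 64), nn.ReLU(), nn.Linear(64, emb_dim))  # e.g. text features

    def forward(self, a, b):
        # delta: Euclidean distance between the two (differently computed) embeddings
        return F.pairwise_distance(self.f(a), self.g(b))

model = HalfTwin()
dist = model(torch.randn(4, 128), torch.randn(4, 300))  # small for matching pairs after training
```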
Twin networks for object tracking
Twin networks have been used in object tracking because of their two tandem inputs and built-in similarity measurement. In object tracking, one input of the twin network is a user-preselected exemplar image, and the other input is a larger search image; the twin network's job is to locate the exemplar inside the search image. By measuring the similarity between the exemplar and each part of the search image, the twin network can produce a map of similarity scores. Furthermore, using a fully convolutional network, the process of computing each sector's similarity score can be replaced with a single cross-correlation layer.
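The cross-correlation step can be sketched by using the exemplar's embedding as a convolution kernel slid over the search image's embedding, in the spirit of the fully convolutional approach described above; PyTorch, the backbone, and the tensor sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared fully convolutional backbone applied to both exemplar and search image.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
)

exemplar = torch.randn(1, 3, 64, 64)    # small template selected by the user
search = torch.randn(1, 3, 128, 128)    # larger search region

z = backbone(exemplar)   # exemplar embedding, shape (1, 32, hz, wz)
x = backbone(search)     # search embedding,   shape (1, 32, hx, wx)

# Cross-correlation: use the exemplar embedding as the convolution kernel.
score_map = F.conv2d(x, weight=z)       # shape (1, 1, hx - hz + 1, wx - wz + 1)
print(score_map.shape)                   # the peak indicates the exemplar's likely position
```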
Since its introduction in 2016, the twin fully convolutional network has been used in many high-performance, real-time object-tracking neural networks, such as CFnet, StructSiam, SiamFC-tri, DSiam, SA-Siam, SiamRPN, DaSiamRPN, Cascaded SiamRPN, SiamMask, SiamRPN++, and Deeper and Wider SiamRPN.
See also
Artificial neural network
Triplet loss
Further reading
Chicco, Davide (2020), "Siamese neural networks: an overview", Artificial Neural Networks, Methods in Molecular Biology, vol. 2190 (3rd ed.), New York City, New York, USA: Springer Protocols, Humana Press, pp. 73–94, doi:10.1007/978-1-0716-0826-5_3, ISBN 978-1-0716-0826-5, PMID 32804361, S2CID 221144012