- Source: Locality-sensitive hashing
In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. (The number of buckets is much smaller than the universe of possible input items.) Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).
Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion.
Definitions
A finite family $\mathcal{F}$ of functions $h\colon M\to S$ is defined to be an LSH family for a metric space $\mathcal{M}=(M,d)$, a threshold $r>0$, an approximation factor $c>1$, and probabilities $p_{1}>p_{2}$ if it satisfies the following condition. For any two points $a,b\in M$ and a hash function $h$ chosen uniformly at random from $\mathcal{F}$:
If $d(a,b)\leq r$, then $h(a)=h(b)$ (i.e., $a$ and $b$ collide) with probability at least $p_{1}$;
If $d(a,b)\geq cr$, then $h(a)=h(b)$ with probability at most $p_{2}$.
Such a family $\mathcal{F}$ is called $(r,cr,p_{1},p_{2})$-sensitive.
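To make the definition concrete, here is a minimal Python sketch (not part of the original definition; `sample_h` is a hypothetical callable that draws a hash function from the family) of how one could empirically estimate the collision probability that the thresholds $p_{1}$ and $p_{2}$ bound:

```python
import random

def estimate_collision_probability(sample_h, a, b, trials=10_000):
    """Estimate Pr[h(a) = h(b)] for h drawn uniformly from the family.

    sample_h is assumed to return a freshly drawn hash function each call.
    """
    collisions = 0
    for _ in range(trials):
        h = sample_h()
        if h(a) == h(b):
            collisions += 1
    return collisions / trials

# Toy family: project a bit vector onto one random coordinate.
d = 8
sample_h = lambda: (lambda x, i=random.randrange(d): x[i])
# Hamming distance 1 out of 8 bits, so the estimate is close to 7/8.
print(estimate_collision_probability(sample_h, [0] * 8, [0] * 7 + [1]))
```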
LSH with respect to a similarity measure
Alternatively it is possible to define an LSH family on a universe of items U endowed with a similarity function $\phi\colon U\times U\to[0,1]$. In this setting, an LSH scheme is a family of hash functions H coupled with a probability distribution D over H such that a function $h\in H$ chosen according to D satisfies $\Pr[h(a)=h(b)]=\phi(a,b)$ for each $a,b\in U$.
Amplification
Given a $(d_{1},d_{2},p_{1},p_{2})$-sensitive family $\mathcal{F}$, we can construct new families $\mathcal{G}$ by either the AND-construction or OR-construction of $\mathcal{F}$.
To create an AND-construction, we define a new family $\mathcal{G}$ of hash functions g, where each function g is constructed from k random functions $h_{1},\ldots,h_{k}$ from $\mathcal{F}$. We then say that for a hash function $g\in\mathcal{G}$, $g(x)=g(y)$ if and only if $h_{i}(x)=h_{i}(y)$ for all $i=1,2,\ldots,k$. Since the members of $\mathcal{F}$ are independently chosen for any $g\in\mathcal{G}$, $\mathcal{G}$ is a $(d_{1},d_{2},p_{1}^{k},p_{2}^{k})$-sensitive family.
To create an OR-construction, we define a new family $\mathcal{G}$ of hash functions g, where each function g is constructed from k random functions $h_{1},\ldots,h_{k}$ from $\mathcal{F}$. We then say that for a hash function $g\in\mathcal{G}$, $g(x)=g(y)$ if and only if $h_{i}(x)=h_{i}(y)$ for one or more values of i. Since the members of $\mathcal{F}$ are independently chosen for any $g\in\mathcal{G}$, $\mathcal{G}$ is a $(d_{1},d_{2},1-(1-p_{1})^{k},1-(1-p_{2})^{k})$-sensitive family.
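As a brief illustration, here is a minimal Python sketch (not from the source; function names are illustrative) of how the AND- and OR-constructions combine k base functions and how the collision probabilities transform:

```python
def and_construction(hs):
    """g(x) is the tuple of all h_i(x); g(x) == g(y) iff every h_i agrees."""
    return lambda x: tuple(h(x) for h in hs)

def or_collision(hs, x, y):
    """OR-construction collision: at least one h_i(x) == h_i(y)."""
    return any(h(x) == h(y) for h in hs)

def and_probability(p, k):
    """p1 -> p1^k and p2 -> p2^k under the AND-construction."""
    return p ** k

def or_probability(p, k):
    """p1 -> 1 - (1 - p1)^k and p2 -> 1 - (1 - p2)^k under the OR-construction."""
    return 1 - (1 - p) ** k
```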
Applications
LSH has been applied to several problem domains, including:
Near-duplicate detection
Hierarchical clustering
Genome-wide association study
Image similarity identification
VisualRank
Gene expression similarity identification
Audio similarity identification
Nearest neighbor search
Audio fingerprint
Digital video fingerprinting
Shared memory organization in parallel computing
Physical data organization in database management systems
Training fully connected neural networks
Computer security
Machine learning
Methods
Bit sampling for Hamming distance
One of the easiest ways to construct an LSH family is by bit sampling. This approach works for the Hamming distance over d-dimensional vectors $\{0,1\}^{d}$. Here, the family $\mathcal{F}$ of hash functions is simply the family of all the projections of points on one of the $d$ coordinates, i.e., $\mathcal{F}=\{h\colon\{0,1\}^{d}\to\{0,1\}\mid h(x)=x_{i}\text{ for some }i\in\{1,\ldots,d\}\}$, where $x_{i}$ is the $i$th coordinate of $x$. A random function $h$ from $\mathcal{F}$ simply selects a random bit from the input point. This family has the following parameters: $P_{1}=1-R/d$, $P_{2}=1-cR/d$.
That is, any two vectors $x,y$ with Hamming distance at most $R$ collide under a random $h$ with probability at least $P_{1}$, and any $x,y$ with Hamming distance at least $cR$ collide with probability at most $P_{2}$.
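A minimal Python sketch of bit sampling (illustrative names, not from the source), where a random hash function is simply a projection onto one coordinate:

```python
import random

def sample_bit_hash(d):
    """Draw h from F: project a d-bit vector onto one random coordinate."""
    i = random.randrange(d)
    return lambda x: x[i]

# Two 8-bit vectors at Hamming distance 2 collide with probability 1 - 2/8.
x = [0, 1, 1, 0, 1, 0, 0, 1]
y = [0, 1, 0, 0, 1, 0, 1, 1]
h = sample_bit_hash(len(x))
print(h(x) == h(y))   # True with probability 0.75
```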
Min-wise independent permutations
Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If π is a permutation on the indices of S, for $A\subseteq S$ let $h(A)=\min_{a\in A}\{\pi(a)\}$. Each possible choice of π defines a single hash function h mapping input sets to elements of S.
Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets $A,B\subseteq S$ the event that $h(A)=h(B)$ corresponds exactly to the event that the minimizer of π over $A\cup B$ lies inside $A\cap B$. As h was chosen uniformly at random, $\Pr[h(A)=h(B)]=J(A,B)$ and $(H,D)$ define an LSH scheme for the Jaccard index.
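A minimal MinHash-style sketch in Python (illustrative, assuming a small ground set of hashable items), drawing a random permutation of S and hashing a set to the minimum rank of its elements:

```python
import random

def sample_minhash(ground_set):
    """Draw h: rank items by a uniformly random permutation pi and map a
    set A to the minimum rank among its elements."""
    items = list(ground_set)
    random.shuffle(items)
    pi = {item: rank for rank, item in enumerate(items)}
    return lambda A: min(pi[a] for a in A)

# Pr[h(A) == h(B)] equals the Jaccard index J(A, B) = 2/6 for these sets.
S = set(range(100))
A, B = {1, 2, 3, 4}, {3, 4, 5, 6}
h = sample_minhash(S)
print(h(A) == h(B))
```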
Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" — a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen π. It has been established that a min-wise independent family of permutations is at least of size
lcm
{
1
,
2
,
…
,
n
}
≥
e
n
−
o
(
n
)
{\displaystyle \operatorname {lcm} \{\,1,2,\ldots ,n\,\}\geq e^{n-o(n)}}
, and that this bound is tight.
Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families.
Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k.
Approximate min-wise independence differs from exact min-wise independence by at most a fixed ε.
Open source methods
Nilsimsa Hash
Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts. The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper introducing it suggests that Nilsimsa satisfies three requirements:
The digest identifying each message should not vary significantly for changes that can be produced automatically.
The encoding must be robust against intentional attacks.
The encoding should support an extremely low risk of false positives.
Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.
TLSH
TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications. The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar.
An implementation of TLSH is available as open-source software.
Random projection
The random projection method of LSH due to Moses Charikar, called SimHash (also sometimes called arccos), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-complete max-cut problem.
The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors.
Given an input vector v and a hyperplane defined by r, we let $h(v)=\operatorname{sgn}(v\cdot r)$. That is, $h(v)=\pm 1$ depending on which side of the hyperplane v lies. This way, each possible choice of a random hyperplane r can be interpreted as a hash function $h(v)$.
For two vectors u, v with angle $\theta(u,v)$ between them, it can be shown that
$$\Pr[h(u)=h(v)]=1-\frac{\theta(u,v)}{\pi}.$$
Since the ratio between $\frac{\theta(u,v)}{\pi}$ and $1-\cos(\theta(u,v))$ is at least 0.87856 when $\theta(u,v)\in[0,\pi]$, the probability of two vectors being on the same side of the random hyperplane is approximately proportional to the cosine distance between them.
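A short Python sketch of the random-projection (SimHash-style) hash (illustrative; a Gaussian vector gives a uniformly random hyperplane direction):

```python
import random

def sample_hyperplane_hash(d):
    """Draw h: sign of the dot product with a random normal vector r."""
    r = [random.gauss(0.0, 1.0) for _ in range(d)]
    return lambda v: 1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else -1

# For u, v at a 45-degree angle, Pr[h(u) == h(v)] = 1 - (pi/4)/pi = 0.75.
u = [1.0, 0.0, 0.0]
v = [1.0, 1.0, 0.0]
h = sample_hyperplane_hash(3)
print(h(u) == h(v))
```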
Stable distributions
The hash function $h_{\mathbf{a},b}({\boldsymbol{\upsilon}})\colon\mathcal{R}^{d}\to\mathcal{N}$ maps a d-dimensional vector $\boldsymbol{\upsilon}$ onto the set of integers. Each hash function in the family is indexed by a choice of random $\mathbf{a}$ and $b$, where $\mathbf{a}$ is a d-dimensional vector with entries chosen independently from a stable distribution and $b$ is a real number chosen uniformly from the range [0, r]. For a fixed $\mathbf{a},b$ the hash function $h_{\mathbf{a},b}$ is given by
$$h_{\mathbf{a},b}({\boldsymbol{\upsilon}})=\left\lfloor\frac{\mathbf{a}\cdot{\boldsymbol{\upsilon}}+b}{r}\right\rfloor.$$
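A minimal Python sketch of this hash (illustrative; the standard Gaussian is used here as the 2-stable distribution, which corresponds to Euclidean distance):

```python
import math
import random

def sample_stable_hash(d, r):
    """Draw h_{a,b}: a has i.i.d. standard Gaussian (2-stable) entries and
    b is uniform in [0, r]; h(v) = floor((a . v + b) / r)."""
    a = [random.gauss(0.0, 1.0) for _ in range(d)]
    b = random.uniform(0.0, r)
    return lambda v: math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / r)

h = sample_stable_hash(d=4, r=2.0)
print(h([0.5, 1.0, -0.2, 3.0]))   # nearby vectors tend to share this integer
```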
Other construction methods for hash functions have been proposed to better fit the data.
In particular k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.
Semantic hashing
Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher semantic similarity. The hashcodes are found via training of an artificial neural network or graphical model.
Algorithm for nearest neighbor search
One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family $\mathcal{F}$. The algorithm has two main parameters: the width parameter k and the number of hash tables L.
In the first step, we define a new family $\mathcal{G}$ of hash functions g, where each function g is obtained by concatenating k functions $h_{1},\ldots,h_{k}$ from $\mathcal{F}$, i.e., $g(p)=[h_{1}(p),\ldots,h_{k}(p)]$. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from $\mathcal{F}$. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g.
In the preprocessing step we hash all n d-dimensional points from the data set S into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per hash table to $O(n)$ using standard hash functions.
Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance cR from q is found.
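The following Python sketch (illustrative, not the canonical implementation; `sample_h` is assumed to draw a function from an $(R, cR, P_1, P_2)$-sensitive family) shows the table construction and query loop described above:

```python
from collections import defaultdict

class LSHIndex:
    """Minimal sketch of the k/L nearest-neighbor scheme described above."""

    def __init__(self, sample_h, k, L):
        # L concatenated hash functions g, each built from k base hashes.
        self.gs = [[sample_h() for _ in range(k)] for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _key(self, g, p):
        return tuple(h(p) for h in g)

    def index(self, points):
        for p in points:
            for g, table in zip(self.gs, self.tables):
                table[self._key(g, p)].append(p)

    def query(self, q, dist, cR):
        for g, table in zip(self.gs, self.tables):
            for p in table.get(self._key(g, q), []):
                if dist(p, q) <= cR:
                    return p          # stop at the first cR-near point found
        return None
```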
Given the parameters k and L, the algorithm has the following performance guarantees:
preprocessing time: $O(nLkt)$, where t is the time to evaluate a function $h\in\mathcal{F}$ on an input point p;
space: $O(nL)$, plus the space for storing data points;
query time: $O(L(kt+dnP_{2}^{k}))$;
the algorithm succeeds in finding a point within distance cR from q (if there exists a point within distance R) with probability at least $1-(1-P_{1}^{k})^{L}$.
For a fixed approximation ratio $c=1+\epsilon$ and probabilities $P_{1}$ and $P_{2}$, one can set $k=\left\lceil\tfrac{\log n}{\log 1/P_{2}}\right\rceil$ and $L=\lceil P_{1}^{-k}\rceil=O(n^{\rho}P_{1}^{-1})$, where $\rho=\tfrac{\log P_{1}}{\log P_{2}}$. Then one obtains the following performance guarantees:
preprocessing time: $O(n^{1+\rho}P_{1}^{-1}kt)$;
space: $O(n^{1+\rho}P_{1}^{-1})$, plus the space for storing data points;
query time: $O(n^{\rho}P_{1}^{-1}(kt+d))$.
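As a small Python illustration of this parameter choice (the numbers are hypothetical; P1 and P2 here come from bit sampling with d = 100, R = 5, c = 2):

```python
import math

def lsh_parameters(n, P1, P2):
    """k = ceil(log n / log(1/P2)), L = ceil(P1^-k), rho = log P1 / log P2."""
    k = math.ceil(math.log(n) / math.log(1.0 / P2))
    L = math.ceil(P1 ** (-k))
    rho = math.log(P1) / math.log(P2)
    return k, L, rho

# Bit sampling with d = 100, R = 5, c = 2 gives P1 = 0.95 and P2 = 0.90.
print(lsh_parameters(n=1_000_000, P1=0.95, P2=0.90))   # roughly (132, 872, 0.49)
```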
Improvements
When t is large, it is possible to reduce the hashing time from $O(n^{\rho})$. Subsequent work showed how to achieve:
query time: $O(t\log^{2}(1/P_{2})/P_{1}+n^{\rho}(d+1/P_{1}))$;
space: $O(n^{1+\rho}/P_{1}+\log^{2}(1/P_{2})/P_{1})$.
It is also sometimes the case that the factor $1/P_{1}$ can be very large. This happens for example with Jaccard similarity data, where even the most similar neighbor often has a quite low Jaccard similarity with the query. It has also been shown how to reduce the query time to $O(n^{\rho}/P_{1}^{1-\rho})$ (not including hashing costs) and similarly the space usage.
See also
Bloom filter – Data structure for approximate set membership
Curse of dimensionality – Difficulties arising when analyzing data with many aspects ("dimensions")
Feature hashing – Vectorizing features using a hash function
Fourier-related transforms
Geohash – Public domain geocoding invented in 2008
Multilinear subspace learning – Approach to dimensionality reduction
Principal component analysis – Method of data analysis
Random indexing
Rolling hash – Type of hash function
Singular value decomposition – Matrix decomposition
Sparse distributed memory – Mathematical model of memory
Wavelet compression – Mathematical technique used in data compression and analysis
References
Further reading
Samet, H. (2006) Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann. ISBN 0-12-369446-9
Indyk, Piotr; Motwani, Rajeev; Raghavan, Prabhakar; Vempala, Santosh (1997). "Locality-preserving hashing in multidimensional spaces". Proceedings of the twenty-ninth annual ACM symposium on Theory of computing. STOC '97. pp. 618–625. CiteSeerX 10.1.1.50.4927. doi:10.1145/258533.258656. ISBN 978-0-89791-888-6. S2CID 15693787.
Chin, Andrew (1994). "Locality-preserving hash functions for general purpose parallel computation" (PDF). Algorithmica. 12 (2–3): 170–181. doi:10.1007/BF01185209. S2CID 18108051.
External links
Alex Andoni's LSH homepage
LSHKIT: A C++ Locality Sensitive Hashing Library
A Python Locality Sensitive Hashing library that optionally supports persistence via redis
Caltech Large Scale Image Search Toolbox: a Matlab toolbox implementing several LSH hash functions, in addition to Kd-Trees, Hierarchical K-Means, and Inverted File search algorithms.
Slash: A C++ LSH library, implementing Spherical LSH by Terasawa, K., Tanaka, Y
LSHBOX: An Open Source C++ Toolbox of Locality-Sensitive Hashing for Large Scale Image Retrieval, Also Support Python and MATLAB.
SRS: A C++ Implementation of An In-memory, Space-efficient Approximate Nearest Neighbor Query Processing Algorithm based on p-stable Random Projection
TLSH open source on Github
JavaScript port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as node.js module
Java port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as maven package