 Research
 Open Access
Embedding edit distance to enable private keyword search
Human-centric Computing and Information Sciences, volume 2, Article number: 2 (2012)
Abstract
Background
Our work is focused on fuzzy keyword search over encrypted data in Cloud Computing.
Methods
We adapt results on private identification schemes by Bringer et al. to this new context, exploiting a classical embedding of the edit distance into the Hamming distance.
Results
Our approach allows some flexibility in the tolerated edit distance when looking for close keywords, while preserving the confidentiality of the queries.
Conclusion
Our proposal is proven secure in a security model that takes privacy into account.
Introduction
Cloud Computing gives users access to shared resources somewhere on the Internet; at the very least, remote storage capacities can easily be envisaged. This moves much sensitive information into the Cloud, where it should remain encrypted to preserve its confidentiality. To search this content remotely (and without decrypting it), specific procedures have been developed. Searchable encryption [1] builds an index for each keyword of interest. This way, a user can search his encrypted data for such a keyword and retrieve the files containing it. Note that this search must be made with great care, for privacy reasons, so that the Cloud cannot find out the underlying keyword. Symmetric Searchable Encryption (SSE), as introduced by [2], relies on symmetric encryption primitives for efficiency reasons. In [3], Li et al. build on SSE to obtain a solution for fuzzy keyword search over encrypted data in Cloud Computing. Fuzziness should here be understood as minor typos introduced by users when typing their request. In this context, the edit distance (Levenshtein distance) is the relevant measure of string similarity.
Related works
[3] considers two different techniques for achieving fuzzy keyword search over encrypted data: wildcard-based and gram-based. Both methods build a set consisting of the searched keyword and its nearby words according to the chosen technique. For instance, for the keyword CASTLE and an edit distance of 1, the wildcard-based fuzzy keyword set is {CASTLE, *CASTLE, *ASTLE, C*ASTLE, C*STLE, ..., CASTL*E, CASTL*, CASTLE*} (respectively {CASTLE, CSTLE, CATLE, CASLE, CASTE, CASTL, ASTLE} for the gram-based technique). The idea behind these fuzzy keyword sets is to index, before the search phase, not only the exact keywords but also those differing slightly, according to a fixed bound on the tolerated edit distance.
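To make the two constructions concrete, here is a small Python sketch (function names are ours, not from [3]) generating both fuzzy keyword sets for an edit distance of 1:

```python
def wildcard_set(word):
    """Wildcard-based fuzzy set for edit distance 1: the word itself plus
    every variant with '*' inserted at a position or substituted for a letter."""
    variants = {word}
    for i in range(len(word) + 1):
        variants.add(word[:i] + "*" + word[i:])    # '*' stands for one inserted character
    for i in range(len(word)):
        variants.add(word[:i] + "*" + word[i+1:])  # '*' replaces one character
    return variants

def gram_set(word):
    """Gram-based fuzzy set for edit distance 1: the word plus all
    single-character deletions."""
    return {word} | {word[:i] + word[i+1:] for i in range(len(word))}

print(sorted(gram_set("CASTLE")))
```

For CASTLE, `gram_set` reproduces exactly the seven-element set listed above.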
Our approach is somewhat different. For iriscode biometric data, two iriscodes are compared via a Hamming distance computation [4]. There is today a trend to generalize this way of performing biometric matching to other modalities [5, 6] for easier embedding into cryptographic protocols. In their work on private identification, Bringer et al. [7–9] (see also Section Private identification schemes) actually show how to carry out fuzzy keyword search for the Hamming distance. Following this trend, our idea is to combine their schemes with a classical embedding of the edit distance into the Hamming distance [10, 11] (see Section Edit distance approximation) to obtain fuzzy keyword search for the edit distance. This approach has at least two advantages. Firstly, contrary to [3], we do not need to define a priori the set of words considered acceptable for the search. Moreover, we inherit the security properties of [7] in their security model. Note that our proposal thus relies on an asymmetric security model. This can be seen as an asset for Cloud Computing applications, where public-key encryption seems relevant. To the best of our knowledge, this is the first scheme enabling fuzzy search with respect to the edit distance over data encrypted with a public-key scheme.
Contribution and organization
The main contribution of this work is a fuzzy keyword search over encrypted data, where fuzzy means that we tolerate some deviation in edit distance. A natural application of our results is Cloud Computing. We give proofs for the security properties of our scheme, and we also briefly discuss its performance.
In the next Section, we briefly describe classical cryptographic primitives that we use. In Section Model presentation, we present our security model. In Section Useful technical tools, we recall some already published works on private identification schemes and the embedding of edit distances into the Hamming distance. In Section Our construction, we introduce our work and explain its properties.
Cryptographic primitives
Private information retrieval protocol
A Private Information Retrieval protocol (PIR, [12]) is a scheme that enables a user to retrieve a specific piece of information from a remote server in such a way that the server does not learn anything about the query.
Suppose a database consists of M bits $x = x_1, \dots, x_M$. To be secure, the protocol should satisfy the following properties [13]:

Soundness: When the user and the database follow the protocol, the result of the request is exactly the requested bit.

User Privacy: For all $x \in \{0,1\}^M$ and all $1 \le i, j \le M$, no algorithm used by the database can distinguish with non-negligible probability between the requests for index i and for index j.
Among the known constructions of computationally secure PIR, block-based PIR (i.e. PIR working on blocks of bits) efficiently reduces the communication cost. The best performances are obtained by Gentry and Ramzan [14] and Lipmaa [15], with a communication complexity polynomial in the logarithm of M. Surveys of the subject are available in [16, 17].
Some PIR protocols are called Symmetric Private Information Retrieval protocols when they also comply with the Data Privacy requirement [13]. This condition states that the querier cannot distinguish between a database that possesses only the information he requested and a regular one; in other words, the querier does not get more information than he asked for.
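As a concrete illustration of the User Privacy property, here is a toy Python sketch of the classic two-server, information-theoretic PIR of [12]: each server sees a uniformly random index set, yet the XOR of the two answers reveals the requested bit. This is an illustration only, not one of the computational single-server schemes discussed above.

```python
import secrets

def pir_query(M, i):
    """User side: build two random index sets whose symmetric difference is
    exactly {i}. Each set alone is uniformly distributed, so a single server
    learns nothing about i (User Privacy)."""
    s1 = {j for j in range(M) if secrets.randbits(1)}
    s2 = s1 ^ {i}                       # symmetric difference
    return s1, s2

def pir_answer(db, s):
    """Server side: XOR of the requested bits."""
    a = 0
    for j in s:
        a ^= db[j]
    return a

def pir_reconstruct(a1, a2):
    # XOR over s1 and over s2 cancels everywhere except at index i.
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]
s1, s2 = pir_query(len(db), 5)
assert pir_reconstruct(pir_answer(db, s1), pir_answer(db, s2)) == db[5]
```

The communication here is linear in M; the computational schemes of [14, 15] achieve the polylogarithmic communication mentioned above.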
Private information storage protocol
PIR protocols enable retrieving information from a database. A Private Information Storage (PIS) protocol [17] enables writing information into a database with properties similar to those of PIR. The goal is to prevent the database from learning the content of the information being stored; for a detailed description of such protocols, see [1, 18].
To be secure, the protocol must also satisfy the Soundness and User Privacy properties, meaning that 1. following the protocol results in the update of the database with the appropriate value, and 2. no algorithm run by the database can distinguish between two writing requests.
Model presentation
In this section, we introduce the security model for an Error-Tolerant Searchable Encryption scheme for the edit distance, adapting the model from [7].
Entities for the protocol
The context is Cloud Computing where users can either store or retrieve data from the Cloud. This leads to three different entities:

The Cloud $\mathcal{CL}$, which represents a single point of access to remote shared resources (i.e. a remote storage system). The Cloud is assumed to be untrusted, so we consider its content publicly accessible to a third party and assume that communications within the Cloud and with users can be eavesdropped.

The sender $\mathcal{X}$ sends data to be stored on the Cloud $\mathcal{CL}$.

The receiver $\mathcal{Y}$ generates queries to the Cloud $\mathcal{CL}$ to obtain the results of his searches.
Note that the sender and the receiver are not necessarily the same user and it is even possible that several senders and several receivers exist and interact. This corresponds well to the Cloud Computing model.
Definition of the primitives
In the sequel, messages are strings of length N, and $ed(m_1, m_2)$ denotes the edit distance between $m_1, m_2 \in \{0,1\}^N$, i.e. the minimum number of character insertions, deletions and substitutions needed to transform one string into the other. Note that the edit distance is also well defined on larger alphabets and on variable-length strings; the scheme can be extended to these cases.
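For reference, the edit distance used throughout can be computed with the classic dynamic program; a minimal Python sketch:

```python
def ed(m1, m2):
    """Levenshtein edit distance: minimum number of character insertions,
    deletions and substitutions turning m1 into m2 (classic dynamic program,
    row by row to keep memory linear)."""
    prev = list(range(len(m2) + 1))
    for i, a in enumerate(m1, 1):
        cur = [i]
        for j, b in enumerate(m2, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution (free on match)
        prev = cur
    return prev[-1]

print(ed("CASTLE", "CASLTE"))  # a transposition costs 2 plain edits here
```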
To enable errortolerant searchable encryption, we need three main primitives: the key materials generation, the send request and the receive request.
Definition 1. A $(\epsilon, \lambda_{min}, \lambda_{max})$-Public-Key Error-Tolerant Searchable Encryption scheme for the edit distance is obtained with the following probabilistic polynomial-time methods:

KeyGen(1^{ℓ}) initializes the system, and generates public and private keys (pk,sk) for a security parameter ℓ. The public key pk is used to store data in the Cloud, and the secret key sk is used to retrieve information.

$${Send}_{\mathcal{X},\mathcal{CL}}\left(m, pk\right)$$
is a protocol in which $\mathcal{X}$ sends to $\mathcal{CL}$ the data m ∈ {0,1}^{N} to be stored in the Cloud. At the end of the protocol, $\mathcal{CL}$ has stored the message m at a virtual address denoted φ(m).

$${Retrieve}_{\mathcal{Y},\mathcal{CL}}\left(m', sk\right)$$
is a protocol in which, given a fresh message m' ∈ {0,1}^{N}, $\mathcal{Y}$ asks for the virtual addresses of all data stored on $\mathcal{CL}$ that are close to m', with respect to the Completeness(λ_{ min }) and Soundness(λ_{ max }) criteria (cf. Section Security requirements). This outputs a set of virtual addresses, denoted Φ(m'), from which $\mathcal{Y}$ can reach the corresponding messages.
The Completeness and Soundness criteria for the parameters λ_{ min }, λ_{ max } express the fact that a stored message will actually be retrieved if it is at an edit distance at most λ_{ min } from m', and that no message at a distance greater than λ_{ max } from m' will be returned (each except with a given small probability). We emphasize that the definition above focuses on the searching problem (which is the tough task here): the algorithms output the virtual addresses at which the retriever $\mathcal{Y}$ can fetch the messages. The messages themselves are possibly stored encrypted via a second encryption scheme.
An important difference compared to [3] is that we do not rely on fuzzy keyword sets; we want to ensure a given tolerance (materialized by λ_{ min }, λ_{ max }). By avoiding wildcards and grams, we make no prior assumption on the location of the errors.
Security requirements
We first recall the completeness and soundness criteria that formalize the conditions for the scheme and the Cloud to actually return the correct answers.
Condition 1. Completeness( λ_{ min }), Soundness( λ_{ max }) Let m_{1}, ..., m_{p} ∈ {0,1}^{N} be p different binary strings, and let m' ∈ {0,1}^{N} be another string. Assume that, after initialization of the system, all the messages m_{i} have been stored in the Cloud $\mathcal{CL}$ with virtual addresses φ(m_{ i }), and that a user $\mathcal{Y}$ retrieved the set of virtual addresses Φ(m') associated to m'.

1.
The scheme is said to be complete, up to a probability 1 - ϵ_{1}, if
$$\Pr_{m'}\left[\exists i,\ ed\left(m', m_i\right) \le \lambda_{min}\ \&\ \phi\left(m_i\right) \notin \Phi\left(m'\right)\right] \le \epsilon_1$$
(i.e. that except with a small probability all close messages are retrieved during the search through a Retrieve query).

2.
The scheme is said to be sound, up to a probability 1 - ϵ_{2}, if
$$\Pr_{m'}\left[\exists i,\ ed\left(m', m_i\right) > \lambda_{max}\ \&\ \phi\left(m_i\right) \in \Phi\left(m'\right)\right]$$
is bounded by ϵ_{2} (i.e. that a false positive happens only with a small probability).
We now give the definition of the security properties that the scheme needs to fulfill to ensure that the data stored in the Cloud are kept confidential and that privacy of queries is ensured.
Condition 2. Sender Privacy The scheme is said to respect Sender Privacy if the advantage of any server is negligible in the $Ex{p}_{\mathcal{A}}^{Sender\phantom{\rule{2.77695pt}{0ex}}Privacy}$ experiment, described below. Here, $\mathcal{A}$ is a malicious opponent taking the place of $\mathcal{CL}$, and $\mathcal{C}$ is a challenger at the user side.
The advantage of the adversary is $\left|\Pr\left[e' = e\right] - \frac{1}{2}\right|$.
This experiment corresponds to a first phase where the adversary receives Send requests that he himself chose. Then $\mathcal{A}$ selects a pair (m_{0},m_{1}) of messages, and the challenger $\mathcal{C}$ randomly chooses one of the two messages to be stored in the Cloud. At the end, after a polynomial number of further Send requests, the adversary tries to guess which of m_{0} or m_{1} was sent. When the advantage of the adversary is negligible, we can consider that the data stored in the Cloud remain private.
The next condition focuses on retrieve queries. We want to ensure that the Cloud does not learn information on the retrieve queries, i.e. neither on the input message m', nor on the close retrieved messages.
Condition 3. Receiver Privacy The scheme is said to respect Receiver Privacy if the advantage of the Cloud is negligible in the experiment $Ex{p}_{\mathcal{A}}^{\mathsf{\text{Receiver}}\phantom{\rule{2.77695pt}{0ex}}\mathsf{\text{Privacy}}}$ described below. $\mathcal{A}$ denotes the malicious opponent taking the place of $\mathcal{CL}$, and $\mathcal{C}$ the challenger at the user side.
The advantage of the adversary is $\left|\Pr\left[e' = e\right] - \frac{1}{2}\right|$.
This experiment begins with the adversary's choice of messages to be stored in the Cloud. Then $\mathcal{A}$ chooses a number of retrieve queries to be made by the challenger. Following this, $\mathcal{A}$ selects a pair of challenges $\left({m}_{0}^{\prime},{m}_{1}^{\prime}\right)$ and one of them is randomly selected by $\mathcal{C}$ as input to a Retrieve query. Note that $\mathcal{A}$ should not see the result of the Retrieve queries. At the end of the experiment, $\mathcal{A}$ tries to guess which one it was.
This condition captures the privacy of the receiver $\mathcal{Y}$ when generating Retrieve queries: $\mathcal{CL}$ does not learn information on their content.
Useful technical tools
Private identification schemes
The principle of a private identification scheme is to perform nearest-neighbor search in the encrypted domain. The two main sub-problems are the Approximate Nearest Neighbor (ANN) problem and Searchable Encryption.
The Approximate Nearest Neighbor (ANN) problem is defined as follows: let $\mathcal{P}$ be a set of points in a metric space (E,d_{E}). For an input x ∈ E and ϵ ≥ 0, find a point $p_x \in \mathcal{P}$ such that $d_E(x, p_x) \le (1+\epsilon) \min_{p \in \mathcal{P}} d_E(x, p)$.
This is a relaxation of the Nearest Neighbor problem, as the exact problem is hard to solve in high-dimensional spaces. Several algorithms for the ANN problem have been proposed [19]; the basic principle is to rely on sketching methods, which output shorter vectors with increased stability and thereby simplify the search: $\mathcal{P}$ is preprocessed with such a sketch to end up with a lookup table of short vectors, on which the search can be realized quickly by counting the number of exact or almost exact matches. The sketch must guarantee that two close inputs yield, with good probability, the same short vector. Sketching methods are numerous for vector spaces (with Hamming or Euclidean distance) [20–23]; for instance, random projections onto small subspaces. In the private identification schemes [7–9], the authors use a construction exploited in [24] for iris biometrics, adapted to binary vectors with Hamming distance comparison. The sketching functions are restrictions of n-bit vectors to r ≪ n of their coordinates, yielding r-bit vectors:
Definition 2. Let $\mathcal{F} = \left(f_1, \dots, f_\mu\right)$ be a family of functions from $\{0,1\}^n$ to $\{0,1\}^r$ such that for $x \in \{0,1\}^n$ and all $i \in \{1, \dots, \mu\}$, $f_i(x) = \left(x_{i_1}, \dots, x_{i_r}\right)$. We say that $\mathcal{F}$ is a sketching family for the Hamming distance from dimension n to dimension r.
With a sketching family whose functions are independent, and assuming uniformly distributed inputs, the probability of obtaining the same output for two distinct inputs can be estimated as follows: for two inputs x, y at Hamming distance d, a function $f_i$ maps them to the same sketch with probability approximately $\left(1 - \frac{d}{n}\right)^r$.
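This estimate can be checked empirically. The Python sketch below (with hypothetical parameters n, r, μ) builds a random-restriction sketching family as in Definition 2 and counts how many sketches agree for two vectors at Hamming distance d; the fraction should be close to $(1 - d/n)^r$:

```python
import random

def make_sketching_family(n, r, mu, seed=0):
    """A sketching family F = (f_1, ..., f_mu): each f_i keeps r randomly
    chosen coordinates of an n-bit vector (cf. Definition 2)."""
    rng = random.Random(seed)
    coords = [rng.sample(range(n), r) for _ in range(mu)]
    return [(lambda x, c=c: tuple(x[j] for j in c)) for c in coords]

n, r, mu = 512, 10, 128
F = make_sketching_family(n, r, mu)

rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(n)]
y = x[:]
d = 64                                   # flip d coordinates of x
for j in rng.sample(range(n), d):
    y[j] ^= 1

matches = sum(1 for f in F if f(x) == f(y))
# Expected fraction of matching sketches: (1 - d/n)^r = 0.875^10 ≈ 0.26.
print(matches / mu)
```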
In our construction, we rely on this idea for Hamming distance approximation combined with the embedding method from [10, 11] of edit distance into the Hamming space.
As far as privacy and security are concerned, private identification schemes are based on searchable encryption principles. The main goal of searchable encryption [2, 25] is to store messages in an encrypted database while still enabling search for the messages related to some keywords. For instance, this could correspond to a remote mail service where the user wants to retrieve the messages containing a given keyword, without letting the server learn information on the content of his mails. [3] also uses such a technique, but only in a symmetric setting. Following [7]'s idea, we adapt an asymmetric searchable encryption scheme for our construction (cf. Section Our construction).
A general way to design a searchable encryption scheme is to associate each message with a set of keywords and to consider each keyword as a virtual address where the receiver can recover a link to the associated messages. To manage all these relations efficiently, we follow [1, 26, 27] in using Bloom filters. A Bloom filter [28] is a data structure used in membership-checking applications to reduce the memory cost of data storage. We use an extension of this notion, called Bloom filters with storage, which stores identifiers of elements in each array cell.
Definition 3. (Bloom Filter with Storage, [1]) Let $\mathcal{S}$ be a finite subset of a space E, with a set of identifiers associated to $\mathcal{S}$. For a family of v (independent and random) hash functions $\mathcal{H}=\left\{{h}_{1},\dots ,{h}_{v}\right\}$, with each h_{i}:E→{1,...,k}, a (v,k)-Bloom Filter with Storage for the indexation of $\mathcal{S}$ is $\mathcal{H}$, together with the array (t_{1},...,t_{k}), defined recursively as:

1.
∀i∈{1,...,k}, t_{ i }←∅,

2.
$\forall x \in \mathcal{S}, \forall j \in \{1, \dots, v\}$, $t_{h_j(x)} \leftarrow t_{h_j(x)} \cup \{Id(x)\}$, where Id(x) is the identifier of x.
In other words, the array is empty at the beginning, and for each element $x \in \mathcal{S}$ we add the identifier Id(x) of x to the cells indexed by $h_1(x), \dots, h_v(x)$. To recover the identifiers associated to an element y, we compute $T(y) = \bigcap_{j=1}^{v} t_{h_j(y)}$. The following lemma describes the accuracy of this storage method.
Lemma 1. [28] Let $\left(\mathcal{H},{t}_{1},\dots ,{t}_{k}\right)$ be a (v,k)Bloom filter with storage indexing $\mathcal{S}$. For $x\in \mathcal{S}$, the following properties hold:

$Id(x) \in T(x) = \bigcap_{j=1}^{v} t_{h_j(x)}$, i.e. the identifier of $x \in \mathcal{S}$ is always retrieved,

the probability Pr[t ∈ T(y) and t ≠ Id(y)] of a false positive is $\left(1 - \left(1 - \frac{v}{k}\right)^{|\mathcal{S}|}\right)^{v}$.
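A minimal Python sketch of a (v,k)-Bloom filter with storage; modeling the hash functions by salted SHA-256 is our choice here, not something prescribed by [1]:

```python
import hashlib

class BloomFilterWithStorage:
    """(v, k)-Bloom filter with storage (cf. Definition 3): k cells and v hash
    functions; each inserted element writes its identifier into v cells, and
    lookup returns the intersection of those v cells."""

    def __init__(self, v, k):
        self.v, self.k = v, k
        self.cells = [set() for _ in range(k)]

    def _h(self, j, x):
        # j-th hash function, modeled by salting SHA-256 with the index j.
        digest = hashlib.sha256(f"{j}|{x}".encode()).digest()
        return int.from_bytes(digest, "big") % self.k

    def add(self, x, identifier):
        for j in range(self.v):
            self.cells[self._h(j, x)].add(identifier)

    def lookup(self, y):
        result = self.cells[self._h(0, y)].copy()
        for j in range(1, self.v):
            result &= self.cells[self._h(j, y)]
        return result

bf = BloomFilterWithStorage(v=8, k=64)
bf.add("CASTLE", "addr-1")
bf.add("TOWER", "addr-2")
print(bf.lookup("CASTLE"))   # always contains 'addr-1', by Lemma 1
```

As Lemma 1 states, an inserted element always recovers its own identifier; false positives can only add spurious identifiers, with the probability given above.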
Edit distance approximation
Our construction is based on the embedding of the edit distance into the Hamming distance designed in [10]. To solve problems such as those described in Section Private identification schemes, data are embedded into the Hamming space, where techniques dedicated to the Hamming distance can then be applied.
Definition 4. Let $\left({E}_{1},{d}_{{E}_{1}}\right)$ and $\left({E}_{2},{d}_{{E}_{2}}\right)$ be two metric spaces. An embedding $\psi :\left({E}_{1},{d}_{{E}_{1}}\right)\to \left({E}_{2},{d}_{{E}_{2}}\right)$ has distortion c if for all (x,y) ∈ E_{1} × E_{1}, $d_{E_1}(x, y) \le d_{E_2}\left(\psi(x), \psi(y)\right) \le c \cdot d_{E_1}(x, y)$.
[10] proves that $\{0,1\}^N$ with the edit distance can be embedded into $\ell_1$ with small distortion $2^{O\left(\sqrt{\log_2 N \log_2 \log_2 N}\right)}$, and then shows, building on the earlier work [20], how to end up efficiently in the Hamming space. More precisely:
Lemma 2. [10] There exist a probabilistic polynomial-time algorithm π and constants c_{1}, c_{2} > 0 such that, for every N ∈ ℕ, for every δ with $0 < \delta \ll 4^{N}$, and for all x ∈ {0,1}^{N}, the algorithm computes $\pi(x) \in \ell_1^{c_2\left(N^2 \log_2(N/\delta)\right)}$ such that for all (x,y) ∈ {0,1}^{N}, with probability at least 1 - δ,
$$ed(x, y) \le \left\| \pi(x) - \pi(y) \right\|_{L_1} \le 2^{c_1\left(\sqrt{\log_2 N \log_2 \log_2 N}\right)} \cdot ed(x, y),$$
where $\|\cdot\|_{L_1}$ denotes the L_{1} distance.
The principle of the algorithm is to partition a string x into about $2^{\left(\sqrt{\log_2 N \log_2 \log_2 N}\right)}$ substrings. From each substring x^{i}, the set of all its substrings (shingles) obtained through a window of fixed size t is considered (i.e. all substrings of x^{i} formed by t subsequent coordinates). Considering the metric defined by the minimum-cost perfect matching between such sets, [10] then explains how these sets are embedded into ℓ_{1}. Note that this technique introduces a lot of redundancy in the embedded substrings, which increases the dimension by a factor of at least N^{2}; but this is acceptable for our construction, as the distortion is very low and the algorithm remains polynomial in N.
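The shingle-extraction step is simple to illustrate; a one-function Python sketch (the window size t is a free parameter of the embedding):

```python
def shingles(s, t):
    """Set of all length-t substrings (shingles) of s, taken over a sliding
    window, as used by the embedding of [10] on each block of the input."""
    return {s[i:i + t] for i in range(len(s) - t + 1)}

print(sorted(shingles("CASTLE", 3)))  # → ['AST', 'CAS', 'STL', 'TLE']
```

Strings at small edit distance share most of their shingles, which is what makes the matching-based metric on shingle sets a good proxy for the edit distance.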
Based on [20], the authors then show that there exist 0 < α < β < c_{2} and an embedding ψ from {0,1}^{N} with edit distance ed to ${\left\{0,1\right\}}^{{c}_{2}\left({log}_{2}\left(1/\delta \right)\right)}$ with Hamming distance HD, computing $\psi(x; t)$ for every t ∈ ℕ, such that with probability at least 1 - δ:

If ed(x,y) ≤ t, then HD(ψ(x), ψ(y)) ≤ α log_{2}(1/δ).

If $ed(x, y) \ge 2^{c_1\left(\sqrt{\log_2 N \log_2 \log_2 N}\right)} \cdot t$, then HD(ψ(x), ψ(y)) ≥ β log_{2}(1/δ).
Our construction
Technical description
Setup
Let {0,1}^{N} be equipped with the edit distance. Let ψ be the embedding of ({0,1}^{N},ed) into $\left({\left\{0,1\right\}}^{{c}_{2}\left({log}_{2}\left(1/\delta \right)\right)},HD\right)$ (cf. previous section). Let $\mathcal{F}=\left({f}_{1},\dots ,{f}_{\mu}\right)$ be a sketching family for the Hamming distance from dimension c_{2}(log_{2}(1/δ)) to a dimension r. Let $\left(\mathcal{H},{t}_{1},\dots ,{t}_{k}\right)$, with $\mathcal{H}=\left\{{h}_{1},\dots ,{h}_{v}\right\}$ and $h_i : \{1, \dots, \mu\} \times \{0,1\}^r \to \{1, \dots, k\}$, be a (v,k)-Bloom Filter with Storage.
Let (Gen, Enc, Dec) be a semantically secure (IND-CPA, [29]) public-key cryptosystem, let ${Query}_{DB}^{PIR}$ be the retrieve query from a database DB of a Private Information Retrieval protocol, and let ${Update}_{DB}^{PIS}\left(val,i\right)$ be the write query into a database DB (adding val to the i-th field) of a Private Information Storage protocol.
A Private Information Retrieval (PIR) protocol [16] enables retrieving a specific block from a database without letting the database learn anything about the query or the answer (i.e. neither the index of the block nor its value). This is done through a method ${Query}_{DB}^{PIR}\left(i\right)$ that allows a user to recover the element stored at index i in DB by running the PIR protocol. A Private Information Storage (PIS) protocol [17] enables writing information into a database while preventing the database from learning what is being stored (neither the value of the data nor the index of the location where it is written). Such a protocol provides a method ${Update}_{DB}^{PIS}\left(val,index\right)$, which takes as input an element and a database index, and puts the value val into the database entry index. See Section Cryptographic primitives for more details on these notions.
KeyGen(1^{ℓ})
The function takes a security parameter ℓ as input and uses Gen to generate a public and private key pair (pk,sk). It also initializes the Bloom filter array, $(t_1, \dots, t_k) \leftarrow (\emptyset, \dots, \emptyset)$, and provides it to the Cloud.
To send a message to the Cloud, a user $\mathcal{X}$ executes the following algorithm.

1.
$\mathcal{X}$ sends Enc(m,pk) to $\mathcal{CL}$ which will give him back a virtual address φ(m).

2.
$\mathcal{X}$ computes the embedding ψ(m), then $f_i \circ \psi(m)$ for all i ∈ {1,...,μ}, and for all j ∈ {1,...,v} asks $\mathcal{CL}$ to update the Bloom filter array through queries
$${Update}_{\mathcal{CL}}^{PIS}\left(Enc\left(\phi(m), pk\right),\ h_j\left(i \,\|\, f_i \circ \psi(m)\right)\right)$$
in order to add the identifier to the cell $t_{h_j\left(i \,\|\, f_i \circ \psi(m)\right)}$.
For privacy reasons, $\mathcal{X}$ also completes the Bloom filter array with random data so that all cells t_{1},...,t_{k} hold the same number l of elements.
At the end of the algorithm, $\mathcal{CL}$ has stored the message m at a virtual address denoted φ(m), and the Bloom filter structure has been filled with encrypted identifiers, indexed by several sketches, which enables searching with approximate data.
To retrieve a message in the Cloud, a user $\mathcal{Y}$ proceeds as follows.

1.
For all i ∈ {1,...,μ} and all j ∈ {1,...,v}, $\mathcal{Y}$ computes $\alpha_{i,j} = h_j\left(i \,\|\, f_i \circ \psi(m')\right)$.

2.
$\mathcal{Y}$ executes ${Query}_{\mathcal{CL}}^{PIR}\left({\alpha}_{i,j}\right)$ to retrieve the content of the cells ${t}_{{\alpha}_{i,j}}$ from the Bloom filter stored in $\mathcal{CL}$.

3.
$\mathcal{Y}$ decrypts the content of the cells with Dec(.,sk) and for i ∈ {1,...,μ}

$\mathcal{Y}$ computes the intersection of all the decrypted versions of the cells ${t}_{{\alpha}_{i,1}},\dots ,{t}_{{\alpha}_{i,v}}$.

If φ(m) is in this intersection, $\mathcal{Y}$ has most probably found a match $f_i \circ \psi(m) = f_i \circ \psi(m')$.


4.
$\mathcal{Y}$ counts the number of times each identifier is retrieved in such intersections $\bigcap_{j=1}^{v} t_{\alpha_{i,j}}$ (for i ∈ {1,...,μ}).

5.
$\mathcal{Y}$ selects all the identifiers retrieved at least τ times, for some threshold τ. This gives the result $\Phi \left(m\prime \right)=\left\{\phi \left({m}_{{i}_{1}}\right),\dots ,\phi \left({m}_{{i}_{\gamma}}\right)\right\}$ of the execution of Retrieve.
Note that, as the queries are made through a PIR protocol, the Cloud cannot learn any information. The advantage of using Bloom filters here is to permit an efficient lookup into the structure, as for classical Bloom filters (i.e. without any encryption), compared to other hash table techniques.
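The whole Send/Retrieve pipeline can be simulated locally. In the toy Python sketch below, the embedding ψ, the encryption layer and the PIR/PIS transport are deliberately abstracted away (we work directly on N-bit vectors under Hamming distance, with hypothetical parameters), so only the sketching, Bloom indexing and threshold-counting parts of the scheme are exercised:

```python
import hashlib
import random

# Hypothetical toy parameters, not the ones analyzed in the paper.
N, R, MU, V, K, TAU = 256, 10, 64, 8, 4096, 3

rng = random.Random(42)
coords = [rng.sample(range(N), R) for _ in range(MU)]   # sketching family F
cells = [set() for _ in range(K)]                       # Bloom filter array

def h(j, key):
    # j-th Bloom hash, modeled by salted SHA-256.
    digest = hashlib.sha256(f"{j}|{key}".encode()).digest()
    return int.from_bytes(digest, "big") % K

def send(m, addr):
    # Write addr into the cells indexed by h_j(i || f_i(m)), j = 1..V.
    for i, c in enumerate(coords):
        key = (i, tuple(m[j] for j in c))
        for j in range(V):
            cells[h(j, key)].add(addr)

def retrieve(m_prime):
    # Count how many of the MU sketches return each address through the
    # intersection of its V cells, and keep those seen at least TAU times.
    counts = {}
    for i, c in enumerate(coords):
        key = (i, tuple(m_prime[j] for j in c))
        hits = set.intersection(*(cells[h(j, key)] for j in range(V)))
        for a in hits:
            counts[a] = counts.get(a, 0) + 1
    return {a for a, n in counts.items() if n >= TAU}

m = [rng.randint(0, 1) for _ in range(N)]
send(m, "addr-m")
m_close = m[:]
for j in rng.sample(range(N), 8):    # a query 8 bit flips away from m
    m_close[j] ^= 1
print(retrieve(m_close))             # the close query recovers {'addr-m'}
```

A query far from every stored message matches a given sketch with probability only about 2^{-R}, so it almost never reaches the threshold TAU.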
Security properties
In this section, we explain why this construction achieves the security requirements of Section Security requirements.
Lemma 3. (Completeness) The scheme is complete up to a probability 1 - ϵ_{1}, with $\epsilon_1 \le 1 - \left(1 - \frac{\alpha}{c_2}\right)^{r\tau}$.
Proof (sketch). For m, m' such that ed(m,m') ≤ λ_{ min }, Section Edit distance approximation implies that $HD\left(\psi(m; \lambda_{min}), \psi(m'; \lambda_{min})\right) \le \alpha \log_2(1/\delta)$ with probability 1 - δ. Hence each sketch matches, i.e. $f_i \circ \psi(m) = f_i \circ \psi(m')$, with probability at least $\left(1 - \frac{\alpha}{c_2}\right)^r$.
This leads to a probability lower than $1 - \left(1 - \frac{\alpha}{c_2}\right)^{r\tau}$ of finding the identifier of a close message fewer than τ times; this probability can thus be made small, cf. the example in Section Discussion.
More precisely, $\epsilon_1 \approx \sum_{i=0}^{\tau-1} \binom{\mu}{i} \left(1 - \left(1 - \frac{\alpha}{c_2}\right)^{r}\right)^{\mu-i} \left(1 - \frac{\alpha}{c_2}\right)^{ri}$.
Lemma 4. (Soundness) With $\lambda_{max} = 2^{c_1\left(\sqrt{\log_2 N \log_2 \log_2 N}\right)} \lambda_{min}$, and provided that the Bloom filter functions in $\mathcal{H}$ behave like pseudorandom functions from {1,...,μ} × {0,1}^{r} to {1,...,k}, the scheme is sound up to a probability 1 - ϵ_{2}, with $\epsilon_2 \approx \left(1 - \frac{\beta}{c_2}\right)^{r\tau}$ (up to the Bloom filter collision probability).
Proof (sketch). For m, m' such that ed(m,m') > λ_{ max }, Section Edit distance approximation implies that $HD\left(\psi(m; \lambda_{min}), \psi(m'; \lambda_{min})\right) \ge \beta \log_2(1/\delta)$. Hence each sketch matches with probability at most $\left(1 - \frac{\beta}{c_2}\right)^r$, so τ matches occur with probability roughly bounded by $\left(1 - \frac{\beta}{c_2}\right)^{r\tau}$.
The other cause for an error could come from v collisions in the Bloom filter hashes.
Lemma 5. Sender Privacy Assume that the PIS protocol achieves PIS User Privacy, the scheme ensures Sender Privacy.
Proof (sketch). $\mathcal{CL}$ receives only encrypted messages and Update^{PIS} queries, which do not enable it to distinguish between the output of Send(m_{0}, pk) and the output of Send(m_{1}, pk), even after the execution of the other queries Send(m_{i}, pk), i ∈ {2,...,Ω}, as we assume that the underlying encryption scheme is semantically secure and that the PIS protocol achieves PIS User Privacy.
Lemma 6. Receiver Privacy Assume that the PIR protocol ensures PIR User Privacy, then the scheme ensures Receiver Privacy.
Proof (sketch). The Cloud $\mathcal{CL}$ receives and answers only Query^{PIR} requests, which by assumption leak information neither on their content nor on their outputs.
Discussion
To illustrate the error rates one can expect, we give an example choice of parameters. We choose a Bloom filter array of size k = 128 with v = 64 hash functions. Then we can approximate ϵ_{2} as $\left(1 - \frac{\beta}{c_2}\right)^{r\tau}$, and we have $\epsilon_1 \approx \sum_{i=0}^{\tau-1} \binom{\mu}{i} \left(1 - \left(1 - \frac{\alpha}{c_2}\right)^{r}\right)^{\mu-i} \left(1 - \frac{\alpha}{c_2}\right)^{ri}$, where α < β. Assume that α = c_{2}/4 and β = c_{2}/2; then with μ = 128 functions in the sketching family for the Hamming distance, r = 10 and τ = 3, we obtain a negligible ϵ_{2} and ϵ_{1} ≈ 0.023. With these parameters, the number of queries during the Send and Retrieve phases is μ × v = 2^{13}. Concerning the cost of the PIR and PIS queries, the Bloom filter array should remain small enough, like k = 128 here, to be efficient.
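These figures can be reproduced directly from the formulas above; a short Python check using the stated parameters:

```python
from math import comb

mu, r, tau = 128, 10, 3
p_alpha = (1 - 0.25) ** r   # per-sketch match probability for a close pair, alpha = c2/4
p_beta  = (1 - 0.5) ** r    # per-sketch match probability for a far pair, beta = c2/2

# Completeness error: probability of fewer than tau matching sketches out of mu.
eps1 = sum(comb(mu, i) * p_alpha**i * (1 - p_alpha)**(mu - i) for i in range(tau))
# Soundness error, approximated as in the text.
eps2 = p_beta ** tau

print(round(eps1, 3), eps2)   # ≈ 0.023 and 2^-30 ≈ 9.3e-10
```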
Note that, in practice, the choice of λ_{ min } depends on the number of errors between two words that one wants to tolerate in the fuzzy search; the embedding then determines λ_{ max } from λ_{ min }. The other parameters have to be tuned to obtain small or negligible error rates ϵ_{1} and ϵ_{2} (cf. Lemma 3 and Lemma 4). The purpose of this paper is to introduce a new encrypted search for the edit distance; at this point, our contribution is mainly theoretical. To go further, one should consider a practical use case over the Cloud in order to devise an efficient implementation.
References
 1.
Boneh D, Kushilevitz E, Ostrovsky R, Skeith WE III: Public Key Encryption That Allows PIR Queries. In CRYPTO, Volume 4622 of Lecture Notes in Computer Science. Edited by: Menezes A. Springer; 2007:50–67.
 2.
Curtmola R, Garay JA, Kamara S, Ostrovsky R: Searchable symmetric encryption: improved definitions and efficient constructions. In CCS'06: Proceedings of the 13th ACM conference on Computer and communications security. ACM; 2006:79–88.
 3.
Li J, Wang Q, Wang C, Cao N, Ren K, Lou W: Enabling Efficient Fuzzy Keyword Search over Encrypted Data in Cloud Computing. Cryptology ePrint Archive, Report 2009/593 2009, 16.
 4.
Daugman J: The importance of being random: statistical principles of iris recognition. Pattern Recognit 2003,36(2):279–291. 10.1016/S0031-3203(02)00030-4
 5.
Bringer J, Despiegel V: Binary feature vector fingerprint representation from minutiae vicinities. In: Biometrics: Theory, Applications, and Systems (BTAS'10), IEEE 4th International Conference on; 2010.
 6.
Bringer J, Despiegel V, Favre M: Adding localization information in a fingerprint binary feature vector representation. SPIE Defense, Security, Sensing 2011.
 7.
Bringer J, Chabanne H, Kindarji B: Error-tolerant searchable encryption. IEEE ICC 2009, CISS 2009.
 8.
Bringer J, Chabanne H, Kindarji B: Identification with encrypted biometric data. Security Comm Networks 2011,4(5):548–562. 10.1002/sec.206
 9.
Adjedj M, Bringer J, Chabanne H, Kindarji B: Biometric Identification over Encrypted Data Made Feasible. In ICISS, Volume 5905 of Lecture Notes in Computer Science. Edited by: Prakash A, Gupta I. Springer; 2009:86–100.
 10.
Ostrovsky R, Rabani Y: Low distortion embeddings for edit distance. In STOC. Edited by: Gabow HN, Fagin R. ACM; 2005:218–224.
 11.
Ostrovsky R, Rabani Y: Low distortion embeddings for edit distance. J ACM 2007, 54(5).
 12.
Chor B, Kushilevitz E, Goldreich O, Sudan M: Private Information Retrieval. J ACM 1998,45(6):965–981. 10.1145/293347.293350
 13.
Gertner Y, Ishai Y, Kushilevitz E, Malkin T: Protecting data privacy in private information retrieval schemes. STOC 1998, 151–160.
 14.
Gentry C, Ramzan Z: Single-database private information retrieval with constant communication rate. In ICALP, Volume 3580 of Lecture Notes in Computer Science. Edited by: Caires L, Italiano GF, Monteiro L, Palamidessi C, Yung M. Springer; 2005:803–815.
 15.
Lipmaa H: An oblivious transfer protocol with log-squared communication. In ISC, Volume 3650 of Lecture Notes in Computer Science. Edited by: Zhou J, Lopez J, Deng RH, Bao F. Springer; 2005:314–328.
 16.
Gasarch WI: A Survey on Private Information Retrieval. [http://www.cs.umd.edu/~gasarch/pir/pir.html]
 17.
Ostrovsky R, Shoup V: Private information storage (extended abstract). STOC 1997, 294–303.
 18.
Ostrovsky R, Skeith WE III: Algebraic Lower Bounds for Computing on Encrypted Data. Cryptology ePrint Archive, Report 2007/064 2007.
 19.
Indyk P: Nearest neighbors in high-dimensional spaces. In Handbook of Discrete and Computational Geometry, Chapter 39. 2nd edition. Edited by: Goodman JE, O'Rourke J. CRC Press; 2004.
 20.
Kushilevitz E, Ostrovsky R, Rabani Y: Efficient Search for approximate nearest neighbor in high dimensional spaces. Symposium on the Theory Of Computing 1998, 614–623.
 21.
Kirsch A, Mitzenmacher M: Distance-sensitive Bloom filters. Algorithm Engineering & Experiments (ALENEX); 2006.
 22.
Indyk P, Motwani R: Approximate nearest neighbors: towards removing the curse of dimensionality. Symposium on the Theory Of Computing 1998, 604–613.
 23.
Andoni A, Indyk P: Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun ACM 2008, 51: 117–122.
 24.
Hao F, Daugman J, Zielinski P: A Fast Search Algorithm for a Large Fuzzy Database. IEEE Trans Inf Forensics Security 2008, 3(2):203–212.
 25.
Boneh D, Di Crescenzo G, Ostrovsky R, Persiano G: Public Key Encryption with Keyword Search. In EUROCRYPT, Volume 3027 of Lecture Notes in Computer Science. Edited by: Cachin C, Camenisch J. Springer; 2004:506–522.
 26.
Goh EJ: Secure indexes. Cryptology ePrint Archive, Report 2003/216 2003.
 27.
Bethencourt J, Song DX, Waters B: New constructions and practical applications for private stream searching (extended abstract). In IEEE Symposium on Security and Privacy. IEEE Computer Society; 2006:132–139.
 28.
Bloom BH: Space/time tradeoffs in hash coding with allowable errors. Commun ACM 1970,13(7):422–426. 10.1145/362686.362692
 29.
Goldwasser S, Micali S: Probabilistic Encryption. J Comput Syst Sci 1984, 28(2):270–299. 10.1016/0022-0000(84)90070-9
Acknowledgements
The authors thank Céline Chevalier for her support.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
JB and HC follow their previous work on biometric identification to extend it to the new area of application of cloud computing. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Bringer, J., Chabanne, H. Embedding edit distance to enable private keyword search. Hum. Cent. Comput. Inf. Sci. 2, 2 (2012). doi:10.1186/2192-1962-2-2
Keywords
 Edit distance
 Embeddings for edit distance
 Private Identification schemes