Embedding edit distance to enable private keyword search

Our work focuses on fuzzy keyword search over encrypted data in Cloud Computing. We adapt results on private identification schemes by Bringer et al. to this new context, exploiting a classical embedding of the edit distance into the Hamming distance. Our approach allows some flexibility in the tolerated edit distance when searching for close keywords, while preserving the confidentiality of the queries. Our proposal is proved secure in a security model that takes privacy into account.


Introduction
Cloud Computing enables users to access shared resources somewhere on the Internet; at the very least, remote storage capacities can easily be envisaged. This brings much sensitive information into the Cloud, where it should stay encrypted to preserve its confidentiality. To examine its content remotely (and without decrypting it), specific procedures have been developed. Searchable encryption [1] builds an index for each keyword of interest. This way, a user can search over his encrypted data for such a keyword and retrieve the files containing it. Note that this search must be made with great care, for privacy reasons, so that the Cloud cannot find out what the underlying keyword is. Symmetric Searchable Encryption (SSE), as introduced by [2], relies on symmetric encryption primitives for efficiency reasons. In [3], Li et al. build on SSE a solution for fuzzy keyword search over encrypted data in Cloud Computing. Fuzziness should here be understood as minor typos introduced by users when entering a request through their keyboard. In this context, the edit distance (Levenshtein distance) is relevant to measure the similarity of strings.

Related works
[3] considers two different techniques, wildcard-based and gram-based, for achieving fuzzy keyword search over encrypted data. Both methods build a set consisting of the searched keyword and the nearby words according to the technique used. For instance, for the keyword CASTLE and an edit distance of 1, the wildcard-based fuzzy keyword set is {CASTLE, *CASTLE, *ASTLE, C*ASTLE, C*STLE, ..., CASTL*E, CASTL*, CASTLE*} (respectively {CASTLE, CSTLE, CATLE, CASLE, CASTE, CASTL, ASTLE} for the gram-based technique). The idea behind these fuzzy keyword sets is to index, before the search phase, not only the exact keywords but also those differing slightly, according to a fixed bound on the tolerated edit distance.
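As an illustration (and not part of our construction), the two fuzzy keyword sets above can be generated with a few lines of Python; the function names are ours.

```python
def wildcard_set(word):
    """Wildcard-based fuzzy set for edit distance 1: a '*' inserted at every
    position (tolerating an insertion) and a '*' replacing each character
    (tolerating a substitution or deletion at that position)."""
    s = {word}
    for i in range(len(word) + 1):
        s.add(word[:i] + "*" + word[i:])      # '*' marks an inserted character
    for i in range(len(word)):
        s.add(word[:i] + "*" + word[i + 1:])  # '*' replaces one character
    return s

def gram_set(word):
    """Gram-based fuzzy set for edit distance 1: the word plus every
    single-character deletion."""
    return {word} | {word[:i] + word[i + 1:] for i in range(len(word))}
```

For CASTLE, gram_set returns the seven words listed above, and wildcard_set returns fourteen patterns; the size of these sets grows quickly with the tolerated distance, which is precisely the drawback our approach avoids.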
Our approach is somewhat different. For iriscode biometric data, two iriscodes are compared by computing a Hamming distance [4]. There is today a trend to generalize this way of performing biometric matching to other modalities [5,6], for easier embedding into cryptographic protocols. In their works on private identification, Bringer et al. [7][8][9] (see also Section Private identification schemes) actually show how to carry out fuzzy keyword search for the Hamming distance. Following this trend, our idea is to combine this with a classical embedding of the edit distance into the Hamming distance [10,11] (see Section Edit distance approximation) to obtain a fuzzy keyword search for the edit distance. This approach has at least two advantages. Firstly, contrary to [3], it does not need to define a priori the set of words considered acceptable for the search. Moreover, we inherit the security properties of [7] in their security model. Note that our proposal thus relies on an asymmetric security model, which can be seen as an asset for Cloud Computing applications; indeed, using public-key encryption seems relevant in this context. To the best of our knowledge, this is the first scheme enabling fuzzy search with respect to the edit distance over data encrypted with a public-key scheme.

Contribution and organization
The main contribution of this work is a proposal for fuzzy keyword search over encrypted data, where fuzzy means that we tolerate some deviation in edit distance. A natural application of our results is Cloud Computing. We give proofs for the security properties of our scheme and briefly discuss its performance.
In the next Section, we briefly describe the classical cryptographic primitives that we use. In Section Model presentation, we present our security model. In Section Useful technical tools, we recall previously published work on private identification schemes and on the embedding of the edit distance into the Hamming distance. In Section Our construction, we introduce our scheme and explain its properties.

Private information retrieval protocol
A Private Information Retrieval protocol (PIR, [12]) is a scheme that enables a user to retrieve a specific piece of information from a remote server in such a way that the latter learns nothing about the query.
Suppose a database consists of M bits x = x_1,...,x_M. To be secure, the protocol should satisfy the following properties [13]:
• Soundness: When the user and the database follow the protocol, the result of the request is exactly the requested bit.
• User Privacy: For all x ∈ {0,1}^M and all 1 ≤ i,j ≤ M, no algorithm run by the database can distinguish with non-negligible advantage between requests for index i and requests for index j.
Among the known computationally secure PIR constructions, block-based PIR, i.e. PIR working on blocks of bits, efficiently reduces the cost. The best performances are those of Gentry and Ramzan [14] and Lipmaa [15], with a communication complexity polynomial in the logarithm of M. Surveys of the subject are available in [16,17].
Some PIR protocols are called Symmetric Private Information Retrieval protocols when they also comply with the Data Privacy requirement [13]. This condition states that the querier cannot distinguish between a database that possesses only the information he requested and a regular one; in other words, the querier does not get more information than he asked for.
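To give intuition for the User Privacy property, here is a toy two-server information-theoretic PIR sketch (a different construction from the computational single-server schemes of [14,15]; names are ours). Each server alone sees a uniformly random selection vector, yet the XOR of the two answers reveals exactly the requested bit.

```python
import secrets

def pir_query(db_size, index):
    """Client side: produce two selection vectors that differ only at `index`.
    Each vector alone is uniformly random, so neither server learns the index."""
    q1 = [secrets.randbelow(2) for _ in range(db_size)]
    q2 = q1.copy()
    q2[index] ^= 1
    return q1, q2

def pir_answer(db, query):
    """Server side: XOR of the database bits selected by the query."""
    acc = 0
    for bit, sel in zip(db, query):
        if sel:
            acc ^= bit
    return acc

def pir_reconstruct(a1, a2):
    # All selected bits cancel in pairs except the one at the queried index.
    return a1 ^ a2
```

This toy protocol has linear communication; the schemes of [14,15] achieve communication polylogarithmic in M, at the price of computational (rather than information-theoretic) security.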

Private information storage protocol
PIR protocols enable information to be retrieved from a database. A Private Information Storage (PIS) protocol [17] enables information to be written into a database with properties similar to those of PIR. The goal is to prevent the database from learning the content of the information being stored; for a detailed description of such protocols, see [1,18].
To be secure, the protocol must also satisfy the Soundness and User Privacy properties, meaning that 1. following the protocol results in the database being updated with the appropriate value, and 2. no algorithm run by the database can distinguish between two writing requests.

Model presentation
In this section, we introduce the security model for an Error-Tolerant Searchable Encryption scheme for the edit distance, by adapting the model from [7].

Entities for the protocol
The context is Cloud Computing, where users can either store data in or retrieve data from the Cloud. This leads to three different entities:
• The Cloud CL, which represents a single point of access to remote shared resources (i.e. a remote storage system). The Cloud is assumed to be untrusted, so we consider its content as publicly accessible to a third party and assume that communications within the Cloud and with users can be eavesdropped.
• The sender X sends data to be stored on the Cloud CL .
• The receiver Y generates queries to the Cloud CL to obtain the results of his searches.
Note that the sender and the receiver are not necessarily the same user and it is even possible that several senders and several receivers exist and interact. This corresponds well to the Cloud Computing model.

Definition of the primitives
In the sequel, messages are strings of length N, and ed(m_1, m_2) denotes the edit distance between m_1, m_2 ∈ {0,1}^N, i.e. the minimum number of character insertions, deletions and substitutions needed to transform one string into the other. Note that the edit distance is also well defined over larger alphabets and for variable-length strings; the scheme can be extended to these cases.
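For concreteness, ed(m_1, m_2) can be computed with the standard dynamic programming recurrence; a minimal sketch:

```python
def ed(m1, m2):
    """Levenshtein distance: minimum number of insertions, deletions and
    substitutions transforming m1 into m2 (row-by-row dynamic programming)."""
    prev = list(range(len(m2) + 1))        # distances from the empty prefix
    for i, c1 in enumerate(m1, 1):
        cur = [i]
        for j, c2 in enumerate(m2, 1):
            cur.append(min(prev[j] + 1,                # deletion of c1
                           cur[j - 1] + 1,             # insertion of c2
                           prev[j - 1] + (c1 != c2)))  # substitution (or match)
        prev = cur
    return prev[-1]
```

The function works for any alphabet and for strings of different lengths, matching the remark above.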
To enable error-tolerant searchable encryption, we need three main primitives: key material generation, the send request and the retrieve request.
Definition 1. An (ℓ, λ_min, λ_max)-Public Key Error-Tolerant Searchable Encryption scheme for the edit distance is obtained with the following probabilistic polynomial-time methods:
• KeyGen(1^ℓ) initializes the system and generates public and private keys (pk, sk) for a security parameter ℓ. The public key pk is used to store data in the Cloud, and the secret key sk is used to retrieve information.
• Send_{X,CL}(m, pk) is a protocol in which X sends to CL the data m ∈ {0,1}^N to be stored in the Cloud. At the end of the protocol, CL has stored the message m at a virtual address, noted φ(m).
• Retrieve_{Y,CL}(m′, sk) is a protocol in which, given a fresh message m′ ∈ {0,1}^N, Y asks for the virtual addresses of all data stored on CL that are close to m′, with respect to the Completeness(λ_min) and Soundness(λ_max) criteria (cf. Section Security requirements). This outputs a set of virtual addresses, noted Φ(m′), from which Y can reach the corresponding messages. The Completeness and Soundness criteria for the parameters λ_min, λ_max express the fact that a stored message will actually be retrieved if it is at an edit distance less than λ_min from m′, and that no message at a distance greater than λ_max from m′ will be returned (each except with a small probability). We emphasize that the definition above focuses on the searching problem (which is the tough task here): the algorithms output the virtual addresses from which the retriever Y can recover the messages. The messages themselves are possibly stored encrypted via a second encryption scheme.
An important difference compared to [3] is that we do not rely on fuzzy keyword sets: we want to ensure a given tolerance (materialized by λ_min, λ_max). By avoiding wildcards and grams, we make no prior assumption on the location of the errors.

Security requirements
We first recall the completeness and soundness criteria, which formalize the conditions under which the scheme and the Cloud actually return the correct answer.
Condition 1. Completeness(λ_min), Soundness(λ_max)
Let m_1, ..., m_p ∈ {0,1}^N be p different binary strings, and let m′ ∈ {0,1}^N be another string. Assume that, after initialization of the system, all the messages m_i have been stored in the Cloud CL with virtual addresses φ(m_i), and that a user Y retrieved the set of virtual addresses Φ(m′) associated to m′.
1. The scheme is said to be complete, up to a probability 1 − ε_1, if Pr[∃ i : ed(m_i, m′) ≤ λ_min and φ(m_i) ∉ Φ(m′)] is bounded by ε_1 (i.e. except with a small probability, all close messages are retrieved during the search through a Retrieve query).
2. The scheme is said to be sound, up to a probability 1 − ε_2, if Pr[∃ i : ed(m_i, m′) > λ_max and φ(m_i) ∈ Φ(m′)] is bounded by ε_2 (i.e. a false positive happens only with a small probability).
We now give the definitions of the security properties that the scheme needs to fulfill to ensure that the data stored in the Cloud are kept confidential and that the privacy of queries is ensured.
Condition 2. Sender Privacy
The scheme is said to respect Sender Privacy if the advantage of any server is negligible in the Exp^{Sender Privacy}_A experiment described below.
Bringer and Chabanne, Human-centric Computing and Information Sciences 2012, 2:2, http://www.hcis-journal.com/content/2/1/2

Here, A is a malicious opponent taking the place of CL, and C is a challenger at the user side.
The advantage of the adversary is |Pr[e′ = e] − 1/2|.
This experiment corresponds to a first phase in which the adversary makes Send requests that he chooses himself. Then A selects a pair (m_0, m_1) of messages, and the challenger C randomly chooses one of the two messages to be stored in the Cloud. At the end, after a polynomial number of further Send requests, the adversary tries to guess which of m_0 or m_1 has been sent. When the advantage of the adversary is negligible, we can assume that the data stored in the Cloud remain private.
The next condition focuses on retrieve queries. We want to ensure that the Cloud learns no information on the retrieve queries, i.e. neither on the input message m′ nor on the close messages that are retrieved. This experiment begins with the adversary's choice of messages to be stored in the Cloud. Then A chooses a number of retrieve queries to be made by the challenger.
Following this, A selects a pair of challenges (m_0, m_1), and one of them is randomly selected by C as input to a Retrieve query. Note that A does not see the result of the Retrieve queries. At the end of the experiment, A tries to guess which one it was. This condition captures the privacy of the receiver Y when generating Retrieve queries: CL does not learn information on their content.

Private identification schemes
The principle of a private identification scheme is to manage nearest neighbor search in the encrypted domain. The two main sub-problems are the Approximate Nearest Neighbor (ANN) problem and searchable encryption.
The Approximate Nearest Neighbor (ANN) problem is defined as follows: let P be a set of points in a metric space (E, d_E). For an input x ∈ E and ε ≥ 0, find a point p_x ∈ P such that d_E(x, p_x) ≤ (1 + ε) d_E(x, p) for all p ∈ P. This is an approximation of the Nearest Neighbor problem, as the exact problem is hard to solve in high-dimensional spaces. Several algorithms for the ANN problem have been proposed [19]. The basic principle is to rely on sketching methods, which output shorter vectors with increased stability and which simplify the search: P is preprocessed with such a sketching method to end up with a look-up table of short vectors, over which the search can be realized quickly by counting the number of exact or almost exact matches. Sketching must guarantee that two close inputs give, with good probability, the same short vector. Examples of sketching methods are numerous for vector spaces (with Hamming distance or Euclidean distance) [20][21][22][23], for instance random projections onto small subspaces. In the private identification schemes [7][8][9], the authors suggest using a construction exploited in [24] for iris biometrics, adapted to binary vectors with Hamming distance comparison. The sketching functions are restrictions of n-bit vectors to r ≪ n of their coordinates, yielding r-bit vectors:
Definition 2. Let F = (f_1, ..., f_μ) be a family of functions from {0,1}^n to {0,1}^r such that for x ∈ {0,1}^n, we have for all i ∈ {1,...,μ}, f_i(x) = (x_{i_1}, ..., x_{i_r}). We say that F is a sketching family for the Hamming distance from dimension n to dimension r.
With a sketching family in which all functions are independent, and assuming that the inputs are uniformly distributed, the probability of obtaining the same output for two distinct inputs can be estimated as follows: if the r coordinates of a sketch are chosen uniformly at random, two inputs at Hamming distance d from each other collide on that sketch with probability roughly (1 − d/n)^r.
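A minimal illustration of Definition 2 (the coordinate positions are chosen by hand for the example): a sketch whose coordinates all avoid the position where two vectors differ collides, while a sketch that includes it does not.

```python
def make_sketch(coords):
    """f_i restricts an n-bit vector to the coordinates listed in coords."""
    return lambda x: tuple(x[c] for c in coords)

n = 16
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = list(x)
y[0] ^= 1                      # y is at Hamming distance 1 from x

f1 = make_sketch([3, 5, 7])    # misses the differing coordinate: sketches collide
f2 = make_sketch([0, 4, 9])    # includes it: sketches differ
```

For close inputs, most of the μ sketches collide; for distant inputs, few do, which is what the counting threshold τ of our construction exploits.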
In our construction, we rely on this idea for Hamming distance approximation combined with the embedding method from [10,11] of edit distance into the Hamming space.
As far as privacy and security are concerned, private identification schemes are based on searchable encryption principles. The main goal of searchable encryption [2,25] is to store messages in an encrypted database while still enabling the search for messages related to some keywords. For instance, this could correspond to a remote mail service where the user wants to retrieve the messages containing a given keyword, without letting the server learn information on the content of his mails. [3] also uses such techniques, but only in a symmetric context. Following [7]'s idea, we adapt an asymmetric searchable encryption scheme for our construction (cf. Section Our construction).
A general solution to design a searchable encryption scheme is to associate each message with a set of keywords and to consider each keyword as a virtual address at which the receiver can recover a link toward the associated messages. To manage all these relations efficiently, we follow [1,26,27] by using Bloom filters. A Bloom filter [28] is a structure used in membership-checking applications to reduce the memory cost of data storage. We use an extension of this notion, called Bloom filters with storage, which enables identifiers of elements to be stored in each cell of the array.
In other words, the array is empty at the beginning, and for each element x ∈ S we add the identifier Id(x) of x to the cells indexed by h_1(x),...,h_v(x). To recover the identifiers associated to an element y, we compute T(y) = ∩_{j=1}^{v} t_{h_j(y)}. The following lemma describes the accuracy of this storage method.
Lemma 1. [28] Let (H, t_1, ..., t_k) be a (v,k)-Bloom filter with storage indexing S. For x ∈ S, the following properties hold:
• Id(x) ∈ T(x) = ∩_{j=1}^{v} t_{h_j(x)}, i.e. the identifier of x ∈ S is always retrieved;
• the probability Pr[t ∈ T(y) and t ≠ Id(y)] of obtaining a false positive is small, decreasing exponentially with the number v of hash functions.
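A minimal sketch of a (v,k)-Bloom filter with storage (hash functions derived from SHA-256 for illustration only; these are not the pseudo-random functions later required in Lemma 4, and the class name is ours):

```python
import hashlib

class BloomFilterWithStorage:
    """k cells, each holding a set of identifiers; v hash functions
    h_1..h_v map an element to v cells."""

    def __init__(self, k, v):
        self.k, self.v = k, v
        self.cells = [set() for _ in range(k)]

    def _h(self, j, x):
        # j-th hash function, built by prefixing the input with its index.
        digest = hashlib.sha256(f"{j}|{x}".encode()).digest()
        return int.from_bytes(digest, "big") % self.k

    def add(self, x, identifier):
        # Store Id(x) in the cells t_{h_1(x)},...,t_{h_v(x)}.
        for j in range(self.v):
            self.cells[self._h(j, x)].add(identifier)

    def lookup(self, y):
        # T(y): intersection of the v addressed cells (cf. Lemma 1).
        result = self.cells[self._h(0, y)].copy()
        for j in range(1, self.v):
            result &= self.cells[self._h(j, y)]
        return result
```

By construction there are no false negatives: a stored identifier is always found again, and a false positive requires a collision in all v cells at once.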

Edit distance approximation
Our construction is based on the embedding of the edit distance into the Hamming distance designed in [10]. To solve problems such as those described in Section Private identification schemes, data are embedded into the Hamming space, after which the techniques dedicated to the Hamming distance can be applied.
Definition 4. Let (E_1, d_{E_1}) and (E_2, d_{E_2}) be two metric spaces. An embedding is a map ψ : E_1 → E_2 that approximately preserves distances; the multiplicative factor by which distances may be stretched or shrunk is called the distortion.
[10] proves that {0,1}^N with the edit distance can be embedded into ℓ_1 with small distortion 2^{O(√(log_2 N log_2 log_2 N))}, and then shows, building on a previous work [20], how to end up efficiently in the Hamming space. More precisely:
Lemma 2. [10] There exist a probabilistic polynomial-time algorithm π and constants c_1, c_2 > 0 such that, for every N ∈ ℕ, for every 4^{−N} ≫ δ > 0 and for all x ∈ {0,1}^N, the algorithm computes a vector π(x) satisfying, for all x, y ∈ {0,1}^N,
2^{−c_1 √(log_2 N log_2 log_2 N)} ed(x, y) ≤ L_1(π(x), π(y)) ≤ 2^{c_1 √(log_2 N log_2 log_2 N)} ed(x, y),
where L_1 denotes the ℓ_1 distance.
The principle of the algorithm is to partition the string x into about 2^{√(log_2 N log_2 log_2 N)} substrings. From each substring x_i, the set of all substrings (shingles) obtained with a sliding window of a fixed size t is considered (i.e. all possible substrings of x_i formed by t consecutive coordinates). By considering the metric defined by the minimum-cost perfect matching between sets, [10] then explains how such sets are embedded into ℓ_1. Note that this technique introduces a lot of redundancy in the embedded substrings, which increases the dimension by a factor of at least N^2; this is nevertheless interesting for our construction, as the distortion is very low and the algorithm remains polynomial in N.
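The partition-and-shingle step of the algorithm can be illustrated as follows (the block count b and window size t are arbitrary here, and the subsequent minimum-cost-matching embedding into ℓ_1 from [10] is omitted; function names are ours):

```python
def shingles(s, t):
    """All substrings (shingles) of length t taken with a sliding window."""
    return {s[i:i + t] for i in range(len(s) - t + 1)}

def blocks(s, b):
    """Partition s into b roughly equal consecutive substrings."""
    step = -(-len(s) // b)  # ceiling division
    return [s[i:i + step] for i in range(0, len(s), step)]

def shingle_profile(s, b=4, t=3):
    """One shingle set per block: the intermediate object the embedding
    of [10] compares via minimum-cost perfect matching."""
    return [shingles(x_i, t) for x_i in blocks(s, b)]
```

A single edit in s changes only a handful of shingles in one block, which is why the shingle sets of two strings at small edit distance remain close.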

Our construction
Technical description. Let (Gen, Enc, Dec) be a semantically secure (IND-CPA, [29]) public-key cryptosystem, let Query^PIR_DB be the retrieve query from a database DB of a Private Information Retrieval protocol, and let Update^PIS_DB(val, i) be the write query into a database DB (which adds val to the i-th field) of a Private Information Storage protocol.
A Private Information Retrieval (PIR) protocol [16] enables a user to retrieve a specific block from a database without letting the database learn anything about the query or the answer (i.e. neither the index of the block nor its value). This is done through a method Query^PIR_DB(i), which allows a user to recover the element stored at index i in DB by running the PIR protocol. A Private Information Storage (PIS) protocol [17] enables information to be written into a database while preventing the database from learning anything about what is being stored (neither the value of the data nor the index of the location where it is stored). Such a protocol provides a method Update^PIS_DB(val, index), which takes as input an element and a database index, and puts the value val into the database entry index. See Section Cryptographic primitives for more details on these notions.
KeyGen(1^ℓ): The function takes a security parameter ℓ as input and uses Gen to generate a public/private key pair (pk, sk). It also initializes the Bloom filter array, (t_1,...,t_k) ← (Ø,...,Ø), and provides it to the Cloud.
To send a message to the Cloud, a user X executes the following algorithm.
1. X sends Enc(m, pk) to CL, which gives him back a virtual address φ(m).
2. X computes the embedding ψ(m) and, for all i ∈ {1,...,μ}, the sketches f_i ∘ ψ(m); then, for all j ∈ {1,...,v}, X asks CL to update the Bloom filter array through Update^PIS queries, in order to add the identifier into the cell t_{h_j(i||f_i∘ψ(m))}.
For privacy reasons, X also completes the Bloom filter array with random data, in order to obtain the same number l of elements in every cell t_1,...,t_k.
At the end of the algorithm, CL has stored the message m at the virtual address φ(m), and the Bloom filter structure has been filled with encrypted identifiers, indexed by several sketches that enable searching with approximate data.

To retrieve the messages close to a fresh message m′, Y computes the embedding ψ(m′), the sketches f_i ∘ ψ(m′) and the corresponding indexes α_{i,j} = h_j(i||f_i ∘ ψ(m′)); he then executes Query^PIR_CL(α_{i,j}) to retrieve the content of the cells t_{α_{i,j}} from the Bloom filter stored at CL. Note that, as the queries are made through a PIR protocol, the Cloud cannot learn any information. The advantage of using Bloom filters here is to permit an efficient look-up into the structure, as for a classical Bloom filter (i.e. without any encryption), compared to other hash table techniques.
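The tallying step of the retrieval can be sketched as follows (the function and parameter names are ours; bloom_lookup stands for the PIR queries to the Bloom filter cells, and tau is the threshold appearing in Lemma 3):

```python
from collections import Counter

def retrieve(sketches_of_m, bloom_lookup, tau):
    """Count how often each identifier appears across the sketch queries
    and keep those seen at least tau times."""
    counts = Counter()
    for i, sketch in enumerate(sketches_of_m):
        for identifier in bloom_lookup(i, sketch):
            counts[identifier] += 1
    return {ident for ident, c in counts.items() if c >= tau}
```

Identifiers of close messages collide with many of the μ sketches and pass the threshold; identifiers returned only through sporadic collisions are filtered out.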

Security properties
In this section, we explain why this construction achieves the security requirements of Section Security requirements.
Lemma 3. Completeness. The scheme is complete up to a probability 1 − ε_1.
Proof. (sketch of) For m, m′ such that ed(m, m′) ≤ λ_min, Section Edit distance approximation implies that HD(ψ(m; λ_min), ψ(m′; λ_min)) ≤ α log_2(1/δ) with probability 1 − δ. This leads to a probability lower than 1 − (1 − α^{c_2})^{rτ} of finding the identifier of a close message fewer than τ times; this probability can thus be made small, cf. the example in Section Discussion.
Lemma 4. Soundness. With λ_max = 2^{c_1 √(log_2 N log_2 log_2 N)} λ_min, and provided that the Bloom filter functions from H behave like pseudo-random functions from {1,...,μ} × {0,1}^r to {1,...,k}, the scheme is sound up to a probability 1 − ε_2.
Proof. (sketch of) For m, m′ such that ed(m, m′) > λ_max, Section Edit distance approximation implies that HD(ψ(m; λ_min), ψ(m′; λ_min)) ≥ β log_2(1/δ). The other possible cause of error is a v-fold collision in the Bloom filter hashes.
Lemma 5. Sender Privacy. Assume that the PIS protocol achieves PIS User Privacy; then the scheme ensures Sender Privacy.
Proof. (sketch of) CL receives only encrypted messages and Update^PIS queries, which do not enable it to distinguish between the output of Send(m_0, pk) and the output of Send(m_1, pk) among the executions of Send(m_i, pk), i ∈ {2,...,Ω}, as we assume that the underlying encryption scheme is semantically secure and that the PIS protocol achieves PIS User Privacy.
Lemma 6. Receiver Privacy. Assume that the PIR protocol ensures PIR User Privacy; then the scheme ensures Receiver Privacy.
Proof. (sketch of) The Cloud CL receives and answers only Query^PIR requests, which by assumption leak information neither on their content nor on their outputs.

Discussion
To illustrate the error rates one can expect, we give an example choice of parameters. For instance, we choose a Bloom filter array of size k = 128 with v = 64 hash functions. Then we can approximate ε_2 as (1 − β^{c_2})^{rτ}. Assuming α = c_2/4 and β = c_2/2, then with μ = 128 functions in the sketching family for the Hamming distance, r = 10 and τ = 3, we obtain a negligible ε_2 and ε_1 ≈ 0.023. With these parameters, the number of queries during the Send and Retrieve phases is μ × v = 2^13.
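As a sanity check on the query count above (a trivial computation, included only to make the figures explicit):

```python
mu, v = 128, 64   # sketching functions and Bloom hash functions
k, r, tau = 128, 10, 3  # Bloom array size and sketch parameters from the text
queries = mu * v  # one PIS (resp. PIR) query per (sketch, hash) pair
assert queries == 2 ** 13
```

Each Send and each Retrieve thus issues 8192 PIS or PIR queries, which is why the per-query cost, and hence the size k of the Bloom filter array, must stay moderate.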
Concerning the cost of PIR and PIS queries, the size of the Bloom filter array should remain moderate, as with k = 128 here, to be efficient. Note that in practice the choice of λ_min depends on the number of errors between two words that one wants to tolerate in the fuzzy search. Our embedding ensures that λ_max stays close to λ_min. The other parameters then have to be tuned to obtain small or negligible error rates ε_1 and ε_2 (cf. Lemma 3 and Lemma 4). The purpose of this paper is to introduce a new encrypted search scheme with respect to the edit distance. At this point, our contribution is mainly theoretical. To go further, one should consider a practical use case over the Cloud in order to devise an efficient implementation.