Abstract
This paper aims to move research in the bounded retrieval model (BRM) from theory to practice by considering symmetric (rather than public-key) encryption, giving efficient schemes, and providing security analyses with sharp, concrete bounds. The threat addressed is malware that aims to exfiltrate a user’s key. Our schemes aim to thwart this by using an enormously long key, yet paying for this almost exclusively in storage cost, not speed. Our main result is a general-purpose lemma, the subkey prediction lemma, that gives a very good bound on an adversary’s ability to guess a (modest length) subkey of a big-key, the subkey consisting of the bits of the big-key found at random, specified locations, after the adversary has exfiltrated partial information about the big-key (e.g., half as many bits as the big-key is long). We then use this to design a new kind of key encapsulation mechanism, and, finally, a symmetric encryption scheme. Both are in the random-oracle model. We also give a less efficient standard-model scheme that is based on universal computational extractors (UCE). Finally, we define and achieve hedged BRM symmetric encryption, which provides authenticity in the absence of leakage.
1 Introduction
This paper is concerned with the possibility of mass surveillance by APTs. An APT (Advanced Persistent Threat) is malware that resides on your system and attempts to exfiltrate your key. (This means that it aims to communicate your key to its home base, probably using your system’s network connection.) How might one protect against this? One answer is: by strengthening system security to the point that we eliminate APTs. Unfortunately, this approach seems out of reach. Indeed, the Snowden revelations show that the NSA (through TAO, their Tailored Access Operations unit) and others have sophisticated system penetration capabilities that they use to plant APTs. Another answer is provided by the bounded retrieval model (BRM) [2, 3, 17, 20, 23], namely to make secret keys so big that their undetected exfiltration is difficult.
So far, BRM research has been largely theoretical and foundational. Our intent is to move it towards being a plausible countermeasure to mass surveillance. This involves the following. First, we treat symmetric rather than asymmetric encryption. Second, we focus on simple, efficient schemes. Third, we provide security analyses that are strong and fully concrete (no hidden constants), giving good numerical bounds on security.
Our main technical contribution is a very good upper bound on the probability of predicting a subset of random positions in a large key in the presence of leakage. We then give a big-key encapsulation mechanism, and thence a big-key symmetric encryption scheme, both efficient and in the random-oracle model (ROM) [12]. Let us now look at all this in more detail.
The BRM. The BRM evolved through a series of works [2, 3, 17, 20, 23]. It is part of the broader area of Leakage-Resilient Cryptography [1, 25, 34] and is also related to the Bounded Storage Model [16, 29]. A survey by Alwen et al. [4] explains that
If an attacker hacks into a remote system (or infects it with some malware) it may ... be infeasible/impractical for the attacker to download “too much” data (say, more than 10 GBytes).
In such a setting, making the key very big, say 1 TByte, ensures that the adversary obtains limited information about it. The idea was echoed by Adi Shamir at the RSA 2013 conference in a somewhat broader context of secrets that are not necessarily keys, and with specific reference to APTs. He said:
We have to think in a totally different way about how we are going to protect computer systems assuming there are APTs inside already which cannot be detected. Is everything lost? I claim that not: there are many things that you can do, because the APT is basically going to have a very, very narrow pipeline to the outside world. \(\ldots \) I would like, for example, all the small data to become big data, just in terms of size. I want that the secret of the Coca-Cola company to be kept not in a tiny file of one kilobyte, which can be exfiltrated easily by an APT \(\cdots \). I want that file to be a terabyte, which cannot be [easily] exfiltrated.
The problem we aim to solve is how to effectively utilize a key \({{\varvec{K}}}\) whose length k is big in the presence of an adversary that has some information about \({{\varvec{K}}}\). In the BRM, the APT is modeled as a function that takes \({{\varvec{K}}}\) as input and returns a string L of \(\ell < k\) bits, the leakage, where \(\ell \) is some parameter; for example, \(\ell = k/10\) corresponding to the assumption that the adversary can’t exfiltrate more than 100 GBytes of information about a 1 TByte key. In the BRM, effective utilization imposes two requirements, one on security and the other on efficiency. The former is that security must be maintained in the presence of the leakage. The latter, called locality, is that the scheme’s algorithms may access only a very small part of the key. Without this, working with a big-key would be too inefficient.
ADW [3] give BRM schemes for authenticated key-exchange and public-key identification. ADNSWW [2] give BRM schemes for public-key encryption. (Here \({{\varvec{K}}}\) is the secret decryption key.) The latter construction is based on identity-based hash proof systems, which are instantiated via bilinear maps, lattices and quadratic residuosity.
Overview. We treat symmetric encryption in the BRM. We refer to it, synonymously, as big-key symmetric encryption, to emphasize the use of a large key. The k-bit key \({{\varvec{K}}}\) of a big-key symmetric encryption scheme \({\mathbf {\mathsf{{SE}}}}\) is shared between sender and receiver. Algorithm \({\mathbf {\mathsf{{SE}}}}.\mathsf {Enc}\) maps \({{\varvec{K}}}\) and a plaintext to a ciphertext, while \({\mathbf {\mathsf{{SE}}}}.\mathsf {Dec}\) maps \({{\varvec{K}}}\) and a ciphertext to a plaintext, both making few accesses to \({{\varvec{K}}}\).
Our scheme is simple, efficient, and easily described. It is parameterized by the number of probes \(p\) into the k-bit key \({{\varvec{K}}}\), for example, \(p=500\). To encrypt message M, pick a random R, apply the random oracle \(\textsc {RO}\) to it to get a sequence of probes \(\mathbf {p}[1],\ldots ,\mathbf {p}[p] \in [1..k]\) into \({{\varvec{K}}}\), and let \(J = {{\varvec{K}}}[\mathbf {p}[1]]\ldots {{\varvec{K}}}[\mathbf {p}[p]]\) be the corresponding bits of \({{\varvec{K}}}\). Next, obtain a short, conventional key K by applying \(\textsc {RO}\) to J. Finally, encrypt M under K with a conventional symmetric encryption scheme to get a ciphertext C. Return (R, C) as the ciphertext of the big-key scheme.
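To make the flow concrete, here is a minimal Python sketch of the scheme just described. SHA-256 (via hashlib) stands in for the random oracle \(\textsc {RO}\), an XOR stream derived from the short key stands in for the conventional base scheme, and the function names and domain-separation labels are our own choices, not fixed by the paper. Anticipating Sect. 4, the selector R is also included in the final hash along with J.

```python
import hashlib, os

def ro(x: bytes, n: int) -> bytes:
    """Random-oracle stand-in: expand SHA-256 to n bytes in counter mode."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(ctr.to_bytes(4, "big") + x).digest()
        ctr += 1
    return out[:n]

def derive_key(bigkey: bytes, R: bytes, p: int = 500) -> bytes:
    """RO(R) -> probes; J = probed bits of the big-key; K = RO(R || J)."""
    k = 8 * len(bigkey)                       # key length in bits
    raw = ro(b"probes:" + R, 8 * p)
    probes = [int.from_bytes(raw[8*i : 8*i+8], "big") % k for i in range(p)]
    J = bytes((bigkey[i // 8] >> (i % 8)) & 1 for i in probes)
    return ro(b"key:" + R + J, 16)            # 128-bit conventional key

def encrypt(bigkey: bytes, M: bytes) -> tuple:
    R = os.urandom(16)                        # fresh random selector per message
    K = derive_key(bigkey, R)
    C = bytes(m ^ s for m, s in zip(M, ro(b"stream:" + K, len(M))))  # toy base scheme
    return R, C                               # (R, C) is the big-key ciphertext

def decrypt(bigkey: bytes, R: bytes, C: bytes) -> bytes:
    K = derive_key(bigkey, R)
    return bytes(c ^ s for c, s in zip(C, ro(b"stream:" + K, len(C))))
```

With `bigkey = os.urandom(1 << 30)` (a 1 GByte test key), `decrypt(bigkey, *encrypt(bigkey, b"msg"))` returns `b"msg"`, and each operation reads only the \(p=500\) probed bits of the big-key.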
This scheme is derived via a modular framework with three steps, each involving its own definition, problem and analysis, namely subkey prediction, encapsulation and encryption. We will discuss them in turn below. Then we describe two extensions of the basic scheme sketched above, namely, a standard-model variant based on UCE, and the idea of hedged big-key encryption.
Subkey prediction. Our core technical contribution concerns the subkey-prediction problem. We consider a game parameterized by the length k of the big-key, a number \(p\) of random probes into it, and a bound \(\ell \) on the leakage. The game selects a k-bit big-key \({{\varvec{K}}}\) at random, and the adversary is given the result \(L\in \{0,1\}^{\ell }\) of applying an arbitrary leakage function to \({{\varvec{K}}}\). This L represents the exfiltrated information, and, as a special case, could consist of \(\ell \) bits of \({{\varvec{K}}}\). Now the game picks random indices \(\mathbf {p}[1], \ldots ,\mathbf {p}[p]\in [1..k]\), called probes. These probes are also given to the adversary. We ask the adversary, given the leakage and probes, to predict, in its entirety, the induced \(p\)-bit subkey \(J = {{\varvec{K}}}[\mathbf {p}[1]]{{\varvec{K}}}[\mathbf {p}[2]] \cdots {{\varvec{K}}}[\mathbf {p}[p]]\) found at those spots. We ask: how well can the adversary do at this? That is, we want to know the adversary’s maximum probability, denoted \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell )\) in our formalization of Sect. 3, of guessing J as a function of k, \(\ell \), and \(p\).
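The game is easy to simulate for small parameters. The sketch below (ours, for intuition only) estimates the winning probability of the obvious adversary against the leakage function that leaks the first \(\ell \) bits of the big-key; as discussed next, this natural pair turns out not to be optimal.

```python
import random

def skp_greedy_estimate(k=64, ell=32, p=8, trials=200_000):
    """Monte Carlo estimate: leak K[0..ell-1]; the adversary answers leaked
    positions exactly and flips a coin for the rest."""
    wins = 0
    for _ in range(trials):
        K = [random.randrange(2) for _ in range(k)]
        probe = [random.randrange(k) for _ in range(p)]   # 0-indexed probes
        guess = [K[i] if i < ell else random.randrange(2) for i in probe]
        wins += guess == [K[i] for i in probe]
    return wins / trials

# Each probe hits the leaked region with probability ell/k and is a fair coin
# otherwise, so the estimate lands near (ell/k + (1 - ell/k)/2)**p = 0.75**8.
print(skp_greedy_estimate())
```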
The analysis turns out to be surprisingly technical. One might think that there is no better strategy than to leak some \(\ell \) bits of \({{\varvec{K}}}\), for example the first \(\ell \), and then predict J in the obvious way. (If \(\mathbf {p}[i] \in [1..\ell ]\) then \({{\varvec{K}}}[\mathbf {p}[i]]\) is known from the leakage, else guess it.) We give a counter-example showing that this is not the best strategy, and one can do better using an error-correcting code. We then show that, roughly, the best strategy for the adversary is to select the leakage function so that the pre-images of any point under this function are sandwiched between adjacent Hamming balls. In Sect. 3 we formalize and prove this and then use it to show that
\(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \;\lessapprox \; 2^{-{w}(\lambda )\cdot p} \quad \text {with} \quad {w}(\lambda ) = -\lg \big (1- {H}_2^{-1}(1-\lambda )\big )\;,\) where \( {H}_2(x) = -x\lg (x)-(1-x)\lg (1-x)\) is the binary entropy function and \( {H}_2^{-1}(1-\lambda )\) is the smaller of the two possible inverses of \(1-\lambda \) under \( {H}_2\). The salient point is that the probability decreases exponentially in the number of probes \(p\), with the factor in the exponent depending on the fraction \(\lambda = \ell /k\) of the bits of the big-key that are leaked. See Fig. 1 for a plot of \({w}(\lambda )\) as a function of \(\lambda \).
Related settings are analyzed by several lemmas in the literature, notably NZ [30, Lemma 11], Vadhan [35, Lemma 9] and ADW [3, Lemma A.3]. We are not aware of any direct way of applying the first two to get bounds on subkey prediction probability. (They do give bounds on what in Sect. 3 we call the restricted subkey prediction probability.) They also involve hidden constants that make it hard to obtain the concrete bounds needed to estimate security in usage. In contrast, the elegant lemma of ADW [3, Lemma A.3] can be directly applied to bound \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell )\), and it gives a concrete bound with no hidden constants. However, the bound obtained in this way is much inferior to ours, as we now illustrate.
In Sect. 3 we show that [3, Lemma A.3] implies \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \le 2^{-c}\) for c given by Eq. (8), namely \(c = p(k-\ell -5)/(2k\lg (2k)+3p)\). Figure 2 compares the bounds obtained by our result (column “New”) with the ones obtained via [3, Lemma A.3] (column “Old”). We see for example that for k being 1 TB and \(\ell /k = 0.1\), for 500 probes, we show that \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \approx 2^{-274}\) while the prior bound would be only \(2^{-5.1}\). Other entries show similarly large gaps for other parameter values. Another way to compare is, for a certain fixed \(k,\ell \), to ask how many probes are needed to get 256 bits of security, meaning have \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \le 2^{-256}\). According to Fig. 2, for k being 1 TB and \(\ell /k = 0.1\), our result says that 468 probes suffice, while the prior result would require us to use 24954 probes. The difference in efficiency is dramatic, meaning that reliance on the prior bounds would translate to a significant loss of practical efficiency for big-key symmetric encryption.
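These numbers are easy to reproduce. The sketch below (our code, not from the paper) inverts the binary entropy function numerically to evaluate \({w}(\lambda )\) and compares the two exponents for the parameters just discussed.

```python
import math

def H2(x):
    """Binary entropy function."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def H2_inv(y):
    """Smaller solution x of H2(x) = y, by bisection on (0, 1/2]."""
    lo, hi = 1e-12, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H2(mid) < y else (lo, mid)
    return (lo + hi) / 2

def w(lam):
    return -math.log2(1 - H2_inv(1 - lam))

k = 8 * 10**12                 # 1 TByte big-key, in bits
lam, p = 0.1, 500              # leak 10% of the key; 500 probes
new_bits = w(lam) * p          # our bound: Adv <= 2^{-w(lambda) p}
old_bits = p * (k - lam*k - 5) / (2*k*math.log2(2*k) + 3*p)   # via Eq. (8)
print(round(new_bits, 1), round(old_bits, 1))   # ~274.1 vs ~5.1
print(math.ceil(256 / w(lam)))                  # probes for 256-bit security: 468
```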
Encapsulation. Building on the above, we provide a general tool, \(\mathsf {XKEY}\), for using big-keys in symmetric settings. \(\mathsf {XKEY}\) takes in a big-key \({{\varvec{K}}}\) and a random selector \(R\), which is a short string (like 128–256 bits). It returns a derived key \(K=\mathsf {XKEY}({{\varvec{K}}},R)\), which has conventional length (like 128 bits) and can be used in a conventional scheme. We formalize the goal of \(\mathsf {XKEY}\), which we call big-key encapsulation. It asks that the derived key be indistinguishable from a random string of the same length, even given the selector and leakage on the big-key. This is reminiscent of a classical key-encapsulation mechanism (KEM) as defined by CS [18], yet it is also very different, since we are in the presence of leakage and in the symmetric (rather than asymmetric) setting. Additionally, in the ROM, not only does the adversary have access to the random oracle \(\textsc {RO}\), but the leakage function can itself invoke the random oracle. This is crucial, as otherwise it is easy to give an example of a scheme that is secure in the ROM yet insecure when the random oracle is instantiated. This element increases the technical difficulty of our security proof.
Given \({{\varvec{K}}}\) and \(R\), our \(\mathsf {XKEY}\) algorithm applies the random oracle \(\textsc {RO}\) to \(R\) to specify probes \(\mathbf {p}[1], \ldots ,\mathbf {p}[p]\in [1..k]\) into the big-key \({{\varvec{K}}}\). It lets \(J = {{\varvec{K}}}[\mathbf {p}[1]]{{\varvec{K}}}[\mathbf {p}[2]] \cdots {{\varvec{K}}}[\mathbf {p}[p]]\) be the corresponding subkey. By subkey unpredictability, J is unpredictable, but it is not guaranteed to be indistinguishable from random. \(\mathsf {XKEY}\) further applies \(\textsc {RO}\) to \(R\Vert J\) to obtain the derived key K. Theorem 12 says that this derived key is indistinguishable from random even to an adversary that sees multiple encapsulations and gets leakage about \({{\varvec{K}}}\). The theorem gives a concrete bound on the adversary advantage. The proof has two steps, first addressing the ability of the leakage function to use the random oracle by a coin-fixing argument, and then reducing to subkey prediction via a game sequence.
Big-key encryption. In Sect. 5, we define and achieve big-key symmetric encryption. Our definition ensures indistinguishability in the presence of leakage on the big-key, the leakage again allowed to depend on the random oracle. To encrypt a message M under \({{\varvec{K}}}\), we pick \(R\) at random, obtain a session key \(K = \mathsf {XKEY}({{\varvec{K}}},R)\), and output as ciphertext a pair \((R,C)\) where C is an encryption of M under K with a base, conventional symmetric encryption scheme, for example an AES mode of operation. Theorem 13 shows that this achieves our definition of big-key encryption privacy assuming that \(\mathsf {XKEY}\) achieves our definition of encapsulation security and the base scheme meets a standard privacy definition for symmetric encryption. The scheme is very efficient. Relative to the base scheme, the added communications cost is small (transmission of R) and the added computation cost is also small (one \(\mathsf {XKEY}\) operation).
Standard-model scheme. A variant of our scheme, still quite efficient, can be proven secure in the standard model. We can focus on encapsulation, since our reduction of encryption to the latter does not use a random oracle. We consider a variant \(\mathsf {XKEY2}\) of \(\mathsf {XKEY}\) where the selector \(R=(I,\mathbf {p})\) is a key I for a UCE (Universal Computational Extractor) \(\mathsf {H}\) [8] together with a sequence of probes \(\mathbf {p}[1], \ldots , \mathbf {p}[p] \in [1..k]\) into \({{\varvec{K}}}\). Given \({{\varvec{K}}}\) and this selector, \(\mathsf {XKEY2}\) lets \(J = {{\varvec{K}}}[\mathbf {p}[1]]\ldots {{\varvec{K}}}[\mathbf {p}[p]]\) be the corresponding bits of \({{\varvec{K}}}\) and obtains the derived key K by applying \(\mathsf {H}(I,\cdot )\) to J. Efficiency is the same as for our ROM scheme, but the ciphertext of the encryption scheme is longer because the selector, which is included in the ciphertext, is longer. Theorem 14 proves security assuming \(\mathsf {H}\) is \(\mathrm {UCE}[{\mathcal {S}}^{\mathrm {sup}}]\)-secure, namely UCE-secure for statistically unpredictable sources. This version of UCE, from [8, 15], has remained viable and has been used in many applications. We use our subkey unpredictability bound in a crucial way, to prove statistical unpredictability of the source constructed in the reduction.
Authenticity and hedged big-key encryption. Our big-key encryption schemes provide privacy. What about authenticity—meaning big-key authenticated encryption (AE)? ADW [3] remark that secure signatures are not possible in the BRM. The same attack rules out big-key AE: the leakage function can simply compute and leak a valid ciphertext. We take the view that authenticity is important in normal usage, but the target of mass surveillance is violating privacy, not authenticity. Accordingly, we suggest hedged big-key encryption. Encryption would continue to provide, in the presence of leakage, the guarantees of our above-discussed schemes. Additionally, in the absence of leakage, the same scheme should provide AE, meeting a standard and strong formalization of the latter [10]. Throughout all this, the scheme must remain true to the locality efficiency requirement of the BRM. In Sect. 7 we give a simple way to turn a privacy-only big-key scheme into a hedged one while preserving locality.
Discussion. We clarify some assumptions and limitations of the BRM and our work. In the BRM, leakage (exfiltration) on the big-key is assumed to occur once, at the beginning. The leakage function cannot depend on ciphertexts. This is true in most models of leakage-resilient cryptography, but leakage after encryption has been considered [26] and it would be interesting to extend this to big-key symmetric encryption. We assume encryption code is trusted. Algorithm substitution attacks [11] consider the case of untrusted encryption code. Whether any defense against ASAs is possible in the big-key setting remains open. Finally we assume the availability of trusted randomness in the encryption process.
Our schemes view the big-key as a string over \(\{0,1\}\), so each probe draws one bit of the big-key. More generally, we could view the big-key as a vector over \(\{0,1\}^b\) where b is some block or word length, for example \(b=8\) or \(b=32\). Each probe would then result in a b-bit string. This could increase efficiency and ease of implementation of the scheme. Our current subkey prediction lemma addresses only the \(b=1\) case. One can apply [3, Lemma A.3] to get a bound for larger b, but we would expect that an extension of our subkey prediction lemma would yield better bounds. Obtaining such an extension is an interesting open question.
Related work. In Maurer’s bounded storage model [29], parties have access to a public source of randomness that transmits a sequence \(X_1,X_2,\ldots \) of high min-entropy strings. Parties are limited in storage and the goal is information theoretic security. Symmetric encryption in this setting is studied in [5, 6, 24, 28, 29, 35]. However, in this information-theoretic setting one can derive only one session key from each output of the source and the ability to encrypt multiple messages relies on the expectation of receiving a continuous stream of strings from the source. In contrast, in the BRM setting, the big-key is static, and, in the presence of leakage on it, we want to encrypt an arbitrary number of messages without changing the key.
Di Crescenzo et al. [20] and Dziembowski [23] independently introduced the bounded retrieval model (BRM), where the adversary has a bounded amount of information on the data stored by the users. See the excellent survey of Alwen et al. [4] for history and results in this setting. Here we touch on only a few examples. DLW [20] design password protocols for the setting where the “password file” stored by the server is huge but the amount of information the adversary can get about it is limited, yet the server is efficient. Dziembowski [23] considers malicious code that can exfiltrate only a limited portion of a long key before it is sanitized. Dziembowski’s aim is to achieve entity authentication and session-key distribution. Like us, the author works in the ROM. His symmetric key-derivation scheme, and its analysis, are similar to ours. Following up on this work, CDDLLW [17] provide a general paradigm for achieving intrusion-resilient authenticated key exchange (AKE), as well as a solution in the standard (as opposed to RO) model. Alwen et al. [3] design authenticated key agreement protocols in the public-key setting where the secret key is huge but the public key is small and security must be maintained in the presence of bounded leakage on the secret key. ADNSWW [2] construct public-key encryption schemes in this model.
Predating the BRM, Kelsey and Schneier consider an authentication scheme in which a user with a large-memory token authenticates itself by providing XORs of randomly specified subsets of its bits [27]. An adversary who manages to exfiltrate only some of the bits of the device will be unable to subsequently impersonate the token. Dagon, Lee, and Lipton consider the problem of securely storing a ciphertext encrypted under a weak password on a device that’s subject to adversarial attack [19]. They create a long ciphertext for a short plaintext where partial knowledge of the ciphertext will frustrate dictionary attacks.
Lu [28] and Vadhan [35] construct locally computable extractors. These yield big-key encapsulation schemes, but with the limitation that one can only obtain a small, bounded number of encapsulated keys from one big-key. (Encapsulated keys are statistically close to random, so after a few derivations, the entropy of the big-key is exhausted.) This is not sufficient for big-key symmetric encryption, where, with the big-key in place, we want to encrypt an arbitrary number of messages. \(\mathsf {XKEY}\) in this light yields a locally computable computational extractor in the ROM. (The computational element is that the number of queries to the random oracle is limited. In asymptotic terms, it is a polynomial.) It uses the random oracle as a hardcore function following [12] to be able to encapsulate an unbounded number of keys under a single big-key. Our UCE-based encapsulation scheme \(\mathsf {XKEY2}\) similarly yields a standard-model locally-computable computational extractor. One might also view \(\mathsf {XKEY}\) and \(\mathsf {XKEY2}\) as reusable locally-computable extractors following [2, 17, 23]. Reusability of extractors also aims to address deriving multiple subkeys and arose in [14, 21, 31].
A condenser [22, 32, 33] is a relaxation of an extractor: the output need only be close to a distribution of high min-entropy, rather than close to uniform. Our subkey prediction lemma can be viewed as building a BRM (or locally computable) condenser for a random source. The algorithm is simply to return the subkey J given by random probes into the big-key.
One could obtain a big-key symmetric encryption scheme by adapting the asymmetric BRM scheme of ADNSWW [2]. Our schemes are much more efficient.
2 Notation
Notation. For integers \(a \le b\) we let \([a..b] = \{a,\ldots ,b\}\). If \(\mathbf {x}\) is a vector then \(|\mathbf {x}|\) denotes its length and \(\mathbf {x}[i]\) denotes its i-th coordinate. (For example if \(\mathbf {x}= (10,00,1)\) then \(|\mathbf {x}|=3\) and \(\mathbf {x}[2]=00\).) We let \(\varepsilon \) denote the empty vector, which has length 0. If \(0 \le i \le |\mathbf {x}|\) then we let \(\mathbf {x}[1..i] = (\mathbf {x}[1],\ldots ,\mathbf {x}[i])\), this being \(\varepsilon \) when \(i=0\). We let \(S^n\) denote the set of all length n vectors over the set S and we let \(S^*\) denote the set of all finite-length vectors over the set S. Strings are treated as the special case of vectors over \(\{0,1\}\). Thus, if x is a string then |x| is its length, x[i] is its i-th bit, \(x[1..i]=x[1]...x[i]\), \(\varepsilon \) is the empty string, \(\{0,1\}^n\) is the set of n-bit strings and \(\{0,1\}^*\) the set of all strings. If \({{\varvec{K}}}\) is a k-bit string and \(\mathbf {p}\) is a \(p\)-vector over [1..k] then we let \({{\varvec{K}}}[\mathbf {p}] = {{\varvec{K}}}[\mathbf {p}[1]]{{\varvec{K}}}[\mathbf {p}[2]]\cdots {{\varvec{K}}}[\mathbf {p}[p]]\) denote the length-\(p\) string consisting of the bits of K in the positions indicated by \(\mathbf {p}\). For example, if \(K=01010101\) and \(\mathbf {p}=(1,8,2,2)\) then \({{\varvec{K}}}[\mathbf {p}]=0111\).
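In code, \({{\varvec{K}}}[\mathbf {p}]\) is plain indexed selection. A two-line Python rendering of the 1-indexed convention above:

```python
def subkey(K: str, p: list) -> str:
    """Return K[p]: the bits of K at the (1-indexed) positions listed in p."""
    return "".join(K[i - 1] for i in p)

assert subkey("01010101", [1, 8, 2, 2]) == "0111"   # the example above
```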
If \({X}\) is a finite set, we let \(x {\leftarrow {\scriptstyle \$}}\,{X}\) denote picking an element of \({X}\) uniformly at random and assigning it to x. Algorithms may be randomized unless otherwise indicated. Running time is worst case. If A is an algorithm, we let \(y \leftarrow A(x_1,\cdots ;r)\) denote running A with random coins r on inputs \(x_1,\cdots \) and assigning the output to y. We let \(y {\leftarrow {\scriptstyle \$}}\,A(x_1,\cdots )\) be the result of picking r at random and letting \(y \leftarrow A(x_1,\cdots ;r)\). We let \([A(x_1,\cdots )]\) denote the set of all possible outputs of A when invoked with inputs \(x_1,\cdots \). We denote by \(\mathrm {Func}[a,b]\) the set of all functions f from \(\{0,1\}^a\) to \(\{0,1\}^b\).
We use the code-based game-playing framework [13] (see Fig. 3 for an example). By \(\Pr [\mathbf G ]\) we denote the probability that game \(\mathbf G \) returns \(\mathsf {true}\). Uninitialized boolean variables, sets and integers are assumed initialized to \(\mathsf {false}\), the empty set and 0, respectively.
3 The Subkey Prediction Lemma
Suppose an adversary computes 1 TByte of information L about a random 2 TByte key \({{\varvec{K}}}\). Afterward, we challenge the adversary to identify the 128-bit string K whose bits are those found at 128 random locations of \({{\varvec{K}}}\). We tell the adversary those locations. How well can the adversary do at this game? This section introduces what we call the subkey prediction game to formalize this question and answer it.
The subkey-prediction game. Let \(k,\ell ,p,q\ge 1\) be integers with \(k\ge \ell \). We call these values the key length, the leakage length, the probe length, and the iteration count. Let \(\mathsf {Lk}{:\;\,}\{0,1\}^k\rightarrow \{0,1\}^{\ell }\) be a function, the leakage function. We associate to these values and an adversary \({\mathscr {A}}\) the subkey prediction game \(\mathbf {G}^{\mathrm {skp}}_{k,p,q}({\mathscr {A}},\mathsf {Lk})\) depicted in the left panel of Fig. 3. (Ignore the other game for now.) The game picks a random key \({{\varvec{K}}}\) (the big-key) of length k and computes leakage \(L \leftarrow \mathsf {Lk}({{\varvec{K}}})\). It then picks q random probes \(\mathbf {p}_1,\ldots ,\mathbf {p}_q\), each consisting of \(p\) random indices into the key \({{\varvec{K}}}\). The adversary \({\mathscr {A}}\) must guess one of the strings \({{\varvec{K}}}[\mathbf {p}_i]\) given leakage L and probe locations \(\mathbf {p}_1,\ldots ,\mathbf {p}_q\). Any one will do. It outputs a guess \(J\) and the game returns \(\mathsf {true}\) if and only if \(J\) is one of the values \({{\varvec{K}}}[\mathbf {p}_i]\). The adversary needn’t identify which subkey it has guessed. Now define
\(\mathsf {Adv}^\mathrm{skp}_{k,p,q}({\mathscr {A}},\mathsf {Lk}) \;=\; \Pr [\mathbf {G}^{\mathrm {skp}}_{k,p,q}({\mathscr {A}},\mathsf {Lk})]\;, \quad \mathsf {Adv}^\mathrm{skp}_{k,p,q}(\mathsf {Lk}) \;=\; \max _{{\mathscr {A}}}\, \mathsf {Adv}^\mathrm{skp}_{k,p,q}({\mathscr {A}},\mathsf {Lk}) \quad \text {and} \quad \mathsf {Adv}^\mathrm{skp}_{k,p,q}(\ell ) \;=\; \max _{\mathsf {Lk}}\, \mathsf {Adv}^\mathrm{skp}_{k,p,q}(\mathsf {Lk})\;.\) The first is the probability that the game returns \(\mathsf {true}\), that is, the probability that \({\mathscr {A}}\) wins the game. For the second definition the maximum is over all adversaries \({\mathscr {A}}\), regardless of running time. For the third definition the maximum is over all leakage functions \(\mathsf {Lk}{:\;\,}\{0,1\}^k\rightarrow \{0,1\}^{\ell }\) that return \(\ell \) bits. This last advantage function measures the best possible prediction probability when the leakage is restricted to \(\ell \) bits and the adversary is arbitrary. We are interested in upper bounding this last advantage as a function of \(k,\ell ,p,q\).
The idea here is that an adversary has some information about \({{\varvec{K}}}\), but it is limited to \(\mathsf {Lk}({{\varvec{K}}})\), which is \(\ell \) bits. This leaves \(k-\ell \) bits of average min-entropy in \({{\varvec{K}}}\). We are trying to extract it in a particular and efficient way, by taking a random subset of positions of \({{\varvec{K}}}\). We are asking “only” for unpredictability, and will then apply a random oracle to get random-looking bits.
From many queries to one. One simple observation is that we can reduce the case of q probes to the case of a single probe via the union bound:
Lemma 1
Let \(k,\ell ,p,q\) be integers with \(k\ge \ell \). Then \(\mathsf {Adv}^\mathrm{skp}_{k,p,q}(\ell ) \;\le \; q\cdot \mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell )\;.\)
Proof
(Lemma 1). Given adversary \({\mathscr {A}}\) we let \({\mathscr {A}}_1\) be the adversary defined as follows. On input \(L,\mathbf {p}\) it picks \(g {\leftarrow {\scriptstyle \$}}\,[1..q]\) and \(\mathbf {p}_j {\leftarrow {\scriptstyle \$}}\,[1..k]^{p}\) for \(j\in [1..q]\setminus \{g\}\). It lets \(\mathbf {p}_g \leftarrow \mathbf {p}\) and returns \({\mathscr {A}}(L,\mathbf {p}_1,\ldots ,\mathbf {p}_q)\). By a union bound we have \(\mathsf {Adv}^\mathrm{skp}_{k,p,q}({\mathscr {A}},\mathsf {Lk}) \le q\cdot \mathsf {Adv}^\mathrm{skp}_{k,p,1}({\mathscr {A}}_1,\mathsf {Lk})\) and the lemma follows.
We retain the q-probe definition because this is what we will use in applications, but Lemma 1 allows us to focus on the \(q=1\) case for the remainder of this section.
Connection to Hamming balls. Given \(k,\ell \), let us ask which leakage function \(\mathsf {Lk}{:\;\,}\{0,1\}^k \rightarrow \{0,1\}^{\ell }\) maximizes the advantage. At first glance it’s hard to imagine there’s a strategy better than the greedy one of leaking the first \(\ell \) bits of \({{\varvec{K}}}\). Specifically let \(\mathsf {Lk}_{\ell }{:\;\,}\{0,1\}^k\rightarrow \{0,1\}^{\ell }\) be defined by \(\mathsf {Lk}_{\ell }({{\varvec{K}}})={{\varvec{K}}}[1]\cdots {{\varvec{K}}}[\ell ]\), the function that returns the first \(\ell \) bits of its input. Then a natural conjecture is that \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) = \mathsf {Adv}^\mathrm{skp}_{k,p,1}(\mathsf {Lk}_{\ell })\), meaning that the subkey prediction advantage is maximized by \(\mathsf {Lk}_{\ell }\). Indeed this was our first guess.
This conjecture, however, is false. We now give a counter-example showing this. Besides indicating the subtleties in the problem, it makes a connection with error-correcting codes and Hamming balls that will underlie our eventual results and bound. Here is the counter-example. Let \(k=7\) and \(\ell =4\). Let \(p=1\). Then \(\mathsf {Adv}^\mathrm{skp}_{k,1,1}(\mathsf {Lk}_{\ell }) \;=\; \frac{\ell }{k} + \frac{k-\ell }{k}\cdot \frac{1}{2} \;=\; \frac{11}{14}\;,\) since the probe lands in the leaked region with probability \(\ell /k\), and otherwise the probed bit is uniform given the leakage, so the best guess is right with probability 1/2.
Now consider the following alternative \(\mathsf {Lk}{:\;\,}\{0,1\}^7\rightarrow \{0,1\}^4\) for which we will show that \(\mathsf {Adv}^\mathrm{skp}_{k,1,1}(\mathsf {Lk})=7/8 > 11/14\). Let \(W_1,\ldots ,W_{16} \in \{0,1\}^7\) be the codewords of the Hamming (7,4) code. (See the Wikipedia article “Hamming(7,4)” for the definition.) This code has message length \(\ell =4\) and codeword length \(k=7\), so that it describes an (injective) encoding function \(E{:\;\,}\{0,1\}^4\rightarrow \{W_1,\ldots ,W_{16}\} \subseteq \{0,1\}^7\). It has minimum distance 3 and corrects one error. Let \(B_i\) be the set of all 7-bit strings whose Hamming distance from \(W_i\) is \(\le 1\). Then \(|B_i|=8\) and the sets \(B_1,\ldots ,B_{16}\) are a partition of \(\{0,1\}^7\). The decoding function \(D{:\;\,}\{0,1\}^7\rightarrow \{0,1\}^4\), given \({{\varvec{K}}}\), finds the unique i such that \({{\varvec{K}}}\in B_i\). It then returns message \(L = E^{-1}(W_i)\). The leakage function \(\mathsf {Lk}\) is simply D; namely, given the big-key \({{\varvec{K}}}\), it returns its decoding \(D({{\varvec{K}}})\). Now the adversary \({\mathscr {A}}\) receives \((L,p_1)\) where \(p_1\in [1..7]\) is the random probe into \({{\varvec{K}}}\) chosen by the game and \(L = \mathsf {Lk}({{\varvec{K}}}) = D({{\varvec{K}}})\), and it wants to predict \({{\varvec{K}}}[p_1]\). Adversary \({\mathscr {A}}\) lets \(W = E(L)\) and returns \(W[p_1]\) as its guess. Then \(\mathsf {Adv}^\mathrm{skp}_{k,1,1}({\mathscr {A}},\mathsf {Lk})=7/8\) because if \(W=W_i\) then 7 of the 8 strings in \(B_i\) have \(W_i[p_1]\) as their \(p_1\)-th bit. So \(\mathsf {Adv}^\mathrm{skp}_{7,1,1}(4) \;\ge \; \mathsf {Adv}^\mathrm{skp}_{7,1,1}(\mathsf {Lk}) \;=\; \frac{7}{8} \;>\; \frac{11}{14} \;=\; \mathsf {Adv}^\mathrm{skp}_{7,1,1}(\mathsf {Lk}_{4})\;,\) and the greedy leakage function is not optimal.
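The counter-example is small enough to verify by exhaustive enumeration. In the sketch below (our code; the particular generator matrix is one standard choice for Hamming(7,4)), we check that the sixteen radius-1 balls partition \(\{0,1\}^7\) and that the decode-and-read adversary wins with probability exactly 7/8.

```python
from itertools import product

# Codeword layout (p1, p2, d1, p3, d2, d3, d4); each row gives one codeword
# bit as a GF(2) combination of the four message bits.
G = [(1,1,0,1), (1,0,1,1), (1,0,0,0), (0,1,1,1),
     (0,1,0,0), (0,0,1,0), (0,0,0,1)]

def encode(msg):
    return tuple(sum(g * m for g, m in zip(row, msg)) % 2 for row in G)

codewords = [encode(m) for m in product((0, 1), repeat=4)]
ball = {c: [x for x in product((0, 1), repeat=7)
            if sum(a != b for a, b in zip(c, x)) <= 1] for c in codewords}
assert len({x for B in ball.values() for x in B}) == 128  # balls partition {0,1}^7

# Decode K to the center W of its ball and guess W[p1]; average over K and p1.
hits = sum(W[i] == K[i] for W, B in ball.items() for K in B for i in range(7))
print(hits / (128 * 7))   # 0.875 = 7/8, versus 11/14 ~ 0.786 for greedy leakage
```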
Subkey-prediction bound. We now give our upper bound on \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell )\) as a function of \(k,\ell ,p\). Let \(w_{\mathrm {H}}(x)\) denote the Hamming weight of string x. Let \(\mathcal{B}_{k}(r) = \{\,x\in \{0,1\}^k :\,w_{\mathrm {H}}(x)\le r\,\}\) be the Hamming ball of radius r with center \(0^k\) and let \(B_{k}(r)=|\mathcal{B}_{k}(r)|\) be its size. Then \(B_{k}(r) \;=\; \sum _{i=0}^{r} {k \atopwithdelims ()i}\;.\)
Now define the radius \(\mathsf {rd}_{k}(N)\) as the largest integer r such that \(B_{k}(r) \le N\), and let \(G_{k,p}(N) \;=\; \frac{1}{N}\left( \,\sum _{i=0}^{r} {k \atopwithdelims ()i}\Big (1-\frac{i}{k}\Big )^{p} \;+\; \big (N-B_{k}(r)\big )\Big (1-\frac{r+1}{k}\Big )^{p}\right) \quad \text {where } r=\mathsf {rd}_{k}(N)\;.
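For small parameters these quantities are directly computable; a short sketch (ours), following the formula just displayed:

```python
from math import comb

def B(k, r):
    """B_k(r): the number of k-bit strings of Hamming weight at most r."""
    return sum(comb(k, i) for i in range(r + 1))

def rd(k, N):
    """rd_k(N): the largest r with B_k(r) <= N."""
    r = 0
    while r < k and B(k, r + 1) <= N:
        r += 1
    return r

def G(k, p, N):
    r = rd(k, N)
    ball_part = sum(comb(k, i) * (1 - i / k) ** p for i in range(r + 1))
    return (ball_part + (N - B(k, r)) * (1 - (r + 1) / k) ** p) / N

print(G(64, 8, 2 ** 32))   # bound for k=64, leakage l=32 (N = 2^{k-l}), p=8
```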
The following says this function provides an upper bound on the subkey prediction probability:
Theorem 2
(Subkey-prediction bound). Let \(k,\ell ,p\) be integers, \(k\ge \ell \). Then \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \;\le \; G_{k,p}(2^{k-\ell })\;.\)
Before we prove Theorem 2, let’s try to understand the growth rate of the bound for parameters of interest.
Estimates. The \(G_{k,p}(N)\) formula itself is somewhat intractable. To use Theorem 2 it is easier to work with the following approximation that we used in Sect. 1. The approximation is very good. For \(\lambda \in [0,1]\) we let \({w}(\lambda ) \;=\; -\lg \big (1- {H}_2^{-1}(1-\lambda )\big )\;,\)
where \(\lg \) is the logarithm to base two, \( {H}_2\) is the binary entropy function defined for \(x\in [0,1]\) by \( {H}_2(x)=-x\lg (x)-(1-x)\lg (1-x)\) and \( {H}_2^{-1}(1-\lambda )\) returns the smaller of the two values x satisfying \( {H}_2(x)=1-\lambda \). Then for \(\ell \le k\) we have \(G_{k,p}(2^{k-\ell }) \;\approx \; 2^{-{w}(\ell /k)\cdot p}\;.\)
We now justify this. Let us define
The following lemma gives both a lower bound and an upper bound on \(G^*_{k,p,r}(N)\) that are within a factor two of each other, showing that the estimate of the lemma is tight to within a small constant. The proof is in [9].
Lemma 3
Let \(N,k,r,p\ge 1\) be integers with \(r\le k\). Let \(\theta = r/k\) and \(\gamma = 2 {k \atopwithdelims ()r} / N\). Suppose: (1) \(p \le 0.27\cdot k\) and (2) \(\theta \le 0.22\). Let \(\alpha = -\lg (1-\theta ) > 0\). Then
Letting \(r^*=\mathsf {rd}_{k}(2^{k-\ell })\), from Eq. (3) we have \(G_{k,p}(2^{k-\ell }) \approx G^*_{k,p,r^*}(2^{k-\ell })\), which by Lemma 3 is \(\approx 2^{-\alpha \cdot p}\) where \(\alpha = -\lg (1-r^*/k)\). So to justify Eq. (5) we need to show that \(\alpha \approx {w}(\ell /k)\). For this we use the well-known estimate \(2^{k-\ell } \approx B_k(r^*) \approx 2^{k \cdot {H}_2(r^*/k)}\) to get \(1-\ell /k \approx {H}_2(r^*/k)\) or \(r^*/k \approx {H}_2^{-1}(1-\ell /k)\). Thus \(-\lg (1-r^*/k) \approx -\lg (1- {H}_2^{-1}(1-\ell /k)) = {w}(\ell /k)\) as desired.
Restricted subkey-prediction. Our proof of Theorem 2 will rely on an analysis of the restricted subkey-prediction game \(\mathbf {G}^{\mathrm {skp1}}_{k,p}({\mathscr {A}},{\mathcal {K}})\) shown in the right panel of Fig. 3. The game is associated to integers \(k,p\), adversary \({\mathscr {A}}\) and a set \({\mathcal {K}}\subseteq \{0,1\}^k\). Our results concerning this game are also of independent interest because we obtain bounds that are tight. In this game, the big-key \({{\varvec{K}}}\) is drawn from \({\mathcal {K}}\) rather than from \(\{0,1\}^k\), and there is no leakage. Also, there is only one probe. The rest is the same as in the subkey prediction game. Intuitively, one can think of \({\mathcal {K}}\) as being \(\mathsf {Lk}^{-1}(L)\) for some particular L received by \({\mathscr {A}}\) in game \(\mathbf {G}^{\mathrm {skp}}\), so that the new game, \(\mathbf {G}^{\mathrm {skp1}}\), effectively represents the view of \({\mathscr {A}}\) in the prior game at the point it receives leakage L. Now define
\(\mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathscr {A}},{\mathcal {K}}) \;=\; \Pr [\mathbf {G}^{\mathrm {skp1}}_{k,p}({\mathscr {A}},{\mathcal {K}})]\;, \quad \mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}}) \;=\; \max _{{\mathscr {A}}}\, \mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathscr {A}},{\mathcal {K}}) \quad \text {and} \quad \mathsf {Adv}^\mathrm{skp1}_{k,p}(N) \;=\; \max _{|{\mathcal {K}}|=N}\, \mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}})\;.\) The first advantage is the probability that \({\mathscr {A}}\) wins the game. In the second, the maximum is over all adversaries \({\mathscr {A}}\), regardless of running time. In the third, the maximum is over all sets \({\mathcal {K}}\subseteq \{0,1\}^k\) that have size \(|{\mathcal {K}}|=N\).
Main lemmas and proof of theorem. Theorem 2 is obtained via two main lemmas. Here we state them and show how they yield the theorem. We will then prove the lemmas. The first main lemma reduces the task of upper bounding the subkey prediction probability \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell )\) to the task of bounding the special subkey prediction probability \(\mathsf {Adv}^\mathrm{skp1}_{k,p}(N)\) for \(N=2^{k-\ell }\):
Lemma 4
Let \(k,\ell ,p\ge 1\) be integers with \(k\ge \ell \). Then \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \;\le \; \mathsf {Adv}^\mathrm{skp1}_{k,p}(2^{k-\ell })\;.\)
The proof of this lemma, given below, involves a definition of concavity for discrete functions and a lemma saying where such functions attain their maximum. Then a particular function \(F_{k,p}\) we define is shown to meet this definition of concavity, and Lemma 4 results. The second main lemma characterizes the special subkey prediction probability:
Lemma 5
Suppose \(1\le N\le 2^k\) and \(p\ge 1\). Then \(\mathsf {Adv}^\mathrm{skp1}_{k,p}(N) \;=\; G_{k,p}(N)\;.\)
We note that Lemma 5 is an equality, not a bound. We are able to say exactly what the special subkey prediction probability is for a given value of N. Lemma 5 is obtained by showing that for a given \(N\), the maximum of \(\mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}})\) over sets \({\mathcal {K}}\) of size \(N\) occurs for a set that is monotone and sandwiched between two adjacent Hamming balls. Monotone means that if a string is in the set, then so is any string obtained by changing some of its one bits to zeros. For monotone sets, it is quite easy to compute the optimal advantage. All this put together leads to Lemma 5.
The proof of Theorem 2 is immediate from these two lemmas, which we still need to prove. But first, with the above, we are in a position to compare with prior work.
Comparison with prior work. Lemmas related to subkey and restricted subkey prediction have been given by NZ [30, Lemma 11], Vadhan [35, Lemma 9] and ADW [3, Lemma A.3]. Briefly, the first two don’t give bounds on subkey prediction. They do give bounds on restricted subkey prediction, but they are hard to use due to hidden constants, and these bounds are inferior to Lemma 4 since the latter is tight. The elegant lemma of ADW [3, Lemma A.3], however, not only applies directly to subkey prediction but also gives a concrete bound with no hidden constants. The difference here, as quantified in Sect. 1, is that the bound is much inferior to that of Theorem 2, translating to a significant loss of practical efficiency for big-key symmetric encryption.
NZ [30, Lemma 11] considers drawing a string \({{\varvec{K}}}\) from \(\{0,1\}^k\) according to a distribution D with min-entropy \(\delta \). Like us, for a random probe \(\mathbf {p}\in [1..k]^{p}\), they then consider \({{\varvec{K}}}[\mathbf {p}]\). The lemma specifies \(\epsilon ,\delta '\) such that for all but an \(\epsilon \) fraction of the \(\mathbf {p}\)’s, the distribution of \({{\varvec{K}}}[\mathbf {p}]\) is within statistical distance \(\epsilon \) of a \(\delta '\)-source. Unlike our work there is no leakage. Their setting does not capture subkey prediction but it does capture the restricted subkey prediction game, which corresponds to the distribution D that puts uniform probability on \({\mathcal {K}}\) and probability 0 on points outside it. However, the formulas in [30, Lemma 11] make \(\epsilon \) very large for the parameters of interest to us, and also have unspecified constants. In contrast Lemma 5 gives a tight bound with no unspecified constants. The same is true of Vadhan [35, Lemma 9]. (Here the hidden constant is an exponent in the statistical distance.) The values of the constants can in principle, of course, be obtained from the proofs, but since our bound of Lemma 4 for restricted subkey prediction is tight, we would not see an improvement. (And the indication from what follows, where concrete bounds are given, is that the bounds would be substantially worse than ours anyway.)
ADW [3, Lemma A.3] can, however, be directly applied to get a bound on \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell )\). Referring to their lemma, let random variable X represent the big-key \({{\varvec{K}}}\), and let experiment \(\mathcal{E}_1\) represent the leakage. Their t corresponds to our \(p\). (We think the N in their lemma is a typo. It should be t.) Then \(\tilde{\mathrm {H}}_{\infty }(X)=k - \ell \). The lemma then implies that \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \le 2^{-c}\) as long as \( k-\ell \ge 2ck\lg (2k)/p+3c+5\). This translates to \(c \;=\; \frac{p(k-\ell -5)}{2k\lg (2k)+3p}\;. \qquad (8)\)
This is the formula used for Fig. 2.
We note that ADW [3, Lemma A.3] considers a more general setting than subkey prediction. We are saying that in this special setting, we can get much better bounds. In the big-key context, an important case where their bound applies but ours does not is when we work over blocks rather than bits. Here \({{\varvec{K}}}\) is a k-vector over \(\{0,1\}^b\) for some block length parameter b, so that each probe draws a b-bit block. This setting is more convenient for implementations of big-key symmetric encryption. Giving better bounds for this setting is an interesting open question.
Proof of Lemma 5. Recall that \(w_{\mathrm {H}}(x)\) denotes the Hamming weight of string x. For equal-length strings \(x,y \in \{0,1\}^*\), write \(x\preceq y\) if \(x[i] \le y[i]\) —that is, if \(x[i]=1\) then \(y[i]=1\)— for every \(i\in [1..|x|]\). Write \(x \prec y\) if \(x \preceq y\) and \(x \ne y\). Then \(\preceq \) is a partial order on \(\{0,1\}^k\), satisfying the required conditions, namely it is reflexive, anti-symmetric and transitive. If \(U\subseteq \{0,1\}^*\) is a finite set then we let \(w_{\mathrm {H}}(U) = \sum _{u\in U}w_{\mathrm {H}}(u)\) be the Hamming weight of U. Call a subset \({\mathcal {K}}\subseteq \{0,1\}^k\) monotone if for all \(x,y\in \{0,1\}^k\) we have that if \(x \prec y\) and \(y\in {\mathcal {K}}\) then \(x\in {\mathcal {K}}\). The following lemma says that the maximum restricted subkey prediction advantage occurs for a set \({\mathcal {K}}\) that is monotone. The proof is in [9].
Lemma 6
Suppose \(1\le N \le 2^k\) and \(p\ge 1\). Then there is a monotone \({\mathcal {K}}\subseteq \{0,1\}^k\) such that \(|{\mathcal {K}}|=N\) and \(\mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}}) \;=\; \mathsf {Adv}^\mathrm{skp1}_{k,p}(N)\;.\)
Now define the function \(g_{k,p}\), taking as input a set \({\mathcal {K}}\subseteq \{0,1\}^k\), via \(g_{k,p}({\mathcal {K}}) \;=\; \frac{1}{|{\mathcal {K}}|}\,\sum _{x\in {\mathcal {K}}}\Big (1-\frac{w_{\mathrm {H}}(x)}{k}\Big )^{p}\;.\)
A nice property of monotone sets is that we can easily compute the maximal advantage on them, via the function just defined:
Lemma 7
Suppose \(1\le N \le 2^k\) and \(p\ge 1\). Suppose \({\mathcal {K}}\subseteq \{0,1\}^k\) is a monotone set of size N. Then \(\mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}}) \;=\; g_{k,p}({\mathcal {K}})\;.\)
Proof
(Lemma 7). Since \({\mathcal {K}}\) is monotone, the best strategy for the adversary given \(\mathbf {p}\) is to simply return \(J=0^{p}\) as the guess. Letting \({\mathscr {A}}\) denote the adversary that does this, we now want to evaluate \(\mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathscr {A}},{\mathcal {K}})\). This is just the probability that if x is chosen at random from \({\mathcal {K}}\) and \(\mathbf {p}[1],\ldots ,\mathbf {p}[p]\) are chosen at random from [1..k] then \(x[\mathbf {p}[j]]=0\) for all \(j \in [1..p]\). This probability is \(g_{k,p}({\mathcal {K}})\).
We will now further characterize the sets which attain the maximum. We say that \({\mathcal {K}}\subseteq \{0,1\}^k\) is sandwiched between Hamming balls if there is an r such that \(\mathcal{B}_{k}(r)\subseteq {\mathcal {K}}\subset \mathcal{B}_{k}(r+1)\). Note in this case it must be that \(r=\mathsf {rd}_{k}(N)\) where \(N\) is the size of \({\mathcal {K}}\). The proof of the following, which exploits Lemma 6, is in [9].
Lemma 8
Suppose \(1\le N\le 2^k\) and \(p\ge 1\). Let \(r = \mathsf {rd}_{k}(N)\). Then there is a monotone, size \(N\) set \({\mathcal {K}}\) such that \(\mathcal{B}_{k}(r)\subseteq {\mathcal {K}}\subset \mathcal{B}_{k}(r+1)\) and \(\mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}}) \;=\; \mathsf {Adv}^\mathrm{skp1}_{k,p}(N)\;.\)
We are now in a position to prove our second main lemma.
Proof
(Lemma 5). Let \(r = \mathsf {rd}_{k}(N)\). By Lemma 8 there is a monotone, size \(N\) set \({\mathcal {K}}\) such that \(\mathcal{B}_{k}(r)\subseteq {\mathcal {K}}\subset \mathcal{B}_{k}(r+1)\) and \(\mathsf {Adv}^\mathrm{skp1}_{k,p}(N) = \mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}})\). But \(\mathsf {Adv}^\mathrm{skp1}_{k,p}({\mathcal {K}}) \;=\; g_{k,p}({\mathcal {K}}) \;=\; G_{k,p}(N)\;,\)
where the first equality is by Lemma 7 and the second is because \(\mathcal{B}_{k}(r)\subseteq {\mathcal {K}}\subset \mathcal{B}_{k}(r+1)\).
Proof of Lemma 4. We now prove the first main lemma. We begin with a general result about the maximization of discrete concave functions.
The standard definition of concavity of functions applies to continuous functions. Here we provide a definition for functions defined on a discrete domain that allows us to prove a lemma we will use later. Proceeding, suppose \(F {:\;\,}{{\mathbb Z}}_M \rightarrow {{\mathbb R}}\). We say that F is concave if \(F(a+1)-F(a) \le F(b+1)-F(b)\) for all \(a,b\in {{\mathbb Z}}_{M-1}\) satisfying \(a \ge b\). Now suppose \(t,m\ge 1\) are integers with \(1\le m\le t\). Then we let \(S(M,m,t) \;=\; \{\,(x_1,\ldots ,x_m)\in {{\mathbb Z}}_M^m \;:\; x_1+\cdots +x_m = t\,\}\;.\)
Define \(F^m{:\;\,}{{\mathbb Z}}_M^m\rightarrow {{\mathbb R}}\) by \(F^m(x_1,\ldots ,x_m) = F(x_1)+\cdots +F(x_m)\). The proof of the following is in [9].
Lemma 9
Suppose \(F{:\;\,}{{\mathbb Z}}_M \rightarrow {{\mathbb R}}\) is concave. Suppose \(1 \le m \le t\) are integers such that m divides t and \(t/m \in {{\mathbb Z}}_M\). Then \(\max \,\{\, F^m(x_1,\ldots ,x_m) \;:\; (x_1,\ldots ,x_m)\in S(M,m,t)\,\} \;=\; m\cdot F(t/m)\;.\)
That is, the maximum of \(F^m\) over S(M, m, t) is attained when all inputs have the same value \(t/m \in {{\mathbb Z}}_M\), and is thus equal to \(m\cdot F(t/m)\). To apply this in our setting, we introduce the function \(F_{k,p}{:\;\,}{{\mathbb Z}}_{2^k+1}\rightarrow {{\mathbb R}}\) defined by \(F_{k,p}(N) \;=\; \frac{N}{2^k}\cdot \mathsf {Adv}^\mathrm{skp1}_{k,p}(N) \qquad (10)\) for \(N\ge 1\), with \(F_{k,p}(0)=0\).
Rewriting the claim of Lemma 4 in terms of the function \(F_{k,p}\), our aim is to show that \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \;\le \;2^{\ell }\cdot F_{k,p}(2^{k-\ell })\). The following says that the function \(F_{k,p}\) meets our definition of concavity. The proof is in [9].
Lemma 10
Let \(k,p\ge 1\) be integers. Then the function \(F_{k,p}\) is concave.
We now show how to obtain our first main lemma.
Proof
(Lemma 4). Let \(M=2^k+1\), \(m=2^{\ell }\) and \(t=2^k\). Consider any leakage function \(\mathsf {Lk}{:\;\,}\{0,1\}^k\rightarrow \{0,1\}^{\ell }\), and let \(N_1,\ldots ,N_m\) be the sizes of its \(m=2^{\ell }\) preimage sets \(\mathsf {Lk}^{-1}(L)\), so that \((N_1,\ldots ,N_m)\in S(M,m,t)\). Conditioned on receiving leakage L, the big-key is uniformly distributed over a set of size \(N_L\), so \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\mathsf {Lk}) \le \sum _{L} (N_L/2^k)\cdot \mathsf {Adv}^\mathrm{skp1}_{k,p}(N_L) = F_{k,p}^m(N_1,\ldots ,N_m)\). Now we have \(\mathsf {Adv}^\mathrm{skp}_{k,p,1}(\ell ) \;\le \; \max \,\{\, F_{k,p}^m(N_1,\ldots ,N_m) \;:\; (N_1,\ldots ,N_m)\in S(M,m,t)\,\} \;\overset{(11)}{=}\; m\cdot F_{k,p}(t/m) \;\overset{(12)}{=}\; \mathsf {Adv}^\mathrm{skp1}_{k,p}(2^{k-\ell })\;.\)
Equation (11) is justified as follows. Lemma 10 says that \(F_{k,p}\) is concave. So we can apply Lemma 9, and here \(t/m = 2^{k-\ell }\). Equation (12) is by Eq. (10) and because \(m=2^{\ell }\).
4 Encapsulating a Key
In this section we introduce big-key encapsulation. A scheme for this aim lets a user encapsulate a random, conventional-length key \(K\) using a big-key \({{\varvec{K}}}\). We speak of “encapsulation” instead of “encryption” because the user never selects a value \(K\) to encrypt: rather, a random \(R\) is chosen and this value, together with the big-key \({{\varvec{K}}}\), determines a derived key \(K= \mathsf {KEY}({{\varvec{K}}},R)\). A user can transmit \(R\) to a party that knows \({{\varvec{K}}}\) and in this way name an induced key \(K\). While the aim is similar to a KEM (key encapsulation mechanism) [18], there are also many differences, which we will later discuss.
Definitions. A big-key encapsulation algorithm is a deterministic algorithm \(\mathsf {KEY}\) that, given strings \({{\varvec{K}}}\in \{0,1\}^k\) and \(R\in \{0,1\}^r\), returns a string \(K= \mathsf {KEY}({{\varvec{K}}},R)\in \{0,1\}^\kappa \). We call \({{\varvec{K}}}\), \(R\), and \(K\) the big-key, selector, and derived key, respectively. Their lengths, being part of the signature of \(\mathsf {KEY}\), are numbers associated to it. Since we will be working in the RO model, we allow the encapsulation algorithm to depend on an oracle \(\textsc {RO}\). We write such a function as a superscript, \(K= \mathsf {KEY}^\textsc {RO}({{\varvec{K}}},R)\), if we want to emphasize its presence.
The security requirement for an encapsulation algorithm captures the idea that a derived key \(K\) should be indistinguishable from a uniform random string even when accompanied by \(R\) and some bounded amount of leakage from the big-key \({{\varvec{K}}}\). This is formalized via the game \(\mathbf {G}^{\mathrm {key}}_{\mathsf {KEY}}({\mathscr {A}})\) on the left of Fig. 4. The game is associated to encapsulation algorithm \(\mathsf {KEY}\) and adversary \({\mathscr {A}}\). The game is in the ROM, oracle \(\textsc {RO}\) taking (x, l) and returning a random l-bit string. Not only do algorithms and the adversary have access to \(\textsc {RO}\), but, importantly, so does the leakage function \(\mathsf {LK}^\textsc {RO}{:\;\,}\{0,1\}^k\rightarrow \{0,1\}^{\ell }\), now called an oracle leakage function to emphasize this.
In its first stage, the adversary specifies the leakage function it wants. It then makes a sequence of \(\textsc {Derive}\) calls, each providing an \((R,K)\) pair that is either real—the derived key was determined by running \(\mathsf {KEY}\) with a random selector—or random—the derived key is uniformly random. Which of these possibilities occurs depends on a bit b chosen at the beginning of the game. The adversary must guess that bit. We let \(\mathsf {Adv}^\mathrm{key}_{\mathsf {KEY}}({\mathscr {A}})=2\Pr [\mathbf {G}^{\mathrm {key}}_{\mathsf {KEY}}({\mathscr {A}})]-1\) be its advantage in doing so.
Discussion. A big-key encapsulation algorithm is in some ways similar to a conventional key encapsulation mechanism [18]. But there are many differences. First, we are in the symmetric setting instead of the asymmetric setting. Second, we are considering security under leakage, so the leakage function, chosen by the adversary, comes into the picture. Finally, we have chosen a syntax under which we do not have a separate decapsulation algorithm, preferring to surface the coins \(R\) across the encapsulation algorithm’s interface.
One may ask why we let the adversary encapsulate an arbitrary number of conventional keys, using its \(\textsc {Derive}\) oracle q times; wouldn’t a hybrid argument show that providing a single derived key is equivalent, up to a factor of q? In fact, such a hybrid argument fails because the single-query adversary has no means to simulate the real derived keys. For this reason, it is important to consider multiple keys.
We explain the importance of giving the leakage function access to \(\textsc {RO}\). Otherwise, the scheme \(\mathsf {KEY}^{\textsc {RO}}({{\varvec{K}}},R) = \textsc {RO}({{\varvec{K}}},\kappa )\) is secure. Yet, in practice, when \(\textsc {RO}\) is instantiated with a concrete hash function \(H{:\;\,}\{0,1\}^*\rightarrow \{0,1\}^{\kappa }\), this scheme is certainly not secure because the leakage function can return \(H({{\varvec{K}}})\). (We note that in this example the scheme does not make efficient use of \({{\varvec{K}}}\) as we want, but this is an orthogonal issue to security.) Once the leakage function has access to \(\textsc {RO}\), it too can return \(\textsc {RO}({{\varvec{K}}},\kappa )\), preventing this and other similar schemes from being deemed secure.
Encapsulation scheme XKEY. Let \(k,\kappa ,r\ge 1\) be integers, and, for convenience, assume k is a power of two. Given the results of Sect. 3, a natural big-key encapsulation algorithm is as follows. On input the big-key \({{\varvec{K}}}\in \{0,1\}^k\), algorithm \(\mathsf {KEY}\) picks random probes \(\mathbf {p}\in [1..k]^p\) and computes the induced subkey \(J = {{\varvec{K}}}[\mathbf {p}]\). While this subkey is unpredictable, up to the bounds we have seen, it is not indistinguishable from random bits, so cannot, by itself, function as the derived key \(K\). So, instead, the algorithm would let the derived key be \(K\leftarrow \textsc {RO}(J,\kappa )\) where \(\kappa \) is the desired length for the derived key.
While the above might sound reasonable, there are two problems with the scheme—one with regard to security and the other with regard to efficiency. We explain these and then present a scheme that resolves them.
The security problem is that we can only show security for the scheme just described if the leakage function does not have access to the oracle \(\textsc {RO}\). But we have argued that it must have such access, and in that case the proof breaks down and it is not clear if the scheme is secure. A simple remedy is to have the scheme pick a random string and include it in the scope of the RO used to determine the subkey. The proof can then exploit the fact that the leakage function can’t depend on this (not yet chosen) value.
This still leaves the efficiency issue, which is that the probe \(\mathbf {p}\) is quite long, a total of \(p\lg (k)\) bits. The number of components \(p\) of \(\mathbf {p}\) is fairly large and grows with the fraction of bits potentially leaked; it will typically be 100–1000. If \({{\varvec{K}}}\) is 1 TByte then \(k \approx 2^{43}\) and we’re using up a few kilobytes to communicate the selector \(\mathbf {p}\), which is unpleasant.
Since we’re working in the ROM, an easy solution is to obtain \(\mathbf {p}\) by applying \(\textsc {RO}\) to a short seed. Conveniently, the same random choices needed for this remedy can be used for the security problem as well. The resulting encapsulation algorithm \(\mathsf {XKEY}_{k,\kappa ,p,r}{:\;\,}\{0,1\}^k\times \{0,1\}^r\rightarrow \{0,1\}^\kappa \) is shown in Fig. 5 and described below.
The XKEY encapsulation algorithm picks or is provided a random selector \(R\) of length \(r\). It picks the ith probe into \({{\varvec{K}}}\) not directly at random, but by applying the random oracle \(\textsc {RO}\) to a string encoding \(R\), i, and 0. In defining \(\mathbf {p}[i]\) we interpret a \(\lg (k)\)-bit string as an integer in [1..k] in the natural way. The encapsulation algorithm then lets the subkey J be the bits of \({{\varvec{K}}}\) at the positions indicated by the probes. The derived key \(K\) is obtained by applying the RO to a string encoding J, \(R\), and 1. The third component in each encoding \(\langle \cdot ,\cdot ,\cdot \rangle \) provides domain separation.
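A direct Python rendering of XKEY as just described (a sketch: SHA-256 stands in for \(\textsc {RO}\), and a length-prefixed tuple plays the role of the encoding \(\langle \cdot ,\cdot ,\cdot \rangle \); both substitutions are ours). Sender and receiver each run it on the transmitted selector \(R\).

```python
import hashlib

def RO(x: bytes, nbits: int) -> bytes:
    """Random-oracle stand-in: expand SHA-256 to nbits bits."""
    out, ctr = b"", 0
    while 8 * len(out) < nbits:
        out += hashlib.sha256(ctr.to_bytes(4, "big") + x).digest()
        ctr += 1
    return out[:(nbits + 7) // 8]

def enc3(a, b, c) -> bytes:
    """An injective encoding <a,b,c>: length-prefix each component."""
    parts = [v if isinstance(v, bytes) else v.to_bytes(8, "big") for v in (a, b, c)]
    return b"".join(len(v).to_bytes(4, "big") + v for v in parts)

def XKEY(bigkey: bytes, R: bytes, p: int = 500, kappa: int = 128) -> bytes:
    k = 8 * len(bigkey)            # key length in bits; a power of two
    J = []
    for i in range(1, p + 1):
        # probe i from RO(<R, i, 0>); mod k keeps exactly lg(k) bits since k = 2^n
        pos = int.from_bytes(RO(enc3(R, i, 0), 64), "big") % k
        J.append((bigkey[pos // 8] >> (pos % 8)) & 1)
    return RO(enc3(bytes(J), R, 1), kappa)   # derived key from RO(<J, R, 1>)
```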
Theorem 12 will establish security of XKEY, providing concrete bounds. Those bounds indicate that \(r=|R|\) can be chosen to be quite small, like 128–256 bits. Since only this value is transmitted when XKEY is used, bandwidth overhead is small and independent of \(p\).
Enhanced subkey-prediction game. Towards the proof of Theorem 12 it is convenient to consider an enhanced version of the subkey-prediction problem. The security goal reflects two changes over our earlier subkey-prediction game \(\mathbf {G}^{\mathrm {skp}}\). First, the leakage function is not fixed but dynamically chosen by an adversary. Second, that choice may depend on the RO, which the leakage function itself may invoke. These issues, particularly the second, lead to design choices embedded in \(\mathsf {XKEY}\) and make proofs more challenging. We must now revisit subkey predictability, formulate it in the extended setting just discussed, and then show that security in this new setting is implied by security in the basic one. Afterwards, we will be in a position to prove security for XKEY.
As in the basic subkey-prediction problem, let \(k,\ell ,p,q \ge 1\) be integers with \(\ell \le k\). Consider game \(\mathbf {G}^{\mathrm {skp2}}_{k,p,q}({\mathscr {B}})\) defined in Fig. 6. In its first stage, the adversary returns an oracle leakage function \(\mathsf {LK}\). We say that \({\mathscr {B}}\) leaks (or exfiltrates) \(\ell \) bits if \(\mathsf {LK}{:\;\,}\{0,1\}^k\rightarrow \{0,1\}^{\ell }\). Adversary \({\mathscr {B}}\) also returns state \(\sigma \) representing information that could be known to \(\mathsf {LK}\) and will be passed to the adversary’s second stage. The game picks the big-key \({{\varvec{K}}}\), computes the leakage as \(\mathsf {LK}^{\textsc {RO}}({{\varvec{K}}})\) and picks probe vectors \(\mathbf {p}_1,\ldots ,\mathbf {p}_q\). These implicitly determine subkeys \(K_1,\ldots ,K_q\) where \(K_i={{\varvec{K}}}[\mathbf {p}_i]\). In its second stage, the adversary \({\mathscr {B}}\) gets the leakage L and probe vectors as before, but now additionally gets \(\sigma \). Its task is to guess one of the subkeys, and the game returns \(\mathsf {true}\) if it does this correctly. We let \(\mathsf {Adv}^\mathrm{skp2}_{k,p,q}({\mathscr {B}}) \;=\;\Pr [\mathbf {G}^{\mathrm {skp2}}_{k,p,q}({\mathscr {B}})]\) be its advantage.
Lemma 11
Let \(\ell ,k,p,q \ge 1\) be integers with \(\ell \le k\). Let \({\mathscr {B}}\) be an adversary leaking \(\ell \) bits. Then \(\mathsf {Adv}^\mathrm{skp2}_{k,p,q}({\mathscr {B}}) \;\le \; \mathsf {Adv}^\mathrm{skp}_{k,p,q}(\ell )\;. \qquad (13)\)
The proof uses a fairly standard “coin-fixing” argument in which a predictor adversary uses the “best” choice of random oracle and coins for \({\mathscr {B}}\). The details follow.
Proof
(Lemma 11). Let \(\mathbf {H}\) denote the set of all functions H such that \(H(\cdot ,l) {:\;\,}\{0,1\}^* \rightarrow \{0,1\}^{l}\) for all \(l \in {{\mathbb N}} \). A random oracle is a function drawn at random from \(\mathbf {H}\). Regard \({\mathscr {B}},\ell ,k,p,q\) as fixed and define the function \(g{:\;\,}\mathbf {H}\times \{0,1\}^* \rightarrow [0,1]\) as \(g(H,\omega ) = \Pr [\mathbf G (H,\omega )]\), where game \(\mathbf G (H,\omega )\) is game \(\mathbf {G}^{\mathrm {skp2}}_{k,p,q}({\mathscr {B}})\) with the random oracle of \({\mathscr {B}}\) fixed to H and its coins fixed to \(\omega \), so that the probability is only over the choices made in the game itself. Let \((H^*,\omega ^*) \in \mathbf {H}\times \{0,1\}^*\) be such that \(g(H^*,\omega ^*) \ge g(H,\omega )\) for all \((H,\omega ) \in \mathbf {H}\times \{0,1\}^*\), and let \((\mathsf {LK},\sigma ^*)\leftarrow {\mathscr {B}}^{H^*}(\omega ^*)\). Let \(\mathsf {Lk}= \mathsf {LK}^{H^*}\). This is a basic (not oracle) leakage function, \(\mathsf {Lk}{:\;\,}\{0,1\}^k\rightarrow \{0,1\}^{\ell }\). Define the predictor adversary \(\mathcal{P}\) that, on input \((L,\mathbf {p}_1,\ldots ,\mathbf {p}_q)\), runs the second stage of \({\mathscr {B}}\) on input \((L,\mathbf {p}_1,\ldots ,\mathbf {p}_q,\sigma ^*)\) with coins \(\omega ^*\), answering its \(\textsc {RO}\) queries via \(H^*\), and returns its output. Then
we have \(\mathsf {Adv}^\mathrm{skp2}_{k,p,q}({\mathscr {B}}) = \mathbf {E}_{H,\omega }[\,g(H,\omega )\,] \le g(H^*,\omega ^*) = \mathsf {Adv}^\mathrm{skp}_{k,p,q}(\mathcal{P}) \le \mathsf {Adv}^\mathrm{skp}_{k,p,q}(\ell )\), which yields Eq. (13).
Theorem 12
Let \(k,\kappa ,p,r\ge 1\) be integers with k a power of two. Let \(\mathsf {KEY}= \mathsf {XKEY}_{k,\kappa ,p,r}\) be the big-key encapsulation scheme associated to them as per Fig. 5. Let \({\mathscr {A}}\) be an adversary making at most q queries to its \(\textsc {Derive}\) oracle and leaking \(\ell \) bits. Assume the number of \(\textsc {RO}\) queries made by \({\mathscr {A}}\) in its first stage, plus the number made by the oracle leakage function \(\mathsf {LK}\) that it outputs in this stage, is at most \(q_1\), and the number of \(\textsc {RO}\) queries made by \({\mathscr {A}}\) in its second stage is at most \(q_2\). Then
We note that the bound of Eq. (14) does not depend on the length \(\kappa \) of the output keys.
Proof
(Theorem 12 ). Consider the games of Fig. 7. Their \(\textsc {RO}\) procedures are in Fig. 8. Game \(\mathbf G _0\) includes the boxed code and is equivalent to the case of game \(\mathbf {G}^{\mathrm {key}}_{\mathsf {KEY}}({\mathscr {A}})\) in which \(b=1\). Game \(\mathbf G _5\) excludes the boxed code and mimics the \(b=0\) case of game \(\mathbf {G}^{\mathrm {key}}_{\mathsf {KEY}}({\mathscr {A}})\), except that the probability that the former returns \(\mathsf {true}\) equals the probability that the latter returns \(\mathsf {false}\). Let \(p_i = \Pr [\mathbf G _i]\) for \(0\le i \le 5\). Then
Equation (15) used the fact that \(p_1=p_2\) and \(p_3=p_4\). Equation (16) used the Fundamental Lemma of Game Playing [13]. Now
Also
A first estimate of the above bound is that each \(\textsc {RO}\) query can set the first instance of \(\mathsf {bad}\) with probability \(pq/2^{r}\) and the second with probability \(q/2^{r}\), for a bound of \(q_1(p+1)q/2^{r}\). But these events are mutually exclusive, because the queries include the index i and the domain-separation bit, whence Eq. (18). We now present an adversary \({\mathscr {B}}\) such that
The theorem follows from Lemma 11. Adversary \({\mathscr {B}}\) is shown in Fig. 9. In the first stage, \({\mathscr {B}}\) simulates \({\mathscr {A}}\)’s \(\textsc {RO}\) directly via its own \(\textsc {RO}\), keeping track of values in a table T that is passed to the next stage. In the second stage it does the same, maps selectors to probes as per game \(\mathbf G _5\), and additionally saves its subkey guesses in the set S. Equation (19) follows because \(|S|\le q_2\).
5 Big-Key Symmetric Encryption
Here we define and achieve big-key symmetric encryption.
Definitions. A symmetric encryption scheme \(\mathsf {SE}\) specifies a key length \(\mathsf {SE}.\mathsf {kl}\in {{\mathbb N}} \), an encryption algorithm \(\mathsf {SE}.\mathsf {Enc}\) that given a key K and message M returns a ciphertext, and a deterministic decryption algorithm \(\mathsf {SE}.\mathsf {Dec}\) such that for all K, M we have \(\mathsf {SE}.\mathsf {Dec}(K,\mathsf {SE}.\mathsf {Enc}(K,M))=M\) with probability one, where the probability is over the coins of \(\mathsf {SE}.\mathsf {Enc}\). Privacy is formalized by left-or-right indistinguishability [7] via game \(\mathbf IND _{\mathsf {SE}}({\mathscr {A}})\) on the right of Fig. 10, associated to \(\mathsf {SE}\) and adversary \({\mathscr {A}}\). We let \(\mathsf {Adv}^\mathrm{ind}_{\mathsf {SE}}({\mathscr {A}})=2\Pr [\mathbf IND _{\mathsf {SE}}({\mathscr {A}})]-1\) be the advantage of \({\mathscr {A}}\) in violating privacy of \(\mathsf {SE}\).
Syntactically, a big-key symmetric encryption scheme \({\mathbf {\mathsf{{SE}}}}\) continues to be a symmetric encryption scheme as above, specifying \({\mathbf {\mathsf{{SE}}}}.\mathsf {kl},{\mathbf {\mathsf{{SE}}}}.\mathsf {Enc}\) and \({\mathbf {\mathsf{{SE}}}}.\mathsf {Dec}\). Two things make it special. First, privacy is measured under leakage on the key. Second, the encryption and decryption algorithms have the “locality” efficiency attribute, meaning that in any one execution they access only a small part of the key. Privacy is formalized via game \(\mathbf LIND _{{\mathbf {\mathsf{{SE}}}}}({\mathscr {A}})\) on the left of Fig. 10, associated to \({\mathbf {\mathsf{{SE}}}}\) and adversary \({\mathscr {A}}\). In its first stage, the adversary, given access to \(\textsc {RO}\), specifies an oracle leakage function \(\mathsf {LK}{:\;\,}\{0,1\}^{{\mathbf {\mathsf{{SE}}}}.\mathsf {kl}}\rightarrow \{0,1\}^{\ell }\) together with state information \(\sigma \). We refer to \(\ell \) as the number of bits leaked by \({\mathscr {A}}\). In its second stage, \({\mathscr {A}}\) gets the leakage \(L \leftarrow \mathsf {LK}^{\textsc {RO}}({{\varvec{K}}})\), the state \(\sigma \), and access to the challenge encryption oracle \(\textsc {Enc}\), while continuing to have access to \(\textsc {RO}\). To win it must guess the challenge bit \(b\). We let \(\mathsf {Adv}^\mathrm{lind}_{{\mathbf {\mathsf{{SE}}}}}({\mathscr {A}})= 2 \Pr [ \mathbf LIND _{{\mathbf {\mathsf{{SE}}}}}({\mathscr {A}}) ]-1\) be the advantage of \({\mathscr {A}}\) in violating privacy of \({\mathbf {\mathsf{{SE}}}}\) under leakage. Locality is not formalized but will be visible in our specific constructs. Giving the leakage function access to \(\textsc {RO}\) is important for the same reason discussed in Sect. 4 for key encapsulation: otherwise, there are trivial ROM schemes that are secure in the model but clearly insecure once the RO is instantiated.
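In code form, game \(\mathbf LIND \) can be sketched as follows. The interfaces SE_big.keygen, SE_big.enc and the adversary stages A1, A2 are hypothetical placeholders used only for illustration.

```python
import secrets

# Illustrative sketch of game LIND for a big-key scheme SE_big.
def lind_game(SE_big, A1, A2, RO):
    b = secrets.randbits(1)                  # challenge bit
    bigkey = SE_big.keygen()
    LK, sigma = A1(RO)                       # stage 1: pick oracle leakage function
    L = LK(RO, bigkey)                       # the leakage function may query RO
    def Enc(M0, M1):                         # left-or-right challenge oracle
        assert len(M0) == len(M1)
        return SE_big.enc(bigkey, (M0, M1)[b], RO)
    b_guess = A2(RO, L, sigma, Enc)          # stage 2: guess the challenge bit
    return b_guess == b                      # adversary wins iff guess is right
```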
Big-key encryption scheme. Let \(\mathsf {SE}\) be a symmetric encryption scheme. Let \(\mathsf {KEY}\) be a key-encapsulation algorithm with big-key length \(k\), randomness length \(r\) and derived key length \(\kappa = \mathsf {SE}.\mathsf {kl}\) (keys output by \(\mathsf {KEY}\) are suitable for use with \(\mathsf {SE}\)). We associate to \(\mathsf {SE},\mathsf {KEY}\) the big-key symmetric encryption scheme \({\mathbf {\mathsf{{SE}}}}= \mathsf {BKSE}[\mathsf {SE},\mathsf {KEY}]\) defined as follows. The key length is \({\mathbf {\mathsf{{SE}}}}.\mathsf {kl}= {k}\) (the key for \({\mathbf {\mathsf{{SE}}}}\) is the same as that for \(\mathsf {KEY}\)) and the encryption and decryption algorithms are as follows:
Encryption applies the key encapsulation algorithm to the big-key to get a derived key K. The message is encrypted under \(\mathsf {SE}\) using key K. The locality of this scheme is exactly that of \(\mathsf {KEY}\), since accesses to the big-key are done only by \(\mathsf {KEY}\). The big-key aspect is similarly inherited from \(\mathsf {KEY}\). The following says that our big-key scheme achieves privacy under leakage on the key, assuming standard privacy of the base scheme \(\mathsf {SE}\) and the lror-security of \(\mathsf {KEY}\). The proof is in [9].
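The transform itself is short enough to sketch in code. Here KEY.derive and SE.enc/SE.dec are hypothetical interfaces standing in for the algorithms of Fig. 5 and the base scheme; the selector length r is a parameter.

```python
import secrets

# Illustrative sketch of BKSE[SE, KEY]: encapsulate, then encrypt under the
# derived key. Only KEY touches the big-key, so locality is inherited from KEY.
def bkse_enc(bigkey, M, SE, KEY, RO, r=128):
    R = secrets.token_bytes(r // 8)          # fresh selector for this encryption
    K = KEY.derive(bigkey, R, RO)            # derived key of length SE.kl
    return (R, SE.enc(K, M))                 # selector travels with the ciphertext

def bkse_dec(bigkey, ciphertext, SE, KEY, RO):
    R, C = ciphertext
    K = KEY.derive(bigkey, R, RO)            # recompute the same derived key
    return SE.dec(K, C)
```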
Theorem 13
Let \(\mathsf {SE}\) be a symmetric encryption scheme. Let \(\mathsf {KEY}\) be a key encapsulation algorithm with big-key length \(k\), randomness length \(r\) and derived key length \(\kappa = \mathsf {SE}.\mathsf {kl}\). Let \({\mathbf {\mathsf{{SE}}}}= \mathsf {BKSE}[\mathsf {SE},\mathsf {KEY}]\) be the big-key symmetric encryption scheme associated to them as above. Let \({\mathscr {A}}\) be an adversary making at most q queries to its \(\textsc {Enc}\) oracle and leaking \(\ell \) bits. Then the proof, given in [9], specifies an adversary \({\mathscr {A}}_1\) and an adversary \({\mathscr {A}}_2\) such that
Adversary \({\mathscr {A}}_1\) makes only one query to its \(\textsc {Enc}\) oracle and its running time is about that of \({\mathscr {A}}\). Adversary \({\mathscr {A}}_2\) makes q queries to its \(\textsc {Derive}\) oracle and has running time about that of \({\mathscr {A}}\). Its first stage is the same as that of \({\mathscr {A}}\), so it also leaks \(\ell \) bits. In its second stage it makes the same number of \(\textsc {RO}\) queries as \({\mathscr {A}}\) does in its second stage.
Since \({\mathscr {A}}_1\) makes only one query to its \(\textsc {Enc}\) oracle, a one-time encryption scheme suffices to instantiate \(\mathsf {SE}\).
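For concreteness, a one-time-secure \(\mathsf {SE}\) can be as simple as a hash-based stream cipher. The following Python sketch is an illustrative instantiation, under the heuristic assumption that SHA-256 in counter mode behaves as a pseudorandom generator; it is not a construction from this paper, and keys must never be reused.

```python
import hashlib

# Illustrative one-time symmetric encryption: XOR with a SHA-256 keystream.
def _stream(K: bytes, n: int) -> bytes:
    out, ctr = b'', 0
    while len(out) < n:
        out += hashlib.sha256(K + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
    return out[:n]

def se_enc(K: bytes, M: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(M, _stream(K, len(M))))

se_dec = se_enc  # XOR cipher: decryption is the same operation
```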
6 Standard-Model Big-Key Encryption
In this section we give a standard-model variant of our scheme whose security relies on UCE. The scheme is as efficient as our ROM one, but ciphertexts are longer.
Definitions. We recall the UCE framework following [8]. Let \(\mathsf {H}{:\;\,}\{0,1\}^{\mathsf {H}.\mathsf {kl}}\times \{0,1\}^{\mathsf {H}.\mathsf {il}} \rightarrow \{0,1\}^{\mathsf {H}.\mathsf {ol}}\) be a family of functions taking an \(\mathsf {H}.\mathsf {kl}\)-bit key \(I\) and an \(\mathsf {H}.\mathsf {il}\)-bit input x to an \(\mathsf {H}.\mathsf {ol}\)-bit output \(\mathsf {H}(I,x)\). Game \(\mathbf {G}^{\mathrm {uce}}\) of Fig. 11 is associated to \(\mathsf {H}\), an adversary \(\mathcal{S}\) called the source, an adversary \(\mathcal{D}\) called the distinguisher, and a number q of keys. (We are using the multi-key version of UCE from [8].) Here \(\mathcal{S}\) does not get the keys. It produces leakage M that is passed to \(\mathcal{D}\), who does get the keys. We let \(\mathsf {Adv}^\mathrm{uce}_{\mathsf {H},q}(\mathcal{S},\mathcal{D})=2 \Pr [\mathbf {G}^{\mathrm {uce}}_{\mathsf {H},q}(\mathcal{S},\mathcal{D})]-1\). We can only expect this to be small for sources restricted in some way. We require statistical unpredictability [8, 15] of the source’s oracle queries. If \(\mathcal{P}\) is an adversary called the predictor, let \(\mathsf {Adv}^\mathrm{pred}_{\mathcal{S}}(\mathcal{P}) = \Pr [\mathbf {G}^{\mathrm {sp}}_{\mathcal{S}}(\mathcal{P})]\), where the game is again in Fig. 11, and let \(\mathsf {Adv}^\mathrm{pred}_{\mathcal{S}} = \max _{\mathcal{P}} \mathsf {Adv}^\mathrm{pred}_{\mathcal{S}}(\mathcal{P})\). The predictor here is unbounded (corresponding to statistical unpredictability), so the maximum is over all predictors. The assumption, informally, is that if \(\mathsf {Adv}^\mathrm{pred}_{\mathcal{S}}\) is small then so is \(\mathsf {Adv}^\mathrm{uce}_{\mathsf {H},q}(\mathcal{S},\mathcal{D})\) for all efficient \(\mathcal{S},\mathcal{D}\). An important element of our results is thus to bound \(\mathsf {Adv}^\mathrm{pred}_{\mathcal{S}}\) for the source \(\mathcal{S}\) constructed by the reduction.
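The UCE game itself is easy to render in code. The source S, distinguisher D and family H below are hypothetical callables; the sketch follows the multi-key game of Fig. 11.

```python
import secrets

# Illustrative sketch of the multi-key UCE game G^{uce}_{H,q}.
def uce_game(H, S, D, q, klen, olen):
    b = secrets.randbits(1)
    keys = [secrets.token_bytes(klen) for _ in range(q)]   # hidden from S
    table = {}
    def HASH(i, x):                    # answers under the i-th key
        if b == 1:
            return H(keys[i], x)       # real: apply the family H
        if (i, x) not in table:        # random: consistent random strings
            table[(i, x)] = secrets.token_bytes(olen)
        return table[(i, x)]
    M = S(HASH)                        # source produces leakage M, never sees keys
    return D(keys, M) == b             # distinguisher sees the keys and M
```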
XKEY2. The encapsulation algorithm is specified in Fig. 12. If \(\kappa \) is the desired length of the derived key, k the length of the big-key \({{\varvec{K}}}\) (assumed a power of two for simplicity) and \(p\) the number of probes, then it uses a family of functions \(\mathsf {H}\) with \(\mathsf {H}.\mathsf {ol}= \kappa \) and \(\mathsf {H}.\mathsf {il}= p\). The selector is of length \(r = \mathsf {H}.\mathsf {kl}+ p\cdot \lg (k)\) and specifies a key \(I\) for \(\mathsf {H}\) as well as the probe sequence \(\mathbf {p}\). The derived key \(K\) is then computed as shown. The following theorem says that the scheme works, meaning it achieves our notion of encapsulation security. This involves two claims. First, the key encapsulation advantage can be bounded by the uce advantage of a source-distinguisher pair. But this by itself is not enough. To ensure this uce advantage is small, we also show that the predictability of the source can be bounded. Here we appeal to our bound on subkey predictability, so that once again the latter emerges as crucial.
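Concretely, derivation parses the selector into an \(\mathsf {H}\)-key and probe indices, reads the probed bits of the big-key, and hashes them. The following sketch works over bit strings; the family H is a hypothetical callable standing in for the one of Fig. 12.

```python
import secrets

# Illustrative sketch of XKEY2: R = (I, probe sequence), K = H(I, bigkey[probes]).
def xkey2_derive(bigkey_bits, R, H, p, k, hkl):
    lgk = k.bit_length() - 1                       # k is a power of two
    I, probe_bits = R[:hkl], R[hkl:]               # split the selector
    probes = [int(probe_bits[j*lgk:(j+1)*lgk], 2) for j in range(p)]
    subkey = ''.join(str(bigkey_bits[i]) for i in probes)  # p-bit input to H
    return H(I, subkey)                            # kappa-bit derived key

def xkey2_encapsulate(bigkey_bits, H, p, k, hkl):
    r = hkl + p * (k.bit_length() - 1)             # selector length
    R = ''.join(str(secrets.randbits(1)) for _ in range(r))
    return R, xkey2_derive(bigkey_bits, R, H, p, k, hkl)
```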
Theorem 14
Let \(k,\kappa ,p\ge 1\) be integers and \(\mathsf {H}\) a family of functions with \(\mathsf {H}.\mathsf {ol}= \kappa \) and \(\mathsf {H}.\mathsf {il}= p\). Let \(r = \mathsf {H}.\mathsf {kl}+ p\cdot \lg (k)\). Let \(\mathsf {KEY}= \mathsf {XKEY2}_{k,\kappa ,p,r}\) be the big-key encapsulation scheme associated to them as per Fig. 12. Let \({\mathscr {A}}\) be an adversary making at most q queries to its \(\textsc {Derive}\) oracle and leaking \(\ell \) bits. The proof specifies a source adversary \(\mathcal{S}\) and a distinguisher adversary \(\mathcal{D}\) such that
Adversary \(\mathcal{S}\) makes q queries to its \(\textsc {HASH}\) oracle (one per key) and the running times of \(\mathcal{S}\) and \(\mathcal{D}\) add up to essentially that of \({\mathscr {A}}\).
Proof
(Theorem 14 ). Adversaries \(\mathcal{S},\mathcal{D}\) are as follows:
Adversary \(\mathcal{S}\) itself picks the big-key \({{\varvec{K}}}\) and runs \({\mathscr {A}}\) to get the leakage function, producing its own leakage M as shown. Adversary \(\mathcal{D}\) continues the execution of \({\mathscr {A}}\), being in a position to answer \(\textsc {Derive}\) queries because it has \(I_1,\ldots ,I_q\). If \(\mathcal{P}\) is any predictor adversary, the leakage M it gets in its game consists of the output L of the leakage function on the big-key, the probe sequences, and independent random strings \(K_1,\ldots ,K_q\); an oracle query of the source is a subkey, so guessing such a query is exactly guessing a subkey. It follows that \(\mathsf {Adv}^\mathrm{pred}_{\mathcal{S}}(\mathcal{P}) \le \mathsf {Adv}^\mathrm{skp}_{k,p,q}(\ell )\). We omit the details.
Big-key encryption. We can turn \(\mathsf {XKEY2}\) into a big-key encryption scheme via the general transform of Sect. 5. This transform does not introduce a random oracle. (If the big-key key encapsulation mechanism uses one, the result inherits it, but the transform introduces no additional use.) Thus the result of applying the transform to \(\mathsf {XKEY2}\) is a standard-model big-key encryption scheme. It satisfies locality because \(\mathsf {XKEY2}\) does. Theorem 13 reduces its security to that of \(\mathsf {XKEY2}\), and we conclude by applying Theorem 14. We omit the details.
7 Authenticity and Hedged Big-Key Encryption
In real-world settings we are likely to want authenticated encryption (AE) rather than privacy-only encryption. We should thus ask whether we can have big-key AE rather than the privacy-only formulation we now have. As discussed in Sect. 1, this is not possible, due to the following attack: the adversary simply leaks a valid ciphertext. This is a small amount of leakage, yet it violates authenticity.
To overcome this difficulty we suggest using what we call hedged big-key encryption. This provides privacy in the big-key setting we have already defined and achieved; additionally, in the absence of leakage, it provides authenticity. We suggest that this is a good goal because, in the mass-surveillance/APT context, it is privacy that is the main concern, not authenticity; but in the absence of an APT, our concerns would be the usual ones, which include authenticity. Hedged big-key encryption provides both, so that security does not degrade by moving to big keys.
There is a simple and generic way to turn a privacy-only big-key encryption scheme into a hedged big-key encryption scheme; a code sketch follows. Reserve a small (128-bit, say) portion K of the big-key \({{\varvec{K}}}\) as a key for a conventional PRF or MAC. Then use encrypt-then-mac [10]. Namely, big-key encrypt the message under the remaining (big) portion of \({{\varvec{K}}}\) to get a ciphertext C, and return (C, T) as the ciphertext for the hedged big-key scheme, where T is the result of applying a PRF, keyed by K, to C. In the absence of leakage, we have authenticated encryption by applying results of [10]. In the presence of leakage, we must assume the small key K is leaked in its entirety, but the big-key privacy-only component still provides the same privacy as before. Here we use the fact that, in the privacy proof of [10], the adversary can be given the PRF (MAC) key.
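A minimal sketch of this transform, using HMAC-SHA256 as the PRF (an illustrative choice, not mandated by the paper) and abstracting the privacy-only big-key scheme as hypothetical callables bk_enc/bk_dec:

```python
import hmac, hashlib

# Illustrative hedged big-key encryption via encrypt-then-MAC [10].
def hedged_enc(bigkey: bytes, M: bytes, bk_enc):
    mac_key, enc_key = bigkey[:16], bigkey[16:]   # small MAC key, big remainder
    C = bk_enc(enc_key, M)                        # privacy-only big-key encryption
    T = hmac.new(mac_key, C, hashlib.sha256).digest()
    return (C, T)

def hedged_dec(bigkey: bytes, CT, bk_dec):
    mac_key, enc_key = bigkey[:16], bigkey[16:]
    C, T = CT
    if not hmac.compare_digest(T, hmac.new(mac_key, C, hashlib.sha256).digest()):
        return None                               # authenticity check failed
    return bk_dec(enc_key, C)
```

In the leakage-free setting the tag T provides integrity as per [10]; under leakage one conservatively treats mac_key as fully exfiltrated, and privacy rests entirely on the big-key component.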
References
Agrawal, D., Archambeault, B., Rao, J.R., Rohatgi, P.: The EM side-channels. In: Kaliski Jr., B.S., Koç, Ç.K., Paar, C. (eds.) CHES 2002. LNCS, vol. 2523. Springer, Heidelberg (2003)
Alwen, J., Dodis, Y., Naor, M., Segev, G., Walfish, S., Wichs, D.: Public-key encryption in the bounded-retrieval model. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 113–134. Springer, Heidelberg (2010)
Alwen, J., Dodis, Y., Wichs, D.: Leakage-resilient public-key cryptography in the bounded-retrieval model. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 36–54. Springer, Heidelberg (2009)
Alwen, J., Dodis, Y., Wichs, D.: Survey: leakage resilience and the bounded retrieval model. In: Kurosawa, K. (ed.) Information Theoretic Security. LNCS, vol. 5973, pp. 1–18. Springer, Heidelberg (2010)
Aumann, Y., Ding, Y.Z., Rabin, M.O.: Everlasting security in the bounded storage model. IEEE Trans. Inf. Theory 48(6), 1668–1680 (2002)
Aumann, Y., Rabin, M.O.: Information theoretically secure communication in the limited storage space model. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 65–79. Springer, Heidelberg (1999)
Bellare, M., Desai, A., Jokipii, E., Rogaway, P.: A concrete security treatment of symmetric encryption. In: 38th FOCS, pp. 394–403. IEEE Computer Society Press, October 1997
Bellare, M., Hoang, V.T., Keelveedhi, S.: Instantiating random oracles via UCEs. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 398–415. Springer, Heidelberg (2013)
Bellare, M., Kane, D., Rogaway, P.: Big-key symmetric encryption: resisting key exfiltration. Cryptology ePrint Archive, report 2016/541 (2016)
Bellare, M., Namprempre, C.: Authenticated encryption: relations among notions and analysis of the generic composition paradigm. In: Okamoto, T. (ed.) ASIACRYPT 2000. LNCS, vol. 1976, pp. 531–545. Springer, Heidelberg (2000)
Bellare, M., Paterson, K.G., Rogaway, P.: Security of symmetric encryption against mass surveillance. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part I. LNCS, vol. 8616, pp. 1–19. Springer, Heidelberg (2014)
Bellare, M., Rogaway, P.: Random oracles are practical: a paradigm for designing efficient protocols. In: Ashby, V. (ed.) ACM CCS 1993, pp. 62–73. ACM Press, November 1993
Bellare, M., Rogaway, P.: The security of triple encryption and a framework for code-based game-playing proofs. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 409–426. Springer, Heidelberg (2006)
Boyen, X.: Reusable cryptographic fuzzy extractors. In: Atluri, V., Pfitzmann, B., McDaniel, P. (eds.) ACM CCS 2004, pp. 82–91. ACM Press, October 2004
Brzuska, C., Farshim, P., Mittelbach, A.: Indistinguishability obfuscation and UCEs: the case of computationally unpredictable sources. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part I. LNCS, vol. 8616, pp. 188–205. Springer, Heidelberg (2014)
Cachin, C., Maurer, U.M.: Unconditional security against memory-bounded adversaries. In: Kaliski Jr., B.S. (ed.) CRYPTO 1997. LNCS, vol. 1294, pp. 292–306. Springer, Heidelberg (1997)
Cash, D.M., Ding, Y.Z., Dodis, Y., Lee, W., Lipton, R.J., Walfish, S.: Intrusion-resilient key exchange in the bounded retrieval model. In: Vadhan, S.P. (ed.) TCC 2007. LNCS, vol. 4392, pp. 479–498. Springer, Heidelberg (2007)
Cramer, R., Shoup, V.: Design and analysis of practical public-key encryption schemes secure against adaptive chosen ciphertext attack. SIAM J. Comput. 33(1), 167–226 (2003)
Dagon, D., Lee, W., Lipton, R.J.: Protecting secret data from insider attacks. In: Patrick, A.S., Yung, M. (eds.) FC 2005. LNCS, vol. 3570, pp. 16–30. Springer, Heidelberg (2005)
Di Crescenzo, G., Lipton, R.J., Walfish, S.: Perfectly secure password protocols in the bounded retrieval model. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 225–244. Springer, Heidelberg (2006)
Dodis, Y., Kalai, Y.T., Lovett, S.: On cryptography with auxiliary input. In: Mitzenmacher, M. (ed.) 41st ACM STOC, pp. 621–630. ACM Press, May/June 2009
Dodis, Y., Ristenpart, T., Vadhan, S.: Randomness condensers for efficiently samplable, seed-dependent sources. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 618–635. Springer, Heidelberg (2012)
Dziembowski, S.: Intrusion-resilience via the bounded-storage model. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 207–224. Springer, Heidelberg (2006)
Dziembowski, S., Maurer, U.M.: Optimal randomizer efficiency in the bounded-storage model. J. Cryptol. 17(1), 5–26 (2004)
Dziembowski, S., Pietrzak, K.: Leakage-resilient cryptography. In: 49th FOCS, pp. 293–302. IEEE Computer Society Press, October 2008
Halevi, S., Lin, H.: After-the-fact leakage in public-key encryption. In: Ishai, Y. (ed.) TCC 2011. LNCS, vol. 6597, pp. 107–124. Springer, Heidelberg (2011)
Kelsey, J., Schneier, B.: Authenticating secure tokens using slow memory access. In: Proceedings of the USENIX Workshop on Smartcard Technology (Smartcard 1999), 10–11 May 1999, Chicago, Illinois, USA, p. 101. USENIX Association (1999)
Lu, C.-J.: Hyper-encryption against space-bounded adversaries from on-line strong extractors. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 257–271. Springer, Heidelberg (2002)
Maurer, U.M.: Conditionally-perfect secrecy and a provably-secure randomized cipher. J. Cryptol. 5(1), 53–66 (1992)
Nisan, N., Zuckerman, D.: Randomness is linear in space. J. Comput. Syst. Sci. 52(1), 43–52 (1996)
Pietrzak, K.: A leakage-resilient mode of operation. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 462–482. Springer, Heidelberg (2009)
Raz, R., Reingold, O.: On recycling the randomness of states in space bounded computation. In: 31st ACM STOC, pp. 159–168. ACM Press, May 1999
Reingold, O., Shaltiel, R., Wigderson, A.: Extracting randomness via repeated condensing. SIAM J. Comput. 35(5), 1185–1209 (2006)
Shin, S.H., Kobara, K., Imai, H.: Leakage-resilient authenticated key establishment protocols. In: Laih, C.-S. (ed.) ASIACRYPT 2003. LNCS, vol. 2894, pp. 155–172. Springer, Heidelberg (2003)
Vadhan, S.P.: Constructing locally computable extractors and cryptosystems in the bounded-storage model. J. Cryptol. 17(1), 43–77 (2004)
Acknowledgments
Bellare was supported in part by NSF grants CNS-1526801 and CNS-1228890, a gift from Microsoft Corporation and ERC Project ERCC (FP7/615074). Rogaway was supported in part by NSF grants CNS-1228828 and CNS-1314885. We thank Joseph Jaeger for comments and corrections, Wei Dai for helpful discussions, and the CRYPTO 2016 reviewers for their knowledgeable reviews, corrections and pointers to the literature.