Abstract
We present a multi-party exchange protocol that achieves optimal partial fairness even in the presence of a dishonest majority. We demonstrate how this protocol can be applied to any type of multi-party exchange scenario where the network topology is complete. When combined with standard secure multi-party computation techniques, our protocol enables SMPC with partial fairness when a dishonest majority is involved. Fairness optimality is proven in an abstract model which applies to all protocols based on the concept of concealing the point when the secrets are exchanged. Our protocol improves known results via the use of timed-release encryption and commutative blinding.
This research was funded in whole, or in part, by UKRI 2421791 and Crypto.com. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
Keywords
- Multi-party fair exchange
- Timed-release encryption
- Commutative encryption
- Secure multi-party computation
- Fair exchange
- Partial fairness
1 Introduction
A multi-party fair exchange protocol allows any number of parties to exchange items in an arbitrary way while guaranteeing fairness: either everyone receives all the items they expected or nobody obtains anything. The problem of fair exchange has primarily been studied in the two-party setting, where it was shown to be impossible as early as 1986 by Cleve [8] and later by Pagnia and Gärtner [21]. Fair exchange is often solved with the aid of a trusted party, either one that is always present (e.g. [13]) or one acting only as a judge in case of misbehaviour (e.g. [3, 12]). The protocols following the latter approach are called optimistic since they assume that, most of the time, all the parties involved will be honest. Among these, a recent approach involves the use of cryptocurrencies as collateral for failed exchanges (e.g. [2]). Alternative solutions require stronger assumptions, such as the parties having similar general or sequential computing power [6, 11]. A more satisfactory approach is the weakening of the fairness requirement to allow a small but not negligible chance of failure [5, 9, 24]. Other approaches to multi-party fair exchange without introducing trust are based on rationality assumptions [1].
A related research field is the study of Secure Multi-Party Computation (SMPC). The problem tackled by this branch of computer science is how a group of parties, each with a private input, can compute a function on these inputs without revealing them. This problem appears to be more general than simple fair exchange, yet SMPC does not always require fairness; instead, it mainly focuses on ensuring that the outputs are correct. In particular, in the presence of a dishonest majority, SMPC protocols allow the computation of an arbitrary functionality guaranteeing privacy and correctness, but not fairness [15]. That is, the private inputs are kept private and the outputs are guaranteed to be correct, but the dishonest parties can choose not to let the honest parties receive the outputs.
This is clearly unsatisfactory in the context of fair exchange, yet the theory of SMPC proves that fairness cannot be achieved in this scenario. In a brief summary, SMPC can be achieved with “standard assumptions” under the following circumstances [15, 22]:
- in the presence of any number of passive adversaries;
- in the presence of an active adversary controlling only a minority of the parties;
- in the presence of an active adversary controlling a majority of the parties if early abortion is not a security violation.
The above results show that the fair exchange problem can be solved in the presence of an honest majority (e.g. via the use of secret sharing), but it is not possible otherwise. To escape this impossibility result, we follow the approach of partial fairness set out by Gordon and Katz [16] and applied in the multi-party setting by Beimel et al. [4].
1.1 Our Contribution
In this paper, we propose a protocol that achieves optimal partial fairnessFootnote 1 in the presence of a dishonest majority. In particular, we design a protocol where the fairness requirement is weakened so that there is a small probability that the exchange is not fair. This protocol follows the blueprint of other protocols (e.g. [4, 9, 16, 24]) where the secrets are hidden among other dummy messages so that no party knows when the secrets are truly exchanged. By hiding the secrets among other items, the parties can exchange messages without anyone knowing whether they have received or sent any secret. This lack of information leads to the guarantee of partial fairness, i.e. only a lucky guess will break fairness. Moreover, the specific way in which the secrets are distributed leads to the optimality result. We vastly improve the round complexity of [4] and escape their impossibility results via the use of Timed-Release Encryption [23]. In particular, Beimel et al.’s protocols that are fair with probability \(1 - \frac{1}{p}\) against an arbitrary number m of malicious entities require at least \(O(m p^{2^m})\) rounds, while ours requires roughly mp. The impossibility result presented in [4] states that, for some parameter depending on the number of malicious entities and the functionality to compute, fairness cannot be guaranteed with probability higher than \(\frac{1}{2} + \textsf {negl}(\lambda )\). By assuming the existence of Timed-Release Encryption, our protocol can achieve fairness with any user-defined probability regardless of the context in which it is used. In particular, our protocol can be secured against malicious adversaries with standard zero-knowledge proof techniques [14, 15, 22], and fair exchange can be used to achieve fairness in SMPC [18, 19].
We also present a general model to easily describe exchange protocols based on hiding and, within this model, prove an upper bound on partial fairness as well as exhibit a protocol achieving this bound. Our impossibility result makes no assumptions on the cryptographic primitives used, and it is therefore much stronger than the previous ones we circumvent in this paper. Hence, our contribution to the body of knowledge is twofold:
1. An abstract and theoretical analysis of multi-party protocols based on hiding the actual exchange of secrets;
2. An efficient protocol that achieves optimal fairness.
In the next section, we describe the setting of multi-party fair exchange as well as the adversarial model we consider. Section 3 is dedicated to the security definitions and requirements of the cryptographic primitives used in our protocol, which is described in Sect. 4. The security of the protocol is sketched in Appendix A. The paper concludes with an extensive abstract analysis of multi-party protocols based on hiding, which also proves the optimality of our protocol.
2 Multi-party Fair Exchange
In this section, we describe the context of our study as well as the adversarial model. We follow the most general definition of Asokan et al. [3]. We assume the presence of K parties \(\mathbf {P_1}, \dots , \mathbf {P_K}\), and we define a matrix \(\varSigma \) to describe an arbitrary exchange. In particular, \(\varSigma _{i,j}\) is the secret that \(\mathbf {P_i}\) is meant to send to \(\mathbf {P_j}\). In this model, we set no restriction on the type of exchange, but we do assume that the network is a complete graph, i.e. any party can communicate with any other party. Intuitively, we say that an exchange is fair if and only if either all parties receive all the items they expect or nobody receives anything. Even if the exchange \(\varSigma \) is built from two separate sub-exchanges, by requiring that the whole of \(\varSigma \) is fair, the completion of one sub-exchange requires the completion of the other sub-exchange. Although this constraint might seem unreasonable in the abstract model, it is quite common in the real world, e.g. when moving to a new house, the two separate transactions of buying the new place and selling the old one typically depend on each other. In Sect. 4, we will show that this complex model for fair exchange can be reduced to a much simpler one. In particular, each of the K parties is given a secret \(s_i\), and the aim of the exchange is for all parties to know all secrets. We assume that knowing all but one secret is of no use. This is justified by thinking about these secrets as shares of some larger secret (often the case in SMPC). In this simpler model, the exchange is fair if and only if whenever a party discloses their secret, they receive all the other secrets.
2.1 Adversarial Model
We assume that the network topology is complete. The communication channels are only required to guarantee confidentiality; stronger properties are not needed since adversaries are passive. Delivery is not guaranteed, and we assume the use of agreed-upon timeouts to universally establish whether the protocol has been aborted. We are only interested in the case where at least one honest party is involved in the protocol, and we assume that corruptions are static (non-adaptive adversary), i.e. the coalitions of malicious parties do not change after the protocol starts. In our discussions, we refer to a group of malicious parties which coordinate and share knowledge as a coalition. Alternatively, the reader can think of a coalition as a group of entities controlled by a single malicious actor. Crucially, we allow the presence of a dishonest majority, but adversaries are passive, i.e. they will follow the protocol but can abort communication at any point in time without detection.
3 Preliminaries
In this section, we introduce all the cryptographic primitives used in our protocol as well as the security requirements they need to satisfy.
3.1 Notation
We write an n-length vector (or array) from a set X as \(\boldsymbol{v} \in X^n\), and we denote its ith element by \(\boldsymbol{v}[i]\) where \(1 \le i \le n\). If we have a function \(f: X \rightarrow Y\) and a vector \(\boldsymbol{v} \in X^n\), then we write \(f^*(\boldsymbol{v})\) to mean the vector \(\langle f(\boldsymbol{v}[i]) \mid 1 \le i \le n\rangle \) where f is applied to each element. We use \(S_n\) for the group of permutations over the set \(\{1, \dots ,n\}\), but for \(\sigma \in S_n\) and \(\boldsymbol{v} \in X^n\), we will write \(\sigma (\boldsymbol{v})\) to mean the vector \(\boldsymbol{w}=\langle \boldsymbol{v}[ \sigma ^{-1}(i)] \mid 1\le i \le n\rangle \). That is, if \(\sigma (a) = b\), then \(\boldsymbol{w}[b] = \boldsymbol{v}[a]\). We write \(R_n\) for the subset of rotations in \(S_n\): \(\{\tau \in S_n \mid \forall i\; \tau (i+1) = (\tau (i) +1 \bmod n)\}\). If \(\boldsymbol{\sigma } \in S_n^K\) and M is a \(K\times n\) matrix, then \(\boldsymbol{\sigma }(M)\) is the result of applying \(\boldsymbol{\sigma }[i]\) to the ith row of M. We use \(a\Vert b\) for the concatenation of a and b where these are interpreted as bitstrings.
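As a concrete illustration of this notation, the sketch below implements the vector action of permutations and rotations in Python (the function names are ours, not the paper's; positions are 1-based at the interface to match the text):

```python
# Illustrative helpers for the notation of Sect. 3.1 (names are ours).

def apply_map(f, v):
    """f^*(v): apply f to each element of v."""
    return [f(x) for x in v]

def permute(sigma, v):
    """sigma(v): the vector w with w[sigma(a)] = v[a], i.e.
    w[i] = v[sigma^{-1}(i)]. sigma is a dict on positions 1..n."""
    n = len(v)
    w = [None] * n
    for a in range(1, n + 1):
        w[sigma[a] - 1] = v[a - 1]
    return w

def rotations(n):
    """R_n: the n rotations tau_k(i) = ((i - 1 + k) mod n) + 1,
    which satisfy tau(i+1) = tau(i) + 1 mod n."""
    return [{i: ((i - 1 + k) % n) + 1 for i in range(1, n + 1)}
            for k in range(n)]

v = ['a', 'b', 'c', 'd']
sigma = {1: 3, 2: 1, 3: 4, 4: 2}    # sigma(1) = 3, so w[3] = v[1] = 'a'
print(permute(sigma, v))             # ['b', 'd', 'a', 'c']
print(permute(rotations(4)[1], v))   # rotate by one: ['d', 'a', 'b', 'c']
```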
3.2 Partial Fairness
We define fairness in the usual idea vs real world setting. Assume an ideal world where we have an incorruptible trusted party which acts as an intermediary collecting and delivering the secrets. We say that our protocol is \(\frac{1}{p}\)-fair if for any adversary to our protocol there is a simulator in the ideal world so that the two runs are computationally indistinguishable with probability \(1 -\frac{1}{p} - \textsf {negl}(\lambda )\) (where \(\textsf {negl}(\lambda )\) is a negligible function of the security parameter \(\lambda \)). The above means that our \(\frac{1}{p}\)-fair protocol can be unfair with probability at most \(\frac{1}{p} + \textsf {negl}(\lambda )\). See Appendix A for a formal definition.
3.3 Symmetric Encryption
Our protocol will take advantage of a symmetric encryption scheme \((\textsf{KGen}, \textsf{Enc}, \textsf{Dec})\) which we only require to be COA (Ciphertext Only Attack) secure, and each key will only be used once. However, in order to formally prove the security of our protocol, we require the encryption scheme to be non-committing [10]. This means that we require the additional property that for any triple \((m_1, m_2, k_1)\) of two messages and a key there is another key \(k_2\) so that \( \textsf{Dec}_{k_2}(\textsf{Enc}_{k_1}(m_1)) = m_2\). For simplicity, the reader can assume that the scheme used is the one-time pad.
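The one-time pad makes the non-committing property concrete: given a ciphertext of \(m_1\) under \(k_1\), anyone can exhibit a second key \(k_2\) that decrypts the same ciphertext to an arbitrary \(m_2\). A minimal sketch over 16-byte messages:

```python
# One-time pad as a non-committing encryption scheme (illustrative sketch).
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"attack at dawn!!"
k1 = os.urandom(len(m1))
c = xor(m1, k1)          # Enc_{k1}(m1)

m2 = b"retreat at dusk!"
k2 = xor(c, m2)          # the equivocating key: k2 = c XOR m2

assert xor(c, k1) == m1  # Dec_{k1}(c) = m1
assert xor(c, k2) == m2  # Dec_{k2}(c) = m2, as the definition demands
```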
3.4 Delay Encryption
Delay encryption (sometimes called Timed-Release Encryption [23]) is an unusual cryptographic primitive whose aim is not to provide confidentiality of a message from other parties, but to hide the message from anyone for some predefined amount of time. For the reader accustomed to timed-release encryption, what we use and define is a “delay” time-lock-puzzle-based encryption scheme rather than time-specific schemes using trusted parties. This is justified because we don’t want to introduce trust in the fair exchange context.
More formally, a delay encryption scheme is a triple of algorithms \((\textsf{Pgen}, {{\,\mathrm{\textsf{Delay}}\,}}, {{\,\mathrm{\textsf{Open}}\,}})\) with associated sets \(\mathcal {M}, \mathcal {C}, \mathcal {P}\) such that
Intuitively, \(\textsf{Pgen}(1^\lambda , T)\) generates public parameters \( params \) that are used to delay a message for time T. We omit \( params \) and write \({{\,\mathrm{\textsf{Delay}}\,}}_T(m)\) for the ciphertext delaying m for elapsed time T. Similarly, we omit \( params \) and write \({{\,\mathrm{\textsf{Open}}\,}}(c)\). Practically speaking, \({{\,\mathrm{\textsf{Delay}}\,}}_T(m)\) will create a puzzle c so that \({{\,\mathrm{\textsf{Open}}\,}}(c)\) can solve the puzzle and obtain m only after at least T (sequential) time. Thus, \({{\,\mathrm{\textsf{Open}}\,}}\) will most likely not be a PPT algorithm. However, an honest party with moderate computational power should be able to run \({{\,\mathrm{\textsf{Open}}\,}}(c)\) in sequential time not much longer than T. This might be \(\mu T\) for a small integer \(\mu \) such as 10 or 20.
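One plausible way to instantiate Pgen, Delay and Open is the classic Rivest–Shamir–Wagner time-lock puzzle. The toy sketch below uses parameters far too small to be secure; it only illustrates why Delay is fast for the puzzle creator (who knows \(\varphi (n)\)) while Open costs T inherently sequential squarings:

```python
# Toy RSW time-lock puzzle as a Delay/Open sketch (NOT secure parameters).
import math
import secrets

def pgen(T=10_000):
    # n = p*q for two primes; the creator keeps phi(n) to build puzzles fast.
    p, q = 2_147_483_647, 2_147_483_629   # small illustrative primes
    return {"n": p * q, "phi": (p - 1) * (q - 1), "T": T}

def delay(params, m: int):
    n, T = params["n"], params["T"]
    a = secrets.randbelow(n - 2) + 2
    while math.gcd(a, n) != 1:            # ensure a is in the group
        a = secrets.randbelow(n - 2) + 2
    # Fast path: reduce the exponent 2^T modulo phi(n).
    e = pow(2, T, params["phi"])
    key = pow(a, e, n)                    # key = a^(2^T) mod n
    return (a, m ^ key)                   # mask the message with the key

def open_puzzle(params, c):
    a, masked = c
    n, T = params["n"], params["T"]
    x = a
    for _ in range(T):                    # T sequential squarings
        x = pow(x, 2, n)
    return masked ^ x

params = pgen()
c = delay(params, 424242)
assert open_puzzle(params, c) == 424242
```

In the real construction T is chosen so that the squarings dominate the protocol's running time, matching the \(\mu T\) opening cost discussed above.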
In order for our scheme to make sense, we set the correctness requirement that \({{\,\mathrm{\textsf{Open}}\,}}({{\,\mathrm{\textsf{Delay}}\,}}_T(m)) = m\) for every message \(m \in \mathcal {M}\) and every delay parameter T.
We say that a delay encryption is COA-secure if for any family of circuits \(\mathcal {A}\) of conceivable size and depth at most \(\mu (\lambda ) T\), we have
Intuitively, a COA-secure delay encryption scheme correctly hides encrypted messages for the expected amount of time. We remark that the size of such circuits will depend on the current state of technology. As noted in [20], allowing all polynomially-sized circuits could lead to misleading results with circuits much larger than what is feasible at the time of writing.
3.5 Commutative Blinding
A feature of our protocol is the use of commutative blinding (a.k.a. commutative encryption) to efficiently shuffle the secrets. The need for commutativity arises because the first party blinding a message can never reveal their blinding key; therefore, they need to remove their own blinding after the message has been blinded by the other parties. We first present a definition that is easy to understand but stricter than what the protocol needs. We use this definition for ease of presentation, and discuss the more general one later in this section since it will be used in the formal proof of security.
Definition 1
We call a pair of PPT algorithms \(({{\,\mathrm{\textsf{Blind}}\,}}, {{\,\mathrm{\textsf{Unblind}}\,}})\) a commutative blinding scheme if
Moreover the commutativity property is given by
In full generality, to run a protocol with K parties, we only require \(K+1\) pairs \(({{\,\mathrm{\textsf{Blind}}\,}}_0, {{\,\mathrm{\textsf{Unblind}}\,}}_0), \dots , ({{\,\mathrm{\textsf{Blind}}\,}}_{K}, {{\,\mathrm{\textsf{Unblind}}\,}}_{K})\), such that
Writing the protocol and security requirements using the above would result in a hardly readable script. Therefore, we use the stronger commutativity property to simplify the exposition of the protocol. In terms of security requirements, we need the blinding to satisfy the usual \(\mathrm {IND\hbox {-}CPA}\) definition. In particular, we require each \({{\,\mathrm{\textsf{Blind}}\,}}_i\) to be \(\mathrm {IND\hbox {-}CPA}\) secure. As an example, this primitive could be instantiated using ElGamal and the re-encryption procedure often used in voting schemes (e.g. [17]). A proof of the above, together with a more precise and slightly more relaxed security definition, is given in the full version of the paper.
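To make the commutativity property tangible, here is a sketch using the classic SRA (Pohlig–Hellman) scheme, \({{\,\mathrm{\textsf{Blind}}\,}}_k(m) = m^k \bmod p\). Note that this scheme is deterministic and therefore not \(\mathrm {IND\hbox {-}CPA}\) secure as our protocol requires (an ElGamal-based instantiation is); we use it only because commutativity is immediate. The prime and keys below are illustrative:

```python
# SRA / Pohlig-Hellman exponentiation blinding: commutative by construction.
p = 2**127 - 1                 # a Mersenne prime; exponents live mod p-1

def blind(k, m):
    return pow(m, k, p)

def unblind(k, c):
    k_inv = pow(k, -1, p - 1)  # k^{-1} mod (p-1); k must be coprime to p-1
    return pow(c, k_inv, p)

m = 123456789
k1, k2 = 65537, 257            # both coprime to p-1

# Commutativity: blindings can be applied and removed in any order.
c12 = blind(k2, blind(k1, m))
c21 = blind(k1, blind(k2, m))
assert c12 == c21
assert unblind(k1, unblind(k2, c12)) == m
assert unblind(k2, unblind(k1, c12)) == m
```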
4 The Protocol
Firstly, we show a reduction from the general fair exchange we described before to a simpler context that we will consider in the rest of this paper. Recall that we consider K parties \(\{\mathbf {P_1}, \dots , \mathbf {P_K}\}\) and that all items exchanged are encoded in the matrix \(\varSigma \) so that \(\varSigma _{i,j}\) is what \(\mathbf {P_i}\) sends to \(\mathbf {P_j}\). Recall from Sect. 2.1 that we assume the communication channels provide privacy. We now go one step further and assume that privacy is implemented using some (symmetric or asymmetric) encryption scheme. In our exchange protocol, \(\mathbf {P_i}\) holds only one secret \(s_i = \langle {\textsf{Enc}_{k_{i,j}}(\varSigma _{i,j}) \mid j \ne i}\rangle \) where \(k_{i,j}\) is the key \(\mathbf {P_i}\) uses to communicate confidentially with \(\mathbf {P_j}\). Therefore, in the rest of this script, we assume that each party holds only one compound secret, and that this should be revealed to all the other participants.
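The reduction above can be sketched as follows. XOR stands in for the channel encryption, and for simplicity we generate one key per ordered pair rather than the shared \(k_{i,j}\) of the paper; all names are illustrative:

```python
# Sketch of the reduction: P_i compresses its row of Sigma into one
# compound secret s_i by encrypting each entry under the channel key.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

K = 3
Sigma = [[b"item %d%d     " % (i, j) for j in range(K)] for i in range(K)]
keys = {(i, j): os.urandom(12) for i in range(K) for j in range(K) if i != j}

def compound_secret(i):
    # s_i = <Enc_{k_{i,j}}(Sigma[i][j]) | j != i>
    return [xor(Sigma[i][j], keys[(i, j)]) for j in range(K) if j != i]

# Once s_i is published to everyone, only the party holding k_{i,j}
# recovers the entry intended for them.
s0 = compound_secret(0)
assert xor(s0[0], keys[(0, 1)]) == Sigma[0][1]
```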
Our protocol follows the idea of concealing from all parties the point at which the secrets are exchanged. Therefore, the protocol can be split into two phases: a setup phase, where the shuffling of the secrets occurs, and an exchange phase, where the secrets (and dummy values) are exchanged.
In this scenario, we assume the K parties have agreed on a number N of exchange messages as well as a number L which represents the size of the coalition the protocol should be optimal against. We expect that in most cases \(L = K-1\), but some contexts may wish to lower this value. As we will see in Sect. 6, the secrets might not be uniformly distributed; instead, there are only a few “configurations” or “states” in which they should be placed. We define \(\langle {\pi _1, \dots , \pi _\delta }\rangle \) to be the permutations that send the lists of secrets and dummies to those valid configurations. Formulae for those permutations and the value \(\delta \) are described in Sect. 6. For ease of notation, we assume that all parties send n exchange messages (i.e. \(N=nK\)).
4.1 Protocol Overview
A crucial aspect of our protocol is that the parties involved will not exchange the secrets directly as in [4, 16]; instead, they will encrypt the secrets and exchange keys for the used encryption scheme. Since the ciphertext is delayed to the end of the protocol using Timed-Release Encryption, any auxiliary information about the secrets is rendered useless. The parties will only be able to test the exchanged keys when the delayed message opens after the protocol is terminated. This is what allows us to escape the impossibility results of [4, 16] as well as to drastically improve the efficiency of the protocol. Before analysing what should be included in the delayed messages, we describe how the keys are shuffled.
Initially, all parties will construct a list of n random keys where the first element is the one used to encrypt the secret. All those lists are blinded and sent to \(\mathbf {P_1}\) who will collate them in a matrix so that the xth row is the list sent by \(\mathbf {P_x}\). It is worth remarking that \(\mathbf {P_1}\) is not a trusted party, but one of the parties involved in the exchange. Since this matrix will need to end up in one of the \(\delta \) valid configurations (as defined in Sect. 6), \(\mathbf {P_1}\) creates \(\delta \) pairs \((M_1, f_1), \dots , (M_\delta , f_\delta )\) so that \(\left\{ f_i(M_i) \mid 1 \le i \le \delta \right\} \) is the set of all the valid configurations. This decoupling allows us to shuffle the pairs uniformly. Hence, starting from \(\mathbf {P_1}\), each party will randomly rotate the list of these \(\delta \) pairs and pass it to the next participant. In order to keep those rotations secret, each party will need to mask all the pairs. Masking (M, f) will be done in the following way: each element of M is blinded using a commutative blinding scheme, while f is masked by computing \(f\,\circ \, g^{-1}\) for some randomly picked permutation g. The resulting pair \(({{\,\mathrm{\textsf{Blind}}\,}}^*(M,b) , f\,\circ \, g^{-1})\) might not satisfy the property that \((f \,\circ \, g^{-1})({{\,\mathrm{\textsf{Blind}}\,}}^*(M, b))\) is a valid configuration, therefore, we apply g to the matrix M as well. Hence, (M, f) is transformed into \((g({{\,\mathrm{\textsf{Blind}}\,}}^*(M, b)), f \,\circ \, g^{-1})\) for some random permutation g and blinding key b. For security reasons, g will need to be different for each pair. In Fig. 1, the above transformation is encoded in the function F and \(\boldsymbol{S^x}\) indicates the list of pairs after being rotated x times.
The random permutations g are called \(\boldsymbol{\sigma _j}[i]\) where j indicates which party picked it and i indicates which of the \(\delta \) matrices the permutation should be applied to.
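The key invariant of the masking step, namely that replacing (M, f) by \((g(M), f \,\circ \, g^{-1})\) leaves the final configuration f(M) unchanged, can be checked on a toy example (ignoring blinding; the helper names are ours). Permutations act on lists by \(w[\sigma (a)] = v[a]\):

```python
# Toy check: masking (M, f) as (g(M), f o g^{-1}) preserves f(M).

def permute(sigma, v):
    w = [None] * len(v)
    for a in range(1, len(v) + 1):
        w[sigma[a] - 1] = v[a - 1]
    return w

def compose(f, g):              # (f o g)(i) = f(g(i))
    return {i: f[g[i]] for i in g}

def invert(g):
    return {v: k for k, v in g.items()}

M = ['k1', 'k2', 'k3', 'k4']
f = {1: 2, 2: 4, 3: 1, 4: 3}    # the pair's "un-shuffling" map
g = {1: 4, 2: 3, 3: 2, 4: 1}    # a party's random mask

masked_M = permute(g, M)
masked_f = compose(f, invert(g))
# Applying the masked map to the masked list yields the same configuration.
assert permute(masked_f, masked_M) == permute(f, M)
```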
The last party \(\mathbf {P_K}\) will receive the list of pairs rotated by all the other parties. Therefore, they will simply rotate and blind one last time as well as pick the first element (M, f) of the list. By computing f(M), \(\mathbf {P_K}\) obtains a random valid configuration, called \(\boldsymbol{E^K}\) in Fig. 1. The resulting matrix will have its elements blinded by all the parties, moreover, each xth row is blinded by \(\mathbf {P_x}\) twice.Footnote 2 All that is left to do is for \(\mathbf {P_K}\) to send back to each party \(\mathbf {P_x}\) their list (the xth row of \(\boldsymbol{E^K}\)). Now all parties can remove their first blinding and proceed to exchange the items in the list one by one.
In order to retrieve the secret, one needs to unblind the exchanged messages and identify which keys were used to encrypt the secrets. Therefore, the delayed message must include the rotation used by each party as well as all of the keys used for the blindings that are not removed. In particular, the setup messages are blinded with keys which are never revealed \(\left\{ \boldsymbol{b_1}[1], \dots , \boldsymbol{b_K}[1]\right\} \). This prevents any malicious adversary from extracting the secrets by looking only at the setup phase. After the setup phase, the parties will exchange the items of their list one at a time.
4.2 Protocol Analysis
In this brief subsection, we analyse the setup phase to understand its result. We note the following (ignoring blinding)
If there is at least one honest party, then there is at least one truly random \(\tau _i\) meaning that the permutation \(\pi _i\) picked is uniformly distributed. If during the exchange phase no further information is leakedFootnote 3, then the coalition of corrupted parties has no knowledge of when the \(\boldsymbol{k_i}[1]\) are exchanged. Hence, the corrupted coalition can attack the protocol only by aborting the exchange phase at an arbitrary point. Their strategy is successful only if they stop after having received all the honest parties’ secret keys but without having revealed all of theirs. In Sect. 6, we will prove that the permutations \(\pi _i\)’s can be picked so that this strategy has the smallest possible success rate.
We note that the round complexity of the setup phase is \(K+2\) with 3K messages sent overall. The exchange phase is run in N rounds with NK messages overall. These numbers are a large improvement over the protocol by Beimel et al. [4].
5 Security Proof
We only have space to give a brief overview of the security proof of our protocol. More details are in Appendix A.
Let \(\mathcal F_{\text {ex}}\) be the ideal functionality where all K parties perform the fair exchange by handing their secrets to a central trusted party which redistributes them.
Definition 2
Let \(\textsf{IDEAL}_{\mathcal F_{\text {ex}}, \mathcal {S}(\textsf{aux})}(\varSigma , 1^\lambda )\) be the random variable consisting of the output of the K parties involved in an ideal-world run of the functionality \(\mathcal F_{\text {ex}}\), where \(\varSigma \) represents the matrix of secrets to exchange and \(\mathcal {S}(\textsf{aux})\) is a non-uniform PPT adversary with auxiliary information \(\textsf{aux}\).
Definition 3
Let \(\textsf{REAL}_{\varPi , \mathcal {A}(\textsf{aux})}(\varSigma , 1^\lambda )\) be the random variable consisting of the output of the K parties involved in a real-world run of the protocol \(\varPi \), where \(\varSigma \) represents the matrix of secrets to exchange and \(\mathcal {A}(\textsf{aux})\) is a non-uniform PPT adversary with auxiliary information \(\textsf{aux}\).
Definition 4
We say that a protocol \(\varPi \) implements the ideal functionality \(\mathcal F_{\text {ex}}\) with \(\frac{1}{p}\)-partial fairness, if for every non-uniform PPT adversary \(\mathcal {A}\) there exists a non-uniform PPT simulator \(\mathcal {S}\) such that
i.e. the two probability ensembles are computationally indistinguishable with probability \(1 - \frac{1}{p} - \textsf {negl}(\lambda )\) for some negligible function \(\textsf {negl}(\lambda )\).
For ease of presentation, we only consider the scenario where the adversary controls all but one honest party \(\mathbf {P_\mathcal H}\). Furthermore, we assume that \(1 \ne \mathcal H \ne K\).Footnote 4
We prove the security of the protocol in a 3-step process. Firstly, we prove that retrieving the secret \(s_\mathcal H\) is equivalent to obtaining \(\boldsymbol{k_\mathcal H}[1]\) (Theorem 1).
Theorem 1
If \({{\,\mathrm{\textsf{Delay}}\,}}\) is COA secure, \(\textsf{Enc}\) is COA secure and \({{\,\mathrm{\textsf{Blind}}\,}}\) is \(\mathrm {IND\hbox {-}CPA}\), then no non-uniform PPT adversary \(\mathcal {A}\) can obtain \(s_\mathcal H\) unless \(D_\mathcal H\) is opened, and they have received the message containing \(\boldsymbol{k_\mathcal H}[1]\) during the exchange phase.
Working in the hybrid model [7], we assume the existence of an ideal functionality \(\mathcal F_{\text {setup}}\) where each party submits their secret s and receives back the shuffled and blinded list \(\boldsymbol{E^K}[i]\) together with the delayed messages \(D_i\) of all the other parties. The configuration picked by \(\mathcal F_{\text {setup}}\) is chosen uniformly among the \(\delta \) possibilities. We assume that this functionality is secure-with-abort, i.e. the corrupted parties can stop the honest parties from receiving their outputs, but not alter the output. In this context, we prove that our protocol achieves partial fairness (Theorem 2).
Theorem 2
Let \(\varPi ^{\textsf{hyb}}\) be the protocol of Fig. 1 where the setup phase has been replaced by an ideal functionality \(\mathcal F_{\text {setup}}\). Assume that the delay encryption scheme is COA secure and that the symmetric encryption scheme is non-committing and COA secure, then \(\varPi ^{\textsf{hyb}}\) implements the fair exchange ideal functionality \(\mathcal F_{\text {ex}}\) with \(\frac{K-1}{N+1-K}\)-partial fairness.
To complete the security proof, we show that the setup phase described in Fig. 1 implements the ideal functionality. In Sect. 4.2, we have already shown that our protocol computes the setup phase correctly, so we only need to show that it does so privately (Theorem 3).
Theorem 3
If \({{\,\mathrm{\textsf{Blind}}\,}}\) is \(\mathrm {IND\hbox {-}CPA}\) and \({{\,\mathrm{\textsf{Delay}}\,}}\) is COA secure, then the setup phase of Fig. 1 does not reveal any information about \(\tau _\mathcal H\) while \(D_\mathcal H\) is not opened.
6 Fairness Optimality
We represent a protocol with N messages and K parties \(\{p_1, \dots , p_K\}\) by a pair \(\varPi = (\mathcal M, \mathcal P)\), where
- \(\mathcal M : \mathbb {N}\rightarrow \{1, \dots , K\}\) is a map indicating in which order the parties are sending messages, i.e. \(\mathcal M(i) = j\) means that message i is sent by party \(p_j\);
- a “configuration” \(t : \{1, \dots , K\} \rightarrow \mathbb {N}\) is a function such that \(t(i) = j\) means that \(p_i\)’s secret is in the jth message;
- \(\mathcal P\) is a function which associates to each configuration its probability of happening.
We set the constraint: \(t(i) = j \implies \mathcal M(j) = i\). That is, if \(p_i\)’s secret is disclosed at the jth message, then such message must be sent by \(p_i\). This is due to the principle that if a secret is disclosed to someone, it is disclosed to everyone. When we say “\(p_i\)’s secret is in the jth message” or “\(p_i\)’s secret is disclosed at the jth message” we do not mean that the jth message gives immediate knowledge of \(p_i\)’s secret, but that knowledge is guaranteed to happen. Looking at the exchange protocol of Sect. 4, \(s_i\) is “disclosed” when the blinded \(\boldsymbol{k_i}[1]\) is sent by \(P_i\) during the exchange phase. However, the other parties will have knowledge of \(s_i\) only after the delay message \(D_i\) is opened.
Given a protocol \(\varPi = (\mathcal M , \mathcal P)\), we define the probability that a coalition of parties I “wins” by stopping at \(S \in \mathbb {N}\) to be
Intuitively, I wins by stopping at S if someone in I has not released their secret after the Sth message but all parties not in I have already sent their secrets. For ease of presentation we write \(t < S\) if for \(1 \le i \le K\; \; t(i) < S\) and similarly for \(t > S\).
In our search for an optimal protocol, we assume that no party sends consecutive messages.
We measure the unfairness of a protocol \(\varPi \) against coalitions of size L by computing
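This measure, the maximum over coalitions I of size L and stopping points S of \({\text {Pr}}_{\varPi }\left[ I, S\right] \), can be brute-forced for small protocols. The sketch below instantiates the round-robin message order with the “short” configurations \(t_i\) (secrets in messages \(i-K+1, \dots , i\)) weighted uniformly; the parameters are our own toy choices:

```python
# Brute-force the unfairness measure for a small round-robin protocol.
from itertools import combinations

K, N, L = 3, 9, 2              # parties, messages, coalition size
M = lambda i: ((i - 1) % K) + 1  # round-robin sender of message i

# Short configurations t_i for K <= i <= N: the K secrets sit in the
# consecutive messages i-K+1, ..., i; t maps each party to its message.
configs = [{M(j): j for j in range(i - K + 1, i + 1)}
           for i in range(K, N + 1)]
prob = 1.0 / len(configs)      # uniform weights alpha_i

def win_prob(I, S):
    # I wins at S if everyone outside I sent their secret by message S
    # while someone in I has not yet done so.
    return sum(prob for t in configs
               if all(t[p] <= S for p in range(1, K + 1) if p not in I)
               and any(t[p] > S for p in I))

unfairness = max(win_prob(set(I), S)
                 for I in combinations(range(1, K + 1), L)
                 for S in range(0, N + 1))
print(unfairness)              # 2/7, matching (K-1)/(N+1-K) from Theorem 2
```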
Lemma 1
Let \(\varPi _b=(\mathcal M_b, \mathcal P_b)\) be a protocol with a configuration \(t_b\). Let \(t_g\) be a configuration such that \(\max t_g = \max t_b\) and \(\forall i \;\; t_g(i) \ge t_b(i)\). Define \(\varPi _g =(\mathcal M_b, \mathcal P_g)\) where \(\mathcal P_g(t_b) = 0\), \(\mathcal P_g(t_g) = \mathcal P_b(t_g) + \mathcal P_b(t_b)\) and \(\mathcal P_g(t) = \mathcal P_b(t)\) for any other t. Then for any L
Proof
We prove the statement by showing that for all I, S we have
Note that \(\mathcal T(\varPi _g, I, S) = \mathcal T(\varPi _b, I, S)\). Hence, the inequality above becomes:
Therefore, it is enough to show that \(t_g \in \mathcal T(\varPi _b, I, S) \implies t_b \in \mathcal T(\varPi _b, I, S)\), i.e. if I wins by stopping at S when \(t_g\) happens, then I wins by stopping at S also when \(t_b\) happens. Assume that \(t_g \in \mathcal T(\varPi _b, I, S)\), then \(\forall i \notin I \;\; t_g(i) \le S\). Therefore, \(\forall i \notin I \;\; t_b(i)\le t_g(i) \le S\). Moreover, \(j = \arg \max t_g \in I \wedge t_g(j) > S\). Since \(\max t_b = \max t_g\), we have that \(t_b(j) > S\). Hence \(t_b \in \mathcal T(\varPi _b, I, S)\). \(\square \)
For every x such that there is a configuration t with \(\max t = x\), there is exactly one configuration \(t_x\) with \(\max t_x = x\) such that, for all other configurations \(t'\) with \(\max t' = x\), \(\forall i\;\; t_x(i) \ge t'(i)\). Such \(t_x\) is constructed by setting \(t_x(i)\) to be the largest (valid) integer up to x. The above lemma proves that we only need to consider these “short” configurations because longer ones are always worse.
Let \(\varPi ^L_g = (\mathcal M_g, \mathcal P_g)\) be a protocol where
- \(\mathcal M_g(i) = (i-1\bmod K) +1\);
- \(\mathcal P_g(t_i) = \alpha _i\);
- \(\mathcal P_g(t) = 0\) for other configurations;
- the \(\alpha _i\)’s are picked to minimise \(\max _{\begin{array}{c} I, S\\ |I|=L \end{array}}{\text {Pr}}_{\varPi ^L_g}\left[ I, S\right] \).
Lemma 2
Consider a protocol \(\varPi ^x_g\) as described above for some x and pick \(L < K\), then
Proof
The short configurations \(t_i\) can be described as the map \(y \mapsto i - ((\mathcal M (i) -y) \bmod K)\). That is, the secrets are sent in messages \(\{i-K+1, i-K+2, \dots , i\}\). All the short configurations of the protocol \(\varPi ^x_g\) are exactly the \(t_i\) configurations described above where \(K \le i \le N\). Note that there are exactly \(N+1-K\) of them.
Fix an S. Consider the configurations \(T= \{ t_{S+1}, t_{S+2}, \dots , t_{S+L}\}\).Footnote 5 Pick any other configuration \(t_i\); one of two scenarios can happen:
1. \(i \le S\), i.e. all secrets are exchanged up to (and including) the Sth message;
2. \(i > S+L\), i.e. more than L secrets are exchanged after the Sth message.
In both scenarios, no coalition of size L can win by stopping at S in those configurations. Pick any configuration \(t \in T\); then any coalition containing \(\{p_j \mid t(j) > S\}\) can win in t by stopping at S. In particular, only the coalition \(J = \{ p_j \mid j = \mathcal M(i),\;\; S+1 \le i \le S+L\}\) wins in all such configurations. Therefore, \(\max _{I ; |I|=L} {\text {Pr}}_{\varPi _g^x}\left[ I,S\right] = {\text {Pr}}_{\varPi _g^x}\left[ J,S\right] = \alpha _{S+1} + \dots + \alpha _{S+L}\).Footnote 6 By iterating over the possible S, we obtain the following sets of sums: \(\{\alpha _K + \dots + \alpha _{K+j} \mid 0 \le j \le L-2\}\), \(\{\alpha _{S+1} + \dots + \alpha _{S+L} \mid K-1 \le S \le N-L\}\) and \(\{\alpha _{S+1} + \dots + \alpha _N \mid N-L+1 \le S \le N-1\}\).
Note that the sums in the first and third set are all “sub-sums” of sums in the second set. Therefore, the maximum over the above three sets is the same as the maximum over the second set. The lemma follows. \(\square \)
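In effect, Lemma 2 says the adversary's best strategy reduces to a sliding-window maximum over the \(\alpha\)'s. A minimal sketch (names ours):

```python
from fractions import Fraction

def best_coalition_probability(alphas, L):
    """Best win probability of a size-L coalition, given the probabilities
    alphas = [alpha_K, ..., alpha_N] of the short configurations.
    By Lemma 2 the maximum is attained on a full window of L consecutive
    alphas; the truncated boundary sums are sub-sums of these."""
    n = len(alphas)
    return max(sum(alphas[i:i + L]) for i in range(n - L + 1))

# Uniform over 10 short configurations, coalitions of size 3:
assert best_coalition_probability([Fraction(1, 10)] * 10, 3) == Fraction(3, 10)
```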
Theorem 4
Let \(\varPi = (\mathcal M , \mathcal P)\) be any protocol; then \(\max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi }\left[ I,S\right] \ge \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi ^L_g}\left[ I,S\right] \).
Proof
First, we construct an intermediate protocol \(\varPi _b = (\mathcal M_b, \mathcal P_b)\) such that
- \(\mathcal M_b = \mathcal M\)
- \(\mathcal P_b(t_i) = \beta _i\), where the \(t_i\) are the “short” configurations in \(\mathcal M_b\)
- \(\mathcal P_b(t) = 0\) for all other t
- the \(\beta \)’s minimise: \(\max _{\begin{array}{c} I, S\\ |I|=L \end{array}}{\text {Pr}}_{\varPi _b}\left[ I, S\right] \)
We first point out that \(\max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi }\left[ I,S\right] \ge \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _b}\left[ I,S\right] \) due to Lemma 1 and the definition of \(\mathcal P_b\). Therefore, we can prove the theorem by showing the same inequality between \(\varPi _b\) and \(\varPi ^L_g\).
By the previous lemmas, we have that the \(\alpha \)’s minimise \(\max _{K \le i \le N-L+1} \left( \alpha _i + \dots + \alpha _{i+L-1}\right) \).
Recall that \(\beta _i\) is the probability of the configuration \(t_i\), which is the unique shortest one with \(\max t_i = i\).Footnote 7 We have constructed the set \(\{\beta _K, \dots , \beta _{N}\}\). We prove the result by showing that: \(\max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _b}\left[ I,S\right] \ge \max _i \left( \beta _i + \dots + \beta _{i+L-1}\right) \ge \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi ^L_g}\left[ I,S\right] \).
The second inequality comes straight from the definition of the \(\alpha \)’s and Lemma 2. Fix any i and consider the configurations \(T=\{t_i, t_{i+1}, \dots , t_{i+L-1}\}\) and the coalition \(I = \{ p_j \mid \mathcal M_b(\max t) = j \text { for } t \in T\}\) of the parties sending the messages from i to \(i+L-1\).Footnote 8 Set \(S = i -1\). By the definition of I, all messages sent after S up to (including) \(i+L-1\) are sent by parties in I. Therefore, I wins in all the configurations in T by stopping at S. Hence \({\text {Pr}}_{\varPi _b}\left[ I, S\right] \ge \beta _i + \dots + \beta _{i+L-1}\). Since i was arbitrary, we have \(\max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _b}\left[ I,S\right] \ge \max _i \left( \beta _i + \dots + \beta _{i+L-1}\right) \) and the theorem follows. \(\square \)
As a result of the above theorem, we know that \(\varPi _g^L\) is optimal against coalitions of size L. We now analyse \(\varPi _g^L\) to find the exact values of the \(\alpha \)’s.
Lemma 3
Given n non-negative numbers \(\alpha _1, \dots , \alpha _n\) whose sum is 1 and any \(l\le n\), set \(\lambda = {\lceil \frac{n}{l} \rceil }\); then \(\max _{1 \le i \le n-l+1} \left( \alpha _i + \dots + \alpha _{i+l-1}\right) \ge \frac{1}{\lambda }\)  (2)
Proof
Assume for a contradiction that Eq. 2 doesn’t hold. Let \(v_i = \{\alpha _{l(i-1)+1}, \alpha _{l(i-1)+2}, \dots , \alpha _{li}\}\) for \(1\le i \le \lfloor \frac{n}{l} \rfloor \). If \(n\bmod l \ne 0\), then set \(v_\lambda = \{\alpha _{n+1-l}, \dots , \alpha _n\}\). Since Eq. 2 doesn’t hold, we have that for all \(1\le i \le \lambda \), the inequality \(\sum _{\alpha \in v_i} \alpha < \frac{1}{\lambda }\) holds. However, every \(\alpha _j\) belongs to some \(v_i\), hence \(1 = \sum _{j=1}^{n} \alpha _j \le \sum _{i=1}^{\lambda } \sum _{\alpha \in v_i} \alpha < \lambda \cdot \frac{1}{\lambda } = 1\), a contradiction.
\(\square \)
Theorem 5
Given n non-negative numbers \(\alpha _1, \dots , \alpha _n\) whose sum is 1 and any \(l\le n\), set \(\lambda = {\lceil \frac{n}{l} \rceil }\); then \(\min _{\alpha _1, \dots , \alpha _n} \; \max _{1 \le i \le n-l+1} \left( \alpha _i + \dots + \alpha _{i+l-1}\right) = \frac{1}{\lambda }\).
Proof
By the previous Lemma 3, it is enough to exhibit a set of \(\alpha \)’s so that \(\max _{1 \le i \le n-l+1} \left( \alpha _i + \dots + \alpha _{i+l-1}\right) = \frac{1}{\lambda }\).
For \(1 \le i < \lambda \), set \(\alpha _{l(i-1)+1} = \frac{1}{\lambda }\) and let \(\alpha _n = \frac{1}{\lambda }\). Every other \(\alpha \) is zero. Note that these values are picked so that non-zero values have at least \(l-1\) zero values between each other. Therefore, any sum \(\alpha _i + \dots + \alpha _{i+l-1}\) will contain at most 1 non-zero value and all the non-zero sums will be \(\frac{1}{\lambda }\). The theorem follows immediately. \(\square \)
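The optimal assignment can be checked numerically. The sketch below (ours) builds the spiked distribution from the proof and verifies that it sums to 1 and that every window of l consecutive values sums to at most \(1/\lambda\).

```python
import math
from fractions import Fraction

def optimal_alphas(n, l):
    """alpha_{l(i-1)+1} = 1/lambda for 1 <= i < lambda and alpha_n = 1/lambda,
    zero elsewhere (Theorem 5).  Returned 0-indexed."""
    lam = math.ceil(n / l)
    a = [Fraction(0)] * n
    for i in range(1, lam):
        a[l * (i - 1)] = Fraction(1, lam)
    a[n - 1] = Fraction(1, lam)
    return a

a = optimal_alphas(6, 2)                                   # lambda = 3
assert sum(a) == 1
assert max(sum(a[i:i + 2]) for i in range(5)) == Fraction(1, 3)
```

The non-zero entries are spaced at least l apart, so each window contains at most one of them, exactly as the proof argues.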
This last theorem tells us that the optimal fairness is \(\frac{1}{\lambda }\) and also how this can be achieved. It is easy to see that there are other values of \(\alpha \)’s for which this optimal fairness can be achieved, so what we have proved so far is not enough for a complete classification. Nevertheless, it contains enough information to obtain an optimally-fair protocol.
Corollary 1
Let \(N+1-K\) be a multiple of \({{\,\textrm{lcm}\,}}\{2, 3, \dots , K-1\}\). Then \(\varPi _g^1\) is optimally fair against coalitions of any size.
Proof
By our definition of \(\varPi _g^1=(\mathcal M_1, \mathcal P_1)\) and Lemma 2, we have that the \(\alpha _i\)’s minimise \(\max _{K \le i \le N} \alpha _i\).
Hence, \(\alpha _i = \frac{1}{N+1-K}\) for all i. In particular, all the \(N+1-K\) shortest configurations are equally likely.
Pick any \(L < K\). We already know from Theorem 4 that \(\varPi _g^L = (\mathcal M_L, \mathcal P_L)\) is optimally fair against coalitions of size L. Let \(\mathcal P_L(t_i) = \beta _i\). Note that the shortest configurations in \(\varPi _g^1\) and \(\varPi _g^L\) are the same, therefore \(\beta _i\) and \(\alpha _i\) refer to the same configuration \(t_i\). We know from Lemma 2 that these \(\beta \)’s minimise \(\max _{K \le i \le N-L+1} \left( \beta _i + \dots + \beta _{i+L-1}\right) \).
By Theorem 5, we know that the above has value \(\frac{1}{ \left\lceil \frac{N+1-K}{L} \right\rceil } = \frac{L}{N+1-K}\) since L divides \(N+1-K\). Finally, by Lemma 2 the corollary follows: \(\max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _g^1}\left[ I,S\right] = \max _{K \le i \le N-L+1} \left( \alpha _i + \dots + \alpha _{i+L-1}\right) = \frac{L}{N+1-K} = \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _g^L}\left[ I,S\right] \)
\(\square \)
With this corollary we show how to construct a protocol which is optimally fair against any coalition size. The protocol we described in Sect. 4 can be used to achieve this fairness. In particular, using the notation of Fig. 1 and assuming the parties agreed on an N for which Corollary 1 applies, \(\mathbf {P_1}\) needs to construct a permutation \(\pi _i\) for each of the \(N+1-K\) “shortest” configurations. Hence, \(\pi _i[x] = i-1 + ( (x-i) \bmod K)\), i.e. \(\pi _i\) rotates the xth row forward by \(\pi _i[x]\). We also note that the protocol of Fig. 1 can be used to achieve the distribution of \(\varPi ^L_g\) for any L. Thus, for large values of K, since using values of N suitable for Corollary 1 is prohibitive, the protocol of Fig. 1 can be tuned to be optimal against specific values of L (rather than all).
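As a sanity check (our own sketch, with hypothetical names), the uniform distribution of Corollary 1 attains the Theorem 5 optimum \(L/(N+1-K)\) simultaneously for every coalition size \(L < K\):

```python
from fractions import Fraction
from math import lcm

def uniform_worst_case(N, K, L):
    """Worst-case win probability of a size-L coalition when the N+1-K
    shortest configurations are equally likely (the protocol Pi_g^1)."""
    n = N + 1 - K
    alphas = [Fraction(1, n)] * n
    return max(sum(alphas[i:i + L]) for i in range(n - L + 1))

K = 4
n = lcm(*range(2, K))      # lcm{2, 3} = 6, as required by Corollary 1
N = n + K - 1              # so that N + 1 - K = 6
for L in range(1, K):
    assert uniform_worst_case(N, K, L) == Fraction(L, n)
```

The divisibility condition on \(N+1-K\) is what makes a single distribution optimal for every L at once.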
7 Conclusions and Future Developments
In this paper, we have proposed an efficient exchange protocol which achieves optimal partial fairness even in the presence of a dishonest majority. This is achieved by concealing when the exchange actually happens among a linear number of dummy messages. Our concrete instantiation of the protocol takes advantage of delay encryption and commutative blinding and provides security against passive adversaries. Using our protocol in conjunction with other SMPC protocols improves the known bounds of partial fairness in the presence of a dishonest majority. In this regard, we provide a large improvement over the current state of the art [4, 9, 16]. We also provide a deep and abstract analysis of the protocols achieving partial fairness by concealing the point where the secrets are exchanged. Since our analysis does not involve any assumption on the cryptographic primitives used, it proves a very general impossibility result and shows the optimality of our protocol.

However, our research leaves a couple of unanswered questions. Our abstract analysis does not provide a complete classification of the optimally-fair protocols. Therefore, could a protocol be designed to achieve optimal fairness without the use of delay encryption? Moreover, the protocols of [4] achieve complete fairness if an honest majority is present. This leaves open the question of whether our method could be modified to obtain the same feature.
Notes
1. See Appendix A and [16] for a formal definition of partial fairness and Sect. 3 for an intuitive description.
2. This is why we need \(K+1\) blinding functions in the general definition of Sect. 3.5.
3. See Appendix A for a proof of this fact.
4. The other cases are a simplification of what is presented here, hence omitted for brevity.
5. Some of these configurations may not exist if \(S+L>N\), so they can be removed from the set.
6. Once again, some of these \(\alpha \)’s might refer to non-existing configurations; in such a scenario those values can be considered to be zero.
7. It might be the case that for all small i up to some x, \(t_i\) does not exist. For instance, say that the first K messages are all sent by two parties; then the “earliest” configuration t will have \(\max t \ge 2K-2\) and therefore \(t = t_{K}\) doesn’t exist. If \(t_i\) as described above doesn’t exist, we let it be one of the many configurations where \(\max t_i\) is minimal (dropping the constraint of it being short).
8. T might contain configurations \(t_i, t_j\) with \(\mathcal M_b(i) = \mathcal M_b(j)\), in which case I will not have size L. If this is the case, I can be extended to a coalition of size L arbitrarily, so we assume \(|I|=L\).
9. See Fig. 3 for a precise definition of the \(\mathrm {IND\hbox {-}CPA}\) game for F.
References
Alcaide, A., Estevez-Tapiador, J.M., Hernandez-Castro, J.C., Ribagorda, A.: A multi-party rational exchange protocol. In: Meersman, R., Tari, Z., Herrero, P. (eds.) OTM 2007. LNCS, vol. 4805, pp. 42–43. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-76888-3_21
Andrychowicz, M., Dziembowski, S., Malinowski, D., Mazurek, Ł.: Fair two-party computations via bitcoin deposits. In: Böhme, R., Brenner, M., Moore, T., Smith, M. (eds.) FC 2014. LNCS, vol. 8438, pp. 105–121. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44774-1_8
Asokan, N., Schunter, M., Waidner, M.: Optimistic protocols for multi-party fair exchange. IBM Research Division (1996)
Beimel, A., Lindell, Y., Omri, E., Orlov, I.: \(\frac{1}{p}\)-secure multiparty computation without an honest majority and the best of both worlds. J. Cryptol. 33(4), 1659–1731 (2020). https://doi.org/10.1007/s00145-020-09354-z
Ben-Or, M., Goldreich, O., Micali, S., Rivest, R.L.: A fair protocol for signing contracts. IEEE Trans. Inf. Theor. 36(1), 40–46 (1990). https://doi.org/10.1109/18.50372
Boneh, D., Naor, M.: Timed commitments. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 236–254. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44598-6_15
Canetti, R.: Security and composition of multiparty cryptographic protocols. J. Cryptol. 13(1), 143–202 (2000). https://doi.org/10.1007/s001459910006
Cleve, R.: Limits on the security of coin flips when half the processors are faulty. In: Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing, STOC 1986, pp. 364–369. Association for Computing Machinery, New York, NY, USA (1986). https://doi.org/10.1145/12130.12168
Couteau, G., Roscoe, A.W., Ryan, P.Y.A.: Partially-fair computation from timed-release encryption and oblivious transfer. In: Baek, J., Ruj, S. (eds.) ACISP 2021. LNCS, vol. 13083, pp. 330–349. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-90567-5_17
Damgård, I., Nielsen, J.B.: Improved non-committing encryption schemes based on a general complexity assumption. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 432–450. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44598-6_27
Damgård, I.B.: Practical and provably secure release of a secret and exchange of signatures. J. Cryptol. 8(4), 201–222 (1995). https://doi.org/10.1007/BF00191356
Feng, B., Deng, R., Nguyen, K.Q., Varadharajan, V.: Multi-party fair exchange with an off-line trusted neutral party. In: Proceedings of the Tenth International Workshop on Database and Expert Systems Applications, DEXA 1999, pp. 858–862 (1999). https://doi.org/10.1109/DEXA.1999.795294
Franklin, M., Tsudik, G.: Secure group barter: multi-party fair exchange with semi-trusted neutral parties. In: Hirchfeld, R. (ed.) FC 1998. LNCS, vol. 1465, pp. 90–102. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0055475
Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game. In: Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, STOC 1987, pp. 218–229. Association for Computing Machinery, New York, NY, USA (1987). https://doi.org/10.1145/28395.28420
Goldreich, O.: Foundations of Cryptography, vol. 2. Cambridge University Press (2004). https://doi.org/10.1017/CBO9780511721656
Gordon, S.D., Katz, J.: Partial fairness in secure two-party computation. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 157–176. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13190-5_8
Jakobsson, M.: Flash mixing. In: Proceedings of the Eighteenth Annual ACM Symposium on Principles of Distributed Computing, PODC 1999, pp. 83–89. Association for Computing Machinery, New York, NY, USA (1999). https://doi.org/10.1145/301308.301333
Kılınç, H., Küpçü, A.: Efficiently making secure two-party computation fair. In: Grossklags, J., Preneel, B. (eds.) FC 2016. LNCS, vol. 9603, pp. 188–207. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54970-4_11
Küpçü, A., Mohassel, P.: Fast optimistically fair cut-and-choose 2PC. In: Grossklags, J., Preneel, B. (eds.) FC 2016. LNCS, vol. 9603, pp. 208–228. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54970-4_12
Maffei, I., Roscoe, A.W.: Delay encryption by cubing (2022). https://doi.org/10.48550/ARXIV.2205.05594
Pagnia, H., Gärtner, F.C.: On the impossibility of fair exchange without a trusted third party. Technical report, Darmstadt University of Technology (1999). https://www.cs.utexas.edu/~shmat/courses/cs395t_fall04/pagnia.pdf
Prabhakaran, M.M., Sahai, A.: Secure Multi-Party Computation. IOS Press (2013). https://ebookcentral.proquest.com/lib/oxford/detail.action?docID=1137458
Rivest, R.L., Shamir, A., Wagner, D.A.: Time-lock Puzzles and Timed-release Crypto. Report, Massachusetts Institute of Technology (1996)
Roscoe, A.W., Ryan, P.Y.A.: Auditable PAKEs: approaching fair exchange without a TTP. In: Stajano, F., Anderson, J., Christianson, B., Matyáš, V. (eds.) Security Protocols 2017. LNCS, vol. 10476, pp. 278–297. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-71075-4_31
Appendix A Formal security proof
In this section, we provide some proof sketches for the theorems presented in Sect. 5. More details are available in the full version of the paper. We assume the setting described in Sect. 3 and consider only the exchange in its simplest form, after the reduction of Sect. 4.
Proof
(Theorem 1 proof). Consider the function described in Fig. 2. It computes the view of the protocol of an adversary \(\mathcal {A}\) controlling all parties but \(\mathbf {P_\mathcal H}\). For brevity, Fig. 2 only shows how the messages that depend on \(\mathbf {P_\mathcal H}\) are computed. If \({{\,\mathrm{\textsf{Delay}}\,}}\) is COA secure and \(D_\mathcal H\) is not opened yet, then \(\mathcal {A}\) has no access to \(s_\mathcal H\). So assume that the contents of \(D_\mathcal H\) are known to \(\mathcal {A}\); then \(\mathcal {A}\) must obtain \(\boldsymbol{k_\mathcal H}[1]\). By using the commutativity property of \({{\,\mathrm{\textsf{Blind}}\,}}\), we can remove the appearance of \(\boldsymbol{b_\mathcal H}[1]\) from the computation of \(\boldsymbol{M^\mathcal H}\). Now modify \(\textsf {ColView}\) into \(\textsf {ColView}_{2}\) which only outputs the messages of \(\boldsymbol{M^\mathcal H}\) up to (excluding) the message containing \(\boldsymbol{k_\mathcal H}[1]\). Since \({{\,\mathrm{\textsf{Blind}}\,}}\) is \(\mathrm {IND\hbox {-}CPA}\) and \(\boldsymbol{b_\mathcal H}[1]\) only appears in \(\boldsymbol{L_\mathcal H}\), \(\textsf {ColView}_{2}\) is computationally indistinguishable from \(\textsf {ColView}_{3}\), which behaves the same but constructs \(\boldsymbol{L_\mathcal H}\) using some random (and known by \(\mathcal {A}\)) key \(\varepsilon \) instead of \(\boldsymbol{k_\mathcal H}[1]\). In \(\textsf {ColView}_{3}\), \(\mathcal {A}\) cannot obtain \(s_\mathcal H\). Thus, \(\mathcal {A}\) can obtain \(s_\mathcal H\) only if \(D_\mathcal H\) is opened and the part of \(\boldsymbol{M^\mathcal H}\) containing \(\boldsymbol{k_\mathcal H}[1]\) is exchanged. \(\square \)
Proof
(Theorem 2 proof). To prove the statement we need to construct a simulator \(\mathcal {S}\) in the ideal world for any non-uniform PPT adversary \(\mathcal {A}\) in the hybrid model. So, fix an adversary \(\mathcal {A}\); the simulator \(\mathcal {S}\) works as follows:
- \(\mathcal {S}\) runs a local instance of \(\mathcal {A}\), to which it passes the secrets of the corrupted parties as well as the auxiliary information;
- whenever \(\mathcal {A}\) aborts, so does \(\mathcal {S}\);
- when \(\mathcal {A}\) interacts with the functionality performing the setup phase, \(\mathcal {S}\) picks a random input \(s'_\mathcal H\) and carries out the functionality \(\mathcal F_{\text {setup}}\);
- during the exchange phase, \(\mathcal {S}\) acts as \(\mathbf {P_\mathcal H}\) and behaves according to the output produced when simulating the functionality \(\mathcal F_{\text {setup}}\);
- when \(\mathcal {S}\) should send the exchange message containing \(\boldsymbol{k_\mathcal H}[1]\), \(\mathcal {S}\) interacts with \(\mathcal F_{\text {ex}}\) to carry out the exchange in the ideal world and obtain the real secret \(s_\mathcal H\). Using the non-committing property of the encryption scheme, \(\mathcal {S}\) can compute a key which decrypts the ciphertext \(\textsf{Enc}_{\boldsymbol{k_\mathcal H}[1]}(s'_\mathcal H)\) that \(\mathcal {A}\) holds into the real secret \(s_\mathcal H\). \(\mathcal {S}\) uses this key in the exchange message;
- when \(\mathcal {A}\) returns, \(\mathcal {S}\) outputs the same result.
Firstly, the view of \(\mathcal {A}\) in its interaction with \(\mathcal {S}\) is the same as it would be in the real world. This follows from Theorem 1 and the non-committing property of the encryption scheme. Hence, the output of \(\mathcal {S}\) in the ideal world is the same as the output of \(\mathcal {A}\) in the real world. However, the output of \(\mathbf {P_\mathcal H}\) might differ. If \(\mathcal {A}\) aborts the communication when one of the corrupted parties would have sent a message that reveals their secret, then such secret is kept confidential in the real world, while it is not in the ideal world. Thanks to Sects. 4.2 and 6, and the use of the ideal functionality \(\mathcal F_{\text {setup}}\), we know that \(\mathcal {A}\) will abort at such a point with probability at most \(\frac{K-1}{N+1-K}\). \(\square \)
Before proving the correctness of our setup phase, we show that the helper function F defined in Fig. 1 is an \(\mathrm {IND\hbox {-}CPA}\) encryption scheme.
Lemma 4
If \({{\,\mathrm{\textsf{Blind}}\,}}\) is \(\mathrm {IND\hbox {-}CPA}\), then function F(M, f, b, g) defined in Fig. 1 is an \(\mathrm {IND\hbox {-}CPA}\) encryption scheme of (M, f) provided that g is not fixed.Footnote 9
Proof (sketch)
Consider the Impossible game of Fig. 3. Intuitively, \(\mathcal {A}\) wins the impossible game if they can pick a permutation \(\tau \) so that they can tell if a list of unknown values has been permuted with \(\tau \) or not. We rely on the fact that k is kept confidential, so \({{\,\mathrm{\textsf{Blind}}\,}}^*(\varepsilon , k)\) is an unpredictable \(n\times K\) matrix of values for \(\mathcal {A}\) (note that if \({{\,\mathrm{\textsf{Blind}}\,}}\) is deterministic, then the impossibility of the game is immediate). Now assume that \(\mathcal {B}\) is a PPT adversary so that \(\mathrm {IND\hbox {-}CPA}^\mathcal {B}_F(1^\lambda )=1\) with probability greater than \(\frac{1}{2} + \textsf {negl}(\lambda )\) for all negligible functions \(\textsf {negl}(\lambda )\). We then argue that Reduction\(^\mathcal {B}(1^\lambda ) = 1\) with the same probability. In particular, note that \(\textsf {R}(M_0, M_1, f_1, f_0) = \textsf {O}(M_0, M_1, f_1, f_0)\). However, solving the Reduction game means solving the Impossible game since, in Reduction\(^\mathcal {B}(1^\lambda )\), the bit b is used (implicitly) only in the call to \(\textsf {shuffle}\). Therefore, no such \(\mathcal {B}\) can exist. \(\square \)
Proof
(Theorem 3 proof). Consider Fig. 2: the function \(\textsf {ColView}\) returns the view of the dishonest entity \(\mathcal {A}\) controlling all parties but \(\mathbf {P_\mathcal H}\). For brevity, Fig. 2 only shows how the messages that depend on \(\mathbf {P_\mathcal H}\) are computed. Firstly, since we assume the security of the time-release encryption, we ignore the delayed messages. By using the fact that F is \(\mathrm {IND\hbox {-}CPA}\), we note that \(\textsf {ColView}\) is computationally indistinguishable from \(\textsf {ColView}_{2}\), which behaves the same but \(\boldsymbol{S^\mathcal H}\) is replaced by \(\langle { F(M_\varepsilon , o_\varepsilon , \boldsymbol{b_\mathcal H}[2], \boldsymbol{\sigma _{\mathcal H}}'[i]) \mid 1 \le i \le \delta }\rangle \) for some fixed (and known by \(\mathcal {A}\)) \(M_\varepsilon , o_\varepsilon \) and fresh \(\boldsymbol{\sigma _{\mathcal H}}'[i]\). This is enough to show that the setup phase doesn’t leak any information about \(\tau _\mathcal H\) (since \(\tau _\mathcal H\) doesn’t appear at all in \(\textsf {ColView}_{2}\)). \(\square \)
© 2024 The Author(s)

Maffei, I., Roscoe, A.W. (2024). Optimally-Fair Multi-party Exchange Without Trusted Parties. In: Tsudik, G., Conti, M., Liang, K., Smaragdakis, G. (eds.) Computer Security – ESORICS 2023. LNCS, vol. 14344. Springer, Cham. https://doi.org/10.1007/978-3-031-50594-2_16