
1 Introduction

A multi-party fair exchange protocol allows any number of parties to exchange items in an arbitrary way while guaranteeing fairness: either everyone receives all the items they expected or nobody obtains anything. The problem of fair exchange has primarily been studied in the two-party setting, where it was already shown to be impossible in 1986 by Cleve [8] and later by Pagnia and Gärtner [21]. Fair exchange is often solved with the aid of a trusted party, either being present (e.g. [13]) or acting only as a judge in case of misbehaviour (e.g. [3, 12]). The protocols following the latter approach are called optimistic since they assume that most of the time the parties involved will all be honest. Among these, a recent approach involves the use of cryptocurrencies as collateral for failed exchanges (e.g. [2]). Alternative solutions require stronger assumptions, such as similar general or sequential computing power among the parties [6, 11]. A more satisfactory approach is the weakening of the fairness requirement to allow a small but not negligible chance of failure [5, 9, 24]. Other approaches to multi-party fair exchange without introducing trust are based on rationality assumptions [1].

A related research field is the study of Secure Multi-Party Computation (SMPC). The problem tackled by this branch of computer science is how a group of parties, each with a private input, can compute a function on these inputs without revealing them. This problem appears to be more general than simple fair exchange, yet SMPC does not always require fairness; instead, it mainly focuses on ensuring that the outputs are correct. In particular, in the presence of a dishonest majority, SMPC protocols allow the computation of an arbitrary functionality guaranteeing privacy and correctness, but not fairness [15]. That is, the private inputs are kept private and the outputs are guaranteed to be correct, but the dishonest parties can choose not to let the honest parties receive the outputs.
This is clearly unsatisfactory in the context of fair exchange, yet the theory of SMPC proves that fairness cannot be achieved in this scenario. In brief, SMPC can be achieved with “standard assumptions” under the following circumstances [15, 22]:

  • in the presence of any number of passive adversaries;

  • in the presence of an active adversary controlling only a minority of the parties;

  • in the presence of an active adversary controlling a majority of the parties if early abortion is not a security violation.

The above results show that the fair exchange problem can be solved in the presence of an honest majority (e.g. via the use of secret sharing), but it is not possible otherwise. To escape this impossibility result, we follow the approach of partial fairness set out by Gordon and Katz [16] and applied in the multi-party setting by Beimel et al. [4].

1.1 Our Contribution

In this paper, we propose a protocol that achieves optimal partial fairness in the presence of a dishonest majority. In particular, we design a protocol where the fairness requirement is weakened so that there is a small probability that the exchange is not fair. This protocol follows the blueprint of other protocols (e.g. [4, 9, 16, 24]) where the secrets are hidden among other dummy messages so that no party knows when the secrets are truly exchanged. By hiding the secrets among other items, the parties can exchange messages without anyone knowing whether they have received or sent any secret. This lack of information leads to the guarantee of partial fairness, i.e. only a lucky guess will break fairness. Moreover, the specific way in which the secrets are distributed leads to the optimality result. We vastly improve on the round complexity of [4] and escape their impossibility results via the use of Timed-Release Encryption [23]. In particular, Beimel et al.’s protocols that are fair with probability \(1 - \frac{1}{p}\) against an arbitrary number m of malicious entities require at least \(O(m p^{2^m})\) rounds, while ours requires roughly mp rounds. The impossibility result presented in [4] states that, for some parameter depending on the number of malicious entities and the functionality to compute, fairness cannot be guaranteed with probability higher than \(\frac{1}{2} + \textsf {negl}(\lambda )\). By assuming the existence of Timed-Release Encryption, our protocol can achieve fairness with any user-defined probability regardless of the context in which it is used. In particular, our protocol can be rendered fair against malicious adversaries with standard zero-knowledge proof techniques [14, 15, 22], and fair exchange can be used to achieve fairness in SMPC [18, 19].

We also present a general model to easily describe exchange protocols based on hiding and, within this model, prove an upper bound on partial fairness as well as a protocol that achieves this bound. Our impossibility result makes no assumptions on the cryptographic primitives used, and is therefore much stronger than the previous ones we circumvent in this paper. Therefore, our contribution to the body of knowledge is twofold:

  1. An abstract and theoretical analysis of multi-party protocols based on hiding the actual exchange of secrets;

  2. An efficient protocol that achieves optimal fairness.

In the next section, we describe the setting of a multi-party fair exchange as well as the adversarial model we consider. Section 3 is dedicated to the security definitions and requirements of the cryptographic primitives used in our protocol, which is described in Sect. 4. The security of the protocol is sketched in Appendix A. The paper concludes with an extensive abstract analysis of the multi-party protocols based on hiding, which also proves the optimality of our protocol.

2 Multi-party Fair Exchange

In this section, we describe the context of our study as well as the adversarial model. We follow the most general definition of Asokan et al. [3]. We assume the presence of K parties \(\mathbf {P_1}, \dots , \mathbf {P_K}\), and we define a matrix \(\varSigma \) to describe an arbitrary exchange. In particular, \(\varSigma _{i,j}\) is the secret that \(\mathbf {P_i}\) is meant to send to \(\mathbf {P_j}\). In this model, we set no restriction on the type of exchange, but we do assume that the network is a complete graph, i.e. any party can communicate with any other party. Intuitively, we say that an exchange is fair if and only if either all parties receive all the items they expect or nobody receives anything. Even if the exchange \(\varSigma \) is built from two separate sub-exchanges, requiring that the whole of \(\varSigma \) be fair ties the completion of one sub-exchange to the completion of the other. Although this constraint might seem unreasonable in the abstract model, it is quite common in the real world, e.g. when moving to a new house the two separate transactions of buying the new place and selling the old one typically depend on each other. In Sect. 4, we will show that this complex model for fair exchange can be reduced to a much simpler one. In particular, each of the K parties is given a secret \(s_i\), and the aim of the exchange is for all parties to know all secrets. We assume that knowing all but one secret is of no use. This is justified by thinking about these secrets as shares of some larger secret (often the case in SMPC). In this simpler model, the exchange is fair if and only if whenever a party discloses their secret, they receive all the other secrets.

2.1 Adversarial Model

We assume that the network topology is complete, but communication channels do not need to be secure beyond guaranteeing confidentiality; stronger properties are not needed since adversaries are passive. Delivery is not guaranteed, and we assume the use of agreed-upon timeouts to universally establish whether the protocol has been aborted. We are only interested in the case where at least one honest party is involved in the protocol, and we assume that corruptions are static (non-adaptive adversary), i.e. the coalitions of malicious parties do not change after the protocol starts. In our discussions, we call a group of malicious parties which coordinate and share knowledge a coalition. Alternatively, the reader can think of a coalition of parties as a group of entities controlled by a single malicious actor. Crucially, we allow the presence of a dishonest majority, but adversaries are passive, i.e. they will follow the protocol, but can abort communication at any point in time without detection.

3 Preliminaries

In this section, we introduce all the cryptographic primitives used in our protocol as well as the security requirements they need to satisfy.

3.1 Notation

We write an n-length vector (or array) from a set X as \(\boldsymbol{v} \in X^n\), and we denote its ith element by \(\boldsymbol{v}[i]\) where \(1 \le i \le n\). If we have a function \(f: X \rightarrow Y\) and a vector \(\boldsymbol{v} \in X^n\), then we write \(f^*(\boldsymbol{v})\) to mean the vector \(\langle f(\boldsymbol{v}[i]) \mid 1 \le i \le n\rangle \) where f is applied to each element. We use \(S_n\) for the group of permutations over the set \(\{1, \dots ,n\}\), but for \(\sigma \in S_n\) and \(\boldsymbol{v} \in X^n\), we will write \(\sigma (\boldsymbol{v})\) to mean the vector \(\boldsymbol{w}=\langle \boldsymbol{v}[ \sigma ^{-1}(i)] \mid 1\le i \le n\rangle \). That is, if \(\sigma (a) = b\), then \(\boldsymbol{w}[b] = \boldsymbol{v}[a]\). We write \(R_n\) for the subset of rotations in \(S_n\): \(\{\tau \in S_n \mid \forall i\; \tau (i+1) = (\tau (i) +1 \bmod n)\}\). If \(\boldsymbol{\sigma } \in S_n^K\) and M is a \(K\times n\) matrix, then \(\boldsymbol{\sigma }(M)\) is the result of applying \(\boldsymbol{\sigma }[i]\) to the ith row of M. We use \(a\Vert b\) for the concatenation of a and b where these are interpreted as bitstrings.
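As a concrete illustration of this notation, the following minimal Python sketch implements \(\sigma (\boldsymbol{v})\), rotations, and the row-wise action \(\boldsymbol{\sigma }(M)\) (0-indexed here, whereas the paper indexes from 1; all helper names are ours):

```python
# Sketch of the vector/permutation notation of Sect. 3.1 (0-indexed).

def apply_perm(sigma, v):
    """sigma(v): the element at index a moves to index sigma[a],
    i.e. w[sigma(a)] = v[a], equivalently w[i] = v[sigma^{-1}(i)]."""
    w = [None] * len(v)
    for a, b in enumerate(sigma):
        w[b] = v[a]
    return w

def rotation(n, r):
    """The rotation tau in R_n with tau(i) = i + r mod n."""
    return [(i + r) % n for i in range(n)]

def apply_perm_vec(sigmas, M):
    """sigma(M): apply sigmas[i] to the i-th row of the K x n matrix M."""
    return [apply_perm(s, row) for s, row in zip(sigmas, M)]

v = ["a", "b", "c", "d"]
tau = rotation(4, 1)      # sends index i to i + 1 mod 4
w = apply_perm(tau, v)    # ["d", "a", "b", "c"]
```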

3.2 Partial Fairness

We define fairness in the usual ideal vs. real world setting. Assume an ideal world where we have an incorruptible trusted party which acts as an intermediary collecting and delivering the secrets. We say that our protocol is \(\frac{1}{p}\)-fair if for any adversary to our protocol there is a simulator in the ideal world so that the two runs are computationally indistinguishable with probability \(1 -\frac{1}{p} - \textsf {negl}(\lambda )\) (where \(\textsf {negl}(\lambda )\) is a negligible function of the security parameter \(\lambda \)). The above means that our \(\frac{1}{p}\)-fair protocol can be unfair with probability at most \(\frac{1}{p} + \textsf {negl}(\lambda )\). See Appendix A for a formal definition.

3.3 Symmetric Encryption

Our protocol will take advantage of a symmetric encryption scheme \((\textsf{KGen}, \textsf{Enc}, \textsf{Dec})\) which we only require to be COA (Ciphertext Only Attack) secure, and each key will only be used once. However, in order to formally prove the security of our protocol, we require the encryption scheme to be non-committing [10]. This means that we require the additional property that for any message-message-key triple \((m_1, m_2, k_1)\) there is another key \(k_2\) such that \( \textsf{Dec}_{k_2}(\textsf{Enc}_{k_1}(m_1)) = m_2\). For simplicity, the reader can assume that the scheme used is the one-time pad.
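For the one-time pad the non-committing property is immediate: the alternative key is obtained by XORing the ciphertext with the desired message. A minimal byte-level sketch (helper names ours):

```python
# The one-time pad is non-committing: given (m1, m2, k1), the key
# k2 = Enc_{k1}(m1) XOR m2 opens the old ciphertext as m2.
import secrets

def enc(k, m):                      # one-time pad over bytes
    return bytes(a ^ b for a, b in zip(k, m))

dec = enc                           # XOR is its own inverse

m1, m2 = b"attack at dawn!!", b"retreat at noon!"
k1 = secrets.token_bytes(len(m1))
c = enc(k1, m1)

k2 = bytes(a ^ b for a, b in zip(c, m2))
assert dec(k2, c) == m2             # c now "decrypts" to m2
assert dec(k1, c) == m1             # while k1 still opens it as m1
```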

3.4 Delay Encryption

Delay encryption (sometimes called Timed-Release Encryption [23]) is an unusual cryptographic primitive whose aim is not to provide confidentiality of a message from other parties, but to hide the message from everyone for some predefined amount of time. For the reader accustomed to timed-release encryption, what we use and define is a “delay” time-lock-puzzle-based encryption scheme rather than a time-specific scheme using trusted parties. This choice is justified because we do not want to introduce trust in the fair exchange context.

More formally, a delay encryption scheme is a triple of algorithms \((\textsf{Pgen}, {{\,\mathrm{\textsf{Delay}}\,}}, {{\,\mathrm{\textsf{Open}}\,}})\) with associated sets \(\mathcal {M}, \mathcal {C}, \mathcal {P}\) such that

$$\begin{aligned} \textsf{Pgen}&:\left\{ 1\right\} ^* \times \mathbb {N}\rightarrow \mathcal {P}& {{\,\mathrm{\textsf{Delay}}\,}}&:\mathcal {M}\times \mathcal {P}\rightarrow \mathcal {C}& {{\,\mathrm{\textsf{Open}}\,}}&:\mathcal {C}\times \mathcal {P}\rightarrow \mathcal {M}\end{aligned}$$

Intuitively, \(\textsf{Pgen}(1^\lambda , T)\) generates public parameters \( params \) that are used to delay a message for time T. We omit \( params \) and write \({{\,\mathrm{\textsf{Delay}}\,}}_T(m)\) for the ciphertext delaying m for elapsed time T. Similarly, we omit \( params \) and write \({{\,\mathrm{\textsf{Open}}\,}}(c)\). Practically speaking, \({{\,\mathrm{\textsf{Delay}}\,}}_T(m)\) will create a puzzle c so that \({{\,\mathrm{\textsf{Open}}\,}}(c)\) can solve the puzzle and obtain m only after at least T (sequential) time. Thus, \({{\,\mathrm{\textsf{Open}}\,}}\) will most likely not be a PPT algorithm. However, an honest party with moderate computational power should be able to run \({{\,\mathrm{\textsf{Open}}\,}}(c)\) in sequential time not much longer than T. This might be \(\mu T\) for a small integer \(\mu \) such as 10 or 20.

In order for our scheme to make sense, we set the following requirement

$$ \forall m \in \mathcal {M}\; \forall \,T \in \mathbb {N}\quad {{\,\mathrm{\textsf{Open}}\,}}({{\,\mathrm{\textsf{Delay}}\,}}_T(m)) = m $$

We say that a delay encryption is COA-secure if for any family of circuits \(\mathcal {A}\) of conceivable size and depth at most \(\mu (\lambda ) T\), we have

$$ \Pr \left[ \mathcal {A}({{\,\mathrm{\textsf{Delay}}\,}}_T(m_b), m_0, m_1) = b \right] \le \frac{1}{2} + \textsf {negl}(\lambda ) $$

where \(m_0, m_1 \in \mathcal {M}\) are any two messages and \(b\) is a uniformly random bit.

Intuitively, a COA-secure delay encryption scheme correctly hides encrypted messages for the expected amount of time. We remark that the size of such circuits will depend on the current state of technology. As noted in [20], allowing all polynomially-sized circuits could lead to misleading results with circuits much larger than what is feasible at the time of writing.
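As a concrete (toy) illustration, the repeated-squaring time-lock puzzle of Rivest, Shamir and Wagner [23] fits the \((\textsf{Pgen}, {{\,\mathrm{\textsf{Delay}}\,}}, {{\,\mathrm{\textsf{Open}}\,}})\) interface: the puzzle creator shortcuts the computation via a trapdoor, while the opener must perform T inherently sequential squarings. The sketch below uses far-too-small fixed primes and serves only to show the mechanics:

```python
# Toy sketch of the repeated-squaring time-lock puzzle (Rivest, Shamir,
# Wagner), one way to instantiate (Pgen, Delay, Open). The fixed primes
# are far too small to be secure; illustration only.
import random
from math import gcd

def pgen(T):
    p, q = 1000003, 1000033          # real schemes use large random primes
    return {"n": p * q, "phi": (p - 1) * (q - 1), "T": T}

def delay(m, params):
    n, T = params["n"], params["T"]
    a = random.randrange(2, n)
    while gcd(a, n) != 1:
        a = random.randrange(2, n)
    # The creator shortcuts the T squarings via the trapdoor phi(n).
    b = pow(a, pow(2, T, params["phi"]), n)
    return (a, (m + b) % n)          # mask the message with b

def open_puzzle(c, params):
    a, masked = c
    b = a
    for _ in range(params["T"]):     # T sequential squarings, no shortcut
        b = b * b % params["n"]
    return (masked - b) % params["n"]

params = pgen(T=1000)
assert open_puzzle(delay(123456789, params), params) == 123456789
```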

3.5 Commutative Blinding

A feature of our protocol is the use of commutative blinding (a.k.a. commutative encryption) to efficiently shuffle the secrets. The need for commutativity arises because the first party blinding the message can never reveal their blinding key. Therefore, they need to unblind the message after it has been blinded by the other parties. We first present a definition that is easy to understand but stricter than what the protocol needs. We use this definition for ease of presentation, but discuss the more general one in this section since it will be used in the formal proof of security.

Definition 1

We call a pair of PPT algorithms \(({{\,\mathrm{\textsf{Blind}}\,}}, {{\,\mathrm{\textsf{Unblind}}\,}})\) a commutative blinding scheme if

$$\begin{aligned} {{\,\mathrm{\textsf{Blind}}\,}}&:\mathcal {M}\times \mathcal {K}\rightarrow \mathcal {C}& {{\,\mathrm{\textsf{Unblind}}\,}}& :\mathcal {C}\times \mathcal {K}\rightarrow \mathcal {M}\end{aligned}$$
$$ \forall m \in \mathcal {M}\; \forall k \in \mathcal {K}\; {{\,\mathrm{\textsf{Unblind}}\,}}({{\,\mathrm{\textsf{Blind}}\,}}(m, k), k) = m $$

Moreover the commutativity property is given by

$$ \forall m \in \mathcal {M}\; \forall k_1, k_2 \in \mathcal {K}\; {{\,\mathrm{\textsf{Blind}}\,}}({{\,\mathrm{\textsf{Blind}}\,}}(m, k_1), k_2) = {{\,\mathrm{\textsf{Blind}}\,}}({{\,\mathrm{\textsf{Blind}}\,}}(m, k_2), k_1) $$

In its full generality, to run a protocol with K parties, we only require \(K+1\) pairs \(({{\,\mathrm{\textsf{Blind}}\,}}_0, {{\,\mathrm{\textsf{Unblind}}\,}}_0), \dots , ({{\,\mathrm{\textsf{Blind}}\,}}_{K}, {{\,\mathrm{\textsf{Unblind}}\,}}_{K})\), such that

$$\begin{aligned} {{\,\mathrm{\textsf{Unblind}}\,}}_{K}(\dots {{\,\mathrm{\textsf{Unblind}}\,}}_0({{\,\mathrm{\textsf{Blind}}\,}}_{K}(\dots {{\,\mathrm{\textsf{Blind}}\,}}_0(m, k_0) \dots , k_{K}), k_0), \dots , k_{K}) = m \end{aligned}$$

Writing the protocol and security requirements using the above would result in a hardly readable description. Therefore, we use the stronger commutativity property and simplify the exposition of the protocol. In terms of security requirements, we need the blinding to satisfy the usual \(\mathrm {IND\hbox {-}CPA}\) definition. In particular, we require each \({{\,\mathrm{\textsf{Blind}}\,}}_i\) to be \(\mathrm {IND\hbox {-}CPA}\) secure. As an example, this primitive could be instantiated using ElGamal and the re-encryption procedure often used in voting schemes (e.g. [17]). A proof of the above, together with a more precise and slightly more relaxed security definition, is given in the full version of the paper.
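To make the commutativity property of Definition 1 concrete, the following toy sketch uses commutative exponentiation modulo a prime (the classic SRA construction). Being deterministic, it does not meet the \(\mathrm {IND\hbox {-}CPA}\) requirement; an actual instantiation would follow the ElGamal-based approach mentioned above. All names are ours:

```python
# Toy commutative blinding: Blind(m, k) = m^k mod P. Exponents commute,
# so blinding/unblinding order does not matter. NOT IND-CPA secure
# (deterministic); for illustrating Definition 1 only.
import random
from math import gcd

P = 2**127 - 1            # a Mersenne prime; keys act on exponents mod P-1

def keygen():
    while True:
        k = random.randrange(3, P - 1)
        if gcd(k, P - 1) == 1:    # k must be invertible mod P-1
            return k

def blind(m, k):
    return pow(m, k, P)

def unblind(c, k):
    return pow(c, pow(k, -1, P - 1), P)   # exponentiate by k^{-1} mod P-1

m = 123456789
k1, k2 = keygen(), keygen()
# Commutativity: blinding order does not matter.
assert blind(blind(m, k1), k2) == blind(blind(m, k2), k1)
# Unblindings can also be removed in either order.
assert unblind(unblind(blind(blind(m, k1), k2), k1), k2) == m
```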

4 The Protocol

First, we show a reduction from the general fair exchange described before to the simpler setting considered in the rest of this paper. Recall that we consider K parties \(\{\mathbf {P_1}, \dots , \mathbf {P_K}\}\) and that all items exchanged are encoded in the matrix \(\varSigma \) so that \(\varSigma _{i,j}\) is what \(\mathbf {P_i}\) sends to \(\mathbf {P_j}\). Recall from Sect. 2.1 that we assume the communication channels provide privacy. We now go one step further and assume that privacy is implemented using some (symmetric or asymmetric) encryption scheme. In our exchange protocol, \(\mathbf {P_i}\) holds only one secret \(s_i = \langle {\textsf{Enc}_{k_{i,j}}(\varSigma _{i,j}) \mid j \ne i}\rangle \) where \(k_{i,j}\) is the key \(\mathbf {P_i}\) uses to communicate confidentially with \(\mathbf {P_j}\). Therefore, in the rest of this paper, we assume that each party holds only one compound secret, and that this secret should be revealed to all the other participants.

Our protocol follows the idea of concealing from all parties the point at which the secrets are actually exchanged. Therefore, the protocol can be split into two phases: a setup phase where the shuffling of the secrets occurs, and an exchange phase where the secrets (and dummy values) are exchanged.

In this scenario, we assume the K parties have agreed on a number N of exchange messages as well as a number L which represents the size of the coalition the protocol should be optimal against. We expect that in most cases \(L = K-1\), but some contexts may call for a lower value. As we will see in Sect. 6, the secrets are not placed uniformly at random; instead, there are only a few “configurations” or “states” where they should be. We define \(\langle {\pi _1, \dots , \pi _\delta }\rangle \) to be the permutations that send the lists of secrets and dummies to those valid configurations. Formulae for those permutations and the value \(\delta \) are described in Sect. 6. For ease of notation, we assume that all parties send n exchange messages (i.e. \(N=nK\)).

Fig. 1. Fair exchange protocol.

4.1 Protocol Overview

A crucial aspect of our protocol is that the parties involved will not exchange the secrets directly as in [4, 16]; instead they will encrypt the secrets and exchange the keys of the encryption scheme used. Since the ciphertext is delayed to the end of the protocol using Timed-Release Encryption, any auxiliary information about the secrets is rendered useless. The parties will only be able to test the exchanged keys when the delayed message opens after the protocol has terminated. This is what allows us to escape the impossibility results of [4, 16] as well as drastically improve the efficiency of the protocol. Before analysing what should be included in the delayed messages, we describe how the keys are shuffled.

Initially, all parties will construct a list of n random keys where the first element is the one used to encrypt the secret. All those lists are blinded and sent to \(\mathbf {P_1}\) who will collate them in a matrix so that the xth row is the list sent by \(\mathbf {P_x}\). It is worth remarking that \(\mathbf {P_1}\) is not a trusted party, but one of the parties involved in the exchange. Since this matrix will need to end up in one of the \(\delta \) valid configurations (as defined in Sect. 6), \(\mathbf {P_1}\) creates \(\delta \) pairs \((M_1, f_1), \dots , (M_\delta , f_\delta )\) so that \(\left\{ f_i(M_i) \mid 1 \le i \le \delta \right\} \) is the set of all the valid configurations. This decoupling allows us to shuffle the pairs uniformly. Hence, starting from \(\mathbf {P_1}\), each party will randomly rotate the list of these \(\delta \) pairs and pass it to the next participant. In order to keep those rotations secret, each party will need to mask all the pairs. Masking a pair (M, f) is done in the following way: each element of M is blinded using a commutative blinding scheme, while f is masked by computing \(f\,\circ \, g^{-1}\) for some randomly picked permutation g. The resulting pair \(({{\,\mathrm{\textsf{Blind}}\,}}^*(M,b) , f\,\circ \, g^{-1})\) might not satisfy the property that \((f \,\circ \, g^{-1})({{\,\mathrm{\textsf{Blind}}\,}}^*(M, b))\) is a valid configuration; therefore, we apply g to the matrix M as well. Hence, (M, f) is transformed into \((g({{\,\mathrm{\textsf{Blind}}\,}}^*(M, b)), f \,\circ \, g^{-1})\) for some random permutation g and blinding key b. For security reasons, g will need to be different for each pair. In Fig. 1, the above transformation is encoded in the function F, and \(\boldsymbol{S^x}\) indicates the list of pairs after being rotated x times. The random permutations g are called \(\boldsymbol{\sigma _j}[i]\), where j indicates which party picked it and i indicates which of the \(\delta \) matrices the permutation should be applied to.
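The masking of a single pair can be sketched as follows, with a placeholder XOR standing in for the commutative blinding scheme and permutations as 0-indexed lists (all helper names are ours). The point is that the masked pair still reaches the same configuration once unblinded:

```python
# Sketch of the masking step F on one pair (M, f): blind every entry,
# permute the rows by a random g, and compose f with g^{-1}.
import random

def blind(x, b):           # placeholder for the commutative blinding
    return x ^ b

def apply_perm(g, M):      # g(M): row a of M moves to row g[a]
    out = [None] * len(M)
    for a, row in enumerate(M):
        out[g[a]] = row
    return out

def invert(g):
    g_inv = [0] * len(g)
    for a, b in enumerate(g):
        g_inv[b] = a
    return g_inv

def compose(f, g_inv):     # the permutation f o g^{-1} as a list
    return [f[g_inv[i]] for i in range(len(f))]

def mask(M, f, b):
    g = list(range(len(M)))
    random.shuffle(g)
    Mb = [[blind(x, b) for x in row] for row in M]
    return apply_perm(g, Mb), compose(f, invert(g))

M = [[1, 2], [3, 4], [5, 6]]
f = [2, 0, 1]              # permutation sending M to a valid configuration
M2, f2 = mask(M, f, b=7)
unblinded = [[x ^ 7 for x in row] for row in M2]
# (f o g^{-1}) applied to the masked matrix gives the same configuration.
assert apply_perm(f2, unblinded) == apply_perm(f, M)
```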

The last party \(\mathbf {P_K}\) will receive the list of pairs rotated by all the other parties. Therefore, they will simply rotate and blind one last time and pick the first element (M, f) of the list. By computing f(M), \(\mathbf {P_K}\) obtains a random valid configuration, called \(\boldsymbol{E^K}\) in Fig. 1. The resulting matrix will have its elements blinded by all the parties; moreover, each xth row is blinded by \(\mathbf {P_x}\) twice. All that is left to do is for \(\mathbf {P_K}\) to send back to each party \(\mathbf {P_x}\) their list (the xth row of \(\boldsymbol{E^K}\)). Now all parties can remove their first blinding and proceed to exchange the items in the list one by one.

In order to retrieve the secret, one needs to unblind the exchanged messages and identify which keys were used to encrypt the secrets. Therefore, the delayed message must include the rotation used by each party as well as all of the keys used for the blindings that are not removed. In particular, the setup messages are blinded with keys which are never revealed \(\left\{ \boldsymbol{b_1}[1], \dots , \boldsymbol{b_K}[1]\right\} \). This prevents any malicious adversary from extracting the secrets by looking only at the setup phase. After the setup phase, the parties will exchange the items of their list one at a time.

4.2 Protocol Analysis

In this brief subsection, we analyse the setup phase to understand its result. We note the following (ignoring blinding)

$$\begin{aligned} \boldsymbol{E^K} = \pi _{(\tau _1^{-1}\,\circ \, \dots \,\circ \, \tau _K^{-1})(1)}(\boldsymbol{E}) \end{aligned}$$

If there is at least one honest party, then there is at least one truly random \(\tau _i\), meaning that the permutation \(\pi _i\) picked is uniformly distributed. If no further information is leaked during the exchange phase, then the coalition of corrupted parties has no knowledge of when the \(\boldsymbol{k_i}[1]\) are exchanged. Hence, the corrupted coalition can attack the protocol only by aborting the exchange phase at an arbitrary point. Their strategy is successful only if they stop after having received all the honest parties’ secret keys but without having revealed all of theirs. In Sect. 6, we will prove that the permutations \(\pi _i\) can be picked so that this strategy has the smallest possible success rate.
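The claim that one truly random rotation suffices can be checked numerically: rotations compose additively modulo \(\delta \), so a single uniform summand makes the composed index uniform, whatever the corrupted parties choose. A small sanity check with hypothetical values:

```python
# One uniformly random honest rotation makes the composed rotation
# uniform over the delta configurations (indices 0..delta-1), no matter
# how the coalition fixes its own rotations.
import random
from collections import Counter

delta = 5
adversarial = [3, 1, 4]            # rotations fixed by the coalition

random.seed(0)
counts = Counter()
for _ in range(100_000):
    honest = random.randrange(delta)           # honest party's rotation
    picked = (sum(adversarial) + honest) % delta
    counts[picked] += 1

# Every configuration is picked with probability ~ 1/delta.
assert all(abs(c / 100_000 - 1 / delta) < 0.01 for c in counts.values())
```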

We note that the round complexity of the setup phase is \(K+2\) with 3K messages sent overall. The exchange phase is run in N rounds with NK messages overall. These numbers are a large improvement over the protocol by Beimel et al. [4].

5 Security Proof

We only have space to give a brief overview of the security proof of our protocol. More details are in Appendix A.

Let \(\mathcal F_{\text {ex}}\) be the ideal functionality where all K parties perform the fair exchange by handing their secrets to a central trusted party which redistributes them.

Definition 2

Let \(\textsf{IDEAL}_{\mathcal F_{\text {ex}}, \mathcal {S}(\textsf{aux})}(\varSigma , 1^\lambda )\) be the random variable consisting of the output of the K parties involved in an ideal-world run of the functionality \(\mathcal F_{\text {ex}}\), where \(\varSigma \) represents the matrix of secrets to exchange and \(\mathcal {S}(\textsf{aux})\) is a non-uniform PPT adversary with auxiliary information \(\textsf{aux}\).

Definition 3

Let \(\textsf{REAL}_{\varPi , \mathcal {A}(\textsf{aux})}(\varSigma , 1^\lambda )\) be the random variable consisting of the output of the K parties involved in a real-world run of the protocol \(\varPi \), where \(\varSigma \) represents the matrix of secrets to exchange and \(\mathcal {A}(\textsf{aux})\) is a non-uniform PPT adversary with auxiliary information \(\textsf{aux}\).

Definition 4

We say that a protocol \(\varPi \) implements the ideal functionality \(\mathcal F_{\text {ex}}\) with \(\frac{1}{p}\)-partial fairness, if for every non-uniform PPT adversary \(\mathcal {A}\) there exists a non-uniform PPT simulator \(\mathcal {S}\) such that

$$ \left\{ \textsf{IDEAL}_{\mathcal F_{\text {ex}}, \mathcal {S}(\textsf{aux})}(\varSigma , 1^\lambda )\right\} {\mathop {\approx }\limits ^{\frac{1}{p}}} \left\{ \textsf{REAL}_{\varPi , \mathcal {A}(\textsf{aux})}(\varSigma , 1^\lambda )\right\} $$

i.e. the two probability ensembles are computationally indistinguishable with probability \(1 - \frac{1}{p} - \textsf {negl}(\lambda )\) for some negligible function \(\textsf {negl}(\lambda )\).

For ease of presentation, we only consider the scenario where the adversary controls all but one honest party \(\mathbf {P_\mathcal H}\). Furthermore, we assume that \(1 \ne \mathcal H \ne K\).

We prove the security of the protocol in a 3-step process. Firstly, we prove that retrieving the secret \(s_\mathcal H\) is equivalent to obtaining \(\boldsymbol{k_\mathcal H}[1]\) (Theorem 1).

Theorem 1

If \({{\,\mathrm{\textsf{Delay}}\,}}\) is COA secure, \(\textsf{Enc}\) is COA secure and \({{\,\mathrm{\textsf{Blind}}\,}}\) is \(\mathrm {IND\hbox {-}CPA}\), then no non-uniform PPT adversary \(\mathcal {A}\) can obtain \(s_\mathcal H\) unless \(D_\mathcal H\) is opened, and they have received the message containing \(\boldsymbol{k_\mathcal H}[1]\) during the exchange phase.

Working in the hybrid model [7], we assume the existence of an ideal functionality \(\mathcal F_{\text {setup}}\) where each party submits their secret s and receives back the shuffled and blinded list \(\boldsymbol{E^K}[i]\) together with the delayed messages \(D_i\) of all the other parties. The configuration picked by \(\mathcal F_{\text {setup}}\) is chosen uniformly among the \(\delta \) possibilities. We assume that this functionality is secure-with-abort, i.e. the corrupted parties can stop the honest parties from receiving their outputs, but not alter the output. In this context, we prove that our protocol achieves partial fairness (Theorem 2).

Theorem 2

Let \(\varPi ^{\textsf{hyb}}\) be the protocol of Fig. 1 where the setup phase has been replaced by an ideal functionality \(\mathcal F_{\text {setup}}\). Assume that the delay encryption scheme is COA secure and that the symmetric encryption scheme is non-committing and COA secure, then \(\varPi ^{\textsf{hyb}}\) implements the fair exchange ideal functionality \(\mathcal F_{\text {ex}}\) with \(\frac{K-1}{N+1-K}\)-partial fairness.

To complete the security proof, we show that the setup phase described in Fig. 1 implements the ideal functionality. In Sect. 4.2, we have already shown that our protocol computes the setup phase correctly, so we only need to show that it does so privately (Theorem 3).

Theorem 3

If \({{\,\mathrm{\textsf{Blind}}\,}}\) is \(\mathrm {IND\hbox {-}CPA}\) and \({{\,\mathrm{\textsf{Delay}}\,}}\) is COA secure, then the setup phase of Fig. 1 does not reveal any information about \(\tau _\mathcal H\) while \(D_\mathcal H\) is not opened.

6 Fairness Optimality

We represent a protocol with N messages and K parties \(\{p_1, \dots , p_K\}\) by a pair \(\varPi = (\mathcal M, \mathcal P)\), where

  • \(\mathcal M : \mathbb {N}\rightarrow \{1, \dots , K\}\) is a map indicating in which order the parties are sending messages, i.e. \(\mathcal M(i) = j\) means that message i is sent by party \(p_j\).

  • a “configuration” \(t : \{1, \dots , K\} \rightarrow \mathbb {N}\) is a function such that \(t(i) = j\) means that \(p_i\)’s secret is in the jth message.

  • \(\mathcal P\) is a function which associates to each configuration its probability of happening.

We set the constraint: \(t(i) = j \implies \mathcal M(j) = i\). That is, if \(p_i\)’s secret is disclosed at the jth message, then that message must be sent by \(p_i\). This is due to the principle that if a secret is disclosed to someone, it is disclosed to everyone. When we say “\(p_i\)’s secret is in the jth message” or “\(p_i\)’s secret is disclosed at the jth message”, we do not mean that the jth message gives immediate knowledge of \(p_i\)’s secret, but that such knowledge is guaranteed to follow. Looking at the exchange protocol of Sect. 4, \(s_i\) is “disclosed” when the blinded \(\boldsymbol{k_i}[1]\) is sent by \(P_i\) during the exchange phase. However, the other parties will have knowledge of \(s_i\) only after the delayed message \(D_i\) is opened.

Given a protocol \(\varPi = (\mathcal M , \mathcal P)\), we define the probability that a coalition of parties I “wins” by stopping at \(S \in \mathbb {N}\) to be

$$\begin{aligned}\begin{gathered} {\text {Pr}}_{\varPi }\left[ I, S\right] = \sum _{t \in \mathcal {T}(\varPi , I,S)} \mathcal {P}(t)\\ \text {where } \mathcal {T}(\varPi , I,S) = \{ t \mid \exists i \in I \;\; t(i) > S \wedge \forall j \notin I \;\; t(j) \le S \} \end{gathered}\end{aligned}$$

Intuitively, I wins by stopping at S if someone in I has not released their secret after the Sth message but all parties not in I have already sent their secrets. For ease of presentation we write \(t < S\) if for \(1 \le i \le K\; \; t(i) < S\) and similarly for \(t > S\).
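The definitions above translate directly into code. The sketch below (0-indexed parties, hypothetical example distribution) computes \({\text {Pr}}_{\varPi }\left[ I, S\right] \) by brute force:

```python
# A direct transcription of Pr_Pi[I, S]. A configuration t maps each
# party to the index of the message carrying its secret.

def wins(t, I, S):
    """Coalition I wins stopping at S: some member's secret is still
    unsent after message S, while every outsider's secret is in the
    first S messages."""
    K = len(t)
    return (any(t[i] > S for i in I)
            and all(t[j] <= S for j in range(K) if j not in I))

def pr_win(P, I, S):
    """P maps configurations (tuples) to their probabilities."""
    return sum(p for t, p in P.items() if wins(t, I, S))

# Two parties; the two secrets are placed among 4 messages.
P = {(2, 1): 0.5, (1, 2): 0.5}    # hypothetical example distribution
# Stopping at S = 1, coalition {0} wins exactly when t = (2, 1).
assert pr_win(P, (0,), 1) == 0.5
```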

In our search for an optimal protocol, we assume that no party sends consecutive messages.

We measure the unfairness of a protocol \(\varPi \) against coalitions of size L by computing

$$ \max _{\begin{array}{c} I, S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi }\left[ I, S\right] $$

Lemma 1

Let \(\varPi _b=(\mathcal M_b, \mathcal P_b)\) be a protocol with a configuration \(t_b\). Let \(t_g\) be a configuration such that \(\max t_g = \max t_b\) and \(\forall i \;\; t_g(i) \ge t_b(i)\). Define \(\varPi _g =(\mathcal M_b, \mathcal P_g)\) where \(\mathcal P_g(t_b) = 0\), \(\mathcal P_g(t_g) = \mathcal P_b(t_g) + \mathcal P_b(t_b)\) and \(\mathcal P_g(t) = \mathcal P_b(t)\) for any other t. Then for any L

$$ \max _{\begin{array}{c} I, S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _b}\left[ I, S\right] \ge \max _{\begin{array}{c} I, S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _g}\left[ I, S\right] $$

Proof

We prove the statement by showing that for all I, S we have

$$\begin{aligned} {\text {Pr}}_{\varPi _b}\left[ I, S\right] \ge {\text {Pr}}_{\varPi _g}\left[ I, S\right] \end{aligned}$$
(1)

Note that \(\mathcal T(\varPi _g, I, S) = \mathcal T(\varPi _b, I, S)\). Hence, the inequality above becomes:

$$ \sum _{t \in \mathcal T(\varPi _b, I,S)} \mathcal P_b(t) \ge \sum _{t\in \mathcal T(\varPi _b, I, S)} \mathcal P_g(t) $$

Therefore, it is enough to show that \(t_g \in \mathcal T(\varPi _b, I, S) \implies t_b \in \mathcal T(\varPi _b, I, S)\), i.e. if I wins by stopping at S when \(t_g\) occurs, then I also wins by stopping at S when \(t_b\) occurs. Assume that \(t_g \in \mathcal T(\varPi _b, I, S)\); then \(\forall i \notin I \;\; t_g(i) \le S\). Therefore, \(\forall i \notin I \;\; t_b(i)\le t_g(i) \le S\). Moreover, \(j = \arg \max t_g \in I\) and \(t_g(j) > S\). Since \(\max t_b = \max t_g\), we have that \(t_b(j) > S\). Hence \(t_b \in \mathcal T(\varPi _b, I, S)\).    \(\square \)

For every x such that there is a configuration t with \(\max t = x\), there is exactly one configuration \(t_x\) such that, for every other configuration \(t'\) with \(\max t' = x\), \(\forall i\;\; t_x(i) \ge t'(i)\). Such a \(t_x\) is constructed by setting \(t_x(i)\) to be the largest (valid) integer up to x. The above lemma proves that we only need to consider these “short” configurations, because longer ones are always worse.

Let \(\varPi ^L_g = (\mathcal M_g, \mathcal P_g)\) be a protocol where

  • \(\mathcal M_g(i) = ((i-1)\bmod K) +1\)

  • \(\mathcal P_g(t_i) = \alpha _i\)

  • \(\mathcal P_g(t) = 0\) for other configurations

  • the \(\alpha _i\)’s are picked to minimise: \(\max _{\begin{array}{c} I, S\\ |I|=L \end{array}}{\text {Pr}}_{\varPi ^L_g}\left[ I, S\right] \)

Lemma 2

Consider a protocol \(\varPi ^x_g\) as described above for some x, and pick \(L < K\). Then

$$ \max _{\begin{array}{c} I, S\\ |I|=L \end{array}}{\text {Pr}}_{\varPi ^x_g}\left[ I, S\right] =\max \{\alpha _i + \dots + \alpha _{i+L-1} \mid K \le i \le N+1-L\} $$

Proof

The short configurations \(t_i\) can be described as the map \(y \mapsto i - ((\mathcal M (i) -y) \bmod K)\). That is, the secrets are sent in messages \(\{i-K+1, i-K+2, \dots , i\}\). All the short configurations of the protocol \(\varPi ^x_g\) are exactly the \(t_i\) configurations described above where \(K \le i \le N\). Note that

$$ \max _{\begin{array}{c} I, S\\ |I|=L \end{array}}{\text {Pr}}_{\varPi ^x_g}\left[ I, S\right] = \max \{ \max _{I; |I|=L} {\text {Pr}}_{\varPi ^x_g}\left[ I, S\right] \mid 1\le S \le N\} $$

Fix an S. Consider the configurations \(T= \{ t_{S+1}, t_{S+2}, \dots , t_{S+L}\}\). Pick any other configuration \(t_i\); one of two scenarios can occur:

  1. \(i \le S\), i.e. all secrets are exchanged up to (and including) the Sth message;

  2. \(i > S+L\), i.e. more than L secrets are exchanged after the Sth message.

In both scenarios, no coalition of size L can win by stopping at S in those configurations. Pick any configuration \(t \in T\); then any coalition containing \(\{p_j \mid t(j) > S\}\) can win in t by stopping at S. In particular, only the coalition \(J = \{ p_j \mid j = \mathcal M(i),\;\; S+1 \le i \le S+L\}\) wins in all such configurations. Therefore, \(\max _{I ; |I|=L} {\text {Pr}}_{\varPi _g^x}\left[ I,S\right] = {\text {Pr}}_{\varPi _g^x}\left[ J,S\right] = \alpha _{S+1} + \dots + \alpha _{S+L}\). By iterating over the possible S, we obtain the following set of sums:

$$\begin{aligned} &\{\alpha _K, \alpha _K + \alpha _{K+1}, \alpha _K+\alpha _{K+1}+\alpha _{K+2}, \dots , \alpha _K+ \dots + \alpha _{K+L-1}\} \\ \cup \, &\{\alpha _i + \dots + \alpha _{i+L-1} \mid K \le i \le N+1-L\} \\ \cup \, &\{\alpha _{N+1-L} + \dots + \alpha _{N}, \alpha _{N+2-L} + \dots + \alpha _{N}, \dots , \alpha _{N}\} \end{aligned}$$

Note that the sums in the first and third set are all “sub-sums” of sums in the second set. Therefore, the maximum over the above three sets is the same as the maximum over the second set. The lemma follows.    \(\square \)
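The closed form of the short configurations used in the proof above can be checked mechanically. A sketch assuming the round-robin schedule \(\mathcal M(i) = ((i-1) \bmod K) + 1\) (function names are ours):

```python
def short_config(i, K):
    """The short configuration t_i for the round-robin schedule
    M(i) = ((i - 1) mod K) + 1: t_i(y) = i - ((M(i) - y) mod K),
    so the K secrets occupy the consecutive messages i-K+1, ..., i."""
    m = ((i - 1) % K) + 1           # M(i), the sender of message i
    return {y: i - ((m - y) % K) for y in range(1, K + 1)}
```

For example, with \(K = 3\) and \(i = 5\), the sender of message 5 is party 2, and the three secrets land in messages 3, 4 and 5, matching the description in the proof.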

Theorem 4

Let \(\varPi = (\mathcal M , \mathcal P)\) be any protocol, then

$$ \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi }\left[ I,S\right] \ge \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi ^L_g}\left[ I,S\right] $$

Proof

First, we construct an intermediate protocol \(\varPi _b = (\mathcal M_b, \mathcal P_b)\) such that

  • \(\mathcal M_b = \mathcal M\)

  • \(\mathcal P_b(t_i) = \beta _i\), where \(t_i\) are the “short” configurations in \(\mathcal M_b\)

  • \(\mathcal P_b(t) = 0\) for other t’s

  • the \(\beta \)’s minimise: \(\max _{\begin{array}{c} I, S\\ |I|=L \end{array}}{\text {Pr}}_{\varPi _b}\left[ I, S\right] \)

We first point out that \(\max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi }\left[ I,S\right] \ge \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _b}\left[ I,S\right] \) due to Lemma 1 and the definition of \(\mathcal P_b\). Therefore, we can prove the theorem by showing the same inequality between \(\varPi _b\) and \(\varPi ^L_g\).

By the previous lemmas, we have that the \(\alpha \)’s minimise

$$ \max \{\alpha _i + \dots + \alpha _{i+L-1} \mid K \le i \le N+1-L\} \left( = \max _{\begin{array}{c} I, S\\ |I|=L \end{array}}{\text {Pr}}_{\varPi ^L_g}\left[ I, S\right] \right) $$

Recall that \(\beta _i\) is the probability of the configuration \(t_i\), which is the unique shortest one with \(\max t_i = i\). We have thus constructed the set \(\{\beta _K, \dots , \beta _{N}\}\). We prove the result by showing that:

$$\begin{aligned} \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _b}\left[ I,S\right] &\ge \max \{\beta _i + \dots + \beta _{i+L-1} \mid K \le i \le N+1-L\} \\ &\ge \max \{\alpha _i + \dots + \alpha _{i+L-1} \mid K \le i \le N+1-L\} \end{aligned}$$

The second inequality comes straight from the definition of the \(\alpha \)’s and Lemma 2. Fix any i and consider the configurations \(T=\{t_i, t_{i+1}, \dots , t_{i+L-1}\}\) and the coalition \(I = \{ p_j \mid \mathcal M_b(\max t) = j \text { for } t \in T\}\) of the parties sending the messages from i to \(i+L-1\). Set \(S = i -1\). By the definition of I, all messages sent after S up to (and including) \(i+L-1\) are sent by parties in I. Therefore, I wins in all the configurations in T by stopping at S. Hence \({\text {Pr}}_{\varPi _b}\left[ I, S\right] \ge \beta _i + \dots + \beta _{i+L-1}\). Since i was arbitrary, we have

$$ \max _{\begin{array}{c} I,S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _b}\left[ I,S\right] \ge \max \{\beta _i + \dots + \beta _{i+L-1} \mid K \le i \le N+1-L\} $$

and the theorem follows.    \(\square \)

As a result of the above theorem, we know that \(\varPi _g^L\) is optimal against coalitions of size L. We now analyse \(\varPi _g^L\) to determine the exact values of the \(\alpha \)’s.

Lemma 3

Given n non-negative numbers \(\alpha _1, \dots , \alpha _n\) whose sum is 1 and any \(l\le n\), set \(\lambda = {\lceil \frac{n}{l} \rceil }\), then

$$\begin{aligned} \max \{ \alpha _i + \dots + \alpha _{i+l-1} \mid 1 \le i \le n +1 -l\} \ge \frac{1}{\lambda } \end{aligned}$$
(2)

Proof

Assume for a contradiction that Eq. 2 does not hold. Let \(v_i = \{\alpha _{l(i-1)+1}, \alpha _{l(i-1)+2}, \dots , \alpha _{li}\}\) for \(1\le i \le \lfloor \frac{n}{l} \rfloor \). If \(n\bmod l \ne 0\), then set \(v_\lambda = \{\alpha _{n+1-l}, \dots , \alpha _n\}\). Since Eq. 2 does not hold, for all \(1\le i \le \lambda \) the inequality \(\sum _{\alpha \in v_i} \alpha < \frac{1}{\lambda }\) holds. However,

$$ 1 = \sum _{i=1}^n \alpha _i \le \sum _{i=1}^\lambda \sum _{\alpha \in v_i} \alpha < \sum _{i=1}^\lambda \frac{1}{\lambda }= 1 $$

   \(\square \)

Theorem 5

Given n non-negative numbers \(\alpha _1, \dots , \alpha _n\) whose sum is 1 and any \(l\le n\), set \(\lambda = {\lceil \frac{n}{l} \rceil }\), then

$$\begin{aligned} \min _{\alpha } \max \{ \alpha _i + \dots + \alpha _{i+l-1} \mid 1 \le i \le n+1-l\} = \frac{1}{\lambda }\end{aligned}$$
(3)

Proof

By the previous Lemma 3, it is enough to exhibit a set of \(\alpha \)’s so that

$$ \max \{ \alpha _i + \dots + \alpha _{i+l-1} \mid 1 \le i \le n+1-l\} = \frac{1}{\lambda }$$

For \(1 \le i < \lambda \), set \(\alpha _{l(i-1)+1} = \frac{1}{\lambda }\), and let \(\alpha _n = \frac{1}{\lambda }\). Every other \(\alpha \) is zero. These values are picked so that any two non-zero values are separated by at least \(l-1\) zero values. Therefore, any sum \(\alpha _i + \dots + \alpha _{i+l-1}\) contains at most one non-zero value, and all the non-zero sums equal \(\frac{1}{\lambda }\). The theorem follows immediately.   \(\square \)
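The construction in the proof can be written out and its window sums checked directly. A sketch with 1-indexed arrays (naming is ours):

```python
from math import ceil

def optimal_alphas(n, l):
    """Theorem 5 construction: place 1/lambda at positions
    1, l+1, ..., l(lambda-2)+1 and at n, zero elsewhere, so that
    consecutive non-zero entries are at least l apart."""
    lam = ceil(n / l)
    a = [0.0] * (n + 1)             # 1-indexed; a[0] is unused
    for i in range(1, lam):
        a[l * (i - 1) + 1] = 1.0 / lam
    a[n] = 1.0 / lam
    return a, lam

def max_window(a, l):
    """max over i of a_i + ... + a_{i+l-1} (a is 1-indexed)."""
    n = len(a) - 1
    return max(sum(a[i:i + l]) for i in range(1, n + 2 - l))
```

For instance, with \(n = 7\) and \(l = 3\) we get \(\lambda = 3\): the mass sits at positions 1, 4 and 7, every length-3 window contains exactly one non-zero entry, and the maximum window sum is \(\frac{1}{3}\).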

This last theorem tells us that the optimal fairness is \(\frac{1}{\lambda }\) and also how this can be achieved. It is easy to see that there are other values of \(\alpha \)’s for which this optimal fairness can be achieved, so what we have proved so far is not enough for a complete classification. Nevertheless, it contains enough information to obtain an optimally-fair protocol.

Corollary 1

Let \(N+1-K\) be a multiple of \({{\,\textrm{lcm}\,}}\{2, 3, \dots , K-1\}\). Then \(\varPi _g^1\) is optimally fair against coalitions of any size.

Proof

By our definition of \(\varPi _g^1=(\mathcal M_1, \mathcal P_1)\) and Lemma 2, we have that the \(\alpha _i\)’s minimise

$$ \max \{\alpha _i \mid K \le i \le N\} $$

Hence, \(\alpha _i = \frac{1}{N+1-K}\) for all i. In particular, all the \(N+1-K\) shortest configurations are equally likely.

Pick any \(L < K\). We already know from Theorem 4 that \(\varPi _g^L = (\mathcal M_L, \mathcal P_L)\) is optimally fair against coalitions of size L. Let \(\mathcal P_L(t_i) = \beta _i\). Note that the shortest configurations in \(\varPi _g^1\) and \(\varPi _g^L\) are the same; therefore \(\beta _i\) and \(\alpha _i\) refer to the same configuration \(t_i\). We know from Lemma 2 that these \(\beta \)’s minimise

$$ \max _{\begin{array}{c} I, S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _g^L}\left[ I,S\right] = \max \{\beta _i + \dots + \beta _{i+L-1} \mid K \le i \le N+1-L\} $$

By Theorem 5, we know that the above has value \(\frac{1}{ \left\lceil \frac{N+1-K}{L} \right\rceil } = \frac{L}{N+1-K}\), since L divides \(N+1-K\). Finally, by Lemma 2 the corollary follows:

$$ \max _{\begin{array}{c} I, S\\ |I|=L \end{array}} {\text {Pr}}_{\varPi _g^1}\left[ I,S\right] = \max \{\alpha _i + \dots + \alpha _{i+L-1} \mid K \le i \le N+1-L\} = \frac{L}{N+1-K} $$

   \(\square \)
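The corollary can be sanity-checked numerically for small K: taking \(N+1-K\) equal to one period \({{\,\textrm{lcm}\,}}\{2, \dots , K-1\}\), the uniform distribution over the shortest configurations attains \(L/(N+1-K)\) for every \(L < K\). A sketch (naming is ours; assumes \(K \ge 3\)):

```python
from math import lcm

def uniform_window_max(K, L):
    """For K >= 3 and L < K: take n = N + 1 - K = lcm{2, ..., K-1}
    (the smallest value allowed by Corollary 1), spread probability
    uniformly over the n shortest configurations, and return the worst
    length-L window sum together with the predicted optimum L / n."""
    n = lcm(*range(2, K))
    alphas = [1.0 / n] * n
    worst = max(sum(alphas[i:i + L]) for i in range(n + 1 - L))
    return worst, L / n
```

For \(K = 4\) this gives \(n = {{\,\textrm{lcm}\,}}\{2, 3\} = 6\), and coalitions of size 2 win with probability exactly \(\frac{2}{6} = \frac{1}{3}\), as the corollary predicts.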

With this corollary, we have shown how to construct a protocol that is optimally fair against any coalition size. The protocol described in Sect. 4 can be used to achieve this fairness. In particular, using the notation of Fig. 1 and assuming the parties agreed on an N for which Corollary 1 applies, \(\mathbf {P_1}\) needs to construct a permutation \(\pi _i\) for each of the \(N+1-K\) “shortest” configurations. Hence, \(\pi _i[x] = i-1 + ( (x-i) \bmod K)\), i.e. \(\pi _i\) rotates the xth row forward to position \(\pi _i[x]\). We also note that the protocol of Fig. 1 can be used to achieve the distribution of \(\varPi ^L_g\) for any L. Thus, for large values of K, where values of N suitable for Corollary 1 are prohibitively large, the protocol of Fig. 1 can be tuned to be optimal against specific values of L (rather than all).
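The permutations \(\pi _i\) admit a direct sanity check: for each i, they map the K rows onto K consecutive positions, one per message of the corresponding shortest configuration. A sketch (1-based indices assumed; naming is ours):

```python
def pi(i, K):
    """pi_i[x] = i - 1 + ((x - i) mod K): assigns row x a position
    among the K consecutive slots i-1, ..., i+K-2, with no collisions."""
    return {x: i - 1 + ((x - i) % K) for x in range(1, K + 1)}
```

Since \((x - i) \bmod K\) takes each value in \(\{0, \dots , K-1\}\) exactly once as x ranges over the rows, each \(\pi _i\) is injective.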

7 Conclusions and Future Developments

In this paper, we have proposed an efficient exchange protocol which achieves optimal partial fairness even in the presence of a dishonest majority. This is achieved by concealing when the exchange actually happens among a linear number of dummy messages. Our concrete instantiation of the protocol takes advantage of delay encryption and commutative blinding and provides security against passive adversaries. Using our protocol in conjunction with other SMPC protocols improves the known bounds of partial fairness in the presence of a dishonest majority. In this regard, we provide a large improvement over the current state of the art [4, 9, 16]. We also provide a deep and abstract analysis of the protocols achieving partial fairness by concealing the point where the secrets are exchanged. Since our analysis does not involve any assumption on the cryptographic primitives used, it proves a very general impossibility result and shows the optimality of our protocol. However, our research leaves a couple of unanswered questions. Our abstract analysis does not provide a complete classification of the optimally-fair protocols. Therefore, could a protocol be designed to achieve optimal fairness without the use of delay encryption? Moreover, the protocols of [4] achieve complete fairness if an honest majority is present. This leaves open the question of whether our method could be modified to obtain the same feature.