Timed Encryption with Application to Deniable Key Exchange

Shaoquan Jiang∗

Institute of Information Security, Mianyang Normal University, Mianyang 621000, China
School of Computer Science and Engineering, University of Electronic Science and Technology of China, 2006 Xiyuan Rd, High Tech District West, Chengdu 611731, China

Abstract

In this paper, we propose a new notion of timed encryption, in which the encryption is secure within time t while it is completely insecure after some time T > t. We consider the setting where t and T are both polynomial (in the security parameter). This primitive seems useful in applications where some intermediate data needs to be private temporarily while later it is desired to be public. We propose two schemes for this. One is reasonably efficient in the random oracle model; the other is generic without a random oracle. To demonstrate its usefulness, we use it as a building block to construct a new deniable key exchange (KE) protocol. A deniable KE protocol allows two parties to securely agree on a secret while neither of them can prove to a third party the fact of communication, so an honest party can deny his participation in the communication. Our protocol is adaptively deniable and secret in the concurrent and non-eraser model that admits session state reveal attacks and eavesdropping attacks. Here a session state reveal attack in a non-eraser model means that a user does not erase his intermediate data (e.g., due to a system backup) and, when compromised, will hand it out faithfully to an adversary. An eavesdropping attack allows an adversary to eavesdrop on transcripts between honest users, in which he is unaware of the randomness. As emphasized by Di Raimondo et al. (CCS 2006) and Yao and Zhao (ACNS 2010), eavesdropping is a serious threat to deniability. Our protocol is the first to simultaneously achieve all of the above properties without random oracles. The only price we pay is a timing restriction on the protocol execution. However, this restriction is rather weak: it essentially requires a user to answer an incoming message as soon as possible, which can be satisfied by almost all protocols that are executed online.

Keywords: Public-key Encryption, Key Exchange, Deniability

1. Introduction

In this paper, we propose a new notion of timed encryption. This is a public key encryption primitive, except that the secrecy of the plaintext is only required to hold for a short time t and the encrypted content becomes completely insecure after a longer time T. Here t and T are pre-determined at the system setup stage. Any regular public key encryption scheme can be regarded as a timed encryption scheme with an exponentially (in the security parameter) large

Corresponding author. Tel: 862861830360 Fax: 862861830360 Email address: [email protected] (Shaoquan Jiang) Abstract appeared in 9th Annual Conference on Theory and Application of Models of Computation (TAMC’12).


T. However, we are interested in the case where both t and T are polynomial. Practically, such a setup is possible if a timed encryption scheme is used in an interactive protocol and the secrecy is required to hold only during the protocol execution. In this case, t can be set as short as a few seconds and T can be set to a few hours. For a concrete application of this primitive, consider an auction scheme which consists of two phases: a bidding phase and an opening phase. In the bidding phase, every bidder casts his bid and no one else can read it; in the opening phase, the bidding result is made public and it is desired that the result is publicly verifiable. Assume we are only interested in the fairness of the scheme. Then, a bidder can cast his bid using a timed encryption scheme. Under this, t can be set such that the bid remains private before the opening phase. Later, after time T, one can verify the result by forcefully decrypting all the encrypted bids. For another application, consider a deniable authentication protocol, where Alice wishes to authenticate a message to Bob such that Bob cannot prove to a third party the fact of communication. To do this, Bob can first send a secure key encrypted under Alice's timed encryption scheme. Alice then decrypts this key to generate and send an authentication tag to Bob within time t. Since no one except Alice can reply to Bob's request within time t, authentication is guaranteed. Further, after time T > t, anybody can decrypt the authentication key and create the authentication tag, so deniability is guaranteed.

1.1. Related works

A timed-release encryption (TRE), initiated by May [33], captures the intuition of "send a message into the future". There are two types of TREs in the literature.

Time-lock based TRE. In TRE [36], a sender generates an RSA modulus and uses the factorization trapdoor to compute 2^t squarings efficiently in order to encrypt a secret. No one else has this trapdoor and hence has to sequentially repeat 2^t squarings in order to decrypt, which is what achieves the decryption delay. This approach was adopted by Mao [32] to build timed release of RSA encryption and RSA signatures. A time-lock based TRE is different from a timed encryption, as the latter has a legal receiver who holds a decryption key and can decrypt the ciphertext at any time without any delay.

Trusted server based TRE. In this approach, the decryption needs a secret (we call it the time secret) from a trusted server who releases it only after a period of time, which achieves the decryption delay. This type of TRE has several variants. Identity-based encryption (IBE) [5] can work as this primitive through key control, where the release time together with the receiver name serves as an identity. The key for this identity will be released by a server only after the desired length of time. This approach has the drawback that the time secret changes with a different receiver, which is inconvenient for a server. In Di Crescenzo et al. [14], the time secret is released through an interaction between a receiver and a server. In Blake and Chan [4], the server releases the time secret without involving a sender or a receiver. The server's time secret works for all users. Their scheme is scalable, compared with IBE or [14]. This approach was further discussed in [9, 10, 11]. Following the model of [4], timed release of signatures was considered by Dodis and Yum [19].
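To make the time-lock approach above concrete, the following toy Python sketch illustrates the Rivest-Shamir-Wagner puzzle [36]: the holder of the factorization trapdoor needs only one modular exponentiation, while everyone else must perform t squarings one after the other. The primes, the base a and the value of t below are illustrative only; in practice the primes would be large and secret.

```python
# Toy sketch of the RSW time-lock puzzle [36]; all parameters are illustrative.
p, q = 1000003, 1000033            # in practice: large secret primes
N, phi = p * q, (p - 1) * (q - 1)
t = 1_000_000                      # number of sequential squarings (the "delay")
a = 2                              # puzzle base, gcd(a, N) = 1

# With the factorization trapdoor: one exponentiation gives a^(2^t) mod N.
key_fast = pow(a, pow(2, t, phi), N)

# Without the trapdoor: t squarings, inherently serial.
x = a % N
for _ in range(t):
    x = (x * x) % N
assert x == key_fast
```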
Paterson and Quaglia [35] considered time-specific encryption (TSE), where the server’s release time lies in a specific time interval. Hwang, Yum and Lee [27] extended the model [4] with pre-open capability (TRE-PC), where the sender can publish a release key to allow the receiver to decrypt before the server releases the time secret. The server-based TREs except TRE-PC do not allow a receiver to decrypt before releasing the time secret. TRE-PC allows a receiver to do this by requiring a sender to publish a release key and hence has controlled security: a receiver can decrypt at any time if a sender provides a release key; otherwise, the ciphertext will be secure within time T and completely insecure 2

after that (when the time secret will be provided by the server). This controlled security is for a legal receiver. This is different from a timed encryption. For the latter, the receiver has a decryption key and can decrypt the ciphertext at any time. So its controlled security is only for an outsider: within time t, no outsider can decrypt while after time T anyone can do this. It should be noted that an outsider in TRE does not have the decryption capability at any time. Decryption delay in a timed encryption scheme is achieved by forcing the decryptor to finish a reasonably large amount of sequential computation (similar to a time-lock based TRE). The decryption delay in TREs is achieved through the server’s control on the release of a time secret. Due to the differences between a timed encryption and TRE, they are favored by different applications (even if both are applicable). Consider a key escrow system for instance, where a user encrypts his key so that the government can get it only one year later. In this case, a server based TRE might be better since it only requires a time server to release a time secret one year later while if we use a timed encryption, the government has to keep the forceful decryption running for a whole year. For a deniable key exchange, we will see that a timed encryption is more suitable. Partially, this is because it is hard to keep an online server as in TRE. In addition, if a timed encryption is adopted, the protocol never requires a user to run a forceful decryption. Indeed, this algorithm only works as a proof of the existence of a way that an adversary can decrypt a ciphertext without a key and hence he can not claim that he does not know the plaintext while the receiver does. For a time-lock based TRE, it is not clear how it can be applied to a deniable key exchange. It is certainly interesting if one can find a way. A more related work is a timed-commitment by Boneh and Naor [6], where one can commit to a message m and within time t the message m remains confidential while after time T , m can be obtained forcefully. They used it to build a timed signature. We will apply a timed encryption to build a deniably authenticated key exchange protocol. We consider the deniability advocated by Di Raimondo et al. [17] and Yao and Zhao [37], where the deniability remains valid even if an adversary can eavesdrop some communication records between honest users (note: this threat is captured in [17, 37] by giving the adversary auxiliary inputs). These records could add to the difficulty in preserving deniability. Indeed, an eavesdropped transcript usually is linked to an honest user as the randomness in it is unknown to an attacker. If later he blends this transcript into its attacked session, this linkage might preserve. Adaptivity for deniability is also important, where an adversary can corrupt any user and obtain his secret and internal states at any time. To our knowledge, no adaptively deniable key exchange in the eavesdropping model without random oracles has ever been proposed. We also consider a non-eraser model, where the intermediate data in the protocol execution can not be erased and, when the executed session is corrupted, all the intermediate data will be handed out faithfully. This makes the protocol construction difficult as the security of many existing protocols depend on the erasure of intermediate data. [37] considered the threat against the randomness leak (hence this attack). 1.2. 
Contribution In this paper, we propose a new notion of timed encryption, in which the encryption scheme is secure within time t while it is completely insecure after time T > t. We are interested in the case where t and T are both polynomial in the security parameter. We propose a concrete construction for this primitive in the random oracle model and a generic construction without random oracles. The generic scheme is realizable using a timed commitment of Boneh and Naor [6]. Timed encryption is useful in applications where some intermediate data is protected temporarily while it is desired to be publicly verifiable in the future. To see the usefulness of this primitive, we use it as a building block to construct a deniable secure key exchange 3

protocol. Our protocol is adaptively deniable and secret in the concurrent non-eraser model, where our model also admits session state reveal attacks and eavesdropping attacks. Here the non-eraser model means that a user can not erase his local temporary data, which captures the settings where the deleted data could be recoverable (e.g., through a backup). A session state reveal attack means that the temporary data in a protocol execution, if requested, will be handed out to the adversary. An eavesdropping attack allows an adversary to eavesdrop communication transcripts between honest users, the randomness of which is unknown to the attacker. By [17, 37], this attack could be very useful for an adversary to break the deniability. The adaptivity means that the adversary can corrupt a user at any time according to the data collected so far. Adaptive deniability implies an honest user’s deniability against a user who is honest in an early stage while becoming malicious later. Such a deniability is called forward deniability by Di Raimondo et al. [15, 16] and Yao and Zhao [37]. In Table 1, we compare our protocol (using a timed encryption scheme without random oracles) with existing protocols, where adaptive deniability with eavesdropping attacks and forward deniability are the main technical issues considered in [37] and [17]. Further, it is always preferred to remove random oracles in any secure system (if possible) as it is known [8] that there exists a cryptographic system that is secure in the random oracle model while it becomes completely insecure when the random oracle is replaced by any real function.

                                     [17]          [37]       [28]          [29]       ours
Adapt. Deniable with Eavesdrop       No            Yes        No            No         Yes
State Reveal in Non-Eraser Model     Not Allowed   Allowed    Not Allowed   Allowed    Allowed
Concur. Deniable                     Yes           Yes        Yes           Yes        Yes
Forward Deniable                     No            Yes        Yes           Yes        Yes
Timing                               No            No         No            No         Yes
Random Oracle                        No            Yes        Yes           Yes        No

Table 1: Comparisons between existing protocols and ours (properties in black are desired)

From Table 1, we can see that each of the previous protocols has at least one unsatisfying measure. Our protocol also has one undesired property: we use a timing restriction, which is inherited from the underlying timed encryption. However, this restriction is rather weak. It essentially requires a user to respond to an incoming message as soon as possible and can be satisfied by almost all protocols that are executed online. Pass [34] noticed that deniability in the random oracle model is not satisfactory. Our protocol, if using a random oracle based timed encryption, is also in the random oracle model. However, our deniability proof only uses the forceful decryption algorithm of the timed encryption and in particular does not rely on a random oracle assumption; hence the deniability in this case is still random oracle free (although the secrecy proof does need the random oracle).

2. Definitions

Notations. For a set S, x ← S samples x from S uniformly at random; A|B denotes the concatenation of A and B. When the context is clear, we also use AB to denote the concatenation of A and B. We use negl : N → R to denote a negligible function: for any polynomial p(x), lim_{n→∞} negl(n)p(n) = 0. For two functions f, g from N to R, write f(n) ≈ g(n) if f(n) − g(n) is negligible. In this paper, we always use κ to denote the security parameter. PPT stands for probabilistic polynomial time. For an algorithm A (e.g., encryption or commitment) with input m and randomness r, the output is denoted by A(m; r). When r is unspecified, we simply write it as a random variable A(m).

2.1. Public key encryption

Public key encryption is an encryption system in which anyone can generate ciphertexts using a public key while only the receiver holding the decryption key can decrypt. This notion was proposed in the seminal paper of Diffie and Hellman [18]. Formally,

Definition 1. A public key encryption scheme is a triple of PPT algorithms S = (S.Gen, S.Enc, S.Dec) as follows (κ ∈ N is a security parameter):
- Key Generation S.Gen(1κ). Take (e, d) ← S.Gen(1κ) to generate an encryption key e and a decryption key d. e is made public and d is known only to the receiver.
- Encryption S.Enc. To encrypt a plaintext m with e and randomness r, compute a ciphertext c = S.Ence(m; r). When r is unspecified, we write it as c = S.Ence(m).
- Decryption S.Dec.

To decrypt a ciphertext c, compute m = S.Decd (c).

Completeness. For any m, Pr[S.Decd(S.Ence(m)) = m] = 1 − negl(κ).

Semantic security of a public key encryption scheme was introduced by Goldwasser and Micali [25]. It essentially means that, given a ciphertext, no PPT attacker can obtain any information about the plaintext. By Goldreich [26], this is equivalent to saying that ciphertexts of two plaintexts of equal length are indistinguishable. For convenience, we state semantic security in terms of indistinguishability.

Definition 2. Let S = (S.Gen, S.Enc, S.Dec) be a public key encryption scheme. Let M be the plaintext domain and (e, d) ← S.Gen(1κ). S is semantically secure if for any PPT adversary A and any m0, m1 ∈ M with |m0| = |m1|,

| Pr[A(S.Ence(m0), e) = 1] − Pr[A(S.Ence(m1), e) = 1]| = negl(κ),    (1)

where the probability is over the distribution of (e, d) and the randomness in S.Enc and A.

2.2. Symmetric encryption

The definition of a symmetric encryption scheme K is similar to the public key case, except that e is not public and e = d. Formally, sk ← K.Gen(1κ) is provided to both the sender and the receiver as their secret key. As sk is usually uniformly random over a key space K, we usually say that K = (K.Enc, K.Dec) is a symmetric encryption scheme with key space K. If a ciphertext C is invalid, we define K.Decsk(C) = ⊥.

Semantically Secure Symmetric Encryption. Semantic security of a symmetric encryption scheme is similar to Definition 2, except that e = d is the private key sk and e is removed from the input to A in Eq. (1).

One-time Unforgeable Symmetric Encryption. One-time unforgeability essentially means that, given one ciphertext, no PPT adversary can create a different valid ciphertext (whether or not the plaintext is known).

Definition 3. Let K = (K.Enc, K.Dec) be a symmetric encryption scheme with key space K. Let sk ← K. K is one-time unforgeable if for any PPT adversary A,

Pr[K.Decsk(C′) ≠ ⊥ : C = K.Encsk(m), C′ ← A(C), C′ ≠ C] = negl(κ),    (2)

where m is a message chosen by A.
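For intuition, here is a minimal Python sketch (not from the paper) of a symmetric scheme that is both semantically secure and one-time unforgeable in the sense of Definition 3: a one-time pad provides secrecy and an HMAC-SHA256 tag makes any modified ciphertext decrypt to ⊥ (returned as None below). The key layout and the fixed message length are illustrative assumptions.

```python
import hmac, hashlib, secrets

# Hypothetical instance of K = (K.Enc, K.Dec): one-time pad + HMAC-SHA256.
def K_enc(sk: bytes, m: bytes) -> bytes:
    pad, mac_key = sk[:len(m)], sk[-32:]
    body = bytes(a ^ b for a, b in zip(m, pad))            # one-time pad
    tag = hmac.new(mac_key, body, hashlib.sha256).digest()
    return body + tag

def K_dec(sk: bytes, c: bytes):
    body, tag = c[:-32], c[-32:]
    pad, mac_key = sk[:len(body)], sk[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, body, hashlib.sha256).digest()):
        return None                                        # plays the role of ⊥
    return bytes(a ^ b for a, b in zip(body, pad))

sk = secrets.token_bytes(16 + 32)                          # pad part + MAC part
c = K_enc(sk, b"bid: 42 coins   ")
assert K_dec(sk, c) == b"bid: 42 coins   "
assert K_dec(sk, c[:-1] + bytes([c[-1] ^ 1])) is None      # altered ciphertext rejected
```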

2.3. Timed Encryption Syntax. A timed encryption scheme essentially is a public-key encryption scheme, which, besides the normal encryption and decryption (in Definition 1), has a forceful decryption algorithm that decrypts a ciphertext without a decryption key (although inefficient). Formally, Definition 4. S = (S.Gen, S.Enc, S.Dec, S.Inv) is a timed encryption scheme if (S.Gen, S.Enc, S.Dec) is a public key encryption scheme and for (e, d) ← S.Gen(1κ ), S.Inv satisfies the following. Forceful Decryption S.Inv.

For c = S.Ence(m), S.Inv(e, c) outputs m in polynomial time.

Security. As said before, a timed encryption captures the intuition that within a polynomial time t the encryption is secure while if a longer time T is afforded, it becomes completely insecure. Here, we need to be careful about the time as it depends on the computation model. Specifically, if the task is parallelizable, one can always solve it faster using a parallel algorithm. This was noticed by Boneh and Naor [6] in their definition of timed commitment. They used the parallel random access machine (PRAM) model for this purpose, where an adversary is modeled as a machine with a polynomial number of parallel processors. Practically, the degree of parallelism is a priori bounded. Hence, here we consider a bounded PRAM. For a fixed polynomial α, we call an adversary with α processors by an adversary in the α-PRAM model or simply an α-PRAM adversary. To define the security, we need to specify what attacks an adversary can launch. As for a regular public-key encryption scheme, we are interested in an adaptive chosen ciphertext attack. Usually, this attack has three stages. In stage one, the adversary can query to decrypt any ciphertext of his choice. In stage two, he provides two messages m0 , m1 of equal length as its challenge pair and in turn receives a challenge ciphertext Cb of mb for b ← {0, 1}. In stage three, he can continue to ask for a decryption of any ciphertext other than Cb . Finally, he outputs a bit b′ and succeeds if b′ = b. Note in our primitive we only want to guarantee the security of a challenge ciphertext within time t after it is generated. As stage one does not involve a challenge ciphertext, there is no need to impose a time limitation (other than a polynomial) on the attacker. Under this change, any decryption query in this stage can be answered by the adversary himself using a forceful decryption algorithm. So stage one actually can be removed. In addition, since b′ is computed within time t after releasing Cb , the total time of stage three is bounded by t. This leads to the following definition. Definition 5. Let α, t, T be polynomials in the security parameter κ and t < T. A timed encryption scheme S = (S.Gen, S.Enc, S.Dec, S.Inv) is (α, t, T )-secure if the following holds. Let (e, d) ← S.Gen(1κ ). • Completeness. For any string C, Pr[S.Decd (C) ̸= S.Inv(e, C)] = negl(κ), where S.Inv has a runtime bounded by T . • Secrecy.

For any PPT α-PRAM adversary A in a game below, Pr(b′ = b) ≈ 1/2.

- Stage One. Given e, A outputs messages m0, m1 of equal length. In turn, he receives Cb = S.Ence(mb) for b ← {0, 1}.
- Stage Two. A can issue any decryption query C ≠ Cb and receive S.Decd(C). Finally, he outputs a bit b′ and succeeds if b′ = b and b′ is produced within time t after receiving Cb.

S is called a (t, T)-secure timed encryption scheme if it is (α, t, T)-secure for any polynomial α.
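The following Python sketch shows how the secrecy game of Definition 5 could be driven, assuming a scheme object exposing gen/enc/dec and an adversary object exposing the two stages; wall-clock seconds stand in for the time bound t, and all names are hypothetical rather than the paper's.

```python
import secrets, time

def secrecy_game(scheme, adversary, t_seconds):
    """One run of the (alpha, t, T) secrecy game of Definition 5 (sketch)."""
    e, d = scheme.gen()
    m0, m1 = adversary.stage_one(e)                 # equal-length challenge messages
    assert len(m0) == len(m1)
    b = secrets.randbits(1)
    c_b = scheme.enc(e, (m0, m1)[b])
    deadline = time.monotonic() + t_seconds         # b' must arrive within time t

    def dec_oracle(c):                              # Stage-two decryption oracle
        assert c != c_b, "the challenge ciphertext may not be queried"
        return scheme.dec(d, c)

    b_guess = adversary.stage_two(c_b, dec_oracle)
    if time.monotonic() > deadline:                 # late answers do not count
        return False
    return b_guess == b
```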

2.4. Timed commitment A timed commitment scheme is a special commitment scheme whose secrecy is guaranteed only within a given period. It was proposed by Boneh and Naor [6]. Our notion of timed encryption is motivated by this. A timed commitment scheme consists of a committer S and a receiver R. It has three phases. Commit phase: To commit to a string w ∈ {0, 1}n , S and R execute a protocol Com and the commitment to w is the final output c by R. Open phase: In the open phase, S sends the string w to R. Then, they execute a protocol DCom, at the end of which R obtains a proof that w is the committed value in c. Forced open phase: If S refuses to open c, there exists an algorithm FO that takes c as input and, within time T , outputs w and a proof that w is the committed value in c. Boneh and Naor formalized the commitment security against any polynomial time PRAM adversary. As for a timed encryption, we also consider an α-PRAM adversary. So we relax the security definition of Boneh and Naor as follows. Definition 6. Algorithm TC = (TC.Com, TC.DCom, TC.FO) is a (α, t, T )-secure timed commitment if for a committer S and a receiver R, the following holds: Completeness: When R accepts in the commitment phase, his output c must be a valid commitment for some w ∈ {0, 1}n such that TC.FO(c) = w. Binding: If TC.Com(w) = c, then S can not convince R in the decommitment phase that c is a commitment of w′ ̸= w. This holds information theoretically. Soundness: At the end of the commitment phase, R is convinced that there exists a forceful open algorithm TC.FO(c) that outputs the committed value w in time T . Privacy: For any α-PRAM adversary A of time t < T , | Pr[A(tr, w) = 1] − Pr[A(tr, w′ ) = 1]| is negligible, where tr is the transcript in the commitment phase and the probability is over coins of S and R. TC is called a (t, T )-secure timed commitment, if it is (α, t, T )-secure for any polynomial α. 3. Timed Encryption in the Random Oracle Model In this section, we construct a concrete timed encryption scheme in the random oracle model. The idea is that we decompose a plaintext into many parts. Each part is not long and is deterministically encrypted. With a decryption key, the plaintext can be quickly decrypted while, without a decryption key, an attacker (including a PRAM adversary) has to spend a considerable amount of time (but still polynomial) in order to obtain all parts of the plaintext. This prevents a PRAM adversary from decrypting the plaintext quickly. Further, if “the considerable amount of time” is afforded, one can forcefully decrypt the plaintext. Construction 1. Let S = (S.Gen, S.Enc, S.Dec) be a public key encryption scheme. Assume n ∈ N and β is a positive constant. H : {0, 1}∗ → {0, 1}ℓ(κ) is a hash function (modeled as a random oracle), where ℓ(κ) is polynomial in the security parameter κ. K = (K.Enc, K.Dec) is a symmetric key encryption scheme with key space {0, 1}ℓ(κ) . Key Generation.

Take (e, d) ← S.Gen(1κ ). e is the public key and d is the private key.

Encryption. To encrypt m, take r0 ← {0, 1}κ and ri ← {0, 1}β log κ for i = 1, · · · , n. Compute sk = H(r0 r1 · · · rn) and c0 = K.Encsk(m), and then ci = S.Ence[ri; H(c0 r0 r1 · · · ri)] for i = 1, · · · , n. Finally, set the ciphertext C = r0 c0 c1 · · · cn.
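A minimal Python sketch of this encryption and of the sequential forceful search described in the next paragraph. The public-key scheme is abstracted as a callable S_enc(e, r, rand) that is deterministic once its randomness is fixed, H is SHA-256, K_enc/K_dec are left abstract, and the toy values of n and β log κ are assumptions made for illustration.

```python
import hashlib, secrets

H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()
R_BITS, N_PARTS = 12, 8            # |r_i| = beta*log(kappa) bits and n: toy values

def encrypt(S_enc, K_enc, e, m):
    """C = r0 c0 c1 ... cn (Construction 1, sketch)."""
    r0 = secrets.token_bytes(32)
    rs = [secrets.randbits(R_BITS).to_bytes(2, "big") for _ in range(N_PARTS)]
    sk = H(r0, *rs)                               # sk = H(r0 r1 ... rn)
    c0 = K_enc(sk, m)
    cs, prefix = [], c0 + r0
    for ri in rs:                                 # c_i = S.Enc_e[r_i; H(c0 r0 ... r_i)]
        prefix += ri
        cs.append(S_enc(e, ri, H(prefix)))
    return r0, c0, cs

def forceful_decrypt(S_enc, K_dec, e, C):
    """Recover m without d: the r_i must be found one after the other."""
    r0, c0, cs = C
    found, prefix = [], c0 + r0
    for ci in cs:
        for guess in range(2 ** R_BITS):          # at most 2^(beta log kappa) trials
            ri = guess.to_bytes(2, "big")
            if S_enc(e, ri, H(prefix + ri)) == ci:   # re-encrypt and compare
                found.append(ri)
                prefix += ri
                break
        else:
            return None                           # some r_i not found: reject
    return K_dec(H(r0, *found), c0)
```

Because H is applied to the prefix c0 r0 r1 · · · ri, the search for ri can only start after r1, · · · , ri−1 have been recovered, which is exactly what rules out a parallel attack.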


Decryption with d. To decrypt C with d, compute ri = S.Decd(ci) for i = 1, · · · , n and check whether r1, · · · , rn are consistent with c1, · · · , cn. If not, reject; otherwise, compute m = K.Decsk(c0) for sk = H(r0 r1 · · · rn).

Forceful Decryption. Given ciphertext C = (r0, c0, · · · , cn), search for r1 ∈ {0, 1}β log κ such that c1 = S.Ence(r1; H(c0 r0 r1)). If r1 is found, similarly search for r2 that is consistent with c2, then r3, · · · , rn. If some ri is not found, reject; otherwise, output m = K.Decsk(c0) for sk = H(r0 · · · rn).

Remark. One might attempt to take n = 1 so that C = r0 c0 c1. However, this is insecure, as one can guess r1 (and hence obtain sk and finally m) with probability κ^(−β) (non-negligible) without any search attempt. Another attempt is to modify ci as ci = S.Ence[ri; H(c0 r0 ri)]. This does not work since the {ci} can then be decrypted in parallel. Our construction forces the attacker to search for r1, · · · , rn one after the other and hence avoids a parallel attack. We now state our security theorem; its proof is in Appendix C.

Theorem 1. Assume S is semantically secure and K is semantically secure and one-time unforgeable. H is a random oracle. ϵ is a positive constant. µH and µE are the times to evaluate H (with input length κ) and S.Enc (with input length β log κ), respectively. Then, our construction is (σ, tµH, µE nκ^β)-secure for n ≥ max{3√(σt) κ^(−β/2) log κ, log^(2+ϵ) κ}.

Efficiency. The cost of decryption with d in our scheme is dominated by n encryptions and n decryptions of S (as the n + 1 hashes are relatively cheap). Note that the decryption of our construction with d can be parallelized. If we use n parallel processors, each processor only needs to evaluate one S decryption and (after all processors finish the decryption) then one S encryption. The cost of encryption in our scheme is dominated by n encryptions of S, which again can be done in parallel. In some applications (e.g., deniable key exchange in Section 5), the secrecy of a timed encryption is only required to hold for seconds, so n can be set small. Thus, our scheme is practically interesting.

4. Timed Encryption without a Random Oracle

Now we construct a timed encryption scheme from a timed commitment. The idea is as follows. A timed commitment already has a forceful opening algorithm, but it lacks a private key based decryption mechanism. To make up for this, we further encrypt the message using a normal encryption scheme. With a decryption key, one can obtain m from the normal encryption while, without a decryption key, one can forcefully compute m from the timed commitment. To make sure the timed commitment and the normal encryption are consistent in m, a non-interactive zero-knowledge (NIZK) proof is used. Formally,

Construction 2. Let (e, d) be a public/private key pair for a public key encryption scheme S. TCom is a timed commitment. P is a non-interactive zero-knowledge proof system with a common random string σ for the relation Re = {⟨(S.Ence[m; r], TCom[m; r′]), (m, r, r′)⟩ | m, r, r′ ∈ {0, 1}∗}. To encrypt m, compute C = S.Ence[m; r] and τ = TCom[m; r′], where r and r′ are the randomness for C and τ respectively. Let π = Pσ[C, τ; m, r, r′], where (C, τ) is the common input and (m, r, r′) is the witness. The final ciphertext is γ = (C, τ, π). Upon γ = (C, τ, π), the normal decryption with d first verifies whether π is valid. If yes, decrypt m = S.Decd(C); otherwise, set m = ⊥. The forceful decryption for γ first verifies π. If π is valid, it opens τ using the forceful opening algorithm of TCom; otherwise, it rejects. Denote this scheme by S∗. The security of the above construction is stated as follows; the proof is in Appendix D.
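A structural Python sketch of Construction 2, with the public-key scheme, the timed commitment and the NIZK prover/verifier passed in as abstract callables; it only shows how the three components are combined and checked, and all function names are assumptions rather than a concrete instantiation.

```python
from collections import namedtuple

Ciphertext = namedtuple("Ciphertext", "C tau pi")   # gamma = (C, tau, pi)

def encrypt(S_enc, TCom_commit, P_prove, sigma, e, m, r, r_prime):
    C = S_enc(e, m, r)                               # C = S.Enc_e[m; r]
    tau = TCom_commit(m, r_prime)                    # tau = TCom[m; r']
    pi = P_prove(sigma, (C, tau), (m, r, r_prime))   # NIZK: C and tau hide the same m
    return Ciphertext(C, tau, pi)

def decrypt(S_dec, V_verify, sigma, d, gamma):
    if not V_verify(sigma, (gamma.C, gamma.tau), gamma.pi):
        return None                                  # invalid proof: output ⊥
    return S_dec(d, gamma.C)                         # normal decryption uses d

def forceful_decrypt(TCom_force_open, V_verify, sigma, gamma):
    if not V_verify(sigma, (gamma.C, gamma.tau), gamma.pi):
        return None
    return TCom_force_open(gamma.tau)                # runs within time T
```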


Theorem 2. If S is secure against an adaptive chosen-ciphertext attack, TCom is a (α, t, T )secure timed commitment and P is a one-time simulation-sound adaptive NIZK (see Appendix A for a definition), then S∗ is a (α, t, T )-secure timed encryption scheme. Further, if TCom is a (t, T )-secure timed commitment, then S∗ is a (t, T )-secure timed encryption scheme. 5. Application to Adaptive Deniable Key Exchange In this section, we apply a timed encryption to build an adaptive deniable key exchange protocol and achieve several properties that have not been achieved in the literature. 5.1. Security model Deniable key exchange is a protocol that allows two parties to securely establish a common secret while neither of them can prove to a third party the fact of communication. This property prevents one party from maliciously using the communication record (i.e., at the court) against the other. In our security model, the secrecy is revised from Bellare-Rogaway [2] and the deniability is revised from [22, 23, 29, 37]. Assume there are n parties P1 , · · · , Pn . Pi and Pj might jointly execute a key exchange protocol Ξ to establish a common secret (called a session key). Notions. Πℓi i denotes a protocol instance in Pi , which is a copy of Ξ and ℓi is its instance id in Pi . sidℓi i is a session identifier for Πℓi i and will be specified when analyzing the protocol security. Supposedly, two communicating instances should share the same session identifier. pidℓi i is the partner party of Πℓi i that he presumably interacts with. statℓi i is the internal state of Πℓi i . We also use stati to denote an internal state for an unspecified instance in Pi . skiℓi is the session key ℓ ℓ ℓ in Πℓi i . Πℓi i and Πjj are partnered if (1) pidℓi i = Pj and pidjj = Pi ; (2) sidℓi i = sidjj . Intuitively, instances are partnered if they are jointly executing Ξ. Adversarial Model. Now we introduce the attack model. Essentially, we would like to capture the concern that the adversary can fully control the network. In particular, he can inject, modify, block and delete messages at will. He can also corrupt some users and obtain their secret keys and internal states. He is also able to collect some selected session keys. Finally, Ξ is secure if the session key of any adversarially chosen instance remains computationally random, where the adversary is assumed not to compromise this session key in an obvious way (e.g., through a corruption or a session key request). The formal model is defined as a game between a challenger and an attacker A. The challenger maintains a set of oracles that represent events during protocol executions. Adversarial capabilities are modeled as a sequence of queries to these oracles. Send(i, ℓi , M ). A can send any message M to Πℓi i . The result is whatever the latter returns according to the specification. This models Pi ’s response to an incoming message. Reveal(i, ℓi ). A can ask for a session key skiℓi in Πiℓi . This models a session key loss event. Corrupt(i, ℓi ). A can ask to corrupt Πℓi i and receive state statℓi i . Note that Pi ’s long term secret key is not part of statℓi i . This threat is also called session state reveal attack [1, 12]. Security under this means that compromising one session does not affect other sessions. Corrupt(i). A can corrupt Pi and obtain his long term secret and all internal states {statℓi i }ℓi . Further, Pi ’s future action is taken by A. This models the case where a user becomes malicious. Test(i, ℓi ). 
This is the security test and can be queried only once. The queried session must have successfully completed. Further, this session should not be compromised (see the definition below) throughout the game. When this oracle is called, it flips a fair coin b and provides a number αb to A, where α0 = skiℓi and α1 ← K. Here K is the space of skiℓi .
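As a small illustration of the Test query (the compromise condition it relies on is defined in the next paragraph), the sketch below assumes sessions are simple objects carrying completed, compromised and session_key fields; the class and field names are hypothetical.

```python
import secrets

class TestOracle:
    """Answers the single Test(i, l_i) query with either the real key or a random one."""
    def __init__(self):
        self.used = False
        self.b = None

    def query(self, session, key_len=16):
        assert not self.used, "Test may be queried only once"
        assert session.completed and not session.compromised
        self.used = True
        self.b = secrets.randbits(1)                 # the hidden coin b
        alpha0 = session.session_key                 # alpha_0 = sk
        alpha1 = secrets.token_bytes(key_len)        # alpha_1 <- K, the key space
        return alpha0 if self.b == 0 else alpha1
```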


Πℓi i is said compromised if a Reveal or Corrupt query was issued to Πℓi i or its partnered instance (if any), or, if Pi or pidℓi i is Corrupted. At the end of the game, A outputs a bit b′ . He succeeds if b′ = b; otherwise, he fails. The protocol security is defined through correctness, secrecy, authentication and deniability. ℓ ℓ Correctness. If two partnered instances Πℓi i and Πjj successfully complete, then skili = skj j . Secrecy. Let Succ(A) be the event b′ = b in a Test query. Secrecy requires Pr[Succ(A)] < 1 2 + negl(κ). Authentication. Essentially, authentication requires that when instance Πℓi i successfully completes the execution of Ξ, pidℓi i indeed attended the joint execution. Formally, let Πℓi i be the test instance and Non-Auth be the event: either there does not exist any partnered instance for Πℓi i or its partnered instance is not unique. Then Ξ is said to be authenticated if Pr[Non-Auth(A)] is negligible. Defining Non-Auth on the test instance (instead of a general instance) is for simplicity only as otherwise we could instead choose the authentication-breaking instance as the test instance. Deniability. Deniability essentially states that the adversary view in the interaction with oracles maintained by the challenger can be simulated by the attacker himself (i.e., without any interaction). Under this, his view can not be used as evidences against deniability of honest parties. Here we must consider two concerns. First, an honest Pi ’s long-term secret is unknown to the adversary and so his view in Corrupt(i) can not be simulated by himself and an external assistant is necessary. We induce a trusted third party T to provide the long term secret of Pi in this case. Second, the adversary can eavesdrop the communication between honest users, the randomness of which is unknown to him. As said in the introduction, this might be useful for him to break the deniability. To capture this concern, we add the following oracle maintained by T. Execute(i, ℓi , j, ℓj ). When this oracle is called, a complete protocol execution between Πℓi i and ℓ Πjj is carried out. Finally, A and the simulator will be provided with a protocol transcript tr. The simulatability by the adversary alone is captured by a simulator S. Formally, the deniability model is described as follows. Initially, a trusted third party T generates global parameters params and a public key Ei (maybe empty) and a private key Di for each party. There are two games Γrea and Γsim . In Γrea , T provides {Ei } and params to A and maintains oracles Send, Reveal, Corrupt(i, ℓi ), Corrupt(i), Test and Execute faithfully. In Γsim , T provides {Ei } and params to A and a simulator S. Then, T maintains Execute oracle and S maintains the remaining oracles, except that upon Corrupt(i), Di is provided to S by T. Finally, protocol Ξ is deniable if the views of A in Γrea and Γsim are statistically close. Definition 7. A key exchange protocol Ξ is deniable secure if for any PPT A, correctness, secrecy, authentication and deniability are all satisfied. Remark. In the next subsection, we will construct a key exchange protocol between an initiator Pi and a responder Pj . Our protocol specification requires a timing restriction (e.g., Pj requires that the third message flow F low3 from Pi must be received within time t after Pj sends out the second flow F low2 ). We remark that this type of protocol is covered in our model by requiring an instance to maintain a local clock in his internal state. 
So after processing Send(j, ℓj, Flow1), Pj marks his local time T0 and, when oracle Send(j, ℓj, Flow3) is queried later, he checks whether the current time T1 is less than T0 + t and proceeds only if this is satisfied.

5.2. Construction

We now apply a timed encryption to construct a new key exchange protocol (also see Figure 1). Let S be a public key encryption scheme. Initially, take (Ei, Di) ← S.Gen(1κ) as user

    Pi -> Pj : C1 = Ej[k1|g^x]
    Pj -> Pi : C2 = Ei[k2|C1|g^y]
    Pi -> Pj : τ = MACk2(Pi|Pj|C1|C2|0)
    Pj -> Pi : σ = MACk1(Pi|Pj|C1|C2|1)
    Both parties compute sk = g^xy.

Figure 1: Our Timed Encryption-based Deniable Key Exchange tE-DKE (see details in the body text)

Pi ’s public/private key pair. p, q are large primes with q | p − 1. g ∈ Z∗p has an order of q. MAC: {0, 1}κ × {0, 1}∗ → {0, 1}κ is a message authentication code with key space {0, 1}κ . Key exchange between Pi and Pj is as follows, where for simplicity S.EncEi (·) and S.DecDi (·) are respectively denoted by Ei (·) and Di (·). 1. Pi takes x ← Zq , k1 ← {0, 1}κ and sends C1 = Ej [k1 |g x ] to Pj . 2. Upon C1 , Pj takes y ← Zq , k2 ← {0, 1}κ and sends C2 = Ei [k2 |C1 |g y ] to Pi . 3. Upon C2 , Pi checks whether Di (C2 ) = k2 |C1 |Y for some k2 ∈ {0, 1}κ and Y ∈ ⟨g⟩. If no, he rejects; otherwise, he sends MACk2 (Pi |Pj |C1 |C2 |0) to Pj . 4. Upon τ , if τ = MACk2 (Pi |Pj |C1 |C2 |0) and τ is received within time t from sending out C2 and if Dj (C1 ) = k1 |X for some X ∈ ⟨g⟩ and k1 ∈ {0, 1}κ , Pj sets sk = X y , computes and sends σ = MACk1 (Pi |Pj |C1 |C2 |1) to Pi ; otherwise, he rejects. 5. Upon σ, if σ ̸= MACk1 (Pi |Pj |C1 |C2 |1) or σ is received more than time t from sending out C1 , Pi rejects; otherwise, he sets sk = Y x . Remark. To better understand our protocol, some remarks are necessary. (1) One careful reader might realize that the final message flow seemingly can be moved to the second flow (i.e., placed together with C2 ). If so, Pj needs to first derive k1 from C1 in order to compute MACk1 (∗) . However, this variant suffers from a session state reveal attack. ℓ ℓ Indeed, assume Πℓi i sends C1 to Πjj . When Πjj replies with C2 and σ, the attacker reveals the ℓ

state of Πjj and obtains k1 in C1 . With k1 , the attacker can forge a new C2′ and σ ′ by taking a new y ′ . Now when Piℓi successfully completes, it has no partner in Pj while skiℓi is known to the attacker. Note Πℓi i is not compromised and hence the attacker is allowed to choose Πℓi i as the test session in which he always succeeds. A simple counter measure for this is to erase k1 ℓ after computing σ (as k1 will not be used by Πjj anymore). However, this works only in an eraser model which is not our interest. (2) If C1 is not encrypted in C2 , the protocol again will suffer from a session state reveal attack. ′ Indeed, when an attacker A sees Πℓi i ’s message C1 = Ej [k1 |g x ], he changes it to C1′ = Ej [k1′ |g x ] ℓ and sends it to Πjj . After seeing C2 , he forwards it to Πℓi i . When Πℓi i sends out τ , A corrupts ℓ

Πℓi i and obtains k2 , with which A computes τ ′ that matches C1′ and C2 . Πjj then will be ℓ



deceptively convinced. However, since Πjj is not compromised, the attacker can choose Πjj as

11

his test instance, in which he can always succeed using x′ . Our protocol will not suffer from this attack as C1 is encrypted in C2 and so A can not mall C2 to C2′ while preserving k2 unchanged. (3) If g x is not encrypted in C1 but still included in the input for τ and σ, then it again suffers from a session state reveal attack. The procedure is similar to the case in item (2). 5.3. Security We analyze the deniable security of our protocol. Toward this, we first formally define session identifier and internal states. ℓ ℓ Session identifier. For initiator Πℓi i and responder Πjj , let sidℓi i = sidjj = ⟨Pi , Pj , C1 , C2 ⟩. Internal state statℓt t . Since our analysis is in the non-eraser model, all the randomness in the execution will not be erased. So statℓt t is the internal state using which the protocol with a private key can continue the execution. Specifically, statℓt t evolves with the execution as follows, where r1 (resp. r2 ) is the encryption randomness in C1 (resp. C2 ). step 1: statℓi i = (i, j, x, k1 , r1 , Ti ), where Ti is a timer initially 0 and starts when Pi sends C1 ; ℓ step 2: statjj = (i, j, C1 , k2 , y, r2 , Tj ), where Tj is a timer initially 0 and starts when Pj sends C2 ; step 3: statℓi i = (i, j, x, k1 , r1 , Ti , C2 , ⊥) (if reject) and (i, j, x, k1 , r1 , Ti , C2 , k2 , Y ) (other); ℓ step 4: statjj is (i, j, C1 , k2 , y, r2 , Tj , τ, ⊥) (if reject) and (i, j, C1 , k2 , y, r2 , Tj , k1 , X, sk) (other); step 5: statℓi i is (i, j, x, k1 , r1 , Ti , C2 , k2 , Y, σ, ⊥) (if reject), (i, j, x, k1 , r1 , Ti , , C2 , k2 , Y, σ, sk) (other). In the following, we present the ideas of deniable security. It consists of deniability, authentication and secrecy. Deniability idea. We need to simulate the oracles without an honest user’s decryption key. Note if the simulator has decryption keys, then the oracles can be perfectly simulated. Further, the decryption key is only used in Corrupt(i) oracle and in Send oracle for the decryption of C1 or C2 . In the deniability Γsim , the former is answered with the help of the trusted party T and the latter can be answered using the forceful decryption algorithm of the timed encryption. Specifically, when receiving C2 , the simulator suspends the adversary and forcefully decrypts it. Similarly, we can deal with C1 . As the forceful decryption runs in a polynomial time, the simulator is legal. The remaining oracles (i.e., Corrupt(i, ℓi ), Test, Send(i, ℓi , F lowv ), v = 3, 4) do not use any Di and hence whatever computable by oracles in Γrea can be perfectly computed by the simulator in Γsim . Hence, the simulation is statistically close to the real one. We remark that the suspension based simulation was used by Dwork, Naor and Sahai [22, 23]. Authentication idea. If Πℓt t is the test instance, we show that it must have a unique partner Πℓss . Actually, we only need to prove the existence of Πℓss as otherwise, the same x (or y) in Ps (honest) will be sampled twice, negligible (i.e., with probability 1/q, ignored)! First consider the case where the test instance Πℓt t is a responder. After receiving C1∗ from some Ps , he sends ∗ C2∗ = Es [k2∗ |C1∗ |g y ] to Ps . Denote the event that Πℓt t accepts τ ∗ in F low3 while it has no partner in Ps , by Imp. It suffices to show that Pr[Imp] is negligible. Denote the real security game by Γ. We modify Γ to Γ′ such that the game terminates after time t from Πℓt t ’s sending F low2 . Pr[Imp(Γ)] = Pr[Imp(Γ′ )] since if Imp occurs, it must happen within time t after Πℓt t sends F low2 . 
We further modify Γ′ to Γ′′ such that Es[k2∗|C1∗|g y∗] in Πℓt t is replaced by Es[0|C1∗|g y∗]. To be consistent, whenever Ps receives the same Es[0|C1∗|g y∗], it proceeds normally using (k2∗|C1∗|g y∗) as the decryption result (e.g., computing τ with k2∗). By the security of timed encryption, Pr[Imp(Γ′)] ≈ Pr[Imp(Γ′′)]. It remains to show that Pr[Imp(Γ′′)] is negligible. Due to the Imp event, no instance in Ps sees the same (C1∗, C2∗) as Πℓt t does. Hence, Ps never computes a MAC with input Ps|Pt|C1∗|C2∗|0. So τ∗ must be forged. Reviewing the oracles, we can see that the only use of k2∗ in Γ′′ is to evaluate MACk2∗(·) (in particular, Πℓt t is a test instance and hence cannot be

issued a session state reveal) and hence the non-negligible Pr(Imp(Γ′′ )) can be reduced to break MAC, contradiction! Authentication for a responder Πℓt t follows. The case of an initiator Πℓt t is similar. Authentication follows. Secrecy idea. By the authentication property, the test instance Πℓt t has a unique partner Πℓss . In fact, this Πℓss also has Πℓt t as its unique partner; otherwise, the same x or y is sampled repeatedly in Pt (with probability 1/q, ignored!). We show that non-negligible secrecy advantage can be reduced to breaking DDH assumption. Given a challenge (g a , g b , α), a DDH attacker B does the following. He simulates the security game normally except that Πℓss and Πℓt t uses g b and g a as g x and g y respectively and that the challenge key in test session is α. This simulation is perfect only if (1) a, b is not requested to reveal and (2) no Reveal query on an instance with (g x , g y ) = (g a , g b ) (as it requires to output g ab ). First of all, (1) will occur for instances Πℓt t and Πℓss by definition of Test oracle. It will not occur to other instances either as otherwise a or b is sampled a second time by the simulator, which has negligible probability since there are only polynomial oracle queries. The reason for (2) is similar to (1). So the simulation is real and the advantage of the attacker implies the advantage of breaking DDH. The detail is in Appendix E. Theorem 3. If S is a (α, t, T )-secure timed encryption scheme and MAC is an existentially unforgeable message authentication code, then tE-DKE is an adaptively deniable secure key exchange protocol against any PPT α-PRAM adversary. Further, if S is (t, T )-secure, then tE-DKE is adaptively deniable secure against any PPT PRAM adversary. 6. Conclusion and open questions In this paper, we proposed a new notion of timed encryption. On the one hand, it is like a normal encryption scheme in the sense that its ciphertext can be decrypted using a secret key. On the other hand, it is different from a normal scheme in the sense that the secrecy holds only within time t and it is completely insecure after time T > t. We constructed an efficient scheme under the random oracle model and a generic scheme from a timed commitment scheme. As an application, we showed that this primitive can be used to construct a deniable secure key exchange protocol. Our protocol is proven adaptively deniable that admits eavesdropping attacks and session state reveal attacks. This is the first work in the literature that achieves the properties. Interesting problems are to find more applications for this primitive and to construct efficient schemes without random oracles. Acknowledgements The author would like to thank an anonymous reviewer for pointing out the relation between a timed encryption and TRE-PC. This work is supported by National 973 Program of China (No. 2013CB834203), NSFC (No. 60973161), and Fundamental Research Funds for the Central Universities (No. ZYGX2010X015). References [1] M. Bellare, R. Canetti, H. Krawczyk, A modular approach to the design and analysis of authentication and key exchange protocols, in: J. S. Vitter (Ed.), Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, STOC 1998, ACM Press, 1998, pp. 419-428. [2] M. Bellare, P. Rogaway, Entity authentication and key distribution, in: D. R. Stinson (Ed.), Proceedings of Advances in Cryptology-CRYPTO 1993, in: LNCS, vol. 773, Springer, Heidelberg, 1993, pp. 232-249. 13

[3] M. Bellare, P. Rogaway, Random oracle is practical: a paradigm for designing efficient protocols, in: D. E. Denning, R. Pyle, R. Ganesan, R. S. Sandhu, V. Ashby (Eds.), Proceedings of the 1st ACM Conference on Computer and Communications Security, CCS 1993, ACM Press, 1993, pp. 62-73. [4] I. F. Blake, A. C.-F. Chan, Scalable, server-passive, user- anonymous timed release cryptography, in: Proceedings of the 25th International Conference on Distributed Computing Systems, ICDCS 2005, IEEE Computer Society, 2005, pp. 504-513. [5] D. Boneh, M. K. Franklin, Identity-based encryption from the weil pairing, SIAM J. Comput. 32(3), (2003) 586-615. [6] D. Boneh, M. Naor, Timed commitments and applications, in: M. Bellare (Ed.), Proceedings of Advances in Cryptology-CRYPTO 2000, in: LNCS, vol. 1880, Springer, Heidelberg, 2000, pp. 236-254. [7] R. Canetti, H. Krawczyk, Analysis of key-exchange protocols and their use for building secure channels, in: B. Pfitzmann (Ed.), Proceedings of Advances in CryptologyEUROCRYPT 2001, in: LNCS, vol. 2045, Springer, Heidelberg, 2001, pp. 453-474. [8] R. Canetti, O. Goldreich, S. Halevi, The random oracle methodology, revisited (Preliminary Version), in: J. S. Vitter (Ed.), Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, STOC 1998, ACM Press, 1998, pp. 209-218. [9] J. Cathalo, B. Libert, J.-J. Quisquater, Efficient and non-interactive timed-release encryption, in: S. Qing, W. Mao, J. Lopez, G. Wang (Eds.), Proceedings of 7th International Conference on Information and Communications Security, ICICS 2005, in: LNCS, vol. 3783, Springer, Heidelberg, 2005, pp. 291-303. [10] J. H. Cheon, N. Hopper, Y. Kim, I. Osipkov, Timed-release and key-insulated public key encryption, in: G. Di Crescenzo, A. D. Rubin (Eds.), Proceedings of 10th International Conference on Financial Cryptography and Data Security, FC 2006, in: LNCS, vol. 4107, Springer, Heidelberg, 2006, pp. 191-205. [11] J. H. Cheon, N. Hopper, Y. Kim, I. Osipkov, Provably secure timed-release public key encryption, ACM Trans. Inf. Syst. Secur. 11(2), 2008. [12] K. Choo, C. Boyd, Y. Hitchcock, Errors in computational complexity proofs for protocols, in: B. K. Roy (Ed.), Proceedings of Advances in Cryptology-ASIACRYPT 2005, in: LNCS, vol. 3788, Springer, Heidelberg, 2005, pp. 624-643. [13] A. W. Dent and Q. Tang, Revisiting the security model for timed-release encryption with pre-open capability, in: J. A. Garay, A. K. Lenstra, M. Mambo, and R. Peralta (Eds.), Proceedings of ISC 2007, in: LNCS, vol. 4779, Springer-Verlag, 2007, pp. 158-174. [14] G. Di Crescenzo, R. Ostrovsky, S. Rajagopalan, Conditional oblivious transfer and timed-release encryption, in: J. Stern (Ed.), Proceedings of Advances in CryptologyEUROCRYPT 1999, in: LNCS, vol. 1592, Springer, Heidelberg, 1999, pp. 74-89. [15] M. Di Raimondo, R. Gennaro, New approaches for deniable authentication, in: V. Atluri, C. Meadows, A. Juels (Eds.), Proceedings of the 12th ACM Conference on Computer and Communications Security, CCS 2005, ACM Press, 2005, pp. 112-121.

14

[16] M. Di Raimondo, R. Gennaro, New approaches for deniable authentication, J. Cryptology 22 (1999) 572-615. [17] M. Di Raimondo, R. Gennaro, H. Krawczyk, Deniable authentication and key exchange, in: A. Juels, R. N. Wright, S. Di Vimercati (Eds.), Proceedings of the 13th ACM Conference on Computer and Communications Security, CCS 2006, ACM Press, 2006, pp. 400-409. [18] W. Diffie, M. Hellman, New directions in cryptography, IEEE Transactions on Information Theory 22(6), 1976, pp. 644-654. [19] Y. Dodis, D. H. Yum, Time capsule signatures, in: A. S. Patrick, M. Yung (Eds.), Proceedings of 9th International Conference on Financial Cryptography and Data Security, FC 2005, in: LNCS, vol. 3570, Springer, Heidelberg, 2005, pp. 57-71. [20] Y. Dodis, J. Katz, A. Smith, S. Walfish, Composability and on-line deniability of Authentication, in: O. Reingold (Ed.), Proceedings of 6th Theory of Cryptography Conference, TCC 2009, in: LNCS, vol. 5444, Springer, Heidelberg, 2009, pp. 146-162. [21] C. Dwork, M. Naor, Zaps and their applications, in: Proceedings of 41st Annual Symposium on Foundations of Computer Science, FOCS 2000, IEEE Computer Society, 2000, pp. 283293. [22] C. Dwork, M. Naor, A. Sahai, Concurrent zero-knowledge, in: J. S. Vitter (Ed.), Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, STOC 1998, ACM Press, 1998, pp. 409-418. [23] C. Dwork, M. Naor, A. Sahai, Concurrent Zero-Knowledge, Journal of ACM 51(6), 2004, 851-898. [24] J. A. Garay, M. Jakobsson, Timed release of standard digital signatures, in: M. Blaze (Ed.), Proceedings of 6th International Conference on Financial Cryptography, FC 2002, in: LNCS, vol. 2357, Springer, Heidelberg, 2003, pp. 168-182. [25] S. Goldwasser, S. Micali, Probabilistic encryptions, Journal of Computer and System Science 28(2), 1984, 270-299. [26] O. Goldreich, Foundations of Cryptography: Applications, Cambridge University Press, 2004. [27] Y. H. Hwang, D. H. Yum, and P. J. Lee, Timed-release encryption with pre-open capability and its application to certified e-mail system, in: J. Zhou, J. Lopez, R. H. Deng, and F. Bao (Eds.), Proceedings of ISC 2005, in: LNCS, vol. 3650, Springer, Heidelberg, 2005, pp. 344-358. [28] S. Jiang, Deniable authentication on the Internet, in: D. Pei, M. Yung, D. Lin, C. Wu (Eds.), Proceedings of Third SKLOIS Conference on Information Security and Cryptology, Inscrypt 2007, in: LNCS, vol. 4990, 2008, pp. 298-312. [29] S. Jiang, R. Safavi-Naini, An efficient deniable key exchange protocol, in: G. Tsudik (Ed.), Proceedings of 12th International Conference on Financial Cryptography and Data Security, FC 2008, in: LNCS, vol. 5143, Springer, Heidelberg, 2008, pp. 47-52. [30] H. Krawczyk, SKEME, A versatile secure key exchange mechanism for Internet, in: Proceedings of the 1996 Symposium on Network and Distributed Systems Security, NDSS 1996, IEEE Society, 1996, pp. 114-127. 15

[31] Y. Lindell, A simpler construction of CCA2-secure public-key encryption under general assumptions, in: E. Biham (Ed.), Proceedings of Advances in Cryptology-EUROCRYPT 2003, in: LNCS, vol. 2656, Springer, Heidelberg, 2003, pp. 241-254. [32] W. Mao, Timed-release cryptography, in: S. Vaudenay, A. M. Youssef (Eds.), Proceedings of 8th Annual International Workshop on Selected Areas in Cryptography, SAC 2001, in: LNCS, vol. 2259, Springer, Heidelberg, 2001, pp. 342-358. [33] T. May, Timed-release crypto, 1993, http://cypherpunks.venona.com/date/1994/06/msg00481.html, last access: 2013/09/24 [34] R. Pass, On the deniability in the common reference string and random oracle model, in: D. Boneh (Ed.), Proceedings of Advances in Cryptology-CRYPTO 2003, in: LNCS, vol. 2729, Springer, Heidelberg, 2003, pp. 316-337. [35] K. G. Paterson, E. A. Quaglia, Time-specific encryption, in: J. A. Garay, R. De Prisco (Eds.), Proceedings of 7th International Conference on Security and Cryptography for Networks, SCN 2010, in: LNCS, vol. 6280, Springer, Heidelberg, 2010, pp. 1-16. [36] R. Rivest, A. Shamir, D. Wagner, Time-lock puzzles and timed-release crypto, http://people.csail.mit.edu/rivest/RivestShamirWagner-timelock.ps, last accessed on August 8, 2012. [37] A. C. Yao and Y. Zhao, Deniable Internet key exchange, in: J. Zhou, M. Yung (Eds.), Proceedings of 8th International Conference on Applied Cryptography and Network Security, ACNS 2010, in: LNCS, vol. 6123, Springer, Heidelberg, 2010, pp. 329-348. Appendix A. One-time Simulation Sound Non-Interactive Zero Knowledge One-time simulation sound NIZK essentially means that besides the usual adaptive NIZK property, the adversary can not prove a false theorem even if the adversary has seen a simulated proof for a false theorem. The formulation below essentially follows from [26, 31]. Definition 8. Let ℓ(κ) be a polynomial in the security parameter κ. A pair of PPT interactive Turing machines (P, V ) with a common random string σ ← {0, 1}ℓ is an adaptive non-interactive zero-knowledge (Adaptive NIZK) proof system for a N P-language L with relation RL if the following holds. - Completeness. For any (x, w) ∈ RL , Pr[V (x, σ, P (x, w, σ)) = 1] = 1 − negl(κ), where σ is uniformly random in {0, 1}ℓ , the probability is over σ and the coins of P . - Adaptive Soundness. For σ ← {0, 1}ℓ , the probability that there exists x ̸∈ L and a proof π such that (σ, x, π) is accepting, is negligible, where the probability is over σ. - Adaptive Zero Knowledge. For any non-uniform PPT adversary A, there exists a simulator SIM such that (σ, x, π) generated in the following two processes are indistinguishable. ⋄ Take σ ← {0, 1}ℓ ; (x, w) ← A(σ) s.t. (x, w) ∈ RL ; π = P (x, w, σ). ⋄ SIM generates σ and an auxiliary string τ ; (x, w) ← A(σ) s.t. (x, w) ∈ RL ; SIM computes π using (x, σ, τ ). Adatpive NIZK is one-time simulation sound if there exists adaptive NIZK simulator SIM such that for any PPT adversary A the following holds negligibly. 16

• SIM outputs (σ, τ) and gives σ to A; A(σ) then computes a statement x and gives it to SIM; SIM next generates a proof π for x using (σ, τ) and gives it to A; A finally outputs a statement x′ and a proof π′ using (x, σ, π). A succeeds if V(x′, σ, π′) = 1, (x′, π′) ≠ (x, π) and x′ ∉ L.

Appendix B. Property of IND-CPA Secure PKE

Lemma 1. Let S be an IND-CPA secure public key encryption scheme. Let (e, d) ← S.Gen(1^κ). Then for any c, Pr[S.Enc_e(S.Dec_d(c)) = c] = negl(κ), where the probability is over the randomness of S.Enc_e(·) and (e, d).

Proof. Otherwise, assume the claim is violated by a (valid) ciphertext α. Recall that IND-CPA security implies that for any m_0, m_1 of the same length, Pr[A(S.Enc_e(m_b)) = b] = 1/2 + negl(κ). In particular, consider m_0 = S.Dec_d(α) and m_1 = 0 of the same length. An adversary A incorporating (m_0, m_1) can break the IND-CPA security of S as follows. When receiving c_b = S.Enc_e(m_b), A computes c ← S.Enc_e(m_0). If c = c_b, then he outputs 0; otherwise, he outputs 0 or 1 randomly. Note that c = c_b implies b = 0 since the decryption of S is unique. Hence, Pr[A(c_b) = b] = 1/2 + Pr[c = c_b]/2. On the other hand, Pr[c = c_b] = Pr[b = 0] · ∑_u Pr[S.Enc_e(m_0) = u]^2 ≥ Pr[S.Enc_e(m_0) = α]^2/2, where u ranges over the support of S.Enc_e(m_0); this is non-negligible. So Pr[A(c_b) = b] − 1/2 is non-negligible, a contradiction! 

Appendix C. Proof of Theorem 1

Proof. Completeness. Both the decryption with d and the forceful decryption can obtain r_0, ..., r_n, m. The two results are identical, as the decryption result of S and K is unique and there is a consistency check. Finally, the forceful decryption runs in polynomial time. Completeness follows.

Secrecy. We show that any σ-PRAM adversary A of time µ_H nκ^β can only break the secrecy negligibly; otherwise, an adversary D can use A to break the semantic security of K (this is the main thread of the security proof; of course, the proof will also rely on the security of S). D does the following. He generates (e, d) ← S.Gen(1^κ), invokes A with e and answers his queries as follows.

- H-query x. Initially, let L_H = { }. Upon query x, check if (x, y) ∈ L_H for some y. If not, take y ← {0, 1}^ℓ and add (x, y) into L_H. In any case, return y for (x, y) ∈ L_H.

- Stage one. Upon (m_0, m_1) from A, D forwards it to his challenger of scheme K as his own challenge query. After receiving K.Enc_SK(m_b) for b ← {0, 1} under a hidden key SK, D defines c_0^* = K.Enc_SK(m_b) and normally generates r_0^* ← {0, 1}^κ, r_i^* ← {0, 1}^{β log κ} and c_i^* for i = 1, ..., n. Finally, he returns C_b^* = r_0^* c_0^* c_1^* ··· c_n^* to A. He records (r_0^* ··· r_n^*, undef-SK) into L_H, where undef-SK is a symbol representing the hidden SK (due to the randomness of {r_i}_{i=0}^n, the probability that there already exists y s.t. (r_0^* ··· r_n^*, y) ∈ L_H is negligible and is ignored). Later, whenever r_0^* r_1^* ··· r_n^* is queried to H by A (denote this event by Bad_0), D aborts with output 0 or 1 randomly.

- Stage Two. Upon any C = r_0 c_0 c_1 ··· c_n (≠ C_b^*), D uses d to normally decrypt r_1, ..., r_n, computes H(r_0 r_1 ··· r_n) and decrypts c_0. This simulation is perfect unless r_0 r_1 ··· r_n = r_0^* ··· r_n^*. In this case, D simply rejects; we use Bad_1 to denote the event that this rejection decision is wrong.

Finally, if A outputs a guess bit b′ for b (within time t after receiving C_b^*; otherwise, A is an invalid adversary), D outputs b′ as well. Denote the simulation of D by Γ_sim and the real secrecy game in Definition 5 by Γ_rea. Notice that the views of A in Γ_sim and Γ_rea are identical unless Bad_0 ∨ Bad_1 occurs in Γ_sim. By assumption, A in Γ_rea has a non-negligible success advantage. If we can show Pr[Bad_i(Γ_sim)] = negl(κ) for i = 0, 1, then A in Γ_sim (hence D) has a non-negligible success advantage, which contradicts the semantic security of K. It remains to show Pr[Bad_i(Γ_sim)] = negl(κ) for i = 0, 1.

Lemma 2. If K is one-time unforgeable, Pr[Bad_1(Γ_sim)] = negl(κ).

Proof. Consider Γ_sim. Notice that c_0 in a Bad_1 event must be valid and different from c_0^* (since C ≠ C_b^* and c_i = c_i^* for i ≠ 0). If Pr[Bad_1(Γ_sim)] is non-negligible, then the one-time unforgeability of K can be broken by an attacker D′. Our key point is to look at the first Bad_1 event: if Bad_1 occurs in Γ_sim, then some Bad_1 event must occur first. For a decryption query C = (r_0, c_0, ..., c_n), let r_i = S.Dec_d(c_i) (i > 0) and denote the event r_0 ··· r_n = r_0^* ··· r_n^* (regardless of whether c_0 is valid or not) by Bad_1^*. Let ν bound the number of Bad_1^* events in Γ_sim. The code of D′ is as follows. He first takes w ← {1, ..., ν} and runs the code of D, except that the Bad_1^* events are handled differently. For the first w − 1 Bad_1^* events, the ciphertext C is rejected as in Γ_sim, while for the w-th Bad_1^* event, D′ directly outputs c_0 as his forgery and terminates. Note that before the w-th Bad_1^* event, the simulation by D′ and that in Γ_sim are identical. If the w-th Bad_1^* event is the first Bad_1 event (this occurs with probability 1/ν), then D′ succeeds. So D′ succeeds with probability Pr[Bad_1]/ν, which is non-negligible, contradicting the one-time unforgeability of K. 

From now on, we assume the Bad_1 event in Γ_sim never occurs. Note that, by our time complexity assumption for A, A in Γ_sim is a σ-PRAM attacker with runtime equivalent to t sequential hashes of input length κ. So he can make at most t parallel H-queries, each of which has at least one input of length ≥ κ. Note that t does not count parallel queries in which every input has length < κ. We modify Γ_sim to Γ_1 such that A is any PPT σ-PRAM adversary, except that he can make at most t parallel H-queries, each of which has at least one input of length ≥ κ. Since this restriction is satisfied for A in Γ_sim, we have

Lemma 3. Pr[Bad_0(Γ_sim)] ≤ Pr[Bad_0(Γ_1)].

We continue to modify Γ_1 to Γ_2 such that upon a decryption query r_0 c_0 ··· c_n from A, D never issues an H-query to help process it. Instead, after decrypting r_1 ··· r_n from c_1, ..., c_n using d, if c_0 r_0 ··· r_i (for some i) or r_0 ··· r_n was not recorded in L_H, or if r_0 ··· r_n = r_0^* ··· r_n^*, then D rejects; otherwise, D uses the records in L_H to help decrypt.

Lemma 4. If S is semantically secure and K is one-time unforgeable, Pr[Bad_0(Γ_1)] is negligibly close to Pr[Bad_0(Γ_2)].

Proof. We actually show that the adversary views in Γ_1 and Γ_2 are negligibly close. Recall that the view of A in Γ_1 and Γ_2 differs only when D in Γ_2 cannot find a record (c_0 r_0 ··· r_i, ∗) or (r_0 ··· r_n, ∗) in L_H while c_i or c_0 is still consistent.
Denote this event by inc. Similarly to Lemma 2, the inc event occurs for c_0 only negligibly. So if the lemma is untrue, then inc occurs for some c_i (i > 0) non-negligibly. Notice that if c_0 r_0 ··· r_i = c_0^* r_0^* ··· r_i^*, then there is no inc event for this c_i, as c_0^* r_0^* ··· r_i^* was recorded in L_H. Hence, an inc event implies that c_0 r_0 ··· r_i ≠ c_0^* r_0^* ··· r_i^* for some i. Note that c_i in C is consistent only if c_i = E_e(r_i; H(c_0 r_0 ··· r_i)). Since, by definition of inc, there is no record (c_0 r_0 ··· r_i, ∗) ∈ L_H, it follows that H(c_0 r_0 ··· r_i) is undefined and uniformly random in {0, 1}^ℓ. By Lemma 1 (Appendix B), c_i is valid only with negligible probability. Hence, inc occurs for c_i only negligibly too. 

We continue to modify Γ_2 to Γ_3 such that the decryption oracle is disabled.

Lemma 5. Pr[Bad_0(Γ_2)] ≈ Pr[Bad_0(Γ_3)].

Proof. We modify Γ_2 to Γ'_2 such that upon a decryption query, D does not use d to decrypt c_i and, instead, from i = 1 to n (in order), he directly searches for r_i so that (c_0 r_0 ··· r_i, y_i) ∈ L_H and c_i = S.Enc_e(r_i; y_i) for some y_i. If all r_i's are found, then he continues to decrypt c_0 as in Γ_2; otherwise, he rejects. Note that if the r_i's are found, then the decryption must be consistent with Γ_2 (using d). If r_i for some i is not found, then either c_i is valid but c_0 r_0 ··· r_i is not recorded in L_H and hence (by Lemma 1) c_i is consistent only negligibly, or c_i is invalid, or c_i is valid but inconsistent with the record (c_0 r_0 ··· r_i, y_i). This analysis implies that, if r_i is not found, then Γ_2 will reject (as in Γ'_2). Thus, the decryption in Γ_2 and Γ'_2 differs negligibly. So Pr[Bad_0(Γ'_2)] ≈ Pr[Bad_0(Γ_2)].

Pr[Bad_0(Γ'_2)] ≥ Pr[Bad_0(Γ_3)], as Γ_3 is a special case of Γ'_2 where A does not issue a decryption query. So to prove the lemma, it suffices to show that given any adversary A in Γ'_2, there exists an adversary B for Γ_3 such that Pr[Bad_0(A, Γ'_2)] = Pr[Bad_0(B, Γ_3)] + negl(κ). The strategy of B is to simulate Γ'_2 with A against it and in turn use A to help his attack in Γ_3. Specifically, the code of B is as follows. Given e, B forwards it to A. Upon m_0, m_1 from A, B forwards it to his own challenger. After receiving the challenge ciphertext C_b^*, he forwards it to A.

H-query. B defines a set L'_H = { }. Upon an H-query x, B forwards it to his own challenger and, after receiving the reply y, he records (x, y) into L'_H (if it is new) and returns y to A.

Decryption query C = r_0 c_0 c_1 ··· c_n (≠ C_b^*). Let I = 0 if c_i ≠ c_i^* for all i ≥ 1; otherwise, let I = max{i | c_i = c_i^*, i = 1, ..., n}. If c_i ≠ c_i^* for some i s.t. 0 ≤ i < I, B rejects. This decision is correct: c_i ≠ c_i^* implies that c_0 r_0 ··· r_I ≠ c_0^* r_0^* ··· r_I^* and hence H(c_0 r_0 ··· r_I) is independent of H(c_0^* r_0^* ··· r_I^*); by Lemma 1, Pr[c_I = c_I^*] = negl(κ), which we ignore. Otherwise, r_0 c_0 ··· c_I = r_0^* c_0^* ··· c_I^* (when I ≥ 1) and hence c_0 r_0 ··· r_I = c_0^* r_0^* ··· r_I^* (but B may not be able to determine r_1 ··· r_I now). Then, B searches in L'_H to find r_1 ··· r_n s.t.
i. (c_0 r_0 ··· r_i, y_i) ∈ L'_H for some y_i, I + 1 ≤ i ≤ n;
ii. (c_0, r_0, ..., r_n, y_1, ..., y_n) is consistent with c_{I+1} ··· c_n;
iii. (r_0 ··· r_n, y_0) ∈ L'_H for some y_0.
The message m is computed as follows.

Case I = n. As seen above, r_0 c_0 ··· c_n = r_0^* c_0^* ··· c_n^* = C_b^* (an invalid query). So return m = nil.

Case I = 0. If (i)(ii)(iii) are satisfied, return m = K.Dec_{y_0}(c_0); otherwise, reject. Note that this case differs from Γ'_2 only by replacing L_H in Γ'_2 with L'_H (above). This decryption differs from Γ'_2 only if (c_0 r_0 ··· r_u, ∗) (for some u) or (r_0 ··· r_n, ∗) is in L_H\L'_H. However, L_H\L'_H consists of queries from D, which are either c_0^* r_0^* ··· r_i^* (i = 1, ..., n) or r_0^* ··· r_n^*. Since c_1 ≠ c_1^*, we know that c_0 r_0 r_1 ≠ c_0^* r_0^* r_1^* and hence c_0 r_0 ··· r_i ∉ L_H\L'_H for any i. So such a u does not exist. So a decryption difference from Γ'_2 implies that r_0 ··· r_n = r_0^* ··· r_n^*. Thus, c_0 ≠ c_0^* as C ≠ C_b^*, which is a Bad_1 event (negligible, similarly to Lemma 2).


Case 0 < I < n. If (i)(ii)(iii) are not satisfied, then B rejects (this decision is consistent with Γ'_2 for a reason similar to Case I = 0). Otherwise, B queries r_0 ··· r_I to the H-oracle. When receiving the reply y_I, he checks whether (c_0 r_0 ··· r_I, y_I) is consistent with c_I^*. If yes, B returns K.Dec_{y_0}(c_0); otherwise, he rejects. Note that r_1 ··· r_n satisfying (i)(ii)(iii) must be unique; otherwise, for distinct satisfying r'_1 ··· r'_n and r_1 ··· r_n, c_n cannot be consistent with both H(c_0 r_0 r_1 ··· r_n) and H(c_0 r_0 r'_1 ··· r'_n), by Lemma 1. Recall that r_0 c_0 ··· c_I = r_0^* c_0^* ··· c_I^*. As (c_0 r_0 ··· r_I, y_I) is consistent with c_I^*, c_0 r_0 ··· r_I = c_0^* r_0^* ··· r_I^*. Hence, the decryption in this case is consistent with Γ'_2, as L'_H ⊂ L_H, and recall that Γ'_2 will decrypt to K.Dec_{y_0}(c_0) if and only if c_0 r_0 ··· r_i (for all i) was recorded in L_H and is consistent with c_i^*.

From our analysis of the three cases, the decryption by B is consistent with the decryption in Γ'_2, except with a negligible probability. Hence, the views of A in the simulation of B and in Γ'_2 are negligibly close. So Pr[Bad_0(A, Γ'_2)] = Pr[Bad_0(B, Γ_3)] + negl(κ). The lemma follows. 

In the remainder of the proof, we analyze Γ_3. Γ_3 is the game where A receives the challenge C_b^* and tries to compute b with access to the H-oracle but without decryption queries. As decryption queries are disabled, the challenger D himself does not issue an H-query other than computing H(c_0^* r_0^* ··· r_i^*) for all i and H(r_0^* ··· r_n^*) for C_b^*. Let DisOrd denote the following event in Γ_3: for some (i, j) with j ≥ i + log κ − 1, the first H-query from A with prefix c_0^* r_0^* ··· r_i^* is in fact an H-query with a longer prefix c_0^* r_0^* r_1^* ··· r_j^*.

Lemma 6. Pr[DisOrd] = negl(κ) for any PPT A in the PRAM (including σ-PRAM) model.

Proof. Otherwise, since there are at most n^2 pairs of (i, j), there exists (i*, j*) so that DisOrd on (i*, j*) (or DisOrd(i*, j*) for short) occurs non-negligibly. We show that the semantic security of S can then be broken by an adversary D'. We consider a multi-ciphertext challenge of S for D' (note: semantic security with a multi-ciphertext challenge is equivalent to that with a single-ciphertext challenge; see [26]). Define W_i = ⟨r_0^*, c_0^*, ..., c_n^*⟩, where c_0^* is as before, c_t^* = S.Enc_e[r_t^*; H(c_0^* r_0^* ··· r_t^*)] (1 ≤ t ≤ i) and c_t^* = S.Enc_e[0; H(c_0^* r_0^* ··· r_t^*)] (t > i). Define Γ_3^i to be a variant of Γ_3 where the ciphertext C_b^* in Γ_3 for A is replaced by W_i.

The code of the adversary D' against S is as follows. He takes r_0^*, r_1^*, ..., r_n^* randomly and sets r_1^* ··· r_n^* and 0 ··· 0 as the challenge pair for S. In turn, he will receive (γ_1, ..., γ_n) as the challenge ciphertext, where either γ_i = S.Enc_e(r_i^*) for all i or γ_i = S.Enc_e(0) for all i. D' takes i* ← {1, ..., n} and simulates Γ_3 normally using r_0^*, ..., r_n^* with A against it, except that c_{i*}^* ··· c_n^* = γ_{i*} ··· γ_n and that H(c_0^* r_0^* ··· r_j^*) for j > i* is defined as the randomness of c_j^* (although unknown to D'). This simulation terminates when an H-query s with prefix c_0^* r_0^* ··· r_{i*}^* occurs. In this case, he outputs 1 if and only if DisOrd(i*, j*) occurs. DisOrd(i*, j*) occurs for s if and only if s in fact has the longer prefix c_0^* r_0^* ··· r_{j*}^*. If c_t^* for t > i* is a ciphertext of r_t^*, then before the termination event, the view of A is according to Γ_3; otherwise, it is according to Γ_3^{i*}. Hence, the non-negligible gap in DisOrd between Γ_3 and Γ_3^{i*} implies breaking the semantic security of S, a contradiction!

It remains to show that DisOrd(i*, j*) in Γ_3^{i*} (where the challenge is C_b^* = W_{i*}) occurs negligibly. Toward this, we first note that the probability of DisOrd(i*, j*) with adversary A is bounded by the probability of DisOrd(i*, j*) with an adversary A' whose input (instead of W_{i*}) is

V_{i*} = ⟨m_b, r_0^*, ..., r_{i*−1}^*, H(c_0^* r_0^* ··· r_j^*) for j = 1, ..., n, H(r_0^* ··· r_n^*)⟩.

This is true as A' can perfectly simulate A interacting with D'. So it remains to bound the probability of DisOrd(i*, j*) with adversary A'. We assume that if A' issues an H-query r_0^* ··· r_n^*, then he will further query c_0^* r_0^* ··· r_n^*. He can do this as he can recognize the correctness of r_0^* ··· r_n^* using H(r_0^* ··· r_n^*) in V_{i*}. We first show that the probability of DisOrd(i*, j*) by A' with adaptive access to the H-oracle can be achieved by an adversary A'' with non-adaptive access to the H-oracle. To achieve this, A'' in Γ_3^{i*}, when given input V_{i*}, does the following. He runs A' with input V_{i*}. A'' then computes c_0^* using V_{i*}. He maintains a (fake) H-oracle as follows (recall the true H-oracle is maintained by D). He defines L''_H = { } and then records (c_0^* r_0^* ··· r_j^*, H(c_0^* r_0^* ··· r_j^*)) from V_{i*} into L''_H for j < i*. Later he maintains L''_H normally (i.e., when a query x was recorded, reply with the existing answer; otherwise, build a new record (x, y) and reply with y). So L''_H is generally inconsistent with the L_H maintained by D, except for the records (c_0^* r_0^* ··· r_j^*, H(c_0^* r_0^* ··· r_j^*)) with j < i*. Even so, before a query r_0^* ··· r_n^* or a query with prefix c_0^* r_0^* ··· r_{i*}^* occurs, L''_H has the same distribution as a real random oracle (i.e., L_H), and hence until then the view of A' in the simulation of A'' is identical to his view in Γ_3^{i*}. Thus, the probability that A' issues a query r_0^* ··· r_n^* or a query with prefix c_0^* r_0^* ··· r_{i*}^* is identical to that in Γ_3^{i*} and hence non-negligible. So when A'' finally sends all H-queries issued by A' to his own H-oracle maintained by D, DisOrd(i*, j*) remains unchanged and is non-negligible. Finally, DisOrd(i*, j*) with A'' occurs only negligibly since, prior to his H-queries, r_{i*}^* ··· r_n^* is uniform to him while he can make only polynomially many queries in one shot, resulting in probability Pr[DisOrd(i*, j*, A'')] ≤ poly(κ)/(κ^β)^{log κ} = negl(κ). 

In the following, we assume DisOrd never occurs. Thus, for any i and j ≥ i + log κ − 1, the H-query with prefix r_0^* ··· r_i^* always occurs prior to one with prefix r_0^* ··· r_j^*. Note that A is a σ-PRAM adversary. So each parallel query contains at most σ single H-queries. Let n_u denote the number of parallel H-queries with one input of length at least κ, each of which occurs after a query with prefix r_0^* r_1^* r_2^* ··· r_{1+(u−1) log κ}^* but before a query with prefix r_0^* r_1^* r_2^* ··· r_{1+u log κ}^*.

Lemma 7. Pr[n_0 ≤ a_0, ..., n_{n/log κ−1} ≤ a_{n/log κ−1}] ≤ (σ(∑_u a_u) log κ/(nκ^β))^{n/log κ} (1 + negl(κ)).

Proof. Let u* = 1 + (u − 1) log κ. Using the simple hybrid reduction (as in the first part of the proof of Lemma 6), the distributions of n_u in Γ_3 and Γ_3^{u*} are negligibly close. By Lemma 6, in Γ_3^{u*}, no H-query with prefix c_0^* r_0^* ··· r_{u log κ}^* occurs before the first H-query Q with prefix c_0^* r_0^* ··· r_{u*}^*. Let the view of A until query Q be view_u(A, Γ_3^{u*}). It follows that given view_u(A, Γ_3^{u*}), r_{1+u log κ}^* is uniformly random. Hence, Pr[n_u ≤ a_u | view_u(A, Γ_3^{u*})] ≤ σa_u/κ^β (recall that A is a σ-PRAM adversary and a_u counts parallel queries with one input of length at least κ). So Pr[n_u ≤ a_u | view_u(A, Γ_3)] ≤ (σa_u/κ^β)(1 + ϵ_u), where ϵ_u is negligible. So

Pr[n_u ≤ a_u, view_u(A, Γ_3)] ≤ (σa_u(1 + ϵ_u)/κ^β) · Pr[view_u(A, Γ_3)].   (C.1)

Let Ω_u be the set of view_u(A, Γ_3) that satisfies n_0 ≤ a_0, ..., n_{u−1} ≤ a_{u−1}. Summing Eq. (C.1) over Ω_u, we have

Pr[n_u ≤ a_u, n_0 ≤ a_0, ..., n_{u−1} ≤ a_{u−1}] ≤ (σa_u(1 + ϵ_u)/κ^β) · Pr[n_0 ≤ a_0, ..., n_{u−1} ≤ a_{u−1}].   (C.2)

Hence,

Pr[n_0 ≤ a_0, ..., n_{n/log κ−1} ≤ a_{n/log κ−1}] ≤ ∏_{u=0}^{n/log κ−1} σa_u(1 + ϵ_u)/κ^β   (C.3)
    ≤ 3 (σ/κ^β)^{n/log κ} · (∑_u a_u/(n/log κ))^{n/log κ}   (C.4)
    = 3 (σ(∑_u a_u) log κ/(nκ^β))^{n/log κ},   (C.5)

where the second ≤ uses ϵ_u ≤ 1/n, (1 + 1/n)^n ≤ 3 and the inequality a_1 ··· a_r ≤ ((a_1 + ··· + a_r)/r)^r. 
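The final analysis below uses, without derivation, a bound on the number of solutions of n_0 + ··· + n_{n/log κ−1} ≤ t with every n_u ≥ 1. For the reader's convenience, one way to obtain it (our derivation, using only the standard estimate that a binomial coefficient with parameters m and r is at most (em/r)^r, together with t ≥ n/log κ and e < 3) is:

\[
\binom{t-1}{\,n/\log\kappa-1\,}\;\le\;\binom{t}{\,n/\log\kappa\,}\;\le\;\left(\frac{e\,t}{n/\log\kappa}\right)^{n/\log\kappa}\;\le\;\left(\frac{3\,t\log\kappa}{n}\right)^{n/\log\kappa},
\]

where the first inequality holds because \binom{t}{k} = \binom{t-1}{k-1}\cdot(t/k) and t ≥ k for k = n/\log\kappa.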

Final Analysis. Let ν be the total number of parallel H-queries by a σ-PRAM adversary A in Γ_3 with one input of length at least κ. Since A can make at most t such H-queries in total, ν ≤ t. As r_0 has length κ, ∑_{u=0}^{n/log κ−1} n_u ≤ ν and so Pr[ν ≤ t] ≤ Pr[∑_{u=0}^{n/log κ−1} n_u ≤ t]. Since n_i ≥ 1, Pr[ν ≤ t] = 0 for t < n/log κ. If t ≥ n/log κ, since n_0 + ··· + n_{n/log κ−1} ≤ t has at most (t−1 choose n/log κ−1) ≤ (3t log κ/n)^{n/log κ} solutions, by Lemma 7,

Pr[∑_{u=0}^{n/log κ−1} n_u ≤ t] ≤ 3 · (√3 · √σ · t log κ/(nκ^{β/2}))^{2n/log κ} < 3 · 3^{−n/log κ},

which is negligible.

Hence, Pr[Bad_0(Γ_3)] = negl(κ). By Lemmas 3, 4, 5, Pr[Bad_0(Γ_sim)] = negl(κ). This completes the proof of our theorem. 

Appendix D. Proof of Theorem 2

Proof. Completeness. For any γ = (C, τ, π), if π is invalid, the decryption result (no matter whether one decrypts with d or with the forceful algorithm) is m = nil. If π is valid, then from the soundness of P, (C, τ) ∈ R_e (i.e., there exists (m, r, r′) that is consistent with (C, τ)). In this case, m from T is identical to S.Dec_d(C) by the completeness of TCom and S. So the forceful decryption and the decryption using d give the same m. If we ignore the small computing cost of verifying π, T has runtime T. The completeness follows.

Secrecy. Let Γ_0 be the secrecy game of the timed encryption system S*. Before the proof, we first define two variants of Γ_0. Γ_1 is a variant of Γ_0 such that σ is simulated by the challenger (who also gets a trapdoor η of σ) and π is simulated using η. Let Γ_2 be a variant of Γ_1 such that in the challenge ciphertext γ* = (C_b^*, τ*, π*), C_b^* encrypts m_b while τ* commits to m_{1−b}. We use p_i^u to denote the probability that A outputs 0 in Γ_i when b = u. The secrecy requires |p_0^0 − p_0^1| = negl(κ). It suffices to show that |p_0^u − p_1^u| (for u = 0, 1), |p_1^0 − p_2^1| and |p_2^1 − p_1^1| are all negligible, as |p_0^0 − p_0^1| ≤ |p_0^0 − p_1^0| + |p_1^0 − p_2^1| + |p_2^1 − p_1^1| + |p_1^1 − p_0^1|. Notice that |p_0^u − p_1^u| is negligible simply due to the adaptive zero knowledge property of P. We now prove the other two.

|p_1^0 − p_2^1| = negl(κ). We show that if this is not true, then S is not CCA2 secure. Let A* be an attacker against S*. Then, an attacker A against S can be constructed as follows. Given e, A simulates σ and also obtains its trapdoor η. He gives (e, σ) to A*. Upon a challenge query (m_0, m_1), he forwards it to his own challenger and in turn receives C_b. Then he computes τ* = TCom(m_0) and simulates π* for (C_b, τ*) using η. Finally, he gives γ_b = (C_b, τ*, π*) to A*. A decryption query (C, τ, π) after the challenge query is handled as follows. By the query restriction, (C, τ, π) ≠ (C_b, τ*, π*). If π is invalid, he rejects normally. If π is valid, then by the one-time simulation soundness of P, there must exist (m, r, r′) such that ((C, τ), (m, r, r′)) ∈ R_e. In this case, if C ≠ C_b, then C is decrypted normally using the decryption oracle of A; otherwise, A suspends A* and then uses the forced-open algorithm of TCom to compute the de-commitment m in τ. Here m = m_b by the consistency of C with τ, which is guaranteed by the validity of π and the one-time simulation soundness of P. Hence, b is obtained. We can see that in any case the decryption query is correctly handled. Finally, A outputs whatever A* does.

From the simulation of A, when b = 0, the simulation is according to Γ_1^0 (i.e., Γ_1 with b = 0); when b = 1, it is according to Γ_2^1 (i.e., Γ_2 with b = 1; more precisely, this holds if we ignore the negligible soundness error of P). Hence, a non-negligible gap between p_1^0 and p_2^1 implies a non-negligible advantage of A, contradicting the security of S.

|p_2^1 − p_1^1| = negl(κ). If this is not true, we show that the secrecy of TCom can be broken by an α-PRAM adversary B. B does the following. He takes (e, d) normally, generates σ with trapdoor η and simulates Γ_1^1, except for the challenge query (m_0, m_1). In this case, he provides (m_0, m_1) to his own challenger and receives τ_b = TCom(m_b). Then, the challenge for A is (S.Enc_e(m_1), τ_b, π*), where π* is simulated using η. Upon a decryption query (c, τ, π), he answers it normally using d. By the one-time simulation soundness of P, the ciphertext is consistent only if π is consistent and (c, τ) contains the same plaintext. So the decryption is consistent with Γ_1^1 and Γ_2^1. Finally, when A outputs 0 within time t, B outputs 0, and 1 otherwise. Note that if b = 1, the simulated game is according to Γ_1^1; otherwise, it is according to Γ_2^1. Hence, a non-negligible gap between p_2^1 and p_1^1 implies breaking the secrecy of TCom. It remains to check the runtime of B. Note that if A is in the α-PRAM model and outputs b in time t, then B is in the α-PRAM model and returns b within time t of receiving τ_b (here we ignore the small time needed to compute S.Enc_e(m_1) and π*). 
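To make the object of this proof easier to visualize, the following is a minimal, schematic sketch of the shape of the generic construction analyzed above: a ciphertext γ = (C, τ, π), where C is a CCA2 encryption of m under S, τ is a timed commitment to m, and π is a one-time simulation-sound NIZK proof that (C, τ) is consistent. All classes and method names below are hypothetical placeholders with no cryptographic value; the sketch only illustrates why decryption with d and forceful decryption return the same plaintext, as argued in the completeness part.

from dataclasses import dataclass

@dataclass
class Gamma:
    C: bytes    # S.Enc_e(m; r): CCA2 encryption of m
    tau: bytes  # TCom(m; r'): timed commitment to m
    pi: bytes   # NIZK proof that (C, tau) is consistent

class ToyPKE:                      # placeholder for the CCA2-secure scheme S
    def enc(self, m: bytes) -> bytes:
        return b"enc:" + m
    def dec(self, c: bytes) -> bytes:
        return c[len(b"enc:"):]

class ToyTimedCommitment:          # placeholder for TCom
    def commit(self, m: bytes) -> bytes:
        return b"com:" + m
    def force_open(self, tau: bytes) -> bytes:  # models the time-T forced opening
        return tau[len(b"com:"):]

class ToyNIZK:                     # placeholder for the proof system P
    def prove(self, C: bytes, tau: bytes, m: bytes) -> bytes:
        return b"pi"
    def verify(self, C: bytes, tau: bytes, pi: bytes) -> bool:
        return pi == b"pi"

def encrypt(pke: ToyPKE, tcom: ToyTimedCommitment, nizk: ToyNIZK, m: bytes) -> Gamma:
    C, tau = pke.enc(m), tcom.commit(m)
    return Gamma(C, tau, nizk.prove(C, tau, m))

def decrypt_with_d(pke: ToyPKE, nizk: ToyNIZK, g: Gamma):
    # normal decryption: verify pi, then decrypt C with the secret key d
    return pke.dec(g.C) if nizk.verify(g.C, g.tau, g.pi) else None  # None plays the role of nil

def forceful_decrypt(tcom: ToyTimedCommitment, nizk: ToyNIZK, g: Gamma):
    # forceful decryption T: verify pi, then force-open tau (the step taking time about T)
    return tcom.force_open(g.tau) if nizk.verify(g.C, g.tau, g.pi) else None

if __name__ == "__main__":
    pke, tcom, nizk = ToyPKE(), ToyTimedCommitment(), ToyNIZK()
    g = encrypt(pke, tcom, nizk, b"bid=42")
    # Completeness, as argued above: both decryption paths return the same plaintext,
    # and an invalid proof makes both return nil.
    assert decrypt_with_d(pke, nizk, g) == forceful_decrypt(tcom, nizk, g) == b"bid=42"

In the real scheme, S, TCom and P are, of course, the actual primitives of the construction, and the forced opening of TCom is what enforces the gap between t and T.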

Appendix E. Proof of Theorem 3

Proof. Correctness. Assume Π_i^{ℓ_i} and Π_j^{ℓ_j} share an identical sid and both accept. Using the symbols in the protocol description,

sid_i^{ℓ_i} = ⟨P_i, P_j, E_j[k_1|g^x], C_2⟩ = sid_j^{ℓ_j} = ⟨P_i, P_j, C_1, E_i[k_2|C_1|g^y]⟩.

Then Π_i^{ℓ_i} is the initiator and Π_j^{ℓ_j} is the responder. So C_1 = E_j[k_1|g^x] and C_2 = E_i[k_2|C_1|g^y]. Hence, they see the same (g^x, g^y) and thus the same sk = g^{xy}. The correctness follows.

Deniability. The simulator S in Γ_sim is as follows: it behaves normally except when it needs to use the key D_i for some i. There are two scenarios in which D_i is used. First, Corrupt(i) occurs. In this case, S can perfectly answer the query as D_i is externally provided by the trusted party T. Second, D_i is needed to decrypt a ciphertext C. In this case, the simulator can suspend the adversary A and use the forceful decryption to obtain m = T(C). After that, he frees A and continues the normal execution. Due to the completeness of the (α, t, T)-timed encryption scheme, the outcome of T differs from D_i(C) only negligibly. Note that this holds regardless of how C is computed (in particular, it could come from the Execute oracle or be invalid). The remaining oracles (i.e., Corrupt(i, ℓ_i), Send, Test) do not use any D_i, and hence whatever is computable by these oracles in Γ_rea can be perfectly computed by the simulator in Γ_sim (e.g., stat_i^{ℓ_i} is updated perfectly as in Γ_rea). Hence, the simulation is statistically close to the real one. Deniability follows. (Note that the deniability proof here does not use a random oracle even if the secrecy proof needs one. So the concern in [34] that deniability in the random oracle model cannot be trusted does not affect us, even if the underlying timed encryption scheme is based on random oracles.)

Authentication. If Π_t^{ℓ_t} is the test instance, we need to show that it has a unique partner Π_s^{ℓ_s}. We only need to prove the existence, as otherwise the existence of two instances in P_s partnered with Π_t^{ℓ_t} implies that the same x (or y) in P_s (honest) will be sampled repeatedly, which is negligible (i.e., it has probability 1/q, which we ignore). First consider the case where the test instance Π_t^{ℓ_t} is a responder. Let Π_t^{ℓ_t} receive C_1^* from some P_s (through a Send(t, ℓ_t, C_1^*) oracle). He sends C_2^* = E_s[k_2^*|C_1^*|g^{y^*}] to P_s. Denote by Imp the event that Π_t^{ℓ_t} accepts the Flow_3 message τ^* but there is no partnered instance in P_s for Π_t^{ℓ_t}. It suffices to show that Pr[Imp] is negligible. Denote the real security game by Γ. We modify Γ to Γ′ such that the game terminates after time t from the moment Π_t^{ℓ_t} sends out C_2^*. Pr[Imp(Γ)] = Pr[Imp(Γ′)] since, if Imp occurs, it must happen within time t after Π_t^{ℓ_t} sends out C_2^*.

We further modify Γ′ to Γ′′ such that the C_2^* computed by Π_t^{ℓ_t} is E_s[0|C_1^*|g^{y^*}] (instead of E_s[k_2^*|C_1^*|g^{y^*}]). To be consistent, whenever the simulated P_s receives this same E_s[0|C_1^*|g^{y^*}], it proceeds normally using (k_2^*|C_1^*|g^{y^*}) as the decryption result (e.g., to be used in updating stat_s or computing τ). By Lemma 8 below, Pr[Imp(Γ′)] = Pr[Imp(Γ′′)] + negl(κ). It remains to show that Pr[Imp(Γ′′)] is negligible. Note that, due to the Imp event, no instance in P_s sees the same (C_1^*, C_2^*) as Π_t^{ℓ_t} does. Hence, P_s never computes a MAC with input (P_s, P_t, C_1^*, C_2^*). Reviewing the oracles, we can see that, besides appearing as part of stat_t^{ℓ_t}, k_2^* in Γ′′ is only used to evaluate MAC_{k_2^*}(·) (in particular, Π_t^{ℓ_t} cannot be issued a session state reveal (i.e., Corrupt(t, ℓ_t)) and P_t cannot be corrupted). Hence, a non-negligible Pr[Imp(Γ′′)] can be reduced to breaking the MAC, a contradiction! This completes the authentication proof for the case where Π_t^{ℓ_t} is a responder. When Π_t^{ℓ_t} is an initiator, the proof is similar. In summary, Pr[Imp(Γ)] is negligible and the authentication follows.

Lemma 8. Pr[Imp(Γ′)] = Pr[Imp(Γ′′)] + negl(κ).

Proof. Otherwise, an α-PRAM attacker D against the secrecy of S is constructed as follows. Assume there are at most µ responder instances. Given a challenge public key E, D takes u ← {1, ..., µ} and s ← {1, ..., n}. He initializes Γ′ normally with A against it, except that E_s = E. In particular, D_i for i ≠ s is known while D_s is unknown. He simulates Γ′ as follows.

- Prior to the first activation of the u-th responder instance (denoted by Π_t^{ℓ_t}), all queries except Corrupt(s) are answered normally. For example, Send oracles are handled normally with the help of D_i for i ≠ s (or of the decryption oracle D_s(·) for i = s), and stat_i^{ℓ_i} is updated normally. Upon Corrupt(s), D aborts with output 0.

- Now consider the u-th responder instance Π_t^{ℓ_t}. Since Π_t^{ℓ_t} is a responder instance, its first activation must be due to a Send(t, ℓ_t, C_1^*) query. If pid_t^{ℓ_t} ≠ P_s, abort with output 0; otherwise, upon the Send(t, ℓ_t, C_1^*) query, D takes y^*, k_2^* randomly and outputs the challenge pair (0|C_1^*|g^{y^*}, k_2^*|C_1^*|g^{y^*}). In turn, he receives C_2^* and forwards it to P_s. Let hid-r_2 be a symbol representing the unknown randomness in C_2^*. Record stat_t^{ℓ_t} = (s, t, C_1^*, k_2^*, y^*, hid-r_2, T_t).

- Any oracle query to P_u for u ≠ s, t is handled normally with D_u.

- Upon Corrupt(u) for u = s, t, abort with output 0.

- An oracle query Send(s, ℓ, Flow_j) for j = 0, 1 is handled normally as no secret is used. Oracle queries Send(t, ℓ, Flow_i) for any i with ℓ ≠ ℓ_t, and Corrupt(t, ℓ) (ℓ ≠ ℓ_t), are handled normally with D_t. If Corrupt(t, ℓ_t) occurs, abort with output 0.

- A query Send(s, ℓ, C_2) is answered as follows. If C_2 ≠ C_2^*, or if the query Send(s, ℓ, C_2^*) occurs before the query Send(t, ℓ_t, C_1^*), the decryption of C_2 is answered normally with the help of the decryption oracle D_s(·). Otherwise, D proceeds using k_2^*|C_1^*|g^{y^*} as the decryption result of C_2^* (e.g., to update stat_s^ℓ).

- Send(s, ℓ, Flow_k) for k = 3, 4, Corrupt(s, ℓ), and Send(t, ℓ_t, Flow_3) are handled normally, as they are based on the state established after processing Flow_1 or Flow_2.

- If Π_t^{ℓ_t} is not chosen as the test instance, abort; otherwise, proceed normally to compute the challenge key.

Finally, when Π_t^{ℓ_t} receives a valid τ within time t after sending out C_2^* while no partnered instance in P_s exists for Π_t^{ℓ_t}, D outputs 1; otherwise, 0.
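Before the analysis, it may help to record the quantitative claim it establishes. The following inequality is our restatement of the argument in the next paragraph; the factor 1/(nµ) is the probability, computed there, that D's guess (u, s) is correct:

\[
\Bigl|\Pr\bigl[D=1 \mid C_2^* \text{ encrypts } k_2^*|C_1^*|g^{y^*}\bigr]-\Pr\bigl[D=1 \mid C_2^* \text{ encrypts } 0|C_1^*|g^{y^*}\bigr]\Bigr|\;\ge\;\frac{1}{n\mu}\,\Bigl|\Pr[\mathrm{Imp}(\Gamma')]-\Pr[\mathrm{Imp}(\Gamma'')]\Bigr|,
\]

so a non-negligible gap between Pr[Imp(Γ′)] and Pr[Imp(Γ′′)] yields a non-negligible distinguishing advantage for D against S.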


Now we analyze the success of D. The probability that the u-th instance Π_t^{ℓ_t} is the test instance and P_s is its partner party is 1/(nµ). When the guess of (u, s) is correct, the simulation of D with C_2^* encrypting 0|C_1^*|g^{y^*} (resp. k_2^*|C_1^*|g^{y^*}) is perfectly consistent with Γ′′ (resp. Γ′). Hence, a non-negligible gap of the event Imp between them implies a non-negligible advantage of D. In addition, since A is in the α-PRAM model, so is D. D does not violate the time constraint since A must output the guess b′ within time t. This contradicts the secrecy of S. 

Secrecy. We have shown that the test instance Π_t^{ℓ_t} must have a unique partnered instance Π_s^{ℓ_s}. In addition, this Π_s^{ℓ_s} also has Π_t^{ℓ_t} as its unique partner. This is so due to the fact that sampling a repeated x or y in P_t has a negligible probability 1/q (we ignore it). Now we show that the adversary's success probability in the test session is 1/2 + negl(κ); otherwise, an attacker B can break the DDH assumption as follows. Let µ be an upper bound on the number of instances invoked by A. Given a challenge tuple (α, β, γ), B sets up Γ normally with A against it. He randomly takes two distinct numbers u, v from {1, ..., µ}. Assume u < v. Then he simulates Γ normally, except when the u-th instance Π_t^{ℓ_t} or the v-th instance Π_s^{ℓ_s} is invoked. If Π_t^{ℓ_t} is not an initiator instance or if Π_s^{ℓ_s} is not a responder instance, abort with output 0 or 1 randomly. Otherwise, for Π_t^{ℓ_t} he defines g^x as α and for Π_s^{ℓ_s} he defines g^y as β. In this simulation, x = log_g α and y = log_g β are unknown and hence x (resp. y) in stat_t^{ℓ_t} (resp. stat_s^{ℓ_s}) is set as * (undefined). Whenever Π_t^{ℓ_t} or Π_s^{ℓ_s} is compromised, or if Π_t^{ℓ_t} and Π_s^{ℓ_s} are not partnered, or if neither of them is chosen as the test session, then B aborts with output 0 or 1 randomly; otherwise, he defines γ as the challenge session key for the test session (either Π_t^{ℓ_t} or Π_s^{ℓ_s}). Finally, he outputs whatever A does.

First of all, we have shown in the authentication property that there is a unique pair of partnered instances, one of which is the test session. If this pair of partnered instances are the u-th and v-th instances, then the simulation is consistent with a real distribution. Indeed, the only difference of the simulation from a real game is that x, y are unknown to the simulator B. However, since one of the two instances is the test instance, Π_t^{ℓ_t} and Π_s^{ℓ_s} are not allowed to be issued a session key reveal, a session state reveal or a party corruption. Hence, the only query that uses x, y is the test query, where x, y are used to compute the Diffie-Hellman key of X, Y. However, since the test session outputs a Diffie-Hellman key with probability 1/2 and a random key with probability 1/2, γ has the same distribution as this output, and so x, y are actually not needed in the simulation. So when (u, v) is correctly guessed, the simulation is perfect. Since prior to the abortion event the adversary view is real, the correct guess of (u, v) has probability 1/(µ(µ−1)). Conditional on the correct guess of (u, v), the adversary view is consistent with the real distribution in Γ. Further, when the guess is incorrect, the abortion event will certainly occur and hence the output is 0 or 1 randomly. So a non-negligible advantage of A in Γ implies a non-negligible advantage of B in his DDH game, a contradiction! 
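For completeness, the concluding step of the secrecy argument can also be written quantitatively. The following is our restatement, where the negligible term collects the events ignored above (e.g., a repeated x or y) and µ(µ−1) accounts for the guess of (u, v):

\[
\mathrm{Adv}^{\mathrm{DDH}}(B)\;\ge\;\frac{2}{\mu(\mu-1)}\,\Bigl|\Pr[A\ \text{succeeds in}\ \Gamma]-\tfrac{1}{2}\Bigr|\;-\;\mathrm{negl}(\kappa),
\]

so any non-negligible advantage of A over 1/2 in the test session translates into a non-negligible DDH advantage for B.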
