

New and Improved Constructions of

Non-Malleable Cryptographic Protocols ∗

Rafael Pass † Alon Rosen ‡

Abstract

We present a new constant-round protocol for non-malleable zero-knowledge. Using this protocol as a subroutine, we obtain a new constant-round protocol for non-malleable commitments. Our constructions rely on the existence of (standard) collision resistant hash functions. Previous constructions either relied on the existence of trapdoor permutations and hash functions that are collision resistant against sub-exponential sized circuits, or required a super-constant number of rounds. Additional results are the first construction of a non-malleable commitment scheme that is statistically hiding (with respect to opening), and the first non-malleable commitments that satisfy a strict polynomial-time simulation requirement.

Our approach differs from the approaches taken in previous works in that we view non-malleable zero-knowledge as a building block rather than an end goal. This gives rise to a modular construction of non-malleable commitments and results in a somewhat simpler analysis.

Keywords: Cryptography, zero-knowledge, non-malleability, man-in-the-middle, round-complexity, non-black-box simulation

∗ Preliminary version appeared in STOC 2005, pages 533–542.
† Department of Computer Science, Cornell University, Ithaca, NY. E-mail: [email protected]. Part of this work done while at CSAIL, MIT, Cambridge, MA.
‡ Center for Research on Computation and Society (CRCS), DEAS, Harvard University, Cambridge, MA. E-mail: [email protected]. Part of this work done while at CSAIL, MIT, Cambridge, MA.


Contents

1 Introduction
  1.1 Non-Malleable Protocols
  1.2 Our Contributions
  1.3 Techniques and New Ideas
  1.4 Related Work
  1.5 Future and Subsequent Work

2 Preliminaries
  2.1 Basic notation
  2.2 Witness Relations
  2.3 Probabilistic notation
  2.4 Computational indistinguishability and statistical closeness
  2.5 Interactive Proofs, Zero-Knowledge and Witness-Indistinguishability
  2.6 Proofs of Knowledge
  2.7 Universal Arguments
  2.8 Commitment Schemes

3 Non-Malleable Protocols
  3.1 Non-Malleable Interactive Proofs
  3.2 Non-malleable Zero-Knowledge
  3.3 Non-malleable Commitments
  3.4 Comparison with Previous Definitions
  3.5 Simulation-Extractability

4 A Simulation-Extractable Protocol
  4.1 Barak's non-black-box protocol
  4.2 A "Special-Purpose" Universal Argument
  4.3 A family of 2n protocols
  4.4 A family of 2n protocols

5 Proving Simulation-Extractability
  5.1 Proof Overview
  5.2 Many-to-One Simulation-Extractability
    5.2.1 The Many-to-One Simulator
    5.2.2 The Simulator-Extractor
    5.2.3 Correctness of Simulation-Extraction
  5.3 "Full-Fledged" Simulation-Extractability

6 Non-malleable Commitments
  6.1 A statistically-binding scheme (NM with respect to commitment)
  6.2 A statistically-hiding scheme (NM with respect to opening)

7 Acknowledgments

A Missing Proofs


1 Introduction

Consider the execution of two-party protocols in the presence of an adversary that has full control of the communication channel between the parties. The adversary has the power to omit, insert or modify messages at its choice. It also has full control over the scheduling of the messages. The honest parties are not necessarily aware of the existence of the adversary, and are not allowed to use any kind of trusted set-up (such as a common reference string).

The above kind of attack is often referred to as a man-in-the-middle attack. It models a natural scenario whose investigation is well motivated. Protocols that retain their security properties in the face of a man-in-the-middle are said to be non-malleable [16]. Due to the hostile environment in which they operate, the design and analysis of non-malleable protocols is a notoriously difficult task. The task becomes even more challenging if the honest parties are not allowed to use any kind of trusted set-up. Indeed, only a handful of such protocols have been constructed so far.

The rigorous treatment of two-party protocols in the man-in-the-middle setting was initiated in the seminal paper by Dolev, Dwork and Naor [16]. The paper contains definitions of security for the tasks of non-malleable commitment and non-malleable zero-knowledge. It also presents protocols that meet these definitions. The protocols rely on the existence of one-way functions, and require O(log n) rounds of interaction, where n ∈ N is a security parameter.

A more recent result by Barak presents constant-round protocols for non-malleable commitment and non-malleable zero-knowledge [2]. This is achieved by constructing a coin-tossing protocol that is secure against a man-in-the-middle, and then using the outcome of this protocol to instantiate known constructions for non-malleable commitment and zero-knowledge in the common reference string model. The proof of security makes use of non-black-box techniques and is highly complex. It relies on the existence of trapdoor permutations and hash functions that are collision resistant against sub-exponential sized circuits.

In this paper we continue the line of research initiated by the above papers. We will be interested in constructing new constant-round protocols for non-malleable commitment and non-malleable zero-knowledge, and will not rely on any kind of set-up assumption.

1.1 Non-Malleable Protocols

In accordance with the above discussion, consider a man-in-the-middle adversary A that is simultaneously participating in two executions of a two-party protocol. These executions are called the left and the right interaction. Besides controlling the messages that it sends in the left and right interactions, A has control over the scheduling of the messages. In particular, it may delay the transmission of a message in one interaction until it receives a message (or even multiple messages) in the other interaction.
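As a toy illustration (ours, not part of the paper's formal model), the scheduling power of such an adversary can be sketched as a message interface: the adversary buffers incoming messages from each side and decides when, and in what form, to deliver them to the other side. The simplest strategy, verbatim relaying, is exactly the "copying" behavior that definitions of non-malleability must rule out or explicitly exempt.

```python
# A minimal sketch of a man-in-the-middle message scheduler (illustrative only).
# The adversary sits between a left party and a right party and decides, for
# each incoming message, whether to buffer it, modify it, or forward it.

class ManInTheMiddle:
    def __init__(self):
        self.left_buffer = []   # messages received in the left interaction
        self.right_buffer = []  # messages received in the right interaction

    def receive_left(self, msg):
        # The adversary may delay delivery of this message arbitrarily.
        self.left_buffer.append(msg)

    def receive_right(self, msg):
        self.right_buffer.append(msg)

    def send_right(self):
        # A "copying" adversary simply relays the latest left message
        # into the right interaction.
        return self.left_buffer[-1] if self.left_buffer else None

adv = ManInTheMiddle()
adv.receive_left("prover_msg_1")
assert adv.send_right() == "prover_msg_1"  # verbatim relay
```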

[Figure 1: The man-in-the-middle adversary. (a) Interactive proofs: A interacts with the prover P on a statement x ∈ L in the left interaction, and with the verifier V in the right interaction. (b) Commitments: A receives Com(v) from the committer C on the left, and sends a commitment to the receiver R on the right.]

The adversary is trying to take advantage of its participation in the left interaction in order to violate the security of the protocol executed in the right interaction, where the exact interpretation of the term "violate the security" depends on the specific task at hand.


A two-party protocol is said to be non-malleable if the left interaction does not "help" the adversary in violating the security of the right interaction. Following the simulation paradigm [26, 27, 23, 24], this is formalized by defining appropriate "real" and "idealized" executions.

In the real execution, called the man-in-the-middle execution, the adversary participates in both the left and the right interactions. In the idealized execution, called the stand-alone execution, the adversary participates in only a single interaction. Security is defined by requiring that the adversary cannot succeed better in the man-in-the-middle execution than it could have in the stand-alone execution. In the specific instances of zero-knowledge and string commitment, the definition of security takes the following forms.

Non-malleable zero-knowledge [16]. Let 〈P, V 〉 be an interactive proof system. In the left interaction the adversary A is verifying the validity of a statement x by interacting with an honest prover P. In the right interaction A proves the validity of a statement x̃ ≠ x to the honest verifier V (see Figure 1.a). The objective of the adversary is to convince the verifier in the right interaction that x̃ ∈ L. Non-malleability of 〈P, V 〉 is defined by requiring that for any man-in-the-middle adversary A, there exists a stand-alone prover S that manages to convince the verifier with essentially the same probability as A. The interactive proof 〈P, V 〉 is said to be non-malleable zero-knowledge if it is non-malleable and (stand-alone) zero-knowledge.

Non-malleable commitments [16]. Let 〈C, R〉 be a commitment scheme. In the left interaction the adversary A receives a commitment to a value v from the committer C. In the right interaction A sends a commitment to a value ṽ to the receiver R (see Figure 1.b). The objective of the adversary is to succeed in committing in the right interaction to a value ṽ ≠ v that satisfies R(v, ṽ) = 1 for some poly-time computable relation R. Non-malleability of 〈C, R〉 is defined by requiring that for any man-in-the-middle adversary A, there exists a stand-alone committer S that manages to commit to a related value ṽ with essentially the same probability as A.

Schemes that satisfy the above definition are said to be non-malleable with respect to commitment. In a different variant, called non-malleable commitment with respect to opening [19], the adversary is considered to have succeeded only if it manages to decommit to a related value ṽ.

1.2 Our Contributions

Our main result is the construction of a new constant-round protocol for non-malleable ZK. The proof of security relies on the existence of (ordinary) collision resistant hash functions and does not rely on any set-up assumption.

Theorem 1 (Non-malleable ZK) Suppose that there exists a family of collision resistant hash functions. Then, there exists a constant-round non-malleable ZK argument for every L ∈ NP.

Theorem 1 is established using the notion of simulation extractability. A protocol is said to be simulation extractable if for any man-in-the-middle adversary A, there exists a simulator-extractor that can simulate the views of both the left and the right interactions for A, while outputting a witness for the statement proved by A in the right interaction. Any protocol that is simulation-extractable is also non-malleable ZK. The main reason for using simulation extractability (which is more technical in flavor than non-malleability) is that it is easier to work with.

Using our new simulation extractable protocols as a subroutine, we construct constant-round protocols for non-malleable string commitment. One of our constructions achieves statistically binding commitments that are non-malleable w.r.t. commitment, and the other achieves statistically hiding commitments that are non-malleable w.r.t. opening.

Theorem 2 (Statistically binding non-malleable commitment) Suppose that there exists a family of collision-resistant hash functions. Then, there exists a constant-round statistically binding commitment scheme that is non-malleable with respect to commitment.

Theorem 3 (Statistically hiding non-malleable commitment) Suppose that there exists a family of collision-resistant hash functions. Then, there exists a constant-round statistically hiding commitment scheme that is non-malleable with respect to opening.

Underlying cryptographic assumptions. The main quantitative improvement of our construction over the constant-round protocols in [2] is in the underlying cryptographic assumption. Our constructions rely on the existence of ordinary collision resistant hash functions. The protocols in [2] relied on the existence of both trapdoor permutations and hash functions that are collision resistant against sub-exponential sized circuits. The constructions in [16] assumed only the existence of one-way functions, but had a super-constant number of rounds.

Statistically hiding non-malleable commitments. Theorem 3 gives the first construction of a non-malleable commitment scheme that is statistically hiding and that does not rely on set-up assumptions. We mention that the existence of collision resistant hash functions is the weakest assumption currently known to imply constant-round statistically hiding commitment schemes (even those that are not of the non-malleable kind) [36, 12].

Strict vs. liberal non-malleability. The notion of non-malleability considered so far in all works allows the stand-alone adversary S to run in expected polynomial time. A stronger ("tighter") notion of security, named strict non-malleability [16], requires S to run in strict polynomial time. In the context of strict non-malleability, we have the following result.

Theorem 4 (Strict non-malleability) Suppose that there exists a family of collision resistant hash functions. Then, there exists a constant-round statistically binding commitment scheme that is strictly non-malleable with respect to commitment.

1.3 Techniques and New Ideas

Our protocols rely on non-black-box techniques used by Barak to obtain a constant-round public-coin ZK argument for NP [1] (in a setting where no man-in-the-middle is considered). They are closely related to previous works by Pass [38], and Pass and Rosen [39], that appeared in the context of bounded-concurrent two-party and multi-party computation; in particular, our protocols rely on and further explore the technique from [38] of using message lengths to obtain non-malleability. Our techniques are different from the ones used by Barak in the context of non-malleable coin-tossing [2].

The approach we follow in this work is fundamentally different from the approach used in [16]. Instead of viewing non-malleable commitments as a tool for constructing non-malleable ZK protocols, we reverse the roles and use non-malleable ZK protocols in order to construct non-malleable commitments. Our approach is also different from the one taken in [2], which uses a coin-tossing protocol to instantiate constructions that rely on the existence of a common reference string.


Our approach gives rise to a modular and natural construction of non-malleable commitments. This construction emphasizes the role of non-malleable ZK as a building block for other non-malleable cryptographic primitives. In proving the security of our protocols, we introduce the notion of simulation extractability, which is a convenient form of non-malleability (in particular, it enables a more modular construction of proofs). A generalization of simulation-extractability, called one-many simulation extractability, has already been found to be useful in constructing commitment schemes that retain their non-malleability properties even if executed concurrently an unbounded (polynomial) number of times [40].

In principle, our definitions of non-malleability are compatible with the ones appearing in [16]. However, the presentation is more detailed and somewhat different (see Section 3). Our definitional approach, as well as our construction of non-malleable ZK, highlights a distinction between the notions of non-malleable interactive proofs and non-malleable ZK. This distinction was not present in the definitions given in [16].

1.4 Related Work

Assuming the existence of a common random string, Di Crescenzo, Ishai and Ostrovsky [15], and Di Crescenzo, Katz, Ostrovsky, and Smith [14] construct non-malleable commitment schemes. Sahai [41], and De Santis, Di Crescenzo, Ostrovsky, Persiano and Sahai [13] construct a non-interactive non-malleable ZK protocol under the same assumption. Fischlin and Fischlin [19], and Damgard and Groth [11] construct non-malleable commitments assuming the existence of a common reference string. We note that the non-malleable commitments constructed in [15] and [19] only satisfy non-malleability with respect to opening [19]. Canetti and Fischlin [9] construct a universally composable commitment assuming a common random string. Universal composability implies non-malleability. However, it is impossible to construct universally composable commitments without making set-up assumptions [9].

Goldreich and Lindell [22], and Nguyen and Vadhan [37] consider the task of session-key generation in a setting where the honest parties share a password that is taken from a relatively small dictionary. Their protocols are designed with a man-in-the-middle adversary in mind, and only require the use of a "mild" set-up assumption (namely, the existence of a "short" password).

1.5 Future and Subsequent Work

Our constructions (and even more so the previous ones) are quite complex. A natural question is whether they can be simplified. A somewhat related question is whether non-black-box techniques are necessary for achieving constant-round non-malleable ZK or commitments. Our constructions rely on the existence of collision resistant hash functions, whereas the non-constant-round construction in [16] relies on the existence of one-way functions. We wonder whether the collision resistance assumption can be relaxed.

Another interesting question (which has already been addressed in subsequent work – see below) is whether it is possible to achieve non-malleability under concurrent executions. The techniques used in this paper do not seem to extend to the (unbounded) concurrent case, and new ideas seem to be required. Advances in that direction might shed light on the issue of concurrent composition of general secure protocols.

In subsequent work [40], we show that (a close variant of) the commitments presented here retain their non-malleability even if executed concurrently an unbounded (polynomial) number of times. We note that besides using an almost identical protocol, the proof of this new result heavily relies on a generalization of simulation extractability (called "one-many" simulation extractability). This notion has proved itself very useful in the context of non-malleability, and we believe that it will find more applications in scenarios where a man-in-the-middle adversary is involved. We additionally mention that the presentation of some of the results in this version of the paper incorporates simplifications developed by us in [40].

2 Preliminaries

2.1 Basic notation

We let N denote the set of all positive integers. For any integer m ∈ N, we denote by [m] the set {1, 2, . . . , m}. For any x ∈ {0, 1}∗, we let |x| denote the size of x (i.e., the number of bits used in order to write it). For two machines M, A, we let M^A(x) denote the output of machine M on input x when given oracle access to A. The term negligible is used for denoting functions that are (asymptotically) smaller than one over any polynomial. More precisely, a function ν(·) from non-negative integers to reals is called negligible if for every constant c > 0 and all sufficiently large n, it holds that ν(n) < n^{−c}.

2.2 Witness Relations

We recall the definition of a witness relation for an NP language [20].

Definition 2.1 (Witness relation) A witness relation for a language L ∈ NP is a binary relation RL that is polynomially bounded, polynomial-time recognizable, and characterizes L by

L = {x : ∃y s.t. (x, y) ∈ RL}

We say that y is a witness for the membership x ∈ L if (x, y) ∈ RL (also denoted RL(x, y) = 1). We will also let RL(x) denote the set of witnesses for the membership x ∈ L, i.e.,

RL(x) = {y : (x, y) ∈ RL}

In the following, we assume a fixed witness relation RL for each language L ∈ NP.
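For concreteness, here is a witness relation written out for the NP language SUBSET-SUM (our example, not the paper's): an instance x is a list of numbers together with a target, and a witness y is a set of indices whose numbers sum to the target. Membership of (x, y) in R_L is checkable in polynomial time, and |y| is polynomially bounded in |x|, as Definition 2.1 requires.

```python
# Witness relation R_L for SUBSET-SUM: x = (numbers, target); a witness y is a
# list of distinct indices whose corresponding numbers sum to the target.

def R_L(x, y):
    numbers, target = x
    if not all(0 <= i < len(numbers) for i in y):
        return False            # indices must be in range
    if len(set(y)) != len(y):
        return False            # indices must be distinct
    return sum(numbers[i] for i in y) == target

x = ([3, 34, 4, 12, 5, 2], 9)
assert R_L(x, [0, 2, 5]) is True   # 3 + 4 + 2 = 9, so [0, 2, 5] is a witness
assert R_L(x, [1]) is False        # 34 != 9
```

In the notation of Definition 2.1, the set RL(x) here is the set of all index lists accepted by `R_L` on this instance.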

2.3 Probabilistic notation

Denote by x ←_r X the process of uniformly choosing an element x in a set X. If B(·) is an event depending on the choice of x ←_r X, then Pr_{x←X}[B(x)] (alternatively, Pr_x[B(x)]) denotes the probability that B(x) holds when x is chosen with probability 1/|X|. Namely,

Pr_{x←X}[B(x)] = Σ_{x∈X} (1/|X|) · χ(B(x))

where χ is an indicator function so that χ(B) = 1 if event B holds, and equals zero otherwise. We denote by U_n the uniform distribution over the set {0, 1}^n.
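The probability above is just the average of the indicator χ over the finite set X, which a short enumeration makes explicit (our illustration):

```python
from fractions import Fraction

# Pr_{x <- X}[B(x)] = sum over x in X of (1/|X|) * chi(B(x)):
# enumerate X and average the indicator of the event B.

def prob(X, B):
    X = list(X)
    return sum(Fraction(1, len(X)) for x in X if B(x))

X = range(8)  # the uniform distribution U_3 over {0,1}^3, viewed as integers 0..7
assert prob(X, lambda x: x % 2 == 0) == Fraction(1, 2)  # 4 of 8 elements are even
assert prob(X, lambda x: x < 2) == Fraction(1, 4)       # 2 of 8 elements
```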

2.4 Computational indistinguishability and statistical closeness

The following definition of (computational) indistinguishability originates in the seminal paper ofGoldwasser and Micali [26].

Let X be a countable set of strings. A probability ensemble indexed by X is a sequence of random variables indexed by X. Namely, A = {Ax}x∈X, where each Ax is a random variable, is an ensemble indexed by X.


Definition 2.2 ((Computational) Indistinguishability) Let X and Y be countable sets. Two ensembles {Ax,y}x∈X,y∈Y and {Bx,y}x∈X,y∈Y are said to be computationally indistinguishable over X if for every probabilistic "distinguishing" machine D whose running time is polynomial in its first input, there exists a negligible function ν(·) so that for every x ∈ X, y ∈ Y:

|Pr [D(x, y, Ax,y) = 1] − Pr [D(x, y, Bx,y) = 1]| < ν(|x|)

{Ax,y}x∈X,y∈Y and {Bx,y}x∈X,y∈Y are said to be statistically close over X if the above condition holds for all (possibly unbounded) machines D.
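For distributions over a finite support, the advantage of the best unbounded distinguisher equals the statistical (total variation) distance between the two distributions; this standard equivalent view (not stated in the paper) can be computed directly:

```python
from fractions import Fraction

# Statistical distance between two distributions on a finite support:
# Delta(A, B) = (1/2) * sum over s of |Pr[A = s] - Pr[B = s]|.
# Two ensembles are statistically close when this quantity is negligible in
# the index length; here we simply compute it for two fixed distributions.

def statistical_distance(A, B):
    support = set(A) | set(B)
    return sum(abs(A.get(s, Fraction(0)) - B.get(s, Fraction(0)))
               for s in support) / 2

A = {0: Fraction(1, 2), 1: Fraction(1, 2)}   # uniform bit
B = {0: Fraction(1, 4), 1: Fraction(3, 4)}   # biased bit
assert statistical_distance(A, B) == Fraction(1, 4)
assert statistical_distance(A, A) == 0
```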

2.5 Interactive Proofs, Zero-Knowledge and Witness-Indistinguishability

We use the standard definitions of interactive proofs (and interactive Turing machines) [27, 20] and arguments [8]. Given a pair of interactive Turing machines, P and V, we denote by 〈P, V 〉(x) the random variable representing the (local) output of V when interacting with machine P on common input x, when the random input to each machine is uniformly and independently chosen.

Definition 2.3 (Interactive Proof System) A pair of interactive machines 〈P, V 〉 is called an interactive proof system for a language L if machine V is polynomial-time and the following two conditions hold with respect to some negligible function ν(·):

• Completeness: For every x ∈ L,

Pr [〈P, V 〉(x) = 1] ≥ 1− ν(|x|)

• Soundness: For every x ∉ L, and every interactive machine B,

Pr [〈B, V 〉(x) = 1] ≤ ν(|x|)

In case the soundness condition is required to hold only with respect to computationally bounded provers, the pair 〈P, V 〉 is called an interactive argument system.

Zero-knowledge. An interactive proof is said to be zero-knowledge (ZK) if it yields nothing beyond the validity of the assertion being proved. This is formalized by requiring that the view of every probabilistic polynomial-time adversary V∗ interacting with the honest prover P can be simulated by a probabilistic polynomial-time machine S (a.k.a. the simulator). The idea behind this definition is that whatever V∗ might have learned from interacting with P, it could have actually learned by itself (by running the simulator S).

The notion of ZK was introduced by Goldwasser, Micali and Rackoff [27]. To make ZK robust in the context of protocol composition, Goldreich and Oren [25] suggested to augment the definition so that the above requirement holds also with respect to all z ∈ {0, 1}∗, where both V∗ and S are allowed to obtain z as auxiliary input. The verifier's view of an interaction consists of the common input x, followed by its random tape and the sequence of prover messages the verifier receives during the interaction. We denote by view^P_{V∗}(x, z) a random variable describing V∗(z)'s view of the interaction with P on common input x.

Definition 2.4 (Zero-knowledge) Let 〈P, V 〉 be an interactive proof system. We say that 〈P, V 〉 is zero-knowledge if for every probabilistic polynomial-time interactive machine V∗ there exists a probabilistic polynomial-time algorithm S such that the ensembles {view^P_{V∗}(x, z)}x∈L,z∈{0,1}∗ and {S(x, z)}x∈L,z∈{0,1}∗ are computationally indistinguishable over L.


A stronger variant of zero-knowledge is one in which the output of the simulator is statistically close to the verifier's view of real interactions. We focus on argument systems, in which the soundness property is only guaranteed to hold with respect to polynomial-time provers.

Definition 2.5 (Statistical zero-knowledge) Let 〈P, V 〉 be an interactive argument system. We say that 〈P, V 〉 is statistical zero-knowledge if for every probabilistic polynomial-time V∗ there exists a probabilistic polynomial-time S such that the ensembles {view^P_{V∗}(x, z)}x∈L,z∈{0,1}∗ and {S(x, z)}x∈L,z∈{0,1}∗ are statistically close over L.

In case the ensembles {view^P_{V∗}(x, z)}x∈L,z∈{0,1}∗ and {S(x, z)}x∈L,z∈{0,1}∗ are identically distributed, the protocol 〈P, V 〉 is said to be perfect zero-knowledge.

Witness Indistinguishability. An interactive proof is said to be witness indistinguishable (WI) if the verifier's view is "computationally independent" (resp. "statistically independent") of the witness used by the prover for proving the statement (the notion of statistical WI will be used in Section 4.2). In this context, we focus our attention on languages L ∈ NP with a corresponding witness relation RL. Namely, we consider interactions in which on common input x the prover is given a witness in RL(x). By saying that the view is computationally (resp. statistically) independent of the witness, we mean that for any two possible NP-witnesses that could be used by the prover to prove the statement x ∈ L, the corresponding views are computationally (resp. statistically) indistinguishable.

Let V∗ be a probabilistic polynomial-time adversary interacting with the prover, and let view^P_{V∗}(x, w, z) denote V∗'s view of an interaction in which the witness used by the prover is w (where the common input is x and V∗'s auxiliary input is z).

Definition 2.6 (Witness-indistinguishability) Let 〈P, V 〉 be an interactive proof system for a language L ∈ NP. We say that 〈P, V 〉 is witness-indistinguishable for RL if for every probabilistic polynomial-time interactive machine V∗ and for every two sequences {w^1_x}x∈L and {w^2_x}x∈L, such that w^1_x, w^2_x ∈ RL(x) for every x ∈ L, the probability ensembles {view^P_{V∗}(x, w^1_x, z)}x∈L,z∈{0,1}∗ and {view^P_{V∗}(x, w^2_x, z)}x∈L,z∈{0,1}∗ are computationally indistinguishable over L.

In case the ensembles {view^P_{V∗}(x, w^1_x, z)}x∈L,z∈{0,1}∗ and {view^P_{V∗}(x, w^2_x, z)}x∈L,z∈{0,1}∗ are statistically close over L, the proof system 〈P, V 〉 is said to be statistically witness indistinguishable. If the ensembles are identically distributed, the proof system is said to be witness independent.

2.6 Proofs of Knowledge

Informally, an interactive proof is a proof of knowledge if the prover convinces the verifier not only of the validity of a statement, but also that it possesses a witness for the statement. This notion is formalized by the introduction of a machine E, called a knowledge extractor. As the name suggests, the extractor E is supposed to extract a witness from any malicious prover P∗ that succeeds in convincing an honest verifier. More formally,

Definition 2.7 Let (P, V ) be an interactive proof system for the language L with witness relation RL. We say that (P, V ) is a proof of knowledge if there exists a polynomial q and a probabilistic oracle machine E, such that for every probabilistic polynomial-time interactive machine P∗, there exists some negligible function µ(·) such that for every x ∈ L and every y, r ∈ {0, 1}∗ such that Pr[〈P∗_{x,y,r}, V (x)〉 = 1] > 0, where P∗_{x,y,r} denotes the machine P∗ with common input fixed to x, auxiliary input fixed to y, and random tape fixed to r, the following holds:


1. The expected number of steps taken by E^{P∗_{x,y,r}} is bounded by

q(|x|) / Pr[〈P∗_{x,y,r}, V (x)〉 = 1]

where E^{P∗_{x,y,r}} denotes the machine E with oracle access to P∗_{x,y,r}.

2. Furthermore,

Pr[〈P∗_{x,y,r}, V (x)〉 = 1 ∧ E^{P∗_{x,y,r}}(x) ∉ RL(x)] ≤ µ(|x|)

The machine E is called a (knowledge) extractor.

We remark that since our definition only considers computationally bounded provers, we only get a “computationally convincing” notion of a proof of knowledge (a.k.a. an argument of knowledge) [8]. In addition, our definition is slightly different from the definition of [4] in that we require that the expected running-time of the extractor is always bounded by poly(|x|)/p, where p denotes the success probability of P∗, whereas [4] allows for some additional slackness in the running-time. On the other hand, whereas [4] requires the extractor to always output a valid witness, we instead allow the extractor to fail with some negligible probability. We will rely on the following theorem:

Theorem 2.8 ([6, 8]) Assume the existence of claw-free permutations. Then there exists a constant-round public-coin witness independent argument of knowledge for NP.

Indeed, standard techniques can be used to show that the parallelized version of the protocol of [6], using perfectly-hiding commitments, is an argument of knowledge (as defined above). As usual, the knowledge extractor E proceeds by feeding new “challenges” to the prover P∗ until it obtains two accepting transcripts. If the two accepting transcripts contain the same challenge, or if the prover manages to open up a commitment in two different ways, the extractor outputs fail; otherwise it can extract a witness.
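To illustrate the final extraction step, the following toy sketch (our illustrative example, not the protocol of [6]) shows special-soundness extraction for Schnorr's identification protocol over a small prime-order group: two accepting transcripts with the same first message but distinct challenges determine the witness, and the extractor outputs fail when the challenges coincide.

```python
# Toy special-soundness extraction for Schnorr's protocol (our example;
# the paper's protocol is different, but the extraction step has this shape).
p, q, g = 23, 11, 2          # g = 2 generates the order-11 subgroup of Z_23^*
w = 7                        # prover's witness: the discrete log of h
h = pow(g, w, p)

def transcript(r, e):
    """Honest accepting transcript for first-message randomness r, challenge e."""
    a = pow(g, r, p)         # first message
    z = (r + e * w) % q      # prover's response
    return (a, e, z)

def extract(t1, t2):
    """Recover the witness from two accepting transcripts, or output fail (None)."""
    (a1, e1, z1), (a2, e2, z2) = t1, t2
    if a1 != a2 or e1 == e2:
        return None          # same challenge in both transcripts: output fail
    return ((z1 - z2) * pow(e1 - e2, -1, q)) % q

print(extract(transcript(5, 3), transcript(5, 9)))  # recovers w = 7
```

The rewinding strategy in the text corresponds to obtaining the two transcripts by re-running P∗ from the same first message with fresh challenges.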

2.7 Universal Arguments

Universal arguments (introduced in [3] and closely related to the notion of CS-proofs [33]) are used in order to provide “efficient” proofs for statements of the form y = (M, x, t), where y is considered a true statement if M is a non-deterministic machine that accepts x within t steps. The corresponding language and witness relation are denoted LU and RU respectively, where the pair ((M, x, t), w) is in RU if M (viewed here as a two-input deterministic machine) accepts the pair (x,w) within t steps. Notice that every language in NP is linear-time reducible to LU. Thus, a proof system for LU allows us to handle all NP-statements. In fact, a proof system for LU enables us to handle languages that are presumably “beyond” NP, as the language LU is NE-complete (hence the name universal arguments).^1

Definition 2.9 (Universal argument) A pair of interactive Turing machines (P, V ) is called a universal argument system if it satisfies the following properties:

• Efficient verification: There exists a polynomial p such that for any y = (M, x, t), the total time spent by the (probabilistic) verifier strategy V , on common input y, is at most p(|y|). In particular, all messages exchanged in the protocol have length smaller than p(|y|).

^1 Furthermore, every language in NEXP is polynomial-time (but not linear-time) reducible to LU.


• Completeness by a relatively efficient prover: For every ((M, x, t), w) in RU,

  Pr[(P (w), V )(M, x, t) = 1] = 1

  Furthermore, there exists a polynomial q such that the total time spent by P (w), on common input (M, x, t), is at most q(TM (x,w)) ≤ q(t), where TM (x,w) denotes the running time of M on input (x,w).

• Computational soundness: For every polynomial-size circuit family {P∗_n}n∈N and every triplet (M, x, t) ∈ {0, 1}^n \ LU,

  Pr[(P∗_n, V )(M, x, t) = 1] < ν(n)

  where ν(·) is a negligible function.

• Weak proof of knowledge: For every positive polynomial p there exists a positive polynomial p′ and a probabilistic polynomial-time oracle machine E such that the following holds: for every polynomial-size circuit family {P∗_n}n∈N and every sufficiently long y = (M, x, t) ∈ {0, 1}∗, if Pr[(P∗_n, V )(y) = 1] > 1/p(|y|) then

  Pr_r[∃ w = w1, . . . , wt ∈ RU(y) s.t. ∀i ∈ [t], E^{P∗_n}_r(y, i) = wi] > 1/p′(|y|)

  where RU(y) def= {w : (y, w) ∈ RU} and E^{P∗_n}_r(·, ·) denotes the function defined by fixing the random-tape of E to equal r, and providing the resulting Er with oracle access to P∗_n.

2.8 Commitment Schemes

Commitment schemes are used to enable a party, known as the sender, to commit itself to a value while keeping it secret from the receiver (this property is called hiding). Furthermore, the commitment is binding, and thus in a later stage when the commitment is opened, it is guaranteed that the “opening” can yield only a single value, determined in the committing phase. Commitment schemes come in two different flavors, statistically-binding and statistically-hiding. We sketch the properties of each of these flavors; full definitions can be found in [20].

Statistically-binding: In statistically-binding commitments, the binding property holds against unbounded adversaries, while the hiding property only holds against computationally bounded (non-uniform) adversaries. The statistical-binding property asserts that, with overwhelming probability over the coin-tosses of the receiver, the transcript of the interaction fully determines the value committed to by the sender. The computational-hiding property guarantees that commitments to any two different values are computationally indistinguishable.

Statistically-hiding: In statistically-hiding commitments, the hiding property holds against unbounded adversaries, while the binding property only holds against computationally bounded (non-uniform) adversaries. Loosely speaking, the statistical-hiding property asserts that commitments to any two different values are statistically close (i.e., have negligible statistical distance). In case the statistical distance is 0, the commitments are said to be perfectly-hiding. The computational-binding property guarantees that no polynomial-time machine is able to open a given commitment in two different ways.


Non-interactive statistically-binding commitment schemes can be constructed using any 1–1 one-way function (see Section 4.4.1 of [20]). Allowing some minimal interaction (in which the receiver first sends a single random initialization message), statistically-binding commitment schemes can be obtained from any one-way function [34, 28]. We will think of such commitments as a family of non-interactive commitments, where the description of a member of the family is the initialization message. Perfectly-hiding commitment schemes can be constructed from any one-way permutation [35]. However, constant-round schemes are only known to exist under stronger assumptions; specifically, assuming the existence of a collection of certified claw-free functions [21].
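As a concrete toy illustration of the commit/open interface, the following sketch commits by hashing the value together with fresh randomness. This is our own example in the random-oracle idiom, not one of the standard-model constructions cited above, and the names commit/verify_opening are ours.

```python
import hashlib, secrets

# Toy commit/open interface (our sketch, NOT a cited construction).
# Hiding comes from the fresh randomness s; the receiver's binding check
# at opening time simply recomputes the commitment string.

def commit(value: bytes):
    s = secrets.token_bytes(32)                 # commitment randomness
    c = hashlib.sha256(s + value).digest()      # commitment string, sent now
    return c, (s, value)                        # (s, value) is kept for opening

def verify_opening(c: bytes, opening) -> bool:
    s, value = opening
    return hashlib.sha256(s + value).digest() == c

c, op = commit(b"bid: 42")
print(verify_opening(c, op))                    # True: honest opening accepted
print(verify_opening(c, (op[0], b"bid: 99")))   # False: changed value rejected
```

In the schemes discussed above, the sender's message plays the role of c and the decommitment phase reveals (s, value).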

3 Non-Malleable Protocols

The notion of non-malleability was introduced by Dolev, Dwork and Naor [16]. In this paper we focus on non-malleability of zero-knowledge proofs and of string commitments. The definitions are stated in terms of interactive proofs, though what we actually construct are non-malleable argument systems. The adaptation of the definitions to the case of arguments can be obtained by simply replacing the word “proof” with “argument” wherever it appears.

In principle, our definitions are compatible with the ones appearing in [16]. However, the presentation is more detailed and somewhat different (see Section 3.4 for a discussion of the differences between our definitions and previous ones).

3.1 Non-Malleable Interactive Proofs

Let 〈P, V 〉 be an interactive proof. Consider a scenario where a man-in-the-middle adversary A is simultaneously participating in two interactions. These interactions are called the left and the right interaction. In the left interaction the adversary A is verifying the validity of a statement x by interacting with an honest prover P . In the right interaction A proves the validity of a statement x̃ to the honest verifier V . The statement x̃ is chosen by A, possibly depending on the messages it receives in the left interaction.

Besides controlling the messages sent by the verifier in the left interaction and by the prover in the right interaction, A has control over the scheduling of the messages. In particular, it may delay the transmission of a message in one interaction until it receives a message (or even multiple messages) in the other interaction. Figure 2 describes two representative scheduling strategies.

[Message-flow diagrams between the left prover P , the adversary A, and the right verifier V .]

Figure 2: Two scheduling strategies.

The interactive proof 〈P, V 〉 is said to be non-malleable if, whenever x̃ ≠ x, the left interaction does not “help” the adversary in convincing the verifier in the right interaction.^2 Following the simulation paradigm [26, 27, 23], this is formalized by defining appropriate “real” and “idealized”

^2 Notice that requiring x̃ ≠ x is necessary, since otherwise the adversary can succeed in convincing the verifier in the right interaction by simply forwarding messages back and forth between the interactions.


executions. In the real execution, called the man-in-the-middle execution, the adversary participates in both the left and the right interactions, with common inputs x and x̃ respectively. In the idealized execution, called the stand-alone execution, the adversary plays the role of the prover in a single interaction with common input x̃. Security is defined by requiring that the adversary cannot succeed better in the man-in-the-middle execution than it could have in the stand-alone execution. More formally, we consider the following two executions.

Man-in-the-middle execution. The man-in-the-middle execution consists of the scenario described above. The input of P is an instance-witness pair (x,w), and the input of V is an instance x̃. A receives x and an auxiliary input z. Let mim^A_V (x, w, z) be a random variable describing the output of V in the above experiment, when the random tapes of P , A and V are uniformly and independently chosen. In case x̃ = x, the output mim^A_V (x, w, z) is defined to be ⊥.

Stand-alone execution. In the stand-alone execution only one interaction takes place. The stand-alone adversary S directly interacts with the honest verifier V . As in the man-in-the-middle execution, V receives as input an instance x̃. S receives instances x, x̃ and an auxiliary input z. Let sta^S_V (x, x̃, z) be a random variable describing the output of V in the above experiment, when the random tapes of S and V are uniformly and independently chosen.

Definition 3.1 (Non-malleable interactive proof) An interactive proof 〈P, V 〉 for a language L is said to be non-malleable if for every probabilistic polynomial-time man-in-the-middle adversary A, there exists a probabilistic expected polynomial-time stand-alone prover S and a negligible function ν : N → N , such that for every (x,w) ∈ L × RL(x), every x̃ ∈ {0, 1}^{|x|} such that x̃ ≠ x, and every z ∈ {0, 1}∗:

Pr[mim^A_V (x, x̃, w, z) = 1] < Pr[sta^S_V (x, x̃, z) = 1] + ν(|x|)

Non-malleability with respect to tags. Definition 3.1 rules out the possibility that the statement proved in the right interaction is identical to the one on the left. Indeed, if the same protocol is executed on the left and on the right, this kind of attack cannot be prevented, as the man-in-the-middle adversary can always copy messages between the two executions (cf. the chess-master problem [16]). Still, in many situations it might be important to be protected against an attacker that attempts to prove even the same statement. In order to deal with this problem, one could instead consider a “tag-based” variant of non-malleability (see [31] for a definition of tag-based non-malleability in the context of encryption).

We consider a family of interactive proofs, where each member of the family is labeled with a tag string tag ∈ {0, 1}^t, where t = t(n) is a parameter that potentially depends on the length of the common input (security parameter) n ∈ N . As before, we consider a man-in-the-middle adversary A that is simultaneously participating in a left and a right interaction. In the left interaction, A is verifying the validity of a statement x by interacting with a prover Ptag, using a protocol that is labeled with a string tag. In the right interaction, A proves the validity of a statement x̃ to the honest verifier V˜tag, using a protocol that is labeled with a string ˜tag. Let mim^A_V (tag, ˜tag, x, x̃, w, z) be a random variable describing the output of V in the man-in-the-middle experiment. The stand-alone execution is defined as before, with the only difference being that in addition to their original inputs, the parties also obtain the corresponding tags. Let sta^S_V (tag, ˜tag, x, x̃, z) be a random variable describing the output of V in the stand-alone experiment.


The definition of non-malleability with respect to tags is essentially identical to Definition 3.1. The only difference is that instead of requiring non-malleability (which compares the success probabilities of mim^A_V (tag, ˜tag, x, x̃, w, z) and sta^S_V (tag, ˜tag, x, x̃, z)) whenever x̃ ≠ x, we require non-malleability whenever tag ≠ ˜tag. For convenience, we repeat the definition:

Definition 3.2 (Tag-based non-malleable interactive proofs) A family of interactive proofs 〈Ptag, Vtag〉 for a language L is said to be non-malleable with respect to tags of length m if for every probabilistic polynomial-time man-in-the-middle adversary A, there exists a probabilistic expected polynomial-time stand-alone prover S and a negligible function ν : N → N , such that for every (x,w) ∈ L × RL(x), every x̃ ∈ {0, 1}^{|x|}, every tag, ˜tag ∈ {0, 1}^m such that tag ≠ ˜tag, and every z ∈ {0, 1}∗:

Pr[mim^A_V (tag, ˜tag, x, x̃, w, z) = 1] < Pr[sta^S_V (tag, ˜tag, x, x̃, z) = 1] + ν(|x|)

Tags vs. statements. A non-malleable interactive proof can be turned into a tag-based one by simply concatenating the tag to the statement being proved. On the other hand, an interactive proof that is non-malleable with respect to tags of length t(n) = n can be turned into a non-malleable interactive proof by using the statement x ∈ {0, 1}^n as the tag.

The problem of constructing a tag-based non-malleable interactive proof is already non-trivial for tags of length, say, t(n) = O(log n) (and even for t(n) = O(1)), but is still potentially easier than for tags of length n. This opens up the possibility of reducing the construction of interactive proofs that are non-malleable w.r.t. long tags to interactive proofs that are non-malleable w.r.t. shorter tags. Even though we do not know whether such a reduction is possible in general, our work follows this path and demonstrates that in specific cases such a reduction is indeed possible.

Non-malleability with respect to other protocols. Our definitions of non-malleability refer to protocols that are non-malleable with respect to themselves, since the definitions consider a setting where the same protocol is executed in the left and the right interaction. In principle, one could consider two different protocols, executed on the left and on the right, that are non-malleable with respect to each other. Such definitions are not considered in this work.

3.2 Non-malleable Zero-Knowledge

Non-malleable ZK proofs are non-malleable interactive proofs that additionally satisfy the ZK property (as stated in Definition 2.4).

Definition 3.3 (Non-malleable zero-knowledge) A family {〈Ptag, Vtag〉}tag∈{0,1}∗ of interactive proofs is said to be non-malleable zero-knowledge if it is both non-malleable and zero-knowledge.

3.3 Non-malleable Commitments

Informally, a commitment scheme is non-malleable if a man-in-the-middle adversary that receives a commitment to a value v will not be able to “successfully” commit to a related value ṽ. The literature discusses two different interpretations of the term “success”:

Non-malleability with respect to commitment [16]. The adversary is said to succeed if it manages to commit to a related value, even without being able to later decommit to this value. This notion makes sense only in the case of statistically-binding commitments.


Non-malleability with respect to opening [15]. The adversary is said to succeed only if it is able to both commit and decommit to a related value. This notion makes sense both in the case of statistically-binding and statistically-hiding commitments.

As in the case of non-malleable zero-knowledge, we formalize the definition by comparing a man-in-the-middle and a stand-alone execution. Let n ∈ N be a security parameter. Let 〈C,R〉 be a commitment scheme, and let R ⊆ {0, 1}^n × {0, 1}^n be a polynomial-time computable irreflexive relation (i.e., R(v, v) = 0 for every v). As before, we consider man-in-the-middle adversaries that are simultaneously participating in a left and a right interaction in which a commitment scheme is taking place. The adversary is said to succeed in mauling a left commitment to a value v if it is able to come up with a right commitment to a value ṽ such that R(v, ṽ) = 1. Since we cannot rule out copying, we are only interested in relations where copying is not considered success, and we therefore require that the relation R is irreflexive. The man-in-the-middle and the stand-alone executions are defined as follows.
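For concreteness, here is a toy irreflexive polynomial-time relation (our example, not from the paper): the right value is “related” to the left one exactly when it is its successor, so mauling counts as success while copying never does.

```python
# Toy irreflexive polynomial-time relation R over n-bit values (our example):
# R(v, v~) = 1 iff v~ = v + 1 (mod 2^n). Since v + 1 (mod 2^n) never equals v,
# R(v, v) = 0 for every v, i.e., copying the committed value is not success.
N_BITS = 8

def R(v: int, v_tilde: int) -> bool:
    return v_tilde == (v + 1) % (2 ** N_BITS)

print(R(5, 6))   # True: the successor is "related"
print(R(5, 5))   # False: irreflexive, copying does not count
```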

The man-in-the-middle execution. In the man-in-the-middle execution, the man-in-the-middle adversary A is simultaneously participating in a left and a right interaction. In the left interaction A interacts with C, receiving a commitment to a value v. In the right interaction A interacts with R, attempting to commit to a related value ṽ. Prior to the interaction, the value v is given to C as local input. A receives an auxiliary input z, which in particular might contain a-priori information about v.^3 The success of A is defined using the following two Boolean random variables:

• mim^A_com(R, v, z) = 1 if and only if A produces a valid commitment to ṽ such that R(v, ṽ) = 1.

• mim^A_open(R, v, z) = 1 if and only if A decommits to a value ṽ such that R(v, ṽ) = 1.

The stand-alone execution. In the stand-alone execution only one interaction takes place. The stand-alone adversary S directly interacts with R. As in the man-in-the-middle execution, the value v is chosen prior to the interaction, and S receives some a-priori information about v as part of its auxiliary input z. S first executes the commitment phase with R. Once the commitment phase has been completed, S receives the value v and attempts to decommit to a value ṽ. The success of S is defined using the following two Boolean random variables:

• sta^S_com(R, v, z) = 1 if and only if S produces a valid commitment to ṽ such that R(v, ṽ) = 1.

• sta^S_open(R, v, z) = 1 if and only if S decommits to a value ṽ such that R(v, ṽ) = 1.

Definition 3.4 (Non-malleable commitment) A commitment scheme 〈C,R〉 is said to be non-malleable with respect to commitment if for every probabilistic polynomial-time man-in-the-middle adversary A, there exists a probabilistic expected polynomial-time stand-alone adversary S and a negligible function ν : N → N , such that for every irreflexive polynomial-time computable relation R ⊆ {0, 1}^n × {0, 1}^n, every v ∈ {0, 1}^n, and every z ∈ {0, 1}∗, it holds that:

Pr[mim^A_com(R, v, z) = 1] < Pr[sta^S_com(R, v, z) = 1] + ν(n)

Non-malleability with respect to opening is defined in the same way, replacing the random variables mim^A_com(R, v, z) and sta^S_com(R, v, z) with mim^A_open(R, v, z) and sta^S_open(R, v, z).

^3 The original definition by Dwork et al. [16] accounted for such a-priori information by providing the adversary with the value hist(v), where hist(·) is a polynomial-time computable function.


Content-based vs. tag-based commitments. Similarly to the definition of interactive proofs non-malleable with respect to statements, the above definitions only require that the adversary be unable to commit to a value that is related to, but different from, the value it receives a commitment of. Technically, this can be seen in the definitions by noting that the relation R, which defines the success of the adversary, is required to be irreflexive. This means that the adversary is said to fail if it is only able to produce a commitment to the same value.^4 Indeed, if the same protocol is executed in the left and the right interaction, the adversary can always copy messages and thereby succeed in committing on the right to the same value it receives a commitment of on the left. To cope with this problem, the definition can be extended to incorporate tags, in analogy with the definition of interactive proofs non-malleable with respect to tags. The extension is straightforward and therefore omitted.

We note that any commitment scheme that satisfies Definition 3.4 can easily be transformed into a scheme that is tag-based non-malleable, by prepending the tag to the value before committing. Conversely, in analogy with non-malleable interactive proofs, commitment schemes that are non-malleable with respect to tags of length t(n) = poly(n) can be transformed into commitment schemes that are non-malleable with respect to content in a standard way (see, e.g., [16, 31]).

3.4 Comparison with Previous Definitions

Our definitions of non-malleability essentially follow the original definitions by Dwork et al. [16]. However, whereas the definitions of Dwork et al. quantify the experiments over all distributions D of inputs for the left and the right interaction (or just the left interaction in the case of commitments), we instead quantify over all possible input values x, x̃ (or, in the case of commitments, over all possible input values v for the left interaction). Our definitions can thus be seen as non-uniform versions of the definitions of [16].

Our definition of non-malleability with respect to opening is, however, different from the definition of [15] in the following ways: (1) The definition of [15] does not take into account possible a-priori information that the adversary might have about the commitment, while ours (following [16]) does. (2) In our definition of the stand-alone execution, the stand-alone adversary receives the value v after having completed the commitment phase and is thereafter supposed to decommit to a value related to v. The definition of [15] does not provide the simulator with this information.

In our view, the “a-priori information” requirement is essential in many situations, and we therefore present a definition that satisfies it. (Consider, for example, a setting where the value v committed to is determined by a different protocol, which “leaks” some information about v.) In order to be able to satisfy this stronger requirement, we relax the definition of [15] by allowing the stand-alone adversary to receive the value v before decommitting.

3.5 Simulation-Extractability

A central tool in our constructions of non-malleable interactive proofs and commitments is the notion of simulation-extractability. Loosely speaking, an interactive protocol is said to be simulation-extractable if for any man-in-the-middle adversary A, there exists a probabilistic polynomial-time machine (called the simulator-extractor) that can simulate both the left and the right interaction for A, while outputting a witness for the statement proved by the adversary in the right interaction.

4Potentially, one could consider a slightly stronger definition, which also rules out the case when the adversary isable to construct a different commitment to the same value. Nevertheless, we here adhere to the standard definitionof non-malleable commitments which allows the adversary to produce a different commitment to the same value.


Simulation-extractability can be thought of as a technical (and stronger) variant of non-malleability. The main reason for introducing this notion is that it enables a more modular analysis (and in particular is easier to work with). At the end of this section, we argue that any protocol that is simulation-extractable is also a non-malleable zero-knowledge proof of knowledge. In Section 6 we show how to use simulation-extractable protocols in order to obtain non-malleable commitments.

Let A be a man-in-the-middle adversary that is simultaneously participating in a left interaction of 〈Ptag, Vtag〉 while acting as verifier, and a right interaction of 〈P˜tag, V˜tag〉 while acting as prover.

Let viewA(x, z, tag) denote the joint view of A(x, z) and the honest verifier V˜tag when A is verifying a left proof of the statement x, using tag tag, and proving on the right a statement x̃ using tag ˜tag. (The view consists of the messages sent and received by A in both the left and right interactions, and the random coins of A and V˜tag.)^5 Both x̃ and ˜tag are chosen by A. Given a function t = t(n), we use the notation {·}x,z,tag as shorthand for {·}z∈{0,1}∗, x∈L, tag∈{0,1}^{t(|x|)}.

Definition 3.5 (Simulation-extractable protocol) A family {〈Ptag, Vtag〉}tag∈{0,1}∗ of interactive proofs is said to be simulation-extractable with tags of length t = t(n) if for any man-in-the-middle adversary A, there exists a probabilistic expected polynomial-time machine S such that:

1. The probability ensembles {S1(x, z, tag)}x,z,tag and {viewA(x, z, tag)}x,z,tag are statistically close over L, where S1(x, z, tag) denotes the first output of S(x, z, tag).

2. Let x ∈ L, z ∈ {0, 1}∗, tag ∈ {0, 1}^{t(|x|)}, and let (view, w̃) denote the output of S(x, z, tag) (on input some random tape). Let x̃ be the right-execution statement appearing in view and let ˜tag denote the right-execution tag. Then, if the right execution in view is accepting and tag ≠ ˜tag, then RL(x̃, w̃) = 1.

We note that the above definition refers to protocols that are simulation extractable with respectto themselves. A stronger variant (which is not considered in the current work) would have requiredsimulation extractability even in the presence of protocols that do not belong to the family.

We next argue that in order to construct non-malleable zero-knowledge protocols, it is sufficient to come up with a protocol that is simulation-extractable. To do so, we prove that any protocol that is simulation-extractable (and has an efficient prover strategy) is also non-malleable zero-knowledge (i.e., it satisfies Definitions 3.1 and 3.3).

Proposition 3.6 Let {〈Ptag, Vtag〉}tag∈{0,1}∗ be a family of simulation-extractable protocols with tags of length t = t(n) (with respect to the language L and the witness relation RL) with an efficient prover strategy. Then {〈Ptag, Vtag〉}tag∈{0,1}∗ is also non-malleable zero-knowledge (with tags of length t).

Proof: Let {〈Ptag, Vtag〉}tag∈{0,1}∗ be a family of simulation-extractable protocols with tags of length t, with respect to the language L and the witness relation RL. We argue that {〈Ptag, Vtag〉}tag∈{0,1}∗ is both a non-malleable interactive proof and stand-alone zero-knowledge.

Non-malleability. Assume for contradiction that there exist a probabilistic polynomial-time man-in-the-middle adversary A and a polynomial p(n) such that for infinitely many n there exist x, x̃ ∈ {0, 1}^n, w, z ∈ {0, 1}∗, and tag, ˜tag ∈ {0, 1}^{t(n)} such that (x,w) ∈ L × RL(x), tag ≠ ˜tag, and

Pr[mim^A_V (tag, ˜tag, x, x̃, w, z) = 1] ≥ Pr[sta^S_V (tag, ˜tag, x, x̃, z) = 1] + 1/p(n)    (1)

^5 Since the messages sent by A are fully determined given the code of A and the messages it receives, including them as part of the view is somewhat redundant. The reason we have chosen to do so is convenience of presentation.


By Definition 3.5, there exists a probabilistic expected polynomial-time simulator-extractor S for A satisfying the conditions of the definition. We show how to use it to construct a stand-alone prover for 〈Ptag, Vtag〉. On input tag, ˜tag, x, x̃, z, the stand-alone prover runs the simulator-extractor on input x, z, tag and obtains the view view and the witness w̃. In the event that view contains an accepting right execution of the statement x̃ using tag ˜tag, the stand-alone prover executes the honest prover strategy P˜tag on input x̃ and the witness w̃.

It follows directly from the simulation property of the simulator-extractor that the probability that view contains an accepting right-execution proof of x̃ using tag ˜tag is negligibly close to

pA = Pr[mim^A_V (tag, ˜tag, x, x̃, w, z) = 1]

Since the simulator-extractor always outputs a witness when the right execution is accepting and the tag of the right execution differs from the tag of the left execution, we conclude that the success probability of the stand-alone prover is also negligibly close to pA (since tag ≠ ˜tag). This contradicts Equation (1).

Zero knowledge. Consider any probabilistic polynomial-time verifier V∗. Construct the man-in-the-middle adversary A that internally incorporates V∗ and relays its left interaction unmodified to V∗. In the right interaction, A simply outputs ⊥. By the simulation-extractability property of 〈Ptag, Vtag〉, there exists a simulator-extractor S for A. We describe a simulator for V∗.

On input x, z, tag, the simulator runs the simulator-extractor S on input x, z, tag to obtain (view, w̃). Given view, it outputs the view of V∗ in view (which is a subset of view). It follows directly from the simulation property of S, and from the fact that the simulator outputs an (efficiently computable) subset of view, that its output is indistinguishable from the view of V∗ in an honest interaction with the prover.

4 A Simulation-Extractable Protocol

We now turn to describe our construction of simulation-extractable protocols. At a high level, the construction proceeds in two steps:

1. For any n ∈ N , construct a family {〈Ptag, Vtag〉}tag∈[2n] of simulation-extractable argumentswith tags of length t(n) = log n + 1.

2. For any n ∈ N , use the family {〈Ptag, Vtag〉}tag∈[2n] to construct a family {〈Ptag, Vtag〉}tag∈{0,1}^n of simulation-extractable arguments with tags of length t(n) = 2n.

The construction of the family {〈Ptag, Vtag〉}tag∈[2n] relies on Barak's non-black-box techniques for obtaining constant-round public-coin ZK for NP [1], and is very similar in structure to the ZK protocols used by Pass in [38]. We start by reviewing the ideas underlying Barak's protocol, and then proceed to present our protocols.

4.1 Barak’s non-black-box protocol

Barak's protocol is designed to allow the simulator access to “trapdoor” information that is not available to the prover in actual interactions. Given this “trapdoor” information, the simulator is able to produce convincing interactions even without possessing a witness for the statement being proved. The high-level idea is to enable the usage of the verifier's code as a “fake” witness in the proof. In the case of the honest verifier V (which merely sends over random bits), the code consists of the verifier's random tape. In the case of a malicious verifier V∗, the code may also consist of a program that generates the verifier's messages (based on previously received messages).


Since the actual prover does not have a-priori access to V's code in real interactions, this will not harm the soundness of the protocol. The simulator, on the other hand, will always be able to generate transcripts in which the verifier accepts since, by definition, it obtains V∗'s code as input.

Let n ∈ N, and let T : N → N be a "nice" function that satisfies T(n) = n^ω(1). To make the above ideas work, Barak's protocol relies on a "special" NTIME(T(n)) relation. It also makes use of a witness-indistinguishable universal argument (WIUARG) [18, 17, 29, 33, 3]. We start by describing a variant of Barak's relation, which we denote by Rsim. Usage of this variant will facilitate the presentation of our ideas in later stages.

Let {Hn}n be a family of hash functions where a function h ∈ Hn maps {0, 1}∗ to {0, 1}n (cf. [32, 10]), and let Com be a statistically binding commitment scheme for strings of length n, where for any α ∈ {0, 1}n, the length of Com(α) is upper bounded by 2n. The relation Rsim is described in Figure 3.

Instance: A triplet 〈h, c, r〉 ∈ Hn × {0, 1}n × {0, 1}poly(n).

Witness: A program Π ∈ {0, 1}∗, a string y ∈ {0, 1}∗ and a string s ∈ {0, 1}poly(n).

Relation: Rsim(〈h, c, r〉, 〈Π, y, s〉) = 1 if and only if:

1. |y| ≤ |r| − n.

2. c = Com(h(Π); s).

3. Π(y) = r within T (n) steps.

Figure 3: Rsim - A variant of Barak’s relation.
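To make the three conditions concrete, the following toy Python sketch checks Rsim for a program modeled as a Python callable. Everything here is an illustrative stand-in of our own choosing, not the paper's instantiation: SHA-256 plays the role of h ∈ Hn, a hash of the pair (s, value) plays the role of Com, the callable's bytecode plays the role of the description of Π, and the T(n) step bound is omitted.

```python
import hashlib

N_BYTES = 32  # toy security parameter (in bytes), chosen for illustration

def h(data: bytes) -> bytes:
    # Stand-in for a hash function h drawn from the family H_n.
    return hashlib.sha256(data).digest()

def Com(value: bytes, s: bytes) -> bytes:
    # Toy stand-in for the statistically binding commitment Com(value; s).
    return hashlib.sha256(s + value).digest()

def R_sim(instance, witness) -> bool:
    """Check the three conditions of R_sim from Figure 3 (T(n) bound omitted)."""
    c, r = instance            # the hash function h is fixed globally above
    Pi, y, s = witness         # the program Pi is modeled as a Python callable
    if len(y) > len(r) - N_BYTES:              # Condition 1: |y| <= |r| - n
        return False
    if c != Com(h(Pi.__code__.co_code), s):    # Condition 2: c = Com(h(Pi); s)
        return False
    return Pi(y) == r                          # Condition 3: Pi(y) = r
```

The simulator's strategy corresponds to committing to a Π that reproduces the verifier's answer r on input y; a cheating prover would have to commit to such a Π before ever seeing r.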

Remark 4.1 (Simplifying assumptions) The relation presented in Figure 3 is slightly oversimplified and will make Barak's protocol work only when {Hn}n is collision resistant against "slightly" super-polynomial sized circuits [1]. To make it work assuming collision resistance against polynomial-sized circuits, one should use a "good" error-correcting code ECC (i.e., with constant distance and with polynomial-time encoding and decoding), and replace the condition c = Com(h(Π); s) with c = Com(h(ECC(Π)); s) [3]. We also assume that Com is a one-message commitment scheme. Such schemes can be constructed based on any 1-1 one-way function. At the cost of a small complication, the one-message scheme could have been replaced by the 2-message commitment scheme of [34], which can be based on "ordinary" one-way functions [28].

Let L be any language in NP, let n ∈ N, and let x ∈ {0, 1}n be the common input for the protocol. The idea is to have the prover claim (in a witness-indistinguishable fashion) that either x ∈ L, or that 〈h, c, r〉 belongs to the language Lsim that corresponds to Rsim, where 〈h, c, r〉 is a triplet that is jointly generated by the prover and the verifier. As will turn out from the analysis, no polynomial-time prover will be able to make 〈h, c, r〉 belong to Lsim. The simulator, on the other hand, will use the verifier's program in order to make sure that 〈h, c, r〉 is indeed in Lsim (while also possessing a witness for this fact).

A subtle point to be taken into consideration is that the verifier's running-time (program size) is not a-priori bounded by any specific polynomial (this is because the adversary verifier might run in arbitrary polynomial time). This imposes a choice of T(n) = n^ω(1) in Rsim, and implies that the corresponding language does not lie in NP (but rather in NTIME(n^ω(1))). Such languages


are beyond the scope of the "traditional" witness-indistinguishable proof systems (which were originally designed to handle "only" NP-languages), and will thus require the usage of a Witness Indistinguishable Universal Argument. Barak's protocol is described in Figure 4.

Common Input: An instance x ∈ {0, 1}n

Security parameter: 1n.

Stage 1:

V → P : Send h ←R Hn.

P → V : Send c = Com(0n).

V → P : Send r ←R {0, 1}3n.

Stage 2 (Body of the proof):

P ⇔ V : A WI UARG 〈PUA, VUA〉 proving the OR of the following statements:

1. ∃ w ∈ {0, 1}poly(|x|) s.t. RL(x, w) = 1.
2. ∃ 〈Π, y, s〉 s.t. Rsim(〈h, c, r〉, 〈Π, y, s〉) = 1.

Figure 4: Barak’s ZK argument for NP - 〈PB, VB〉.

Soundness. The idea behind the soundness of 〈PB, VB〉 is that any program Π (be it efficient or not) has only one output for any given input. This means that Π, when fed with an input y, has probability 2−3n to "hit" a string r ←R {0, 1}3n. Since the prover sends c before actually receiving r, and since Rsim imposes |y| ≤ |r| − n = 2n, it is not able to "arrange" that both c = Com(h(Π)) and Π(y) = r with probability significantly greater than 22n · 2−3n = 2−n (as |c| ≤ 2n). The only way for a prover to make the honest verifier accept in the WIUARG is thus to use a witness w for RL. This guarantees that whenever the verifier is convinced, it is indeed the case that x ∈ L.
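The counting behind this bound can be sketched in a few lines of Python. This is our simplification of the argument: we treat the committed program Π as fixed by c, and union-bound only over the auxiliary input y.

```python
from fractions import Fraction

def cheating_bound(n: int) -> Fraction:
    # A fixed program Pi hits a uniformly random r in {0,1}^{3n} with
    # probability 2^{-3n} on any single input y. Condition 1 of R_sim
    # restricts y to length at most |r| - n = 2n, so a union bound over
    # all such y bounds the prover's probability of arranging Pi(y) = r.
    num_inputs = sum(2 ** k for k in range(2 * n + 1))  # < 2^{2n+1} strings y
    return Fraction(num_inputs, 2 ** (3 * n))

# The resulting bound is below 2^{-(n-1)}, i.e. negligible in n.
assert cheating_bound(8) < Fraction(1, 2 ** 7)
```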

Zero-knowledge. Let V∗ be the program of a potentially malicious verifier. The ZK property of 〈PB, VB〉 follows by letting the simulator set Π = V∗ and y = c. Since |c| ≤ 2n ≤ |r| − n and since, by definition, V∗(c) always equals r, the simulator can set c = Com(h(V∗); s) in Stage 1, and use the triplet 〈V∗, c, s〉 as a witness for Rsim in the WIUARG. This enables the simulator to produce convincing interactions, even without knowing a valid witness for x ∈ L. The ZK property then follows (with some work) from the hiding property of Com and the WI property of 〈PUA, VUA〉.

4.2 A “Special-Purpose” Universal Argument

Before we proceed with the construction of our new protocol, we will need to present a universal argument that is specially tailored for our purposes. The main distinguishing features of this universal argument, which we call the special-purpose argument, are: (1) it is statistically witness indistinguishable; and (2) it will enable us to prove that our protocols satisfy the proof of knowledge property of Definition 2.7.⁶

⁶The "weak" proof of knowledge property of a universal argument (as defined in [3]) is not sufficient for our purposes. Specifically, while in a weak proof of knowledge it is required that the extractor succeeds with probability


Let Com be a statistically-hiding commitment scheme for strings of length n, and let Rsim denote the variant of the relation Rsim from Figure 3 in which the statistically-binding commitment scheme of Figure 3 is replaced with this statistically-hiding Com. Let 〈PsWI, VsWI〉 be a statistical witness-indistinguishable argument of knowledge, and let 〈PUA, VUA〉 be a 4-message, public-coin universal argument where the length of the messages is upper bounded by n.⁷ The special-purpose UARG, which we denote by 〈PsUA, VsUA〉, handles statements of the form (x, 〈h, c1, c2, r1, r2〉), where the triplets 〈h, c1, r1〉 and 〈h, c2, r2〉 correspond to instances for Rsim. The protocol 〈PsUA, VsUA〉 is described in Figure 5.

Parameters: Security parameter 1n.

Common Input: x∈{0, 1}n, 〈h, c1, c2, r1, r2〉 where for i∈{1, 2}, 〈h, ci, ri〉 is an instance for Rsim.

Stage 1 (Encrypted UARG):

V → P : Send α ←R {0, 1}n.

P → V : Send β = Com(0n).

V → P : Send γ ←R {0, 1}n.

P → V : Send δ = Com(0n).

Stage 2 (Body of the proof):

P ↔ V : A statistical WIAOK 〈PsWI, VsWI〉 proving the OR of the following statements:

1. ∃ w ∈ {0, 1}poly(|x|) so that RL(x, w) = 1.
2. ∃ 〈β′, δ′, s1, s2〉 so that:

   • β = Com(β′; s1).
   • δ = Com(δ′; s2).
   • (α, β′, γ, δ′) is an accepting transcript for 〈PUA, VUA〉 proving the statement:

     – ∃ 〈i, Π, y, s〉 so that Rsim(〈h, ci, ri〉, 〈Π, y, s〉) = 1.

Figure 5: A special-purpose universal argument 〈PsUA, VsUA〉.

4.3 A family of 2n protocols

We next present a family of protocols {〈Ptag, Vtag〉}tag∈[2n] (with tags of length t(n) = log n + 1).⁸

The protocols are based on 〈PB, VB〉, and are a variant of the ZK protocols introduced by Pass [38]. There are two key differences between 〈Ptag, Vtag〉 and 〈PB, VB〉: in the 〈Ptag, Vtag〉 protocol (1) the prover (simulator) is given two opportunities to guess the verifier's "next message", and (2) the lengths of the verifier's "next messages" depend on the tag of the protocol. We mention that the idea of using a multiple-slot version of 〈PB, VB〉 already appeared in [39, 38], and the message-length technique appeared in [38]. However, our protocols {〈Ptag, Vtag〉}tag∈[2n] differ from the protocol of [38] in two aspects: our protocols are required to satisfy (1) a statistical secrecy property, and (2) a proof of knowledge property. Towards this end, we replace the statistically-binding commitments, Com, used in the presentation of 〈PB, VB〉 with statistically-hiding commitments, and replace the

that is polynomially related to the success probability of the prover, in our proof of security we will make use of an extractor that succeeds with probability negligibly close to the success probability of the prover.

⁷Both statistical witness-indistinguishable arguments of knowledge and 4-message, public-coin universal arguments can be constructed assuming a family Hn of standard collision resistant hash functions (cf. [20] and [29, 33, 3]).

⁸A closer look at the construction will reveal that it will in fact work for any t(n) = O(log n). The choice of t(n) = log n + 1 is simply made for the sake of concreteness (as in our constructions it is the case that tag ∈ [2n]).


use of a WIUARG with the use of a "special-purpose" UARG. Let Com be a statistically-hiding commitment scheme for strings of length n, where for any α ∈ {0, 1}n, the length of Com(α) is upper bounded by 2n. Let Rsim be the statistical variant of the relation Rsim, and let 〈PsUA, VsUA〉 be the special-purpose universal argument (both Rsim and 〈PsUA, VsUA〉 are described in Section 4.2). Protocol 〈Ptag, Vtag〉 is described in Figure 6.

Common Input: An instance x ∈ {0, 1}n

Parameters: Security parameter 1n, length parameter `(n).

Tag String: tag ∈ [2n].

Stage 0 (Set-up):

V → P : Send h ←R Hn.

Stage 1 (Slot 1):

P → V : Send c1 = Com(0n).

V → P : Send r1 ←R {0, 1}tag·ℓ(n).

Stage 1 (Slot 2):

P → V : Send c2 = Com(0n).

V → P : Send r2 ←R {0, 1}(2n+1−tag)·ℓ(n).

Stage 2 (Body of the proof):

P ⇔ V : A special-purpose UARG 〈PsUA, VsUA〉 proving the OR of the following statements:

1. ∃ w ∈ {0, 1}poly(|x|) s.t. RL(x, w) = 1.
2. ∃ 〈Π, y, s〉 s.t. Rsim(〈h, c1, r1〉, 〈Π, y, s〉) = 1.
3. ∃ 〈Π, y, s〉 s.t. Rsim(〈h, c2, r2〉, 〈Π, y, s〉) = 1.

Figure 6: Protocol 〈Ptag, Vtag〉.
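As a quick sanity check on the message-length technique of Figure 6, the sketch below (our own illustration, not part of the construction) computes the two challenge lengths and confirms one easily checked consequence: while the split between the slots varies with tag, the total challenge length (2n+1)·ℓ(n) does not.

```python
def challenge_lengths(tag: int, n: int, ell: int) -> tuple[int, int]:
    # |r1| = tag * ell(n) and |r2| = (2n + 1 - tag) * ell(n), as in Figure 6.
    assert 1 <= tag <= 2 * n
    return tag * ell, (2 * n + 1 - tag) * ell

n = 4
ell = 3 * n  # the stand-alone analysis only needs ell(n) >= 3n
for tag in range(1, 2 * n + 1):
    l1, l2 = challenge_lengths(tag, n, ell)
    assert l1 + l2 == (2 * n + 1) * ell  # the total is independent of tag
```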

Note that the only difference between two protocols 〈Ptag, Vtag〉 and 〈P˜tag, V˜tag〉 is the length of the verifier's "next messages": in fact, the length of those messages in 〈Ptag, Vtag〉 is a parameter that depends on tag (as well as on the length parameter ℓ(n)). This property will be crucial for the analysis of these protocols in the man-in-the-middle setting.

Using similar arguments to the ones used for 〈PB, VB〉, it can be shown that 〈Ptag, Vtag〉 is computationally sound. The main difference to be taken into consideration is the existence of multiple slots in Stage 1 (see Lemma A.1 for a proof of an even stronger statement).

The ZK property of 〈Ptag, Vtag〉 is proved exactly as in the case of 〈PB, VB〉, by letting the simulator pick either i = 1 or i = 2, and use 〈V∗, ci, si〉 as the witness for 〈h, ci, ri〉 ∈ Lsim (where Lsim is the language that corresponds to Rsim). Since for every tag ∈ [2n], |ri| − |ci| ≥ ℓ(n) − 2n, we have that as long as ℓ(n) ≥ 3n, the protocol 〈Ptag, Vtag〉 is indeed ZK.

We wish to highlight some useful properties of 〈Ptag, Vtag〉. These properties will turn out to be relevant when dealing with a man in the middle.

Freedom in the choice of the slot: The simulator described above has the freedom to choose which i ∈ {1, 2} it will use in order to satisfy the relation Rsim. In particular, for the simulation to succeed, it is sufficient that 〈h, ci, ri〉 ∈ Lsim for some i ∈ {1, 2}.

Using a longer y in the simulation: The stand-alone analysis of 〈Ptag, Vtag〉 only requires


ℓ(n) ≥ 3n. Allowing larger values of ℓ(n) opens the possibility of using a longer y in the simulation. This will turn out to be useful if the verifier is allowed to receive "outside" messages that do not belong to the protocol (as occurs in the man-in-the-middle setting).

Statistical secrecy: The output of the simulator described above is statistically close to real interactions (whereas the security guaranteed in 〈PB, VB〉 is only computational). A related property will turn out to be crucial for the use of 〈Ptag, Vtag〉 as a subroutine in higher-level applications (such as non-malleable commitments).

Proof of knowledge: 〈Ptag, Vtag〉 is a proof of knowledge. That is, for any prover P∗ and for any x ∈ {0, 1}n, if P∗ convinces the honest verifier V that x ∈ L with non-negligible probability, then one can extract a witness w that satisfies RL(x, w) = 1 in (expected) polynomial time.

4.4 A family of 2^n protocols

Relying on the protocol family {〈Ptag, Vtag〉}tag∈[2n], we now show how to construct a family {〈Ptag, Vtag〉}tag∈{0,1}n with tags of length t(n) = n. The protocols are constant-round and involve n parallel executions of 〈Ptag, Vtag〉, with appropriately chosen tags. This new family of protocols is denoted {〈Ptag, Vtag〉}tag∈{0,1}n and is described in Figure 7.

Common Input: An instance x ∈ {0, 1}n

Parameters: Security parameter 1n, length parameter `(n)

Tag String: tag ∈ {0, 1}n. Let tag = tag1, . . . , tagn.

The protocol:

P ↔ V : For all i ∈ {1, . . . , n} (in parallel):

1. Set tagi = (i, tagi).
2. Run 〈Ptagi, Vtagi〉 with common input x and length parameter ℓ(n).

V : Accept if and only if all runs are accepting.

Figure 7: Protocol 〈Ptag, Vtag〉.

Notice that 〈Ptag, Vtag〉 has a constant number of rounds (since each 〈Ptagi, Vtagi〉 is constant-round). Also notice that for i ∈ [n], the length of tagi = (i, tagi) is

|i| + |tagi| = log n + 1 = log(2n).

Viewing the pairs (i, tagi) as elements in [2n], we infer that the length of verifier messages in 〈Ptagi, Vtagi〉 is upper bounded by 2n·ℓ(n). Hence, as long as ℓ(n) = poly(n), the length of verifier messages in 〈Ptag, Vtag〉 is at most 2n²·ℓ(n) = poly(n).
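The arithmetic above can be checked mechanically. In the sketch below, the particular injection from pairs (i, tagi) into [2n] is our own concrete choice for illustration; any injection works.

```python
from math import log2

def encode(i: int, bit: int, n: int) -> int:
    # One concrete injection of [n] x {0,1} into [2n] for tag_i = (i, tag_i).
    assert 1 <= i <= n and bit in (0, 1)
    return 2 * (i - 1) + bit + 1

n = 8
codes = {encode(i, b, n) for i in range(1, n + 1) for b in (0, 1)}
assert codes == set(range(1, 2 * n + 1))  # onto [2n], hence injective
assert log2(2 * n) == log2(n) + 1         # |i| + |tag_i| = log n + 1 bits
```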

We now turn to show that for any tag ∈ {0, 1}n, the protocol 〈Ptag, Vtag〉 is an interactive argument. In fact, what we show is a stronger statement. Namely, that the protocols 〈Ptag, Vtag〉 are proofs (actually arguments) of knowledge (as in Definition 2.7). For simplicity of exposition, we will show how to prove the above assuming a family of hash functions that is collision resistant against T(n) = n^ω(1)-sized circuits. As mentioned in Remark 4.1, by slightly modifying Rsim, one can prove the same statement under the more standard assumption of collision resistance against polynomial-sized circuits.


Proposition 4.2 (Argument of knowledge) Let 〈PsWI, VsWI〉 and 〈PUA, VUA〉 be the protocols used in the construction of 〈PsUA, VsUA〉. Suppose that {Hn}n is collision resistant for T(n)-sized circuits, that Com is statistically hiding, that 〈PsWI, VsWI〉 is a statistical witness-indistinguishable argument of knowledge, and that 〈PUA, VUA〉 is a universal argument. Then, for any tag ∈ {0, 1}n, 〈Ptag, Vtag〉 is an interactive argument of knowledge.

Similar arguments to the ones used to prove Proposition 4.2 have already appeared in the works of Barak [1], and Barak and Goldreich [3]. While our proof builds on these arguments, it is somewhat more involved. For the sake of completeness, the full proof appears in Appendix A.

5 Proving Simulation-Extractability

Our central technical lemma states that the family of protocols {〈Ptag, Vtag〉}tag∈{0,1}n is simulation extractable. As shown in Proposition 3.6, this implies that these protocols are also non-malleable zero-knowledge.

Lemma 5.1 (Simulation extractability) Suppose that Com is statistically hiding, that {Hn}n is a family of collision-resistant hash functions, that 〈PsUA, VsUA〉 is a special-purpose WIUARG, and that ℓ(n) ≥ 2n² + 2n. Then, {〈Ptag, Vtag〉}tag∈{0,1}n is simulation extractable.

The proof of Lemma 5.1 is fairly complex. To keep things manageable, we first give an overview of the proof, describing the key ideas used for establishing the simulation extractability of the family {〈Ptag, Vtag〉}tag∈[2n]. This is followed by a full proof for the case of {〈Ptag, Vtag〉}tag∈{0,1}n.

5.1 Proof Overview

Consider a man-in-the-middle adversary A that is playing the role of the verifier of 〈Ptag, Vtag〉 in the left interaction while simultaneously playing the role of the prover of 〈P˜tag, V˜tag〉 in the right interaction. Recall that in order to prove simulation-extractability we have to show that for any such A, there exists a combined simulator-extractor S = (SIM, EXT) that is able to simulate both the left and the right interactions for A, while simultaneously extracting a witness to the statement x̃ proved in the right interaction.

Towards this goal, we will construct a simulator S that is able to "internally" generate Ptag messages for the left interaction of A, even if the messages in the right interaction are forwarded to A from an "external" verifier V˜tag. The simulator S is almost identical to the simulator of [38] and exploits the difference in message lengths between the protocols 〈Ptag, Vtag〉 and 〈P˜tag, V˜tag〉. As the analysis will demonstrate, the left view produced by the simulator S is statistically indistinguishable from A's actual interactions with an honest left prover Ptag. Furthermore, we show that:

1. It will be possible to construct a procedure SIM that faithfully simulates A's view in a man-in-the-middle execution. To do so, we will honestly play the role of V˜tag in the right interaction and use S to simulate A's left interaction with Ptag (pretending that the messages from the right interaction came from an external V˜tag).

2. It will be possible to construct a procedure EXT that extracts witnesses for the statements x̃ proved in the right interactions of the views generated by the above SIM. To do so, we will use S to transform A into a stand-alone prover P∗˜tag for the statement x̃. This will be done by having P∗˜tag internally emulate A's execution, while forwarding A's messages to an external


honest verifier V˜tag, and using S to simulate A's left interaction with Ptag. We can then invoke the knowledge extractor that is guaranteed by the (stand-alone) proof of knowledge property of 〈P˜tag, V˜tag〉 and obtain a witness for x̃ ∈ L.

It is important to have both SIM and EXT use the same simulator program S (with the same random coins) in their respective executions. Otherwise, we are not guaranteed that the statement x̃ appearing in the output of SIM is the same one EXT extracts a witness from.⁹

The execution of S (with one specific scheduling of messages) is depicted in Figure 8 below. In order to differentiate between the left and right interactions, messages m in the right interaction are labeled as m̃. Stage 2 messages in the left and right interactions are denoted u and ũ respectively.

[Figure 8 depicts the execution of S: internally, S generates the left-interaction messages c1, r1, c2, r2 of Ptag for A, while the right-interaction messages h̃, c̃1, r̃1, c̃2, r̃2 and the Stage 2 messages ũ are relayed between A and the external verifier V˜tag.]

Figure 8: The simulator S.

The main hurdle in implementing S is in making the simulation of the left interaction work. The problem is that the actual code of the verifier whose view we are simulating is only partially available to S. This is because the messages sent by A in the left interaction also depend on the messages A receives in the right interaction. These messages are sent by an "external" V˜tag, and V˜tag's code (randomness) is not available to S.

Technically speaking, the problem is implied by the fact that the values of the ri's do not necessarily depend only on the corresponding ci, but rather may also depend on the "external" right messages r̃i. Thus, setting Π = A and y = ci in the simulation (as done in Section 4.3) will not be sufficient, since in some cases it is simply not true that ri = A(ci).

Intuitively, the most difficult case to handle is the one in which r̃1 is contained in Slot 1 of 〈Ptag, Vtag〉 and r̃2 is contained in Slot 2 of 〈Ptag, Vtag〉 (as in Figure 8 above). In this case ri = A(ci, r̃i), and so ri = A(ci) does not hold for either i ∈ {1, 2}. As a consequence, the simulator will not be able to produce views of convincing Stage 2 interactions with A. In order to overcome the difficulty, we will use the fact that for a given instance 〈h, ci, ri〉, the string ci is short enough to be "augmented" by r̃i while still satisfying the relation Rsim.

Specifically, as long as |ci| + |r̃i| ≤ |ri| − n, the relation Rsim can be satisfied by setting y = (ci, r̃i). This guarantees that indeed Π(y) = ri. The crux of the argument lies in the following fact.

Fact 5.2 If tag ≠ ˜tag then there exists i ∈ {1, 2} so that |r̃i| ≤ |ri| − ℓ(n).

⁹The statement x̃ will remain unchanged because x̃ occurs prior to any message in the right interaction (and hence does not depend on the external messages received by P∗˜tag).
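Fact 5.2 follows directly from the challenge lengths of Figure 6, and can be verified exhaustively for small parameters. The helper names below are ours, chosen for illustration.

```python
def r_len(tag: int, slot: int, n: int, ell: int) -> int:
    # Challenge lengths from Figure 6: |r1| = tag*ell, |r2| = (2n+1-tag)*ell.
    return tag * ell if slot == 1 else (2 * n + 1 - tag) * ell

def fact_5_2(tag: int, tilde_tag: int, n: int, ell: int) -> bool:
    # Some slot i satisfies |r~_i| <= |r_i| - ell(n), where r~_i uses tilde_tag.
    return any(r_len(tilde_tag, i, n, ell) <= r_len(tag, i, n, ell) - ell
               for i in (1, 2))

n = 6
ell = 2 * n * n + 2 * n  # ell(n) = 2n^2 + 2n, as required by Lemma 5.1
assert all(fact_5_2(t, s, n, ell)
           for t in range(1, 2 * n + 1)
           for s in range(1, 2 * n + 1) if t != s)
```

Concretely, if tag > ˜tag then Slot 1 gives the required gap, and if tag < ˜tag then Slot 2 does.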


By setting y = (ci, r̃i) for the appropriate i, the simulator is thus always able to satisfy Rsim for some i ∈ {1, 2}. This is because the "auxiliary" string y used in order to enable the prediction of ri is short enough to pass the inspection at Condition 1 of Rsim (i.e., |y| = |ci| + |r̃i| ≤ |ri| − n).¹⁰

Once Rsim can be satisfied, the simulator is able to produce views of convincing interactions that are computationally indistinguishable from real left interactions.¹¹

The extension of the above analysis to the case of 〈Ptag, Vtag〉 has to take several new factors into consideration. First, each execution of 〈Ptag, Vtag〉 consists of n parallel executions of 〈Ptag, Vtag〉 (and not only one). This imposes the constraint ℓ(n) ≥ 2n² + 2n, and requires a careful specification of the way in which the left simulation procedure handles the verifier challenges in 〈Ptag, Vtag〉. Secondly, and more importantly, the simulation procedure will not be able to handle a case in which 〈P˜tag, V˜tag〉 messages of the right interaction are forwarded from an external verifier V˜tag (because these messages are too long for the simulation to work).

While this does not seem to pose a problem for the SIM procedure, it suddenly becomes unclear how to construct a stand-alone prover P∗˜tag for the EXT procedure (since this involves forwarding messages from V˜tag). The way around this difficulty will be to construct a stand-alone prover P∗˜tag for a single sub-protocol 〈P˜tag, V˜tag〉 instead. This will guarantee that the only messages that end up being forwarded are sent by an external verifier V˜tag, whose messages are short enough to make the simulation work. Once such a P∗˜tag is constructed, it is possible to use the knowledge extractor for 〈P˜tag, V˜tag〉 in order to obtain a witness for x̃.

5.2 Many-to-One Simulation-Extractability

We now proceed with the proof of Lemma 5.1. To establish simulation-extractability of 〈Ptag, Vtag〉, we first consider what happens when a man-in-the-middle adversary is simultaneously involved in the verification of many different (parallel) executions of 〈Ptag, Vtag〉 on the left while proving a single interaction of 〈P˜tag, V˜tag〉 on the right. As it turns out, as long as the number of left executions is bounded in advance, we can actually guarantee simulation-extractability even in this scenario.

For any tag = (tag1, . . . , tagn) ∈ [2n]^n we consider a left interaction in which the protocols 〈Ptag1, Vtag1〉, . . . , 〈Ptagn, Vtagn〉 are executed in parallel with common input x ∈ {0, 1}n, and a right interaction in which 〈P˜tag, V˜tag〉 is executed with common input x̃ ∈ {0, 1}n. The strings ˜tag and x̃ are chosen adaptively by the man-in-the-middle A. The witness used by the prover in the left interaction is denoted by w, and the auxiliary input used by A is denoted by z.

Proposition 5.3 Let A be a MIM adversary as above, and suppose that ℓ(n) ≥ 2n² + 2n. Then, there exists a probabilistic expected polynomial-time machine S such that the following conditions hold:

1. The probability ensembles {S1(x, z, tag)}x,z,tag and {viewA(x, z, tag)}x,z,tag are statistically close over L, where S1(x, z, tag) denotes the first output of S(x, z, tag).

2. Let x ∈ L, z ∈ {0, 1}∗, tag ∈ {0, 1}t(|x|), and let (view, w̃) denote the output of S(x, z, tag) (on input some random tape). Let x̃ be the right-execution statement appearing in view and let ˜tag denote the right-execution tag. Then, if the right execution in view is accepting AND tagj ≠ ˜tag for all j ∈ [n], then RL(x̃, w̃) = 1.

¹⁰This follows from the fact that ℓ(n) ≥ 3n and |ci| = 2n.

¹¹In the above discussion we have been implicitly assuming that h̃, ũ are not contained in the two slots of 〈Ptag, Vtag〉 (where h̃ denotes the hash function in the right interaction and ũ denotes the sequence of messages sent in the right WIUARG). The case in which h̃, ũ are contained in the slots can be handled by setting ℓ(n) ≥ 4n, and by assuming that both |h̃| and the total length of the messages sent by the verifier in the WIUARG are at most n. We mention that the latter assumption is reasonable, and is indeed satisfied by known protocols (e.g. the WIUARG of [3]).


Proof: As discussed in Section 5.1, we construct a "many-to-one" simulator S that internally generates a left view of 〈Ptag, Vtag〉 = (〈Ptag1, Vtag1〉, . . . , 〈Ptagn, Vtagn〉) for A while forwarding messages from the right interaction to an external honest verifier V˜tag. This simulator is essentially identical to the simulator of [38].¹² We then show how to use S to construct the procedures (SIM, EXT).

5.2.1 The Many-to-One Simulator

The many-to-one simulator S invokes A as a subroutine. It attempts to generate views of the left and right interactions that are indistinguishable from A's view in real interactions. Messages in the right interaction are forwarded by S to an "external" honest verifier V˜tag for 〈P˜tag, V˜tag〉, whose replies are then fed back to A. Messages in the left interaction are handled by n "sub-simulators" S1, . . . , Sn, where each Sj is responsible for generating the messages of the sub-protocol 〈Ptagj, Vtagj〉. The execution of the simulator is depicted in Figure 9 (for simplicity, we ignore the messages h1, . . . , hn, u1, . . . , un and h̃, ũ).

[Figure 9 depicts the many-to-one simulator S: the sub-simulators S1, . . . , Sn generate the Stage 1 messages of the left sub-protocols for A, while the right-interaction messages c̃1, r̃1, c̃2, r̃2 are relayed between A and the external verifier V˜tag.]

Figure 9: The "many-to-one" simulator S.

The specific actions of a sub-simulator Sj depend on the scheduling of Stage 1 messages as decided by A. The schedulings of left and right messages are divided into three separate cases, which are depicted in Figure 10 below. In all three cases we make the simplifying assumption that h̃ is scheduled in the right interaction before h1, . . . , hn are scheduled in the left interaction. We also assume that the 〈PsUA, VsUA〉 messages ũ in the right interaction are scheduled after the 〈PsUA, VsUA〉 messages u1, . . . , un in the left interaction. We later argue how these assumptions can be removed.

Let Aj be a program that acts exactly like A, but for any i ∈ {1, 2}, instead of outputting r^1_i, . . . , r^n_i it outputs only r^j_i. Given a string α ∈ {0, 1}∗, let A(α, ·) denote the program obtained by "hardwiring" α into it (i.e., A(α, ·) evaluated on β equals A(α, β)). We now describe Sj's actions in each of the three cases:

None of r̃1, r̃2 is contained in Slot 1 of 〈Ptag, Vtag〉: Assume for simplicity that c̃1, r̃1, c̃2, r̃2 are all contained in Slot 2 of 〈Ptag, Vtag〉 (Fig. 10a). The simulator Sj sets c^j_1 = Com(h(Π1); s1) and c^j_2 = Com(0^n; s2), where Π1 = Aj(x, ·). It then sets the triplet 〈Π1, (c^1_1, . . . , c^n_1), s1〉 as witness for 〈h^j, c^j_1, r^j_1〉 ∈ Lsim.

¹²In fact, the simulator presented here is somewhat simplified in that we only consider n parallel executions of 〈Ptag, Vtag〉, whereas [38] shows a simulator also for n concurrent executions of the protocols.


[Figure 10 shows three "representative" schedulings of the right-interaction messages c̃1, r̃1, c̃2, r̃2 relative to the two slots of the left interaction: (a) all of them contained in Slot 2; (b) all of them contained in Slot 1; (c) c̃1, r̃1 contained in Slot 1 and c̃2, r̃2 contained in Slot 2.]

Figure 10: Three "representative" schedulings.

None of r̃1, r̃2 is contained in Slot 2 of 〈Ptag, Vtag〉: Assume for simplicity that c̃1, r̃1, c̃2, r̃2 are all contained in Slot 1 of 〈Ptag, Vtag〉 (Fig. 10b). The simulator Sj sets c^j_1 = Com(0^n; s1) and c^j_2 = Com(h(Π2); s2), where Π2 = Aj(x, c^1_1, . . . , c^n_1, r̃1, r̃2, ·). It then sets the triplet 〈Π2, (c^1_2, . . . , c^n_2), s2〉 as witness for 〈h^j, c^j_2, r^j_2〉 ∈ Lsim.

r̃1 is contained in Slot 1 and r̃2 is contained in Slot 2 of 〈Ptag, Vtag〉: In this case c̃1, r̃1 are both contained in Slot 1 of 〈Ptag, Vtag〉, and c̃2, r̃2 are both contained in Slot 2 of 〈Ptag, Vtag〉 (Fig. 10c). The simulator Sj sets c^j_1 = Com(h(Π1); s1) and c^j_2 = Com(h(Π2); s2), where Π1 = Aj(x, ·) and Π2 = Aj(x, c^1_1, . . . , c^n_1, r̃1, ·). Then:

• If tagj > ˜tag, the simulator sets 〈Π1, (c^1_1, . . . , c^n_1, r̃1), s1〉 as witness for 〈h^j, c^j_1, r^j_1〉 ∈ Lsim.

• If tagj < ˜tag, the simulator sets 〈Π2, (c^1_2, . . . , c^n_2, r̃2), s2〉 as witness for 〈h^j, c^j_2, r^j_2〉 ∈ Lsim.

In all cases, combining the messages together results in a Stage 1 transcript τ^j_1 = 〈h^j, c^j_1, r^j_1, c^j_2, r^j_2〉. By definition of 〈Ptagj, Vtagj〉, the transcript τ^j_1 induces a Stage 2 special-purpose WIUARG with common input (x, 〈h^j, c^j_1, r^j_1〉, 〈h^j, c^j_2, r^j_2〉). The sub-simulator Sj now follows the prescribed prover strategy PsUA and produces a Stage 2 transcript τ^j_2 for 〈Ptagj, Vtagj〉.

Remark 5.4 (Handling h̃ and ũ) To handle the case in which either h̃ or ũ is contained in one of the slots, we set ℓ(n) ≥ 2n² + 2n and let the simulator append either h̃ or ũ to the auxiliary string y (whenever necessary). This will guarantee that the program committed to by the simulator indeed outputs the corresponding "challenge" r^j_i when fed with y as input. The crucial point is that even after appending h̃ or ũ to y, it will still be the case that |y| ≤ |r^j_i| − n. This just follows from the fact that the total length of h̃ and the messages ũ sent in 〈PsUA, VsUA〉 is upper bounded by, say, n, and that the gap between |r^j_i| and the "original" |y| (i.e., before appending h̃ or ũ to it) is guaranteed to be at least 2n (this follows from the requirement ℓ(n) ≥ 2n² + 2n).

Output of S. To generate its output, which consists of a verifier view of a 〈Ptag, Vtag〉 interaction, S combines all the views generated by the Sj's. Specifically, letting σ^j_1 = (c^j_1, c^j_2) be the verifier's view of τ^j_1, and σ^j_2 be the verifier's view of τ^j_2, the output of S consists of (σ1, σ2) = ((σ^1_1, . . . , σ^n_1), (σ^1_2, . . . , σ^n_2)).


5.2.2 The Simulator-Extractor

Using S, we construct the simulator-extractor S = (SIM, EXT). We start with the machine SIM. In the right interaction SIM’s goal is to generate messages by a verifier V˜tag. This part of the simulation is quite straightforward, and is performed by simply playing the role of an honest verifier in the execution of the protocol (with the exception of cases in which tagj = ˜tag for some j ∈ [n] – see below for details). In the left interaction, on the other hand, SIM is supposed to act as a prover Ptag, and this is where S is invoked.

The machine SIM. On input (x, z, tag), and given a man-in-the-middle adversary A, SIM starts by constructing a man-in-the-middle adversary A′ that acts as follows:

Internal messages: Pick random verifier messages M′ = (h, r1, r2, u) for the right interaction.

Right interaction: The statement x̃ proved is the same as the one chosen by A. If there exists j ∈ [n] so that tagj = ˜tag, use the messages in M′ in order to internally emulate a right interaction for A (while ignoring the external V˜tag messages M). Otherwise, forward A’s messages in the right interaction to an external V˜tag and send back his answers M to A.

Left interaction: As induced by the scheduling of messages by A, forward the messages sent by A in the left interaction to an external prover Ptag, and send back his answers to A.

Fig. 11.a describes the behavior of A′ in case tagj ≠ ˜tag for all j ∈ [n], whereas Fig. 11.b describes its behavior otherwise. The purpose of constructing such an A′ is to enable us to argue that, for all “practical purposes”, the man-in-the-middle adversary never uses a ˜tag that satisfies tagj = ˜tag for some j ∈ [n] (as in such cases A′ ignores all messages M in the right interaction anyway).

Figure 11: The adversary A′. (a) The case tagj ≠ ˜tag for all j ∈ [n]: A′ forwards the right-interaction messages to the external V˜tag. (b) The case tagj = ˜tag for some j ∈ [n]: A′ emulates the right interaction internally using M′.

Given the new adversary A′, the machine SIM picks random V˜tag messages M, and invokes the simulator S with random coins s. The simulator’s goal is to generate a view of a left interaction for an A′ that receives messages M in the right interaction.

Let (σ1, σ2) be the view generated by S for the left 〈Ptag, Vtag〉 interaction, and let ˜tag be the tag sent by A in the right interaction when (σ1, σ2) is its view of the left interaction. If there exists j ∈ [n] so that tagj = ˜tag then SIM outputs M′ as the right view of A. Otherwise, it outputs M. SIM always outputs (σ1, σ2) as a left view for A.

The machine EXT. The machine EXT starts by sampling a random execution of SIM, using random coins s, M′, M. Let x̃ be the right-hand-side common input that results from feeding the output of SIM to A. Our goal is to extract a witness to the statement x̃. At a high level, EXT acts as follows:


1. If the right session was not accepting, or tagj = ˜tag for some j ∈ [n], EXT will assume that no witness exists for the statement x̃, and will refrain from extraction.

2. Otherwise, EXT constructs a stand-alone prover P*˜tag for the right interaction 〈P˜tagi, V˜tagi〉, from which it will later attempt to extract the witness (see Figure 12).

In principle, the prover P*˜tag will follow SIM’s actions using the same random coins s used for initially sampling the execution of SIM. However, P*˜tag’s execution will differ from SIM’s execution in that it will not use the messages M in the right interaction of A, but will instead forward the messages it receives from an external verifier V˜tag.

Figure 12: The prover P*˜tag. It internally runs SIM’s execution (with common inputs ˜tag, x̃ on the right) and forwards A’s right-interaction messages to the external verifier V˜tag.

3. Once P*˜tag is constructed, EXT can apply the knowledge extractor, guaranteed by the proof of knowledge property of 〈P˜tag, V˜tag〉, and extract a witness w to the statement x̃. In the unlikely event that the extraction fails, EXT outputs fail. Otherwise, it outputs w.

Remark 5.5 It is important to have a prover P*˜tag for the entire protocol 〈P˜tagi, V˜tagi〉 (and not just for 〈PsUA, VsUA〉). This is required in order to argue that the witness extracted is a witness for x̃ and not a witness for 〈h, c1, r1〉 ∈ Lsim or for 〈h, c2, r2〉 ∈ Lsim (which could indeed be the case if we fixed the messages 〈h, c1, r1, c2, r2〉 in advance).

Output of simulator-extractor S. The combined simulator-extractor S runs EXT and outputs fail whenever EXT does so. Otherwise, it outputs the view output by SIM (in the execution by EXT) followed by the output of EXT.

5.2.3 Correctness of Simulation-Extraction

We start by showing that the view of A in the simulation by SIM is statistically close to its view in an actual interaction with Ptag and V˜tag.

Lemma 5.6 {S1(x, z, tag)}x∈L,z∈{0,1}∗,tag∈{0,1}m and {viewA(x, z, tag)}x∈L,z∈{0,1}∗,tag∈{0,1}m are statistically close over L, where S1(x, z, tag) denotes the first output of S(x, z, tag).

Proof: Recall that S proceeds by first computing a joint view 〈(σ1, σ2), M〉 (by running SIM) and then outputs this view only if EXT does not output fail. Below we show that the output of SIM is statistically close to the view of A in real interactions. Since EXT outputs fail only in the event that the extraction fails, and since by the proof of knowledge property of 〈P˜tag, V˜tag〉 the extraction fails only with negligible probability, this implies that the first output of S is also statistically close to the view of A.

Let the random variable {SIM(x, z, tag)} denote the view 〈(σ1, σ2), M〉 output by SIM(x, z, tag) in the execution by S.


Claim 5.7 {SIM(x, z, tag)}x∈L,z∈{0,1}∗,tag∈{0,1}m and {viewA(x, z, tag)}x∈L,z∈{0,1}∗,tag∈{0,1}m are statistically close over L.

Proof: Recall that the output of SIM consists of the tuple 〈(σ1, σ2), M〉, where (σ1, σ2) is the left view generated by the simulator S and M = (h, r1, r2, u) are uniformly chosen messages that are fed to A during the simulation. In other words:

{SIM(x, z, tag)}x,z,tag = {(S(x, z, tag), U|M|)}x,z,tag.

Let x ∈ L, tag ∈ [2n]^n and z ∈ {0, 1}∗. To prove the claim, we will thus compare the distribution (S(x, z, tag), U|M|) with real executions of the man-in-the-middle A(x, z, tag).

We start by observing that, whenever M is chosen randomly, a distinguisher between real and simulated views of the left interaction of A yields a distinguisher between S(x, z, tag) and viewA(x, z, tag). This follows from the following two facts (both facts are true regardless of whether tagj = ˜tag for some j ∈ [n]):

1. The messages M (resp. M′) appearing in its output are identically distributed to the messages in a real right interaction of A with V˜tag (by construction of SIM).

2. The simulation of the left interaction in SIM is done with respect to an A whose right-hand-side view consists of the messages M (resp. M′).

In particular, to distinguish whether a tuple ((σ1, σ2), M) was drawn according to S(x, z, tag) or according to viewA(x, z, tag), one could simply take M, hardwire it into A, and invoke the distinguisher for the resulting stand-alone verifier for 〈Ptag, Vtag〉 (which we denote by V*tag). Thus, all we need to prove is the indistinguishability of real and simulated views of an arbitrary stand-alone verifier V*tag (while ignoring the messages M). We now proceed with the proof of the claim.
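The step of “ignoring the messages M” rests on a standard fact: appending an independent, uniformly chosen component to two distributions leaves their statistical distance unchanged. A toy numerical check of this fact (an illustration only, not part of the proof):

```python
from itertools import product

def tv_distance(P, Q):
    """Total variation (statistical) distance between two finite distributions,
    each given as a dict mapping outcomes to probabilities."""
    support = set(P) | set(Q)
    return sum(abs(P.get(s, 0) - Q.get(s, 0)) for s in support) / 2

# Two toy distributions over {0,1} at statistical distance 0.2.
X = {0: 0.6, 1: 0.4}
Y = {0: 0.4, 1: 0.6}

# Append an independent uniform "M" over four outcomes.
U = {m: 0.25 for m in range(4)}
XU = {(x, m): px * pm for (x, px), (m, pm) in product(X.items(), U.items())}
YU = {(y, m): py * pm for (y, py), (m, pm) in product(Y.items(), U.items())}

# The distance is unchanged by appending the uniform component.
assert abs(tv_distance(X, Y) - 0.2) < 1e-12
assert abs(tv_distance(XU, YU) - tv_distance(X, Y)) < 1e-12
```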

Consider a random variable (σ1, σ2) that is distributed according to the output of S(x, z, tag), and a random variable (π1, π2) that is distributed according to the verifier view of 〈Ptag, Vtag〉 in {viewA(x, z, tag)}x,z,tag (where π1, π2 are A’s respective views of Stages 1 and 2). We will show that both (σ1, σ2) and (π1, π2) are statistically close to the hybrid distribution (σ1, π2). This distribution is obtained by considering a hybrid simulator that generates σ1 exactly as S does, but uses the witness w for x ∈ L in order to produce π2.

Sub Claim 5.8 The distribution (π1, π2) is statistically close to (σ1, π2).

Proof: The claim follows from the (parallel) statistical hiding property of Com. Specifically, suppose that there exists a (possibly unbounded) D that distinguishes between the two distributions with probability ε. Consider a distinguisher D′ that has the witness w for x ∈ L and h = V*tag(x) hardwired, and acts as follows. Whenever D′ gets an input (c1, c2), it starts by generating r1 = V*tag(x, c1) and r2 = V*tag(x, c1, c2). It then emulates a Stage 2 interaction between V*tag(x, c1, c2) and the honest provers PsUA, where (x, 〈h, c^j_1, r^j_1〉, 〈h, c^j_2, r^j_2〉) is the common input for the jth interaction, and PsUA is using w as a witness for x ∈ L. Let π2 denote the resulting verifier view. D′ outputs whatever D outputs on input (c1, c2, π2).

Notice that if (c^j_1, c^j_2) = (Com(0^n), Com(0^n)) for all j ∈ [n], then the input fed to D is distributed according to (π1, π2), whereas if (c^j_1, c^j_2) = (Com(h(Π^j_1)), Com(h(Π^j_2))), then the input fed to D is distributed according to (σ1, π2). Thus, D′ has advantage ε in distinguishing between two tuples of n committed values. Hence, if ε is non-negligible we reach contradiction to the statistical hiding property of Com.


Sub Claim 5.9 The distribution (σ1, σ2) is statistically close to (σ1, π2).

Proof: The claim follows from the (parallel) statistical witness indistinguishability property of 〈PsWI, VsWI〉 and the (parallel) statistical hiding property of Com (both used in 〈PsUA, VsUA〉). Let σ2 = (σ2,1, σ2,2), where σ2,1 corresponds to a simulated view of Stage 1 of 〈PsUA, VsUA〉 and σ2,2 corresponds to a Stage 2 view of 〈PsUA, VsUA〉. Similarly, let π2 = (π2,1, π2,2) correspond to the real views of Stages 1 and 2 of 〈PsUA, VsUA〉. We will show that both (σ1, σ2) and (σ1, π2) are statistically close to the hybrid distribution (σ1, (σ2,1, π2,2)).

Suppose that there exists a (possibly unbounded) algorithm D that distinguishes between (σ1, σ2) and (σ1, (σ2,1, π2,2)) with probability ε. Then there must exist a Stage 1 view (c1, c2) = (c^1_1, . . . , c^n_1, c^1_2, . . . , c^n_2) for 〈Ptag, Vtag〉, and a Stage 1 view (β, δ) = (β1, . . . , βn, δ1, . . . , δn) for the sub-protocol 〈PsUA, VsUA〉, so that D has advantage ε in distinguishing between (σ1, σ2) and (σ1, π2) conditioned on (σ1, σ2,1) = ((c1, c2), (β, δ)).

Consider a Stage 2 execution of 〈PsUA, VsUA〉 with V*sWI = V*tag(x, c1, c2, β, δ, ·) as verifier. Then, a distinguisher D(c1, c2, β, δ, ·) (i.e., D with (c1, c2, β, δ) hardwired as part of its input) has advantage ε in distinguishing between an interaction of V*sWI with n honest PsWI provers that use accepting 〈PUA, VUA〉 transcripts (α, β, γ, δ), and an interaction of V*sUA with honest PsUA provers that use w as witness.13 Thus, if ε is non-negligible we reach contradiction to the (parallel) statistical witness indistinguishability of 〈PsWI, VsWI〉.

Suppose now that there exists a (possibly unbounded) algorithm D that distinguishes between (σ1, π2) and (σ1, (σ2,1, π2,2)) with probability ε. Consider a distinguisher D′ that has the witness w for x ∈ L and (h, c1, c2) hardwired, and acts as follows. Whenever D′ gets an input (β, δ), it starts by generating α = V*tag(x, c1, c2) and γ = V*tag(x, c1, c2, β). It then emulates a Stage 2 interaction between the VsWI verifiers V*tag(x, c1, c2, β, δ) and the honest provers PsWI, where (x, 〈h, c^j_1, r^j_1〉, 〈h, c^j_2, r^j_2〉, 〈α, β, γ, δ〉) is the common input for the jth interaction, and PsWI is using w as a witness for x ∈ L. Let π2,2 denote the resulting verifier view. D′ outputs whatever D outputs on input (c1, c2, β, δ, π2,2).

Notice that if (βj, δj) = (Com(0^n), Com(0^n)) for all j ∈ [n], then the input fed to D is distributed according to (σ1, (π2,1, π2,2)) = (σ1, π2), whereas if (βj, δj) = (Com(β′j), Com(δ′j)) for some β′j, δ′j for which (αj, β′j, γj, δ′j) is an accepting 〈PUA, VUA〉 transcript, then the input fed to D is distributed according to (σ1, (σ2,1, π2,2)). Thus, D′ has advantage ε in distinguishing between two tuples of n committed values. Hence, if ε is non-negligible we reach contradiction to the statistical hiding property of Com.

Combining Sub-claims 5.8 and 5.9 we conclude that the ensembles {SIM(x, z, tag)}x,z,tag and {viewA(x, z, tag)}x,z,tag are statistically close.

This completes the proof of Lemma 5.6.

Lemma 5.10 Let x ∈ L, z ∈ {0, 1}∗, tag ∈ {0, 1}m and let (view, w) denote the output of S(x, z, tag) (on input some random tape). Let x̃ be the right-execution statement appearing in view and let ˜tag denote the right-execution tag. Then, if the right-execution in view is accepting AND tagj ≠ ˜tag for all j ∈ [n], then RL(x̃, w) = 1.

Proof: We start by noting that since S outputs fail whenever the extraction by EXT fails, the claim trivially holds in the event that the extraction by EXT fails.

13 In accordance with the specification of 〈PsUA, VsUA〉, the transcripts (α, β, γ, δ) are generated using programs Π^j_i as witnesses, where Π^j_i is the program chosen by the simulator Sj.


Observe that the right-hand-side input-tag pair (x̃, ˜tag) used in EXT is exactly the same as the one generated by SIM. This follows for the following two reasons: (1) both EXT and SIM use the same random coins s in the simulation; (2) the input-tag pair (x̃, ˜tag) is determined before any external message is received in the right interaction. In particular, the pair (x̃, ˜tag) is independent of the messages M (which is the only potential difference between the executions of SIM and EXT).

Since EXT performs extraction whenever the properties described in the hypothesis hold, and since the extraction by EXT proceeds until a witness is extracted (or until the extraction fails, in which case we are already done), we infer that S always outputs a witness to the statement x̃ proved in the right interaction of the view it outputs.

We conclude the proof by bounding the running time of the combined simulator-extractor S.

Lemma 5.11 S runs in expected polynomial time.

Proof: We start by proving that the running time of SIM is polynomial. Recall that the SIM procedure invokes the simulator S with the adversary A′. Thus, we need to show that S runs in polynomial time. To this end, it will be sufficient to show that every individual sub-simulator Sj runs in polynomial time. We first do so assuming that tagj ≠ ˜tag. We then argue that, by construction of the adversary A′, this is sufficient to guarantee polynomial running time even in cases where tagj = ˜tag.

Claim 5.12 Suppose that tagj ≠ ˜tag. Then, Sj completes the simulation in polynomial time.

Proof: We start by arguing that, in each of the three cases specified in the simulation, the witness used by the simulator indeed satisfies the relation Rsim. A close inspection of the simulator’s actions in the first two cases reveals that the simulator indeed commits to a program Πi that on input y = (c^1_i, . . . , c^n_i) outputs the corresponding r^j_i. Namely,

• Π1(y) = Aj(x, c^1_1, . . . , c^n_1) = r^j_1.

• Π2(y) = Aj(x, c^1_1, . . . , c^n_1, r1, c^1_2, . . . , c^n_2) = r^j_2.

Since in both cases |y| = n·|c^j_i| = 2n^2 ≤ ℓ(n) − n ≤ |r^j_i| − n, it follows that Rsim is satisfied. As for the third case, observe that for both i ∈ {1, 2}, if Sj sets y = (c^1_i, . . . , c^n_i, ri) then

• Πi(y) = Aj(c^1_i, . . . , c^n_i, ri) = r^j_i.

Since tagj ≠ ˜tag we can use Fact 5.2 and infer that there exists i ∈ {1, 2} so that |ri| ≤ |r^j_i| − ℓ(n). This means that for every j ∈ [n] the simulator Sj will choose the i ∈ {1, 2} for which:

|y| = |c^1_i| + . . . + |c^n_i| + |ri|
    = 2n^2 + |ri|
    ≤ 2n^2 + |r^j_i| − ℓ(n)    (2)
    ≤ |r^j_i| − n    (3)

where Eq. (2) follows from |ri| ≤ |r^j_i| − ℓ(n) and Eq. (3) follows from the fact that ℓ(n) ≥ 2n^2 + n. Thus, Rsim can always be satisfied by Sj.

Since the programs Πi are of size poly(n) and satisfy Πi(y) = r^j_i in poly(n) time (because Aj does), the verification time of Rsim on the instance 〈h, c^j_i, r^j_i〉 is polynomial in n. By the perfect completeness and relative prover efficiency of 〈PsUA, VsUA〉, it then follows that the simulator is always able to make a verifier accept in polynomial time.
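The length bookkeeping above can be sanity-checked numerically. The sketch below assumes, as in the proof, that each commitment has length 2n, and it uses ℓ(n) = 2n^2 + 2n (the value required in Proposition 5.14); the concrete lengths fed in are hypothetical:

```python
def ell(n):
    # The paper's length-function requirement, taken with equality.
    return 2 * n**2 + 2 * n

def witness_length_ok(n, r_i_len, r_ji_len):
    """Bookkeeping of Claim 5.12 (third case): given Fact 5.2's guarantee
    |r_i| <= |r_i^j| - ell(n), the auxiliary string y = (c_i^1,...,c_i^n, r_i),
    with n commitments of length 2n each, satisfies |y| <= |r_i^j| - n,
    as required by R_sim."""
    assert r_i_len <= r_ji_len - ell(n)   # precondition from Fact 5.2
    y_len = 2 * n**2 + r_i_len            # n commitments of length 2n, plus r_i
    return y_len <= r_ji_len - n

# A hypothetical parameter setting consistent with Fact 5.2:
n = 8
assert ell(n) == 144
assert witness_length_ok(n, r_i_len=ell(n), r_ji_len=2 * ell(n))
```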


Remark 5.13 (Handling the case tagj = ˜tag) When invoked by SIM, the simulator S will output an accepting left view even if A chooses ˜tag so that tagj = ˜tag for some j ∈ [n].14 This is because in such a case the A′ whose view S needs to simulate ignores all right-hand-side messages, and feeds the messages M′ to A internally. In particular, no external messages will be contained in either Slot 1 or Slot 2 of 〈Ptag, Vtag〉. A look at the proof of Claim 5.12 reveals that in such cases the simulator can indeed always produce an accepting conversation (regardless of whether tagj = ˜tag or not).

It now remains to bound the expected running time of EXT. Recall that EXT uses the view sampled by SIM and proceeds to extract a witness only if the right interaction is accepting and tagj ≠ ˜tag for all j ∈ [n]. Using Claim 5.12, we know that the simulation internally invoked by the stand-alone prover P*˜tag will always terminate in polynomial time (since tagj ≠ ˜tag for all j ∈ [n] and A is a poly-time machine). We now argue that the extraction of the witness from P*˜tag conducted by EXT will terminate in expected polynomial time.

Let p denote the probability that A produces an accepting proof in the right execution in the simulation by SIM. Let p′ denote the probability that A produces an accepting proof in the right execution in the internal execution of P*˜tag (constructed in EXT). By the POK property of 〈Ptag, Vtag〉, the expected running time of the knowledge extractor is bounded by

poly(n)/p′.

Since the probability of invoking the extraction procedure is p, the expected number of steps used to extract a witness is

p · poly(n)/p′.

Now, in both SIM and EXT the left view is generated by S, and the right view is uniformly chosen. This in particular means that p = p′. It follows that the expected number of steps used to extract a witness is

p · poly(n)/p = poly(n).
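The cancellation p = p′ above is what keeps the expected work polynomial even when p is tiny. A seeded Monte Carlo sketch of this accounting (the geometric-retry extractor is a simplified model, not the actual knowledge extractor):

```python
import random

def expected_extraction_work(p, p_prime, trials=200000, seed=1):
    """With probability p the right session accepts and the extractor is
    invoked; the extractor then rewinds until it succeeds, each attempt
    succeeding with probability p_prime (so 1/p_prime attempts on average).
    Returns the average number of attempts per run."""
    rng = random.Random(seed)
    total_attempts = 0
    for _ in range(trials):
        if rng.random() < p:                # extraction invoked at all
            total_attempts += 1
            while rng.random() >= p_prime:  # rewind until success
                total_attempts += 1
    return total_attempts / trials

# When p = p_prime the average work stays around p * (1/p) = 1 attempt
# per run, no matter how small p is.
for p in (0.5, 0.1, 0.01):
    assert abs(expected_extraction_work(p, p) - 1.0) < 0.1
```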

This completes the proof of Lemma 5.11.

This completes the proof of “many-to-one” simulation extractability (Proposition 5.3).

5.3 “Full-Fledged” Simulation-Extractability

Let tag ∈ {0, 1}m, let x ∈ {0, 1}n, and let A be the corresponding MIM adversary. We consider a left interaction in which 〈Ptag, Vtag〉 is executed with common input x ∈ {0, 1}n, and a right interaction in which 〈P˜tag, V˜tag〉 is executed with common input x̃ ∈ {0, 1}n. The strings ˜tag and x̃ are chosen adaptively by the man-in-the-middle A. The witness used by the prover in the left interaction is denoted by w, and the auxiliary input used by the adversary is denoted by z.

Proposition 5.14 Let A be a MIM adversary as above, and suppose that ℓ(n) ≥ 2n^2 + 2n. Then, there exists a probabilistic expected polynomial-time machine S such that the following conditions hold:

1. The probability ensembles {S1(x, z, tag)}x,z,tag and {viewA(x, z, tag)}x,z,tag are statistically close over L, where S1(x, z, tag) denotes the first output of S(x, z, tag).

2. Let x ∈ L, z ∈ {0, 1}∗, tag ∈ {0, 1}t(|x|) and let (view, w) denote the output of S(x, z, tag) (on input some random tape). Let x̃ be the right-execution statement appearing in view and let ˜tag denote the right-execution tag. Then, if the right-execution in view is accepting AND tag ≠ ˜tag, then RL(x̃, w) = 1.

14 Note that this is not necessarily true in general. For example, when tagj = ˜tag for some j ∈ [n], and the messages that A sees are forwarded from an external source (e.g., when S is used by EXT in order to construct the stand-alone prover P*˜tag), we cannot guarantee anything about the running time of S. Indeed, the definition of simulation-extractability does not require EXT to output a witness when tagj = ˜tag for some j ∈ [n].

Proof: The construction of the simulator-extractor S proceeds in two phases and makes use of the many-to-one simulator-extractor guaranteed by Lemma 5.3. In the first phase, the adversary A is used in order to construct a many-to-one adversary A′ with the protocol 〈Ptag, Vtag〉 on its left and with one of the sub-protocols 〈P˜tagi, V˜tagi〉 on its right. In the second phase, the many-to-one simulation-extractability property of 〈Ptag, Vtag〉 is used in order to generate a view for A along with a witness for the statement x̃ appearing in the simulation.

The many-to-one adversary. On input (x, z, tag), and given a man-in-the-middle adversary A, the many-to-one adversary A′ acts as follows:

Internal messages: For all j ∈ [n], pick random messages (hj, r^j_1, r^j_2, u^j) for the right interaction.

Right interaction: The statement x̃ proved is the same as the one chosen by A. If there exists i ∈ [n] so that tagj ≠ ˜tagi for all j ∈ [n], forward A’s messages in the ith right interaction to an external V˜tagi and send back his answers to A. Use the messages {(hj, r^j_1, r^j_2, u^j)}j≠i to internally emulate all other right interactions {〈P˜tagj, V˜tagj〉}j≠i.

Otherwise (i.e., if for all i ∈ [n] there exists j ∈ [n] such that tagj = ˜tagi), pick an arbitrary i ∈ [n], forward A’s messages in the ith right interaction to an external V˜tagi and send back his answers to A. Use the messages {(hj, r^j_1, r^j_2, u^j)}j≠i to internally emulate all other right interactions {〈P˜tagj, V˜tagj〉}j≠i.

Left interaction: As induced by the scheduling of messages by A, forward the messages sent by A in the left interaction to an external prover Ptag, and send back his answers to A.

The many-to-one adversary A′ is depicted in Fig. 13. Messages from the circled sub-protocols are the ones that get forwarded externally.

The simulator-extractor. By Lemma 5.3 there exists a simulator S′ that produces a view that is statistically close to the real view of A′, and outputs a witness provided that the right interaction is accepting and ˜tag is different from all the left-side tags tag1, . . . , tagn.

The simulator-extractor S(x, z, tag) for A invokes S′(x, z, tag). If S′ outputs fail then so does S. Otherwise, let (view′, w′) denote the output of S′. S now outputs (view, w′), where view is the view view′ (output by S′) augmented with the right-hand-side messages {(hj, r^j_1, r^j_2, u^j)}j≠i that were used in the internal emulation of A′.

Claim 5.15 S runs in expected polynomial time.

Proof: Notice that whenever A is polynomial time then so is A′. By Lemma 5.11, this implies that S′ runs in expected polynomial time, and hence so does S.


Figure 13: The “many-to-one” MIM adversary A′. The messages c^j_1, r^j_1, c^j_2, r^j_2 of the circled ith right sub-protocol are forwarded to the external verifier; all other right interactions are emulated internally.

Claim 5.16 {S1(x, z, tag)}x∈L,z∈{0,1}∗,tag∈{0,1}m and {viewA(x, z, tag)}x∈L,z∈{0,1}∗,tag∈{0,1}m are statistically close over L, where S1(x, z, tag) denotes the first output of S(x, z, tag).

Proof: Given a distinguisher D between {S1(x, z, tag)}x,z,tag and {viewA(x, z, tag)}x,z,tag, we construct a distinguisher D′ between {S′1(x, z, tag)}x,z,tag and {viewA′(x, z, tag)}x,z,tag. This will be in contradiction to Lemma 5.6. The distinguisher D′ has the messages {(hj, r^j_1, r^j_2, u^j)}j≠i hardwired.15 Given a joint view 〈(σ′1, σ′2), M′〉 of a left 〈Ptag, Vtag〉 interaction and a 〈P˜tagi, V˜tagi〉 right interaction, D′ augments the view with the right-interaction messages {(hj, r^j_1, r^j_2, u^j)}j≠i. The distinguisher D′ feeds the augmented view to D and outputs whatever D outputs.

Notice that if 〈(σ′1, σ′2), M′〉 is drawn according to {S′1(x, z, tag)}x,z,tag then the augmented view is distributed according to {SIM(x, z, tag)}x,z,tag. On the other hand, if 〈(σ′1, σ′2), M′〉 is drawn according to {viewA′(x, z, tag)}x,z,tag then the augmented view is distributed according to {viewA(x, z, tag)}x,z,tag. Thus D′ has exactly the same advantage as D.

Claim 5.17 Let x ∈ L, z ∈ {0, 1}∗, tag ∈ {0, 1}m and let (view, w) denote the output of S(x, z, tag) (on input some random tape). Let x̃ be the right-execution statement appearing in view and let ˜tag denote the right-execution tag. Then, if the right-execution in view is accepting AND tag ≠ ˜tag, then RL(x̃, w) = 1.

Proof: Recall that the right interaction in the view output by S is accepting if and only if the right interaction in the view output by S′ is accepting. In addition, the statement proved in the right interaction output by S is identical to the one proved in the right interaction of S′.

Observe that if tag ≠ ˜tag, there must exist i ∈ [n] for which (i, ˜tagi) ≠ (j, tagj) for all j ∈ [n] (just take the i for which ˜tagi ≠ tagi). Recall that by construction of the protocol 〈P˜tag, V˜tag〉, for every i ∈ [n], the value tagi is defined as (i, tagi). Thus, whenever tag ≠ ˜tag there exists a ˜tagi = (i, ˜tagi) that is different from tagj = (j, tagj) for all j ∈ [n]. In particular, the tag used by A′ in the right interaction will satisfy tagj ≠ ˜tagi for all j ∈ [n]. By Lemma 5.10, we then have that if the right interaction in the view output by S is accepting (and hence also in S′), S′ will output a witness for x̃. The proof is complete, since S outputs whatever witness S′ outputs.
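The pair-encoding argument above is easy to check mechanically. In this sketch, sub-tags are encoded as (position, value) pairs, as in the construction of 〈Ptag, Vtag〉:

```python
def right_tags(tag):
    """Encode the i-th sub-tag as the pair (i, tag_i)."""
    return [(i, t) for i, t in enumerate(tag)]

def exists_fresh_subtag(tag, tilde_tag):
    """Claim 5.17's observation: if tag != tilde_tag, some encoded sub-tag
    (i, tilde_tag_i) differs from (j, tag_j) for every j."""
    left = set(right_tags(tag))
    return any(sub not in left for sub in right_tags(tilde_tag))

assert exists_fresh_subtag([1, 2, 3], [1, 5, 3])      # tags differ in position 1
assert not exists_fresh_subtag([1, 2, 3], [1, 2, 3])  # identical tags
```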

This completes the proof of Proposition 5.14.

15 One could think of D′ as a family of distinguishers indexed by {(hj, r^j_1, r^j_2, u^j)}j≠i, from which a member is drawn at random.


6 Non-malleable Commitments

In this section we present two simple constructions of non-malleable commitments. The approach we follow is different from the approach used in [16]. Instead of viewing non-malleable commitments as a tool for constructing non-malleable zero-knowledge protocols, we reverse the roles and use a non-malleable zero-knowledge protocol (in particular, any simulation-extractable protocol will do) in order to construct a non-malleable commitment scheme. Our approach is also different from the approach taken by [2].

6.1 A statistically-binding scheme (NM with respect to commitment)

We start by presenting a construction of a statistically binding scheme which is non-malleable with respect to commitment. Our construction relies on the following two building blocks:

• a family of (possibly malleable) non-interactive statistically binding commitment schemes,

• a simulation-extractable zero-knowledge argument.

The construction is conceptually very simple: the committer commits to a string using the statistically binding commitment scheme, and then proves knowledge of the string committed to using a simulation-extractable argument.

We remark that the general idea behind this protocol is not new. The idea of enhancing a commitment scheme with a proof of knowledge protocol was explored already in [16]. However, as pointed out in [16], this approach cannot work with just any proof of knowledge protocol, as the proof of knowledge protocol itself might be malleable. (As mentioned above, Dolev et al. therefore rely on a quite different approach to construct non-malleable commitments [16].) What we show here is that this approach in fact works if the proof of knowledge protocol is simulation-extractable.

Let {Comr}r∈{0,1}∗ be a family of non-interactive statistically binding commitment schemes (e.g., Naor’s commitment [34]) and let 〈Ptag, Vtag〉 be a simulation-extractable protocol. Consider the protocol in Figure 14.16
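For intuition, the message pattern of Naor’s statistically binding commitment [34] can be sketched as follows. The SHA-256-based generator below is an insecure stand-in for a real pseudorandom generator, and the scheme is shown for a single bit only; this is an illustration, not a secure implementation:

```python
import hashlib
import secrets

N = 16  # seed length in bytes; the PRG output is 3x longer, as in Naor's scheme

def prg(seed: bytes) -> bytes:
    """Stand-in PRG stretching N bytes to 3N bytes (illustration only)."""
    out = b""
    counter = 0
    while len(out) < 3 * N:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[: 3 * N]

def commit(r: bytes, bit: int):
    """Commit to a bit relative to the receiver's random string r (|r| = 3N):
    send G(s) for bit 0, and G(s) XOR r for bit 1."""
    s = secrets.token_bytes(N)
    g = prg(s)
    c = g if bit == 0 else bytes(a ^ b for a, b in zip(g, r))
    return c, s  # c is sent; s is kept for the reveal phase

def verify(r: bytes, c: bytes, bit: int, s: bytes) -> bool:
    g = prg(s)
    expected = g if bit == 0 else bytes(a ^ b for a, b in zip(g, r))
    return c == expected

r = secrets.token_bytes(3 * N)  # receiver's first message
c, s = commit(r, 1)
assert verify(r, c, 1, s)
assert not verify(r, c, 0, s)
```

Binding is statistical because, for a random r, no single string c can be opened both as G(s) and as G(s′) XOR r except with negligible probability over r.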

We start by sketching why the scheme is non-malleable. Note that the commit phase of the scheme consists only of a message specifying an NP-statement (i.e., the “statement” c = Comr(v; s)), and an accompanying “proof” of this statement. Thus, intuitively, an adversary that is able to successfully maul a commitment must be able to maul both the commitment Com and the accompanying proof 〈Ptag, Vtag〉. The former might be easy, as Com might be malleable. However, as 〈Ptag, Vtag〉 is simulation-extractable (and thus non-malleable), the latter will be impossible.

At first sight, it seems that it would be sufficient to simply assume that 〈Ptag, Vtag〉 is non-malleable in order to conclude non-malleability of 〈C, R〉. Note, however, that the definition of non-malleable proofs only considers a setting where the statement proven by the man-in-the-middle adversary is fixed ahead of the interaction. In our scenario, we instead require security also w.r.t. an adaptively chosen statement (as the statement proven is related to the commitment chosen by the adversary). This gap is addressed in the definition of simulation-extractability; there the man-in-the-middle adversary is also allowed to adaptively choose the statements it will attempt to prove.

16 It is interesting to note that the protocol 〈C, R〉 is statistically binding even though the protocol 〈Ptag, Vtag〉 is “only” computationally sound. At first sight this is somewhat counter-intuitive, since the statistical binding property is typically associated with all-powerful committers.


Protocol 〈C,R〉

Security Parameter: 1n.

String to be committed to: v ∈ {0, 1}n.

Commit Phase:

R→ C: Pick uniformly r ∈ {0, 1}n.

C → R: Pick s ∈ {0, 1}n and send c = Comr(v; s).

C ↔ R: Let tag = (r, c). Prove using 〈Ptag, Vtag〉 that there exist v, s ∈ {0, 1}n so that c = Comr(v; s). Formally, prove the statement (r, c) with respect to the witness relation

RL = {((r, c), (v, s)) : c = Comr(v; s)}

R: Verify that 〈Ptag, Vtag〉 is accepting.

Reveal Phase:

C → R: Send v, s.

R: Verify that c = Comr(v; s).

Figure 14: A statistically binding non-malleable string commitment protocol - 〈C,R〉
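The message flow of the commit and reveal phases can be sketched as follows. Here `Com` is an illustrative hash-based stand-in (not the statistically binding scheme of the construction), and `se_prove`/`se_verify` are placeholders for the simulation-extractable argument 〈Ptag, Vtag〉, whose real implementation is far beyond a few lines of code:

```python
import hashlib
import secrets

def Com(r: bytes, v: bytes, s: bytes) -> bytes:
    """Illustrative stand-in for the keyed commitment Com_r(v; s)."""
    return hashlib.sha256(r + v + s).digest()

def se_prove(tag, statement, witness):
    # Placeholder for <P_tag, V_tag>: here the "proof" is just the witness
    # itself, which is NOT zero-knowledge; it only marks where the
    # simulation-extractable argument would run.
    return witness

def se_verify(tag, statement, proof):
    r, c = statement
    v, s = proof
    return Com(r, v, s) == c

def commit_phase(v: bytes):
    r = secrets.token_bytes(32)            # R -> C: pick r uniformly
    s = secrets.token_bytes(32)
    c = Com(r, v, s)                       # C -> R: c = Com_r(v; s)
    tag = (r, c)                           # the tag is set to (r, c)
    proof = se_prove(tag, (r, c), (v, s))  # C <-> R: prove knowledge of (v, s)
    assert se_verify(tag, (r, c), proof)   # R: verify the argument
    return r, c, (v, s)

def reveal_phase(r, c, opening):
    v, s = opening
    return Com(r, v, s) == c               # R: verify c = Com_r(v; s)

r, c, opening = commit_phase(b"some value")
assert reveal_phase(r, c, opening)
```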

Interestingly, to formalize the above argument, we will be required to rely on the statistical indistinguishability property of the definition of simulation-extractability. Intuitively, the reason for this is the following. For any given man-in-the-middle adversary we are required to construct a stand-alone adversary that succeeds in committing to “indistinguishable” values. In proving that this stand-alone adversary succeeds in this task, we will rely on the simulator-extractor for the man-in-the-middle adversary; however, to do this, we need to make sure that the simulator-extractor indeed commits to indistinguishable values. Note that it is not sufficient that the simulator-extractor simply proves a statement that is indistinguishable from the statement proved by the man-in-the-middle adversary, as the value committed to is not efficiently computable from the statement. However, the value committed to is computable (although not efficiently) from the statement. Thus, by relying on the statistical indistinguishability property of the simulator-extractor, we can make sure that the value committed to by the simulator-extractor is also indistinguishable from that committed to by the man-in-the-middle adversary.17

Theorem 6.1 (nmC with respect to commitment) Suppose that {Comr}r∈{0,1}* is a family of non-interactive statistically binding commitment schemes, and that 〈Ptag, Vtag〉 is simulation extractable. Then, 〈C,R〉 is a statistically binding non-malleable commitment scheme with respect to commitment.

Proof: We need to prove that the scheme satisfies the following three properties: statistical binding, computational hiding and non-malleability with respect to commitment.

Statistical Binding: The binding property of the scheme follows directly from the binding property of the underlying commitment scheme Com: breaking the binding property of nmCtag requires first breaking the binding of Com. More formally, given any adversary A that breaks the binding property of nmCtag, we construct an adversary A′ which, on input a message r, feeds r to A and then internally honestly emulates all the verification messages in the proof part of nmCtag. It follows directly that the success probability of A′ is the same as that of A.

^17 Note that in the case of non-malleability with respect to opening, this complication does not arise.

Computational Hiding: The hiding property follows from the hiding property of Com combined with the ZK property of 〈Ptag, Vtag〉. More formally, recall that the notion of simulation-extractability implies ZK (see Proposition 3.6) and that ZK implies strong witness indistinguishability^18 [20]. Since the scheme Com produces indistinguishable commitments, it follows directly from the definition of strong witness indistinguishability that the protocol 〈C,R〉 also produces indistinguishable commitments.

Non-malleability: Consider a man-in-the-middle adversary A. We assume without loss of generality that A is deterministic (this is w.l.o.g. since A can obtain its "best" random tape as auxiliary input). We show the existence of a probabilistic polynomial-time stand-alone adversary S and a negligible function ν : N → N, such that for every irreflexive polynomial-time computable relation R ⊆ {0,1}^n × {0,1}^n, every v ∈ {0,1}^n, and every z ∈ {0,1}^*, it holds that:

Pr[mim^A_com(R, v, z) = 1] < Pr[sta^S_com(R, v, z) = 1] + ν(n)        (4)

Description of the stand-alone adversary. The stand-alone adversary S uses A as a black-box and emulates the left and right interactions for A as follows: the left interaction is emulated internally, while the right interaction is forwarded externally. More precisely, S proceeds as follows on input z. S incorporates A(z) and internally emulates the left interaction for A by simply honestly committing to the string 0^n; i.e., to emulate the left interaction, S executes the algorithm C on input 0^n. Messages from the right interaction are instead forwarded externally. Note that S is thus a stand-alone adversary that expects to act as a committer for the scheme 〈C,R〉.
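The message routing performed by S can be sketched as follows. All interfaces here (`adversary`, `honest_committer`, `external_right`) are hypothetical stand-ins for the black-box A, the internally emulated honest committer C(0^n), and the external receiver R; the paper defines S only abstractly.

```python
# Hedged sketch of the stand-alone adversary S: the left interaction with
# the man-in-the-middle A is emulated internally (an honest commitment to
# 0^n), while right-interaction messages are relayed to the external
# receiver R.  `adversary` is a callable mapping (channel, message) to a
# list of (channel, message) responses.
def run_stand_alone(adversary, honest_committer, external_right):
    """Route messages between A, the internal committer, and the
    external receiver until the adversary produces no more output."""
    pending = [("right", external_right.first_message())]
    transcript = []
    while pending:
        channel, msg = pending.pop(0)
        for out_channel, out_msg in adversary(channel, msg):
            transcript.append((out_channel, out_msg))
            if out_channel == "left":
                # answered internally by the honest committer for 0^n
                reply = honest_committer(out_msg)
                if reply is not None:
                    pending.append(("left", reply))
            else:
                # forwarded externally to the real receiver R
                reply = external_right.deliver(out_msg)
                if reply is not None:
                    pending.append(("right", reply))
    return transcript
```

The point of the sketch is that S itself performs no simulation: it is a plain message router, which is why S is a valid stand-alone committer.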

Analysis of the stand-alone adversary. We proceed to show that equation (4) holds. Suppose, for contradiction, that this is not the case. That is, there exist an irreflexive polynomial-time relation R and a polynomial p(n) such that for infinitely many n, there exist strings v ∈ {0,1}^n, z ∈ {0,1}^* such that

Pr[mim^A_com(R, v, z) = 1] − Pr[sta^S_com(R, v, z) = 1] ≥ 1/p(n)

Fix generic n, v, z for which the above holds. We show how this contradicts the simulation-extractability property of 〈Ptag, Vtag〉. On a high level, our proof consists of the following steps:

1. We first note that since the commit phase of 〈C,R〉 "essentially" only consists of a statement (r, c) (i.e., the commitment) and a proof of the "validity" of (r, c), A can be interpreted as a simulation-extractability man-in-the-middle adversary A′ for 〈Ptag, Vtag〉.

2. It follows from the simulation-extractability property of 〈Ptag, Vtag〉 that there exists a combined simulator-extractor S for A′ that outputs a view that is statistically close to that of A′, while at the same time outputting a witness to every accepting right proof which uses a right tag ˜tag different from the tag tag of the left interaction, i.e., whenever ˜tag ≠ tag.

^18 Intuitively, the notion of strong witness indistinguishability requires that proofs of indistinguishable statements are indistinguishable. This is, in fact, exactly the property we need in order to prove that the commitment scheme is computationally hiding.

3. Since the view output by the simulator-extractor S is statistically close to the view of A′ in the real interaction, it follows that the value committed to in that view is also statistically close to the value committed to by A′. (Note that computational indistinguishability would not have been enough to argue the indistinguishability of these values, since they are not efficiently computable from the view.)

4. It also follows that the simulator-extractor S will output the witness to each accepting right execution such that ˜tag ≠ tag. We conclude that S additionally outputs the value committed to in the right execution (except possibly when the value committed to in the right interaction is the same as that committed to in the left).

5. We finally note that if R (which is irreflexive) "distinguishes" between the values committed to by A and by S, then R also "distinguishes" the second output of S (which consists of the committed values) when run on input a commitment (using Com) to v, and the second output of S when run on input a commitment to 0^n. But this contradicts the hiding property of Com.

We proceed to a formal proof. One particular complication that arises with the above proof sketch is that in the construction of 〈C,R〉 we are relying on a family of commitment schemes {Comr}r∈{0,1}* and not a single non-interactive commitment scheme. Thus, strictly speaking, A is not a man-in-the-middle adversary for the interactive proof 〈Ptag, Vtag〉. However, the only difference is that A additionally expects to receive a message r̃ on the right, and also to send a message r on the left. To get around this problem we rely on the non-uniform computational hiding property of 〈C,R〉.

Note that since in both experiments mim and sta the right execution is identically generated, there must exist some fixed message r̃ such that, conditioned on the event that the first message sent in the right execution is r̃, the success probability in mim^A_com(R, v, z) is 1/p(n) higher than in sta^S_com(R, v, z). In fact, by the statistical binding property of Com, it follows that there must exist some message r̃ such that Comr̃ is perfectly binding and additionally, conditioned on the event that the first message sent in the right execution is r̃, the success probability in mim^A_com(R, v, z) is 1/(2p(n)) higher than in sta^S_com(R, v, z).

Given this message r̃, we must now consider two cases:

1. Either A (which by assumption is deterministic) sends its first message r in the left interaction directly after receiving r̃ (see Figure 15),

2. or A first sends its first message c̃ in the right interaction.

We first show that in the second case, A can be used to break the non-uniform hiding property of 〈C,R〉. Intuitively, this follows from the fact that the value committed to by A on the right is "essentially" determined by the message c̃ sent by A before it has received a single message on the left; it is not "fully" determined by c̃, as A can decide whether it fails in the interactive proof, and thus potentially change the value determined by c̃ to ⊥. However, in this case, A can be used to violate the hiding property of 〈C,R〉 (as the probability of failing the proof must depend on the value it receives a commitment to).

More formally, let vc̃ denote the value committed to in c̃ (using Com). It then holds that the value committed to by A in its right interaction (of 〈C,R〉) will be vc̃ if A succeeds in the


C                         A                         R

                              ←−−−−−−− r̃ −−−−−−−
   ←−−−−−−− r −−−−−−−
   −−−−−−−− c −−−−−−→
                              −−−−−−− c̃ −−−−−−→

Figure 15: An interleaved scheduling of commitments.

proof following the message c̃, and ⊥ otherwise. By our assumption that the success probability in mim^A_com(R, v, z) is 1/(2p(n)) higher than in sta^S_com(R, v, z), conditioned on the event that the first message sent in the right execution is r̃, it thus holds that A "aborts" the proof in the left interaction with different probability in experiments mim^A_com(R, v, z) and sta^S_com(R, v, z), conditioned on the first message in the right interaction being r̃. However, since the only difference between those experiments is that A receives a commitment to v in mim and a commitment to 0^n in sta, we conclude that A contradicts the (non-uniform) computational hiding property of Com. Formally, we construct a non-uniform distinguisher D for the commitment scheme 〈C,R〉: D incorporates A, z, r̃, v and vc̃ and emulates the right execution for A by honestly running the verification procedure of 〈Ptag, Vtag〉, and forwards messages in the left execution externally. D finally outputs R(v, vc̃) if the proof was accepting, and 0 otherwise. The claim follows from the fact that D perfectly emulates mim^A_com(R, v, z) when receiving a commitment to v, and perfectly emulates sta^S_com(R, v, z) when receiving a commitment to 0^n.

We proceed to consider the first (and harder) case depicted in Figure 15, i.e., when A sends its first left message r directly after receiving the message r̃. In this case, we instead directly use A to contradict the hiding property of Com. Towards this goal, we proceed in three steps:

1. We first define a simulation-extractability adversary A′.

2. We next show that A′ can be used to violate the non-malleability property of 〈C,R〉.

3. In the final step, we show how to use the simulator-extractor S for A′ to violate the hiding property of Com.

Step 1: Defining a simulation-extractability adversary A′. We define a simulation-extractability adversary A′ for 〈Ptag, Vtag〉. On input x, tag, z′ = (z, r̃), A′ internally incorporates A(z) and emulates the left and right interactions for A as follows.

1. A′ starts by feeding A the message r̃ as part of its right execution. All remaining messages in the right execution are forwarded externally.

2. All messages in A's left interaction are forwarded externally as part of A′'s left interaction, except for the first message r.

Step 2: Show that A′ violates non-malleability of 〈C,R〉. Towards the goal of showing that A violates non-malleability of 〈C,R〉, we define the hybrid experiment hyb1(v′) (relying on the definitions of v, z, r and r̃):

1. Let s be a uniformly random string and let c = Comr(v′; s).


2. Let x = (r, c), tag = (r, c), z′ = (z, r̃). Emulate an execution for A′(x, tag, z′) by honestly providing a proof of x (using tag tag and the witness (v′, s)) as part of its left interaction, and honestly verifying the right interaction.

3. Given the view of A′ in the above emulation, reconstruct the view view of A in the emulation by A′. If the commitment produced by A in the right execution in view is valid, let ṽ denote the value committed to; recall that by our assumption on r̃, this value is uniquely defined (although it is not efficiently computable). If, on the other hand, the commitment produced by A is invalid, let ṽ = ⊥.

4. Finally, if ṽ = ⊥, output 0. Otherwise output R(v, ṽ).

Note that hyb1(v) is not efficiently samplable, since the third step is not efficient. However, except for that step, every other operation in hyb1 is efficient. (This will be useful to us at a later stage.)

We have the following claim.

Claim 6.2

Pr[hyb1(v) = 1] − Pr[hyb1(0^n) = 1] ≥ 1/(2p(n))

Proof: Note that by the construction of A′ and hyb1 it directly follows that:

1. The view of A in hyb1(v) is identically distributed to the view of A in mim^A_com(R, v, z), conditioned on the event that the first message in the right execution is r̃.

2. The view of A in hyb1(0^n) is identically distributed to the view of A in sta^S_com(R, v, z), conditioned on the event that the first message in the right execution is r̃.

Since the output of the experiments hyb, mim, sta is determined by applying the same fixed function (involving R and v) to the view of A in those experiments, the claim follows.

Step 3: Show that the simulator for A′ violates the hiding property of Com. We next use the simulator-extractor S for A′ to construct an efficiently computable experiment that is statistically close to hyb1.

Towards this goal, we first consider the following experiment hyb2, which is still not efficient. hyb2(v′) proceeds just as hyb1(v′), except that instead of emulating the left and right interactions for A′, hyb2 runs the combined simulator-extractor S for A′ to generate the view of A′.

Claim 6.3 There exists a negligible function ν′(n) such that for any string v′ ∈ {0,1}^n,

|Pr[hyb1(v′) = 1] − Pr[hyb2(v′) = 1]| ≤ ν′(n)

Proof: It follows directly from the statistical indistinguishability property of S that the view of A generated in hyb1 is statistically close to the view of A generated in hyb2. The claim follows by (again) observing that the success of both hyb1 and hyb2 is defined by applying the same (deterministic) function to the view of A.

Remark 6.4 Note that the proof of Claim 6.3 inherently relies on the statistical indistinguishability property of S. Indeed, if the simulation had only been computationally indistinguishable, we would not have been able to argue indistinguishability of the outputs of hyb1 and hyb2. This follows from the fact that success in experiments hyb1 and hyb2 (which depends on the actual committed values in the view of A) is not efficiently computable from the view alone.


We next define the final experiment hyb3(v′), which proceeds just as hyb2(v′) with the following modification:

• Instead of letting ṽ be set to the actual value committed to in the view view of A, ṽ is computed as follows. Recall that the combined simulator-extractor S outputs both a view and a witness to each accepting right interaction. If the right execution in view is accepting^19 and if ˜tag ≠ tag, where ˜tag is the tag used in the right interaction in view, simply set ṽ to be consistent with the witness output by S (i.e., if S outputs the witness (v′, s′), let ṽ = v′). Otherwise (i.e., if ˜tag = tag, or if the right execution was rejecting), let ṽ = ⊥.

Note that in contrast to hyb2, hyb3 is efficiently computable. Furthermore, it holds that:
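The difference between how hyb1/hyb2 and hyb3 derive the right-hand committed value can be sketched as follows; `view`, `witness` and `brute_force_open` are hypothetical stand-ins for the paper's objects (in particular, brute-force opening is exactly the step that makes hyb1 and hyb2 inefficient).

```python
# Hedged sketch: how the hybrid experiments obtain the value committed
# to on the right.  None of these names appear in the paper.
def value_hyb2(view, brute_force_open):
    """hyb1/hyb2: the value is whatever is committed to in the view;
    recovering it requires (inefficient) brute-force opening of the
    right commitment."""
    if not view.right_accepting:
        return None                      # corresponds to ⊥
    return brute_force_open(view.right_commitment)

def value_hyb3(view, witness, left_tag):
    """hyb3: the value is read off the extractor's witness, efficiently;
    it is set to ⊥ when the tags coincide or the right proof was
    rejected."""
    if not view.right_accepting or view.right_tag == left_tag:
        return None                      # corresponds to ⊥
    v, s = witness                       # witness (v', s') for the right statement
    return v
```

Claim 6.5 below is the statement that, because the right commitment is perfectly binding for the fixed first message, these two ways of computing the value agree.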

Claim 6.5 For any string v′ ∈ {0,1}^n,

Pr[hyb2(v′) = 1] = Pr[hyb3(v′) = 1]

Proof: Recall that the view of A in hyb2 and hyb3 is identical; the only difference between the experiments is how the final output is computed. It holds by the definition of the simulator-extractor S that S always outputs the witness to the statement proved by A′ if the right interaction is accepting and if ˜tag ≠ tag. Thus, whenever view contains an accepting right-execution proof such that ˜tag ≠ tag, it follows by our assumption that Comr̃ is perfectly binding that the outputs of hyb2 and hyb3 are identical. Furthermore, in case the right-execution proof is rejecting, it holds that ṽ = ⊥ in both hyb2 and hyb3, which again means the output in both experiments is identical. Finally, consider the case when the right execution is accepting but ˜tag = tag. By definition, hyb3 outputs 0. Now, recall that tag = (r, c) and ˜tag = (r̃, c̃); in other words, if ˜tag = tag, then A fully copied the initial commitment using Comr. Since R is irreflexive and Comr̃ is perfectly binding, it follows that hyb2 also outputs 0. We conclude that the outputs of hyb2 and hyb3 are identically distributed.

By combining the above claims we obtain that there exists some polynomial p′(n) such that

Pr[hyb3(v) = 1] − Pr[hyb3(0^n) = 1] ≥ 1/p′(n)
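Spelling out how the claims chain together (a routine calculation; here one may take, say, p′(n) = 4p(n), since ν′ is negligible and the bound is only needed for the infinitely many n fixed above):

```latex
\begin{align*}
\Pr[\mathrm{hyb}_3(v)=1]-\Pr[\mathrm{hyb}_3(0^n)=1]
  &= \Pr[\mathrm{hyb}_2(v)=1]-\Pr[\mathrm{hyb}_2(0^n)=1]
     && \text{(Claim 6.5, applied twice)}\\
  &\ge \Pr[\mathrm{hyb}_1(v)=1]-\Pr[\mathrm{hyb}_1(0^n)=1]-2\nu'(n)
     && \text{(Claim 6.3, applied twice)}\\
  &\ge \frac{1}{2p(n)}-2\nu'(n)
     && \text{(Claim 6.2)}\\
  &\ge \frac{1}{4p(n)} \;=\; \frac{1}{p'(n)}.
\end{align*}
```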

However, since hyb3 is efficiently samplable, we conclude that this contradicts the (non-uniform) hiding property of Comr.

More formally, define an additional hybrid experiment hyb4(c′) that proceeds as follows on input a commitment c′ using Comr: hyb4 performs the same operations as hyb3, except that instead of generating the commitment c, it simply sets c = c′. It follows directly from the construction of hyb4 that hyb4(c′) is identically distributed to hyb3(0^n) when c′ is a (random) commitment to 0^n (using Comr), and is identically distributed to hyb3(v) when c′ is a commitment to v. We conclude that hyb4 distinguishes commitments (using Comr) to 0^n and v.

Remark 6.6 (Black-box vs. Non-Black-box Simulation) Note that the stand-alone committer S constructed in the above proof only uses black-box access to the adversary A, even if the simulation-extractability property of 〈Ptag, Vtag〉 has been proven using a non-black-box simulation. Thus, in essence, the simulation of our non-malleable commitment is always black-box. However, the analysis showing the correctness of the simulator relies on non-black-box techniques whenever the simulator-extractor for 〈Ptag, Vtag〉 is constructed using non-black-box techniques.

^19 Note that since view is a joint view of A and the honest receiver R in the right execution, one can efficiently determine whether R indeed accepted the commitment.


Since families of non-interactive statistically binding commitment schemes can be based on collision-resistant hash functions (in fact, one-way functions are enough [34, 28]), we get the following corollary:

Corollary 6.7 (Statistically binding non-malleable commitment) Suppose that there exists a family of collision-resistant hash functions. Then, there exists a constant-round statistically binding commitment scheme that is non-malleable with respect to commitment.

6.2 A statistically-hiding scheme (NM with respect to opening)

We proceed to the construction of a statistically hiding commitment scheme 〈C,R〉 which is non-malleable with respect to opening. Our construction relies on a quite straightforward combination of a family of non-interactive statistically hiding commitments and a simulation-extractable argument.^20 Let {Comr}r∈{0,1}* be a family of non-interactive statistically hiding commitment schemes (e.g., [12]) and let 〈Ptag, Vtag〉 be a simulation-extractable protocol. The protocol is depicted in Figure 16.

Protocol 〈C,R〉

Security Parameter: 1^n.

String to be committed to: v ∈ {0,1}^n.

Commit Phase:

R → C: Pick uniformly r ∈ {0,1}^n.

C → R: Pick s ∈ {0,1}^n and send c = Comr(v; s).

Reveal Phase:

C → R: Send v.

C ↔ R: Let tag = (r, c, v). Prove using 〈Ptag, Vtag〉 that there exists s ∈ {0,1}^n such that c = Comr(v; s). Formally, prove the statement (r, c, v) with respect to the witness relation

RL = {((r, c, v), s) : c = Comr(v; s)}

R: Verify that 〈Ptag, Vtag〉 is accepting.

Figure 16: A statistically hiding non-malleable string commitment protocol 〈C,R〉.
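Compared with the statistically binding scheme of Figure 14, the argument has moved into the reveal phase and the tag now also includes the revealed value v. A toy Python sketch of this variant follows (the hash-based `com` again stands in for Comr, which here would really need to be statistically hiding, and the argument is replaced by a direct check of the witness relation; the names are ours).

```python
import hashlib
import os

def com(r: bytes, v: bytes, s: bytes) -> bytes:
    # toy stand-in; NOT statistically hiding, unlike the schemes of [12]
    return hashlib.sha256(r + v + s).digest()

def commit_phase(v: bytes, n: int = 32):
    """Commit phase is now proof-free: R -> C: r, then C -> R: c."""
    r, s = os.urandom(n), os.urandom(n)
    return r, com(r, v, s), s

def reveal_phase(r: bytes, c: bytes, v: bytes, s: bytes) -> bool:
    """C sends v and then argues knowledge of s for the statement
    (r, c, v) under tag = (r, c, v); here the argument is replaced by
    checking the witness relation directly."""
    tag = (r, c, v)          # note: the revealed value is part of the tag
    return com(r, v, s) == c
```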

Theorem 6.8 (nmC with respect to opening) Suppose that {Comr}r∈{0,1}* is a family of non-interactive commitment schemes, and that 〈Ptag, Vtag〉 is a simulation-extractable argument with an efficient prover strategy. Then, 〈C,R〉 is a non-malleable commitment scheme with respect to opening. If, furthermore, {Comr}r∈{0,1}* is statistically hiding, then 〈C,R〉 is so as well.

Proof: We need to prove that the scheme satisfies the following three properties: computational binding, (statistical) hiding and non-malleability with respect to opening.

^20 Note that whereas our construction of statistically binding commitments required that the simulation-extractable argument provides a simulation that is statistically close, here we are content with a computationally indistinguishable simulation.


We start by proving the hiding and non-malleability properties, and then return to the proof of the binding property.

(Statistical) Hiding: The hiding property follows directly from the hiding property of Com. Note that if Com is statistically hiding, then 〈C,R〉 is also statistically hiding.

Non-malleability: We show that for every probabilistic polynomial-time man-in-the-middle adversary A, there exists a probabilistic expected polynomial-time stand-alone adversary S and a negligible function ν : N → N, such that for every irreflexive polynomial-time computable relation R ⊆ {0,1}^n × {0,1}^n, every v ∈ {0,1}^n, and every z ∈ {0,1}^*, it holds that:

Pr[mim^A_open(R, v, z) = 1] < Pr[sta^S_open(R, v, z) = 1] + ν(n)        (5)

We remark that the stand-alone adversary S constructed here will be conceptually quite different from the one constructed in the proof of Theorem 6.1.

Description of the stand-alone adversary. We proceed to describe the stand-alone adversary S. On a high level, S internally incorporates A and emulates the commit phase of the left execution for A by honestly committing to 0^n, while externally forwarding messages in the right execution. Once A has completed the commit phase, S interprets the residual adversary (after the completed commit phase) as a man-in-the-middle adversary A′ for 〈Ptag, Vtag〉. It then executes the simulator-extractor S for A′ to obtain a witness to the statement proved in the right execution by A′ (and thus A). Using this witness, S can then complete the decommit phase of the external execution. (We here rely on the fact that 〈Ptag, Vtag〉 has an efficient prover strategy.)

More formally, the stand-alone adversary S proceeds as follows on input z.

1. S internally incorporates A(z).

2. During the commit phase S proceeds as follows:

(a) S internally emulates the left interaction for A by honestly committing to 0^n.

(b) Messages from the right execution are forwarded externally.

3. Once the commit phase has finished, S receives the value v. Let (r, c) and (r̃, c̃) denote the left- and right-execution transcripts of A (recall that the left execution has been internally emulated, while the right execution has been externally forwarded).

4. Construct a man-in-the-middle adversary A′ for 〈Ptag, Vtag〉. Informally, A′ will simply consist of the residual machine resulting after the above-executed commit phase. More formally, A′(x, tag, z′) proceeds as follows:

(a) Parse z′ as (r̃, c, z).

(b) Parse x as (r, c, v).

(c) Internally emulate the commit phase (r, c), (r̃, c̃) for A(z) (i.e., feed A the message r̃ as part of its right execution, and c as part of its left execution).

(d) Once the commit phase has finished, feed v to A.

(e) Externally forward all the remaining messages during the reveal phase.


5. Let S denote the simulator-extractor for A′.

6. Let x = (r, c, v), tag = (r, c, v) and z′ = (r̃, c, z).

7. Run S on input (x, tag, z′) to obtain the view view and the witness w.

8. Finally, if the statement proved in the right execution of view is x̃ = (r̃, c̃, ṽ) (where ṽ is an arbitrary string), the right-execution proof is accepting and uses a tag ˜tag such that ˜tag ≠ tag, and w contains a valid witness for x̃, run the honest prover strategy P˜tag on input x̃ and the witness w. (Otherwise, simply abort.)
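In isolation, the decision in step 8 amounts to the following check; `can_complete_reveal` and `valid_witness` are hypothetical helpers of ours, the latter standing in for testing w against the witness relation RL.

```python
def can_complete_reveal(right_stmt, right_tag, left_tag, accepted, w,
                        valid_witness):
    """Decide whether S may run the honest prover on the right statement:
    the right proof must be accepting, its tag must differ from the left
    tag (otherwise no witness was extracted), and the extracted w must be
    a valid witness; in every other case S aborts."""
    return bool(accepted
                and right_tag != left_tag
                and valid_witness(right_stmt, w))
```

Note that the tag-inequality condition is exactly why the case ˜tag = tag has to be handled separately in the analysis below.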

Analysis of the stand-alone adversary. Towards the goal of showing equation (5), we define a hybrid stand-alone adversary S̃ that also receives v as auxiliary input. S̃ proceeds exactly as S, but instead of feeding A a commitment to 0^n in the commit phase, S̃ feeds A a commitment to v.

Since both the experiment sta_open and S̃ are efficiently computable, the following claim follows directly from the hiding property of Com.

Claim 6.9 There exists some negligible function ν′ such that

|Pr[sta^S_open(R, v, z) = 1] − Pr[sta^S̃_open(R, v, z) = 1]| ≤ ν′(n)

We proceed to show the following claim, which together with Claim 6.9 establishes equation (5).

Claim 6.10 There exists some negligible function ν′′ such that

|Pr[mim^A_open(R, v, z) = 1] − Pr[sta^S̃_open(R, v, z) = 1]| ≤ ν′′(n)

Proof: Towards the goal of showing this claim, we introduce an additional hybrid experiment hyb(R, v, z), which proceeds as follows: emulate sta^S̃_open(R, v, z), but instead of defining ṽ as the value (successfully) decommitted to by S̃, define ṽ as the value (successfully) decommitted to in the view view output by the simulator-extractor S (in the execution by S̃). We start by noting that it follows directly from the indistinguishability property of the simulator-extractor S that the following quantity is negligible:^21

|Pr[mim^A_open(R, v, z) = 1] − Pr[hyb(R, v, z) = 1]|

To conclude the claim, we show that

Pr[hyb(R, v, z) = 1] = Pr[sta^S̃_open(R, v, z) = 1]

Note that the only difference between experiments hyb(R, v, z) and sta^S̃_open(R, v, z) is that in hyb, the value ṽ is taken from the view output by S, whereas in sta^S̃_open it is defined as the value successfully decommitted to by S̃. Also recall that S̃ "attempts" to decommit to the value successfully decommitted to in the output of S; S̃ succeeds in this task whenever S is also able to extract a witness to the right-execution proof. Note that by the simulation-extractability property of 〈Ptag, Vtag〉, S always outputs a valid witness if the right execution in view is accepting, as long as the tag ˜tag of the right execution differs from tag. Thus, in case ˜tag ≠ tag, we conclude by the perfect completeness of 〈Ptag, Vtag〉 that the outputs of experiments sta and hyb are defined in exactly the same way. In case ˜tag = tag, sta will output 0 (as S̃ will not even attempt to decommit). However, in this case it holds that ṽ = v (since tag = (r, c, v) and ˜tag = (r̃, c̃, ṽ)); this means that hyb will also output 0 (since R is irreflexive). The claim follows.

^21 We remark that here it is sufficient that the simulator-extractor outputs a view that is merely computationally indistinguishable from the view in a "real" execution.

We now return to the binding property.

Computational Binding: The binding property of the scheme intuitively follows from the binding property of the underlying commitment scheme Com and the "proof-of-knowledge" property implicitly guaranteed by the simulation-extractability of 〈Ptag, Vtag〉. A formal proof proceeds along the lines of the proof of non-malleability (but is simpler). More precisely, assume for contradiction that there exists some adversary A that is able to violate the binding property of 〈C,R〉. We show how to construct a machine Â that violates the binding property of Com. Â starts by running A, letting it complete the commit phase by externally forwarding its messages. Once A has completed the commit phase, Â interprets the residual adversary (after the completed commit phase) as a man-in-the-middle adversary A′ (that ignores all left-execution messages) for 〈Ptag, Vtag〉; a formal description of this adversary is essentially identical to the one described in the proof of non-malleability. Â then executes the simulator-extractor S for A′ to obtain a witness to the statement proved in the right execution by A′ (and thus A). Using this witness, Â can then complete the decommit phase of the external execution of Com. It follows directly from the indistinguishability property of the simulator-extractor S that Â violates the binding property of Com with essentially the same probability as A violates the binding property of 〈C,R〉.

This completes the proof of Theorem 6.8.

Remark 6.11 (Black-box vs. Non-Black-box Simulation) Note that the stand-alone adversary S constructed in the proof of Theorem 6.8 is very different from the stand-alone adversary constructed in the proof of Theorem 6.1. In particular, the S constructed above in fact runs the simulator-extractor S (whereas in the proof of Theorem 6.1 the simulator-extractor is used only in the analysis). As a consequence (in contrast to the simulator constructed in Theorem 6.1), the stand-alone adversary S constructed above makes use of the man-in-the-middle adversary in a non-black-box way if relying on a simulation-extractable argument with a non-black-box simulator.

Since families of non-interactive statistically hiding commitments can be based on collision-resistant hash functions [36, 12], we obtain the following corollary:

Corollary 6.12 (Statistically hiding non-malleable commitment) Suppose that there exists a family of collision-resistant hash functions. Then there exists a constant-round statistically hiding commitment scheme which is non-malleable with respect to opening.

7 Acknowledgments

We are grateful to Johan Håstad and Moni Naor for many helpful conversations and great advice. Thanks to Boaz Barak for useful clarifications of his works. The second author would also like to thank Marc Fischlin, Rosario Gennaro, Yehuda Lindell and Tal Rabin for insightful discussions regarding non-malleable commitments. Thanks to Oded Goldreich for useful feedback on an earlier version of this work, and to the anonymous referees for their thoughtful comments. Finally, thanks to Huijia Lin for pointing out a subtlety in the proof of Proposition 4.2.

References

[1] B. Barak. How to go Beyond the Black-Box Simulation Barrier. In 42nd FOCS, pages 106–115, 2001.

[2] B. Barak. Constant-Round Coin-Tossing or Realizing the Shared Random String Model. In 43rd FOCS, pages 345–355, 2002.

[3] B. Barak and O. Goldreich. Universal Arguments and their Applications. In 17th CCC, pages 194–203, 2002.

[4] M. Bellare and O. Goldreich. On Defining Proofs of Knowledge. In CRYPTO '92, Springer (LNCS 740), pages 390–420, 1993.

[5] B. Barak and Y. Lindell. Strict Polynomial-Time in Simulation and Extraction. In 34th STOC, pages 484–493, 2002.

[6] M. Blum. Coin Flipping by Telephone. In CRYPTO 1981, pages 11–15, 1981.

[7] M. Bellare, R. Impagliazzo and M. Naor. Does Parallel Repetition Lower the Error in Computationally Sound Protocols? In 38th FOCS, pages 374–383, 1997.

[8] G. Brassard, D. Chaum and C. Crépeau. Minimum Disclosure Proofs of Knowledge. JCSS, Vol. 37, No. 2, pages 156–189, 1988. Preliminary version in 27th FOCS, 1986.

[9] R. Canetti and M. Fischlin. Universally Composable Commitments. In CRYPTO 2001, Springer (LNCS 2139), pages 19–40, 2001.

[10] I. Damgård. A Design Principle for Hash Functions. In CRYPTO 1989, pages 416–427, 1989.

[11] I. Damgård and J. Groth. Non-interactive and Reusable Non-Malleable Commitment Schemes. In 35th STOC, pages 426–437, 2003.

[12] I. Damgård, T. Pedersen and B. Pfitzmann. On the Existence of Statistically Hiding Bit Commitment Schemes and Fail-Stop Signatures. In CRYPTO 1993, pages 250–265, 1993.

[13] A. De Santis, G. Di Crescenzo, R. Ostrovsky, G. Persiano and A. Sahai. Robust Non-interactive Zero Knowledge. In CRYPTO 2001, pages 566–598, 2001.

[14] G. Di Crescenzo, J. Katz, R. Ostrovsky and A. Smith. Efficient and Non-interactive Non-malleable Commitment. In EUROCRYPT 2001, pages 40–59, 2001.

[15] G. Di Crescenzo, Y. Ishai and R. Ostrovsky. Non-Interactive and Non-Malleable Commitment. In 30th STOC, pages 141–150, 1998.

[16] D. Dolev, C. Dwork and M. Naor. Non-Malleable Cryptography. SIAM Jour. on Computing, Vol. 30(2), pages 391–437, 2000. Preliminary version in 23rd STOC, pages 542–552, 1991.

[17] U. Feige, D. Lapidot and A. Shamir. Multiple Noninteractive Zero Knowledge Proofs under General Assumptions. SIAM Jour. on Computing, Vol. 29(1), pages 1–28, 1999.

[18] U. Feige and A. Shamir. Witness Indistinguishability and Witness Hiding Protocols. In 22nd STOC, pages 416–426, 1990.

[19] M. Fischlin and R. Fischlin. Efficient Non-malleable Commitment Schemes. In CRYPTO 2000, Springer (LNCS 1880), pages 413–431, 2000.


[20] O. Goldreich. Foundations of Cryptography – Basic Tools. Cambridge University Press, 2001.

[21] O. Goldreich and A. Kahan. How to Construct Constant-Round Zero-Knowledge Proof Systems for NP. Jour. of Cryptology, Vol. 9, No. 2, pages 167–189, 1996.

[22] O. Goldreich and Y. Lindell. Session-Key Generation Using Human Passwords Only. In CRYPTO 2001, pages 408–432, 2001.

[23] O. Goldreich, S. Micali and A. Wigderson. Proofs that Yield Nothing But Their Validity or All Languages in NP Have Zero-Knowledge Proof Systems. JACM, Vol. 38(1), pages 691–729, 1991.

[24] O. Goldreich, S. Micali and A. Wigderson. How to Play any Mental Game – A Completeness Theorem for Protocols with Honest Majority. In 19th STOC, pages 218–229, 1987.

[25] O. Goldreich and Y. Oren. Definitions and Properties of Zero-Knowledge Proof Systems. Jour. of Cryptology, Vol. 7, No. 1, pages 1–32, 1994.

[26] S. Goldwasser and S. Micali. Probabilistic Encryption. JCSS, Vol. 28(2), pages 270–299, 1984.

[27] S. Goldwasser, S. Micali and C. Rackoff. The Knowledge Complexity of Interactive Proof Systems. SIAM Jour. on Computing, Vol. 18(1), pages 186–208, 1989.

[28] J. Hastad, R. Impagliazzo, L.A. Levin and M. Luby. Construction of a Pseudorandom Generator from any One-Way Function. SIAM Jour. on Computing, Vol. 28(4), pages 1364–1396, 1999.

[29] J. Kilian. A Note on Efficient Zero-Knowledge Proofs and Arguments. In 24th STOC, pages 723–732, 1992.

[30] Y. Lindell. Bounded-Concurrent Secure Two-Party Computation Without Setup Assumptions. In 35th STOC, pages 683–692, 2003.

[31] P. D. MacKenzie, M. K. Reiter and K. Yang. Alternatives to Non-malleability: Definitions, Constructions, and Applications. In 1st TCC, pages 171–190, 2004.

[32] R. C. Merkle. A Certified Digital Signature. In CRYPTO 1989, pages 218–238, 1989.

[33] S. Micali. CS Proofs. SIAM Jour. on Computing, Vol. 30(4), pages 1253–1298, 2000.

[34] M. Naor. Bit Commitment using Pseudorandomness. Jour. of Cryptology, Vol. 4, pages 151–158, 1991.

[35] M. Naor, R. Ostrovsky, R. Venkatesan and M. Yung. Perfect Zero-Knowledge Arguments for NP Using any One-Way Permutation. Jour. of Cryptology, Vol. 11, pages 87–108, 1998.

[36] M. Naor and M. Yung. Universal One-Way Hash Functions and their Cryptographic Applications. In 21st STOC, pages 33–43, 1989.

[37] M. Nguyen and S. Vadhan. Simpler Session-Key Generation from Short Random Passwords. In 1st TCC, pages 428–445, 2004.

[38] R. Pass. Bounded-Concurrent Secure Multi-Party Computation with a Dishonest Majority. In 36th STOC, pages 232–241, 2004.

[39] R. Pass and A. Rosen. Bounded-Concurrent Secure Two-Party Computation in a Constant Number of Rounds. In 44th FOCS, pages 404–413, 2003.

[40] R. Pass and A. Rosen. Concurrent Non-Malleable Commitments. In 46th FOCS, pages 563–572, 2005.

[41] A. Sahai. Non-Malleable Non-Interactive Zero Knowledge and Adaptive Chosen-Ciphertext Security. In 40th FOCS, pages 543–553, 1999.


Appendix

A Missing Proofs

Proposition 4.2 (Argument of knowledge) Let 〈PsWI, VsWI〉 and 〈PUA, VUA〉 be the protocols used in the construction of 〈PsUA, VsUA〉. Suppose that {Hn}n is collision resistant for T(n)-sized circuits, that Com is statistically hiding, that 〈PsWI, VsWI〉 is a statistically witness indistinguishable argument of knowledge, and that 〈PUA, VUA〉 is a universal argument. Then, for any tag ∈ {0, 1}n, 〈Ptag, Vtag〉 is an interactive argument of knowledge.

Completeness of 〈Ptag, Vtag〉 follows from the completeness property of 〈PsUA, VsUA〉. Specifically, an honest prover P, who possesses a witness w for x ∈ L, can always make the verifier accept by using w as the witness in the n parallel executions of 〈PsUA, VsUA〉. To demonstrate the argument of knowledge property of 〈Ptag, Vtag〉, it will be sufficient to prove that each subprotocol 〈Ptagi, Vtagi〉 is an argument of knowledge. This is because the prescribed verifier in 〈Ptag, Vtag〉 will accept the proof only if all runs of 〈Ptagi, Vtagi〉 are accepting.22

Lemma A.1 Suppose that {Hn}n is collision resistant for T(n)-sized circuits, that Com is statistically hiding, that 〈PsWI, VsWI〉 is a statistically witness indistinguishable argument of knowledge, and that 〈PUA, VUA〉 is a universal argument. Then, for any tag ∈ [2n], 〈Ptag, Vtag〉 is an argument of knowledge.

Proof: We show the existence of an extractor machine E for protocol 〈Ptag, Vtag〉.

Description of the extractor machine. E proceeds as follows, given oracle access to a malicious prover P∗. E, using black-box access to P∗, internally emulates the role of the honest verifier Vtag for P∗ until P∗ provides an accepting proof (i.e., if P∗ fails, E restarts P∗ and attempts a new emulation of Vtag). Let σs denote the messages received by P∗, in the successful emulation by E, up until protocol 〈PWI, VWI〉 is reached; let P∗WI denote the residual prover P∗(σs).

The extractor E next applies the witness extractor EWI for 〈PWI, VWI〉 to P∗WI. If EWI, given oracle access to P∗WI, outputs a witness w such that RL(x, w) = 1, then E outputs the same witness w; otherwise it outputs fail. (The reason E constructs P∗WI as above is to ensure that P∗WI convinces VWI with non-zero probability; otherwise EWI is not guaranteed to extract a witness.)
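As an informal illustration (not part of the paper's formal model), the repeat-until-accept structure of E can be sketched in Python. ToyProver, emulate_verifier, and extract_wi are hypothetical stand-ins for P∗, the honest-verifier emulation, and the WI knowledge extractor EWI:

```python
import random

class ToyProver:
    """Stand-in for P*: convinces the verifier with probability eps and,
    when it does, the WI extractor can recover its witness."""
    def __init__(self, witness, eps):
        self.witness, self.eps = witness, eps

def emulate_verifier(prover, rng):
    # One honest-verifier emulation up to the <P_WI, V_WI> subprotocol;
    # returns True iff P* produced an accepting proof.
    return rng.random() < prover.eps

def extract_wi(prover):
    # Stand-in for E_WI applied to the residual prover P*_WI.
    return prover.witness

def extractor_E(prover, relation, rng):
    # Repeat honest-verifier emulations until P* succeeds (expected
    # 1/eps attempts), then run E_WI on the residual prover and check
    # that the extracted value is a valid witness.
    while not emulate_verifier(prover, rng):
        pass  # restart P* with fresh verifier coins
    w = extract_wi(prover)
    return w if relation(w) else "fail"

print(extractor_E(ToyProver(witness=42, eps=0.1),
                  lambda w: w == 42, random.Random(1)))  # prints 42
```

The "fail" branch corresponds to the second failure mode analyzed below, where extraction succeeds syntactically but does not yield a valid witness for x.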

Analysis of the extractor. Let P∗ be a non-uniform PPT that convinces the honest verifier Vtag of the validity of a statement x ∈ {0, 1}n with probability ε(n). We assume without loss of generality that P∗ is deterministic. We need to show the following two properties:

1. The probability that P∗ succeeds in convincing Vtag, but E does not output a valid witness to x, is negligible.

2. The expected number of steps taken by E is bounded by poly(n)/ε(n).

We start by noting that since E perfectly emulates the role of the honest verifier Vtag, E requires in expectation poly(n)/ε(n) steps before invoking the extractor EWI. Additionally, by the proof-of-knowledge property of 〈PWI, VWI〉 it follows that the expected running-time of EWI is poly(n)/ε(n). To

22 One could turn any cheating prover for 〈Ptag, Vtag〉 into a cheating prover for 〈Ptagi, Vtagi〉 by internally emulating the role of the verifier Vtagj for j ≠ i and forwarding the messages from Ptagi to an external Vtagi.


see this, let the random variable T denote the running-time of EWI in the execution by E, and let T(σ) denote the running-time of EWI given that E chooses the prefix σ. We abuse notation and let σ also denote the event that P∗ receives the prefix σ in an interaction with Vtag; let accept denote the event that P∗ produces a convincing proof. We have

E[T ] =∑σ

E[T (σ)] Pr(σ|accept) =∑σ

poly(n)Pr[accept|σ]

Pr(σ|accept) =

poly(n)∑σ

Pr[σ]Pr[accept]

=poly(n)

Pr[accept]=

poly(n)ε(n)

where the second equality follows from the definition of a proof of knowledge. We conclude by the linearity of expectations that the expected running-time of E is poly(n)/ε(n), and thus the second of the above properties holds.
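To make the poly(n)/ε(n) bound concrete: the number of honest-verifier emulations performed before P∗ first succeeds is geometrically distributed, so its expectation is 1/ε(n). A small illustrative simulation (not from the paper):

```python
import random

def emulations_until_accept(eps, rng):
    # Count honest-verifier emulations until P* produces an accepting
    # proof; each emulation independently succeeds with probability eps.
    trials = 1
    while rng.random() >= eps:
        trials += 1
    return trials

rng = random.Random(0)
eps = 0.05
n_runs = 20000
avg = sum(emulations_until_accept(eps, rng) for _ in range(n_runs)) / n_runs
# The empirical mean is close to 1/eps = 20: each poly(n)-time unit of
# work is repeated 1/eps times in expectation, giving poly(n)/eps overall.
```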

We turn to show that the first property also holds. Assume that there exists some polynomial p(n) such that for infinitely many n, ε(n) ≥ 1/p(n) but E, with oracle access to P∗, fails to output a valid witness with probability 1/p(n). Note that this extraction can fail for two reasons:

1. Either the extraction by EWI fails, or

2. EWI outputs 〈β, δ, s1, s2〉 such that:

• β̂ = Com(β; s1), where β̂ is the commitment sent by P∗ in the emulated execution.

• δ̂ = Com(δ; s2), where δ̂ is the corresponding commitment to δ.

• (α, β, γ, δ) is an accepting transcript for 〈PUA, VUA〉.

If the second event occurs we say that EWI outputs a false witness. Consider an alternative extractor E′ which proceeds just as E, except that if P∗ fails to produce a convincing proof in the first emulation of Vtag, E′ directly aborts. Note that the only difference between E and E′ is that E continues sampling executions until P∗ produces a convincing proof, whereas E′ samples only once. Since by our assumption ε(n) ≥ 1/p(n), it follows that there exists some polynomial p′(n) such that in the execution by E′, EWI either fails or outputs a false witness with probability 1/p′(n). It directly follows from the proof-of-knowledge property of 〈PWI, VWI〉 that EWI fails only with negligible probability. We show below that EWI outputs a false witness also with negligible probability; this is a contradiction.

Proposition A.2 In the execution of E′ with oracle access to P∗, EWI outputs a false witness with negligible probability.

Proof: Towards proving the proposition we start by showing the following lemma.

Lemma A.3 Let P∗sUA be a non-uniform polynomial-time machine such that EWI, given oracle access to the residual prover P∗sUA(α, γ), outputs a false witness to the statement x̄ = (x, 〈h, c1, c2, r1, r2〉) with probability ε(n) = 1/poly(n) over uniformly chosen verifier messages α, γ. Then, there exists a strict polynomial-time machine extract such that with probability poly(ε(n)), extract(P∗sUA, x̄) outputs

• an index i ∈ {1, 2},

• strings y, s, z such that z = h(Π), ri = Π(y, s), and ci = Com(z; s),

• a polynomial-time machine M such that M(j) outputs the j’th bit of Π. (M is called the “implicit” representation of Π.)


Proof: The proof of the lemma proceeds in the following two steps.

1. Using an argument by Barak and Goldreich [3], we use P∗sUA and EWI to construct a prover P∗UA for the UARG 〈PUA, VUA〉 that succeeds with probability poly(ε(n)).

2. Due to the weak proof of knowledge property of the UARG, we then obtain an index i ∈ {1, 2}, strings y, s, and a hash value z = h(Π) s.t. ri = Π(y, s) and ci = Com(z; s). We furthermore obtain an “implicit” representation of the program Π.

Step 1: Constructing P∗UA. P∗UA proceeds as follows.

• P∗UA starts by receiving a message α from the honest verifier VUA.

• P∗UA incorporates P∗sUA and internally forwards the message α to P∗sUA, resulting in a residual prover P∗sUA(α).

• P∗UA then internally emulates the role of the honest verifier for P∗sUA until protocol 〈PWI, VWI〉 is reached (i.e., P∗UA uniformly chooses a random message γ that it forwards to P∗sUA, resulting in a residual prover P∗sUA(α, γ)). Thereafter, P∗UA honestly emulates the verifier VWI for P∗sUA(α, γ). If P∗sUA(α, γ) succeeds in providing an accepting proof, P∗UA invokes the knowledge extractor EWI on the prover P∗sUA(α, γ).

• In the event that P∗sUA(α, γ) does not produce an accepting proof, or if EWI does not output an accepting tuple 〈β, δ, s1, s2〉, P∗UA halts. Otherwise, it externally forwards the message β to VUA and receives a new message γ as response.

• P∗UA now rewinds P∗sUA until the point where it awaits the message γ and internally forwards the new γ to P∗sUA, resulting in a residual prover P∗sUA(α, γ). As before, P∗UA first honestly verifies the WI proof that P∗sUA(α, γ) gives and, in case this proof is accepting, applies the extractor EWI to P∗sUA(α, γ).

• If EWI outputs an accepting tuple 〈β′, δ′, s′1, s′2〉 such that β′ = β, P∗UA forwards δ′ to VUA, and otherwise it halts.

Since 〈α, β′, γ, δ′〉 is an accepting transcript of 〈PUA, VUA〉, it follows that unless P∗UA halts the execution, it succeeds in convincing the verifier VUA.

We show that P∗UA finishes the execution with probability poly(ε(n)). Using the same argument as Barak and Goldreich [3] (of counting “good” verifier messages, i.e., messages that let the prover succeed with “high” probability; see Claim 4.2.1 in [3]), it can be shown that with probability poly(ε(n)), P∗UA reaches the case where the extractor outputs 〈β′, δ′, s′1, s′2〉. Thus it only remains to show that, conditioned on this event, β′ = β holds with overwhelming probability. Indeed, the event that β′ ≠ β can occur only with negligible probability, or else we would contradict the binding property of Com (since β̂ = Com(β; s1) = Com(β′; s′1)). We thus conclude that P∗UA succeeds in convincing VUA with probability poly(ε(n)).

Furthermore, since the extractor EWI is only applied when P∗sUA provides an accepting proof, it follows from the definition of a proof of knowledge that the expected running-time of P∗UA is a polynomial, say g(n). Finally, if we truncate the execution of P∗UA after 2g(n) steps, we get by the Markov inequality that (the truncated) P∗UA still produces convincing proofs with probability poly(ε).


Step 2: Extracting the “false” witness. By the weak proof of knowledge property of 〈PUA, VUA〉, there exists a strict PPT machine EUA such that EUA, given oracle access to P∗UA, outputs an “implicit” representation of a “witness” to the statement x̄ = (x, 〈h, c1, c2, r1, r2〉) proved by P∗UA. Since the values i, y, s, z have fixed polynomial length, they can all be extracted in polynomial time. Note, however, that since there is no (polynomial) bound on the length of the program Π, we can only extract an implicit representation of Π. This concludes the proof of the lemma.

Armed with Lemma A.3, we now turn to show that EWI outputs a false witness with negligible probability in the execution of E′ with oracle access to P∗. Suppose for contradiction that there exists a polynomial p(n) such that for infinitely many n's, EWI outputs a false witness with probability at least ε(n) = 1/p(n). We construct a T(n)^{O(1)}-sized circuit family {Cn}n that finds collisions for {Hn}n with probability poly(ε(n)).

More specifically,

• On input h ← Hn, the circuit Cn incorporates P∗ and internally emulates the honest verifier V for P∗ until the protocol 〈PsUA, VsUA〉 is reached (i.e., Cn internally sends the message h and randomly chosen messages r1, r2 to P∗, resulting in a residual prover P∗(h, r1, r2)).

• Cn then invokes the knowledge extractor extract, guaranteed by Lemma A.3, on P∗(h, r1, r2), extracting values i, y, s, z and an implicit representation of Π, given by a machine M.

• If extract fails, Cn outputs fail; otherwise it rewinds P∗ until the point where it expects to receive the message ri, and then continues the emulation of the honest verifier from this point (using new random coins).

• Once again, when P∗ reaches 〈PsUA, VsUA〉, Cn invokes extract on the residual prover, extracting values i′, y′, s′, z′ and an implicit representation of Π′, given by a machine M′.

• If the extraction fails or if i′ ≠ i, Cn outputs fail.
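The double-extraction pattern of Cn can be sketched as follows. This is a toy illustration: toy_hash stands in for h ← Hn (it is merely collision-checkable, not collision resistant), and the stub run_extract stands in for invoking the extract machine of Lemma A.3 on the rewound residual prover:

```python
import random

def toy_hash(data: bytes) -> int:
    # Toy compressing "hash" standing in for h <- H_n; we only need to
    # *check* candidate collisions here, not resist them.
    return sum(data) % 251

def collision_finder(run_extract, rng):
    """Sketch of C_n: run the extractor twice, rewinding P* and
    re-randomizing the verifier messages in between; if both runs report
    the same slot index i but different programs with equal hash values,
    output the collision, and otherwise output "fail"."""
    first = run_extract(rng)            # (i, program), or None on failure
    if first is None:
        return "fail"
    second = run_extract(rng)           # second, rewound run (fresh coins)
    if second is None or second[0] != first[0]:
        return "fail"
    pi1, pi2 = first[1], second[1]
    if pi1 != pi2 and toy_hash(pi1) == toy_hash(pi2):
        return (pi1, pi2)               # a collision for toy_hash
    return "fail"

# Hypothetical extractor outputs: same slot index, two distinct programs
# that collide under toy_hash (sum(b"ab") == sum(b"ba")).
outputs = iter([(1, b"ab"), (1, b"ba")])
print(collision_finder(lambda r: next(outputs), random.Random(7)))
# prints (b'ab', b'ba')
```

In the actual analysis the programs Π, Π′ are guaranteed to hash to the same value z = z′ by the binding of Com, whereas here the stub supplies the colliding pair directly.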

It remains to analyze the success probability of Cn. We start by noting that z ≠ z′ occurs only with negligible probability; this follows from the computational binding property of Com (since ci = Com(z; s) = Com(z′; s′)). Since the probability that EWI outputs a false witness is ε(n), it must hold that for a fraction ε(n)/2 of the verifier messages sent before protocol 〈PsUA, VsUA〉, EWI outputs a false witness with probability ε(n)/2 when given oracle access to P∗ fed messages from this set of “good” messages. Due to the correctness of extract, it holds that when Cn selects verifier messages from this “good” set, the probability that extraction succeeds (in outputting a false witness) on P∗ is

ε′ = poly(ε)

Thus, given that Cn picks random verifier messages, it holds that the probability that extract succeeds is at least

ε′′ = (ε/2) · ε′ = poly(ε)

There thus exists an index σ ∈ {1, 2} such that the extraction outputs the index i = σ with probability ε′′′ = ε′′/2. Again, for a fraction ε′′′/2 of verifier messages before slot σ (when σ = 1 there is only one such message, namely h, while when σ = 2 the messages are h, r1), the residual prover (P∗(h) when σ = 1, or P∗(h, r1) when σ = 2) succeeds in convincing the verifier with probability ε′′′/2. We conclude that with probability

ε′′′/2 · (ε′′′/2)^2 = poly(ε)


Cn obtains implicit representations of programs Π, Π′ such that there exist y, y′ ∈ {0, 1}^{|ri|−n} for which Π(y) = ri, Π′(y′) = r′i and h(Π) = h(Π′). Using a simple counting argument it follows that with probability 1 − 2^{−n} (over the choices of ri, r′i), Π ≠ Π′.23 Thus, by fully extracting the programs (from their implicit representations) Cn finds a collision with probability

poly(ε) · (1 − 2^{−n}) = poly(ε)

Note that the time required for extracting these programs is upper bounded by T(n)^{O(1)}. Thus, any poly-time prover P∗ that can make V accept x ∉ L with non-negligible probability can be used to obtain collisions for h ← Hn in time T(n)^{O(1)} (note that here we additionally rely on the fact that extract is a strict polynomial-time machine), in contradiction to the collision resistance of {Hn}n against T(n)-sized circuits.24 This concludes the proof of the proposition.

This completes the proof of Lemma A.1.

Basing the construction on “standard” collision resistant hash functions. Although the above analysis (for the proof of knowledge property of 〈Ptag, Vtag〉) relies on the assumption that {Hk}k is a family of hash functions that is collision resistant against T(k)-sized circuits, we note that by using the method of Barak and Goldreich [3], this assumption can be weakened to the (more) standard assumption of collision resistance against polynomial-sized circuits. The main idea in their approach is to replace the arbitrary hashing in Slots 1 and 2 with the following two-step hashing procedure:

• Apply a “good”25 error-correcting code ECC to the input string.

• Apply tree-hashing [32, 10] to the encoded string.

This method has the advantage that in order to find a collision for the hash function in the “proof of knowledge” proof, the full descriptions of the programs Π, Π′ are not needed. Instead, it is sufficient to look at a randomly chosen position in the descriptions (which can be computed in polynomial time from the implicit representations of Π, Π′). The analysis here relies on the fact that two different codewords differ in a random position i with (positive) constant probability.
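A minimal sketch of this two-step hashing, for illustration only: ecc_encode is a placeholder repetition code (the argument actually requires a code with constant relative distance), and tree_hash is ordinary Merkle tree-hashing [32, 10], here instantiated with SHA-256 rather than with a hash drawn from {Hk}k:

```python
import hashlib

def ecc_encode(data: bytes, k: int = 3) -> bytes:
    # Placeholder "error-correcting code": repeat every byte k times.
    # A real instantiation needs constant rate and constant relative
    # distance, so that distinct inputs yield codewords differing in a
    # constant fraction of positions.
    return bytes(b for byte in data for b in [byte] * k)

def tree_hash(data: bytes, block: int = 32) -> bytes:
    # Merkle tree-hashing: hash fixed-size leaves, then repeatedly hash
    # pairs of digests until a single 32-byte root remains.
    leaves = [hashlib.sha256(data[i:i + block]).digest()
              for i in range(0, max(len(data), 1), block)]
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])   # duplicate the last node if odd
        leaves = [hashlib.sha256(leaves[i] + leaves[i + 1]).digest()
                  for i in range(0, len(leaves), 2)]
    return leaves[0]

def two_step_hash(program: bytes) -> bytes:
    # Step 1: encode the input; Step 2: tree-hash the encoded string.
    return tree_hash(ecc_encode(program))
```

With this structure, a collision at the root can be traced down the tree to a collision at a single leaf, i.e., at one position of the encoded strings; since distinct codewords differ in a constant fraction of positions, inspecting a random position suffices, and only that position of Π, Π′ needs to be computed from the implicit representations.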

23 Fix Π, Π′, y, ri. Then, with probability at most 2^{−n} over the choice of r′i, there exists a y′ ∈ {0, 1}^{|r′i|−n} s.t. Π′(y′) = r′i.

24 We mention that by slightly modifying the protocol, following the approach of Barak and Goldreich [3], one can instead obtain a polynomial-sized circuit Cn finding collisions for Hk. More details follow after the proof.

25 By “good” we here mean an error-correcting code correcting a constant fraction of errors.
