 Research
 Open access
Continuously non-malleable codes from block ciphers in split-state model
Cybersecurity volume 6, Article number: 25 (2023)
Abstract
A non-malleable code is an encoding scheme that is useful in situations where traditional error correction or detection is impossible to achieve. It ensures with high probability that the decoded message is either the original one (when tampering has no effect) or completely unrelated to it. The standard version of non-malleable codes provides security against a single tampering attack. Block ciphers have been successfully employed in the construction of non-malleable codes, but such constructions fail to provide security when an adversary tampers with the codeword more than once. Continuously non-malleable codes allow an attacker to tamper with the message a polynomial number of times. In this work, we propose a continuous version of non-malleable codes built from block ciphers in the split-state model. Our construction provides security against a polynomial number of tampering attacks while preserving non-malleability. When the tampering experiment triggers self-destruct, the security of the continuously non-malleable code reduces to the security of the underlying leakage-resilient storage.
Introduction
Physical attacks on the implementation of cryptographic schemes are among the most threatening concerns for a crypto designer. In theoretical cryptography, the algorithm under consideration is modeled as a black box with which an adversary can interact via the input–output interface of the system. Such black-box security notions do not capture an adversary that can change the secret message into some related value through a tampering attack and analyse the outcome. The adversary can carry out tampering attacks by heating up the device, injecting faults (Sergei and Ross 2002), etc. In a software module, viruses or malware can also attack a storage device by corrupting some regions of the memory. Boneh et al. (2001) show that a single bit flip of the signing key is enough to completely extract the secret information of an RSA signature. This is one of the most devastating attacks, where an adversary makes a minor modification to the cryptographic device and recovers the sensitive information. A line of research has focused on how to secure cryptographic implementations from such tampering attacks (Bellare and Kohno 2003; Bellare et al. 2011; Kalai et al. 2011; Bellare et al. 2012; Damgård et al. 2013; Chen et al. 2019; Ghosal et al. 2022).
Non-malleable codes, introduced by Dziembowski et al. (2010, 2018), are one of the main applications of tamper-resilient cryptography. They are required when correcting the message is not the main concern but privacy and integrity are important. The guarantee is that if an adversary tampers with any message encoded by a non-malleable code, the output is either completely unrelated or the original message. Let k be the secret message (e.g., the key of a cryptographic algorithm) and f be the tampering function. An adversary encodes the secret message k as Enc(k), applies the tampering function f to the encoded message, and performs decoding, i.e., Dec(f(Enc(k))). The non-malleability property guarantees that Dec(f(Enc(k))) = k with probability 1 when tampering has no effect, or Dec(f(Enc(k))) = \(k^{'}\) in case of tampering, where k and \(k^{'}\) are computationally independent. Generally, non-malleability cannot be achieved for arbitrary classes of tampering functions. For example, let \(f_{increment}\) be the tampering function that increments its input, i.e., \(f_{increment}(Enc(k)) = Enc(k)+1\). An adversary applies this function to the encoded message and decodes the result as \(Dec(f_{increment}(Enc(k)))\). For a naive encoding, the adversary obtains \(k+1\), which is highly related to the original secret message k. Hence, non-malleable codes can be constructed only for certain classes of tampering functions. The most widely used model in the literature is the split-state model, where the codeword is divided into two parts \({M}_{0}\), \({M}_{1}\), stored in memories \({\textsf{M}}_{L}\), \({\textsf{M}}_{R}\) respectively (Liu and Lysyanskaya 2012; Dziembowski et al. 2013; Jafargholi and Wichs 2015; Aggarwal et al. 2015; Kiayias et al. 2016; Aggarwal et al. 2016; Fehr et al. 2018). Two tampering functions f = \((f_0({M}_{0}), f_1({M}_{1}))\) modify the codeword in an arbitrary and independent way.
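The increment attack above can be demonstrated with a deliberately naive encoding. This toy Python sketch (the identity `enc`/`dec` and `f_increment` are illustrative stand-ins, not any real scheme) shows how the decoded value stays correlated with the secret:

```python
# Toy illustration only: a trivial identity "encoding" (no real scheme)
# is fully malleable -- the increment tamper maps Enc(k) to Enc(k)+1,
# so the decoded value k+1 stays highly related to the secret k.

def enc(k: int) -> int:
    return k          # placeholder encoding with no protection

def dec(c: int) -> int:
    return c

def f_increment(c: int) -> int:
    return c + 1      # adversarial tampering on the codeword

assert dec(f_increment(enc(41))) == 42   # decoded value is k + 1
```

A non-malleable code must make exactly this kind of controlled, related modification infeasible for the admissible function class.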
One important property is that neither tampering function can run the decoding procedure, because both shares are needed to decode a codeword, whereas each of the functions \(f_{0}({M}_{0})\), \(f_{1}({M}_{1})\) can access only one share. The standard notion of non-malleability protects the message against a single tampering attack only; such a codeword is called a one-shot non-malleable code. It cannot handle the situation where an adversary tampers with the codeword more than once. A stronger version of non-malleability, called continuously non-malleable codes (CNMC), is proposed in Faust et al. (2014a), where the attack \(f^{i} = (f^{i}_{0}({M}_{0}),\) \(f^{i}_{1}({M}_{1}))\) is performed a polynomial number of times (\(i \in [q]\), \(q \in poly(n)\)) for each \(f^{i} \in {\mathcal {F}}\), and non-malleability is still preserved.
Continuous non-malleability comes in various flavours. Let m be the original message and \(m^{'}\) the decoded tampered message. Moreover, c denotes the codeword and \(c^{'}\) the tampered codeword in a continuous tampering experiment. The standard (default) version of continuous non-malleability refers to the situation where the decoded tampered message \(m^{'}\) and the original message m are completely independent, but it is possible for an attacker to create an encoding such that \(c^{'} \ne c\) yet \(c^{'}\) decodes to m, as discussed in Dziembowski et al. (2010). In strong continuous non-malleability, whenever \(c^{'} \ne c\), it is guaranteed that \(m^{'}\) and m are completely independent. An even stronger flavour is super-strong continuous non-malleability, where \(c^{'} \ne c\) implies that \(c^{'}\) and c are independent (Faust et al. 2014a, b; Jafargholi and Wichs 2015). Our construction considers the strong version of continuous non-malleability. Depending on how the tampering functions are applied to the codeword, the tampering experiment of continuous non-malleability also has two versions, as shown in Jafargholi and Wichs (2015). When the tampering functions are always applied to the initial encoding of the codeword, the tampering is called non-persistent. Here, an auxiliary memory is required beyond the n bits of active memory to store the codeword: an attacker can copy the original codeword to the auxiliary memory, tamper with it there, and place the result back into the active memory. In the persistent version, tampering functions are applied to the previous tampered codeword rather than the initial encoding, so no extra memory is required. An adversary can tamper with the two parts of the memory until a decoding error is triggered.
An additional feature of continuous non-malleability is handling leakage attacks while tampering attacks are being performed; the adversary can obtain leakage values as partial information. Earlier constructions of continuously non-malleable codes are built on top of leakage-resilient primitives that can handle a bounded amount of leakage (Faust et al. 2014a; Aggarwal et al. 2014, 2015) independently from the two parts of the memory. Continuously non-malleable code constructions are broadly categorized into two domains: information-theoretic (Aggarwal et al. 2019) and computational (Faust et al. 2014a; Faonio et al. 2018; Ostrovsky et al. 2018). In Faust et al. (2014a), it is shown that information-theoretic continuous non-malleability is impossible to achieve in the split-state model due to a generic attack. Later, Aggarwal et al. (2017) show that for persistent tampering in the split-state model, information-theoretic continuous non-malleability can be achieved. Further work gives a relaxed version of CNMC from computational assumptions in the plain model (i.e., without a common reference string based setup), but it provides a weaker security guarantee (Ostrovsky et al. 2018). In Dachman-Soled and Kulkarni (2019), the authors show that it is necessary to rely on setup assumptions, i.e., a common reference string (CRS), to achieve stronger security. Hence, the proposed construction relies on a block cipher and a robust non-interactive zero-knowledge (NIZK) proof (De Santis et al. 2001) in a CRS-based trusted setup environment. In Table 1, we describe various constructions of continuously non-malleable codes in the split-state model available in the literature.
Limitations of the Existing Work and Our Motivation. Non-malleable codes are usually keyless encoding schemes. The first construction of a continuously non-malleable code is proposed in Faust et al. (2014a); their work is based on a collision-resistant hash function with a robust non-interactive zero-knowledge (NIZK) proof. Later, Fehr et al. (2018) show that one-shot non-malleable codes can be constructed from related-key secure block ciphers. Such a construction does not satisfy security against continuous attacks: an attacker can create two valid codewords \((M_{0},M_{1})\) and \((M_{0},M_{1}^{'})\) with \(M_{1} \ne M_{1}^{'}\) such that neither decoding returns \(\bot\), i.e., \(\bot \ne Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'})) \ne \bot\), producing two valid messages m, \(m^{'}\). Moreover, assuming the tampering is non-persistent, an adversary can leak all the bits of \(M_{1}\) without activating the self-destruct feature. In general, for any continuously non-malleable code it should be hard to find two valid codewords \((M_{0},M_{1})\) and \((M_{0},M_{1}^{'})\) such that \(Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'}))\). This property is called message uniqueness, as described in Faust et al. (2014a). Our goal is to design non-malleable codes from any block cipher, such as AES (Joan and Vincent 2002), SHACAL (Handschuh and Naccache 2002), Midori (Banik et al. 2015), etc., that are secure against a polynomial number of tampering attempts. The block ciphers used in our construction should satisfy the following properties:

a)
The underlying block cipher should be a strong pseudorandom permutation (sprp).

b)
A ciphertext c that decrypts correctly under a key k should decrypt to \(\bot\) under any different key \(k^{'}\) (Subsection 2.5).
Our Contribution. In this work, we propose a construction of continuously non-malleable codes in the split-state model from any block cipher, in the computational domain with trusted setup, i.e., a CRS. We remove the restriction to related-key secure block ciphers used in Fehr et al. (2018). The codeword can handle non-persistent tampering attempts until self-destruct occurs. Initially, the message is encoded into a leakage-resilient storage (lrs). It is then encoded with a block cipher along with a robust non-interactive zero-knowledge (NIZK) proof. The key k of the block cipher is divided into two shares \(k_{0}\), \(k_{1}\): the left part of the codeword \({M}_{0}\) stores \(k_{0}\) whereas \({M}_{1}\) stores \(k_{1}\). During decoding, the key is reconstructed as \(k \leftarrow k_{0} \oplus k_{1}\).
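The key-sharing step can be sketched as follows; `split_key` and `reconstruct` are hypothetical helper names illustrating the \(k = k_{0} \oplus k_{1}\) split used by the construction:

```python
import secrets

def split_key(k: bytes):
    """Split k into shares (k0, k1) with k = k0 XOR k1; either share
    alone is uniformly random and reveals nothing about k."""
    k0 = secrets.token_bytes(len(k))
    k1 = bytes(a ^ b for a, b in zip(k, k0))
    return k0, k1

def reconstruct(k0: bytes, k1: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(k0, k1))

k = secrets.token_bytes(16)       # e.g., a 128-bit block-cipher key
k0, k1 = split_key(k)
assert reconstruct(k0, k1) == k
```

Because each share is individually uniform, a tampering function that sees only one side of the split-state memory learns nothing about k.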
Organization. The paper is organized as follows. Section 2 describes some preliminaries, whereas Sect. 3 provides a brief description of continuous non-malleability. The code construction and basic proof ideas are illustrated in Sect. 4. Thereafter, the proof of security is given in Sect. 5. Finally, we conclude the paper in Sect. 6.
Preliminaries
Notations and basic results
Let m be the original message. \({M}_{0}\) and \({M}_{1}\) are the left and right halves of a codeword in the split-state model, stored in memories \({\textsf{M}}_{L}\) and \({\textsf{M}}_{R}\) respectively. \({\mathcal {O}}^{T}_{cnmc}(.,.)\) represents the tampering oracle. The two tampering functions \(f_{0}\) and \(f_{1}\) work on \({M}_{0}\) and \({M}_{1}\) respectively. Moreover, \(f^{i}_{0}\) (or \(f^{i}_{1}\)) denotes the tampering function used by an adversary in the \(i^{th}\) round. \({\mathcal {K}}\) is the usable key set after removing the weak and semi-weak keys of the block cipher, and \(|{\mathcal {K}}|\) denotes the number of keys in \({\mathcal {K}}\). When k is chosen uniformly at random from \({\mathcal {K}}\), we write \(k \xleftarrow \$ {\mathcal {K}}\). n is the security parameter. \({\mathcal {O}}^{l}(s)\) denotes the leakage oracle that takes a string s as input, applies a leakage function \(\tau _{b}()\) to s, and returns at most l bits. \(r \in \{0,1\}^{n}\) denotes the randomness. \(\alpha\) represents an untamperable common reference string (CRS). A function \(\epsilon (n)\) is called negligible in n if it vanishes faster than the inverse of any polynomial in n. P(x; r) is a randomized algorithm which takes \(x \in \{0,1\}^{n}\) and randomness \(r \in \{0,1\}^{n}\) as input and produces output \(y \in \{0,1\}^{n}\). An algorithm P is called probabilistic polynomial-time (PPT) if P is allowed to make random choices and the computation of P(x; r) terminates in at most a polynomial number of steps in \(|x|\), for \(x \in \{0,1\}^{n}\), \(r \in \{0,1\}^{n}\). Let \({\mathbb {E}} = \{E_{k}\}_{k \in N}\), \({\mathbb {F}} = \{F_{k}\}_{k \in N}\) be two ensembles; \({\mathbb {E}} \underset{c}{\approx }\ {\mathbb {F}}\) denotes computational indistinguishability, meaning that for every PPT distinguisher D, \(|{\text {Pr}}[D(E_{k}) = 1] - {\text {Pr}}[D(F_{k})= 1]| \le \epsilon (n)\).
Similarly, \({\mathbb {E}} \underset{s}{\approx }\ {\mathbb {F}}\) denotes statistical indistinguishability in the computationally unbounded scenario. \({\mathcal {H}}_{\infty }(X)\) and \(\tilde{{\mathcal {H}}}_{\infty }(X|Y)\) denote the min-entropy and conditional average min-entropy of the random variable X. \(\delta _{0}[i]\), \(\delta _{1}[i]\) are two arrays used to store the results of tampering queries, whereas \(\mu _{0}[i]\), \(\mu _{1}[i]\) store the results of leakage queries at each invocation (\(i \in [q]\), \(q \in poly(n)\)) in Algorithm 3 and Algorithm 4. Table 2 summarizes the notation. We now present some definitions and lemmas related to the code construction.
Definition 2.1.1
(Split-State Model) Let M be a codeword which consists of two shares \(M= ({M}_{0}, {M}_{1}\)), stored in two different parts of the memory \({\textsf{M}}_{L}\), \({\textsf{M}}_{R}\) respectively. Each tampering attempt f = \((f_{0}, f_{1})\) is described by two arbitrarily chosen functions applied to the codeword as f = \((f_0({M}_{0}), f_1({M}_{1}))\) in an independent way. A model satisfying the above property is said to be a split-state model.
Definition 2.1.2
(Non-persistent Tampering) Let f = \((f_{0}, f_{1})\) be the tampering function and M a codeword split into two shares \(M= ({M}_{0}, {M}_{1}\)). The tampering experiment is said to be non-persistent if the tampering functions are always applied to the initial encoding of the codeword. Moreover, this model considers the scenario where an adversary has access to an n-bit auxiliary memory beyond the active memory and can copy the original codeword to the auxiliary memory. Each subsequent attack can then be performed on the auxiliary memory and the tampered codeword placed back into the original memory.
Lemma 2.1.1
A random variable X over the set \({\mathcal {X}}\) has min-entropy \({\mathcal {H}}_{\infty }(X) = -\log \max _{x \in {\mathcal {X}}} {\text {Pr}}[X=x]\). It captures the best probability, \(2^{-{\mathcal {H}}_{\infty }(X)}\), of guessing X by an unbounded adversary.
Lemma 2.1.2
A random variable X over \({\mathcal {X}}\) has conditional average min-entropy given some information Y over \({\mathcal {Y}}\), denoted \(\tilde{{\mathcal {H}}}_{\infty }(X|Y) = -\log {\mathbb {E}}_{y \in {\mathcal {Y}}} \max _{x \in {\mathcal {X}}} {\text {Pr}}[X=x|Y=y]\). It captures the probability of guessing X when some related information about X is available to the adversary through side-channel leakage.
Lemma 2.1.3
For a random variable X and another random variable Y, \(\tilde{{\mathcal {H}}}_{\infty }(X|Y) \ge {\mathcal {H}}_{\infty }(X) - l\), where Y takes at most \(2^{l}\) possible values.
Lemma 2.1.4
For a random variable X and two correlated random variables \(Y_{1},Y_{2}\), we get \(\tilde{{\mathcal {H}}}_{\infty }(X|Y_{1},Y_{2}) \ge \tilde{{\mathcal {H}}}_{\infty }(X|Y_{1}) - l\), where \(Y_{2}\) takes at most \(2^{l}\) possible values.
Lemma 2.1.5
Let \(\tau\) be the (possibly randomized) leakage function used by an adversary on a variable X. Then \(\tilde{{\mathcal {H}}}_{\infty }(X|\tau (X)) \ge {\mathcal {H}}_{\infty }(X) - l\), where \(\tau (X)\) generates l bits of leakage through the side channel.
Lemma 2.1.6
Let X, Y be correlated random variables and \(\tau\) a leakage function used by an adversary A. Then \(\tilde{{\mathcal {H}}}_{\infty }(X|\tau (Y)) \ge \tilde{{\mathcal {H}}}_{\infty }(X|Y)\).
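Lemma 2.1.3 can be checked numerically on a toy distribution. The sketch below (illustrative only, not part of the scheme) takes X uniform over 8 bits, leaks Y = the top \(l = 3\) bits, and evaluates both sides of the bound directly from the definitions:

```python
import math

# Numeric check of Lemma 2.1.3: X uniform over 8 bits, Y = top l = 3 bits.
probs = {x: 1 / 256 for x in range(256)}
H_inf = -math.log2(max(probs.values()))      # H_inf(X) = 8 bits

l = 3
exp_max = 0.0
for y in range(2 ** l):
    # restrict to the outcomes consistent with the leaked value y
    cond = {x: p for x, p in probs.items() if x >> (8 - l) == y}
    pr_y = sum(cond.values())
    exp_max += pr_y * max(p / pr_y for p in cond.values())
H_cond = -math.log2(exp_max)                 # conditional average min-entropy

assert H_inf == 8.0
assert abs(H_cond - (H_inf - l)) < 1e-9      # bound of Lemma 2.1.3, tight here
```

For uniform X the bound is tight: leaking a 3-bit value brings the average min-entropy from 8 bits down to exactly 5.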
Leakage resilient storage
A leakage-resilient storage (lrs) scheme encodes a message in such a way that the underlying message is secure against leakage attacks. It consists of a pair of algorithms (\(\mathfrak {Enc}^{lrs}\), \(\mathfrak {Dec}^{lrs}\)) with the following properties:

\(\mathfrak {Enc}^{lrs}\) takes as input a message m and randomness r, and produces the output shares \(p_{0}\), \(p_{1}\).

\(\mathfrak {Dec}^{lrs}\) takes \(p_{0}\), \(p_{1}\) as input and recovers m as output.
The original idea of the (\(\mathfrak {Enc}^{lrs}\), \(\mathfrak {Dec}^{lrs}\)) algorithms is used in the literature (Davì et al. 2010; Dziembowski and Faust 2011) for computationally unbounded adversaries. In our construction, it is used for computationally bounded adversaries (Faust et al. 2014a). The leakage experiment is defined below:
Initially, a counter ctr is set to 0. When strings are passed into \({\mathcal {O}}^{l}(p_{0},.)\), \({\mathcal {O}}^{l}(p_{1},.)\) along with a leakage function \(\tau (.)\), the leakage values are calculated through \(\tau (p_{0})\), \(\tau (p_{1})\) and their lengths are added to ctr, as long as \(ctr \le l\) for each part. The oracle terminates if \(ctr > l\), and any further query returns \(\bot\).
A storage scheme is said to be a strong lrs if an adversary cannot distinguish between two arbitrarily chosen messages m and \(m^{'}\) except with negligible probability, i.e.,
\({\textbf {Adv}}_{\mathfrak {leak}_{A}^{\beta }}^{strong}(A) = |{\text {Pr}}[A(\mathfrak {leak}_{A,m}^{\beta })=1] - {\text {Pr}}[A(\mathfrak {leak}_{A,m^{'}}^{\beta })=1]| \le \epsilon (n)\), where m, \(m^{'}\) \(\in\) \(\{ 0,1 \}^{n}\) and \(\epsilon (n)\) denotes a negligible function.
Robust non-interactive zero knowledge
Let \({\mathcal {R}}\) be a relation for the language \({\mathfrak {L}}\), denoted \({\mathfrak {L}}^{{\mathcal {R}}}\) = { \(m :\exists ~w\) such that \({\mathcal {R}}(m,w)=1 \}\), with \(m \in {\mathcal {M}}\). A robust non-interactive zero-knowledge (NIZK) proof system for \({\mathfrak {L}}^{{\mathcal {R}}}\) consists of a set of algorithms \((CRSGen, Prove, Vrfy, S= (S_{0}, S_{1}), Xtr)\), defined as follows. CRSGen takes a security parameter \(1^{n}\) as input and generates \(\alpha \in \{0,1\}^{n}\) as a common reference string (CRS). Prove takes \(\alpha\), a label \(\lambda\), and \((m,w) \in {\mathcal {R}}\) as input and produces a proof \(\pi =Prove^{\lambda }(\alpha ,m,w)\) as output. The deterministic Vrfy algorithm outputs true when verification of the statement succeeds, i.e., \(Vrfy^{\lambda }(\alpha ,m,Prove^{\lambda }(\alpha ,m,w))=1\). The algorithm S consists of two simulators \(S_{0}\) and \(S_{1}\): \(S_{0}\) generates a CRS and the trapdoor key, whereas \(S_{1}\) plays the simulated game with an adversary A. Xtr outputs the hidden witness of the relation \({\mathcal {R}}(m,w)\). The system satisfies all the properties below, as mentioned in De Santis et al. (2001):

Completeness. For every \(m \in {\mathfrak {L}}^{{\mathcal {R}}}\), all w such that \({\mathcal {R}}(m,w)=1\), and all \(\alpha\) \(\leftarrow CRSGen(1^{n})\), we require \({\text {Pr}}[Vrfy(\alpha ,m,Prove(\alpha ,m,w))=1]=1\).

Multi-theorem zero knowledge. An honestly computed proof reveals nothing beyond the validity of the statement. Formally, for every probabilistic polynomial-time adversary A, the real experiment Real(n) and the simulated experiment Simulated(n) are computationally indistinguishable, i.e., \(Real(n) \underset{c}{\approx } Simulated(n)\). Real(n) and Simulated(n) are described below:
$$\begin{aligned} Real(n)= & {} \left\{ \begin{array}{c} \alpha \leftarrow CRSGen(1^{n}); {\mathcal {L}} \leftarrow A^{Prove(\alpha ,.,.)}(\alpha )\\ output: {\mathcal {L}} \end{array} \right\} \\ Simulated(n)= & {} \left\{ \begin{array}{c} (\alpha ,pk)\leftarrow S_{0}(1^{n}); {\mathcal {L}} \leftarrow A^{S_{1}(\alpha ,.,pk)}(\alpha )\\ output: {\mathcal {L}} \end{array} \right\} \end{aligned}$$ 
Extractability. For every PPT adversary A, there exist a PPT algorithm Xtr and a negligible function \(\epsilon\) such that for security parameter n, \({\text {Pr}}[ G^{Xtr} = 1] \le \epsilon (n)\), where the game \(G^{Xtr}\) is described below.
$$\begin{aligned} G^{Xtr} = \left\{ \begin{array}{l} (\alpha ,pk,sk)\leftarrow S_{0}(1^{n})\\ (m,\pi ) \leftarrow A^{S_{1}(\alpha ,.,pk)}(\alpha ); w \leftarrow Xtr(\alpha ,(m,\pi ),sk)\\ (m,\pi ) \notin {\mathcal {Q}} \wedge {\mathcal {R}}(m,w) \ne 1 \wedge Vrfy(\alpha ,m,\pi ) = 1 \\ \end{array} \right\} , \end{aligned}$$
\({\mathcal {Q}}\) is the set of \((m,\pi )\) pairs that the adversary A queries to \(S_{1}\).
In Liu and Lysyanskaya (2012) and Faust et al. (2014a), the authors show that if the proof statement is modified, the verification algorithm should not proceed further. We use the same approach in our construction. Moreover, the proof algorithm supports a public label \(\lambda\), and this label is incorporated with the statement of the message m in the above algorithms, i.e., \(Prove^{\lambda }(.,.,.)\), \(Vrfy^{\lambda }(.,.,.)\), \(Xtr^{\lambda }(.,.,.),\) \(S_{1}^{\lambda }(.,.,.)\), etc.
Pseudorandom permutation
Let the block cipher \({\mathfrak {E}}\): \(\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) be a mapping from message space \({\mathcal {M}}\) to ciphertext space \({\mathcal {C}}\) for a fixed key k. An adversary A plays the pseudorandom permutation (prp) security game with a prp oracle \({\mathcal {O}}_{prp}()\) and a random permutation oracle \({\mathcal {O}}_{R}()\). The pseudorandom permutation security advantage is defined as follows: \({\textbf {Adv}}_{{\mathfrak {E}}}^{prp}(A\)) = \(|{\text {Pr}}[A^{{\mathcal {O}}_{prp}()}=1] - {\text {Pr}}[A^{{\mathcal {O}}_{R}()}=1]|\).
\({\textbf {Adv}}_{{\mathfrak {E}}}^{prp}(q,t) = \smash {\displaystyle \max _{A}} \{{\textbf {Adv}}_{{\mathfrak {E}}}^{prp}(A)\},\) where q is the maximum number of queries with time at most t.
A bit b is chosen as \(b \xleftarrow \$ \{0,1\}\), and the adversary A has to guess its value. If b = 0, A interacts with \({{\mathcal {O}}_{prp}()}\), and if b = 1, A interacts with \({{\mathcal {O}}_{R}()}\). \({{\mathcal {O}}_{prp}()}\) returns the encryption \({\mathfrak {E}}_{k}(m)\) with \(k \xleftarrow \$ {\mathcal {K}}\), whereas \({{\mathcal {O}}_{R}()}\) returns the output of a uniformly random permutation on m.
\({\mathfrak {K}}\) is the total key set, whereas \({\mathcal {K}}\) is the usable key set after removing weak and semi-weak keys, i.e., \({\mathcal {K}}= {\mathfrak {K}} \setminus \{ k^{weak} \cup k^{semiweak}\}\). Weak and semi-weak keys are keys under which an encryption scheme can be broken more efficiently than under usual keys.
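The structure of the prp game can be illustrated with a deliberately weak toy "cipher" \(E_{k}(m) = m \oplus k\) over 8-bit blocks (a hypothetical example, not one of the real ciphers discussed here); a two-query distinguisher achieves advantage close to 1 against it:

```python
import random

# Toy prp distinguishing game. The weak "cipher" E_k(m) = m XOR k leaks the
# relation E(m) XOR E(m') = m XOR m', which a random permutation almost never
# satisfies, so the adversary's advantage is nearly 1.
def weak_cipher(k):
    return lambda m: m ^ k

def random_perm(rng):
    table = list(range(256))
    rng.shuffle(table)                      # a uniformly random 8-bit permutation
    return lambda m: table[m]

def adversary(oracle):
    # Two queries suffice: check the XOR relation on inputs 0 and 1.
    return 1 if oracle(0) ^ oracle(1) == 0 ^ 1 else 0

def empirical_advantage(trials=2000):
    rng = random.Random(7)
    hits_prp = sum(adversary(weak_cipher(rng.randrange(256))) for _ in range(trials))
    hits_rand = sum(adversary(random_perm(rng)) for _ in range(trials))
    return hits_prp / trials - hits_rand / trials

assert empirical_advantage() > 0.9          # far from a pseudorandom permutation
```

A secure prp, by contrast, would keep this empirical gap negligible for any efficient adversary.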
Block cipher
A block cipher \({\mathfrak {E}}\): \(\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) is a keyed permutation which takes a message \(m \in {\mathcal {M}}\) and a key \(k \in {\mathcal {K}}\) and outputs \(c \in {\mathcal {C}}\); this is called encryption. Its inverse algorithm \({\mathfrak {D}}\), called decryption, takes \(c \in {\mathcal {C}}\), \(k \in {\mathcal {K}}\) and generates \(m \in {\mathcal {M}}\). Classical security models for block ciphers are the pseudorandom permutation (prp) and the strong pseudorandom permutation (sprp). In the prp security model, an adversary has access only to the encryption oracle, whereas in the strong pseudorandom permutation model the adversary has access to both the encryption and decryption oracles.
Moreover, the block cipher used in our construction has the following property: if the key is modified, the decryption algorithm should return \(\bot\). To achieve this property in our non-malleable code construction, we check the key in Algorithm 3 and Algorithm 4. The original key k of the cipher is stored across the two parts of the codeword \(M_{0}\) and \(M_{1}\). Whenever the original key k and the tampered key \(k^{'}\) differ, i.e., \(k^{'} \ne k\), the decryption algorithm \({\mathfrak {D}}_{k}()\) is not called and we return \(\bot\) from the decoding algorithm of the non-malleable code. Since the decryption algorithm of a block cipher under a different key \(k^{'}\) returns some message other than the original one, we need to restrict it in this way.
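The required behaviour, decryption under a wrong key returning \(\bot\), can be sketched with a toy construction: a SHA-256 keystream plus a key-commitment tag. This is an illustrative stand-in, not AES or the actual cipher of the scheme:

```python
import hashlib

# Toy cipher with a key check: encryption prepends a key-commitment tag,
# and decryption returns None (modelling ⊥) when the key does not match.
def _keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, m: bytes) -> bytes:
    tag = hashlib.sha256(b"key-commit" + key).digest()
    ct = bytes(a ^ b for a, b in zip(m, _keystream(key, len(m))))
    return tag + ct

def decrypt(key: bytes, c: bytes):
    tag, ct = c[:32], c[32:]
    if tag != hashlib.sha256(b"key-commit" + key).digest():
        return None   # ⊥: ciphertext was produced under a different key
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

c = encrypt(b"k" * 16, b"secret message")
assert decrypt(b"k" * 16, c) == b"secret message"
assert decrypt(b"x" * 16, c) is None
```

In the actual construction this check is realized by the key comparison in Algorithm 3 and Algorithm 4 rather than by a tag inside the ciphertext.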
Continuously nonmalleable codes
Leakage Oracle. The leakage oracle \({\mathcal {O}}^{l}(.)\) is a stateful oracle that calculates the total leakage obtained through some arbitrary leakage function \(\tau ()\). Algorithm 1 shows the leakage experiment. Initially, a counter ctr is set to 0. When strings are passed in, the leakage values are calculated and their length is added to ctr, as long as \(ctr \le l\). Otherwise, it returns \(\bot .\)
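A minimal sketch of this counter-based leakage oracle, assuming leakage functions return bit strings (Algorithm 1 itself is not reproduced here):

```python
class LeakageOracle:
    """Stateful leakage oracle: answers leakage queries on a stored share
    until more than l bits in total have been leaked, then returns ⊥ (None)."""
    def __init__(self, share: bytes, l: int):
        self.share = share
        self.l = l
        self.ctr = 0                 # total bits leaked so far

    def leak(self, tau):
        out = tau(self.share)        # tau returns a bit string, e.g. "010"
        self.ctr += len(out)
        if self.ctr > self.l:
            return None              # ⊥: leakage budget exhausted
        return out

oracle = LeakageOracle(b"\x2a" * 8, l=4)
first_bits = lambda s: format(s[0], "08b")[:3]   # leaks 3 bits per query
assert oracle.leak(first_bits) == "001"          # ctr = 3 <= l
assert oracle.leak(first_bits) is None           # ctr = 6 > l
```

In the split-state setting one such oracle is kept per memory half, so each side has its own independent budget of l bits.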
Tampering Oracle. The tampering oracle \({\mathcal {O}}^{T}_{cnmc}(.,.)\) in the split-state model is a stateful oracle that takes the two codeword halves \(M_{0},M_{1}\) and a tampering function f = (\(f_{0}\), \(f_{1}) \in {\mathcal {F}}\), with initial \(state =alive\), and performs the experiment defined in Algorithm 2.
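The oracle's self-destruct behaviour can be sketched as follows; the `decode` callback is a hypothetical placeholder for \(Dec_{k}\), and the non-persistent model is reflected by always tampering the initial encoding:

```python
class TamperingOracle:
    """Split-state tampering oracle: applies (f0, f1) independently to the two
    shares and self-destructs once decoding fails (non-persistent tampering)."""
    def __init__(self, m0, m1, decode):
        self.m0, self.m1 = m0, m1    # initial encoding, tampered fresh each round
        self.decode = decode
        self.state = "alive"

    def tamper(self, f0, f1):
        if self.state == "self-destruct":
            return None                          # ⊥ forever after self-destruct
        out = self.decode(f0(self.m0), f1(self.m1))
        if out is None:
            self.state = "self-destruct"
        return out

# Hypothetical decode: shares must XOR to a value below 128, else ⊥.
decode = lambda a, b: (a ^ b) if (a ^ b) < 128 else None
oracle = TamperingOracle(0b1010, 0b0110, decode)
identity = lambda x: x
assert oracle.tamper(identity, identity) == 0b1100
assert oracle.tamper(lambda x: x | 0b10000000, identity) is None  # self-destruct
assert oracle.tamper(identity, identity) is None                  # ⊥ from now on
```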
Coding Scheme. Let CNMC = (CRSGen, \(Enc_{k},Dec_{k})\) be a split-state coding scheme in the CRS model.

CRSGen algorithm takes security parameter \(1^{n}\) as input and generates output \(\alpha \in \{0,1\}^{n}\) as CRS.

\(Enc_{k}\) algorithm takes key \(k \in {\mathcal {K}}\), CRS \(\alpha\), message \(m \in {\mathcal {M}}\) and produces the codeword \((M_{0},M_{1})\).

\(Dec_{k}\) algorithm takes the codeword \((M_{0},M_{1})\), key \(k \in {\mathcal {K}}\), CRS \(\alpha\) and generates message m or special symbol \(\bot\).
Continuous Non-malleability. The coding scheme CNMC is said to be an l-leakage-resilient, q-continuously non-malleable code in the split-state model if for all messages \(m,m^{'} \in \{0,1\}^{n}\) and all probabilistic polynomial-time adversaries A, \({\textbf {Tamper}}_{cnmc}^{A,m}\) and \({\textbf {Tamper}}_{cnmc}^{A,m^{'}}\) are computationally indistinguishable, i.e.,
\({\textbf {Adv}}_{{Tamper}_{cnmc}^{A}}^{Strong}(A) = |{\text {Pr}}[A({\textbf {{Tamper}}}_{cnmc}^{A,m})=1] - {\text {Pr}}[A({\textbf {{Tamper}}}_{cnmc}^{A,m^{'}})=1]| \le \epsilon (n)\), where m, \(m^{'}\) \(\in\) \(\{ 0,1 \}^{n}\) and
\({\mathcal {L}}^{i}_{A}\) contains the view of an adversary with two parameters \(\mu\) and \(\delta\), for i tampering queries (\(i \le q \wedge q \in poly(n)\)). \(\mu\) stores the results of leakage queries \((|\mu | \le 2l)\) and \(\delta\) stores the results of tampering queries \((|\delta | \le q)\) from \({\mathcal {O}}^{T}_{cnmc}()\). When i = 1, our code behaves as a one-shot non-malleable code, and without any tampering query, i.e., i = 0, it acts as a leakage-resilient code (Davì et al. 2010).
Message Uniqueness. Let CNMC = \((CRSGen,Enc_{k},Dec_{k})\) be a split-state (l, q) continuously non-malleable code. It satisfies the message uniqueness property if there does not exist a valid pair \((M_{0},M_{1})\), \((M_{0},M_{1}^{'})\) with \(M_{1} \ne M_{1}^{'}\) such that \(\bot \ne Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'})) \ne \bot\), producing two valid messages m, \(m^{'}\). A continuously non-malleable code should not violate the uniqueness property, as mentioned in Faust et al. (2014a).
Code construction
We propose a construction of continuously non-malleable codes from a block cipher along with a robust non-interactive zero-knowledge (NIZK) proof. We then analyse the uniqueness property of the codeword and give the proof of security. Let CNMC = \((CRSGen,Enc_{k},Dec_{k})\) be a split-state (l, q) continuously non-malleable code in the CRS model based on a leakage-resilient storage (\(\mathfrak {Enc}^{lrs}\), \(\mathfrak {Dec}^{lrs}\)), on a block cipher \({\mathfrak {E}}\): \(\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) with the properties above incorporated, and on a robust non-interactive zero-knowledge (NIZK) proof system (CRSGen, Prove, Vrfy) with label support for the language \({\mathfrak {L}}^{{\mathfrak {E}}_{k_{0}}}\) = { \(c_{key}:\exists ~k\) such that \(c_{key}={\mathfrak {E}}_{k_{0}}(k) \}\), where \(k \in {\mathcal {K}}\), \(k \leftarrow k_{0} \oplus k_{1}\). The construction of our codeword is illustrated below:

I.
\({{CRSGen(1^{n}). }}\) The algorithm takes \(1^{n}\) as a security parameter and generates the common reference string \(\alpha\).

II.
\({{Enc_{k}(\alpha ,m). }}\) The encoding algorithm takes a key \(k \in {\mathcal {K}}\), the CRS \(\alpha\) and a message \(m \in {\mathcal {M}}\) as input. Initially, the message m with some randomness \(r \leftarrow \{0,1\}^{n}\) is fed into the leakage-resilient storage, i.e., \((p_{0},p_{1})\leftarrow \mathfrak {Enc}^{lrs}(m\Vert r)\). Next, it encrypts \(p_{0}\), \(p_{1}\) as \(c_{0} \leftarrow {\mathfrak {E}}_{k}(p_{0})\), \(c_{1} \leftarrow {\mathfrak {E}}_{k}(p_{1})\), where \({\mathfrak {E}}_{k}()\) is the encryption algorithm of a block cipher. The key k is divided into two shares \(k_{0}\), \(k_{1}\) and is reconstructed as \(k \leftarrow k_{0} \oplus k_{1}\). Further, the master key k is encrypted as \(c_{key}\) = \({\mathfrak {E}}_{k_{0}}(k)\). Thereafter, the proofs of the statements are calculated as \(\pi _{0}\) = \(Prove^{c_{1}}(\alpha ,k_{0},(c_{key},c_{0}))\), \(\pi _{1} = Prove^{c_{0}}(\alpha ,k_{1},(c_{key},c_{1}))\). Finally, it outputs the codeword \((M_{0},M_{1})\) = \((((k_{0},c_{0}),p_{0},(c_{key},c_{1}),\pi _{0},\pi _{1})\), \(((k_{1},c_{1}),p_{1},(c_{key},c_{0})\),\(\pi _{0},\pi _{1}))\). The codeword \((M_{0},M_{1})\) is stored in the memory \(({\textsf{M}}_{L},{\textsf{M}}_{R})\) respectively.

III.
\({{Dec_{k}(\alpha ,(M_{0},M_{1})). }}\) The decoding algorithm starts by parsing \(\pi _{0}\) and \(\pi _{1}\). Then it reconstructs the key \(k \leftarrow k_{0} \oplus k_{1}\) and performs the following steps:

IV.
Left & Right verification. If the verification of the statements in the codeword \((M_{0},M_{1})\) is not successful, i.e., either \(Vrfy^{c_{1}}(\alpha ,(c_{key},c_{0}),\pi _{0})\) or \(Vrfy^{c_{0}}(\alpha ,(c_{key},c_{1}),\pi _{1})\) returns 0, it outputs \(\bot\). Otherwise, go to the next step.

V.
Uniqueness check. If k = \({\mathfrak {D}}_{k_{0}}(c_{key})\), go to the next step. Otherwise, it returns \(\bot\).

VI.
Cross check & Decode. If \(p_{0} \ne {\mathfrak {D}}_{k}(c_{0})\) or \(p_{1} \ne {\mathfrak {D}}_{k}(c_{1})\), or the proofs \(\pi _{0}\), \(\pi _{1}\) stored in the two halves differ, it returns \(\bot\). Otherwise, if \(p_{0}\), \(p_{1}\) are equal in \(M_{0}\) and \(M_{1}\), it calls \(\mathfrak {Dec}^{lrs}(p_{0}\), \(p_{1})\).
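Steps I–VI can be sketched end-to-end with toy stand-ins for the primitives: 2-out-of-2 XOR sharing for the lrs, a SHA-256 keystream in place of the block cipher \({\mathfrak {E}}\), and binding hashes in place of the NIZK proofs. None of these toys provide the real security guarantees; the sketch only mirrors the data flow of the construction:

```python
import hashlib, secrets

H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

def lrs_enc(m):                      # toy lrs: 2-out-of-2 XOR sharing
    p0 = secrets.token_bytes(len(m))
    return p0, xor(m, p0)

def lrs_dec(p0, p1):
    return xor(p0, p1)

def E(k, m):                         # toy "cipher": key-derived keystream (<= 32 bytes)
    return xor(m, H(b"ks", k)[:len(m)])

def prove(crs, label, stmt):         # toy "NIZK": a hash bound to CRS and label
    return H(b"proof", crs, label, stmt)

def vrfy(crs, label, stmt, pi):
    return pi == prove(crs, label, stmt)

def enc(crs, k, m):
    p0, p1 = lrs_enc(m)
    c0, c1 = E(k, p0), E(k, p1)
    k0 = secrets.token_bytes(len(k))
    k1 = xor(k, k0)                  # k = k0 XOR k1
    c_key = E(k0, k)
    pi0 = prove(crs, c1, c_key + c0)     # label c1, statement (c_key, c0)
    pi1 = prove(crs, c0, c_key + c1)     # label c0, statement (c_key, c1)
    return ((k0, c0, p0, c_key, c1, pi0, pi1),
            (k1, c1, p1, c_key, c0, pi0, pi1))

def dec(crs, M0, M1):
    k0, c0, p0, c_key, c1, pi0, _ = M0
    k1, c1r, p1, _, _, _, pi1 = M1
    k = xor(k0, k1)
    # IV. Left & Right verification
    if not (vrfy(crs, c1, c_key + c0, pi0) and vrfy(crs, c0, c_key + c1r, pi1)):
        return None
    # V. Uniqueness check (toy cipher is an involution, so re-encrypt to compare)
    if E(k0, k) != c_key:
        return None
    # VI. Cross check & decode
    if E(k, p0) != c0 or E(k, p1) != c1r:
        return None
    return lrs_dec(p0, p1)

crs = secrets.token_bytes(16)
k = secrets.token_bytes(16)
M0, M1 = enc(crs, k, b"attack at dawn!!")
assert dec(crs, M0, M1) == b"attack at dawn!!"
# Flipping one bit of the key share k1 changes the reconstructed key,
# so the uniqueness check rejects with ⊥ (None).
M1_bad = (xor(M1[0], b"\x01" + b"\x00" * 15),) + M1[1:]
assert dec(crs, M0, M1_bad) is None
```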
Lemma 1
CNMC = \((CRSGen,Enc_{k},Dec_{k})\) satisfies the message uniqueness property when implemented with the block cipher described above.
Proof
Message uniqueness follows from property (b) (Subsection 2.5) of the underlying block cipher, i.e., a ciphertext generated under a key k decrypts to \(\bot\) under any different key \(k^{'}\). Hence, the integrity of the key has to be maintained. Suppose an adversary A generates a pair \((M_{0},M_{1})\), \((M_{0},M_{1}^{'})\) such that both are valid and \(M_{1} \ne M_{1}^{'}\). This means \(\bot \ne Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'})) \ne \bot\). This is only possible if the adversary can produce valid key pairs \((k_{0},k_{1})\) and \((k_{0},k_{1}^{'})\) with \(k_{1} \ne k_{1}^{'}\) such that \({\mathfrak {D}}_{k_{0}}(c_{key}) = k_{0} \oplus k_{1}\) (for \((M_{0},M_{1})\)) and \({\mathfrak {D}}_{k_{0}}(c_{key}) = k_{0} \oplus k_{1}^{'}\) (for \((M_{0},M_{1}^{'})\)). But this violates the deterministic property of the decryption algorithm: the same ciphertext \(c_{key}\) decrypted under the same key \(k_{0}\) cannot yield two different values. So \({\mathfrak {D}}_{k_{0}}(c_{key}) = (k_{0} \oplus k_{1})\) (for \((M_{0},M_{1})\)) \(\ne (k_{0} \oplus k_{1}^{'}) = {\mathfrak {D}}_{k_{0}}(c_{key})\) (for \((M_{0},M_{1}^{'})\)) is a contradiction. Therefore, whenever the key is modified, decoding returns \(\bot\).
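The pinning argument can be exercised concretely with the same toy cipher used earlier (a hash-keystream with an HMAC tag; an illustrative stand-in, not the paper's cipher): because decryption under a fixed key is deterministic, \({\mathfrak {D}}_{k_{0}}(c_{key})\) fixes a single admissible value of \(k_{1}\), so any \(k_{1}^{'} \ne k_{1}\) fails the uniqueness check.

```python
import hashlib
import hmac

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(key, pt):
    stream = hashlib.sha256(key).digest() * (len(pt) // 32 + 1)
    ct = _xor(pt, stream)
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def toy_decrypt(key, blob):
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        return None
    return _xor(ct, hashlib.sha256(key).digest() * (len(ct) // 32 + 1))

def uniqueness_holds(k0, k1, c_key):
    # The check D_{k0}(c_key) == k0 XOR k1 pins k1 to a single value,
    # because decryption under a fixed key is deterministic.
    return toy_decrypt(k0, c_key) == _xor(k0, k1)
```

Any adversarially chosen second share \(k_{1}^{'}\) makes `uniqueness_holds` false (except with negligible probability of guessing \(k_{1}\)), which is exactly the contradiction in the proof.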
Security proof idea of CNMC
Our idea is to develop the continuous version of non-malleable codes from block ciphers, with some additional properties imposed on the cipher. As noted by Gennaro et al. (2004), certain strong cryptographic assumptions are necessary when an adversary can tamper with a portion of the memory. To prove that the codeword is continuously non-malleable, a simulator for the \({\textbf {Tamper}}_{cnmc}^{A,m}\) experiment is developed. In the \({\textbf {Tamper}}_{cnmc}^{A,m}\) experiment, an adversary A performs all leakage and tampering oracle queries in the real environment on the codeword \((M_{0}, M_{1})\), stored in the memory parts \({\textsf{M}}_{L}\) and \({\textsf{M}}_{R}\) respectively, whereas the simulated experiment \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) simulates the adversary's view of the tampering experiment in an ideal scenario. We need to show that both experiments are indistinguishable except with negligible probability, i.e., \(\vert {\text {Pr}}[{\textbf {Tamper}}_{cnmc}^{A,m}=1] - {\text {Pr}}[{\textbf {SimTamper}}_{cnmc}^{A,0^{n}}=1]\vert \le \epsilon (n)\). The simulated tampering experiment takes \(r \leftarrow \{0,1\}^{n}\) and proceeds with an encryption of the message \(0^{n}\Vert r\), whereas the original tampering experiment proceeds with an encryption of the message \(m\Vert r\). Initially, \(m\Vert r\) is encoded using the leakage resilient storage, which splits the message into two halves and keeps the message secure as long as at most l bits are leaked from each part of the memory. Given the codeword \(M = (M_{0}, M_{1})\), the oracle continues as long as the simulated outputs from the left (Algorithm 3) and right (Algorithm 4) algorithms \((T_{0}, T_{1})\) are equal. The experiment stops when a decoding error is triggered, i.e., the outputs are not equal. From that point on, every further query returns \(\bot\) and the self-destruct state is invoked.
Since non-persistent tampering is considered, a separate memory \({\mathfrak {M}}\) of polynomial length is used to store the tampered versions of the codeword at each round, along with the leakage and tampering data.
The main difficulty of our experiment is to find the self-destruct index, i.e., the point from which the experiment returns \(\bot\) for every further query. Let \(\tau (M)\) be a leakage function on the codeword M. \(\tilde{{\mathcal {H}}}_{\infty }(M \mid \tau (M))\) denotes the average conditional min-entropy of the codeword M when some information is available through a side channel, i.e., the best chance of an adversary A guessing the message m from the codeword M given some side-channel information. Leakage functions are applied in an interleaved way by the adversary A on \((M_{0}, M_{1})\) as \(\tau ^{0}_{0}(M_{0})\), \(\tau ^{0}_{1}(M_{1})\), \(\tau ^{1}_{0}(M_{0})\), \(\tau ^{1}_{1}(M_{1})\), ..., \(\tau ^{i-1}_{0}(M_{0})\), \(\tau ^{i-1}_{1}(M_{1})\). The \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) experiment proceeds as long as the outputs produced by the two algorithms \(T_{0}\) and \(T_{1}\) are equal. From an information-theoretic viewpoint, this can be expressed as \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid \tau ^{i}_{0}(M_{0})) = \tilde{{\mathcal {H}}}_{\infty }(M_{1} \mid \tau ^{i}_{1}(M_{1}))\), i.e., the best chance of guessing the message m from the codeword \(M = (M_{0}, M_{1})\) is the same when some information is available to the adversary A through side-channel leakage. At each query invocation, the simulated experiment proceeds by checking the tampered outputs from both halves of the memory. If they match, the entire part is leaked, so that the total amount of leakage is upper bounded by \({\mathcal {O}}(n)\), where n denotes the security parameter. The experiment triggers self-destruct when the outputs are unequal. The simulated tampering experiment consists of \(S = (S_{0},S_{1})\) and works as follows. The simulator \(S_{0}\) generates an untamperable CRS and the keys \((\alpha ,pk,sk)\). The keys are passed to \(S_{1}\), which takes \(r \leftarrow \{0,1\}^{n}\) and an encoding of the message \(0^{n}\Vert r\), and invokes \((T_{0}, T_{1})\) to simulate the tampering experiment as long as the outputs are equal.
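The entropy bookkeeping behind the self-destruct index can be summarized in one bound. This is a sketch under the assumption that Lemma 2.1.3 is the standard chain rule for average conditional min-entropy; the exact statement of the lemma is not reproduced in this excerpt.

```latex
% Each side's leakage is bounded by l bits in total, so conditioning on
% all of an adversary's leakage queries costs at most l bits of entropy:
\tilde{\mathcal{H}}_{\infty}\bigl(M_b \,\big|\, \tau^{0}_{b}(M_b),\ldots,\tau^{i}_{b}(M_b)\bigr)
  \;\ge\; \mathcal{H}_{\infty}(M_b) \;-\; l,
  \qquad b \in \{0,1\}.
% While T_0 and T_1 keep returning equal outputs, both halves lose
% entropy at the same rate, which is the invariant the simulator tracks.
```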
The simulator \(S_{1}\) produces the simulated proofs of the statements \(\pi _{0} = S_{1}^{c_{1}}(\alpha ,(c_{key},c_{0}),pk)\) and \(\pi _{1} = S_{1}^{c_{0}}(\alpha ,(c_{key},c_{1}),pk)\). Then, it calls the algorithms \(T_{0}\) and \(T_{1}\) in an interleaved manner. Algorithm \(T_{0}\) simulates the left part \(M_{0}\) of a (simulated) codeword and algorithm \(T_{1}\) simulates the right part \(M_{1}\). Both algorithms proceed by parsing \(M_{0}\) and \(M_{1}\). Each calculates leakage through \((\tau ^{i}_{0},\tau ^{i}_{1})\) and stores the value in \(\mu _{b}[i]\). Then, it applies the tampering function \(f^{i}_{0}\) on \(M_{0}\) and \(f^{i}_{1}\) on \(M_{1}\), and compares the tampered codeword \(M^{'}\) with the original codeword M. If both are the same, \(\delta _{b}[i]\) is set to \(same^{*}\). Next, it verifies the proof of the statement; if verification succeeds, \(T_{b}\) proceeds further, otherwise \(\delta _{b}[i]\) is set to \(\bot\). Further, the original and tampered proofs of the statement are compared, and the corresponding values are stored in \(\delta _{b}[i]\). The extractor algorithm Xtr retrieves the key \(k^{'}_{0}\) in algorithm \(T_{1}\) and \(k^{'}_{1}\) in algorithm \(T_{0}\), and the key \(k^{'}\) is formed as \(k^{'} \leftarrow k^{'}_{0} \oplus k^{'}_{1}\). Next, the uniqueness condition of the key \(k^{'}\) is checked against k, and if they are the same, decoding is performed to retrieve the message \(m^{'}\).
Now, we discuss why the known attacks cannot be performed on the proposed construction. First, if an adversary tampers with \(c_{0}\) in \(M_{0}\) and changes it to some related value \(c_{0}^{'}\), the NIZK proof \(\pi _{0}\) must change to \(\pi _{0}^{'}\). Since the two values \(\pi _{0}\), \(\pi _{0}^{'}\) differ, the experiment returns \(\bot\) by the property of robust NIZK. The adversary would also have to make the same changes in \(M_{1}\), which is hard without knowing a witness, by the robustness of the proof. Moreover, if an adversary tampers with the key k and changes it to \(k^{'}\), the NIZK proof must differ and decryption with \(k^{'}\) returns \(\bot\) as per cipher property (b). Hence, the codeword is secure against continuous tampering attacks. In the next section, we discuss the security of the construction in detail.
Proof of security
Theorem 1
Let \({\mathfrak {E}}: \{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) be a block cipher with message space \({\mathcal {M}}\), key space \({\mathcal {K}}\) and ciphertext space \({\mathcal {C}}\), let \((\mathfrak {Enc}^{lrs}, \mathfrak {Dec}^{lrs})\) be an \(l^{'}\)-leakage resilient storage, and let (CRSGen, Prove, Vrfy) be a robust NIZK proof system for a language \({\mathfrak {L}}^{{\mathcal {R}}}\) over the message space \({\mathcal {M}}\). Then CNMC = \((CRSGen,Enc_{k},Dec_{k})\) instantiated with the above primitives is \(((l+\gamma +\eta ), q)\) continuously non-malleable and l-leakage resilient under non-persistent tampering, where \(q = poly(n)\), \(\gamma = \log \vert {\mathcal {M}}\vert\), \(\eta = \log \vert {\mathcal {K}}\vert\), \(l^{'} \ge (2\,l + n)\) and n denotes the security parameter.
Proof
The proof of our theorem is quite involved. We develop a simulator that simulates the tampering experiment in an ideal scenario. It is shown that an adversary cannot distinguish between the real and simulated experiments except with negligible probability, i.e., \(\vert {\text {Pr}}[{\textbf {Tamper}}_{cnmc}^{A,m}=1] - {\text {Pr}}[{\textbf {SimTamper}}_{cnmc}^{A,0^{n}}=1]\vert \le \epsilon (n)\). In the \({\textbf {Tamper}}_{cnmc}^{A,m}\) experiment, an adversary A proceeds with q leakage and tampering queries in the real environment until the self-destruct state is invoked. The \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) experiment simulates the adversary's view in an ideal environment. Here, the simulator \(S = (S_{0}, S_{1})\) is constructed to execute the \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) experiment. The simulator \(S_{0}\) generates a triplet \((\alpha ,pk,sk)\) and passes it to \(S_{1}\); \(\alpha\) is an untamperable CRS and the (pk, sk) pair is used to produce the simulated proofs of statements in the Xtr algorithm. The goal of \(S_{1}\) is to simulate the actual tampering experiment. It consists of two algorithms \((T_{0},T_{1})\) with tampering functions \(f^{i}_{0}\) and \(f^{i}_{1}\) (\(i \le q \wedge q \in poly(n)\)). Algorithm \(T_{0}\) works on the codeword half \(M_{0}\) with tampering function \(f^{i}_{0}\) and \(T_{1}\) works on \(M_{1}\) with tampering function \(f^{i}_{1}\). The simulated experiment proceeds with an encoding of the message \(0^{n}\Vert r\), whereas the real experiment proceeds with the message \(m\Vert r\) (\(r \leftarrow \{0,1\}^{n}\)). To show that the simulation works properly, the distribution of the simulated experiment is changed incrementally until we reach the real tampering experiment \({\textbf {Tamper}}_{cnmc}^{A,m}\). At each step, a negligible amount of error is introduced. Such a change is not noticeable due to the security of the lrs scheme. In this way, the encryption of \(0^{n}\) switches to the codeword M, i.e., an encoding of the message m.
\(S_{1}\) calls \((T_{0},T_{1})\) in an interleaved manner and the experiment stops when the outputs of the two algorithms are unequal, i.e., \(T_{0}(M_{0},f^{i}_{0},r,i) \ne T_{1}(M_{1},f^{i}_{1},r,i)\). Any further query returns \(\bot\) and the experiment leads to self-destruct in \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\). Whenever the experiment triggers self-destruct, the security of continuous non-malleability reduces to the security of the underlying lrs scheme. Alternatively, if an adversary A breaks the security of continuous non-malleability, then there exists an efficient reduction that breaks the security of the lrs, contradicting the fact that the lrs scheme is secure. \(S_{1}\) simulates the actual reduction with \((T_{0},T_{1})\) in the following way.
Algorithm 3 illustrates the working of the simulated tampering experiment \(T_{0}\). It first parses the left part of the codeword and applies the leakage function \(\tau ^{i}_{0}()\). The maximum leakage tolerated by \(T_{0}\) is l. All leakage values are stored in the array \(\mu _{0}[i]\). Then, the tampered codeword \(M^{'}_{0}\) is obtained by applying \(f^{i}_{0}\) to \(M_{0}\), i.e., \(M^{'}_{0} = f^{i}_{0}(M_{0}) = ((k^{'}_{0},c^{'}_{0}),p^{'}_{0},(c_{key}^{'},c^{'}_{1}),\pi ^{'}_{0},\pi ^{'}_{1})\). If \(M_{0}\) and \(M^{'}_{0}\) are equal, \(\delta _{0}[i]\) is set to \(same^{*}\). Next, verification of the statement is checked; if it is unsuccessful, \(\delta _{0}[i]\) is set to \(\bot\) and the experiment stops. If the original proof of statement \(\pi\) and the tampered one \(\pi ^{'}\) are the same, \(\delta _{0}[i]\) is set to \(\bot\) and it returns \(\bot\). The extractor algorithm Xtr is run to extract \(k^{'}_{1}\) from the simulated proof of statement with the extraction key sk, i.e., \(k^{'}_{1} \leftarrow Xtr^{c_{0}^{'}}(\alpha ,((c_{key}^{'},c^{'}_{1}),\pi ^{'}_{1}),sk)\). Further, the key \(k^{'}_{1}\) is XORed with \(k^{'}_{0}\) to form the key \(k^{'}\), which is checked against \({\mathfrak {D}}_{k_{0}}(c_{key})\). If both are the same, \(p^{'}_{1} \leftarrow {\mathfrak {D}}_{k^{'}}(c^{'}_{1})\) is computed. Next, the \(\mathfrak {Dec}^{lrs}(p^{'}_{0},p^{'}_{1})\) algorithm is invoked to retrieve the message \(m^{'}\). Since the tampering experiment is non-persistent, a separate memory \({\mathfrak {M}}\) stores all the tampered codewords along with the leakage and tampering data, i.e., \(\delta _{0}[i]\) and \(\mu _{0}[i]\).
Algorithm 4 describes the simulated tampering experiment \(T_{1}\). It starts by parsing the right part \(M_{1}\) of the codeword and calculates leakage through \(\tau ^{i}_{1}()\). The maximum leakage tolerated by \(T_{1}\) is upper bounded by l. The array \(\mu _{1}[i]\) stores the leakage data and \(\delta _{1}[i]\) stores all the tampering information. At each query invocation, the tampering function \(f^{i}_{1}\) is applied to \(M_{1}\). Next, if verification of the statement with label \(c^{'}_{0}\) is successful, the proof of statement is compared with the tampered one. In case of successful comparison, the Xtr algorithm retrieves \(k^{'}_{0}\) from the simulated proof of statement, i.e., \(k^{'}_{0} \leftarrow Xtr^{c_{1}^{'}}(\alpha ,((c_{key}^{'},c^{'}_{0}),\pi ^{'}_{0}),sk)\). The key \(k^{'}\) is formed and compared with \({\mathfrak {D}}_{k_{0}}(c_{key})\). Finally, \(p^{'}_{0}\) is recovered and \(\mathfrak {Dec}^{lrs}(p^{'}_{0},p^{'}_{1})\) is invoked, which returns \(m^{'}\).
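The interleaved driver loop of \(S_{1}\) can be sketched as follows. This is a hedged structural sketch only: `T0` and `T1` are abstract callables standing in for Algorithms 3 and 4, `fs[i]` and `taus[i]` stand for the adversary's tampering and leakage functions at round i, and the arrays `delta` and `mu` mirror \(\delta _{b}[i]\) and \(\mu _{b}[i]\). The cryptographic work inside \(T_{0}\)/\(T_{1}\) is not reproduced.

```python
BOT = None          # stands for the special symbol ⊥
SAME = "same*"      # stands for same*, i.e., tampering left M unchanged

def run_simulated_tampering(T0, T1, M0, M1, fs, taus, r):
    # Driver loop of S1: run T0 and T1 alternately per query, record
    # leakage and tampering outcomes, and self-destruct at the first
    # round where the two halves produce different outputs.
    delta = []                 # per-round tampering outcomes
    mu = []                    # per-round leakage values
    self_destructed = False
    for i, ((f0, f1), (t0, t1)) in enumerate(zip(fs, taus)):
        if self_destructed:            # after self-destruct: always ⊥
            delta.append(BOT)
            continue
        mu.append((t0(M0), t1(M1)))    # leakage is recorded first
        out0 = T0(M0, f0, r, i)        # simulate the left half (Alg. 3)
        out1 = T1(M1, f1, r, i)        # simulate the right half (Alg. 4)
        if out0 != out1:               # decoding error: self-destruct
            delta.append(BOT)
            self_destructed = True
        else:
            delta.append(out0)
    return delta, mu
```

The index of the first `BOT` entry in `delta` is exactly the self-destruct index that the proof needs to locate.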
The simulator \(S_{1}\) runs the algorithms \(T_{0}\) and \(T_{1}\) alternately as long as their outputs are the same. Let \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid \tau ^{i}_{0}(M_{0}))\) be the average conditional min-entropy. It captures the best chance of guessing \(M_{0}\) when some information is available to the adversary A through the side-channel leakage \(\tau ^{i}_{0}(M_{0})\). Information theoretically, we can write \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid \tau ^{i}_{0}(M_{0})) = \tilde{{\mathcal {H}}}_{\infty }(M_{1} \mid \tau ^{i}_{1}(M_{1}))\) from the working strategy of the simulator \(S_{1}\). Since the leakage functions output at most l bits, \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid \tau ^{i}_{0}(M_{0}))\) can be bounded as follows (Lemma 2.1.3):

\(\tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid \tau ^{i}_{0}(M_{0})) \ge {\mathcal {H}}_{\infty }(M_{0}) - l.\)

Similarly,

\(\tilde{{\mathcal {H}}}_{\infty }(M_{1} \mid \tau ^{i}_{1}(M_{1})) \ge {\mathcal {H}}_{\infty }(M_{1}) - l.\)

Here, \(\tau ^{i}_{0}(M_{0})\) or \(\tau ^{i}_{1}(M_{1})\) can leak at most l bits as per the security of the lrs scheme. The simulator \(S_{1}\) runs until self-destruct is invoked or \(\bot\) is returned. Let q be the maximum number of queries made by A in \({\textbf {Tamper}}_{cnmc}^{A,m}\); it is assumed that the experiment stops at the \(q^{th}\) query. In \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\), the same number of queries is performed and the experiment returns \(\bot\) whenever the outputs of \(T_{0}\) and \(T_{1}\) differ. The algorithms \(T_{0}(M_{0},f^{q}_{0},r,q)\) and \(T_{1}(M_{1},f^{q}_{1},r,q)\) are l-leaky. For queries 1 to \((q-1)\), the corresponding bound follows from the assumption that a function output cannot be more informative than its own input, with the last inequality coming from Lemma 2.1.4. Moreover, \(M_{1}\) and \(T_{0}(M_{0},f^{q}_{0},r,q)\) do not give much useful information about \(M_{0}\) for guessing the message m; they decrease the min-entropy of \(M_{0}\) by at most \({\mathcal {O}}(n)\), i.e., its size. Hence, the security of the codeword reduces to the security of the leakage resilient storage.
At each query invocation, the tampered outputs from the two halves \((M_{0}, M_{1})\) are compared and, if they match, the entire codeword may be leaked. At the last query invocation, when the outputs from both sides are not the same (also \(\tau ^{q}_{0}(M_{0}) \ne \tau ^{q}_{1}(M_{1})\)), the entire tampered codeword is leaked, so that the total leakage is upper bounded by \({\mathcal {O}}(n)\). Apart from that, the lrs in the two parts of the codeword can tolerate leakage of up to \(2\,l\) bits (l bits from each side). Combining the parameters, we need \(l^{'}\) to be at least \((2\,l + n)\) for the simulator \(S_{1}\) to work properly.
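The parameter accounting above can be written as a single budget inequality. This is a sketch that only restates the counting argument in the text; no new bound is introduced.

```latex
% Leakage budget of the reduction to the lrs scheme:
%   - ordinary leakage queries: at most l bits from each half  (2l total)
%   - at self-destruct, the tampered codeword is leaked in full: O(n) bits
\underbrace{2\,l}_{\text{oracle leakage}} \;+\; \underbrace{\mathcal{O}(n)}_{\text{final leak}}
  \;\le\; l',
\qquad \text{hence the requirement } l' \ge 2\,l + n .
```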
Conclusion
In this work, we propose a generic method to construct continuously non-malleable codes from any block cipher in the split-state model. The length of the codeword depends on the block size of the underlying cipher. A non-persistent version of tampering with self-destruct capability is considered here. Further research can be pursued to construct super-strong continuously non-malleable codes, with or without self-destruct capability, against non-persistent tampering from block ciphers in the split-state model.
Availability of data and materials
No data were used in this research work.
References
Aggarwal D, Agrawal S, Gupta D, Maji HK, Pandey O, Prabhakaran M (2016) Optimal computational split-state non-malleable codes. In: Kushilevitz E, Malkin T (eds) TCC 2016, vol 9563. LNCS. Springer, Heidelberg, pp 393–417
Aggarwal D, Kazana T, Obremski M (2017) Inception makes non-malleable codes stronger. In: Kalai Y, Reyzin L (eds) TCC 2017, vol 10678. LNCS. Springer, Cham, pp 319–343
Aggarwal D, Döttling N, Nielsen JB, Obremski M, Purwanto E (2019) Continuous non-malleable codes in the 8-split-state model. In: Ishai Y, Rijmen V (eds) EUROCRYPT 2019, Part I, vol 11476. LNCS. Springer, Cham, pp 531–561
Aggarwal D, Dodis Y, Kazana T, Obremski M (2015) Non-malleable reductions and applications. In: Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pp 459–468
Aggarwal D, Dodis Y, Lovett S (2014) Non-malleable codes from additive combinatorics. In: STOC, pp 774–783
Banik S, Bogdanov A, Isobe T, Shibutani K, Hiwatari H, Akishita T, Regazzoni F (2015) Midori: a block cipher for low energy. In: Iwata T et al (eds) ASIACRYPT 2015, vol 9453. LNCS. Springer, Heidelberg, pp 411–436
Bellare M, Kohno T (2003) A theoretical treatment of related-key attacks: RKA-PRPs, RKA-PRFs, and applications. In: Biham E (ed) EUROCRYPT 2003, vol 2656. LNCS. Springer, Heidelberg, pp 491–506
Bellare M, Cash D, Miller R (2011) Cryptography secure against related-key attacks and tampering. In: Lee DH, Wang X (eds) ASIACRYPT 2011, vol 7073. LNCS. Springer, Heidelberg, pp 486–503
Bellare M, Paterson KG, Thomson S (2012) RKA security beyond the linear barrier: IBE, encryption and signatures. In: Wang X, Sako K (eds) ASIACRYPT 2012, vol 7658. LNCS. Springer, Heidelberg, pp 331–348
Boneh D, DeMillo RA, Lipton RJ (2001) On the importance of eliminating errors in cryptographic computations. J Cryptol 14(2):101–119
Chen B, Chen Y, Hostáková K, Mukherjee P (2019) Continuous space-bounded non-malleable codes from stronger proofs-of-space. In: CRYPTO, pp 467–495
Dachman-Soled D, Kulkarni M (2019) Upper and lower bounds for continuous non-malleable codes. In: PKC, pp 519–548
Damgård I, Faust S, Mukherjee P, Venturi D (2013) Bounded tamper resilience: how to go beyond the algebraic barrier. In: Sako K, Sarkar P (eds) ASIACRYPT 2013, Part II, vol 8270. LNCS. Springer, Heidelberg, pp 140–160
Davì F, Dziembowski S, Venturi D (2010) Leakage-resilient storage. In: Garay JA, De Prisco R (eds) SCN 2010, vol 6280. LNCS. Springer, Heidelberg, pp 121–137
De Santis A, Di Crescenzo G, Ostrovsky R, Persiano G, Sahai A (2001) Robust non-interactive zero knowledge. In: Kilian J (ed) CRYPTO 2001, vol 2139. LNCS. Springer, Heidelberg, pp 566–598
Dziembowski S, Faust S (2011) Leakage-resilient cryptography from the inner-product extractor. In: Lee DH, Wang X (eds) ASIACRYPT 2011, vol 7073. LNCS. Springer, Heidelberg, pp 702–721
Dziembowski S, Kazana T, Obremski M (2013) Non-malleable codes from two-source extractors. In: Canetti R, Garay JA (eds) CRYPTO 2013, vol 8043. LNCS. Springer, Heidelberg, pp 239–257
Dziembowski S, Pietrzak K, Wichs D (2018) Non-malleable codes. J ACM 65(4):1–32
Dziembowski S, Pietrzak K, Wichs D (2010) Non-malleable codes. In: Yao ACC (ed) ICS 2010, Tsinghua University Press, Beijing, pp 434–452
Faonio A, Nielsen JB, Simkin M, Venturi D (2018) Continuously non-malleable codes with split-state refresh. In: Preneel B, Vercauteren F (eds) ACNS 2018, vol 10892. LNCS. Springer, Cham, pp 1–19
Faust S, Mukherjee P, Nielsen JB, Venturi D (2014a) Continuous non-malleable codes. In: Lindell Y (ed) TCC 2014, vol 8349. LNCS. Springer, Heidelberg, pp 465–488
Faust S, Mukherjee P, Nielsen JB, Venturi D (2020) Continuously non-malleable codes in the split-state model. J Cryptol 33(4):2034–2077
Faust S, Mukherjee P, Venturi D, Wichs D (2014b) Efficient non-malleable codes and key-derivation for poly-size tampering circuits. In: EUROCRYPT, pp 111–128
Fehr S, Karpman P, Mennink B (2018) Short non-malleable codes from related-key secure block ciphers. IACR Trans Symm Cryptol, pp 336–352
Gennaro R, Lysyanskaya A, Malkin T, Micali S, Rabin T (2004) Algorithmic tamper-proof (ATP) security: theoretical foundations for security against hardware tampering. In: Naor M (ed) TCC 2004, vol 2951. LNCS. Springer, Heidelberg, pp 258–277
Ghosal AK, Ghosh S, Roychowdhury D (2022) Practical non-malleable codes from symmetric-key primitives in 2-split-state model. In: Ge C, Guo F (eds) Provable and practical security
Goldreich O, Micali S, Wigderson A (1991) Proofs that yield nothing but their validity for all languages in NP have zero-knowledge proof systems. J ACM 38(3):691–729
Handschuh H, Naccache D (2002) SHACAL: a family of block ciphers. Submission to the NESSIE project
Jafargholi Z, Wichs D (2015) Tamper detection and continuous non-malleable codes. In: Dodis Y, Nielsen JB (eds) TCC 2015, vol 9014. LNCS. Springer, Heidelberg, pp 451–480
Joan D, Vincent R (2002) The design of Rijndael. Springer-Verlag, New York Inc, Secaucus
Kalai YT, Kanukurthi B, Sahai A (2011) Cryptography with tamperable and leaky memory. In: Rogaway P (ed) CRYPTO 2011, vol 6841. LNCS. Springer, Heidelberg, pp 373–390
Kiayias A, Liu FH, Tselekounis Y (2016) Practical non-malleable codes from l-more extractable hash functions. In: Weippl ER, Katzenbeisser S, Kruegel C, Myers AC, Halevi S (eds) ACM CCS 2016, ACM Press, pp 1317–1328
Liu FH, Lysyanskaya A (2012) Tamper and leakage resilience in the split-state model. In: Safavi-Naini R, Canetti R (eds) CRYPTO 2012, vol 7417. LNCS. Springer, Heidelberg, pp 517–532
Ostrovsky R, Persiano G, Venturi D, Visconti I (2018) Continuously non-malleable codes in the split-state model from minimal assumptions. In: Shacham H, Boldyreva A (eds) CRYPTO 2018, Part III, vol 10993. LNCS. Springer, Cham, pp 608–639
Sergei P, Ross J (2002) Optical fault induction attacks. In: Revised Papers from the 4th International Workshop on Cryptographic Hardware and Embedded Systems, Springer, Heidelberg, pp 2–12
Acknowledgements
The authors did not receive support from any organization for the submitted work.
Funding
No funding is received for conducting this study.
Contributions
All the authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors have no relevant financial or nonfinancial interests to disclose. The authors have no financial or proprietary interests in any material discussed in this article.
Cite this article
Ghosal, A.K., Roychowdhury, D. Continuously non-malleable codes from block ciphers in split-state model. Cybersecurity 6, 25 (2023). https://doi.org/10.1186/s42400-023-00152-1