Enhancing Security Based on Feedback Mechanism in Neural Cryptography

Srivinay
Department of Computer Science, RLJIT, VTU University, Belgaum
[email protected]

Aanchal Malhotra
Department of Computer Science, Amity University, Noida
[email protected]

Abstract—A cryptographic system is designed for the exchange of information among the intended nodes without any leakage of information. A number of cryptographic techniques, of varying complexity, have been proposed to secure wireless communication. In this paper, we propose a simple cryptographic technique for generating a common secret key for exchanging information. To achieve this, we use the concept of neural networks in cryptography, a potential new source for public key cryptographic schemes that is not based on number theory. The partners use a key exchange protocol to generate a common secret key. This is accomplished by two Tree Parity Machines (TPMs) which, trained on their mutual outputs, synchronize to an identical time-dependent weight vector. The synchronized weights are used as the key in the encryption process. We also show why an attacker using a similar TPM is unlikely to converge to the same key. We further employ a feedback mechanism that enhances the security of the system: by adding a new parameter that creates confusion on the input values, our algorithm improves security and increases the difficulty of attacking the system. Finally, we implement two statistical test methods to check whether the key generated by the TPM algorithm is random.

Key Words—Feedback Mechanism, Neural Cryptography, Synchronization, Tree Parity Machine

I. INTRODUCTION

Neural networks [1] have been motivated right from their inception by the recognition that the human brain computes in an entirely different way from the conventional digital computer. The brain is a highly complex, nonlinear, parallel information-processing system. It has the capability to organize its structural constituents, known as neurons, so as to perform certain computations (e.g. pattern recognition, perception and motor control) many times faster than the fastest digital computer in existence today. Consider, for example, human vision, which is an information-processing task: it is the function of the visual system to provide a representation of the environment around us and, more importantly, to supply the information we need to interact with that environment. The brain routinely accomplishes perceptual recognition tasks (e.g. recognizing a familiar face in an unfamiliar scene) in approximately 100-200 ms, whereas tasks of much lesser complexity may take days on a conventional computer. An Artificial Neural Network (ANN) [2] is a system based on the operation of biological neural networks; it intends to model some of the functionality of the human nervous system. An ANN is an adaptive, most often nonlinear, system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the ANN parameters are fixed and the system is deployed to solve the problem at hand, called the testing phase. The neural network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.

Synchronization of neural networks [3, 4] is a special case of an online learning situation. The learning abilities of artificial neural networks with difficult input-output relations have attracted a lot of attention in recent years. To send a secret message over a public channel one needs a secret key for encryption, decryption or both. In 1976, Diffie and Hellman showed how to generate a secret key over a public channel for the exchange of secret messages. Recently it has been shown how synchronization of neural networks can be used to generate secret keys over a public channel. This method is not based on number theory but on a physical-like mechanism. Here we show how neural networks can produce a common secret key by exchanging bits over a public channel and by learning from each other [5].

II. NEURAL CRYPTOGRAPHY

A. Interacting Neural Networks and Cryptography:

Neural cryptography is based on the effect that two neural networks are able to synchronize by mutual learning. Two identical systems [6], starting from different initial conditions, can be synchronized by a common external signal. Synchronization has recently been observed in artificial neural networks as well: two networks which are trained on their mutual output can synchronize to a time-dependent state of identical synaptic weights [5]. This phenomenon has been applied to cryptography. In this case, the two partners A and B do not have to share a common secret key beforehand but use their identical weights as the secret key needed for encryption; the key is generated over a public channel. An attacker E who knows all the details of the algorithm and records any communication transmitted through this channel finds it difficult to synchronize with the parties, and hence to calculate the common secret key: synchronization by mutual learning (A and B) is much faster than learning by listening (E). Neural cryptography is also much simpler than the commonly used algorithms, which are mainly based on number theory or on quantum mechanics.

B. Algorithm:

Fig 1. Tree Parity Machine

The mathematical model used for representing a party is a Tree Parity Machine (TPM) [7], as shown in Figure 1. It consists of a multilayer network with K hidden units, N-dimensional inputs x and N-dimensional synaptic weights w. All input values are binary,

x_{i,j} ∈ {-1, +1}    (1)

where the index i = 1, 2, ..., K denotes the i-th hidden unit of the Tree Parity Machine and j = 1, 2, ..., N denotes the elements of the vector. The outputs of the K hidden perceptrons are calculated as

σ_i = sign(w_i · x_i)    (2)

and the weights, which define the mapping from input to output, are discrete numbers between -L and +L,

w_{i,j} ∈ {-L, -L+1, ..., +L}    (3)

The K hidden bits σ_i are combined into an output bit τ (tau) of each network. The total output of each network is given as

τ = ∏_{i=1}^{K} σ_i    (4)

Two Tree Parity Machines, one for A and one for B, start with random initial weights, which are kept secret. In each step a new random input vector is generated publicly. The partners then calculate the outputs of their neural networks and send them to each other. Afterwards the weight vectors are updated according to a suitable learning rule. Because both the inputs and weights are discrete, this procedure leads to full synchronization, w_i^A = w_i^B, after a finite number of steps. A and B can then use the weight vectors as a common secret key for encryption. The output bits τ of the two neural networks are used for the mutual training process. If the output bits are equal, the weights are changed using the Hebbian learning rule:

w_i(t+1) = w_i(t) + τ x_i    (5)

If a training step pushes any component w_{i,j} out of the interval [-L, +L], that component is replaced by ±L, correspondingly.

Consider, for example, the case τ^A = τ^B = 1 with K = 3. There are four possible configurations of the hidden units in each network: (+1,+1,+1), (+1,-1,-1), (-1,+1,-1) and (-1,-1,+1). In the first case all three weight vectors w_1, w_2, w_3 are changed; in the other three cases only the one weight vector whose hidden unit agrees with the output is changed. Neither the partner nor any opponent knows which of the weights is updated.
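To make the synchronization step concrete, the following is a minimal C sketch for the configuration used later in the paper (K = 3, N = 100, L = 3). The function names, the clip() helper and the sign(0) = +1 convention are illustrative assumptions, not the authors' implementation; note that only hidden units with σ_i = τ take part in the update, consistent with the example above.

    #define K 3    /* number of hidden units        */
    #define N 100  /* inputs per hidden unit        */
    #define L 3    /* bound on the discrete weights */

    /* Keep a weight inside [-L, +L] after an update. */
    static int clip(int w)
    {
        if (w >  L) return  L;
        if (w < -L) return -L;
        return w;
    }

    /* Compute sigma_i = sign(w_i . x_i) for each hidden unit and
     * the network output tau = product of the sigma_i (eqs. 2, 4). */
    int tpm_output(const int w[K][N], const int x[K][N], int sigma[K])
    {
        int tau = 1;
        for (int i = 0; i < K; i++) {
            int h = 0;                    /* local field of unit i */
            for (int j = 0; j < N; j++)
                h += w[i][j] * x[i][j];
            sigma[i] = (h >= 0) ? 1 : -1; /* sign(0) -> +1 by convention */
            tau *= sigma[i];
        }
        return tau;
    }

    /* Hebbian rule (eq. 5), applied by each party when the two
     * outputs agree; only units with sigma_i == tau are changed. */
    void hebbian_update(int w[K][N], const int x[K][N],
                        const int sigma[K], int tau)
    {
        for (int i = 0; i < K; i++) {
            if (sigma[i] != tau) continue;
            for (int j = 0; j < N; j++)
                w[i][j] = clip(w[i][j] + tau * x[i][j]);
        }
    }

In each round both parties would call tpm_output() on the same public input, exchange their τ values, and call hebbian_update() only when they agree; the loop repeats until the two weight matrices coincide.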

III. CRYPTANALYSIS

Any attacker who knows all the details of the protocol and all the information exchanged between A and B should not have the computational power to calculate the secret key [8].

A. Brute Force Attack:

To mount a brute force attack, an attacker has to test all possible keys, i.e. all possible values of the weights. With K hidden neurons, K·N input neurons and weight bound L, this gives (2L+1)^(K·N) possibilities. For example, the configuration K = 3, L = 3 and N = 100 gives 7^300 ≈ 3·10^253 key possibilities, making the attack impossible with today's computing power.

B. Simple Attack:

One of the basic attacks is mounted by an attacker E who owns the same tree parity machine as the parties A and B and trains it on the same public inputs. In each step there are three possible situations (a sketch of one attack step follows the list):

● Output(A) ≠ Output(B): none of the parties updates its weights.
● Output(A) = Output(B) = Output(E): all three parties update the weights of their tree parity machines.
● Output(A) = Output(B) ≠ Output(E): parties A and B update their tree parity machines, but the attacker cannot. Because of this situation his learning is slower than the synchronization of A and B.
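A sketch of one step of this attack, reusing tpm_output() and hebbian_update() from the sketch in Section II (again an illustration under the same assumptions, not the authors' code):

    /* One step of the simple attack: E owns a TPM with the same
     * structure and trains on the same public input x.          */
    void simple_attack_step(int wE[K][N], const int x[K][N],
                            int tauA, int tauB)
    {
        int sigmaE[K];
        int tauE = tpm_output(wE, x, sigmaE);

        if (tauA != tauB)
            return;                   /* A and B discard this step */
        if (tauE == tauA)
            hebbian_update(wE, x, sigmaE, tauE);
        /* else: E cannot update and falls behind the two parties  */
    }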

It has been proven that the synchronization of the two parties is faster than the learning of an attacker, and the gap can be widened by increasing the synaptic depth L of the neural network. This gives the protocol enough security: the attacker cannot find out the key.

C. Geometric Attack:

The geometric attack [9] performs better than the simple attack because the attacker takes τ^C and the local fields of his hidden units into account. In fact, it is the most successful method for an attacker using only a single tree parity machine. The attacker imitates one of the parties, but in the steps in which his output disagrees with the imitated party's output he negates, i.e. flips the sign of, one hidden unit in his network. The unit most likely to be wrong is the one with the minimal absolute value of the local field, and therefore that is the unit which is flipped. In practice an ensemble of independent attackers is used. Formally, the attacker constructs a single neural network C with the same structure as A and B and randomly initializes its weights. At each step the attacker is trained with the same input as the two parties, and the weights are updated by the following rules (sketched in code after the list):

● If A and B have different outputs, τ^A ≠ τ^B, the attacker does not update his network.
● If A and B have the same outputs, τ^A = τ^B, and τ^C = τ^A, the attacker updates C using the usual learning rule.
● If A and B have the same outputs, τ^A = τ^B, but τ^C ≠ τ^A, the attacker finds the hidden perceptron with the minimum absolute local field, negates that σ, and updates C assuming the corrected hidden bits and output τ^A.
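A C sketch of one step of the geometric attack, under the same illustrative assumptions as before (tpm_output() and hebbian_update() as in Section II; abs() from <stdlib.h>):

    #include <stdlib.h>   /* abs */

    /* One step of the geometric attack on attacker network C. */
    void geometric_attack_step(int wC[K][N], const int x[K][N],
                               int tauA, int tauB)
    {
        if (tauA != tauB)
            return;                          /* parties did not update */

        int sigma[K], h[K];
        int tauC = 1;
        for (int i = 0; i < K; i++) {
            h[i] = 0;
            for (int j = 0; j < N; j++)
                h[i] += wC[i][j] * x[i][j];  /* local field of unit i */
            sigma[i] = (h[i] >= 0) ? 1 : -1;
            tauC *= sigma[i];
        }

        if (tauC != tauA) {
            /* The unit with minimal |h_i| is the one most likely
             * wrong: negate its sigma before updating.            */
            int min_i = 0;
            for (int i = 1; i < K; i++)
                if (abs(h[i]) < abs(h[min_i]))
                    min_i = i;
            sigma[min_i] = -sigma[min_i];
        }
        hebbian_update(wC, x, sigma, tauA);  /* update assuming tauA */
    }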

Different attackers starting from randomly chosen states behave independently, and thus an ensemble of many attackers has a higher probability of success. For the geometric attack it has been found that the synaptic depth L determines the security of the system: the success probability decreases exponentially with L, while the synchronization time increases only proportionally to L². Therefore any desired level of security against this attack can be reached by increasing the range limit L.

IV. FEEDBACK MECHANISM

Feedback creates correlations between the weights and the inputs; the system therefore becomes sensitive to the learning rule. A tree parity machine cannot learn the bit sequence generated by another tree parity machine, since the two input vectors are completely separated by the feedback mechanism. We have therefore added a new parameter to our algorithm which increases the synchronization time as well as the difficulty of attacking the system. Figure 2 shows the generation of inputs by the feedback mechanism.

Fig 2. Feedback Mechanism in Tree Parity Machine

For the feedback scheme, there are three new evaluations to consider:

● If τ^A = τ^B, the weights are updated according to the Hebbian learning rule and the feedback mechanism is used to generate the next input.
● After each step the input is changed to X[i] = f(X[i]*σ*W[i][j]*τ), where for the σ value we take the parity of the hidden units.
● After R steps with different outputs, i.e. if τ^A ≠ τ^B for R consecutive steps, all input vectors are reset to public common random vectors.

As assumed, security is increased by the confusion on the input values. However, the confusion on the inputs also increases the synchronization time; a small change in the inputs therefore buys an increase in security at the cost of slower synchronization.

V. RANDOM NUMBER GENERATION TESTS

The test methods check whether the generated key is truly random. The NIST tests were developed to test the randomness of binary sequences produced by either hardware- or software-based cryptographic random or pseudorandom number generators. The test methods are:

A. Frequency Test:

The focus of the test is the proportion of zeroes and ones in the entire sequence. The purpose of this test is to determine whether the numbers of ones and zeroes in a sequence are approximately the same as would be expected for a truly random sequence. The test assesses the closeness of the fraction of ones to 1/2; that is, the numbers of ones and zeroes in a sequence should be about the same. All subsequent tests depend on the passing of this test.

Function Call: Frequency(n), where n is the length of the bit string. Additional input used by the function, but supplied by the testing code: the sequence of bits as generated by the RNG or PRNG being tested, which exists as a global structure at the time of the function call; ε = ε_1, ε_2, ..., ε_n.

Test Statistic and Reference Distribution: S_obs, the absolute value of the sum of the X_i in the sequence divided by the square root of the length of the sequence. The reference distribution for the test statistic is half-normal. (Note: if z, where z = S_obs/√2, is distributed as normal, then |z| is distributed as half-normal.) If the sequence is random, the plus and minus ones will tend to cancel one another out, so the test statistic will be about zero. If there are too many ones or too many zeroes, the test statistic will tend to be larger than zero.

Test Description:
● Conversion to ±1: the zeroes and ones of the input sequence ε are converted to values of -1 and +1 and added together to produce S_n = X_1 + X_2 + ... + X_n, where X_i = 2ε_i - 1.
● Compute the test statistic S_obs = |S_n| / √n.
● Compute the P-value = erfc(S_obs / √2), where erfc is the complementary error function.

First the key sequence is generated by neural synchronization, and the frequency test algorithm then checks whether the number of zeroes is about equal to the number of ones in the key sequence. The algorithm yields the P-value: if the computed P-value is < 0.01, the sequence is declared non-random; if the P-value is ≥ 0.01, we conclude that the key sequence is random.
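A direct C implementation of these three steps might look as follows (a sketch: erfc() and sqrt() are the standard <math.h> functions; the key bits are assumed to arrive as an array of 0/1 integers):

    #include <math.h>   /* erfc, sqrt, fabs */

    /* NIST frequency (monobit) test: returns the P-value for an
     * n-bit sequence eps[] with entries in {0, 1}.               */
    double frequency_test(const int *eps, int n)
    {
        double s_n = 0.0;
        for (int i = 0; i < n; i++)
            s_n += 2 * eps[i] - 1;          /* X_i = 2*eps_i - 1 */
        double s_obs = fabs(s_n) / sqrt((double)n);
        return erfc(s_obs / sqrt(2.0));     /* P-value           */
    }

As an illustration, a 300-bit key with 200 ones and 100 zeroes gives S_n = 100, S_obs = 100/√300 ≈ 5.77 and a P-value far below 0.01, so such a key would be rejected as non-random.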

B. Runs Test:

The focus of the runs test is the total number of runs in the sequence, where a run is an uninterrupted sequence of identical bits. A run of length k consists of exactly k identical bits and is bounded before and after by a bit of the opposite value. The purpose of the runs test is to determine whether the number of runs of ones and zeroes of various lengths is as expected for a random sequence. In particular, this test determines whether the oscillation between zeroes and ones is too fast or too slow.

Function Call: Runs(n), where n is the length of the bit string. Additional input used by the function, but supplied by the testing code: the sequence of bits as generated by the RNG or PRNG being tested, which exists as a global structure at the time of the function call; ε = ε_1, ε_2, ..., ε_n.

Test Statistic and Reference Distribution: V_n(obs), the total number of runs (i.e. the total number of zero-runs plus the total number of one-runs) across all n bits. The reference distribution for the test statistic is a χ² distribution.

Test Description: the runs test carries out a frequency test as a prerequisite.
● Compute the pre-test proportion π of ones in the input sequence: π = (Σ_j ε_j) / n.
● Determine whether the prerequisite frequency test is passed: if |π - 1/2| ≥ τ, the runs test need not be performed (i.e. the test should not be run because of a failure to pass the frequency test), and the P-value is set to 0.0000. Note that for this test τ = 2/√n is pre-defined in the test code.
● Compute the test statistic V_n(obs) = Σ_{k=1}^{n-1} r(k) + 1, where r(k) = 0 if ε_k = ε_{k+1} and r(k) = 1 otherwise.
● Compute the P-value = erfc( |V_n(obs) - 2nπ(1-π)| / (2√(2n)·π(1-π)) ).

First the key sequence is generated by the neural synchronization algorithm, and the runs test algorithm then checks the total number of runs in the sequence. Based on the resulting P-value we conclude: if the computed P-value is < 0.01 the sequence is non-random, and if the P-value is ≥ 0.01 we conclude that the key sequence is random.
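The corresponding C sketch, under the same assumptions as the frequency test sketch above:

    #include <math.h>   /* erfc, sqrt, fabs */

    /* NIST runs test: returns the P-value for an n-bit sequence
     * eps[] in {0, 1}, or 0.0 when the frequency prerequisite
     * |pi - 1/2| >= 2/sqrt(n) fails.                             */
    double runs_test(const int *eps, int n)
    {
        double pi = 0.0;
        for (int i = 0; i < n; i++)
            pi += eps[i];
        pi /= n;                             /* proportion of ones */

        if (fabs(pi - 0.5) >= 2.0 / sqrt((double)n))
            return 0.0;                      /* prerequisite failed */

        long v_obs = 1;                      /* V_n(obs): run count */
        for (int k = 0; k < n - 1; k++)
            if (eps[k] != eps[k + 1])
                v_obs++;                     /* r(k) = 1 on a change */

        double num = fabs((double)v_obs - 2.0 * n * pi * (1.0 - pi));
        double den = 2.0 * sqrt(2.0 * (double)n) * pi * (1.0 - pi);
        return erfc(num / den);
    }

Note that for the slow-oscillation example discussed later (100 ones, 73 zeroes, 127 ones in 300 bits), π ≈ 0.76 and |π - 1/2| ≈ 0.26 already exceeds 2/√300 ≈ 0.12, so the prerequisite fails and the test reports P = 0.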

VI. RESULTS AND DISCUSSIONS

The implementation of the algorithms, the attacks and the test methods was done using socket programming in C in a Linux environment.

A. Synchronization:

The two tree parity machines, which form the communicating parties, generate a secret key by the process of synchronization. Figure 3 plots various values of the weight range against the number of steps taken to synchronize. It is observed that as the range increases, the number of steps taken for synchronization also increases.

Fig 3. Range vs. Number of steps

B. Geometric Attack:

In the geometric attack the attacker has the same algorithm as both sender and receiver and imitates one of the parties. At each step the attacker is trained with the same input as the two parties, and the weights are updated as in the key synchronization algorithm. The attacker then compares his output with that of one of the parties: if the outputs match, the attacker updates his weights using the learning rule; if they do not match, the attacker negates the σ with the minimum local field and then updates his weights. Using this approach the attacker may obtain the key. To secure the present system we increase the range of the weights of each node; as shown in Figure 4, the probability of success of the attacker then decreases. It is observed that as the range increases, the probability of success decreases.

Fig 4. Range vs. Probability of Success

C. Feedback Mechanism:

In the feedback mechanism we add a new parameter to our algorithm which increases the synchronization time as well as the difficulty of attacking the system. At each step we change the input by X[i] = f(X[i]*σ*W[i][j]*τ), where for the σ value we take the parity of the hidden units, so security is increased by the confusion on the input values. However, the confusion on the inputs also increases the synchronization time. Figure 5 plots various values of the range against the number of steps taken to synchronize; it is observed that as the range of the weights increases, the time taken for synchronization is much longer.

Fig 5. Range vs. Number of steps
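The paper leaves the function f unspecified; as an illustration only, the following C sketch assumes f is the sign function, so that the fed-back inputs stay in {-1, +1} (sigma_parity is the parity, i.e. the product, of the hidden bits):

    /* Feedback step: regenerate the inputs from the current inputs,
     * weights and output, X[i] = f(X[i]*sigma*W[i][j]*tau), with f
     * assumed here to be sign().  Run after each agreeing step.     */
    void feedback_inputs(int x[K][N], const int w[K][N],
                         int sigma_parity, int tau)
    {
        for (int i = 0; i < K; i++)
            for (int j = 0; j < N; j++) {
                int v = x[i][j] * sigma_parity * w[i][j] * tau;
                x[i][j] = (v >= 0) ? 1 : -1;  /* f assumed: sign(0) -> +1 */
            }
    }

If τ^A ≠ τ^B for R consecutive steps, the parties would instead reset x to fresh public random vectors, as described in Section IV.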

D. Randomness Test:

The purpose of this test is to determine whether the numbers of ones and zeroes in the generated key sequence are approximately the same as would be expected for a truly random sequence.

Frequency Test: if the P-value is small (< 0.01), this is caused by |S_n|, and hence S_obs, being large. Large positive values of S_n indicate too many ones, and large negative values of S_n indicate too many zeroes. Figure 6 plots the computed P-values against the threshold value 0.01 for the frequency test.

Fig 6. Comparison of P-values with given value 0.01 in Frequency Test

Runs Test: if the P-value is ≥ 0.01, the conclusion is that the sequence is random. Note that a large value of V_n(obs) indicates an oscillation in the string which is too fast, while a small value indicates that the oscillation is too slow. (An oscillation is a change from a one to a zero or vice versa.) A fast oscillation occurs when there are many changes; e.g. 010101010 oscillates with every bit. A stream with a slow oscillation has fewer runs than would be expected in a random sequence; e.g. a sequence containing 100 ones, followed by 73 zeroes, followed by 127 ones (a total of 300 bits) has only three runs, whereas about 150 runs would be expected. Figure 7 plots the computed P-values against the threshold value 0.01 for the runs test.

Fig 7. Comparison of P-values with given value 0.01 in Runs Test

VII. CONCLUSION

This paper has introduced the concepts of neural networks and shown how they can be used in the field of cryptography. The purpose of generating the key through the process of synchronization is to overcome the complexities and disadvantages associated with key distribution in previously existing techniques. Neural cryptography is based on repulsive and attractive stochastic forces. The feedback mechanism increases the synchronization time of the two networks and decreases the probability of a successful attack; feedback thus yields a small enhancement in the security of the system. After synchronization, the system generates a pseudorandom bit sequence which passes random number tests such as the frequency test and the runs test. When another network is trained on this bit sequence, it is not possible to extract any information on the statistical properties of the sequence. The concept of neural cryptography is still under research; better and improved techniques are being sought to check whether the proposed algorithm is stable, and the neural algorithm must be made secure against attack, thereby providing a secure way to transmit and receive messages.

REFERENCES

[1] J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation. Addison-Wesley, Redwood City, 1991.
[2] J. M. Zurada, Introduction to Artificial Neural Systems.
[3] R. Metzler, W. Kinzel, and I. Kanter, "Interacting neural networks."
[4] W. Kinzel and I. Kanter, "Disorder generated by interacting neural networks: application to econophysics and cryptography," J. Phys. A: Math. Gen., 36(43):11173-11186, 2003.
[5] M. Rosen-Zvi, E. Klein, I. Kanter, and W. Kinzel, "Mutual learning in a tree parity machine and its application to cryptography," Phys. Rev. E, 2002.
[6] W. Kinzel and I. Kanter, "Interacting neural networks and cryptography," in B. Kramer, editor, Advances in Solid State Physics, volume 42, pages 383-391. Springer, Berlin, 2002.
[7] M. Volkmer and S. Wallner, "Tree parity machine rekeying architectures," IEEE Transactions on Computers, 2004.
[8] L. N. Shacham, E. Klein, R. Mislovaty, I. Kanter, and W. Kinzel, "Cooperating attackers in neural cryptography," Phys. Rev. E, 69(6):066137, 2004.
[9] A. Klimov, A. Mityagin, and A. Shamir, "Analysis of neural cryptography," ASIACRYPT 2002.

AUTHORS

Srivinay is a lecturer at R.L. Jalappa Institute of Technology, Doddaballapur, Bangalore. His research interests include Advanced Computer Networks, Cryptography and Network Performance Analysis.

Aanchal Malhotra is pursuing a B.Tech degree in Computer Science at Amity School of Engineering and Technology, Amity University, Noida. Her research interests include Cryptography, Network Security, Computer Networks, Web Technologies and Database Management Systems.
