Privacy-Preserving Protocols for Perceptron Learning Algorithm in Neural Networks

Saeed Samet

Ali Miri

School of Information Technology and Engineering (SITE) University of Ottawa Ottawa, Canada K1N 6N5 Email: [email protected]

School of Information Technology and Engineering (SITE) University of Ottawa Ottawa, Canada K1N 6N5 Email: [email protected]

Abstract—Neural networks have become increasingly important in areas such as medical diagnosis, bio-informatics, intrusion detection, and homeland security. In most of these applications, one major issue is preserving the privacy of individuals' private information and sensitive data. In this paper, we propose two secure protocols for the perceptron learning algorithm when the input data is horizontally or vertically partitioned among the parties. These protocols can be applied to both linearly separable and non-separable datasets; not only does the data belonging to each party remain private, but the final learning model is also securely shared among those parties. The parties can then jointly and securely apply the constructed model to predict the output corresponding to their target data. These protocols can also be used incrementally, i.e. they can process newly arriving data, adjusting the previously constructed network.

Index Terms—Security and Privacy Preserving; Neural Networks; Data Mining and Machine Learning; Distributed Data Structures.

I. INTRODUCTION

Preserving the privacy of sensitive data in machine learning and data mining methods is an important issue in data communication and knowledge-base systems. Therefore, many protocols have been proposed for different methods, such as classification using decision trees [1], [2], [3], [4], [5], [6], [7], [8], association rule mining [9], [10], [11], [12], and clustering [13], [14], [15], [16], [17]. The common purpose of these protocols is to keep the individual and sensitive data of the involved parties private, while each party can gain some knowledge from the output of the system without violating the others' privacy. The neural network learning system is one of the important methods used in data mining and machine learning. However, to the best of our knowledge, there is no privacy-preserving technique for collaboratively producing a neural network in the case of two or more parties, each holding a private set of data. The only work in this area [18] deals with a client-server environment, and it assumes that the neural network learning model already exists. There are different algorithms and architectures in neural networks, such as the perceptron learning algorithm, the back-propagation model, and radial basis networks. In this paper, we propose two new privacy-preserving protocols for the perceptron learning algorithm, which is a feedforward neural network.

These protocols are applied in a distributed data environment, in which the original training data is vertically or horizontally partitioned among several parties and no party wants to reveal her/his own private data to the others. In Section 2, neural networks are briefly reviewed along with background on privacy-preserving algorithms for neural networks. Two protocols, for horizontally and vertically partitioned data, are proposed in Section 3. Section 4 is dedicated to the conclusions and future work.

II. BACKGROUND AND RELATED WORK

A neural network is an information-processing system that processes information in a way inspired by biological nervous systems. Common applications of such systems are classification, pattern recognition, function approximation, and filtering. Typically, a neural network has a large number of highly interconnected processing elements and builds a model that captures patterns or complex relationships between input and output data. The key point of this system is that it can learn and modify itself using its inputs. It is composed of a set of artificial neurons, or computational cells, and a set of one-way connectors connecting those cells together [19]. The perceptron learning algorithm is a fundamental and important algorithm in feedforward neural network learning systems, proposed by Rosenblatt in [20].

Algorithm 1 The perceptron learning algorithm
1: Set W to the 0 vector.
2: Select a training example <E^k, C^k>.
3: if W correctly classifies E^k, i.e., {W · E^k > 0 and C^k = +1} or {W · E^k < 0 and C^k = −1} then
4:   Do nothing
5: else
6:   Modify W by adding or subtracting E^k according to whether the correct output C^k is +1 or −1: W' = W + C^k E^k
7: end if
8: Go to step 2.

Algorithm 1, adopted from [19], shows the steps of this method. The weights vector W = <W_0, W_1, ..., W_p> for a neural network model with p inputs is computed from the set of N training examples E = {<E^1, C^1>, <E^2, C^2>, ..., <E^N, C^N>}. Each E^k is a p-vector input, E^k = <u_1, u_2, ..., u_p>, and it is extended to a (p + 1)-vector by adding an additional input u_0 with value +1 for every training example, for the bias. C^k is the corresponding output of E^k. The perceptron learning algorithm is also used as a base algorithm for multi-layer constructive algorithms such as the Tower and Pyramid algorithms. Therefore, once privacy-preserving protocols for the perceptron learning algorithm are available, they can be extended to apply to other types of learning models with different configurations. As a baseline for the protocols that follow, a plain (non-private) version of the algorithm is sketched below.
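This Python sketch is our own minimal rendering of Algorithm 1, not code from the paper; the function name and the use of NumPy are illustrative choices.

```python
import numpy as np

def perceptron(examples, p, max_iters=1000):
    """Plain (non-private) perceptron learning, following Algorithm 1.
    `examples` is a list of (E_k, C_k) pairs, where E_k is a length-p
    input vector and C_k is +1 or -1."""
    w = np.zeros(p + 1)                    # step 1: W = 0 (includes bias weight W_0)
    rng = np.random.default_rng()
    for _ in range(max_iters):
        e, c = examples[rng.integers(len(examples))]  # step 2: pick an example
        x = np.concatenate(([1.0], e))     # extend with the bias input u_0 = +1
        if np.sign(x @ w) != c:            # step 3: misclassified (or on the boundary)?
            w = w + c * x                  # step 6: W' = W + C_k * E_k
    return w
```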

Barni et al. [18] presented an asymmetric privacy-preserving neural network protocol for a client-server environment. In this protocol, a client, or data owner, is able to process her/his data and receive the resulting output using the neural network model owned by the server. In this approach, it was assumed that the learning model has already been created by the server. They proposed three algorithms with different levels of privacy preservation. In the first one, the weights vector is private, and the weighted sum of the input data and weights vector is computed securely: the parties use a secure dot product protocol, in which the weights come from the server and the inputs are provided by the client. The client then applies the activation function, which is known to both parties, to the dot product result. In the second algorithm, the activation function is considered private to the server. Two types of functions were assumed. The first was a threshold function, which can be handled with a secure comparison protocol. For the second type, functions that can be approximated by a polynomial, Oblivious Polynomial Evaluation (OPE) was used to securely evaluate the private function. In the third algorithm, the server prevents the client from correctly reconstructing the underlying neural network model by adding fake cells to the system and resetting the outbound weights such that the final result remains unchanged. One important privacy problem in these algorithms is that the client, after sending a number of requests to the server, is able to learn the model. Also, in the second algorithm, although OPE is used to hide the private activation function, it can be disclosed to the client after receiving the results of several requests from the server.

Orlandi et al. [21] proposed another protocol, which improves the number of interactions between the neural network owner and the data provider and prevents information leakage at the intermediate levels. All the intermediate computations in this protocol are concealed, and for the evaluation of the activation function the terms of the comparison are obfuscated and sent to the data owner by the neural network owner, such that the real values are not revealed while the correct output can still be calculated by the data owner. Homomorphic encryption, the Paillier cryptosystem [22], and a secure dot product are used as building blocks in this protocol.

Another work, by Secretan et al. [23], presents a protocol for the Probabilistic Neural Network (PNN) learning system using the Bayesian optimal classifier. They assume that the training data is known to all the parties and is already loaded into memory prior to the initiation of the PNN's performance phase, and each party can issue a private or public query on testing data to get the prediction of the class value. Also, because Secure Sum is used as a sub-protocol, it is assumed that at least three parties are involved.

III. NEW PROTOCOLS FOR DISTRIBUTED DATA

The goal of the following protocols is to run the perceptron learning algorithm in a privacy-preserving way when the input data is horizontally or vertically partitioned among multiple parties. In each step of the perceptron algorithm, private output shares are created from the private input shares, and the final model is privately shared among the parties. Our protocols also cover both separable and non-separable datasets. According to the definition provided by Goldreich [24], privacy means that each party can only get information that can be inferred from its own input and the output available to that party. However, we believe that the output share of a party should not help it gain access to the others' private and sensitive information. Thus, in our protocols the final model is not released as a whole to each party; rather, it is partitioned into private shares among the parties. Also, we assume that the parties are semi-honest. A semi-honest party properly follows the protocol, except that she/he might use received intermediate outputs to deduce private information belonging to the other parties.

A. New Protocol For Horizontally Partitioned Data

In this section, we propose a protocol for creating a neural network learning model, using the perceptron algorithm, in which the data is horizontally partitioned among several parties. In a horizontal or homogeneous distribution, each party owns the values of all attributes for some records, or rows, of the whole database. At the end of this protocol, the model is privately shared among the parties involved, and they can jointly and securely use the model to predict the output for a target data item. The model considered here is a single-layer network using a threshold function, shown in Algorithm 1, as the activation function. Suppose dataset D is horizontally partitioned into D_1, D_2, ..., D_m owned by parties P_1, P_2, ..., P_m respectively, with |D_i| = n_i, 1 ≤ i ≤ m. Each item d_{i,j} ∈ D_i, 1 ≤ j ≤ n_i, is a pair <E_{i,j}, C_{i,j}>, in which E_{i,j} = <1, u_{i,j,1}, u_{i,j,2}, ..., u_{i,j,p}> is the input vector, C_{i,j} is its corresponding output, and p is the number of input cells. Our goal is to compute the network weights vector W = <W_0, W_1, ..., W_p> for this set of training examples.

To preserve the privacy of the learning model, this vector will be privately divided among all the parties such that w_i = <w_{i,0}, w_{i,1}, ..., w_{i,p}> belongs to P_i and:

W_k = Σ_{j=1}^{m} w_{j,k},   0 ≤ k ≤ p.   (1)

At the beginning of the protocol, W should be initialized to a vector with small values. Each party randomly and privately generates its own share vector. To make sure that the combined vector W has small values, the parties agree to choose their values so that the summation (1) for each k is a small number. For instance, parties with an odd index start with a negative number and alternately change the sign of each subsequent value, and parties with an even index do the opposite; the summation of corresponding entries is then a small value (see the sketch below). Now, each party P_i has the following information:

d_{i,1} = <E_{i,1}, C_{i,1}>
...
d_{i,n_i} = <E_{i,n_i}, C_{i,n_i}>
E_{i,j} = <1, u_{i,j,1}, u_{i,j,2}, ..., u_{i,j,p}>
w_i = <w_{i,0}, w_{i,1}, ..., w_{i,p}>
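The following Python sketch shows one way to realize this initialization; the function name and the scale bound are our own illustrative choices.

```python
import random

def init_weight_share(i, p, scale=0.01):
    """Party P_i's private share w_i = <w_{i,0}, ..., w_{i,p}> of the
    initial weights vector. Odd-indexed parties start negative and
    alternate sign along k; even-indexed parties do the opposite, so
    the sums W_k in (1) stay small while every share stays private."""
    start = -1 if i % 2 == 1 else 1
    return [start * ((-1) ** k) * random.uniform(0.0, scale)
            for k in range(p + 1)]
```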

The steps of the protocol are as follows:

1) Selecting a party: One party is randomly selected from the m parties, say P_i.

2) Selecting an item: Party P_i randomly generates an integer j, 1 ≤ j ≤ n_i, and selects item d_{i,j}.

3) Computing weighted sum: Now, R = E_{i,j} · W has to be computed:

R = E_{i,j} · W
  = E_{i,j} · <W_0, W_1, ..., W_p>
  = E_{i,j} · <w_{1,0}, w_{1,1}, ..., w_{1,p}> +
    E_{i,j} · <w_{2,0}, w_{2,1}, ..., w_{2,p}> +
    ...
    E_{i,j} · <w_{i,0}, w_{i,1}, ..., w_{i,p}> +
    ...
    E_{i,j} · <w_{m,0}, w_{m,1}, ..., w_{m,p}>

In this equation, E_{i,j} · <w_{i,0}, w_{i,1}, ..., w_{i,p}> is computed locally by P_i, because both sides of the dot product belong to this party. The value of each of the other dot products, i.e. E_{i,j} · <w_{k,0}, w_{k,1}, ..., w_{k,p}> for k ≠ i, is computed jointly by P_i and P_k using a secure dot product protocol, such as [25]. Thus, we have:

E_{i,j} · <w_{k,0}, w_{k,1}, ..., w_{k,p}> = R_{i,k} + R_k

in which R_{i,k} and R_k are the private output shares of P_i and P_k respectively. If we let R_i = R_{i,1} + R_{i,2} + ... + R_{i,m}, where R_{i,i} denotes the dot product P_i computed locally, then:

R = E_{i,j} · W = (R_{i,1} + R_{i,2} + ... + R_{i,m}) + (R_1 + R_2 + ... + R_{i-1} + R_{i+1} + ... + R_m)

and therefore:

R = R_1 + R_2 + ... + R_m.   (2)

A sketch of the two-party secure dot product used in this step is given below.
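The following Python sketch simulates a Goethals-style secure dot product [25] built on additively homomorphic encryption. It assumes the third-party python-paillier package (imported as phe); both parties run in one process here, so it illustrates the data flow only, not a hardened deployment.

```python
import random
from phe import paillier  # third-party additively homomorphic encryption

def secure_dot_product(alice_vec, bob_vec):
    """Return additive shares (s_alice, s_bob) of alice_vec . bob_vec.
    Alice never sees bob_vec; Bob only sees encryptions of alice_vec."""
    # Alice: encrypt her vector under her key pair and "send" it to Bob.
    pub, priv = paillier.generate_paillier_keypair(n_length=1024)
    enc_alice = [pub.encrypt(a) for a in alice_vec]

    # Bob: homomorphically compute Enc(a . b) and blind it with his share.
    s_bob = random.randrange(-2**32, 2**32)
    enc_blinded = sum(c * b for c, b in zip(enc_alice, bob_vec)) - s_bob

    # Alice: decrypt to obtain her share; the two shares sum to a . b.
    s_alice = priv.decrypt(enc_blinded)
    return s_alice, s_bob

# Example: s_a + s_b reconstructs the dot product 1*4 + 2*5 + 3*6 = 32.
# s_a, s_b = secure_dot_product([1, 2, 3], [4, 5, 6]); assert s_a + s_b == 32
```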

4) Applying activation function: The threshold function is applied by comparing sign(R) and sign(C_{i,j}). According to Algorithm 1, we do not need to compute the value of the summation (2); we can instead run a sub-protocol to find sign(R). If we convert this summation into a multiplication of output shares, then the sign of the summation can be determined from the number of negative shares. For this purpose, we first use the Secure Multi-party Addition sub-protocol, such that:

R = Σ_{l=1}^{m} R_l = Π_{l=1}^{m} r_l.

Then, each party P_k, k ≠ i, sends the sign of its private output share, i.e. sign(r_k), to P_i, and P_i, by counting the number of negative signs, determines sign(R) and compares it with sign(C_{i,j}), which already belongs to this party. This sign-counting step is sketched below.
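A small Python sketch of the sign determination; the function name is ours, and the multiplicative shares r_l are assumed to come from the Secure Multi-party Addition sub-protocol of [17].

```python
def sign_of_sum_from_shares(own_share, received_signs):
    """P_i's view in step 4: it holds its own share r_i and receives
    sign(r_k) from every other party. Since R = r_1 * ... * r_m, sign(R)
    is negative exactly when the number of negative shares is odd."""
    signs = received_signs + [1 if own_share >= 0 else -1]
    negatives = sum(1 for s in signs if s < 0)
    return -1 if negatives % 2 == 1 else 1
```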

5) Adjusting the weights vector: If sign(E_{i,j} · W) = sign(C_{i,j}), nothing needs to be done; otherwise, W has to be adjusted using E_{i,j} and C_{i,j} as follows:

W = W + C_{i,j} * E_{i,j}
  = <W_0, W_1, ..., W_p> + C_{i,j} * E_{i,j}
  = <w_{1,0}, w_{1,1}, ..., w_{1,p}> +
    <w_{2,0}, w_{2,1}, ..., w_{2,p}> +
    ...
    <w_{i-1,0}, w_{i-1,1}, ..., w_{i-1,p}> +
    (<w_{i,0}, w_{i,1}, ..., w_{i,p}> + C_{i,j} * E_{i,j}) +
    <w_{i+1,0}, w_{i+1,1}, ..., w_{i+1,p}> +
    ...
    <w_{m,0}, w_{m,1}, ..., w_{m,p}>

As we see in this equation, only P_i has to modify its weights vector, by adding C_{i,j} * E_{i,j} to it, which is known to this party; the weights vectors belonging to the other parties are not changed.

6) Go to step 1.

Although the perceptron learning algorithm works correctly for a separable set of training examples, it is not suitable for non-separable datasets. For a non-separable set of training examples, other algorithms are used, such as the Pocket algorithm [19], in which the weights vector with the longest run of correct classifications is kept as the iteration proceeds. Thus, we modify our protocol to handle the Pocket algorithm as well. Two integer variables, Run_W and Run_WPocket, have to be stored and updated during the iteration: Run_W is the number of items correctly classified in a row by the current weights vector, and Run_WPocket is the number of items correctly classified in a row by the pocket weights vector; both counters can be public and known to all the parties. Also, a weights vector W_Pocket is kept to hold the weights vector with the longest run of correct classifications, and it is privately shared among the parties in the same way as the current weights vector W. Before the protocol starts, Run_W and Run_WPocket are set to 0 and W_Pocket is set to W, meaning that each party holds its own private share of W_Pocket. During the protocol, in step 4, if sign(E_{i,j} · W) = sign(C_{i,j}), Run_W is increased by one, and if Run_W > Run_WPocket, each party replaces its own share of W_Pocket with its current share of W, and Run_WPocket is set to Run_W. With this modification, and with a specified threshold for the desired number of items correctly classified by a weights vector, the algorithm is able to find and keep the best weights vector for a non-separable set of training data. The per-party bookkeeping is sketched below.
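A minimal sketch of this bookkeeping as run by each party; the names are ours, and, following the standard Pocket algorithm, we assume a misclassification (which triggers a weight update) resets the current run.

```python
def pocket_update(party_state, classified_correctly, run_w, run_w_pocket):
    """Run by every party after step 4. `party_state` holds this party's
    private shares of W ('w') and W_Pocket ('w_pocket'); the two run
    counters are public and identical at all parties."""
    if classified_correctly:
        run_w += 1
        if run_w > run_w_pocket:
            party_state['w_pocket'] = list(party_state['w'])  # copy own share
            run_w_pocket = run_w
    else:
        run_w = 0  # the weight update that follows restarts the run
    return run_w, run_w_pocket
```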

For the security analysis, we use the composition theorem for secure protocols and go through the steps of the protocol. This theorem states that if all sub-protocols used in a protocol are privacy-preserving, such that the intermediate output of one sub-protocol becomes the input of the next sub-protocol as a private share, then the main protocol is also privacy-preserving [26], [27]. First, each party securely and randomly generates its own initial weights vector, so it is not known to the other parties. In step 2, the selected party randomly selects an item from its dataset; therefore, the other parties do not know the input vector values. In step 3, a secure dot product is applied to each pair of private vectors belonging to two parties, and the final result is divided into two private shares for the parties involved. Goethals et al. [25] proposed an efficient protocol for this secure two-party computation using homomorphic encryption, and we use it in our protocol. Thus, both sides remain unaware of each other's input and output. In step 4, because the summation is converted into a multiplication of private shares and the parties only send the signs of their outputs to the party P_i, their private values, i.e. the R_i's, are not revealed. We use the algorithms we proposed in [17] for Secure Multi-party Addition in this protocol. Using this building block, each party P_i holding a private input x_i obtains a private output share r_i such that:

Σ_{i=1}^{n} x_i = Π_{i=1}^{n} r_i

Finally, in step 5, only P_i's private share of the weights vector is locally modified by this party, and no information is exchanged. Therefore, the final shares of the weights vector are kept private.

B. New Protocol For Vertically Partitioned Data

In this section, a protocol is presented for producing a neural network model using the perceptron learning algorithm when the data is vertically partitioned among the parties. This means each party owns a subset of the attributes, and the class attribute, or output, C_i is public. As in the protocol for the horizontal case, the learning model is finally shared among the parties involved, and the prediction for testing data is done jointly and securely by all the parties.

Suppose dataset D is vertically partitioned into D_1, D_2, ..., D_m owned by parties P_1, P_2, ..., P_m respectively, where the number of attributes in D_i is n_i, such that:

Σ_{i=1}^{m} n_i = p

in which p is the number of all attributes. Each item d_i ∈ D is a pair <E_i, C_i>, in which:

E_i = <1, u_{1,1}, u_{1,2}, ..., u_{1,n_1}, u_{2,1}, u_{2,2}, ..., u_{2,n_2}, ..., u_{m,1}, u_{m,2}, ..., u_{m,n_m}>

is the input vector and C_i is its corresponding output. Note that u_{i,1}, u_{i,2}, ..., u_{i,n_i} belong to P_i, and let the first entry of E_i, the constant 1, be owned by the first party, P_1. Our goal is to compute the network weights vector W = <W_0, W_1, ..., W_p> for this set of training examples. To preserve the privacy of the computed learning model, this vector will be securely distributed among all the parties, such that:

W = <w_{1,0}, w_{1,1}, w_{1,2}, ..., w_{1,n_1}, w_{2,1}, w_{2,2}, ..., w_{2,n_2}, ..., w_{m,1}, w_{m,2}, ..., w_{m,n_m}>.
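For illustration, a small helper computing which slice of the attribute range each party owns under this layout; the function name and list convention are ours.

```python
def attribute_slices(n_counts):
    """Given n_counts = [n_1, ..., n_m], return each party's (start, end)
    positions within the attribute vector <u_1, ..., u_p>. Party P_1
    additionally owns the bias weight w_{1,0}."""
    slices, start = [], 0
    for n_i in n_counts:
        slices.append((start, start + n_i))
        start += n_i
    return slices

# Example: three parties owning 2, 3, and 1 attributes of p = 6.
# attribute_slices([2, 3, 1]) == [(0, 2), (2, 5), (5, 6)]
```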

Therefore, each party knows the initial, intermediate, and final weight values for its own attributes. Without loss of generality, we assume that W_0 = w_{1,0} is maintained by P_1. First, W should be initialized to a vector with small values; thus, each party randomly and privately generates its own part of the weights vector, i.e. P_i initializes w_{i,1}, w_{i,2}, ..., w_{i,n_i}. The steps of the protocol are as follows:

1) Selecting an item: One item d_i is randomly selected from the set of training examples.

2) Computing weighted sum: R = E_i · W is computed:

R = E_i · W
  = <1, u_{1,1}, ..., u_{1,n_1}, u_{2,1}, ..., u_{2,n_2}, ..., u_{m,1}, ..., u_{m,n_m}>
    · <w_{1,0}, w_{1,1}, ..., w_{1,n_1}, w_{2,1}, ..., w_{2,n_2}, ..., w_{m,1}, ..., w_{m,n_m}>
  = <1, u_{1,1}, ..., u_{1,n_1}> · <w_{1,0}, w_{1,1}, ..., w_{1,n_1}> +
    <u_{2,1}, ..., u_{2,n_2}> · <w_{2,1}, ..., w_{2,n_2}> +
    ...
    <u_{m,1}, ..., u_{m,n_m}> · <w_{m,1}, ..., w_{m,n_m}>
  = R_1 + ... + R_m

In this equation, R_i is computed locally by party P_i, as sketched below.
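A sketch of each party's local computation in step 2 (the function name is ours):

```python
def local_weighted_sum(u_block, w_block, bias_weight=None):
    """Party P_i's local share R_i of R = E_i . W over its own attribute
    block. P_1 passes its bias weight w_{1,0}, which multiplies the
    constant input 1; every other party leaves bias_weight as None."""
    r = sum(u * w for u, w in zip(u_block, w_block))
    return r + bias_weight if bias_weight is not None else r
```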

3) Applying activation function: sign(R) and sign(C_i) have to be compared. As in the previous protocol, we do not need to compute R itself: using the same Secure Multi-party Addition sub-protocol, the sum R_1 + ... + R_m is converted into a product of private shares. Then one party, say P_k, is selected, and each party P_j, j ≠ k, sends the sign of its output share to P_k. Now P_k, by counting the number of negative signs, determines the sign of the summation and compares it with sign(C_i).

4) Adjusting the weights vector: If sign(E_i · W) = sign(C_i), we do nothing; otherwise, W has to be adjusted using E_i and C_i as follows:

W = W + C_i * E_i
  = <w_{1,0}, w_{1,1}, ..., w_{1,n_1}, ..., w_{m,1}, ..., w_{m,n_m}>
    + C_i * <1, u_{1,1}, ..., u_{1,n_1}, ..., u_{m,1}, ..., u_{m,n_m}>
  = <w_{1,0} + C_i, w_{1,1} + C_i * u_{1,1}, ..., w_{1,n_1} + C_i * u_{1,n_1},
     ...
     w_{m,1} + C_i * u_{m,1}, ..., w_{m,n_m} + C_i * u_{m,n_m}>

Thus, each party modifies its own part of the weights vector, as the sketch below illustrates.
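A one-line local update per party (illustrative; P_1 additionally adds C_i to its bias weight w_{1,0}):

```python
def local_weight_update(w_block, u_block, c):
    """Step 4 for party P_i: adjust only this party's attribute weights,
    w <- w + C_i * u, with no communication required."""
    return [w + c * u for w, u in zip(w_block, u_block)]
```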

5) Go to step 1.

At the end, each party holds the weights corresponding to its own input attributes from the weights vector. Modifying this protocol to handle the Pocket algorithm is straightforward. We now go through the protocol to check that the parties' privacy is preserved during the algorithm. In the initialization step, each party, according to its attributes, randomly and privately generates its own part of the weights vector. In step 2, each dot product is computed locally by each party and no information is exchanged. In step 3, after running the Secure Multi-party Addition protocol, each party P_j, j ≠ k, only sends the sign of its private output to P_k, and no information about its input, i.e. R_j, is revealed. Finally, in step 4, each party locally modifies its own part of the weights vector. Both this protocol and the previous one, for the vertical and horizontal cases, can be applied incrementally: whenever the model needs to be adjusted with extra training data, we can take the constructed model, which is shared among the parties, and apply the same protocol to adjust the weights vector according to the new data.

IV. CONCLUSIONS AND FUTURE WORK

Neural networks are today an important subject in machine learning and data mining systems, and in most of the underlying applications, such as health, government, and commercial ones, privacy preservation is crucial. Therefore, the existing algorithms in this learning system need to be investigated and privacy-preserving protocols have to be proposed for them. In this paper, two new protocols were presented for the perceptron learning algorithm in a multi-party environment where data is horizontally or vertically partitioned. To preserve the privacy of the output model as well as the input data, the final weights vector is not released to all the parties involved. Instead, each party holds a private share of the model, and the parties can jointly use the model to predict the output for target data.

Neural networks offer many different algorithms for creating various models according to the type of data and the activation functions. Thus, a possible future work is to extend the protocols proposed in this paper to cover other types of configurations, such as continuous data, multi-layer models, stochastic activation functions like the sigmoid function, the back-propagation algorithm, and recurrent networks. Currently, we are working on the back-propagation algorithm with the sigmoid function as the activation function on continuous input and output data, to design privacy-preserving protocols for different types of data distributions in a multi-party environment.

REFERENCES

[1] Y. Lindell and B. Pinkas, "Privacy Preserving Data Mining," in CRYPTO, 2000, pp. 36–54.
[2] R. Agrawal and R. Srikant, "Privacy-Preserving Data Mining," in ACM SIGMOD Conference, 2000, pp. 439–450.
[3] M.-J. Xiao, L.-S. Huang, Y.-L. Luo, and H. Shen, "Privacy Preserving ID3 Algorithm over Horizontally Partitioned Data," in Parallel and Distributed Computing, Applications and Technologies, 2005, pp. 239–243.
[4] W. Du and Z. Zhan, "Building Decision Tree Classifier on Private Data," in CRPITS'14: Proceedings of the IEEE International Conference on Privacy, Security and Data Mining. Darlinghurst, Australia: Australian Computer Society, Inc., 2002, pp. 1–8.
[5] J. Vaidya and C. Clifton, "Privacy-Preserving Decision Trees over Vertically Partitioned Data," in Data and Application Security (DBSec), 2005, pp. 139–152.
[6] E. Suthampan and S. Maneewongvatana, "Privacy Preserving Decision Tree in Multi Party Environment," in Asia Information Retrieval Symposium, 2005, pp. 727–732.
[7] N. Zhang, S. Wang, and W. Zhao, "A New Scheme on Privacy-Preserving Data Classification," in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining. New York, NY, USA: ACM Press, 2005, pp. 374–383.
[8] S. Samet and A. Miri, "Privacy Preserving ID3 Using Gini Index over Horizontally Partitioned Data," accepted at the 6th ACS/IEEE International Conference on Computer Systems and Applications (AICCSA-08), Doha, Qatar, March 2008.
[9] C. Clifton, M. Kantarcioglu, and J. Vaidya, "Defining Privacy for Data Mining," in National Science Foundation Workshop on Next Generation Data Mining, Baltimore, MD, November 2002, pp. 126–133.
[10] M. Kantarcioglu and C. Clifton, "Privacy-Preserving Distributed Mining of Association Rules on Horizontally Partitioned Data," IEEE Trans. Knowl. Data Eng., vol. 16, no. 9, pp. 1026–1037, 2004.
[11] J. Z. Zhan, S. Matwin, and L. Chang, "Privacy-Preserving Collaborative Association Rule Mining," in DBSec, 2005, pp. 153–165.
[12] J.-L. Wang, C.-F. Xu, and Y.-H. Pan, "An Incremental Algorithm for Mining Privacy-Preserving Frequent Itemsets," in 2006 International Conference on Machine Learning and Cybernetics, Dalian, China, August 2006, pp. 1132–1137.
[13] S. Jha, L. Kruger, and P. McDaniel, "Privacy Preserving Clustering," in ESORICS, 2005, pp. 397–417.
[14] J. Vaidya and C. Clifton, "Privacy-Preserving k-Means Clustering over Vertically Partitioned Data," in KDD, 2003, pp. 206–215.
[15] G. Jagannathan and R. N. Wright, "Privacy-Preserving Distributed k-Means Clustering over Arbitrarily Partitioned Data," in KDD, 2005, pp. 593–599.
[16] G. Jagannathan, K. Pillaipakkamnatt, and R. N. Wright, "A New Privacy-Preserving Distributed k-Clustering Algorithm," in SDM, 2006.
[17] S. Samet, A. Miri, and L. Orozco-Barbosa, "Privacy Preserving k-Means Clustering in Multi-Party Environment," in SECRYPT 2007 International Conference on Security and Cryptography, Barcelona, Spain, 2007, pp. 381–385.
[18] M. Barni, C. Orlandi, and A. Piva, "A Privacy-Preserving Protocol for Neural-Network-Based Computation," in Proceedings of the 8th Workshop on Multimedia and Security. New York, NY, USA: ACM Press, 2006, pp. 146–151.
[19] S. I. Gallant, Neural Network Learning and Expert Systems. Cambridge, MA, USA: MIT Press, 1993.
[20] F. Rosenblatt, "Two Theorems of Statistical Separability in the Perceptron," in Proceedings of a Symposium on the Mechanization of Thought Processes, London, 1959, pp. 421–456.
[21] C. Orlandi, A. Piva, and M. Barni, "Oblivious Neural Network Computing via Homomorphic Encryption," EURASIP Journal on Information Security, vol. 2007, Article ID 37343, 11 pages, 2007.
[22] P. Paillier, "Public-Key Cryptosystems Based on Composite Degree Residuosity Classes," in EUROCRYPT, 1999, pp. 223–238.
[23] J. Secretan, M. Georgiopoulos, and J. Castro, "A Privacy Preserving Probabilistic Neural Network for Horizontally Partitioned Databases," in International Joint Conference on Neural Networks (IJCNN 2007), August 2007, pp. 1554–1559.
[24] O. Goldreich, "Secure Multi-party Computation," working draft, available from http://www.wisdom.weizmann.ac.il/~oded/pp.html, 1998.
[25] B. Goethals, S. Laur, H. Lipmaa, and T. Mielikäinen, "On Private Scalar Product Computation for Privacy-Preserving Data Mining," in ICISC, 2004, pp. 104–120.
[26] O. Goldreich, Foundations of Cryptography: Volume 2, Basic Applications. New York, NY, USA: Cambridge University Press, 2004.
[27] R. Canetti, "Security and Composition of Multiparty Cryptographic Protocols," Journal of Cryptology, vol. 13, no. 1, pp. 143–202, 2000.
