
Matroidal Undirected Network

Chung Chan

Abstract—An undirected network link model is formulated, generalizing the usual undirected graphical model. The optimal direction for multicasting can be found in polynomial time with respect to the size of the network, despite the exponential number of possible directions. A more general problem is also considered in which a certain function of a distributed source is to be computed at multiple nodes. The converse results are derived, not from the usual cut-set bound, but through the related problems of secret key agreement and secure source coding by public discussion. A unifying model of partly directed networks is also formulated, covering both the directed and undirected networks as special cases.

Index Terms—Secret key agreement, undirected network coding, matroid, polymatroid, function computation

I. INTRODUCTION

The problem of undirected network coding was first studied in [2]. The network consists of point-to-point communication links, where information can flow in two different directions, up to a total rate below the capacity of the link. Independent messages are generated at a source node and then multicast to a set of sink nodes. This is done by choosing the directions of the links and performing coding at the source and intermediate nodes. Although the number of possible directions is exponential in the number of undirected links, an efficient polynomial-time algorithm exists for computing the optimal direction [3]. Given the optimal direction, the network coding scheme can also be obtained in polynomial time [4]. There is also a symmetry in the undirectedness of the network that allows the same code to be used for different choices of the source from the same multicast group [5].

The purpose of this work is to identify the more general structure that makes this undirected network coding problem polynomial-time solvable. It is motivated by the previous work in [6, 7], which pointed out a common notion of mutual dependence underlying the undirected network coding problem and the seemingly different problem of secret key agreement [8]. More precisely, the capacities of the two problems are the same and can be computed in polynomial time by exploiting the underlying structure called a matroid [9]. This structure also leads to an information-theoretically appealing characterization of the capacity, which can be viewed as a natural generalization of Shannon's mutual information to the multivariate case and of the combinatorial notion of partition connectivity [9], which originated from the tree packing problem.
A similar divergence upper bound on the capacity was first pointed out in [8], and the connection with Steiner-tree packing was also observed independently by [2] and [10] for the network coding and secrecy problems respectively. However, the bound is not tight in general, as shown by the minimal counter-example in [11], and the connection with Steiner-tree packing is not exact. The identity in [7] provides a way to resolve this disparity and better understand the connection between the different problems.

The main result of this work is a generalization of the graphical undirected network model, referred to as the matroidal undirected network. It can be viewed as the counterpart of the deterministic network model in [12], since the network may contain undirected broadcast links, interference links, and more general finite linear channels. It is also inspired by the more abstract view of a network as a matroid or linking system in [13, 14], for which the max-flow min-cut theorem takes on a more general form. In addition to studying the problem of multicasting independent messages, we will also consider multicasting a distributed source or a function of the source, as in [15, 16] for the directed network. By relating the problem to the secure source coding problem in [17, 18], a polynomial-time solution or partial solution can be obtained by exploiting the underlying matroidal structure. The concept can also be extended to a network with its direction partially fixed, i.e. with some directed and some undirected links. This creates a continuum between the directed and undirected network models. In the sequel, we will describe the model more precisely and introduce the problem of multicasting information over the network. Interested readers can also refer to the paper in [1], which contains the extended results.

Chung Chan ([email protected], [email protected]) is with the Institute of Network Coding, the Chinese University of Hong Kong. The manuscript is available online in [1]. Preliminary results were submitted to ITA 2012 and ISIT 2012.

II. MOTIVATION

The connection from the secret key agreement problem to the undirected network coding problem is motivated by a notion of multivariate correlation that appears to be a natural generalization of Shannon's mutual information. Let us introduce this informally using the secret agreement game [6] played by a group of people.
One person is chosen as the wiretapper while the others are the users. Each player is given a piece of paper they can write on but cannot show to anyone. The users win if they all put down the same thing, and it is different from what the wiretapper puts down. The users are also allowed to discuss in public, as long as what they say is clearly heard by the wiretapper. Is there a winning strategy for the users?

If there is just one user, he can simply put down something random, since the wiretapper probably cannot guess it. This does not work if there are more users, because they need to agree on the same thing. If they do not discuss what they want to write, it is unlikely that they can agree on the same thing. But if they describe too clearly what they want to write, it is likely that the wiretapper guesses it too. Is it useful to discuss at all?


The answer is affirmative if the users observe some correlated private events prior to playing the game. For example, suppose the users all like to play basketball but the wiretapper does not. Then, the users can put down the winning team of an important match as the secret. If any user forgets about the match, the other users may remind him by naming a few players in the winning team. Assuming that the wiretapper does not follow any basketball games, he will not be able to guess it.

If the secret agreement game is played repeatedly, how many times can the users win? Intuitively, the closer the users are to each other, the more secrets they can generate. Can we say, in a concrete mathematical framework, that the number of wins measures the correlation among the private observations of the users?

The secret key agreement problem in [8] provides such a framework. It consists of a finite set V of users; a subset A of at least two users is called active. Each user i ∈ V privately observes a discrete memoryless source Z_i that is correlated according to some joint distribution P_{Z_V}, with Z_V := (Z_i : i ∈ V) denoting the entire multiple source. The users can discuss in public noiselessly and with unlimited rate until the active users can agree on some uniformly random key that is kept secret from a wiretapper who listens to the entire public discussion. The maximum secret key rate, called the secrecy capacity, is characterized in [8] by a linear program that is upper bounded by a divergence expression as follows:
\[
H(Z_V) - \min_{z_V} z(V) \;\le\; \min_{\mathcal{P}} \frac{D\bigl(P_{Z_V} \,\big\|\, \prod_{C \in \mathcal{P}} P_{Z_C}\bigr)}{|\mathcal{P}| - 1}
\]
where H(·) and D(·‖·) denote the entropy and divergence respectively [19]. The L.H.S. is the secrecy capacity, with z_V := (z_i : i ∈ V) being the public discussion rate tuple subject to the linear Slepian-Wolf constraints
\[
z(B) := \sum_{i \in B} z_i \;\ge\; H(Z_B \mid Z_{B^c}) \qquad \forall B \subseteq V : B \not\supseteq A.
\]

The R.H.S. is the divergence upper bound, with \mathcal{P} being a partition of V into at least two parts such that each part contains at least one element of A. In the two-user case, the divergence bound is simply Shannon's mutual information I(Z_1 ∧ Z_2) := D(P_{Z_1 Z_2} ‖ P_{Z_1} P_{Z_2}). It is a theoretically appealing measure of correlation because it is, roughly speaking, a normalized distance between the joint distribution and the set of product distributions obtained from the different ways of breaking the correlation. Indeed, the bound is tight [11] when A = V, confirming this notion of multivariate correlation for Z_V. However, when A ⊊ V, we also have a minimal counter-example for which the bound is loose. An interesting question is whether we can modify the bound slightly to make it tight.

The tightness of the divergence bound when A = V was also observed in [10], but in the special source model called the pairwise independent network, where the correlation of the source is captured by a dependency graph G = (V, E, θ), where V and E are the vertex and edge sets, and θ : E → \binom{V}{2} is the edge function. More precisely, the nodes in the graph correspond to the users, and each edge e ∈ E in the graph

corresponds to an independent random bit Y_e observed by the incident nodes in θ(e). The divergence bound evaluates to
\[
\min_{\mathcal{P}} \frac{|\delta_G(\mathcal{P})|}{|\mathcal{P}| - 1}
\]

where δ_G(\mathcal{P}) is the set of edges that cross between two different parts of \mathcal{P}. In combinatorics, this quantity is the partition connectivity of G. By the Tutte and Nash-Williams theorem [9], it is the number of (possibly different) spanning trees one can pack (fractionally) in the graph, if we regard an edge in the graph as a container for an edge of a spanning tree. The reason it equals the secrecy capacity when A = V is that each spanning tree corresponds to a way of sharing a secret key bit among all the users [10]. For the general case A ⊊ V, [10] extended this idea to pack Steiner trees that span the nodes in A. This approach attains at least half of the secrecy capacity but is not optimal in general. The number of Steiner trees can be strictly smaller than the secrecy capacity, for example, in the butterfly network in [3].

It was unclear whether the capacity equals the divergence bound for the pairwise independent network. Since a counter-example was not known, the equality was conjectured in [20]. The counter-example in [11] is not a pairwise independent network and so does not apply to this special case. Nevertheless, the conjecture is unlikely for the following reason. The divergence bound, referred to as the Steiner strength, was proven to be NP-complete to compute in [3]. The secrecy capacity, however, can be computed in polynomial time by the ellipsoid method [7], because the separation problem corresponds to submodular function minimizations, which are solvable in polynomial time [21]. Equality of the two expressions would imply P = NP in complexity theory, which seems unlikely.

Interestingly, the same situation arises in the undirected network coding problem [2]. Consider G as a network where each edge corresponds to an independent undirected link that allows information to flow in both directions up to a total amount of one bit. A source node s ∈ A tries to multicast a message to every user in A, referred to as the multicast group.
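The partition connectivity above can be evaluated directly, albeit by an enumeration over partitions that is exponential in |V| and therefore only feasible for very small graphs. The following sketch (function names are ours) computes min_P |δ_G(P)|/(|P| − 1) by brute force:

```python
from fractions import Fraction

def partitions(items):
    """Yield all set partitions of `items` (a list) into non-empty parts."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # put `first` into an existing part
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part              # or into a new singleton part

def partition_connectivity(vertices, edges):
    """min over partitions P with |P| >= 2 of |delta_G(P)| / (|P| - 1)."""
    best = None
    for part in partitions(list(vertices)):
        if len(part) < 2:
            continue
        block = {v: i for i, blk in enumerate(part) for v in blk}
        crossing = sum(1 for u, v in edges if block[u] != block[v])
        value = Fraction(crossing, len(part) - 1)
        if best is None or value < best:
            best = value
    return best

# Triangle: the partition into singletons gives 3/2; any 2-part split gives 2.
print(partition_connectivity({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))   # → 3/2
```

For the triangle the minimizing partition is the one into singletons, and the value 3/2 matches the fractional packing of spanning trees guaranteed by Tutte and Nash-Williams.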
It was shown that the capacity of the network, like the secrecy capacity for the pairwise independent network, is upper bounded by the Steiner strength. In the broadcast case when A = V, the bound is again tight, because the partition connectivity is the number of spanning trees that can be packed in G, and each spanning tree corresponds to a way of routing one bit of information to every node. Similarly, for A ⊊ V, the routing solution corresponds to packing Steiner trees, which can attain at least half of the capacity but is suboptimal in general. It is again unlikely that the capacity is equal to the Steiner strength, because the capacity can be computed in polynomial time but computing the Steiner strength is NP-complete [3].

There seems to be a mapping between the results on secret key agreement and undirected network coding. Is it possible that the capacities for the two problems are the same? Can we relate the two problems somehow? If the answer is affirmative, then there may also be a way to view the secrecy capacity for a more general source model as the capacity of a more general undirected network.

This appears to be the case. In [22, 23], the notion of partition


connectivity is extended to hypergraphs H = (V, E, ϕ), a generalization of a graph with edge function ϕ : E → 2^V \ {∅}, i.e. an edge can cover more than two nodes. More precisely, it is shown that a hypergraph can be decomposed (fractionally) into connected sub-hypergraphs up to an amount equal to
\[
\min_{\mathcal{P}} \frac{\sum_{e \in E} [d_H(e, \mathcal{P}) - 1]}{|\mathcal{P}| - 1}
\]
where d_H(e, \mathcal{P}) is the number of parts in \mathcal{P} that overlap ϕ(e). Just like the pairwise independent network, this is a special case of the divergence bound when each edge e ∈ E corresponds to an independent random bit observed by the incident nodes in ϕ(e), and so it is the secrecy capacity when A = V. It is also the capacity of the corresponding network coding problem [6], obtained by viewing each edge e ∈ E as an undirected broadcast link where one of the incident nodes in ϕ(e) can broadcast one bit of information to the remaining incident nodes.

Note that the partition connectivity for hypergraphs above looks rather different from the original expression for graphs. A more natural generalization would be to use the original expression with δ_H(\mathcal{P}) defined as the set of hyperedges overlapping at least two parts of \mathcal{P}. Unfortunately, this yields a different quantity that is not the solution to the problem of packing connected sub-hypergraphs. The question then is whether this quantity has another meaningful interpretation, perhaps as the capacity of a different generalization of the undirected network coding problem. This turns out to be the case. In [6], it was shown to be the secrecy capacity for A = V when each edge e ∈ E corresponds to a random bit vector (Y_{ie} : i ∈ ϕ(e)), with Y_{ie} observed by i ∈ ϕ(e). The random vector has its components sum to 0 and all of its proper subvectors uniformly random.
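The hypergraph partition connectivity above can be checked the same way as in the graph case, by brute force over partitions (our own function names; exponential enumeration, small examples only):

```python
from fractions import Fraction

def partitions(items):
    """Yield all set partitions of `items` (a list) into non-empty parts."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def hypergraph_partition_connectivity(vertices, hyperedges):
    """min over P, |P| >= 2, of sum_e [d_H(e, P) - 1] / (|P| - 1),
    where d_H(e, P) is the number of parts of P that edge e touches."""
    best = None
    for part in partitions(list(vertices)):
        if len(part) < 2:
            continue
        blocks = [set(b) for b in part]
        total = sum(sum(1 for b in blocks if b & set(e)) - 1
                    for e in hyperedges)
        value = Fraction(total, len(blocks) - 1)
        if best is None or value < best:
            best = value
    return best

# A single 3-node broadcast hyperedge can carry one bit to everyone:
print(hypergraph_partition_connectivity({1, 2, 3}, [{1, 2, 3}]))   # → 1
```

When every hyperedge has exactly two incident nodes, d_H(e, P) − 1 is 1 exactly for the crossing edges, so the routine reduces to the graph partition connectivity.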
The capacity was also shown to be that of the undirected network coding problem obtained by viewing each edge e as an undirected interference edge, where an incident node i ∈ ϕ(e) can be selected as the receiver to observe the output bit Y = \sum_{j \in ϕ(e) \setminus \{i\}} X_j, with X_j being an input bit from user j. It became evident that the result of [22, 23] can be extended further using a more general source model for the secret key agreement problem, which potentially gives a more general undirected network model. But two questions remain: Are the capacities of the secret key agreement and undirected network coding problems the same even for A ⊊ V? If so, can they be expressed in a form similar to the divergence bound?

In the case of directed network coding, the original graphical network has also been extended to networks with broadcast links, interference links, and more general finite linear channels [12]. It turns out that a further generalization is possible using the abstract mathematical structure called a matroid [14]. A natural idea then is to generalize the undirected network using the structure of a matroid. A matroid (M, ρ) [9] can be defined by a finite ground set M and a rank function ρ : 2^M → Z_+, which satisfies ρ(B') ≤ ρ(B) ≤ |B| for all B' ⊆ B ⊆ M, and the submodularity property that
\[
ρ(B_1) + ρ(B_2) \;\ge\; ρ(B_1 \cap B_2) + ρ(B_1 \cup B_2)
\]
for all B_1, B_2 ⊆ M. This property is also satisfied by the entropy function h(B) := H(Z_B), which can also be understood as the non-negativity of (conditional) mutual information, I(Z_{B_1} ∧ Z_{B_2} | Z_{B_3}) ≥ 0 for B_i ⊆ M [19]. It is essentially the property used in [11] to prove the tightness of the divergence bound when A = V, and so it is not too surprising to see a generalization of the result using this property.

In [14], the general directed network model in [12] is identified with the linking system (or bi-matroid) in [13]. The connection between a matroid and a linking system rests on the notion of a base of a matroid, which is defined as a subset X ⊆ M with full rank ρ(X) = |X| = ρ(M). Suppose M = X ⊔ Y is the disjoint union of X and Y, where X is a base of the matroid (M, ρ). Then, (X, Y, λ) defines a linking system, where λ : 2^X × 2^Y → Z_+ is the linking function
\[
λ(X', Y') = ρ\bigl(Y' \cup (X \setminus X')\bigr) - |X \setminus X'|
\]
for all X' ⊆ X and Y' ⊆ Y. In [14], X is identified as the set of input variables of a directed network and Y the output variables. λ(X', Y') generalizes the notion of the maximum information flow from the nodes controlling X' to the nodes observing Y'. For example, set ρ(B) = 1 for all ∅ ≠ B ⊆ M and ρ(∅) = 0. Any single element, say x ∈ M, gives a base {x} since ρ({x}) = ρ(M) = 1 = |{x}|. With X = {x} and Y = M \ {x}, the linking function is λ(X, Y') = 1 for every ∅ ≠ Y' ⊆ Y. This represents a broadcast channel, where there can be at most one bit of information flow from the input variable to every non-empty subset of the output variables.

The insight here is to view a directed network more generally as a matroid with a base of the matroid fixed as the set of input variables. Naturally then, an undirected network can also be viewed as a matroid, but with the freedom of choosing different bases of the matroid as the set of input variables. Each choice of a base corresponds to a different direction of the network. This idea was successfully used in [6] to generalize some results of [23] on the partition connectivity for hypergraphs. It also helped discover a way to turn the secret key agreement problem into an undirected network coding problem, where a solution to the latter also solves the former. Finally, the matroidal structure was used in [7] to prove that the capacities of the secret key agreement problem and the undirected network coding problem are equal in general for all A ⊆ V. There is also an equivalent expression in the form of partition connectivity.

It turns out that the secret key agreement problem can be further generalized to secure source coding in [17, 18], where the goal is to compute a secret source as securely as possible instead of agreeing on a secret key. It is natural to expect that the results there can also be translated into solutions of certain undirected network problems. This is the motivation of the current work. In the sequel, we will describe the model more precisely and introduce the problem of multicasting information over a general undirected network.

III. SYSTEM MODEL

A. Matroidal undirected finite linear network

There can be many ways to define an undirected network. For example, an undirected network can be regarded as a class of channels, each modeled by a transition probability matrix relating the channel outputs to the channel inputs. The


Fig. 1. A graphical undirected network: (a) 1-bit undirected links between user 1 and 2, and between user 1 and 3; (b) emulated source with Z_{1a} = Z_2 and Z_{1b} = Z_3 independent and uniformly random; (c) possible directions, with inputs denoted by X_i's and outputs by Y_i's.

Fig. 2. A matroidal undirected network: (a) 1-bit undirected interference link among users 1, 2 and 3; (b) emulated source with Z_1 ⊕ Z_2 ⊕ Z_3 = 0 and (Z_i, Z_j) uniformly random for any i ≠ j; (c) possible directions Y_k = X_i ⊕ X_j for the different permutations (i, j, k) of (1, 2, 3).

choice of the direction can correspond to the choice of the input. Alternatively, the undirected network can also be viewed simply as a directed network where the choice of the direction is also part of the channel input. These kinds of models, unfortunately, are too general for a practical solution. We want to represent an undirected network in a way that highlights, rather than abstracts away, the structure required for a simple optimization over the choices of the directions.

For simplicity, consider first a network with just one undirected link between two users. A symbol can be transmitted from one user to the other in one of the two possible directions. Regardless of the choice of the direction, denote the symbol transmitted or received by each user as a random variable. For an undirected link, there are two random variables, one at each incident node. The two random variables are required to be equal in value, but either one of them can be chosen as the channel input, after which the other becomes the output. If the input distribution is chosen to be uniform, then the two random variables form a discrete memoryless source, referred to as the emulated source. Indeed, this source completely characterizes the undirected link in the sense that the entropy of the source is the capacity of the link, while the input of the link can be any uniformly random source component that attains the entropy.

The main idea is to define an undirected network not by the interconnections of a graph but by the more general statistical dependence of a distributed source. To illustrate this, consider the graphical network in Fig. 1(a), which consists of three users and two undirected links. One of the links is between user 1 and 2, and the other is between user 1 and 3. There are altogether 2^2 = 4 different ways to direct the network, as shown in Fig. 1(c). Consider the first direction in Fig. 1(c).
If we set the input symbols (X_{1a}, X_{1b}) to the independent and uniformly random (Z_{1a}, Z_{1b}), then the statistics of the inputs and outputs are captured by the distributed source in Fig. 1(b). If we instead choose the second direction in Fig. 1(c) and set the input symbols (X_2, X_{1b}) to the independent and uniformly random (Z_2, Z_{1b}), we obtain the same source in Fig. 1(b). Note that the statistics of the channel inputs and outputs are always captured by the same distributed source, regardless of the direction. The source obtained this way, by sending independent and uniformly random inputs over the network, is called the emulated source. Indeed, the emulated source completely characterizes the undirected network in the sense that every possible direction corresponds to a different maximal subset of uniformly random source components, referred to as a base of the emulated source. For example, (Z_{1a}, Z_{1b}) is a base that corresponds to the first directed network in Fig. 1(c), with (X_{1a}, X_{1b}) as the inputs. (Z_2, Z_{1b}), (Z_{1a}, Z_3) and (Z_2, Z_3) are the remaining bases, each corresponding to one of the remaining directions.

In general, any graphical undirected network can be characterized by an emulated source that is obtained by fixing an arbitrary direction and sending independent and uniformly random input symbols. The set of possible bases of the emulated source corresponds to the set of possible ways to direct the network. We call this kind of undirected network matroidal because the emulated source forms a matroid [9], with the rank function being the entropy function [19] of the source. This is along the same lines as in [13], where a channel, viewed more generally as a linking system, can be regarded as a matroid with a fixed base being the set of input variables.

The matroidal undirected network is a more general concept than the graphical network. For example, it covers the network in Fig. 2(a) with an undirected interference edge. There are three possible directions, as shown in Fig. 2(c). In the first configuration, users 1 and 2 choose the input bits X_1 and X_2 respectively, while user 3 observes the xor bit Y_3 = X_1 ⊕ X_2. With X_1 and X_2 chosen uniformly at random and independently, the channel turns into the emulated source in Fig. 2(b).
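The matroidal structure of this emulated source can be verified mechanically: a subset of components is a base exactly when its rows in a representation have full rank over GF(2). A small sketch (the helper `gf2_rank` is ours):

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) by elimination; each row is an integer bitmask."""
    rank = 0
    rows = list(rows)
    while rows:
        row = rows.pop()
        if row == 0:
            continue
        rank += 1
        lead = 1 << (row.bit_length() - 1)              # pivot column
        rows = [r ^ row if r & lead else r for r in rows]
    return rank

# Emulated source of Fig. 2(b) in terms of two free bits (x1, x2):
# Z1 = x1, Z2 = x2, Z3 = x1 + x2, so Z1 + Z2 + Z3 = 0 over GF(2).
H = {1: 0b10, 2: 0b01, 3: 0b11}                         # row of each Zi

# A pair (Zi, Zj) is a base iff its two rows have full rank 2:
for i, j in combinations([1, 2, 3], 2):
    assert gf2_rank([H[i], H[j]]) == 2
print("every pair of components is a base")
```

All three pairs have rank 2, matching the three directions in Fig. 2(c), while any single component has rank 1 and so cannot determine the whole source.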
There are three possible choices of the bases, namely (Z1 , Z2 ), (Z1 , Z3 ) and (Z2 , Z3 ). Each of them corresponds to a different direction in Fig. 2(c). The network is characterized by the emulated source,


and so it is matroidal. Similarly, an undirected broadcast link among the three users can be represented by the emulated source with Z_1 = Z_2 = Z_3 being a uniformly random bit. Any Z_i for i = 1, 2, 3 is a base and can therefore be chosen as the input of the link. We will consider more generally the undirected version of the following finite linear network.

Directed finite linear network: Denote the finite field of order q by F_q, and take all logarithms to base q for convenience. Let V := [m] := {1, . . . , m} be the finite set of users in the network. User i ∈ V chooses the channel input ẋ_i as a column vector of elements in F_q. After the entire input vector ẋ_V := [ẋ_1^⊺ · · · ẋ_m^⊺]^⊺ is specified, user i ∈ V observes the output vector
\[
\dot z_i := H_i \dot x_V, \qquad i \in V \tag{1}
\]

where H_i is a transfer matrix with entries in F_q. We will impose the additional requirement that ẋ_i is a subvector of ż_i, i.e. ẋ_i ⊆ ż_i. This does not lose generality, as user i observes his channel input ẋ_i trivially. If the inputs of a finite linear network are generated uniformly at random and independently, then the channel outputs form a finite linear source.

Finite linear source / undirected network: A source Z_V := (Z_i : i ∈ V) is called finite linear if the component source Z_i for user i ∈ V can be written as a vector z_i over F_q satisfying
\[
z_i = H_i x_V, \qquad i \in V \tag{2}
\]
for some uniformly random subvector x_V of z_V, i.e.
\[
x_i \subseteq z_i \quad \forall i \in V \tag{3a}
\]
\[
H(x_V) = \ell(x_V) = H(z_V) \tag{3b}
\]

where ℓ(x_V) denotes the length of x_V, and H(·) is the entropy [19] with all logarithms taken to base q. x_V is called a base of Z_V, and H_V := [H_1^⊺ · · · H_m^⊺]^⊺ is called a representation. The set of all possible bases of Z_V is denoted by B(Z_V). A base x_V satisfying (3) is a perfect compression of the source, because x_V ⊆ z_V and H(x_V) = H(z_V) mean that there is a bijection between x_V and z_V, while H(x_V) = ℓ(x_V) means that x_V cannot be compressed further.

A finite linear source defines an undirected network because every base corresponds to a different representation, which can be viewed as a directed finite linear network. For example, if x̄_V is another base satisfying (3), then it is a subvector of z_V by (3a) and so can be written as x̄_V = M z_V for some boolean matrix M. Since z_V = H_V x_V by (2), we have x̄_V = M H_V x_V. M H_V must be invertible, since the bases are uniformly distributed with the same length by (3b). Thus, x_V = (M H_V)^{-1} x̄_V, and so z_i = H_i (M H_V)^{-1} x̄_V for all i ∈ V. H_V (M H_V)^{-1} is therefore the representation of Z_V corresponding to x̄_V. The set of all bases then corresponds to a collection of finite linear networks, which defines the undirected finite linear network.
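The change of representation above can be carried out explicitly for the source of Fig. 2. The sketch below (the helper `gf2_inv` is ours) computes H_V (M H_V)^{-1} over GF(2) for the alternative base x̄_V = (z_1, z_3):

```python
import numpy as np

def gf2_inv(A):
    """Invert a 0/1 matrix over GF(2) by Gauss-Jordan elimination."""
    A = A.copy() % 2
    n = A.shape[0]
    inv = np.eye(n, dtype=int)
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]               # row swap
        inv[[col, pivot]] = inv[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] = (A[r] + A[col]) % 2              # clear the column
                inv[r] = (inv[r] + inv[col]) % 2
    return inv

# Source of Fig. 2: z_V = H_V x_V over GF(2), original base x_V = (z1, z2).
H_V = np.array([[1, 0],      # z1 = x1
                [0, 1],      # z2 = x2
                [1, 1]])     # z3 = x1 + x2

# Switch to the base (z1, z3): M selects rows 1 and 3 of z_V.
M = np.array([[1, 0, 0],
              [0, 0, 1]])
H_new = (H_V @ gf2_inv(M @ H_V)) % 2
print(H_new)
```

The new representation reads z_1 = x̄_1, z_2 = x̄_1 + x̄_2 and z_3 = x̄_2, consistent with z_1 + z_2 + z_3 = 0.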

B. Multicasting over the undirected network

We will consider the problem of multicasting correlated sources or some function of the sources, as in [15, 16], but over a matroidal undirected finite linear network characterized by a finite linear source Z_V as described in §III-A. Let U_V := (U_i : i ∈ V) be the distributed source, where U_i is the component observed privately by user i ∈ V. A subset A ⊆ V of |A| ≥ 1 sink nodes, or active users, wants to recover some function G of U_V by the following block coding scheme.

Block code: A positive integer n ∈ P is chosen as the block length. U_V and Z_V are extended n times into the i.i.d. sequences U_V^n := (U_{Vt} : t ∈ [n]) and similarly Z_V^n, with Z_{Vt} represented by the vector z_{Vt} over F_q. Each user i ∈ V observes U_i^n and then transmits over the undirected network as follows.

Encoding: At time t from 1 to n, user i ∈ V specifies the direction x_{it} ⊆ z_{it} and the corresponding channel input ẋ_{it} with ℓ(ẋ_{it}) = ℓ(x_{it}), as a function of his private source U_i^n and his previous channel outputs, denoted by ż_{i[t-1]}. An encoding error occurs if the direction is invalid, i.e. x_{Vt} ∉ B(Z_V). Otherwise, the channel returns to user i ∈ V his output ż_{it} := H_{it} ẋ_{Vt} as in (1), where H_{Vt} is the representation of Z_{Vt} corresponding to the base x_{Vt}, i.e. z_{it} := H_{it} x_{Vt} as in (2).

Decoding: After time n, each active user j ∈ A attempts to recover G^n as a function Ĝ_j of his private source U_j^n and his entire channel output sequence ż_{j[n]}. A decoding error occurs if G^n ≠ Ĝ_j for any j ∈ A.

Let ε_n be the probability of encoding or decoding error, i.e.
\[
ε_n := \Pr\bigl\{\exists t \in [n],\, x_{Vt} \notin B(Z_V) \text{ or } \exists j \in A,\, G^n \ne \hat G_j\bigr\} \tag{4}
\]
G^n is said to be transmissible to A if ε_n decays to zero, i.e. lim sup_{n→∞} ε_n = 0. The objective is to find an easily computable condition for transmissibility. A particular case of interest is to attain omniscience of the distributed source, i.e. G = U_V.
Another case of interest is when the users want to send independent messages instead of correlated memoryless sources. Let W_i be the message from user i ∈ V and Ŵ_j be the estimate of the entire message W_V by user j ∈ A. The encoding and decoding proceed in the same way as before, with U_i^n, G^n and Ĝ_j replaced by W_i, W_V and Ŵ_j respectively. ε_n is also defined as in (4) with the corresponding modifications. Each W_i takes values from a message set 𝒲_i that grows exponentially in n at rate
\[
R_i := \limsup_{n \to \infty} \frac{\log |\mathcal{W}_i|}{n}. \tag{5}
\]
The rate tuple R_V := (R_i : i ∈ V) is said to be achievable if lim sup_{n→∞} ε_n = 0, assuming the messages are uniformly distributed. The maximum throughput, or simply the capacity, of the network is defined as the maximum achievable sum rate
\[
\sup R(V) = \sup \limsup_{n \to \infty} \sum_{i \in V} \frac{\log |\mathcal{W}_i|}{n} \tag{6}
\]

where the supremums on the left and right are taken over all achievable rate vectors and block codes respectively.


IV. MAIN RESULTS

A. Function computation

We first derive a necessary condition for transmissibility. Such converse results for network coding are often derived using the cut-set bound. It is rather tricky to apply the same technique here because the directions of the network can vary in time and adapt to the channel outputs. We will obtain the desired condition in a different way, through the closely related problem of secure source coding by public discussion [24].

Secure source coding: The secure source coding problem involves a wiretapper in addition to the set V of users, a subset A of which is again identified as the active users. The users observe n i.i.d. samples of some private distributed source, say Ũ_V. They want to discuss in public until some given function G of the source, called the secret source, can be computed by the active users but not by the wiretapper. Unlike the original network coding problem, where the communication is over a given undirected network, there is no restriction on the public discussion. The users can broadcast messages to all other users noiselessly with unlimited rates for multiple rounds. The only catch is that the public messages and the discussion scheme are known to the wiretapper. The secret source G is said to be securely computable if the discussion can be chosen such that the error probability in recovering the secret source sequence G^n by the active users and the amount of information about G^n leaked to the wiretapper both decay to zero in n. This problem was first proposed in [24] as an extension of the secret key agreement problem in [8]. It was further extended in [18, 25] in the study of imperfect secrecy, the achievable exponents and the admissible choices of key functions.

The problem of secure source coding can be mapped to that of undirected network coding as follows, such that G is securely computable by public discussion if G is transmissible over the undirected network.
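The simulation behind this mapping can be sketched on a toy instance, the 3-user XOR source of Fig. 2 (function names are ours): each user pads his intended channel input with a fresh base sample of the emulated source and broadcasts the sum, from which every user can reproduce his network output, while the public messages remain uniform and so leak nothing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance: the 3-user XOR source of Fig. 2 with base x_V = (z1, z2).
H_V = np.array([[1, 0],      # z1
                [0, 1],      # z2
                [1, 1]])     # z3 = z1 + z2 over GF(2)

def discuss(x_dot):
    """One time slot: reveal f = x + x_dot publicly; every user then
    reconstructs the network output z_dot = H_V x_dot from f and z."""
    x = rng.integers(0, 2, size=2)       # fresh base sample, uniform
    z = H_V @ x % 2                      # private observations z_V
    f = (x + x_dot) % 2                  # public messages: a one-time pad
    z_dot = (H_V @ f - z) % 2            # = H_V (x + x_dot) - H_V x = H_V x_dot
    return f, z_dot

x_dot = np.array([1, 0])                 # intended channel input
f, z_dot = discuss(x_dot)
assert np.array_equal(z_dot, H_V @ x_dot % 2)   # network simulated exactly
```

Because the base sample x is uniform and independent of the input, f is uniform whatever x_dot is, which is the secrecy argument made precise below.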
From secure source coding to undirected network coding: For the secure source coding problem, we set (U_V, Z_V) as the multiple source, where U_V is independent of Z_V, and (U_i, Z_i) is the component source observed privately by user i ∈ V. The secret source G is a function of U_V but not of Z_V. The additional source Z_V allows the users to simulate the undirected network by public discussion as follows. At time t from 1 to n, user i ∈ V broadcasts the public message
\[
f_{it} := x_{it} + \dot x_{it} \tag{7}
\]

where x_{Vt} is the base for Z_{Vt} and ẋ_{it} is the corresponding channel input (1) for the undirected network coding problem. Recall that ẋ_{it} has to be computed from U_i^n and the previous channel outputs ż_{i[t-1]}, but there is no undirected network in the secure source coding problem to generate the channel outputs. Instead, the users simulate the network by computing
\[
H_{it} f_{Vt} - z_{it} = H_{it}(x_{Vt} + \dot x_{Vt}) - H_{it} x_{Vt} = H_{it} \dot x_{Vt} = \dot z_{it}
\]

where H V t is the representation of ZV t corresponding to the ˆj base xV t . After time n, user j ∈ A compute the estimate G of Gn from Unj and z˙ i[n] . Using the above mapping, the overall error probability is ϵn in (4), which decays to zero by the assumption that G is transmissible. It remains to argue that the public discussion reveals no information about Gn . From (31), f V [n] is uniformly distributed because xV [n] is not only uniformly distributed by (3b) but also independent of x˙ V [n] since x˙ V [n] is a function of UnV and the channel output z˙ V [n] , which is ultimately a function of UnV independent of ZnV . Since f V [n] is uniformly distributed regardless of the realization of UnV , we have UnV independent of f V [n] . Hence, G is securely computable and so the necessary condition in [24] and [18, Theorem 7] for G to be securely computable is also the necessary condition for G to be transmissible. Theorem 1 A function G of UV is transmissible to A ⊆ V : |A| ≥ 1 over a matroidal undirected network ZV only if H(G) = H(UV ) + H(ZV ) − min z(V ) zV

(8)

with zV := (zi ∈ R : i ∈ V ) subject to the linear constraints z(B) ≥ H(UB |UB c ) + H(ZB |ZB c )

∀B ⊆ V : B ̸⊇ A (9a)

z(B) ≥ H(UB |UB c G) + H(ZB |ZB c ) ∀B ⊆ V : B ⊇ A (9b) ∑ where z(B) := i∈B zi and B c denotes V \ B. The optimal zV can be computed in polynomial time with respect to the size |V | = m of the network, assuming that H(UB ) and H(ZB ) can be evaluated in polynomial time for B ⊆ V .1 2 P ROOF For the secure source coding problem in [18], the R.H.S. of (8) is a linear program (LP) that characterizes the maximum amount of information about G that can be kept secret from the wiretapper under the requirement that G is recoverable by all active users after public discussion. Intuitively, it should be equal to H(G) for G to be securely computable since no information about G should be leaked to the wiretapper. For the secret key agreement problem in [25], the R.H.S. of (8) is the maximum amount of secret key, called the secrecy capacity, that can be agreed upon by the active users after public discussion, and that is restricted to be a function of G. Once again, it should be equal to H(G) if G is securely computable since all the randomness of G can be used as the secret key. With the previous argument that G being transmissible implies it is securely computable, we have the same necessary condition for the original network coding problem. The polynomial time complexity in computing the optimal zV is not straightforward because the number of constraints in 1 Computing H(Z ) directly from the joint distribution of Z B V is exponential in m because the support set ZV is exponentially large. Fortunately, H(ZB ) can be computed as the rank of the corresponding rows of H V in (2). This can be done by the Gaussian elimination, which has complexity polynomial in the dimensions of the matrix. The dimensions grow linearly in m if each user can transmit and receive at a rate upper bounded by a constant with respect to m. H(UB ) can also be computed in polynomially time, for instance, if the number of source components is fixed or if UV is a finite linear source like ZV .
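The masking step (7) behaves like a one-time pad. The following sketch (a toy GF(2) instance with a hypothetical small matrix H and vector lengths, not the paper's construction) checks the two facts used in the proof: a receiver knowing zit = Hit xit recovers żit = Hit ẋit from the public message, and the message itself is uniformly distributed whatever ẋit is.

```python
from itertools import product

# Toy GF(2) check of the masking step f = x + ẋ in (7):
# the receiver computes H f - z = H ẋ, while f itself stays uniform.
# H and the vector lengths are hypothetical small choices.

def matvec(H, v):  # matrix-vector product over GF(2)
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

def add(u, v):  # vector addition over GF(2) (subtraction is the same)
    return tuple((a + b) % 2 for a, b in zip(u, v))

H = [(1, 0, 1), (0, 1, 1)]               # hypothetical 2x3 representation matrix
for xdot in product((0, 1), repeat=3):   # any channel input ẋ
    zdot = matvec(H, xdot)               # the channel output to emulate
    seen = []
    for x in product((0, 1), repeat=3):  # uniform base symbol x (the pad)
        f = add(x, xdot)                 # public message, as in (7)
        z = matvec(H, x)                 # receiver's private observation
        assert add(matvec(H, f), z) == zdot   # H f - z = H ẋ = ż over GF(2)
        seen.append(f)
    # as x ranges uniformly, f ranges over all of GF(2)^3: perfect secrecy
    assert sorted(seen) == sorted(product((0, 1), repeat=3))
```

The second assertion is exactly why fV[n] is uniform regardless of ẋV[n], so the discussion leaks nothing about UV^n.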


(9) is exponential in m. The idea is to exploit the underlying matroidal structure by solving the LP using the ellipsoid method [9]. The complexity of the ellipsoid method is equivalent to that of the separation oracle that determines whether a solution is feasible. From (9), zV is feasible iff, for all j ∈ A,

    0 ≤ min_{B⊆V : j∉B} [z(B) − H(UB|UBc) − H(ZB|ZBc)], and
    0 ≤ min_{B⊆V : A⊆B} [z(B) − H(UB|UBc G) − H(ZB|ZBc)]

These are submodular function minimizations over lattice families, and so can be solved in polynomial time [9]. More precisely, the first constraint set {B ⊆ V : j ∉ B} is a lattice family because, for every B1 and B2 in the family, B1 ∩ B2 and B1 ∪ B2 are also in the family. The first objective function f(B) := z(B) − H(UB|UBc) − H(ZB|ZBc) is submodular because f(B1) + f(B2) ≥ f(B1 ∩ B2) + f(B1 ∪ B2), using the fact that the entropy function is submodular [19]. The same argument applies to the last minimization. Since there are only |A| + 1 ≤ m + 1 submodular function minimizations, the overall complexity is polynomial in m as desired.    ■

Example 1 Consider a simple three-user case where V = [3]. Users 2 and 3 observe the independent uniformly random bits U2 and U3 respectively, while user 1 observes nothing, i.e. U1 = ∅. User 1 is the only active user, i.e. A = {1}, and wants to recover the mod-2 sum G = U2 ⊕ U3. Suppose users 2 and 3 each have a 1-bit undirected link to user 1 as shown in Fig. 1. Then, G is obviously transmissible by having users 2 and 3 send U2 and U3 to user 1 using the last directed network in Fig. 1(c). G being transmissible should imply the necessary condition in (8). To verify this, let Z2 and Z3 be two independent uniformly random bits and Z1 = (Z2, Z3). This is the emulated source ZV of the undirected network in Fig. 1(b), with Z1a = Z2 and Z1b = Z3. With all logarithms taken base 2, we have H(G) = 1 and H(UV) = H(ZV) = 2. The LP in (8) is over (z1, z2, z3) ∈ R³ satisfying

    z2 ≥ H(U2|U1 U3) + H(Z2|Z1 Z3) = 1
    z3 ≥ 1
    z2 + z3 ≥ H(U2 U3|U1) + H(Z2 Z3|Z1) = 2
    z1 ≥ H(U1|U2 U3 G) + H(Z1|Z2 Z3) = 0
    z1 + z2 ≥ H(U1 U2|U3 G) + H(Z1 Z2|Z3) = 1
    z1 + z3 ≥ 1
    z1 + z2 + z3 ≥ H(U1 U2 U3|G) + H(Z1 Z2 Z3) = 3

The optimal solution is z1 = z2 = z3 = 1, which minimizes z1 + z2 + z3 to 3. Thus, the necessary condition is satisfied, since H(G) = 1 = 2 + 2 − 3 = H(UV) + H(ZV) − min_{zV} z(V).    □
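For such a small network, Example 1 can be checked mechanically. The sketch below computes the entropies from the uniform joint distribution and verifies the LP in (8)–(9); the exhaustive enumeration over all subsets B is a brute-force stand-in for the polynomial-time submodular separation oracle, feasible only because |V| = 3.

```python
from itertools import product, combinations
from math import log2

# Exhaustive check of Example 1: entropies come from the uniform joint
# distribution of (U2, U3, Z2, Z3, G), and feasibility is tested by
# enumerating every subset B (exponential in |V|, fine for |V| = 3).

W = [(u2, u3, z2, z3, u2 ^ u3) for u2, u3, z2, z3 in product((0, 1), repeat=4)]

def H(idx):  # joint entropy (base 2) of the coordinates in idx
    n, cnt = len(W), {}
    for w in W:
        k = tuple(w[i] for i in idx)
        cnt[k] = cnt.get(k, 0) + 1
    return -sum(c / n * log2(c / n) for c in cnt.values())

U = {1: (), 2: (0,), 3: (1,)}          # U1 = empty, then U2, U3
Z = {1: (2, 3), 2: (2,), 3: (3,)}      # Z1 = (Z2, Z3)
G = (4,)                               # G = U2 xor U3
V, A = (1, 2, 3), {1}

def cat(d, B):  # concatenate the coordinates of the users in B
    return tuple(c for i in B for c in d[i])

def cond(X, Y):  # H(X|Y) = H(X, Y) - H(Y)
    return H(X + Y) - H(Y)

def lower(B):  # right-hand side of (9a)/(9b) for subset B
    Bc = tuple(i for i in V if i not in B)
    g = G if A <= set(B) else ()       # condition on G only when B ⊇ A
    return cond(cat(U, B), cat(U, Bc) + g) + cond(cat(Z, B), cat(Z, Bc))

def feasible(z):
    return all(sum(z[i] for i in B) >= lower(B) - 1e-9
               for r in range(1, 4) for B in combinations(V, r))

z = {1: 1, 2: 1, 3: 1}
assert feasible(z)                     # (1, 1, 1) satisfies every constraint
assert abs(lower(V) - 3) < 1e-9        # z(V) >= 3 forces min z(V) = 3
# H(G) = H(UV) + H(ZV) - min z(V): 1 = 2 + 2 - 3, matching (8)
assert abs(H(G) - (H(cat(U, V)) + H(cat(Z, V)) - sum(z.values()))) < 1e-9
```

The function `feasible` plays the role of the separation oracle; in the paper it is replaced by |A| + 1 submodular function minimizations so that the overall complexity stays polynomial in m.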
In general, the necessary condition is not sufficient because G being securely computable does not necessarily mean that G is transmissible. This is roughly because the public discussion for the secure source coding problem is unrestricted in rate. If no public discussion is allowed, for example, then G being securely computable trivially implies that G is transmissible, without any communication over the undirected network. This is illustrated by the following example.

Example 2 Consider the same setting as Example 1 but with A = V, i.e. all users want to recover G. For user 1 to recover G, user 2 must send U2 to user 1. However, for user 2 to recover G, he must receive U3 (or equivalently G) through user 1. The 1-bit undirected link between users 1 and 2 cannot support this communication of 2 bits of information, and so G is not transmissible.

If the condition (8) were sufficient, it would fail to hold in this case. With A = V, zV for the LP in (8) has to satisfy the additional constraints

    z1 ≥ H(U1|U2 U3) + H(Z1|Z2 Z3) = 0
    z1 + z2 ≥ H(U1 U2|U3) + H(Z1 Z2|Z3) = 2
    z1 + z3 ≥ 2

Once again, the optimal solution is z1 = z2 = z3 = 1. It follows that the necessary condition (8) holds, and so it is not sufficient for G to be transmissible.

Although G is not transmissible, it is securely computable by the following linear public discussion scheme. Users 1, 2 and 3 reveal the public messages F1 = Z2 ⊕ Z3, F2 = Z2 ⊕ U2 and F3 = U3 respectively. User 1 can recover G as Ĝ1 = Z2 ⊕ F2 ⊕ F3 = U2 ⊕ U3, while users 2 and 3 can recover G as Ĝ2 = U2 ⊕ F3 and Ĝ3 = F1 ⊕ F2 ⊕ Z3 ⊕ U3 respectively. The public discussion does not reveal any information about G because F3 is independent of G = U2 ⊕ U3 since U2 is uniformly distributed, F2 is independent of (G, F3) because Z2 is uniformly distributed independently of UV, and finally F1 is independent of (G, F2, F3) because Z3 is uniformly distributed independently of UV and Z2. Hence, G is securely computable even though it is not transmissible over the undirected network. This is primarily because the public message F3 = U3 does not reveal any information about G = U2 ⊕ U3 to the wiretapper, but helps both users 1 and 2 recover G given their private observations. G would be transmissible if an additional 1-bit broadcast link were available to user 3 for sending U3, because U2 could then be communicated from user 2 to user 1 and then to user 3 using the second directed network in Fig. 1(c).    □
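The linear discussion scheme of Example 2 can likewise be checked exhaustively: every user recovers G, and the public transcript (F1, F2, F3) has the same distribution whether G = 0 or G = 1, so the wiretapper learns nothing.

```python
from itertools import product

# Exhaustive check of the scheme in Example 2: F1 = Z2+Z3, F2 = Z2+U2,
# F3 = U3 over GF(2), with U2, U3, Z2, Z3 uniform and independent.

transcripts = {0: [], 1: []}  # public transcripts grouped by the value of G
for u2, u3, z2, z3 in product((0, 1), repeat=4):
    g = u2 ^ u3
    f1, f2, f3 = z2 ^ z3, z2 ^ u2, u3
    # every user recovers G from the transcript plus his private observation
    assert z2 ^ f2 ^ f3 == g            # user 1 knows Z1 = (Z2, Z3)
    assert u2 ^ f3 == g                 # user 2 knows U2
    assert f1 ^ f2 ^ z3 ^ u3 == g       # user 3 knows (U3, Z3)
    transcripts[g].append((f1, f2, f3))

# secrecy: the transcript distribution is identical given G = 0 and G = 1
assert sorted(transcripts[0]) == sorted(transcripts[1])
```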
B. Multicasting correlated sources

In some special cases, the necessary condition in (8) can be sufficient. One case of interest is when G = UV, i.e. the omniscience requirement that every active user recovers the entire source. In Example 2, for instance, if G is set to (U2, U3) instead of U2 ⊕ U3, then the necessary condition no longer holds, since H(G) = 2 for the L.H.S. of (8) while the R.H.S. is 1, again with z1 = z2 = z3 = 1 being the optimal solution to the LP. This is in agreement with the fact that G is not transmissible, and so the necessary condition may potentially be sufficient in the omniscience case. Indeed, the problem of multicasting correlated sources has been studied for various directed network models, as summarized in [15], and the necessary and sufficient conditions for transmissibility were derived using the classic cut-set bound and a random coding scheme respectively. The necessary condition matches the sufficient condition very closely, and so it is natural to expect the same for the undirected network model here, which is, after all, a collection of directed networks. Is it possible to reduce the necessary condition (8) from secure source coding to a form similar to the one for directed networks from the cut-set bound, so that it can match


the sufficient condition from random coding? It turns out that the answer is affirmative, by the information identity in [7].

Theorem 2 UV is transmissible to A over a matroidal undirected network ZV only if

    H(ZV) = min_{zV} z(V)    (10)

where zV is subject to

    z(B) ≥ H(UB|UBc) + H(ZB|ZBc)    ∀B ⊆ V : B ̸⊇ A    (11a)
    z(B) ≥ H(ZB|ZBc)    ∀B ⊆ V : B ⊇ A    (11b)

This necessary condition holds only if (and if)

    0 = max_{xV∈B} min_{B⊆V : B̸⊇A} [H(ZBc) − x(Bc) − H(UB|UBc)]    (12)

with B being the set of xV satisfying

    x(B) ≤ H(ZB)    ∀B ⊊ V    (13a)
    x(V) = H(ZV)    (13b)

Furthermore, xV = zV is an optimal solution to (12) if zV is an optimal solution satisfying (10).    □

(12) is in the desired form of the cut-set bound. To see this, assume for simplicity that the direction of the network is chosen offline, independent of the correlated sources and channel outputs. Consider some subset B ⊆ V : B ̸⊇ A. Since the source UV needs to be recovered by some users in Bc, namely the active users in A \ B ̸= ∅, there must be an information flow of rate at least H(UB|UBc) collectively from B to Bc. Suppose xVt ∈ B(ZV) is chosen as the direction at time t. Then, using a standard information-theoretic argument, it can be shown that the network supports a flow rate of at most

    (1/n) Σ_{t∈[n]} I(xBt ∧ ZBc | xBct) = H(ZBc) − x(Bc)    (14)

where xi := (1/n) Σ_{t∈[n]} ℓ(xit), and the last equality is by (3). Furthermore, it can be shown [9] that the set of all possible xV forms the base B of a polymatroid. (12) simply asserts that there is a way to direct the network according to some xV ∈ B such that any subset Bc of users containing an active user in A can obtain information at the required rate H(UB|UBc) ≤ H(ZBc) − x(Bc). The following proof, using Theorem 2 and the identity in [7], extends the result conveniently to the case where the direction of the network need not be fixed prior to observing the correlated sources.

PROOF (PROOF OF THEOREM 2) (10) follows directly from (8) in Theorem 1 with G = UV. To prove that (10) implies (12), we will apply the identity in [7] as follows by introducing a virtual user. Let m + 1 be the index of the virtual user, who observes Um+1 = UV and Zm+1 = ∅. With Ṽ := V ∪ {m+1} and Ã := A ∪ {m+1}, the following condition turns out to be equivalent to (10):

    H(UṼ) ≤ H(UṼ ZṼ) − min_{z̃Ṽ} z̃(Ṽ)    (15)

where z̃Ṽ is subject to

    z̃(B̃) ≥ H(UB̃ ZB̃ | UṼ\B̃ ZṼ\B̃)    ∀B̃ ⊆ Ṽ : B̃ ̸⊇ Ã    (16)

Subclaim 2.1 (10) implies (15).

PROOF Suppose zV is a feasible solution to (11). It suffices to show that zṼ with zm+1 = 0 is feasible to (16), because then (10) implies H(ZV) ≥ min_{z̃Ṽ} z̃(Ṽ) with z̃Ṽ subject to (16), which in turn implies (15) as desired since H(UṼ) = H(UV) and H(ZṼ) = H(ZV). By (11), we have for all B ⊆ V

    z(B) ≥ H(ZB|ZBc) = H(UB|UBc Um+1) + H(ZB|ZBc Zm+1) = H(UB ZB | UṼ\B ZṼ\B)

where the first equality is because H(UB|UBc Um+1) = 0 and H(ZB|ZBc Zm+1) = H(ZB|ZBc), and the last equality is by the independence between UṼ and ZṼ. By (11a) in particular, we have for all B ⊆ V : B ̸⊇ A and B̃ = B ∪ {m+1} that

    z(B) + zm+1 ≥ H(UB|UBc) + H(ZB|ZBc) = H(UB Um+1|UBc) + H(ZB Zm+1|ZBc) = H(UB̃ ZB̃ | UṼ\B̃ ZṼ\B̃)

Thus, z̃Ṽ = zṼ satisfies (16), because B̃ ⊆ Ṽ : B̃ ̸⊇ Ã can be divided into two cases, one with B̃ = B for some B ⊆ V, and the other with B̃ = B ∪ {m+1} for some B ̸⊇ A.    ◀

By the identity in [7], the R.H.S. of (15) equals

    max_{x̃Ṽ} min_{B̃⊆Ṽ : m+1∈B̸̃⊇Ã} [H(UṼ\B̃ ZṼ\B̃) − x̃(Ṽ \ B̃)]    (17)

where x̃Ṽ is subject to the linear constraints

    x̃(B̃) ≤ H(UB̃ ZB̃)    ∀B̃ ⊊ Ṽ    (18a)
    x̃(Ṽ) = H(UṼ ZṼ)    (18b)

We want to apply this identity to reduce (15) to (12), completing the proof that (10) implies (12). We will need the following fact from [7]. There exists an optimal z̃Ṽ to (15) with

    z̃(V) = H(ZV)    (19)

Given any such z̃Ṽ, we can obtain an optimal x̃Ṽ to (17) as

    x̃i = z̃i    ∀i ∈ V    (20a)
    x̃m+1 = H(UV)    (20b)

x̃m+1 and z̃(V) are indeed the maximum and minimum possible, respectively, according to the constraints z̃(V) ≥ H(UV ZV|Um+1 Zm+1) from (16) and x̃m+1 ≤ H(Um+1 Zm+1) from (18). As a side note, z̃Ṽ and therefore x̃Ṽ can be computed in polynomial time by the ellipsoid method, because the separation oracle involves computing a few submodular function minimizations as described in the proof of Theorem 1, and checking the equality (19).

Subclaim 2.2 (15) is equivalent to (12).



PROOF Under the admissible constraint (20b) that x̃m+1 = H(UV), the constraints (18) are equivalent to (13) with x̃V = xV. More precisely, if xV is feasible to (13), then (13b) implies x̃(Ṽ) = x(V) + x̃m+1 = H(ZV) + H(UV) = H(UṼ ZṼ), which is (18b). Furthermore, for all B ⊆ V, we have x̃(B) = x(B) ≤ H(ZB) ≤ H(UB ZB), and with B̃ = B ∪ {m+1}, we have x̃(B̃) = x(B) + x̃m+1 ≤ H(ZB) + H(Um+1) ≤ H(UB̃ ZB̃). This implies (18a). Conversely, if x̃Ṽ is feasible to (18), then for all B ⊆ V we have x(B) = x̃(B) = x̃(B ∪ {m+1}) − H(UV) ≤ H(UB Um+1) + H(ZB Zm+1) − H(UV) = H(ZB), with equality when B = V. This implies (13) as desired. Hence, we can replace the maximization over x̃Ṽ in (17) by the maximization over xV satisfying (13), i.e.

    max_{xV} min_{B̃⊆Ṽ : m+1∈B̸̃⊇Ã} [H(UṼ\B̃ ZṼ\B̃) − x(Ṽ \ B̃)]
    = max_{xV} min_{B⊆V : B̸⊇A} [H(UBc) + H(ZBc) − x(Bc)]    (21)

where the last equality is obtained by setting B̃ = B ∪ {m+1} for B ⊆ V, in which case Ṽ \ B̃ = V \ B = Bc and B̃ ̸⊇ Ã iff B ̸⊇ A. Finally, replace the R.H.S. of (15) by (21) above and apply H(UṼ) − H(UBc) = H(UB|UBc). This gives (12) with ≤. Equality holds because the minimization in (12) is at most zero, as the B = ∅ term equals zero since x(V) = H(ZV) and H(U∅|UV) = 0.    ◀

Hence, (10) implies (15), which in turn implies (12) as desired. The converse is also true by the following.

Subclaim 2.3 (15) implies (10).



PROOF Let z̃Ṽ be an optimal solution to (15) satisfying the additional constraint (19). It suffices to show that zV = z̃V is a feasible solution to (11), because then (10) holds by (19), i.e. z(V) = z̃(V) = H(ZV). To do so, we first argue that z̃m+1 = 0. (15) implies that H(UV) ≤ H(UV) + H(ZV) − z̃(V) − z̃m+1, or equivalently, z̃m+1 ≤ 0 by (19). The reverse inequality follows directly from (16) with B̃ = {m+1}.

It remains to show that zV = z̃V is a feasible solution to (11). For all B ⊆ V, (16) implies that z(B) = z̃(B) ≥ H(UB ZB|UBc ZBc Um+1 Zm+1) = H(ZB|ZBc), which implies (11b). For all B ⊆ V : B ̸⊇ A, (16) with B̃ = B ∪ {m+1} implies that z(B) = z̃(B̃) ≥ H(UB Um+1 ZB Zm+1|UBc ZBc) = H(UB|UBc) + H(ZB|ZBc), which is (11a).    ◀

Subclaims 2.1, 2.2 and 2.3 establish the equivalence of (10) and (12). It can be argued more carefully that xV = zV is an optimal solution to (12) if zV is an optimal solution satisfying (10). Suppose (10) holds with some optimal zV. Then, by the proof of Subclaim 2.1, zṼ with zm+1 = 0 is a feasible solution to (15). It is indeed optimal by the proof of Subclaim 2.3. More precisely, it was shown there that an optimal z̃Ṽ exists satisfying z̃(V) = H(ZV) and z̃m+1 = 0. But z(V) = H(ZV) by the assumption (10), and so zṼ is indeed an optimal solution. By the result of [7], x̃Ṽ defined by (20) is an optimal solution to (17). In particular, we have x̃V = zV by (20a), and so xV = x̃V = zV is an optimal solution to (12) as desired, according to the proof of Subclaim 2.2. Note that the optimal zV, and therefore xV = zV, can be computed in polynomial time by the ellipsoid method as described in the proof of Theorem 1.    ■

The necessary condition in Theorem 2 is quite tight. If (12) is satisfied with strict inequality instead, then UV is transmissible by the usual random coding argument. More precisely, a direction xVt ∈ B(ZV) is chosen for each time t ∈ [n] a priori.
For each user i ∈ V, the channel input ẋit at time t is generated by mapping his accumulated observation (Ui^n, żi[t−1]) uniformly at random to one of the possible input symbols in Fq^ℓ(xit). The random mappings, together with the choice of the direction for each time, constitute the random codebook, which is generated and revealed to every user independent of the random source G = UV. Each active user j ∈ A tries to find all typical sequences in T[UV]^n defined in [26] that possibly lead to his observation (Uj^n, żj[n]) according to the codebook. If there is only one such sequence, it is declared as the source estimate Ĝj. We can ignore the case where no such sequence exists, because Pr{UV^n ∈ T[UV]^n} goes to one by the method of types [26]. It turns out that the other case, where there is more than one such sequence, can also be ignored if the following condition, similar to (12), is satisfied.

Theorem 3 UV is transmissible to A if

    0 < max_{xV∈B} min_{B⊆V : ∅̸=B̸⊇A} [H(ZBc) − x(Bc) − H(UB|UBc)]    (22)

(22) is necessary if < is replaced by ≤, as it then becomes (12).    □

PROOF The idea is to show that the probability that more than one typical source sequence matches the observations of an active user decays to zero at an exponential rate, with exponent no smaller than the R.H.S. of (22). The sufficient condition (22) then simply means that the exponent is positive. In the following, we adapt the error analysis in [12] for multicasting independent messages over a directed network to the current problem of multicasting correlated sources over the undirected network.

Suppose the realization of UV^n is some typical sequence uV^n. For there to be an error, at least one active user, say j ∈ A, finds another typical sequence ũV^n with ũj^n ̸= uj^n consistent with his observation. Consider some subset B0 ⊆ V : ∅ ̸= B0 ̸⊇ A for which there exists a typical sequence ũV^n such that ũi^n ̸= ui^n iff i ∈ B0. Before any transmission over the undirected network, B0c is the set of confused users who cannot tell whether ũV^n or uV^n is the actual source sequence. By the method of types [26], the number of such confusing sequences ũV^n grows exponentially in n at rate H(UB0|UB0c). For some arbitrary sequence ũV^n, define Bt as the set of users not confused at time t ∈ [n], i.e. i ∈ Btc iff ũV^n has ũi^n = ui^n and maps to the same channel outputs żi[τ] as uV^n does for τ up to time t. Once again, for there to be an error, we must have Bn ̸⊇ A. Furthermore, the Bt's must form a chain B0 ⊆ B1 ⊆ B2 ⊆ · · · ⊆ Bn, because a user that is not confused cannot become confused after observing more channel outputs. Thus, Bt ̸⊇ A for all t ∈ [n]. Given Bt−1 = B for some B ̸⊇ A, we want to compute the probability that Bt = B, i.e. the event that the set of confused users stays unchanged at time t. This probability can be shown to be

    q^−H(ZBc|xBct) = q^−[H(ZBc) − ℓ(xBct)]

using the same argument in [12], by the uniformity of the random code and the linearity of the channel.
There must be at least n − m elements t ∈ [n] with Bt−1 = Bt, because |Bn \ B0| ≤ |V| = m. Thus, the probability that some active users remain confused about ũV^n at time n decays exponentially in n at a rate at least

    min_{B̸⊇A : B⊇B0} [H(ZBc) − x(Bc)]

where xi := lim_{n→∞} (1/n) Σ_{t∈[n]} ℓ(xit). Applying the union bound over all possible ũV^n and B0, the probability decays at a rate at least

    min_{B0,B̸⊇A : ∅̸=B0⊆B} [H(ZBc) − x(Bc) − H(UB0|UB0c)]

It is optimal to set B0 = B because that maximizes H(UB0|UB0c). Since the set of possible xV is B, the base of a polymatroid [9], we have the desired exponent after maximizing over xV ∈ B.    ■

In the proof of Theorem 3, the optimal xV for (22) needs to be computed for the direction of the network. Although an optimal xV to (22) is also optimal to (12), the converse is not necessarily true, which means that the desired xV may not be obtained directly from the optimal zV to (10) in Theorem 2. Nevertheless, it can be computed efficiently by exploiting the matroidal structure. More precisely, given any xV ∈ B, the inner minimization of (22) is the maximum value γ that satisfies the linear constraints

    γ ≤ H(ZBc) − x(Bc) − H(UB|UBc)    ∀B ⊆ V : ∅ ̸= B ̸⊇ A

The desired optimal xV can be obtained by maximizing γ over xV ∈ B. This is an LP, which can again be solved by the ellipsoid method. The separation oracle corresponds to solving the following submodular function minimizations for i ∈ V, j ∈ A

    0 ≤ min_{B⊆V : i∈B̸∋j} [H(ZBc) − x(Bc) − H(UB|UBc) − γ]
    0 ≤ min_{B⊆V} [H(ZB) − x(B)]

and also checking whether x(V) = H(ZV).² These can be computed in polynomial time given that H(UB) and H(ZB) can.

Theorem 3 can be strengthened if the source has additional special structure. In particular, the equality case for (22), i.e. the necessary condition (12), is also sufficient for any finite linear source UV over Fq to be transmissible using a convolutional network code. The optimal direction of the network can also be obtained from the optimal xV to (12), and hence the optimal zV to (10) by Theorem 2. This can be proved using some algebraic techniques from [27, 28] and the generalized max-flow min-cut theorem from [13, 14] as in [29]. The details are omitted here.

Arguably, whether the equality case for (22) is sufficient for non-linear sources is not of much practical significance. If a source UV is not transmissible over the undirected channel, then a natural remedy is to adjust the coding rate of the source, i.e. transmit k samples of the source with n uses of the network. By making k small enough, the source becomes transmissible. If (12) is satisfied, it is possible to have k approach n, and thus we can transmit nearly one source sample per use of the channel.

² There is a technical requirement that the polytope of possible solutions has some volume [21]. This can be easily fixed by projecting xV to xV\{m} by substituting xm = f(V) − Σ_{i=1}^{m−1} xi. The separation problem again corresponds to solving a polynomial number of submodular function minimization problems.

C. Multicasting independent messages

Consider the problem of multicasting independent messages over the undirected network, i.e. each user i ∈ V wants to multicast a message Wi at rate Ri to all active users in A, where the Wi's are independent and uniformly random. Since this can be regarded as a special case of multicasting correlated sources described in the previous section, we can compute the achievable rate region using the converse and achievability results in Theorems 2 and 3.

More precisely, RV being achievable means that a uniformly random source UV is transmissible over the k-extended undirected network ZV^k with Ri = lim_{k→∞} (1/k) log|Ui|. This implies the condition in Theorem 2 with H(UB|UBc) replaced by lim_{n→∞} (1/n) H(UB|UBc) = R(B). The set of all rate tuples satisfying the necessary condition forms an outer bound on the achievable rate region.

This outer bound is indeed tight. Suppose RV satisfies (12) in Theorem 2 with H(UB|UBc) replaced by R(B). Choose the sequence of message sets WV in n such that (1/n) log|Wi| < Ri for every positive integer n and approaches Ri as n → ∞. Since R(B) > (1/n) log|WB| = (1/n) H(WB|WBc) by uniformity, (12) is satisfied with strict inequality if H(UB|UBc) is replaced by (1/n) H(WB|WBc) instead of R(B). In other words, WV is transmissible, and therefore RV is achievable. The achievable rate region is given explicitly below.

Theorem 4 The set of achievable RV for the undirected network ZV is the set of non-negative tuples

    G := {xV − zV ≥ 0 : xV ∈ B, zV ∈ Q}    (23)

where B is defined by (13), and Q is the set of zV satisfying

    z(B) ≥ H(ZB|ZBc)    ∀B ⊆ V : B ̸⊇ A    (24)

The capacity, or maximum achievable sum rate R(V), is

    C := H(ZV) − min_{zV∈Q} z(V)    (25)

The projection of G onto the coordinates in A is

    {yA : yV ∈ G} = {yA ≥ 0 : y(A) ≤ C}    (26)

which is completely characterized by C.    □

The capacity also has the alternative form of partition connectivity in [7]. It can be computed using the ellipsoid method in polynomial time, because the separation oracle corresponds to computing min_{B⊆V : B̸∋i} [z(B) − H(ZB|ZBc)] for i ∈ A, which are submodular function minimizations. Since the projection (26) of G onto A is characterized completely by C, it can be computed in polynomial time. It is not clear whether G itself can be computed in polynomial time, or whether it has exponentially many vertices. Nonetheless, one can check whether a given rate tuple is achievable in polynomial time by the ellipsoid method, since the condition derives from the necessary condition in Theorem 1.

PROOF (PROOF OF THEOREM 4) Suppose some RV ≥ 0 is achievable. From (12) with H(UB|UBc) replaced by R(B), we have for some xV ∈ B that R(B) ≤ H(ZBc) − x(Bc) for all B ̸⊇ A. In other words, RV belongs to

    ∪_{xV∈B} {yV ≥ 0 : y(B) ≤ H(ZBc) − x(Bc), ∀B ̸⊇ A}

This is G as argued in [7, (18)], because for any yV = xV − zV, we have y(B) ≤ H(ZBc) − x(Bc) iff z(B) ≥ H(ZB|ZBc). Consider any xV ∈ B and zV ∈ Q satisfying RV = xV − zV. We have R(V) = x(V) − z(V) = H(ZV) − z(V) ≤ C by (13b). Achievability follows from (26), which in turn follows from [7, Lemmas 1, 2] that max_{yV∈G} yi = max_{yV∈G} y(V) = C for all i ∈ A, and the convexity of G, which can be argued from the convexity of B and Q.    ■
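As an illustration, the capacity (25) can be evaluated by brute force for a small instance. The sketch below uses the two-link network of Fig. 1(b) with the hypothetical active set A = {1, 2} (this choice is an assumption, not an example from the paper); enumeration over all subsets B replaces the separation oracle.

```python
from itertools import product, combinations
from math import log2

# Toy evaluation of (24)-(25): Z1 = (Z2, Z3) with Z2, Z3 independent
# uniform bits, and a hypothetical active set A = {1, 2}.

W = [(z2, z3) for z2, z3 in product((0, 1), repeat=2)]
Z = {1: (0, 1), 2: (0,), 3: (1,)}
V, A = (1, 2, 3), {1, 2}

def H(idx):  # joint entropy (base 2) of the coordinates in idx
    n, cnt = len(W), {}
    for w in W:
        k = tuple(w[i] for i in idx)
        cnt[k] = cnt.get(k, 0) + 1
    return -sum(c / n * log2(c / n) for c in cnt.values())

def cat(B): return tuple(c for i in B for c in Z[i])
def condH(B):  # H(ZB | ZBc)
    Bc = tuple(i for i in V if i not in B)
    return H(cat(B) + cat(Bc)) - H(cat(Bc))

# constraints (24): z(B) >= H(ZB|ZBc) for every B not containing all of A
cons = [(B, condH(B)) for r in range(1, 4)
        for B in combinations(V, r) if not A <= set(B)]

def feasible(z):
    return all(sum(z[i] for i in B) >= c - 1e-9 for B, c in cons)

# z = (1, 0, 0) is feasible; B = {1, 3} gives z1 + z3 >= H(Z1 Z3|Z2) = 1
# and B = {2} gives z2 >= 0, so min z(V) = 1 and C = H(ZV) - 1 = 1.
assert feasible({1: 1, 2: 0, 3: 0})
assert abs(condH((1, 3)) - 1) < 1e-9
C = H(cat(V)) - 1
assert abs(C - 1) < 1e-9
```

Under these assumptions the sum-rate capacity between the two active users is one bit per network use, which matches the intuition that all traffic between users 1 and 2 crosses their single 1-bit link.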

V. PARTLY DIRECTED NETWORK

It is possible to apply the previous results to other undirected network models where the matroidal structure can be identified. Consider, in particular, adding some directed links to a matroidal undirected network. The resulting network is no longer covered by the previous model because its direction is partially fixed. However, the set of possible directions inherits the matroidal structure from the undirected part of the network, and so, by exploiting this structure, the direction of the network can potentially be optimized efficiently as before. In this section, we define a more general partly directed network model and show how to adapt the previous results to this case by exploiting the matroidal structure.

A matroidal partly directed finite linear network is characterized by a finite linear source ZV over some finite field Fq in the same way as the undirected network described in §III. However, the direction xV ∈ B(ZV) of the network consists of two components, i.e. xi = [x′i x′′i]ᵀ for i ∈ V, where the component x′V is fixed while the other component x′′V can be chosen by the system. If x′V is empty, then x′′V ∈ B(ZV) and we have the original matroidal undirected network. If x′′V is empty instead, then x′V ∈ B(ZV) and we have a directed finite linear network. Another special case is the combination of an undirected network Z′′V and a directed network Z′V with direction x′V ∈ B(Z′V), where Z′V and Z′′V are independent. The resulting network is characterized by the partial direction x′V and the emulated source ZV with Zi = (Z′i, Z′′i) for i ∈ V.

The requirement that xV ∈ B(ZV) limits the possible values of x′V and x′′V. Let P(ZV) denote the set of possible partial directions x′V, and B(ZV|x′V) denote the set of possible directions x′′V that complete x′V. These two sets can be characterized more explicitly from (3). More precisely, (3a) means that x′i and x′′i are vectors of elements from Zi, while (3b) means that H(x′V) + H(x′′V|x′V) = ℓ(x′V) + ℓ(x′′V) = H(ZV). Thus, the inequalities H(x′V) ≤ ℓ(x′V) and H(x′′V|x′V) ≤ ℓ(x′′V) must be satisfied with equality. We also have ℓ(x′′V) = H(ZV) − ℓ(x′V) = H(ZV|x′V). In summary, with zi for i ∈ V denoting the vector of all elements of Zi over Fq,

    x′V ∈ P(ZV) ⇐⇒ x′i ⊆ zi ∀i ∈ V    (27a)
                  and H(x′V) = ℓ(x′V)    (27b)
    x′′V ∈ B(ZV|x′V) ⇐⇒ x′′i ⊆ zi ∀i ∈ V    (28a)
                  and H(x′′V|x′V) = ℓ(x′′V) = H(ZV|x′V)    (28b)

The block code over a partly directed network can be defined in the same way as in §III-B. For simplicity, however, we will consider a restricted model where the choice of the direction cannot adapt to the correlated sources or the channel outputs. In other words, the direction x′′Vt ∈ B(ZV|x′V) for time t ∈ [n] is chosen before observing the source UV. This restriction weakens but also simplifies the converse part by ruling out the more complicated adaptation schemes. In particular, we will use the cut-set bound to establish a converse analogous to the one established previously through the secrecy problem.

A. Function computation

The necessary condition in Theorem 1 directly applies to the partly directed network, because G is transmissible over the undirected network if it is transmissible under the restricted model with the direction of the network partly fixed. The condition can be improved, however, since it does not take into account the additional restriction on the choice of direction. Consider some subset B ⊆ V : B ̸⊇ A. There is at least one active user in Bc because B ̸⊇ A. For this user to recover the function G of the multiple source UV over a partly directed network ZV with partial direction x′V ∈ P(ZV), there must altogether be a flow of information at rate at least H(G|UBc) from B to Bc. Suppose xVt is the direction of the network at time t ∈ [n], with xit = [x′it x′′it]ᵀ for all i ∈ V and some x′′Vt ∈ B(ZV|x′V). As mentioned before, the network supports a flow upper bounded by (14), which can be written as H(ZBc|x′V) − x′′(Bc), where x′′i := (1/n) Σ_{t∈[n]} ℓ(x′′it). From (28), it can be shown [9] that the set of possible x′′V forms the base B′ of a polymatroid, namely, the set of x′′V satisfying

    x′′(B) ≤ H(ZB|x′V)    ∀B ⊊ V    (29a)
    x′′(V) = H(ZV|x′V)    (29b)

Thus, a necessary condition for G to be transmissible to A is

    0 ≤ max_{x′′V∈B′} min_{B⊆V : ∅̸=B̸⊇A} [H(ZBc|x′Bc) − x′′(Bc) − H(G|UBc)]    (30)

which simply states that there exists some way to direct the remaining part of the network such that it supports the required flow for G to be transmissible.

It is not clear whether this condition, in the form of a cut-set bound, can be evaluated efficiently, since the minimization is over exponentially many choices of B but the expression to minimize may not be a submodular function of B.³ We want to weaken the condition slightly so that it becomes easily computable while still taking into account the restriction on the choice of direction. This can be done using the insight from Theorem 2, which equates an easily computable condition with some form of the cut-set bound. Indeed, we will present the result below in a form that appears as a generalization of Theorem 2.

³ In particular, −H(G|UBc) may not be submodular in B. As a counterexample, consider G = U1 ⊕ U2 with U1 and U2 uniform and independent. Then, −H(G|U1) − H(G|U2) = −2 < −1 = −H(G|U1 U2) − H(G).
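Footnote 3's counterexample is easy to verify numerically: with G = U1 ⊕ U2 for independent uniform bits, the set function B ↦ −H(G|UBc) violates submodularity for B1 = {1} and B2 = {2}.

```python
from itertools import product
from math import log2

# Numeric check of footnote 3: for G = U1 xor U2 with U1, U2 independent
# uniform bits, g(B) = -H(G|U_Bc) is not submodular in B. Submodularity
# would need g(B1) + g(B2) >= g(B1 ∩ B2) + g(B1 ∪ B2).

W = [(u1, u2, u1 ^ u2) for u1, u2 in product((0, 1), repeat=2)]

def H(idx):  # joint entropy (base 2) of the coordinates in idx
    n, cnt = len(W), {}
    for w in W:
        k = tuple(w[i] for i in idx)
        cnt[k] = cnt.get(k, 0) + 1
    return -sum(c / n * log2(c / n) for c in cnt.values())

def Hcond(X, Y):  # H(X|Y)
    return H(X + Y) - H(Y)

G, U1, U2 = (2,), (0,), (1,)
g = {  # g(B) = -H(G | U_{Bc}) for B ⊆ {1, 2}
    (): -Hcond(G, U1 + U2),       # Bc = {1, 2}
    (1,): -Hcond(G, U2),          # Bc = {2}
    (2,): -Hcond(G, U1),          # Bc = {1}
    (1, 2): -Hcond(G, ()),        # Bc = empty, i.e. -H(G)
}
# g({1}) + g({2}) = -2, but g(∅) + g({1,2}) = -1: submodularity fails
assert abs(g[(1,)] + g[(2,)] + 2) < 1e-9
assert abs(g[()] + g[(1, 2)] + 1) < 1e-9
assert g[(1,)] + g[(2,)] < g[()] + g[(1, 2)]
```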


Theorem 5 G is transmissible to A over a partly directed finite linear network Z_V with partial direction x'_V ∈ P(Z_V) only if (30) holds. With Ṽ := V ∪ {m+1}, Ã := A ∪ {m+1}, U_{m+1} := G, Z_{m+1} := x'_V and x'_{m+1} := ∅, define f : 2^Ṽ → ℝ with

    f(B̃) := H(U_B̃) + H(Z_B̃) − ℓ(x'_B̃)    ∀B̃ ⊆ Ṽ    (31)

which is submodular with f(∅) = 0. Then, (30) holds only if

    f(V|{m+1}) = min_{z_V} z(V)    (32)

with z_V subject to the linear constraints

    z(B) ≥ f(B)    ∀B ̸⊇ A    (33a)
    z(B) ≥ f(B|{m+1})    ∀B ⊇ A    (33b)

which holds if and only if

    f({m+1}) ≤ max_{x_V} min_{B⊆V:B̸⊇A} [f(B^c) − x(B^c)]    (34)

with B^c := V \ B and x_V satisfying

    x(B) ≤ f(B|{m+1})    ∀B ⊊ V    (35a)
    x(V) = f(V|{m+1})    (35b)

x_V = z_V is an optimal solution to (35) if z_V is an optimal solution satisfying (32). □

The necessary condition (32) can be computed in polynomial time if f(B) can be. This can be done by the ellipsoid method as described in the proof of Theorem 1, using the submodularity of f. (34) is therefore the desired form of the cut-set bound: weaker than (30) but efficiently computable through (32).

PROOF (OF THEOREM 5) Note that if G = U_V and x'_V = ∅, then (32) and (34) above become (10) and (12) in Theorem 2, respectively. The equivalence between (32) and (34) can be argued following the proof of Theorem 2, using only the fact that f is submodular with f(∅) = 0. It remains to prove that (30) implies (34). Suppose (30) is satisfied for some x''_V ∈ B(Z_V|x'_V). Set x_V = x''_V + d_V for any d_V satisfying d(B) ≤ H(U_B|G) for all B ⊊ V and d(V) = H(U_V|G). This is feasible, for instance, with d_i = H(U_i|U_{[i−1]}G) for i ∈ V. It follows that x_V satisfies (35), i.e.

    x(B) = x''(B) + d(B) ≤ H(Z_B|x'_V) + H(U_B|G) = f(B|{m+1})

with equality if B = V. The inequality follows from (29) and the last equality follows from the definition (31) of f. We have (34) as desired if f(B^c) − x(B^c) − H(G) ≥ 0 for all B ⊆ V : B ̸⊇ A, with equality when B = ∅. This holds because

    f(B^c) − x(B^c) − H(G)
      = H(U_{B^c}) + H(Z_{B^c}) − ℓ(x'_{B^c}) − x''(B^c) − d(B^c) − H(G)
      = [H(Z_{B^c}|x'_{B^c}) − x''(B^c) − H(G|U_{B^c})] + H(U_{B^c}|G) − d(B^c)

The first equality is by the definition of f and x_V. The last equality is because H(Z_{B^c}) − ℓ(x'_{B^c}) = H(Z_{B^c}|x'_{B^c}) by (27a), and H(U_{B^c}) − H(G) = H(U_{B^c}|G) − H(G|U_{B^c}) by the different chain-rule expansions of H(G U_{B^c}). Finally, we have the desired result since the bracketed expression is non-negative by (30) and equal to zero when B = ∅, while d(B^c) ≤ H(U_{B^c}|G) with equality when B = ∅ by the definition of d_V. ■

B. Multicasting correlated sources

The cut-set bounds (30) and (34) become equivalent in the special case when G = U_V, i.e. the entire correlated source needs to be recovered by every active user. In other words, (30) can be computed efficiently without any weakening. To show the equivalence, note that for all B ⊆ V,

    f(B) = H(U_B) + H(Z_B|x'_B)
    f(B ∪ {m+1}) = H(U_V) + H(Z_B|x'_V)

Substituting this into (34), we have

    H(U_V) ≤ max_{x_V} min_{B⊆V:∅̸=B̸⊇A} [H(U_{B^c}) + H(Z_{B^c}|x'_{B^c}) − x(B^c)]

This is equivalent to (30) as desired, with H(G|U_{B^c}) = H(U_V) − H(U_{B^c}) and x_V = x''_V, which is valid because the constraints (29) on x''_V ∈ B' are identical to the constraints (35) on x_V with f(B|{m+1}) = H(Z_B|x'_V). Indeed, not only can (30) be computed efficiently, it is also nearly tight, using the same random coding argument as in the proof of Theorem 2 but with the direction specified by x_{it} := [x'_i x''_{it}]^⊺ for user i ∈ V and time t ∈ [n]. In summary, we have the following theorem.

Theorem 6 U_V is transmissible to A only if

    0 ≤ max_{x''_V ∈ B'} min_{B⊆V:∅̸=B̸⊇A} [H(Z_{B^c}|x'_{B^c}) − x''(B^c) − H(U_B|U_{B^c})]    (36)

where B' is defined in (29). If (36) is satisfied with strict inequality, U_V is transmissible. □

C. Multicasting independent messages

Consider now the case when each user i ∈ V wants to multicast an independent message at rate R_i to the active users in A. Using the same argument as for Theorem 4, the achievable rate region can be obtained by replacing H(U_B|U_{B^c}) with R(B) in the condition (36) in Theorem 6 for the transmissibility of the correlated source U_V, i.e. R_V is achievable iff

    0 ≤ max_{x''_V ∈ B'} min_{B⊆V:∅̸=B̸⊇A} [H(Z_{B^c}|x'_{B^c}) − x''(B^c) − R(B)]    (37)

We can rewrite H(Z_{B^c}|x'_{B^c}) − x''(B^c) as H(Z_{B^c}) − x(B^c) with x_i := ℓ(x'_i) + x''_i for i ∈ V. Then, following the proof of Theorem 4, we can obtain the achievable rate region below.

Theorem 7 The set of achievable R_V for the partly directed network Z_V with partial direction x'_V ∈ P(Z_V) is

    G' := {x_V − z_V ≥ 0 : x_V = x'_V + x''_V, x''_V ∈ B', z_V ∈ Q}    (38)

where x'_i := ℓ(x'_i) for i ∈ V, and Q is defined in (24). □

The capacity C := max_{R_V ∈ G'} R(V) is upper bounded by (25) because of the additional restriction on the choice of direction. It can also be computed in polynomial time by the ellipsoid method because the condition (37) for R_V to be achievable can be. This is because (37) is obtained from (34) with H(U_B) replaced by R(B), which is obviously polynomially computable, and so is f(B̃).⁴ However, the earlier result (26) may not extend here, i.e. it may not be possible to attain the capacity with just one active user transmitting an independent message. For instance, consider a two-user network with two directed one-bit links pointing in opposite directions between the two users. With all users active, the capacity is 2 bits, but each user can transmit at most 1 bit. In other words, the presence of fixed directed links imposes additional restrictions on the rate allocation, even among the active users.

VI. CONCLUSION

The undirected network coding problem is related to the secret key agreement problem by the process of source emulation, which turns the network into a source. The source characterizes the undirected network because the choice of the bases of the source corresponds to a possible direction of the network. In other words, the interconnections of an undirected network can be described as the correlation of its emulated source. The graphical network can then be generalized by a more general source model, the correlation of which may not be captured by a graph but can be captured by the more general structure of a matroid. We found that the source emulation process can be reversed by public discussion, in the sense that the emulated source can be turned into an effective secure channel if we use the base of the source to encrypt secret inputs by the one-time pad. As a consequence, a secrecy problem can be solved as a network coding problem, by supporting a hidden flow of information underneath the public discussion using the correlation among the components of the emulated source.
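The one-time-pad step mentioned above, using the recovered base of the emulated source as a key to encrypt a secret input, can be sketched as follows; the byte-string message and key here are purely illustrative:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """One-time pad: bytewise XOR of a message with an equal-length key."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"hidden flow"                      # secret input to be transported
key = secrets.token_bytes(len(message))       # plays the role of the shared secret key
ciphertext = xor_bytes(message, key)          # the only thing revealed publicly
assert xor_bytes(ciphertext, key) == message  # nodes sharing the key recover the message
```

Because XOR with a uniformly random key of the same length reveals nothing about the message, the public discussion leaks nothing, while the intended nodes, who share the key, decrypt exactly.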
The capacities of the two problems are equal and lead to a common notion of multivariate correlation that appears to be a natural generalization of Shannon's mutual information and of the combinatorial notion of partition connectivity. Some more general results in secure source coding are also applicable to the undirected network coding problem. In particular, they lead to a computable necessary condition for multicasting some single-letter function of a distributed source, with a closely matching sufficient condition, and to exact solutions in the omniscience case when the function is the entire source and when independent messages are multicast instead of the distributed source. The idea of viewing an undirected network more abstractly as a matroid also leads to a more general network model with the direction partially fixed. The same matroidal structure allows for polynomial-time solutions or partial solutions to different multicast problems. The polynomial-time complexity described here relies on the ellipsoid method, which translates the linear programming problem of interest into a separation problem involving submodular function minimizations, which can be solved in polynomial time [21]. This method, however, is not considered practical and therefore only serves as an initial

⁴The other terms in f(B̃), involving Z_B̃ and x'_B̃, correspond to computing the rank of a submatrix of H_V in (2), which can be done by Gaussian elimination.

pointer to the search for potentially more practical solutions like the ones given in [3] for the graphical undirected network. The relationship between the secrecy and the undirected network coding problems is also not entirely symmetrical: some results in the secure source coding problem do not yet have a counterpart in the undirected network coding problem. Examples are the achievable secrecy exponents in [18, 25, 30]. Although the matroidal undirected network is a very general concept, practical coding appears to be possible only in the special case when the network is finite linear. The identity proven in [7] relating the capacities of the secrecy and network coding problems, however, does not rely on such linearity and is therefore more general. It is not clear whether such generality has further practical significance. Nevertheless, the matroidal framework has given many fruitful research directions. For example, a linear perfect secret key agreement is proposed in [29] using the generalized max-flow min-cut theorem for linking systems. The block length required to attain the capacity is upper bounded in [31], using again the submodularity of entropy and an integer programming technique called total dual integrality, which is also well known in matroid theory. We are optimistic that further development of the theory of information for the secrecy problem can enhance our understanding of other related problems.

REFERENCES

[1] C. Chan, publications. http://chungc.net63.net/pub, http://goo.gl/4YZLT.
[2] Z. Li and B. Li, "Network coding in undirected networks," in Proceedings of the 38th Annual Conference on Information Sciences and Systems (CISS), 2004.
[3] Z. Li, B. Li, and L. C. Lau, "On achieving maximum multicast throughput in undirected networks," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2467–2485, Jun. 2006.
[4] S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain, and L. Tolhuizen, "Polynomial time algorithms for multicast network code construction," IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 1973–1982, Jun. 2005.
[5] J. Goseling, C. Fragouli, and S. N. Diggavi, "Network coding for undirected information exchange," IEEE Communications Letters, vol. 13, no. 1, Jan. 2009.
[6] C. Chan, "Generating secret in a network," Ph.D. dissertation, Massachusetts Institute of Technology, 2010, see [1].
[7] ——, "The hidden flow of information," in 2011 IEEE International Symposium on Information Theory Proceedings (ISIT 2011), St. Petersburg, Russia, Jul. 2011, see [1].
[8] I. Csiszár and P. Narayan, "Secrecy capacities for multiple terminals," IEEE Transactions on Information Theory, vol. 50, no. 12, Dec. 2004.
[9] A. Schrijver, Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2002.
[10] C. Y. S. Nitinawarat and A. Reznik, "Secret key generation for a pairwise independent network model," in IEEE International Symposium on Information Theory (ISIT 2008), Jul. 2008, pp. 1015–1019.
[11] C. Chan and L. Zheng, "Mutual dependence for secret key agreement," in Proceedings of the 44th Annual Conference on Information Sciences and Systems, 2010, see [1].
[12] A. S. Avestimehr, S. N. Diggavi, and D. N. C. Tse, "Wireless network information flow: A deterministic approach," CoRR, vol. abs/cs/0906.5394, 2009.
[13] A. Schrijver, "Matroids and linking systems," Journal of Combinatorial Theory, Series B, vol. 26, no. 3, pp. 349–369, 1979.
[14] M. Goemans, S. Iwata, and R. Zenklusen, "An algorithmic framework for wireless information flow," in 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton 2009), Sep. 30–Oct. 2, 2009, pp. 294–300.
[15] T. S. Han, "Multicasting multiple correlated sources to multiple sinks over a noisy channel network," IEEE Transactions on Information Theory, vol. 57, no. 1, pp. 4–13, Jan. 2011.


[16] R. Appuswamy, M. Franceschetti, N. Karamchandani, and K. Zeger, "Network coding for computing: Cut-set bounds," IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 1015–1030, Feb. 2011.
[17] H. Tyagi, P. Narayan, and P. Gupta, "When is a function securely computable?" IEEE Transactions on Information Theory, vol. 57, no. 10, pp. 6337–6350, Oct. 2011.
[18] C. Chan, "Multiterminal secure source coding for a common secret source," in Forty-Ninth Annual Allerton Conference on Communication, Control, and Computing, Allerton Retreat Center, Monticello, Illinois, Sep. 2011.
[19] R. W. Yeung, Information Theory and Network Coding. Springer, 2008.
[20] C. Ye and A. Reznik, "Group secret key generation algorithms," in IEEE International Symposium on Information Theory, Jun. 2007, pp. 2596–2600.
[21] M. Grötschel, L. Lovász, and A. Schrijver, "The ellipsoid method and its consequences in combinatorial optimization," Combinatorica, vol. 1, pp. 169–197, 1981, doi:10.1007/BF02579273.
[22] A. Frank, T. Király, and M. Kriesell, "On decomposing a hypergraph into k-connected sub-hypergraphs," Discrete Applied Mathematics, vol. 131, no. 2, pp. 373–383, Sep. 2003.
[23] J. Bang-Jensen and S. Thomassé, "Decompositions and orientations of hypergraphs," Preprint no. 10, Department of Mathematics and Computer Science, University of Southern Denmark, May 2001.
[24] H. Tyagi, P. Narayan, and P. Gupta, "Secure computing," in 2010 IEEE International Symposium on Information Theory Proceedings (ISIT), Jun. 2010, pp. 2612–2616.
[25] C. Chan, "Agreement of a restricted secret key," in 2012 IEEE International Symposium on Information Theory Proceedings (ISIT 2012), Cambridge, MA, Jul. 2012, see [1].
[26] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Akadémiai Kiadó, Budapest, 1981.
[27] R. Koetter and M. Médard, "An algebraic approach to network coding," IEEE/ACM Transactions on Networking, vol. 11, no. 5, Oct. 2003.
[28] T. Ho, M. Médard, R. Koetter, D. Karger, M. Effros, J. Shi, and B. Leong, "A random linear network coding approach to multicast," IEEE Transactions on Information Theory, vol. 52, no. 10, pp. 4413–4430, Oct. 2006.
[29] C. Chan, "Linear perfect secret key agreement," in 2011 IEEE Information Theory Workshop Proceedings (ITW 2011), Paraty, Brazil, Oct. 2011, see [1].
[30] ——, "Universal secure network coding by secret key agreement," see [1].
[31] ——, "Delay of linear perfect secret key agreement," see [1].
