

The Characterization of Multi-source Multicast with Network Coding

Hanxu Hou, Hui Li Shenzhen Key Lab of Cloud Computing Technology & Application, Shenzhen Graduate School, Peking University

Shenzhen Eng. Lab of Converged Network Tech., Shenzhen Graduate School, Peking University, Shenzhen, China, 518055 [email protected], [email protected], [email protected]

Abstract—The capacity region of single-source multicast network coding has an explicit max-flow min-cut characterization, but for multi-source multicast networks the problem remains open. In this paper, we mainly discuss independent encoding for multi-source multicast networks using inter-session network coding. We propose a multi-source independent encoding theorem that characterizes the admissible coding rate region of independent encoding for correlated multiple sources. The theorem is established using strongly typical sequences and random coding. We also point out the connections between our theorem and the general multi-source network coding problem; the results are computable and can be used to design multi-source network coding algorithms.

Keywords- network coding; multi-source; multicast; correlated data; computable

I. INTRODUCTION

So far, many papers [2][3][4][6][9][10][11] have been dedicated to network coding in a single communication session, i.e., unicast communication to one receiver node or multicast of common information to multiple receiver nodes. This type of network coding is called intra-session network coding, since we only encode together information symbols that will be decoded by the same set of receiver nodes. For intra-session network coding, it suffices for each intermediate node to form its outputs as linear combinations of its inputs. Every receiver node can decode the source symbols once it has received enough independent mixed information symbols. Another type of network coding is called inter-session network coding, where coding is allowed between packets from different sessions. This is also referred to as the multi-source network coding problem in [6] and is yet to be fully explored. In this work, we focus on inter-session network coding with multiple multicast sessions.

The admissible region for an arbitrary acyclic network with multiple sessions is given in [8]. There are many studies [12]-[18] dedicated to characterizing the capacity region of inter-session network coding and to suboptimal solutions to the multi-session problem. Yan et al. [18] demonstrated entropy characterizations of the capacity region, which are not computable as yet: they showed that, by carefully bounding the constrained regions in the entropy space, an exact characterization of the capacity region can be obtained. Han [17] established a necessary and sufficient condition for reliable transmission over a noisy network for multicasting multiple correlated sources altogether to multiple receiver nodes. He considered a network model with multiple independent sources and multiple receivers, but each receiver is required to reliably reproduce the data from all the sources, which is not generally the case for inter-session network coding. In this paper, we adopt the same model as Han [17] and derive the admissible coding rate region for independent encoding of correlated multi-source information. Our work, done independently of Han's, differs from his and complements it in the following ways.

First, we assume an encode-and-forward scheme in the model, without the need for decoding at intermediate nodes, whereas Han assumes decode-and-forward in his problem statement. Second, the proof techniques are different: Han takes a purely combinatorial approach to the problem, while we establish our theorem with a constructive method. Also, we only give the admissible coding rate region for independent encoding of correlated multiple sources, while Han establishes a necessary and sufficient condition.

Furthermore, our work not only gives an admissible coding rate region for general inter-session network coding, but also discusses connections to the general inter-session network coding problem that are not considered in [17].

The rest of this paper is organized as follows. In Section II, we give some preliminaries for the theorem. Section III presents the main results. Finally, we conclude with the main contributions and future work in Section IV.

II. PRELIMINARIES

A. Network Model

The multi-source multi-receiver network is represented as an acyclic graph G = (V, E), where V and E are, respectively, the set of all nodes and the set of all channels. Consider two disjoint subsets S, T of V such that S = {s_1, s_2, …, s_N} and T = {t_1, t_2, …, t_h}, where S is the set of source nodes and T is the set of receiver nodes. We assume each channel e ∈ E has capacity C_e. Without loss of generality, we assume that each source node has no input channel and each receiver node has no outgoing channel.
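To fix ideas, this network model can be captured by a small data structure such as the following Python sketch (illustrative only, not part of the paper; the names Network and check are our own):

```python
# A minimal sketch (not from the paper) of the network model: an acyclic
# directed graph with per-channel capacities and disjoint source/receiver sets.
from dataclasses import dataclass, field

@dataclass
class Network:
    nodes: set                                    # V
    channels: dict = field(default_factory=dict)  # E: (u, v) -> capacity C_e
    sources: set = field(default_factory=set)     # S = {s_1, ..., s_N}
    receivers: set = field(default_factory=set)   # T = {t_1, ..., t_h}

    def check(self):
        # Source nodes have no input channel; receiver nodes have no output channel.
        assert all(v not in self.sources for (_, v) in self.channels)
        assert all(u not in self.receivers for (u, _) in self.channels)
        assert self.sources.isdisjoint(self.receivers)

net = Network(
    nodes={"s1", "s2", "a", "t1", "t2"},
    channels={("s1", "a"): 1.0, ("s2", "a"): 1.0,
              ("a", "t1"): 2.0, ("a", "t2"): 2.0},
    sources={"s1", "s2"},
    receivers={"t1", "t2"},
)
net.check()
```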

B. Sources and Channels

Each source node s_i ∈ S generates a set of data sequences X_i = (X_i^1, X_i^2, …, X_i^n). A receiver node t_j ∈ T requires the data from a set of sources β(t_j) ⊆ S. In the case when β(t_j) = S for all t_j ∈ T, the network reduces to a single-source multicast network [6].

Definition 1. For each receiver node t ∈ T, the decoding error probability of t is P_t = Pr{g_t(X_S) ≠ X_{β(t)}}, where g_t(X_S) is the value of g_t as a function of the data sequences X_S, g_t being the function that maps the data X_S to the sequences decoded at t, and X_{β(t)} is the set of source sequences required by the receiver node t. The data rate of the source set S is (R_1, R_2, …, R_N) if there exist an encoding algorithm and a decoding algorithm such that P_t → 0 for all receiver nodes t ∈ T. We then say that the rate tuple (R_1, R_2, …, R_N) is in the admissible rate region of independent encoding for these multiple sources.

C. Strongly Typical Sequences

Let X be a random variable representing an information source, H(X) the entropy of X, and S_X the support of X. Assume H(X) < ∞.

Definition 2 ([19]). The strongly typical set T^n_{[X]δ} with respect to p(x) is the set of sequences x = (x_1, x_2, …, x_n) ∈ X^n such that N(x; x) = 0 for x ∉ S_X, and

    Σ_x | (1/n) N(x; x) − p(x) | ≤ δ,

where N(x; x) is the number of occurrences of x in the sequence x, and δ is an arbitrarily small positive real number. The sequences in T^n_{[X]δ} are called strongly δ-typical sequences.

Lemma 1 (Strong AEP) ([19]). There exists a small positive quantity η such that η → 0 as δ → 0 and the following hold:

(1) If x ∈ T^n_{[X]δ}, then

    2^{-n(H(X)+η)} ≤ p(x) ≤ 2^{-n(H(X)-η)}.   (1)

(2) For n sufficiently large,

    Pr{ X^n ∈ T^n_{[X]δ} } > 1 − δ.   (2)

(3) For n sufficiently large,

    (1 − δ) 2^{n(H(X)-η)} ≤ |T^n_{[X]δ}| ≤ (1 + δ) 2^{n(H(X)+η)}.   (3)
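The strong typicality test of Definition 2 can be made concrete with a short sketch (not from the paper; the function name is_strongly_typical and the example distribution are illustrative):

```python
# A minimal sketch of the strong typicality test in Definition 2: a sequence is
# strongly delta-typical if symbols outside the support never occur and the
# empirical frequencies deviate from p(x) by at most delta in total.
from collections import Counter

def is_strongly_typical(seq, p, delta):
    n = len(seq)
    counts = Counter(seq)
    if any(x not in p or p[x] == 0 for x in counts):   # N(x; seq) = 0 off-support
        return False
    total_dev = sum(abs(counts.get(x, 0) / n - px) for x, px in p.items())
    return total_dev <= delta

p = {"a": 0.5, "b": 0.25, "c": 0.25}
print(is_strongly_typical("aabcabca", p, delta=0.1))   # empirical freqs match p
print(is_strongly_typical("aaaaaaab", p, delta=0.1))   # too far from p
```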

III. MAIN RESULT

In this section, we present our main result and prove it using Lemma 1. After that, we discuss the general inter-session network coding problem and point out the connections between our theorem and the general problem. Recall that a set of data sequences X_i = (X_i^1, X_i^2, …, X_i^n) is generated at each source node s_i ∈ S, and the data encoding rate is R_i for each data sequence X_i, i = 1, 2, …, N. In the network G, the receiver node asks for the source sequences of S. Our main result gives the admissible data encoding rates. The notation used in the paper is listed as follows.

N: number of sources.
h: number of receivers.
X_i: the information sequence of source s_i.
Q(X_i): the probability distribution of the information sequences {X_i}.
g_t(X_S): the encoding function of the data X_S.
R_1: the encoding rate of source s_1.
R: the sum of the encoding rates of all the needed sources s_i, i = 1, 2, …, N.
H(X): the entropy of the information variable X.
H(X_1, X_2): the joint entropy of a pair of information variables X_1 and X_2.
H(X_1 | X_2): the conditional entropy of X_1 given X_2.
E_11: the event that source X_1 is decoded in error while the other sources are decoded correctly.
Pr(E_11): the probability of the event E_11.

Theorem 1. For any finite source sequences X_i = (X_i^1, X_i^2, …, X_i^n), i = 1, 2, …, N, the source encoding rate set R = (R_1, R_2, …, R_N) that meets all of the following requirements is admissible:

    R_1 > H(X_1 | X_2, …, X_N);  R_2 > H(X_2 | X_1, X_3, …, X_N);  …;  R_N > H(X_N | X_1, …, X_{N-1});

    R_12 = R_1 + R_2 > H(X_1, X_2);  R_13 = R_1 + R_3 > H(X_1, X_3);  …;  R_{N-1,N} = R_{N-1} + R_N > H(X_{N-1}, X_N);  …;

    R_{i…j} = R_i + … + R_j > H(X_i, …, X_j);  …;

    R = R_1 + R_2 + … + R_N > H(X_1, …, X_N),

where i and j are positive integers and 1 ≤ i < j ≤ N.
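To make the conditions of Theorem 1 concrete, the following sketch (not part of the paper) computes the required entropies from a given joint distribution of the sources and checks whether a candidate rate tuple satisfies them; the function names are illustrative, and the pairwise and sum-rate conditions are read as covering every subset of at least two sources.

```python
# A minimal sketch that checks the rate conditions of Theorem 1 for N sources,
# given their joint pmf as a dict mapping N-tuples of symbols to probabilities.
from itertools import combinations
from math import log2

def marginal(pmf, idx):
    """Marginal pmf of the sources whose indices are in idx."""
    out = {}
    for symbols, p in pmf.items():
        key = tuple(symbols[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def entropy(pmf):
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def joint_entropy(pmf, idx):
    return entropy(marginal(pmf, idx))

def is_admissible(rates, pmf):
    """Check the rate conditions of Theorem 1."""
    n = len(rates)
    all_idx = tuple(range(n))
    h_all = joint_entropy(pmf, all_idx)
    # R_i > H(X_i | rest) = H(X_1,...,X_N) - H(rest)
    for i in range(n):
        rest = tuple(j for j in all_idx if j != i)
        if rates[i] <= h_all - joint_entropy(pmf, rest):
            return False
    # sum_{i in A} R_i > H(X_A) for every subset A with |A| >= 2
    for k in range(2, n + 1):
        for subset in combinations(all_idx, k):
            if sum(rates[i] for i in subset) <= joint_entropy(pmf, subset):
                return False
    return True

# Example: two correlated binary sources (a doubly symmetric binary source).
pmf = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
print(is_admissible((0.8, 0.8), pmf))   # True: each rate > H(Xi|Xj), sum > H(X1,X2)
print(is_admissible((0.6, 0.6), pmf))   # False: sum rate below H(X1,X2) ~ 1.47
```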

Proof: A random encoding method and Lemma 1 on strongly δ-typical sequences are employed to prove the theorem. Let the typical sequences X_i = (X_i^1, X_i^2, …, X_i^n) of length n be the outputs of source node s_i for i = 1, 2, …, N. Then let the δ-typical sequences X_i be mapped one-to-one onto the integer set F_i = [1, 2, …, 2^{nR_i}], i = 1, 2, …, N. Consequently, our encoding operations for the source sequences X_i are given as f_i(X_i) = F_i, i = 1, 2, …, N.

Note that X_i is the set of typical sequences of source s_i with length n, 1 ≤ i ≤ N, and the length n can be any positive integer. f_i(X_i) is a function of the typical sequence X_i; the function f_i maps the typical sequence X_i to the integer set [1, 2, …, 2^{nR_i}], 1 ≤ i ≤ N. All atypical sequences of source s_i are mapped to the integer "0". When n is large enough, because of the randomness of the mapping and the typicality of the source sequences, from inequality (1) of Lemma 1 we have

    2^{-n(H(F_i)+η)} ≤ p(f_i(X_i)) ≤ 2^{-n(H(F_i)-η)}.

With η → 0 as δ → 0, for arbitrary positive integers i, j, and k, the probabilities appearing in the encoding operations are

    P(i) = Pr{f_1(X_1) = i} = 2^{-nR_1},
    P(j) = Pr{f_2(X_2) = j} = 2^{-nR_2},
    …,


    P(k) = Pr{f_N(X_N) = k} = 2^{-nR_N}.

Our decoding operations are as follows: if the receiver node receives one typical sequence (X'_1, X'_2, …, X'_N) among the set of typical sequences that meets f_1(X'_1) = i, f_2(X'_2) = j, …, f_N(X'_N) = k and

    (X'_1, X'_2, …, X'_N) ∈ T^n_{[X_1 X_2 … X_N]δ},

we decode the integer vector (i, j, …, k) as the sequence (X'_1, X'_2, …, X'_N), that is,

    g_t: (i, j, …, k) ∈ F_1 × F_2 × … × F_N → (X'_1, X'_2, …, X'_N) ∈ X_1 × X_2 × … × X_N.

The decoding function g_t thus inverts the encoding mapping (f_1, f_2, …, f_N). If the decoded sequences (X'_1, X'_2, …, X'_N) = (X_1, X_2, …, X_N), then the receiver node recovers the correct data. On the contrary, if the received sequences do not meet the above conditions, errors occur. So the average error probability is

    P_t = Pr[ g_t(i, j, …, k) ≠ (X_1, X_2, …, X_N) ].
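The encode/decode scheme just described can be illustrated by a toy random-binning simulation for N = 2 (our own sketch, not the paper's construction; the block length, rates, and correlation model are illustrative, and joint typicality is replaced by a crude empirical test):

```python
# A toy sketch of random-binning encoding and typicality-style decoding for
# N = 2 correlated binary sources. With such a short block length the decoder
# can fail; as in the proof, the error probability vanishes only as n grows.
import itertools, random

random.seed(1)
n, p = 8, 0.1              # block length and P(X2 != X1)
R1, R2 = 0.8, 0.8          # rates in bits/symbol; R1 + R2 > H(X1, X2) ~ 1.47
B1, B2 = 2 ** round(n * R1), 2 ** round(n * R2)   # numbers of bins

def bin_index(seq, num_bins, salt):
    # Stands in for the random mappings f_i of the proof.
    return hash((salt,) + tuple(seq)) % num_bins

def disagreement(x1, x2):
    return sum(a != b for a, b in zip(x1, x2)) / n

# Draw a correlated source pair (X1, X2).
x1 = tuple(random.randint(0, 1) for _ in range(n))
x2 = tuple(b ^ (1 if random.random() < p else 0) for b in x1)

i, j = bin_index(x1, B1, 1), bin_index(x2, B2, 2)   # encoding: f1(X1)=i, f2(X2)=j

# Decoding: among all pairs whose bin indices match (i, j), pick the pair whose
# empirical disagreement is closest to p (a crude stand-in for joint typicality).
candidates = [(c1, c2)
              for c1 in itertools.product((0, 1), repeat=n)
              for c2 in itertools.product((0, 1), repeat=n)
              if bin_index(c1, B1, 1) == i and bin_index(c2, B2, 2) == j]
decoded = min(candidates, key=lambda c: abs(disagreement(*c) - p))
print("decoded correctly:", decoded == (x1, x2))
```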

In order to find an upper bound on the error probability P_t, we define the following events:

    E_0 = { (X_1, X_2, …, X_N) ∉ T^n_{[X_1 X_2 … X_N]δ} },

    E_1 = { ∃ (X'_1, X'_2, …, X'_N) ≠ (X_1, X_2, …, X_N) such that (X'_1, X'_2, …, X'_N) ∈ T^n_{[X_1 X_2 … X_N]δ} and f_1(X'_1) = i, f_2(X'_2) = j, …, f_N(X'_N) = k }.

E_0 is the event that the source sequences do not belong to the typical set T^n_{[X_1 X_2 … X_N]δ}. E_1 is the event that the decoded sequences belong to the typical set T^n_{[X_1 X_2 … X_N]δ} but are unequal to the original sequences (X_1, X_2, …, X_N). Obviously, the error probability P_t is the probability of the events E_0 and E_1, that is,

    P_t = Pr[E_0 ∪ E_1].

By the union bound we have

    P_t = Pr[E_0 ∪ E_1] ≤ Pr[E_0] + Pr[E_1].

From inequality (2) of Lemma 1 on typical sequences we have Pr{ X^n ∈ T^n_{[X]δ} } > 1 − δ, so Pr{ X^n ∉ T^n_{[X]δ} } < δ. Therefore, as n → ∞ we have Pr[E_0] → 0. Since N is a finite integer, the event E_1 can be divided into the following (2^N − 1) events:

    E_11 = { ∃ X'_1 ∈ X_1^n such that X'_1 ≠ X_1, f_1(X'_1) = i, and (X'_1, X_2, …, X_N) ∈ T^n_{[X_1 X_2 … X_N]δ} };

    E_12 = { ∃ X'_2 ∈ X_2^n such that X'_2 ≠ X_2, f_2(X'_2) = j, and (X_1, X'_2, X_3, …, X_N) ∈ T^n_{[X_1 X_2 … X_N]δ} };

    …;

    E_1N = { ∃ X'_N ∈ X_N^n such that X'_N ≠ X_N, f_N(X'_N) = k, and (X_1, X_2, …, X_{N-1}, X'_N) ∈ T^n_{[X_1 X_2 … X_N]δ} };

    E_112 = { ∃ X'_1 ∈ X_1^n, X'_2 ∈ X_2^n such that X'_1 ≠ X_1, f_1(X'_1) = i, X'_2 ≠ X_2, f_2(X'_2) = j, and (X'_1, X'_2, X_3, …, X_N) ∈ T^n_{[X_1 X_2 … X_N]δ} };

    …;

    E_11N = { ∃ X'_1 ∈ X_1^n, X'_N ∈ X_N^n such that X'_1 ≠ X_1, f_1(X'_1) = i, X'_N ≠ X_N, f_N(X'_N) = k, and (X'_1, X_2, …, X_{N-1}, X'_N) ∈ T^n_{[X_1 X_2 … X_N]δ} };

    …;

    E_112…N = { ∃ X'_1 ∈ X_1^n, X'_2 ∈ X_2^n, …, X'_N ∈ X_N^n such that X'_1 ≠ X_1, f_1(X'_1) = i, X'_2 ≠ X_2, f_2(X'_2) = j, …, X'_N ≠ X_N, f_N(X'_N) = k, and (X'_1, X'_2, …, X'_N) ∈ T^n_{[X_1 X_2 … X_N]δ} },

where δ is an arbitrarily small positive number. For the events above, E_1ij means that there are two sequences (X'_1, X'_2, …, X'_N) and (X_1, X_2, …, X_N) among the strongly δ-typical sequences that meet X'_i ≠ X_i, X'_j ≠ X_j and f(X'_1 X'_2 … X'_N) = f(X_1 X_2 … X_N); each E_1i…j is one of the error events composing E_1. When n → ∞, we have δ → 0 in all of the above formulas. By the union bound we have that

    Pr[E_11] = Σ_{X'_1 ≠ X_1 : (X'_1, X_2, …, X_N) ∈ T^n_{[X_1 X_2 … X_N]δ}} Pr[ f_1(X'_1) = i ]
             ≤ 2^{-nR_1} · |T^n_{[X_1|X_2 … X_N]δ}|.   (4)

From inequality (3) of Lemma 1,

    (1 − δ) 2^{n(H(X)-η)} ≤ |T^n_{[X]δ}| ≤ (1 + δ) 2^{n(H(X)+η)},

we have that

    |T^n_{[X_1|X_2 … X_N]δ}| ≤ 2^{n[H(X_1|X_2 … X_N) + η]}.   (5)

Therefore, we can get from (4) and (5) that


    Pr[E_11] = Σ_{X'_1 ≠ X_1 : (X'_1, X_2, …, X_N) ∈ T^n_{[X_1 X_2 … X_N]δ}} Pr[ f_1(X'_1) = i ]
             ≤ 2^{-nR_1} · |T^n_{[X_1|X_2 … X_N]δ}|
             ≤ 2^{-nR_1} · 2^{n[H(X_1|X_2 … X_N) + η]}
             = 2^{n[H(X_1|X_2 … X_N) + η − R_1]}.   (6)

When δ → 0, then η → 0; since R_1 > H(X_1|X_2, …, X_N), we can guarantee that H(X_1|X_2 … X_N) + η − R_1 < 0. So n[H(X_1|X_2 … X_N) + η − R_1] → −∞ as n → ∞, and therefore

    Pr(E_11) ≤ 2^{n[H(X_1|X_2 … X_N) + η − R_1]} → 0.

In the same way, if R_2 > H(X_2|X_1, X_3, …, X_N), when δ → 0, then η → 0, and we have

    Pr(E_12) ≤ 2^{n[H(X_2|X_1, X_3, …, X_N) + η − R_2]} → 0.

… If R = R_1 + R_2 + … + R_N > H(X_1, X_2, …, X_N), when δ → 0, then η → 0, and we have

    Pr(E_112…N) ≤ 2^{n[H(X_1, X_2, …, X_N) + η − R]} → 0.

That is to say, Pr(E_1i…j) → 0 for all i, j, 1 ≤ i < j ≤ N. So we have

    Pr(E_1) ≤ Pr(E_11) + Pr(E_12) + … + Pr(E_112…N) → 0.

Thus we have proven that P_t tends to zero as n → ∞, i.e., there must exist encoding and decoding methods such that the decoding error probability P_t → 0. ∎
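To give a feel for how quickly the bound in (6) decays, the following sketch (illustrative numbers, not from the paper) evaluates 2^{n[H(X_1|X_2) + η − R_1]} for a binary source pair in which X_2 is X_1 passed through a BSC(0.1), so that H(X_1|X_2) ≈ 0.469 bits, with R_1 = 0.6:

```python
# Numeric illustration of the exponential decay of the bound (6).
from math import log2

def h(q):                       # binary entropy function
    return -q * log2(q) - (1 - q) * log2(1 - q)

H_cond, R1, eta = h(0.1), 0.6, 0.01   # H(X1|X2), rate, and slack (illustrative)
for n in (50, 100, 200, 400):
    bound = 2 ** (n * (H_cond + eta - R1))
    print(n, bound)
```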

The inspiration for this theorem comes from the classical Slepian–Wolf problem [1]. Theorem 1 generalizes the encoding rate region to independent encoding of N sources. If N = 3, the admissible rate region in Theorem 1 is

    R = { (R_1, R_2, R_3) : R_1 > H(X_1|X_2, X_3), R_2 > H(X_2|X_1, X_3), R_3 > H(X_3|X_1, X_2), R_1 + R_2 > H(X_1, X_2), R_1 + R_3 > H(X_1, X_3), R_2 + R_3 > H(X_2, X_3), R_1 + R_2 + R_3 > H(X_1, X_2, X_3) }.

We have proved this theorem by constructing the encoding and decoding algorithms using typical sequences. The theorem gives the rate region for multi-source independent encoding to multicast correlated data, and it focuses on the case in which all of the multiple correlated sources are to be multicast to all the receivers. In the general model of multiple sources and multiple receivers, however, each receiver only needs to reliably reproduce a prescribed subset of the multiple sources, chosen by the receiver itself. Different receivers may require different sources, and some links may serve several receivers whose required sources differ, so mixing those sources becomes extraordinarily difficult: the sources must partition the capacity of the shared links, which makes characterizing the admissible encoding rates of the sources extremely hard. The problem in this general model is quite difficult, but the theorem and its proof method allow an observation about the general problem. We thus have

Observation: If a receiver t requires all the information of the source nodes s_i, i = 1, 2, …, N, the capacity from each source node s_i to receiver t is equal to or greater than R_i, and the capacity from all the source nodes s_i (i = 1, 2, …, N) to receiver t is equal to or greater than R, then the receiver t can reproduce all the data from the sources.

Consider the general case in which the receiver node t requires the data of only some of the source nodes, S_t = {s_1, s_2, …, s_i}, i < N. Without loss of generality, assume that the data of the sources S_t are mixed with the data of source nodes S'_t on the paths to the receiver t, where S'_t is a subset of the source node set S̄_t (the complementary set of S_t). We give a deduction about the general problem from our Theorem 1.

Deduction: The capacity region of receiver t for an arbitrary acyclic multi-source multi-receiver network is characterized by c_j > H(X_j | X'_t) for j = 1, 2, …, i, c > H(X_1, X_2, …, X_i), and c_t > H(X'_t | X_t), where c_j is the capacity from source node s_j to receiver t, c is the capacity from the source nodes S_t to receiver t, c_t is the capacity from the source nodes S'_t to receiver t, X_t denotes the data of S_t, and X'_t denotes the data of S'_t.

The receiver t can reproduce all the data if the capacities meet Theorem 1; of course, the receiver t can also reproduce partial data under Theorem 1. The deduction, however, broadens the capacity region of Theorem 1: the receiver t can reproduce the information of the source nodes S_t with data encoding rates c_j, j = 1, 2, …, i, satisfying c > H(X_1, X_2, …, X_i), under the condition that the receiver t has already received the data of the source nodes S'_t, according to Theorem 1. The capacity c_t then provides enough side information for the receiver t to decode the required data. The deduction gives the idea that the capacity region of receiver t for a multi-source multi-receiver network is characterized by Theorem 1 for the required data, together with c_t > H(X'_t | X_t) for the data that is mixed with the required data. The proof of the deduction is similar to the proof of Theorem 1.

In this section, we present the main result of the paper. Theorem 1 gives the admissible encoding region of multiple sources. The deduction gives the capacity region of receiver t for an arbitrary acyclic multi-source multi-receiver network. This deduction takes network coding and side-information into account.

IV. CONCLUSION AND FUTURE WORK

In this paper, we studied the problem of multi-source multicasting with network coding, which leads to the more general network coding problem. We have characterized the admissible coding rate region of the multi-source independent encoding problem. Our result can be regarded as a generalization of the admissible coding rate region for two independently encoded sources. From classical information theory for point-to-point communication, if two sources are independent, optimality can be achieved (asymptotically) by coding the sources separately. However, it has been shown by a simple example in [7] that for simultaneous multicasting of two sources, it may be necessary to code the sources jointly in order to achieve optimality. A special case of the multi-source multi-receiver problem, which finds application in satellite communication, has been studied in [5]; in that work, inner and outer bounds on the admissible coding rate region were obtained.


This theoretical paper puts forward the multi-source independent encoding rate region theorem. The theorem provides a theoretical reference for applying network coding to practical communication networks. Designing a multi-source network coding algorithm will be our future work.

ACKNOWLEDGMENT

The authors would like to thank the support of the National Basic Research Program of China (No. 2012CB315904), NSFC (No. 61179028), NSFGD (No. S201101000923), and the Shenzhen projects JC201005260234A, JC201104210120A, and ZD201006110044A.

REFERENCES

[1] D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Inform. Theory, vol. IT-19, pp. 471–480, 1973.

[2] P. Sanders, S. Egner, and L. Tolhuizen, “Polynomial time algorithms for network information flow,” in Proc. 15th Annu. ACM Symp. Parallel Algorithms and Architectures, 2003, pp. 286–294.

[3] R. Koetter and M. Médard, “An algebraic approach to network coding,” IEEE/ACM Trans. Networking, vol. 11, no. 5, pp. 782–795, 2003.

[4] S. Jaggi, P. Sanders, P. A. Chou, et al., “Polynomial time algorithms for multicast network code construction,” IEEE Trans. Inf. Theory, vol. 51, no. 6, pp. 1973–1982, 2005.

[5] R. W. Yeung and Z. Zhang, “Distributed source coding for satellite communications,” IEEE Trans. Inform. Theory, vol. 45, no. 4, pp. 1111–1120, May 1999.

[6] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, “Network information flow,” IEEE Trans. Inf. Theory, vol. 46, no. 4, pp. 1204–1216, 2000.

[7] R. W. Yeung, “Multilevel diversity coding with distortion,” IEEE Trans. Inform. Theory, vol. 41, pp. 412–422, Mar. 1995.

[8] X. Yan, R. W. Yeung, and Z. Zhang, “The capacity region for multi-source multi-sink network coding,” in Proc. 2007 IEEE Int. Symp. Information Theory (ISIT 2007), Nice, France, June 2007.

[9] K. Bhattad, N. Ratnakar, R. Koetter, and K. R. Narayanan, “Minimal network coding for multicast,” in Proc. 2005 IEEE Int. Symp. Information Theory (ISIT 2005), Adelaide, Australia, Sept. 2005, pp. 1730–1734.

[10] M. Kim, M. Médard, V. Aggarwal, U.-M. O’Reilly, W. Kim, C. W. Ahn, and M. Effros, “Evolutionary approaches to minimizing network coding resources,” in Proc. IEEE INFOCOM 2007, Anchorage, AK, May 2007.

[11] Y. Xi and E. M. Yeh, “Distributed algorithms for minimum cost multicast with network coding,” in Proc. 43rd Annu. Allerton Conf. Communication, Control, and Computing, Monticello, IL, Sept. 2005.

[12] Z. Li and B. Li, “Network coding: The case of multiple unicast sessions,” presented at the 42nd Allerton Conf., Monticello, IL, Sep. 2004.

[13] K. Jain, V. Vazirani, R. Yeung, and G. Yuval, “On the capacity of multiple unicast sessions in undirected graphs,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Sep. 2005, pp. 563–567.

[14] J. Price and T. Javidi, “Network coding games with unicast flows,” IEEE J. Sel. Areas Commun., vol. 26, no. 7, pp. 1302–1316, Sep. 2008.

[15] A. H. Mohsenian-Rad, J. Huang, V. W. S. Wong, S. Jaggi, and R. Schober, “A game-theoretic analysis of inter-session network coding,” in Proc. IEEE ICC, Germany, Jun. 2009, pp. 1–6.

[16] M. Kim, M. Médard, U. O’Reilly, and D. Traskov, “An evolutionary approach to inter-session network coding,” in Proc. IEEE INFOCOM, Rio de Janeiro, Brazil, Apr. 2009, pp. 450–458.

[17] Te Sun Han, “Multicasting multiple correlated sources to multiple receivers over a noisy channel network,” IEEE Trans. Inform. Theory, vol. 57, no. 1, Jan. 2011.

[18] X. Yan, R. W. Yeung, and Z. Zhang, “The capacity region for multiple-source multiple-receiver network coding,” in Proc. IEEE Int. Symp. Information Theory, Jun. 2007.

[19] R. W. Yeung, Information Theory and Network Coding. Hong Kong, 2007.