IJITE Vol. 01, Issue 07 (December 2013), ISSN: 2321-1776
International Journal in IT and Engineering
http://www.ijmr.net
ALGEBRAIC AND ANALYTIC PROPERTIES OF HR(P)
*Dr Meenakshi Darolia
In this paper we first give the definition of the R-norm information measure and then discuss the algebraic and analytic properties of the R-norm information measure $H_R(P)$, summarized in the following two theorems.

First we introduce some notation for convenience. We shall often refer to the set of positive real numbers not equal to 1. We denote this set by $R^+$, with
$$R^+ = \{R \mid R > 0,\ R \neq 1\}.$$
We also define $\Delta_n$ as the set of all $n$-ary probability distributions $P = (p_1, p_2, \ldots, p_n)$ which satisfy the conditions
$$p_i \geq 0, \qquad \sum_{i=1}^{n} p_i = 1.$$
DEFINITION: The R-norm information of the distribution $P$ is defined for $R \in R^+$ by
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right]. \qquad (2.1)$$
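Definition (2.1) is easy to evaluate numerically. The sketch below (the function name `r_norm` is ours, not from the paper) computes $H_R(P)$ directly from the definition:

```python
def r_norm(p, R):
    """R-norm information H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R)),
    defined for R > 0, R != 1, and a probability distribution p."""
    if R <= 0 or R == 1:
        raise ValueError("R must be positive and not equal to 1")
    s = sum(pi ** R for pi in p)
    return R / (R - 1) * (1 - s ** (1.0 / R))

# For the uniform distribution on n points this equals R/(R-1)*(1 - n^((1-R)/R)).
print(r_norm([0.25] * 4, R=2))  # prints 1.0, i.e. 2*(1 - 4**(-1/2))
```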
*Assistant Professor, Isharjyot Degree College Pehova
The R-norm information measure (2.1) is a real function $\Delta_n \to \mathbb{R}^+$ defined on $\Delta_n$, where $n \geq 2$ and $R^+$ is the set of positive real numbers. This measure is different from Shannon's entropy and from the entropies of Renyi, Havrda and Charvat, and Daroczy.

ALGEBRAIC AND ANALYTIC PROPERTIES OF H_R(P):
The algebraic and analytic properties of the R-norm information measure $H_R(P)$ will be summarized in the following two theorems. First we consider the algebraic properties of the R-norm information measure.
Theorem 1: The R-norm information measure $H_R(P)$ has the following algebraic properties:
1. $H_R(P) = H_R(p_1, p_2, \ldots, p_n)$ is a symmetric function of $(p_1, p_2, \ldots, p_n)$.
2. $H_R(P)$ is expansible, i.e. $H_R(p_1, p_2, \ldots, p_n, 0) = H_R(p_1, p_2, \ldots, p_n)$.
3. $H_R(P)$ is decisive, i.e. $H_R(1, 0) = H_R(0, 1) = 0$.
4. $H_R(P)$ is non-recursive.
5. $H_R(P)$ is pseudo-additive, i.e. if $P$ and $Q$ are independent then
$$H_R(P * Q) = H_R(P) + H_R(Q) - \frac{R-1}{R}\, H_R(P)\, H_R(Q),$$
i.e. $H_R(P)$ is non-additive.
Proof:
(1) To prove that $H_R(P) = H_R(p_1, p_2, \ldots, p_n)$ is a symmetric function of $p_1, p_2, \ldots, p_n$: by definition (2.1), we have
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R + \cdots + p_n^R\right)^{1/R}\right].$$
Since $\sum_{i=1}^{n} p_i^R$ is a symmetric expression in the $p_i$'s, i.e. $p_1^R + p_2^R + \cdots + p_n^R$ is unchanged if $p_1, p_2, \ldots, p_n$ are permuted, it follows that
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right]$$
is a symmetric function. This completes the proof.
(2) To prove $H_R(p_1, p_2, \ldots, p_n, 0) = H_R(p_1, p_2, \ldots, p_n)$, consider
$$H_R(p_1, p_2, \ldots, p_n, 0) = \frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R + \cdots + p_n^R + 0^R\right)^{1/R}\right]$$
$$= \frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R + \cdots + p_n^R\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = H_R(p_1, p_2, \ldots, p_n).$$
(3) To prove $H_R(P)$ is decisive, i.e. $H_R(1, 0) = H_R(0, 1) = 0$: by definition, we have
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right].$$
If we consider only two events,
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(p_1^R + p_2^R\right)^{1/R}\right]. \qquad (2.3)$$
If we take $p_1 = 1$, $p_2 = 0$, then (2.3) becomes
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(1^R + 0^R\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - 1\right] = 0,$$
so $H_R(1, 0) = 0$. Similarly we can prove that $H_R(0, 1) = 0$.
(4) To prove $H_R(P)$ is non-recursive, we have to show that
$$H_R(p_1, p_2, p_3, \ldots, p_n) \neq H_R(p_1 + p_2, p_3, \ldots, p_n) + (p_1 + p_2)\, H_R\!\left(\frac{p_1}{p_1 + p_2}, \frac{p_2}{p_1 + p_2}\right).$$
For this, first consider
$$H_R\!\left(\frac{p_1}{p_1 + p_2}, \frac{p_2}{p_1 + p_2}\right) = \frac{R}{R-1}\left[1 - \left(\frac{p_1^R + p_2^R}{(p_1 + p_2)^R}\right)^{1/R}\right]. \qquad (2.4)$$
Multiplying both sides of (2.4) by $(p_1 + p_2)$, we get
$$(p_1 + p_2)\, H_R\!\left(\frac{p_1}{p_1 + p_2}, \frac{p_2}{p_1 + p_2}\right) = \frac{R}{R-1}\left[(p_1 + p_2) - \left(p_1^R + p_2^R\right)^{1/R}\right].$$
Also,
$$H_R(p_1 + p_2, p_3, \ldots, p_n) = \frac{R}{R-1}\left[1 - \left((p_1 + p_2)^R + p_3^R + \cdots + p_n^R\right)^{1/R}\right].$$
Combining these two expressions, we see that their sum is not, in general, equal to $H_R(p_1, p_2, p_3, \ldots, p_n)$. Thus $H_R(p_1, p_2, p_3, \ldots, p_n)$ is non-recursive.
(5) To prove
$$H_R(P * Q) = H_R(P) + H_R(Q) - \frac{R-1}{R}\, H_R(P)\, H_R(Q),$$
i.e. $H_R(P)$ is non-additive.

Proof: Let $A_1, A_2, \ldots, A_n$ and $B_1, B_2, \ldots, B_m$ be two sets of events associated with the probability distributions $P \in \Delta_n$ and $Q \in \Delta_m$. We denote the probability of the joint occurrence of the events $A_i$ $(i = 1, 2, \ldots, n)$ and $B_j$ $(j = 1, 2, \ldots, m)$ by $p(A_i \cap B_j)$. Then the R-norm information is given by
$$H_R(P * Q) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n}\sum_{j=1}^{m} p^R(A_i \cap B_j)\right)^{1/R}\right].$$
Since the events considered here are stochastically independent, we have
$$H_R(P * Q) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p^R(A_i) \sum_{j=1}^{m} p^R(B_j)\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\left(\sum_{j=1}^{m} q_j^R\right)^{1/R}\right].$$
Writing $a = \left(\sum_i p_i^R\right)^{1/R}$ and $b = \left(\sum_j q_j^R\right)^{1/R}$, so that $H_R(P) = \frac{R}{R-1}(1-a)$ and $H_R(Q) = \frac{R}{R-1}(1-b)$, and using the identity $1 - ab = (1-a) + (1-b) - (1-a)(1-b)$, we obtain
$$H_R(P * Q) = H_R(P) + H_R(Q) - \frac{R-1}{R}\, H_R(P)\, H_R(Q).$$
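Pseudo-additivity can be verified numerically. The sketch below (helper name `r_norm` is ours) builds the joint distribution of two independent sources and compares both sides of the identity:

```python
import itertools

def r_norm(p, R):
    # H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R))
    s = sum(pi ** R for pi in p)
    return R / (R - 1) * (1 - s ** (1.0 / R))

P = [0.5, 0.3, 0.2]
Q = [0.6, 0.4]
R = 3.0

# Joint distribution of independent P and Q: all products p_i * q_j.
joint = [pi * qj for pi, qj in itertools.product(P, Q)]

lhs = r_norm(joint, R)
rhs = r_norm(P, R) + r_norm(Q, R) - (R - 1) / R * r_norm(P, R) * r_norm(Q, R)
print(abs(lhs - rhs) < 1e-12)  # True
```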
Theorem 2: Let $H_R(P) = H_R(p_1, p_2, \ldots, p_n)$ be the R-norm information measure. Then for $P \in \Delta_n$ and $R \in R^+$ we find:
(1) $H_R(P)$ is non-negative.
(2) $H_R(P) \geq H_R(1, 0, 0, \ldots, 0) = 0$.
(3) $H_R(P) \leq H_R\!\left(\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n}\right) = \frac{R}{R-1}\left[1 - n^{(1-R)/R}\right]$.
(4) $H_R(P)$ is a monotonic function of $P$.
(5) $H_R(P)$ is continuous for $R \in R^+$.
(6) $H_R(P)$ is stable in $p_i$, $i = 1, 2, \ldots, n$.
(7) $H_R(P)$ is small for small probabilities.
(8) $H_R(P)$ is a concave function of the $p_i$.
(9) $\lim_{R \to \infty} H_R(P) = 1 - \max_i p_i$.
Proof:
(1) To prove that $H_R(P) \geq 0$, we consider the following two cases:
Case I: When $R > 1$, then $p_i^R \leq p_i$ for $0 \leq p_i \leq 1$, so
$$\sum_{i=1}^{n} p_i^R \leq \sum_{i=1}^{n} p_i = 1, \qquad \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \leq 1,$$
and hence
$$1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \geq 0. \qquad (2.9)$$
Since $R > 1$, we have $\frac{R}{R-1} > 0$. Multiplying both sides of (2.9) by $\frac{R}{R-1}$, we get
$$\frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] \geq 0.$$
But
$$\frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = H_R(P),$$
so $H_R(P) \geq 0$ for $R > 1$.
Case II: When $0 < R < 1$, then $p_i^R \geq p_i$ for $0 \leq p_i \leq 1$, so that
$$\sum_{i=1}^{n} p_i^R \geq \sum_{i=1}^{n} p_i = 1 \quad \text{and} \quad \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \geq 1.$$
Since $\frac{R}{R-1} < 0$ for $0 < R < 1$, multiplying reverses the inequality and again
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] \geq 0.$$
Hence $H_R(P) \geq 0$ for all $R \in R^+$.

(2) To prove $H_R(P) \geq H_R(1, 0, 0, \ldots, 0) = 0$: by definition, we have
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right],$$
and if we take $p_1 = 1$ and $p_2 = p_3 = \cdots = p_n = 0$, then
$$\sum_{i=1}^{n} p_i^R = p_1^R + p_2^R + \cdots + p_n^R = 1, \qquad \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} = 1,$$
so that
$$\frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = 0,$$
i.e. $H_R(1, 0, 0, \ldots, 0) = 0$. Together with property (1), this gives $H_R(P) \geq H_R(1, 0, 0, \ldots, 0) = 0$.
(3) To prove $H_R(P) \leq H_R\!\left(\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n}\right) = \frac{R}{R-1}\left[1 - n^{(1-R)/R}\right]$: by definition (2.1), we have
$$H_R(P) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right].$$
If we take $p_1 = p_2 = p_3 = \cdots = p_n = \frac{1}{n}$, this becomes
$$H_R\!\left(\frac{1}{n}, \frac{1}{n}, \ldots, \frac{1}{n}\right) = \frac{R}{R-1}\left[1 - \left(n \cdot n^{-R}\right)^{1/R}\right] = \frac{R}{R-1}\left[1 - n^{(1-R)/R}\right].$$
Now we prove the inequality. By the method of Lagrange multipliers, we have
$$\left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \geq n^{(1-R)/R} \quad \text{for } R > 1,$$
and
$$\left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \leq n^{(1-R)/R} \quad \text{for } 0 < R < 1.$$
Equality holds iff $p_i = 1/n$ for all $i = 1, 2, \ldots, n$. Substituting these results into the definition, and noting that $\frac{R}{R-1} > 0$ for $R > 1$ and $\frac{R}{R-1} < 0$ for $0 < R < 1$, we obtain in both cases
$$H_R(P) \leq \frac{R}{R-1}\left[1 - n^{(1-R)/R}\right] = H_R\!\left(\frac{1}{n}, \ldots, \frac{1}{n}\right).$$
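Property (3) can be spot-checked numerically: for random distributions, $H_R(P)$ should never exceed the value at the uniform distribution. A sketch (helper names are ours):

```python
import random

def r_norm(p, R):
    # H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R))
    s = sum(pi ** R for pi in p)
    return R / (R - 1) * (1 - s ** (1.0 / R))

random.seed(0)
n = 5
for R in (0.5, 2.0, 10.0):
    # The claimed maximum: R/(R-1) * (1 - n^((1-R)/R)), attained at p_i = 1/n.
    bound = r_norm([1.0 / n] * n, R)
    assert abs(bound - R / (R - 1) * (1 - n ** ((1 - R) / R))) < 1e-12
    for _ in range(1000):
        w = [random.random() for _ in range(n)]
        p = [wi / sum(w) for wi in w]
        assert r_norm(p, R) <= bound + 1e-12
print("maximum attained at the uniform distribution")
```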
(4) To prove $H_R(P)$ is a monotonic function of $P$, consider a distribution over two events, for which
$$H_R(p, 1-p) = \frac{R}{R-1}\left[1 - G(p)\right], \qquad (2.16)$$
where
$$G(p) = \left[p^R + (1-p)^R\right]^{1/R}. \qquad (2.17)$$
Differentiating (2.17) with respect to $p$, we get
$$\frac{dG(p)}{dp} = \frac{1}{R}\left[p^R + (1-p)^R\right]^{\frac{1}{R}-1}\left[R\,p^{R-1} - R\,(1-p)^{R-1}\right] = \left[p^R + (1-p)^R\right]^{\frac{1}{R}-1}\left[p^{R-1} - (1-p)^{R-1}\right].$$
From (2.16), we note that
$$\frac{d}{dp}\, H_R(p, 1-p) = -\frac{R}{R-1}\,\frac{dG(p)}{dp}.$$
Now when $R > 1$ and $p \in \left[0, \tfrac{1}{2}\right)$ we have $p^{R-1} < (1-p)^{R-1}$, so $\frac{dG(p)}{dp} < 0$ and hence
$$\frac{d}{dp}\, H_R(p, 1-p) > 0 \quad \text{for } R > 1,\ p \in \left[0, \tfrac{1}{2}\right).$$
Similarly, for $0 < R < 1$ and $p \in \left[0, \tfrac{1}{2}\right)$ we have $\frac{dG(p)}{dp} > 0$ while $\frac{R}{R-1} < 0$, so again $\frac{d}{dp}\, H_R(p, 1-p) > 0$: the measure increases monotonically as the two-event distribution approaches the uniform one.
Properties (5), (6) and (7) follow directly from the form of the definition (2.1).

(8) To prove $H_R(P)$ is a concave function of the $p_i$: let $P_1, P_2, \ldots, P_r \in \Delta_n$ with $P_k = (x_{1k}, x_{2k}, \ldots, x_{nk})$, and let $a_1, a_2, \ldots, a_r \geq 0$ with $\sum_{k=1}^{r} a_k = 1$. Write $P_0 = \sum_{k=1}^{r} a_k P_k$ for the mixture distribution. $H_R(P)$ will be concave if
$$D = \sum_{k=1}^{r} a_k\, H_R(P_k) - H_R(P_0)$$
is less than or equal to zero for all $R \in R^+$. So we consider
$$D = \frac{R}{R-1}\left[1 - \sum_{k=1}^{r} a_k \left(\sum_{i=1}^{n} x_{ik}^R\right)^{1/R}\right] - \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} \left(\sum_{k=1}^{r} a_k x_{ik}\right)^R\right)^{1/R}\right]$$
$$= \frac{R}{R-1}\left[\left(\sum_{i=1}^{n} \left(\sum_{k=1}^{r} a_k x_{ik}\right)^R\right)^{1/R} - \sum_{k=1}^{r} a_k \left(\sum_{i=1}^{n} x_{ik}^R\right)^{1/R}\right].$$
Now, by Minkowski's inequality,
$$\left(\sum_{i=1}^{n} \left(\sum_{k=1}^{r} a_k x_{ik}\right)^R\right)^{1/R} \leq \sum_{k=1}^{r} a_k \left(\sum_{i=1}^{n} x_{ik}^R\right)^{1/R} \quad \text{for } R > 1,$$
with the reverse inequality for $0 < R < 1$. Hence for $R > 1$ the bracket is $\leq 0$ and $\frac{R}{R-1} > 0$, while for $0 < R < 1$ the bracket is $\geq 0$ and $\frac{R}{R-1} < 0$; in both cases $D \leq 0$, so $H_R(P)$ is a concave function of the $p_i$.
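Concavity is also easy to probe numerically: for random pairs of distributions and random mixing weights, the R-norm information of the mixture should dominate the mixture of the R-norm informations. A sketch (helper names are ours):

```python
import random

def r_norm(p, R):
    # H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R))
    s = sum(pi ** R for pi in p)
    return R / (R - 1) * (1 - s ** (1.0 / R))

def rand_dist(n):
    w = [random.random() for _ in range(n)]
    return [wi / sum(w) for wi in w]

random.seed(1)
for R in (0.5, 2.0, 5.0):
    for _ in range(500):
        P1, P2 = rand_dist(4), rand_dist(4)
        a = random.random()
        mix = [a * x + (1 - a) * y for x, y in zip(P1, P2)]
        # Concavity: H_R(mixture) >= mixture of H_R values.
        assert r_norm(mix, R) >= a * r_norm(P1, R) + (1 - a) * r_norm(P2, R) - 1e-12
print("concavity verified numerically")
```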
(9) To prove $\lim_{R \to \infty} H_R(P) = 1 - \max_i p_i$: let $p_k = \max_i p_i$, so that $0 \leq p_i \leq p_k$ for $i = 1, \ldots, n$. For $R > 1$ we have
$$0 < p_k^R \leq \sum_{i=1}^{n} p_i^R \leq n\, p_k^R,$$
and therefore
$$p_k \leq \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} \leq n^{1/R}\, p_k. \qquad (2.29)$$
Since $n^{1/R} \to 1$ as $R \to \infty$, taking the limit on each side of (2.29) we obtain
$$\lim_{R \to \infty} \left(\sum_{i=1}^{n} p_i^R\right)^{1/R} = p_k = \max_i p_i.$$
Also we know that
$$\lim_{R \to \infty} \frac{R}{R-1} = 1.$$
And finally,
$$\lim_{R \to \infty} H_R(P) = \lim_{R \to \infty} \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = 1 - \max_i p_i,$$
i.e. $\lim_{R \to \infty} H_R(P) = 1 - \max_i p_i$. This completes the proof of Theorem 2.
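The limit in property (9) shows up quickly in computation; the sketch below (helper name `r_norm` is ours) evaluates $H_R(P)$ for increasing $R$ and compares against $1 - \max_i p_i$:

```python
def r_norm(p, R):
    # H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R))
    s = sum(pi ** R for pi in p)
    return R / (R - 1) * (1 - s ** (1.0 / R))

p = [0.5, 0.3, 0.2]
target = 1 - max(p)  # 0.5
for R in (10.0, 100.0, 1000.0):
    print(R, r_norm(p, R))  # approaches 0.5 as R grows
```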
Property (9) is of particular interest since it provides a direct interpretation of the value of R which can be chosen: it shows that for increasing R, the probability, say $p_k$, which has the largest value tends to dominate the R-norm information of the distribution $P$. Therefore the R-norm information for large values of $R$ seems appropriate for those applications in which we are mainly interested in events with large probability.

It is of interest to relate the R-norm information to the information of order $\alpha$, the information of type $\beta$, and Shannon's information measure. As may be expected, this depends on the values of $R$, $\alpha$ and $\beta$.

Theorem: Let $H_\alpha(p)$ be the information of order $\alpha$ (Renyi, 1961) and $H^\beta(p)$ the information of type $\beta$ (Havrda and Charvat, 1967; Daroczy, 1970), defined by
$$H_\alpha(p) = \frac{1}{1-\alpha} \log \sum_{i=1}^{n} p_i^\alpha, \qquad \alpha > 0,\ \alpha \neq 1,$$
and
$$H^\beta(p) = \frac{1}{2^{1-\beta} - 1}\left[\sum_{i=1}^{n} p_i^\beta - 1\right], \qquad \beta > 0,\ \beta \neq 1.$$
Then for $R = \beta$ it holds that
$$H_R(p) = \frac{R}{R-1}\left\{1 - \left[1 + \left(2^{1-R} - 1\right) H^\beta(p)\right]^{1/R}\right\},$$
and for $R = \alpha$ we have
$$H_R(p) = \frac{R}{R-1}\left[1 - \exp\!\left(\frac{1-R}{R}\, H_\alpha(p)\right)\right].$$
To prove this theorem, consider the R.H.S. of the second relation, i.e.
$$\frac{R}{R-1}\left[1 - \exp\!\left(\frac{1-R}{R}\, H_\alpha(p)\right)\right] = \frac{R}{R-1}\left[1 - \exp\!\left(\frac{1-R}{R} \cdot \frac{1}{1-\alpha} \log \sum_{i=1}^{n} p_i^\alpha\right)\right].$$
Putting $R = \alpha$, we have
$$\frac{R}{R-1}\left[1 - \exp\!\left(\frac{1}{R} \log \sum_{i=1}^{n} p_i^R\right)\right] = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = H_R(p).$$
Hence proved.
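The relation between the R-norm information and the order-$\alpha$ (Renyi) information is exact for $R = \alpha$ and can be confirmed numerically; the sketch below uses natural logarithms and function names of our own choosing:

```python
import math

def r_norm(p, R):
    # H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R))
    s = sum(pi ** R for pi in p)
    return R / (R - 1) * (1 - s ** (1.0 / R))

def renyi(p, a):
    # Information of order a (Renyi): 1/(1-a) * log(sum p_i^a), natural log.
    return math.log(sum(pi ** a for pi in p)) / (1 - a)

p = [0.5, 0.25, 0.25]
R = 2.0
lhs = r_norm(p, R)
rhs = R / (R - 1) * (1 - math.exp((1 - R) / R * renyi(p, R)))
print(abs(lhs - rhs) < 1e-12)  # True
```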
Similarly, for the first relation,
$$\frac{R}{R-1}\left\{1 - \left[1 + \left(2^{1-R} - 1\right) H^\beta(p)\right]^{1/R}\right\} = \frac{R}{R-1}\left\{1 - \left[1 + \left(2^{1-R} - 1\right) \cdot \frac{1}{2^{1-\beta} - 1}\left(\sum_{i=1}^{n} p_i^\beta - 1\right)\right]^{1/R}\right\}.$$
Putting $R = \beta$, we get
$$\frac{R}{R-1}\left\{1 - \left[1 + \sum_{i=1}^{n} p_i^R - 1\right]^{1/R}\right\} = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = H_R(p).$$
Thus we have
$$\frac{R}{R-1}\left\{1 - \left[1 + \left(2^{1-R} - 1\right) H^\beta(p)\right]^{1/R}\right\} = H_R(p),$$
which proves the theorem.

Now we discuss the most interesting property of the R-norm information measure, namely
$$\lim_{R \to 1} H_R(p) = H_S(p),$$
where $H_S(p)$ denotes Shannon's information measure.
Since by definition
$$H_R(p) = \frac{R}{R-1}\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right],$$
we have
$$\lim_{R \to 1} H_R(p) = \lim_{R \to 1} \frac{R\left[1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right]}{R - 1} \qquad \left(\tfrac{0}{0}\ \text{form}\right).$$
Then by L'Hospital's rule (differentiating numerator and denominator with respect to $R$), we have
$$\lim_{R \to 1} H_R(p) = \lim_{R \to 1}\left[\left(1 - \left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right) - R\,\frac{d}{dR}\left(\sum_{i=1}^{n} p_i^R\right)^{1/R}\right] = -\lim_{R \to 1} \frac{d}{dR}\left(\sum_{i=1}^{n} p_i^R\right)^{1/R}.$$
If we take
$$T = \left(\sum_{i=1}^{n} p_i^R\right)^{1/R},$$
then taking logarithms on both sides,
$$\log T = \frac{1}{R} \log \sum_{i=1}^{n} p_i^R.$$
Differentiating with respect to $R$,
$$\frac{1}{T}\,\frac{dT}{dR} = -\frac{1}{R^2} \log \sum_{i=1}^{n} p_i^R + \frac{1}{R} \cdot \frac{\sum_{i=1}^{n} p_i^R \log p_i}{\sum_{i=1}^{n} p_i^R}.$$
Letting $R \to 1$ and using $\sum_{i=1}^{n} p_i = 1$ (so that $\log \sum_i p_i = 0$ and $T \to 1$), we get
$$\lim_{R \to 1} \frac{dT}{dR} = \sum_{i=1}^{n} p_i \log p_i,$$
and hence
$$\lim_{R \to 1} H_R(p) = -\sum_{i=1}^{n} p_i \log p_i = H_S(p).$$
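The convergence to Shannon's entropy as $R \to 1$ can be seen numerically; the sketch below (helper names are ours, natural logarithms assumed) evaluates $H_R(p)$ for $R$ approaching 1:

```python
import math

def r_norm(p, R):
    # H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R))
    s = sum(pi ** R for pi in p)
    return R / (R - 1) * (1 - s ** (1.0 / R))

def shannon(p):
    # H_S(p) = -sum p_i log p_i (natural log), ignoring zero terms.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
for R in (1.1, 1.01, 1.001):
    print(R, r_norm(p, R))  # tends to shannon(p) as R -> 1
print(shannon(p))
```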
REFERENCES
[1]. ACZEL, J. AND DAROCZY, Z. (1975), On Measures of Information and their Characterizations, Academic Press, New York.
[2]. ARIMOTO, S. (1971), Information-theoretical considerations on estimation problems, Inform. Contr. 19, 181-194.
[3]. BECKENBACH, E.F. AND BELLMAN, R. (1971), Inequalities, Springer-Verlag, Berlin.
[4]. BOEKEE, D.E. AND VAN DER LUBBE, J.C.A. (1979), Some aspects of error bounds in feature selection, Pattern Recognition 11, 353-360.
[5]. CAMPBELL, L.L. (1965), A coding theorem and Renyi's entropy, Information and Control 8, 423-429.
[6]. DAROCZY, Z. (1970), Generalized information functions, Inform. Contr. 16, 36-51.
[7]. BOEKEE, D.E. AND VAN DER LUBBE, J.C.A. (1980), The R-norm information measure, Information and Control 45, 136-155.
[8]. GYORFI, L. AND NEMETZ, T. (1975), On the dissimilarity of probability measures, in Proceedings Colloq. on Information Theory, Keszthely, Hungary.
[9]. HARDY, G.H., LITTLEWOOD, J.E. AND POLYA, G. (1973), Inequalities, Cambridge Univ. Press, London/New York.
[10]. HAVRDA, J. AND CHARVAT, F. (1967), Quantification method of classification processes: concept of structural a-entropy, Kybernetika 3, 30-35.
[11]. NATH, P. (1975), On a coding theorem connected with Renyi's entropy, Information and Control 29, 234-242.
[12]. SHISHA, O., Inequalities, Academic Press, New York.
[13]. RENYI, A. (1961), On measures of entropy and information, in Proceedings Fourth Berkeley Symp. Math. Statist. Probability, Vol. 1, pp. 547-561.
[14]. SINGH, R.P. (2008), Some coding theorems for weighted entropy of order …, Journal of Pure and Applied Mathematics Science, New Delhi.