

Multidomain shared protection with limited information via MPP and p-cycles

János Szigeti, László Gyarmati, and Tibor Cinkler*

Department of Telecommunications and Media Informatics, High Speed Networking Laboratory, Budapest University of Technology and Economics, H-1117 Budapest, Magyar Tudósok Körútja 2, Hungary

*Corresponding author: [email protected]

Received February 5, 2008; revised March 4, 2008; accepted March 4, 2008; published April 4, 2008 (Doc. ID 89256)

The Internet consists of a collection of more than 21,000 domains called autonomous systems operated mostly under different authorities (operators–providers) that, although they cooperate over different geographical areas, compete in a country or other area. Recently, the path computation element concept has been proposed for generalized multiprotocol label switching controlled optical borne networks to make routing decisions for interdomain connections taking into account traffic engineering, quality of service, and resilience considerations. Still the question of protection shareability emerges. For dedicated protection it is enough to know the topology of the network to be able to calculate disjoint paths. However, to reduce network resource usage by sharing of protection resources (e.g., end-to-end shared protection) it is also mandatory to know the exact working and protection path pairs for all the demands. This can be checked within a domain, where not only the full topology and link-state information is flooded but also the working and protection paths are known for each connection; however, over the domain boundaries no such information is spread, for security and scalability reasons. We propose two techniques that do not require flooding the information on working and protection paths while still allowing the sharing of resources. These two techniques are the multidomain p-cycles and the multidomain multipath routing with protection. After explaining the principles of these methods we evaluate the trade-off between the resource requirement and availability of these techniques by simulations. © 2008 Optical Society of America

OCIS codes: 000.1200, 060.0060.

1. Introduction

Nowadays almost all transport networks are based on optical transmission. The majority of them employ wavelength division multiplexing (WDM), which results in typical fiber capacities of 100 Gbits/s to 2 Tbits/s. WDM, and particularly dense WDM (DWDM), offer not only many parallel links within a fiber but also the capability to set up λ-paths between nodes that are not adjacent, making them adjacent in the virtual topology. The result is that over a sparse topology we build a dense λ-path system, i.e., a dense virtual topology.

The role of resilience is increasing: the services, and the capacities that these services use, have to be protected to survive failures, e.g., fiber cuts. However, there is always a trade-off between the availability to be guaranteed to these services on the one hand and the costs of guaranteeing this availability on the other hand. This cost consists of two parts: first, the network resources (e.g., link and node capacities) utilized for protection, often referred to as capital expenditure (CAPEX), and second, the complexity of employing these resilience strategies, including steady flooding of routing and state information and their processing, as well as the calculation of optimal working and protection paths. This is often referred to as operational expenditure (OPEX).

In practice, dedicated protection (1+1 or 1:1) is still the most widespread resilience approach due to its simplicity. However, dedicated protection itself always requires more transmission capacity than the working paths. The reason is that the working path is always the shortest, while the protection path is the next shortest one available, which should typically be disjoint from the working one. When considering the fact that protection resources are used very rarely and for a very short time only, using dedicated protection is a waste of resources.

The idea of shared protection is that at most one failure at a time is assumed; working paths that do not have any common element that can fail can then share resources allocated for their protection. This saves a significant amount of transmission capacity. There is only one problem. Namely, before we can decide what resources can be shared for protecting a new demand, we have to check all protection paths to determine whether their working paths have any common element. If they have, their capacities have to be summed up. This requires not only topology and link-state information to be flooded, maintained, and processed, but also information on all demands and their working and protection paths to be maintained. In a single domain operated by a single operator–provider this information can be exchanged; however, in a multiple operator–provider environment there are no adequate protocols yet. The scalability does not allow, nor are the operators willing to allow, access to their strategic information.
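As an illustration of this bookkeeping, the following minimal Python sketch (the data layout and function name are our own, not taken from the paper) computes the backup capacity that has to be reserved on a given link under the single-failure assumption: demands whose working paths share no element may share the reservation, otherwise their bandwidths add up.

from collections import defaultdict

def backup_capacity_needed(demands, backup_link):
    # demands: iterable of (working_links, backup_links, bandwidth) triples.
    # The reservation on backup_link is the worst case, over all single link
    # failures, of the summed bandwidth of demands that switch onto it.
    per_failure = defaultdict(int)
    for working, backup, bandwidth in demands:
        if backup_link not in backup:
            continue
        for failed_link in working:
            per_failure[failed_link] += bandwidth
    return max(per_failure.values(), default=0)

# Two demands with disjoint working paths share one unit on link "x";
# a third demand whose working path overlaps the first one cannot.
demands = [({"a", "b"}, {"x", "y"}, 1),
           ({"c", "d"}, {"x", "z"}, 1),
           ({"b", "e"}, {"x", "w"}, 1)]
print(backup_capacity_needed(demands, "x"))  # -> 2 (failure of link "b")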

In Section 2 we give an overview of problems related to multidomain networks. In Sections 3 and 4 we propose and explain the use of p-cycles and multipath protection (MPP) schemes that allow thrifty resource sharing, while not requiring any information except that on the topology and on the link states. Section 5 discusses the usability of these schemes in optical networks. Finally, in Section 6 we analyze the performance of the presented multidomain resilience schemes.

2. Challenges Posed by Having Multiple Domains

Practically all the networks consist of horizontally interconnected parts where these parts are defined for administrative or routing purposes. These domains are typically operated by different operators–providers. A thorough explanation of how routing works over this horizontal structure can be found in [1]. Analogously, not only the internet protocol (IP) networks have this structure but also the current and future optical borne multilayer networks.

Partitioned networks consisting of multiple domains have both advantages and drawbacks. The advantage is scalability: each node has to know everything about the domain it belongs to, but has only a simplified view of all the other domains. For this reason less information has to be flooded, processed, stored, and used while routing; therefore, scalability improves. On the other hand, the drawbacks are the inaccurate view of the topology as well as the lack of information for disjoint routing [2]. Also, in contrast to the border gateway protocol (BGP), which is used currently in IP architectures and which floods only reachability information but no link-state information, a routing protocol is required that can support quality of service (QoS) guarantees and meet traffic engineering (TE) and resilience objectives [2,3]. Therefore, extensions of BGP and of the private network to node interface (PNNI) as well as the path computation element (PCE) [4] have been proposed.

When assuming a multidomain environment we consider two levels: the lower one that is within each domain, and the upper one where each domain is represented as a node only or as a simplified graph with parameters characterizing the connections between its own border nodes, while the links that interconnect these domains play the main role [5].

In [6,7] the effects of the delay while flooding information and the period and trigger threshold for starting information flooding in multidomain networks are investigated. In [8] a game-theory-based approach is proposed to analyze what effects the pricing policy of certain operators has on the blocking and income of other operators in a multiple provider–operator environment. These papers, however, do not consider resilience.

In [9] the cases when different multidomain resilience strategies are to be employed are classified, [10] proposes end-to-end survivable connections in generalized multiprotocol label switching (GMPLS) networks by introducing the concept of backup gateways, and [11] applies shared backup path protection (SBPP) for the intradomain paths of the connections while the interdomain parts remain unprotected. Another way of sharing resources is the use of the p-cycle protection scheme, which can be applied even in a multidomain network [12], or the MPP scheme [13], which performs shared protection without knowing detailed information about the connections; thus it is suitable to protect interdomain connections efficiently. Here we discuss how to employ the latter two techniques on the two-level hierarchical network representation.

3. MDPC: Multidomain p-cycles

The concept of p-cycles [14] is that a set of links within the network that form a cycle can protect not only those links that are part of the cycle (on-cycle links), as bidirectional line-switched rings (BLSRs) do in synchronous optical networking (SONET), but also those that are not part of the cycle yet have both ends on the cycle (straddling links). Whenever an on-cycle link fails, the rest of the cycle serves as a backup path for the failed link. In the case of a straddling link failure the cycle offers two alternate backup paths between the ends of the failed link. Therefore the resources of the p-cycle can be shared by traffic routed over the on-cycle or over the straddling links of the cycle.
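The following short Python sketch (our own illustration, with assumed node and link naming) captures this classification: an on-cycle link is protected by the remainder of the cycle, while a straddling link is offered the two arcs of the cycle between its end nodes.

def classify_link(cycle, link):
    # cycle: list of nodes in cyclic order; link: (u, v) with u != v.
    # Returns (kind, backup_paths) according to the p-cycle rules above.
    u, v = link
    if u not in cycle or v not in cycle:
        return "unprotected", []
    i, j = sorted((cycle.index(u), cycle.index(v)))
    arc1 = cycle[i:j + 1]                    # one arc between the end nodes
    arc2 = cycle[j:] + cycle[:i + 1]         # the complementary arc
    if j - i == 1:                           # adjacent nodes on the cycle
        return "on-cycle", [arc2]            # rest of the cycle protects it
    if i == 0 and j == len(cycle) - 1:       # closing span of the cycle
        return "on-cycle", [arc1]
    return "straddling", [arc1, arc2]        # two alternate backup paths

cycle = ["A", "B", "C", "D", "E"]
print(classify_link(cycle, ("A", "B")))  # on-cycle, backup B-C-D-E-A
print(classify_link(cycle, ("B", "E")))  # straddling, arcs B-C-D-E and E-A-B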

p-cycles are also suitable to protect interdomain links in hierarchical multidomain networks [12]. The basic idea of the multidomain p-cycle (MDPC) is to represent the domains at the upper level both as a single node and as a full mesh aggregation [5]: the upper-level candidate p-cycle set is defined using the single-node abstraction of the domains. However, when choosing the most appropriate cycle to protect an interdomain link, the full mesh aggregation is used, where the virtual links of the aggregation carry information about either the availability or the setup costs, depending on whether the optimization goal is the availability or the total cost. The full mesh aggregation is also used to define those on-cycle or straddling border nodes (CBNs and SBNs), i.e., the ends of interdomain on-cycle and straddling links, that need to be connected internally by the domain to make the p-cycle continuous.
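A possible way to build such a full mesh aggregation for one domain is sketched below (an assumed illustration, not the paper's algorithm: virtual link weights are taken as shortest-path costs between border node pairs; availabilities could be aggregated in the same way).

import heapq

def shortest_cost(adj, src, dst):
    # Plain Dijkstra on an adjacency dict {node: [(neighbor, cost), ...]}.
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return float("inf")

def full_mesh_aggregation(adj, border_nodes):
    # Virtual link cost between every pair of border nodes of one domain.
    return {(a, b): shortest_cost(adj, a, b)
            for i, a in enumerate(border_nodes)
            for b in border_nodes[i + 1:]}

domain = {"b1": [("n1", 1)], "n1": [("b1", 1), ("b2", 2), ("b3", 1)],
          "b2": [("n1", 2)], "b3": [("n1", 1)]}
print(full_mesh_aggregation(domain, ["b1", "b2", "b3"]))
# {('b1', 'b2'): 3.0, ('b1', 'b3'): 2.0, ('b2', 'b3'): 3.0}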

Figure 1 shows where the backup paths are routed and what internal connections between CBNs or SBNs are activated in case of interdomain on-cycle and straddling link failures. p-cycles can protect 100% of the traffic against single (on-cycle or straddling) link failures. Besides, they show good performance even in multifailure scenarios [15].

Being a local span protection scheme, the p-cycle requires only topology and link-state (only free capacity) information to perform shared protection, i.e., to guarantee high availability with thrifty resource utilization. This upper-level interdomain protection scheme does not require all the routing information about the connections and can be combined with any lower-level strategy (e.g., dedicated 1+1, SBPP, or even p-cycle) to protect the intradomain parts of the connections.

Fig. 1. p-cycle protected interdomain links (on-cycle and straddling link cases).


4. MPP: Multipath Routing with Protection

Assuming single node aggregation at the upper level, we search for disjoint paths to be used for routing and simultaneously protecting a single demand along multiple paths. The idea of MPP is similar to threshold-based multi-label-switched-path shared protection [16]; however, in our case the demands are routed statically and MPP does not define any dedicated protection path. The concept of MPP differs from the MAXFLOW and MAXAVAL heuristics [17] in that MPP leads the routes along fully disjoint paths.

The objective of MPP is to find as many disjoint paths between the nodes as possible; both the bandwidth of the demand (W) and the bandwidth of the backup route (B) are divided equally among the paths. MPP is designed to protect against a single failure of a path: having n disjoint paths, each path i carries w_i = W/n working bandwidth; thus the total bandwidth to be protected (backup) is also B = W/n, and this is distributed among the remaining n−1 paths, resulting in b_i = B/(n−1) = W/(n(n−1)) backup capacity on path i. The total required bandwidth on path i is then w_i + b_i = W/n + W/(n(n−1)) = W/(n−1).

Figure 2 illustrates these calculations. If we assume that n = 2 paths are available to route a demand with a bandwidth requirement of W = 12 units [Fig. 2(a)], we do not gain at all: it requires as many resources [w_i = 12/2 = 6 for working and b_i = 12/(2·1) = 6 for protection, i.e., 12 units on each path] as dedicated protection. However, if we assume n = 3 paths [Fig. 2(b)], then fewer resources are required [w_i = 12/3 = 4 for working and b_i = 12/(3·2) = 2 for protection, i.e., a total of 12 working + 6 backup capacity], which is a significant reduction through internal sharing among these three paths. If we further increase the number of paths to n = 4 [Fig. 2(c)], we can further reduce the capacity requirements. If there are more than two disjoint paths available between two nodes, MPP can be calibrated to protect against multiple path failures [by b_i = XW/(n(n−X)), where X is the number of path failures to protect against]. The total capacity allocated for the demand and its protection, Σ_i (w_i + b_i) = nW/(n−X), converges to W as the number of paths increases.
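The formulas above can be checked with a few lines of Python (the function and argument names are our own; only the expressions come from the text).

def mpp_allocation(W, n, X=1):
    # Per-path working/backup bandwidth for a demand W over n disjoint paths,
    # protecting against X simultaneous path failures (X < n).
    w = W / n                      # working share on each path
    b = X * W / (n * (n - X))      # backup: lost X*W/n spread over n-X paths
    return w, b, n * (w + b)       # total allocated = n*W/(n-X)

for n in (2, 3, 4):
    w, b, total = mpp_allocation(12, n)
    print(f"n={n}: w_i={w:g}, b_i={b:g}, per path={w + b:g}, total={total:g}")
# n=2: w_i=6, b_i=6, per path=12, total=24  (no gain over dedicated protection)
# n=3: w_i=4, b_i=2, per path=6,  total=18  (12 working + 6 backup)
# n=4: w_i=3, b_i=1, per path=4,  total=16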

This is the ideal case, when each path has the same length. However, as the number of paths used grows, they become increasingly longer, and although less capacity per path is required, the total capacity requirement will first drop and after a while start increasing. The other problem is that introducing multiple paths will use more links that are all prone to failures, i.e., by increasing the number of paths the availability will decrease. The linear programming (LP) formulation of the problem is proposed with a detailed simulation analysis in [13].

Fig. 2. Illustration of the MPP problem: working + protection bandwidth allocations for two, three, and four disjoint paths.

5. Usability of MDPC and MPP in Optical Networks

The p-cycle protection, as it does not require a sophisticated switching architecture, can be applied at any level of a next-generation synchronous digital hierarchy (ng-SDH) or optical transport network (OTN). Practically, at the wavelength level we have to deal with the problem of wavelength continuity, but there are results on how to minimize the number of wavelength converters [18].

In a multidomain network, having an aggregated topology available at the upper level, the wavelength continuity problem is still a challenge. Either the aggregation and assignment of the p-cycles is done per wavelength, or the full mesh aggregation contains wavelength conversion capability information about the border nodes, which acts as a constraint when assigning a cycle to a given set of traffic.

MPP, however, splits the traffic into pieces, which is not supported by current all-optical nodes at the wavelength level. Therefore this scheme is rather usable either at a lower (e.g., fiber) or at a higher [time division multiplexing (TDM), packet switching] level. The latter solution requires devices that are capable of optoelectronic conversion and can perform grooming, virtual concatenation (VCat), which supports inverse multiplexing of high-bit-rate traffic streams into multiple lower-bit-rate streams, or even packet level switching.

6. Numerical Results

We show simulation results to confirm the benefits of the presented multidomain resilience schemes. They are compared with routing without protection and with dedicated protection, which does not share resources.

The two metrics obtained from the simulation for each connection are the availability and the resource consumption. We did not intend to deal with blocked connections; thus there is no capacity constraint set on the links of the network. Neither is a lower bound defined for the availability of the connections.

The simulations were run on the e1net [19] Pan-European multidomain network (Fig. 3). The network consists of 205 nodes and 384 links in 17 domains. Using this network we defined 3000 simultaneous traffic demands with a total of 9065 bandwidth units to deliver.

Fig. 3. Topology of the e1net Pan-European reference network.

6.A. Compared Resilience Strategies

First, we wanted to know what performance the resilience strategies could obtain if there were no domain boundaries, i.e., when the whole topology formed a single huge domain. On this topology we applied four different strategies:

• No is no protection, where the routes are unprotected. The resource consumption and the availability metrics of this strategy are used as a reference for the other strategies.

• DPP is dedicated 1+1 path protection. In our simulations, as we consider only link failures, the default and the backup path are link disjoint; node disjointness, however, is not required.

• PC is p-cycle protection. For assigning protection to the traffic we applied the CIDA algorithm [20].

• MPP is multipath protection.

Note, however, that the results corresponding to these strategies are hypothetical in the sense that these strategies are unfeasible in a multidomain environment, because the required topology and failure information is not shared among the domains.

Also note that the presented multidomain resilience schemes were defined to protect primarily the interdomain links. Within the domains, however, the bypassing segments of the interdomain connections must also be protected. For that reason, on the original, multidomain topology we have defined the following strategies, which combine interdomain and intradomain resilience schemes using the XX+YY notation, where XX stands for the interdomain and YY for the intradomain protection scheme:

• PC+DPP protects the interdomain links with a p-cycle, combined with dedicated 1+1 internal protection.

• PC+PC uses p-cycle protection for both interdomain and intradomain links.

• MPP+DPP: between the source and the target domain, multipath protection is applied, while internally the domains protect the connection bypasses with dedicated 1+1 protection.

• MPP+MPP is almost the same as MPP+DPP; however, internally the domains apply MPP as well.

To compare these strategies we assumed a homogeneous environment, meaning that during the simulation the same resilience scheme is applied to each connection. Thus, having eight types of strategies (four hypothetical and four combined inter- and intradomain protections) means eight runs of the simulation, each assuming a different resilience strategy.

6.B. Resource Requirement

Figure 4 shows the resource consumption of the different strategies. Two things can be seen clearly. First, the all-in-one-domain approaches would require considerably less capacity than the combined ones. Second, whereas the single-domain p-cycle scheme (PC) requires much more capacity than the MPP scheme, in the multidomain environment the MPP-based strategies need more resources than the p-cycle.

Fig. 4. Resource consumption (in resource units) of the different resilience strategies.

The reason for this behavior of the p-cycle is that, having all the resources controlled within a single domain, the resource optimization of the p-cycle works better than in a multidomain environment; besides, even though the upper-level p-cycles use both interdomain and intradomain links, their task is to protect only the interdomain links, and they bypass the intradomain links without the possibility of protecting them. Whenever an MPP path enters a domain, it gains a new branching opportunity for the connection, leading to higher resource consumption in the multidomain environment. The fact that the network is decentralized and organized into domains gives flexibility to MPP, resulting in more versatile solutions with higher connection availability (as we will see in Subsection 6.C), whereas this structure is rather a topological constraint for the p-cycle.

6.C. Provided Availability

To denote the availability of a connection we use a simple probability metric A in the range A ∈ [0,1], where 1 means that the connection is always operational, while 0 means that it is always down. The connection availability can be derived from the link availability metrics along the path [21]. Two links or, more generally, components denoted by l1 and l2, having basic availabilities A1 and A2 and connected in series, result in an aggregate availability A_S = A1·A2. If l1 and l2 are connected in parallel, the availability of the compound is A_P = 1 − (1−A1)(1−A2) = A1 + A2 − A1·A2. However, the accurate availability of the connections cannot be calculated as a structure of serially and parallelly connected components, either in the case of p-cycles or in the case of MPP.
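The two rules translate directly into a pair of Python helpers (a minimal sketch; the example availability values are arbitrary).

from functools import reduce

def series(availabilities):
    # All components must work: A_S = A_1 * A_2 * ...
    return reduce(lambda a, b: a * b, availabilities, 1.0)

def parallel(availabilities):
    # At least one component must work: A_P = 1 - prod(1 - A_i).
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), availabilities, 1.0)

# A two-link working path protected end to end by a two-link backup path:
working = series([0.999, 0.998])
backup = series([0.997, 0.999])
print(parallel([working, backup]))  # availability of the protected connection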

A p-cycle protected connection can be treated as a series of parallel segments: each link of the working path is in parallel with its protecting cycle (which is a sequence of links). However, the problem is that the p-cycles assigned to the spans of the default route are not link disjoint, and the shared links should be handled as key elements [22] to get the exact availability metric of the connection. Unfortunately, the key element calculations have a complexity of O(2^n). Still, the inaccuracy of the serial–parallel calculation model is negligible in most cases [15]; thus for evaluating the availability of the p-cycle protected connections we use the serial–parallel model, knowing that the obtained results for the unavailability may have an error of about 10%.

Estimating the availability of an MPP connection is more difficult. MPP is designed to protect against single failures, and basically the availability of an MPP connection can be calculated as the end-to-end availability of a shared protection [23]. However, in the case of multiple simultaneous failures affecting more than one path of the connection, there may still be some operational paths able to deliver a portion of the traffic between the end nodes [in general, an m-path protection with n failed paths is able to carry an (m−n)/(m−1) portion of the traffic; e.g., assuming that two paths fail in a three-path connection, just 50% of the traffic is lost, whereas in a four-path connection only 33% is lost]. To be fair, we assumed that in such cases the unavailability of the connection is proportional to the traffic loss ratio. To obtain the availability of an MPP connection we enumerated all the multiple failure scenarios and derived a weighted average of the traffic loss.
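A sketch of this enumeration, assuming the per-path availabilities are already known (the values below are illustrative only), weights every failure pattern of the m paths by the fraction of traffic it still delivers.

from itertools import product

def mpp_availability(path_avail):
    m = len(path_avail)
    expected_delivered = 0.0
    for state in product([True, False], repeat=m):   # True = path is up
        prob = 1.0
        for up, A in zip(state, path_avail):
            prob *= A if up else (1.0 - A)
        failed = state.count(False)
        # Fraction of traffic still delivered with `failed` paths down.
        carried = 1.0 if failed == 0 else max(0.0, (m - failed) / (m - 1))
        expected_delivered += prob * carried
    return expected_delivered                         # 1 - weighted traffic loss

print(mpp_availability([0.999, 0.998, 0.999]))        # three-path example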

The link availability depends mostly on two things: how often failures happen (characterized by CC, the average cable length suffering one failure per year) and how long it takes to repair them [mean time to repair (MTTR)]. In our simulations we denote the invariant MTTR/(CC·365·24) as the link failure coefficient (LFC), meaning the probability of a 1 km long cable being in the failed state. Its value is taken, in line with the values shown in [24], from the wide range [3×10^−7, 3×10^−5], covering optimistic, nominal, and conservative expectations, and the availability of link i is calculated as A_i = 1 − LFC·l_i, where l_i is the length of link i.
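In code, the link model reads as follows (the MTTR and CC values are assumed inputs chosen to land near the nominal LFC, not data from the paper).

def link_failure_coefficient(mttr_hours, cc_km):
    # Probability that a 1 km cable section is down: MTTR / (CC * 365 * 24).
    return mttr_hours / (cc_km * 365 * 24)

def link_availability(length_km, lfc):
    return 1.0 - lfc * length_km

lfc = link_failure_coefficient(mttr_hours=12, cc_km=450)  # assumed inputs
print(lfc, link_availability(200, lfc))                   # ~3e-6, 200 km link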

Figure 5 shows the average unavailability of connections protected by the different resilience schemes. The single-domain strategies perform roughly equally. One can see that low LFCs, i.e., fast repair times (or reliable cables), provide the highest availability; in the case of high LFC values, however, the multidomain MPP solutions are advised.

Our expectation was that the MPP strategies are always better than the single-domain solutions. Why do the simulations not confirm that expectation? The answer lies in the topology of the e1net. If we take a look at the tail behavior of the strategies (Fig. 6), we can observe the problem from another point of view. The figure shows how many connections (y axis) have a higher availability metric than minimally required (x axis). These results were taken at LFC = 3×10^−6. On the left-hand side of the figure we can see how many connections have an availability of at least 0.9998. Note that this is a relatively low requirement and all the protected connections fulfill it. Still we see that in the case of MPP+DPP and MPP+MPP only about 95% of the connections have higher availability than required. In the case of PC+DPP the problem is even worse (and with no protection none of the connections reaches that availability).

Those connections that do not comply with that minimum are those that are not protected. Unfortunately, the e1net topology has some border nodes that are connected to the rest of their domain by a single link. Whenever an interdomain connection uses such a border node, neither DPP nor MPP can be applied within the domain.

Furthermore, the tail behavior shows unambiguously the dominance of the MPP-based interdomain resilience schemes: the curves show that these strategies provide the highest availability to most of the connections.

If we separate the connections that do not suffer from topological constraints from those that do, Fig. 7 shows that the difference in the average availability is significant when we apply MPP (or, internally, DPP); if there were no topological problems, the five-nines availability requirement (corresponding to an unavailability of 10^−5) would be easy to achieve using MPP. The figure also shows that the PC+PC strategy, and generally all strategies that use p-cycles internally, are much more resistant to topological problems.

Fig. 5. Average unavailability of connections (average connection unavailability versus LFC for each strategy).

Fig. 6. Tail behavior of the different resilience strategies (ratio of connections fulfilling a given minimum availability requirement versus that requirement).


7. Conclusion

Coming back to the title of this paper, we can state that resources can be shared for multidomain protection even if only limited (aggregated) information is available. Although operators will never want to share with their competitors the strategic and confidential information needed for shared path protection, based only on aggregated views of the topology and on the advertised free capacities of this aggregated topology we can still share protection resources, using the self-resource-sharing p-cycle or the internally resource-shared MPP technique at the interdomain level of the network. Combining these interdomain strategies with intradomain protection, we can offer the connections nearly the same availability as if the whole network were a single domain without boundaries. In the multidomain environment the MPP-based strategies, due to the increased flexibility of branching, provide even higher availability for those connections that are not hampered by topological constraints.

Acknowledgments

This work has been supported by the EC within the IST FP6 by the IP NOBEL II (www.ist-nobel.org) and by the NoE e-Photon/ONe+ (www.e-photon-one.org) research framework.

References

1. B. Halabi and D. McPherson, Internet Routing Architectures, 2nd ed. (Cisco, 2000).

2. M. Yannuzzi, X. Masip-Bruin, S. Sánchez, J. Domingo-Pascual, A. Orda, and A. Sprintson, "On the challenges of establishing disjoint QoS IP/MPLS paths across multiple domains," IEEE Commun. Mag. 44(12), 60–66 (2006).

3. X. Masip-Bruin, M. Yannuzzi, R. Serral-Gràcia, J. Domingo-Pascual, J. Enríquez-Gabeiras, M. Callejo, M. Diaz, F. Racaru, G. Stea, E. Mingozzi, A. Beben, W. Burakowski, E. Monteiro, and L. Cordeiro, "The EuQoS system: a solution for QoS routing in heterogeneous networks," IEEE Commun. Mag. 45(2), 96–103 (2007).

4. A. Farrel, J.-P. Vasseur, and J. Ash, "A path computation element (PCE)-based architecture," IETF RFC 4655 (Internet Engineering Task Force, 2006), www.ietf.org/rfc/rfc4655.txt.

5. Q. Liu, M. Kok, N. Ghani, and A. Gumaste, "Hierarchical routing in multi-domain optical networks," Comput. Commun. 30, 122–131 (2006).

6. J. Szigeti, J. Tapolcai, T. Cinkler, T. Henk, and G. Sallai, "Stalled information based routing in multidomain multilayer networks," in Proceedings of the 11th International Telecommunication Network Planning Symposium (Networks 2004) (2004).

7. J. Szigeti, I. Ballók, and T. Cinkler, "Efficiency of information update strategies for automatically switched multi-domain optical networks," in Proceedings of the IEEE 7th International Conference on Transparent Optical Networks (ICTON 2005) (IEEE, 2005), pp. 445–454.

8. K. Lója, J. Szigeti, and T. Cinkler, "Inter-domain routing in multi-provider optical networks: game theory and simulations," in Proceedings of the First Conference on Traffic Engineering for the Next Generation Internet (EuroNGI) (IEEE, 2005), pp. 157–164.

9. D. Larrabeiti, R. Romeral, I. Soto, M. Urueña, T. Cinkler, J. Szigeti, and J. Tapolcai, "Multi-domain issues of resilience," in Proceedings of the IEEE 7th International Conference on Transparent Optical Networks (ICTON 2005) (IEEE, 2005), pp. 375–380.

10. R. Romeral, D. Staessens, D. Larrabeiti, M. Pickavet, and P. Demeester, "End-to-end survivable connections in multi-domain GMPLS networks," in Proceedings of the VI Workshop in G/MPLS Networks (WGN6) (2007), pp. 75–84.

Fig. 7. Multidomain resilience strategies decoupled by topological problems (average connection unavailability for connections with and without topology problems, and for all connections).


11. L. Guo, "LSSP: a novel local segment-shared protection for multi-domain optical mesh networks," Comput. Commun. 30, 1794–1801 (2007).

12. A. Farkas, J. Szigeti, and T. Cinkler, "p-cycle based protection scheme for multi-domain networks," in Proceedings of the 5th International Workshop on Design of Reliable Communication Networks (DRCN 2005) (2005), p. 8.

13. T. Cinkler and L. Gyarmati, "MPP: optimal multi-path routing with protection," in Proceedings of the IEEE International Conference on Communications (ICC 2008), Communications QoS, Reliability, and Performance Modeling Symposium (IEEE, to be published).

14. W. D. Grover and D. Stamatelakis, "Cycle-oriented distributed preconfiguration: ring-like speed with mesh-like capacity for self-planning network restoration," in Proceedings of the IEEE International Conference on Communications (ICC 1998) (IEEE, 1998), pp. 537–543.

15. J. Szigeti and T. Cinkler, "Incremental availability evaluation model for p-cycle protected connections," in Proceedings of the 6th International Workshop on the Design of Reliable Communication Networks (DRCN 2007) (2007), TAM 1.1.

16. T. Anjali and C. Scoglio, "A novel method for QoS provisioning with protection in GMPLS networks," Comput. Commun. 29, 757–764 (2006).

17. S. Rai, O. Deshpande, C. Ou, and B. Mukherjee, "Reliable multi-path provisioning for high-capacity optical backbone mesh networks," in Proceedings of the IEEE International Conference on Communications (ICC 2005) (IEEE, 2005), pp. 1741–1745.

18. T. Li and B. Wang, "Minimizing wavelength-conversion costs in WDM optical networks with p-cycle-based protection," Photonic Network Commun. 3, 769–786 (2004).

19. D. Meskó, G. Viola, and T. Cinkler, "A hierarchical and a non-hierarchical European multi-domain reference network: routing and protection," in 12th International Telecommunications Network Strategy and Planning Symposium (NETWORKS 2006) (IEEE, 2006), pp. 1–5.

20. J. Doucette, D. He, W. D. Grover, and O. Yang, "Algorithmic approaches for efficient enumeration of candidate p-cycles and capacitated p-cycle network design," in Proceedings of the 4th International Workshop on Design of Reliable Communication Networks (DRCN 2003) (IEEE, 2003), pp. 212–220.

21. B. S. Dhillon, Reliability in Computer System Design (Ablex, 1987).

22. D. L. Grosh, A Primer of Reliability Theory (Wiley, 1989).

23. J. Zhang, K. Zhu, H. Zang, and B. Mukherjee, "A new provisioning framework to provide availability-guaranteed service in WDM mesh networks," in Proceedings of the IEEE International Conference on Communications (ICC 2003) (IEEE, 2003), pp. 1484–1488.

24. S. Verbrugge, D. Colle, P. Demeester, R. Huelsermann, and M. Jaeger, "General availability model for multilayer transport network," in Proceedings of the 5th International Workshop on the Design of Reliable Communication Networks (DRCN 2005) (IEEE, 2005), p. 8.