
network architecture

Bridging the network generation gap

Brian I Leigh considers corporate data networking evolution, and describes an architecture designed to smooth transitional problems

In common with other advanced technologies in widespread, popular use, corporate data networking tends to be subject to conflicting pressures. On the one hand are the seemingly ever-accelerating rates of technological obsolescence and innovation, on the other the day-to-day business constraints of economic and operational acceptability. Increasingly, the watchwords are 'open systems', 'evolution' and 'network management'. Some of the issues involved are investigated, and an architecture designed to alleviate the worst effects of the conflict is described.

Keywords: corporate data networking, network management, open systems, network architecture

The accelerating pace of technological development and change is an all-pervasive aspect of modern life. However, despite the general rise in the qualitative and utilitarian standards of products and services, technological change can also tend to have disruptive effects on existing systems. The introduction of new technologies in fact demands the adoption of imaginative strategies, aimed at deriving the maximum economic and operational benefits. Such benefits should at least be on a par with those already derived from equivalent, existing products and services. In particular, there should be no loss or degradation of existing services offered during the critical changeover period.

GEC-Plessey Telecommunications, Foundation Park, Roxborough Way, Maidenhead, Berks SL6 3UD, UK. Paper received: 8 March 1990

The ramifications of technological change are no less pertinent to the world of data communications than to other fields of endeavour. Data communications networks nowadays often represent crucial corporate assets, of vital importance to the successful conduct of daily business. Where this is the case, it becomes essential for network administrators to avoid potentially disastrous upheavals to their networks. Such upheavals may be occasioned by an uncoordinated approach to network upgrades, possibly exacerbated by a lack of high quality network support tools, aids and products designed to ease installation, configuration and maintenance tasks.

This paper describes a data communications architecture specifically oriented towards addressing these issues, by approaching the engineering and business aspects of technological innovation in an evolutionary manner.

The architecture is discussed and tested against the following key criteria, considered fundamental to the success of any such evolutionary strategy [1]:

• Protection of existing investment.
• Technological 'future-proofing'.
• Control over cost of network ownership.

PROTECTION OF EXISTING INVESTMENT

Of fundamental importance in an evolving environment is the ability to fully amortize existing hardware and software investment, while introducing clearly-defined evolutionary paths to the 'new' technologies.

Technology 'for its own sake' may be the secret yearning of many an engineer, but it is in general unlikely to represent the basis of a realistic business strategy. The data communications network must in fact be considered in a similar light to other capitalised fixed assets, viz. as a business resource that is expected to achieve appropriate return on investment over its planned lifetime.

The historical approach to corporate networks has been based largely on proprietary solutions, dictated in the main by the preselection of a central data processing equipment supplier. This is the inevitable situation created by the lack of 'open' standards for data communications. Thus, 'IBM shops' are traditionally served by SAA/SNA, 'DEC shops' by DNA/DECnet, 'ICL shops' by C-03, etc.

However, with the advent of Open Systems Interconnection (OSI) standards, and the growing OSI product market, many companies are now turning to open, multi-vendor solutions. The potential benefits are clear: more competitive tendering, an increased ability to pick and choose a mix of vendor equipment throughout the network life-cycle, and in consequence a lowering of lifetime costs. Equipment may be specifically selected to suit particular processing tasks, such as general administrative or engineering support functions. However, the real 'icing on the cake' which OSI is intended to facilitate is the convergence towards a situation allowing disparate vendor equipment to be networked together in an integrated, homogeneous manner. An environment may thus be created which can positively improve business efficiency and effectiveness, both as regards internal (inter-departmental) and external (inter-company) communications.

Thus, the initial challenge is to allow the continued use of existing processing and proprietary communications resources for as long as required, while introducing the capability to commence a buildup to an OSI, open systems environment. A convenient way of achieving this is to create an environment which effectively isolates the proprietary and OSI aspects of the communications infrastructure in a modular fashion. They may then cohabit the same physical network, share common user access mechanisms and network management functions, but remain mutually independent.

Two classes of device within the architecture are specifically intended to facilitate such an environment: the Network Access Point (NAP), and gateways and interworking devices.

Network Access Point

The NAP is a network access layer device located on the periphery of the (X.25) backbone network, and designed to support the requisite terminal and host access functionality. The NAP is highly configurable, offering simultaneous support for both proprietary and ISO/OSI services locally, on a 'mix and match' basis, via discrete plug-in cards supporting the following (an illustrative configuration sketch appears after the list):

• IBM 3274 emulation and protocol conversion, providing 3270 access over X.25 networks for attached asynchronous ASCII terminals;

• IBM SNA/BSC emulation, for IBM host or cluster controller attachment to X.25 networks;

• ICL ICAB-02 emulation, enabling ICL host access over X.25 networks for attached asynchronous ASCII terminals;

• ICL ICAW-02 emulation, enabling ICL host access over X.25 networks to attached printers;

• ICL C-03 emulation for ICL host or cluster controller attachment to X.25 networks;

• Triple-X PAD functions, for asynchronous ASCII terminal and host attachment to X.25 networks;

• OSI Virtual Terminal (VT) support;

• X.500 Directory User Agent (DUA) services;

• X.400 User Agent (UA) services.
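To make the 'mix and match' idea concrete, the following Python sketch models a NAP chassis whose plug-in cards are drawn from the proprietary and OSI service types listed above. The card identifiers, slot count and validation rule are invented for illustration only; they do not describe the product's actual configuration mechanism.

# Illustrative NAP configuration sketch: card identifiers, slot counts and the
# validation rule are invented; only the service categories come from the text.
from dataclasses import dataclass

PROPRIETARY_CARDS = {"IBM_3274", "IBM_SNA_BSC", "ICL_ICAB_02", "ICL_ICAW_02", "ICL_C_03", "XXX_PAD"}
OSI_CARDS = {"OSI_VT", "X500_DUA", "X400_UA"}

@dataclass
class NapConfiguration:
    slots: int              # number of physical plug-in positions in the NAP
    cards: list[str]        # card fitted in each occupied slot

    def validate(self) -> None:
        if len(self.cards) > self.slots:
            raise ValueError("more cards than available slots")
        unknown = [c for c in self.cards if c not in PROPRIETARY_CARDS | OSI_CARDS]
        if unknown:
            raise ValueError(f"unsupported card types: {unknown}")

    def summary(self) -> str:
        proprietary = sum(c in PROPRIETARY_CARDS for c in self.cards)
        osi = sum(c in OSI_CARDS for c in self.cards)
        return f"{proprietary} proprietary and {osi} OSI service cards fitted"

# A single NAP serving legacy 3270 terminals alongside OSI VT and X.400 access.
nap = NapConfiguration(slots=8, cards=["IBM_3274", "XXX_PAD", "OSI_VT", "X400_UA"])
nap.validate()
print(nap.summary())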

Gateways and interworking devices

Interworking between current and future networks and technologies with differing characteristics is facilitated by a range of gateway and interworking devices. A proliferation of such devices is in general to be discouraged, due to the particular problems they tend to introduce with respect to network performance and maintenance.

However, their judicious use can significantly ease evolution towards new technologies, by confining interworking issues to well-defined, controllable locations. Gateways and interworking devices may be provided as required, for example, between:

• X.25 and PSDNs;
• X.25 and X.75;
• X.25 and LANs (of various varieties);
• X.25 and ISDN;
• proprietary and OSI network management;
• proprietary and X.400 electronic mail.

TECHNOLOGICAL 'FUTURE-PROOFING'

The technology and standards upon which the evolutionary strategy is based should not preclude the support of any current or projected technologies and standards of potential relevance.

Ideally, the aim should be to avoid potential future 'dead-ends' or 'brick-walls', which may impede future expansion or diversification. Technologies and standards should be selected that are consistent with modern networking practices. Of course, the pace and direction of technological change are not always easy to predict. Certain emerging technologies and standards, not yet sufficiently stable or proven for immediate adoption, may nonetheless be considered as future possible evolutionary targets. In such cases, the strategy should be such as not to preclude their subsequent introduction, in a manner non-disruptive of the existing network and its associated services.

Of prime concern in adopting 'future-proof' technologies are the standards selected, in particular for the main backbone network, and the ability to interwork with other network technologies.

Backbone network standard

X.25 is the ubiquitous international packet switched data networking standard, with little or no serious competition. Current arguments, casting doubts on the place of X.25 in the future integrated voice and data world of Integrated Services Digital Networks (ISDN), are by no means cut-and-dried. ISDN is still very much in its infancy, and its eventual success and takeup are by no means assured. Even if ISDN proves to be a roaring success, packet data is a reality that will still need to be accommodated. The significant investment in X.25 networking, coupled with its widespread use in both the public and private sectors, assures its future for some considerable time to come.

Central servers

Central servers provide both OSI and non-OSI application layer services, accessible via the NAP as embedded network resources. The use of central servers encourages modularity in network design, and facilitates the relatively painless future introduction of new services. Servers support:

• X.500 directory facilities, providing both directory user agent (DUA) and directory service agent (DSA) services;

• X.400 electronic mail facilities, providing user agent (UA), message transfer agent (MTA) and message store (MS) services;

• FTAM file transfer and housekeeping facilities;

• general user database and filestore facilities.

NETWORK OWNERSHIP COST CONTROL

The evolutionary strategy should address effectively the most significant areas affecting the totality of costs inherent in network ownership.

Recent studies have indicated corporate network ownership costs to have the profile shown in Figure 1 [2].

As indicated, some 64% of the total costs of ownership are incurred in the five years after acquisition. The most sensitive costs are those relating to hardware equipment and personnel, representing some 59% of overall five-year costs.

Figure 1. Cost of network ownership over five years (acquisition versus operational costs, broken down by equipment, software, personnel, communications lines and facilities)

Cost of ownership issues are addressed within the architecture in a number of ways.

Modular architecture

The ability to establish cost-effective, entry-level pilot solutions, fully reusable in subsequent modular expansion and upgrade paths, offers a controllable, step-wise approach to network development and cost planning.

Despite detailed research, it may be difficult to be certain of the network capacity required. This could arise due to unforeseen changes in work-methods, suddenly made possible by the introduction of the new network facilities. The actual takeup by active users and the use they make of the network may not accurately reflect research predictions. A pilot network is useful in such circumstances, so as to be better able to gauge real demand and acceptance of the new services, or simply as a trial-run to encourage feedback and suggestions for improvements or modifications. Similarly, the expectation would normally be for the network to be able to grow in step with general corporate growth. This should be achieved without any measurable degradation in quality of service, e.g. terminal response times or network round trip delays.

Packet switch exchanges (PSE)

To achieve these requirements, X.25 PSE devices within the architecture cover a wide performance and expansion range:

• low speed PSEs offering up to 150 packets/s throughput, expandable from 4 to 36 ports at up to 64 Kbit/s;
• mid-range PSEs offering up to 750 packets/s throughput, expandable from 4 to 60 ports at up to 64 Kbit/s;
• high performance PSEs offering up to 3000 packets/s throughput, expandable from 8 to 112 ports at up to 2 Mbit/s.

This wide range facilitates great flexibility in establishing and extending the X.25 bearer network, whether for pilot solutions or full-scale live operation. Networks may be specially designed and configured to support the required traffic profile, in terms of packet switching and call setup capacities. Upgrade paths to meet future demand can be easily and cost-effectively accommodated without compromise to established services.
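As a rough illustration of the sizing exercise implied above, the following Python sketch picks the smallest PSE class, from the three quoted in the list, that satisfies a given traffic profile. The selection rule is a simplification invented for the example, not the vendor's network design procedure.

# PSE classes as quoted above: (name, max packets/s, max ports, max line speed kbit/s).
PSE_CLASSES = [
    ("low speed",        150,   36,   64),
    ("mid-range",        750,   60,   64),
    ("high performance", 3000, 112, 2048),
]

def select_pse(packets_per_s: int, ports: int, line_kbit_s: int) -> str:
    """Return the smallest PSE class meeting the profile, or raise if none does."""
    for name, max_pps, max_ports, max_speed in PSE_CLASSES:
        if packets_per_s <= max_pps and ports <= max_ports and line_kbit_s <= max_speed:
            return name
    raise ValueError("profile exceeds a single PSE; split the load across several nodes")

# Example: a site needing 500 packets/s over 20 ports at 64 kbit/s -> 'mid-range'.
print(select_pse(packets_per_s=500, ports=20, line_kbit_s=64))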

The ability to engage in such modular growth allows financial and resource management to be conducted in a controlled manner, if the availability of adequate spare expansion capacity within existing equipment can be predicted with some certainty. Unavoidable step-function characteristics within upgrade costs can also be better anticipated, e.g. when additional chassis and power supplies are required to house extra line cards.


PSEs are designed, moreover, with the minimization of support and maintenance overheads very much in mind. Such features as Live System Board Replacement and Live System Power Supply Replacement enable swapouts to be carried out with minimal disturbance to the operational system, reducing average service visit times and mitigating the business repercussions of even partial system outages.

Network management

Figure 2. NMC subsystems

A highly effective means of controlling network costs is through the use of comprehensive network management facilities. These provide the capability to exercise effective and efficient operational control and monitoring from a resilient network management facility.

In particular, the eventual adoption of the emerging OSI network management standards should introduce the ability to treat disparate network devices in a generalized and unambiguous fashion, and to exercise control over them in the true OSI spirit of multi- vendor 'openness'. This should reap significant benefits for the network administrator, through increased procurement flexibility and reduced operating costs.

Increasingly, the network management requirement is perceived to be applicable to all devices in the network, regardless of capacity, service type (e.g. data, voice) or supplier origin. In addition, it is expected to interact meaningfully with peer systems within other networks, again regardless of supplier origin.

The availability, functionality and handling capacity of these facilities are configurable to requirements, ensuring a cost-effective solution in each case. This is facilitated by a Network Management Centre (NMC), comprising up to four distinct subsystems (see Figure 2), which may or may not be collocated within the same processing environment:

• a Physical Management Centre (PMC), for network control, monitoring, alarms handling and device configuration;

• a Name Server (NS), for directory lookup and access control on a per-call basis;

• a Billing Centre (BC), for accounts records collection and processing;

• a Statistics Management Centre (SMC), for device and call statistics collection and processing.

These NMC subsystems may be hosted on a range of processing environments, from PCs to powerful Unix workstations, each applicable to distinct marketing and technical requirements (a brief selection sketch follows the list):

• medium-to-high capacity requirements, or where advanced functionality is not to be compromised on grounds of cost;

• low-to-medium capacity requirements, or where cost-sensitivity can be satisfied in terms of more basic functionality.
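Purely as an illustration of the two requirement profiles above, a trivial selection rule might look as follows; the platform names, device count and threshold are assumptions for the sketch, not vendor specifications.

def choose_nmc_platform(managed_devices: int, cost_sensitive: bool) -> str:
    """Pick a hosting platform for the NMC subsystems from the two broad profiles."""
    if managed_devices > 50 and not cost_sensitive:
        return "unix_workstation"   # medium-to-high capacity, full functionality
    return "pc"                     # low-to-medium capacity, more basic functionality

print(choose_nmc_platform(managed_devices=120, cost_sensitive=False))   # unix_workstation
print(choose_nmc_platform(managed_devices=15, cost_sensitive=True))     # pc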

OVERALL NETWORK ARCHITECTURE

The overall general network architecture satisfying the above demands is illustrated in Figure 3.

As may be seen, the general configuration centres around an X.25 backbone network, comprising PSEs interconnected to achieve the required connectivity and capacity. NAPs and central servers attached at appropriate points on the periphery offer the requisite, customized end-user application support services. Gateways facilitate internetworking access to other networks, such as LANs, the PSDN or ISDN.

Resilient network management is provided by one or more intercommunicating Network Management Centres, controlling and monitoring the target network over dedicated or switched X.25 links, either directly or via LAN gateway connections.

Figure 3. Generalized network architecture (an X.25 network with PAD access, an NMC, gateways, central servers {X.400, FTAM, X.500}, and attached PSDN, ISDN, X.25, asynchronous and ICL C-03 hosts)

Due to the particularly important role that network management fulfils within the architecture, the remainder of this paper is devoted to describing this one facility in further detail.

NETWORK MANAGEMENT

Network management is crucial to the successful running of the network. At its best, network management equips the network administrator with the control and supervisory tools necessary to manage the network efficiently and cost-effectively on a day-to-day basis.

Operational management issues

The operational management issues of prime interest to network administrators may tend to vary according to the size and geographical spread of the installation. The following are considered representative of the most pertinent of these issues:

• NMC user interface;
• NMC loadsharing and resilience;
• NMC secure communications and database integrity;
• network modelling and generation;
• device configuration and control;
• device statistics collection;
• device status monitoring;
• device and network testing.

NMC user interface

The NMC user interface is designed to achieve a high degree of user friendliness. To this end, a high resolution colour graphics display is used in conjunction with a mouse input device (along with a standard keyboard). Standardized WIMP-style (Window, Icon, Menu, Pointer) techniques are adopted, including drop-down menus, multiple overlaid windows, etc. A typical display is shown in Figure 4, and comprises a menu bar at the top, a main window occupying the majority of the screen area, and an events indicator and text line at the bottom.

The main window displays a graphical representation of the whole or part of the network to which the NMC is connected. It may contain, or be divided into, smaller windows giving more details of some aspect of the network. Windows generally show only part of an underlying canvas which may be scrolled beneath the window. Selectable objects on the screen represent regions, trunks, devices, ports or links. The colour of an object generally indicates its status.

Incoming events attract the attention of the operator by one or more of the following means, selected on a priority threshold basis: an animated icon; a text message; a short buzzer; or a continuous buzzer. In addition to this WIMP-style interface, a more traditional command/response interface is also provided, to enable fast interaction by expert operators and access to the NMC by simpler, non-graphics terminals.

Figure 4. NMC typical screen display (menu bar; scrollable network map window showing sites such as Manchester, Liverpool, Oxford, Luton and Guildford; and an events line, e.g. 'Status change to DOWN Guildford 9 Dec 89 @ 11:43')

NMC loadsharing and resilience

To satisfy the management demands of large networks, comprising perhaps scores of PSEs and an even greater number of NAPs and third party devices, a single NMC (whether composed of distributed or collocated subsystems) may not deliver the requisite processing and networking bandwidth to provide a sufficiently reliable and responsive management mechanism. In addition, it is essential that the entire network should not become totally inoperable in the event of the failure of a single NMC computer or physical network link.

The combination of these factors results in a facility comprised in general of multiple, load-sharing instances of a resilient, fully-redundant NMC system architecture. Each NMC system may comprise one or more of the autonomous or collocated subsystems discussed above. As indicated in Figure 5, the global 'management domain' (controlled by the 'master' NMC system) is divided into 'management subdomains' (each controlled by a 'slave' NMC). Management domains or subdomains should be considered as fully-distributed processing environments, within which the network management function operates.

Figure 5. NMC loadsharing (a master NMC controls the global management domain, which is divided into management subdomains each controlled by a slave NMC)

In specific instances, however, there may be no requirement for load-sharing and/or resilience, e.g. for small and/or non-critical networks. Thus, the NMC is fully configurable with respect to these features, and is capable of normal operation independent of their presence.
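The master/slave domain structure can be sketched as a simple partitioning of managed devices among slave NMCs, with the master retaining the global view. The device names and the round-robin assignment rule below are invented for illustration and do not describe the product's actual algorithm.

from collections import defaultdict

def partition_into_subdomains(devices: list[str], slave_nmcs: list[str]) -> dict[str, list[str]]:
    """Assign each managed device to a slave NMC's management subdomain."""
    subdomains: dict[str, list[str]] = defaultdict(list)
    for i, device in enumerate(devices):
        subdomains[slave_nmcs[i % len(slave_nmcs)]].append(device)
    return dict(subdomains)

devices = [f"pse-{n}" for n in range(1, 7)] + [f"nap-{n}" for n in range(1, 5)]
subdomains = partition_into_subdomains(devices, ["slave-nmc-1", "slave-nmc-2", "slave-nmc-3"])

# The master NMC's view of the global management domain.
global_domain = {"master-nmc": subdomains}
for slave, members in subdomains.items():
    print(slave, "manages", members)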

NMC secure communications and database integrity

Networks are controlled and managed by NMC facilities generally remote from the target network devices. The network management database, or in OSI terms the Management Information Base (MIB), considered to comprise the full set of data describing all current network device configurations, alarm statuses, etc., must therefore be thought of as a distributed entity. Appropriate subelements of the MIB are located within managed network devices and slave NMCs, with the complete database held on the master NMC.

In such an environment it is essential that the NMC retains at all times an accurate, up-to-date 'picture' of the entire MIB, since it alone is responsible for its maintenance (establishment and modification) and for reporting on its contents. It is upon the accuracy and currency of this information, describing the disposition of each device within the network, that the operator relies to make sensible decisions and take appropriate and timely action. Unreliable information could result in the operator taking action which could at best be unnecessary, and at worst have potentially disastrous consequences on network operation.

These requirements are satisfied by a combination of secure, end-to-end communications via an appropriate transport service, and secure distributed database techniques.
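A minimal sketch of the consistency requirement follows, assuming a simple commit-on-acknowledgement rule as a stand-in for the 'secure distributed database techniques' mentioned above; the rule, names and transport stub are invented for illustration.

class MasterMib:
    """The master NMC's copy of the MIB: device id -> last confirmed configuration."""
    def __init__(self) -> None:
        self.entries: dict[str, dict] = {}

    def apply_change(self, device_id: str, new_config: dict, send) -> bool:
        """Send a configuration change and commit it locally only once acknowledged."""
        acknowledged = send(device_id, new_config)   # secure, end-to-end, acknowledged transfer
        if acknowledged:
            self.entries[device_id] = new_config     # master picture stays accurate and current
        return acknowledged

def reliable_send(device_id: str, config: dict) -> bool:
    # Stand-in for delivery over an appropriate transport service; always succeeds here.
    return True

mib = MasterMib()
mib.apply_change("pse-3", {"ports": 36, "alarm_threshold": 10}, reliable_send)
print(mib.entries)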

Network modelling and generation

Network topology, routing, traffic loading, node and link failure can have effects on network performance that are especially difficult to predict analytically, particularly when combinations of factors and large networks are involved.

Network modelling tools allow the definition and emulation of proposed networks via computerized models. Various factors, in isolation and in combination, may then be simulated and applied to the model and their effects measured. Network optimization may then proceed against a background of controlled experiment, thus minimizing the risks of potential catastrophe in the real, live environment.

Network modelling takes due account of the wide range of performance characteristics and functionality supported by network devices. To this end, appropriate algorithms are applied for achieving optimum network configurations. These algorithms are designed to ensure the smooth flow of traffic, i.e. no line or nodal flooding or unacceptable flow control delays, and to optimize the selection and usage of higher bandwidth lines and higher capacity PSEs.

Once experimentation with the network model has been completed successfully and to the network administrator's satisfaction, the generation of the real network can proceed. The network modelling tools automatically compile the requisite configuration tables for each network device, appropriate to the final network model.
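A toy example of the kind of check such a tool performs before generation is shown below: offered traffic flows are applied to a proposed topology and any link whose load exceeds its capacity is flagged. The topology, capacities, routes and figures are all invented; real modelling tools apply far richer algorithms.

from collections import defaultdict

# Link capacities in packets/s for a proposed three-node network.
capacity = {("A", "B"): 150, ("B", "C"): 150, ("A", "C"): 750}

# Offered traffic flows: (route as a list of nodes, packets/s).
flows = [(["A", "B", "C"], 100), (["A", "C"], 400)]

load: dict[tuple, int] = defaultdict(int)
for route, pps in flows:
    for hop in zip(route, route[1:]):        # accumulate load on every hop of the route
        load[tuple(sorted(hop))] += pps

for link, cap in capacity.items():
    used = load[tuple(sorted(link))]
    verdict = "OK" if used <= cap else "OVERLOADED: reroute or select a higher bandwidth line"
    print(f"link {link}: {used}/{cap} packets/s -> {verdict}")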

Device configuration and control

The NMC offers comprehensive facilities for the configuration and control of network devices. These include the following (a small command sketch follows the list):

• configuration table download;
• device executable code download;
• set alarm threshold levels;
• initiate device or network testing;
• startup/closedown device or link;
• interrogate device or link status;
• collect device traffic and billing statistics.
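The facilities above can be pictured as a small command set issued from the NMC to a target device; the command names, request format and example parameters below are invented for illustration and are not the product's management protocol.

from enum import Enum, auto

class NmcCommand(Enum):
    DOWNLOAD_CONFIG_TABLE = auto()
    DOWNLOAD_EXECUTABLE = auto()
    SET_ALARM_THRESHOLD = auto()
    INITIATE_TEST = auto()
    START_OR_CLOSE_DOWN = auto()
    INTERROGATE_STATUS = auto()
    COLLECT_STATISTICS = auto()

def send_command(device_id: str, command: NmcCommand, **params) -> dict:
    """Package a control command for dispatch from the NMC to a network device."""
    request = {"device": device_id, "command": command.name, "params": params}
    print("sending", request)      # a real NMC would hand this to a secure transport
    return request

send_command("nap-12", NmcCommand.SET_ALARM_THRESHOLD, alarm="crc_errors", threshold=50)
send_command("pse-3", NmcCommand.INTERROGATE_STATUS)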

Device statistics collection

The judicious collection, analysis and reporting of device statistics such as totals of switched packets, protocol timeouts, error frequencies, etc., can be invaluable for network administrators, aiding activities such as the following (a simple trend-analysis sketch follows the list):

• traffic analysis, to assess the effectiveness of current network topology and routing algorithms;

• deduction of future trends in traffic patterns, and prediction of network upgrade requirements;

• assistance in fault location;
• prediction and preemptive maintenance or replacement of failing links or components;

• assessment and improvement of operator procedures and effectiveness.
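A simple trend-analysis sketch along the lines of the second activity above: weekly peak packet rates are fitted with a straight line to estimate when a PSE will reach its rated throughput. The least-squares fit and all figures are assumptions for the example.

def weeks_until_capacity(weekly_peak_pps: list[float], capacity_pps: float) -> float:
    """Fit a straight line to weekly peak packet rates and extrapolate to capacity."""
    n = len(weekly_peak_pps)
    mean_x = (n - 1) / 2
    mean_y = sum(weekly_peak_pps) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(weekly_peak_pps)) / \
            sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return float("inf")        # traffic flat or falling: no upgrade indicated
    return (capacity_pps - weekly_peak_pps[-1]) / slope

# Six weekly peaks measured on a mid-range PSE rated at 750 packets/s.
print(round(weeks_until_capacity([510, 530, 545, 570, 590, 610], 750), 1), "weeks to capacity")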

Device status monitoring

It is the responsibility of individual network devices to monitor their own status, and generate status/alarm reports. Statistics thresholds may be set on a device, causing the generation of alarms above a certain event occurrence or frequency level, which may in addition cause the device to revert to a partial operating condition.

The NMC may be subjected on occasion to floods of alarm messages, e.g. a catastrophic failure involving several devices may result in the simultaneous generation of messages from various sources, coupled with a cascading effect to other alarm conditions. In such instances, the NMC applies knowledge-based techniques to assess the seriousness of the situation and filter out all but those alarms which convey the most urgent priorities. This allows the NMC to cope with the flood of information and present to the operator a timely and coherent picture of the situation.
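The flood-filtering behaviour can be caricatured with a priority threshold, as in the sketch below; the priority scale, flood threshold and suppression rule are invented, and stand in for the knowledge-based techniques the NMC actually applies.

from dataclasses import dataclass

@dataclass
class Alarm:
    device: str
    text: str
    priority: int          # 1 = most urgent, 5 = informational

FLOOD_THRESHOLD = 10       # more alarms than this in one burst is treated as a flood

def filter_alarms(burst: list[Alarm]) -> list[Alarm]:
    """During a flood, present only the most urgent alarms to the operator."""
    if len(burst) <= FLOOD_THRESHOLD:
        return burst                                  # normal conditions: show everything
    most_urgent = min(a.priority for a in burst)
    return [a for a in burst if a.priority == most_urgent]

# A trunk failure followed by a cascade of consequential alarms from 20 NAPs.
burst = [Alarm("pse-7", "trunk down", 1)] + [Alarm(f"nap-{n}", "host unreachable", 3) for n in range(20)]
for alarm in filter_alarms(burst):
    print(alarm.device, alarm.text)                   # only the priority-1 trunk failure is shown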

Device and network testing

The availability of comprehensive test and diagnostic features, facilitating the remote testing of network devices, links and network characteristics, is an essential feature of the network administrator's toolkit. Four main classes of test function are defined, as follows (a small request sketch appears after the list):

1 Test calls test network routing, network integrity and device operational status. They may be initiated between NMCs, between an NMC and a network element, or between any two network elements.

2 Network device hardware tests are designed to check the operational status of specific network devices. It is the responsibility of the target device to carry out the appropriate self-tests on receipt of these commands, according to its local hardware characteristics, and to return the results.

3 V.54/X.150 tests exercise inter-device links via internal and external loopbacks in interface hardware (modems or network termination equipment).

These CCITT recommendations allow the NMC to initiate tests between remote network devices or hosts, on links which include conformant modems or interface equipment.

4 Link protocol traces allow the monitoring of protocol activity on a specified link. Trace information may be displayed either formatted or unformatted, stored for future examination or statistically analysed for the detection of, for example, recurring fault patterns.
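The four test classes can be pictured as requests issued from an NMC, as in the sketch below; the class names, request format and target identifiers are invented for the example and do not represent the actual test interface.

from enum import Enum, auto

class TestClass(Enum):
    TEST_CALL = auto()             # routing, integrity and device status, e.g. NMC to element
    DEVICE_HARDWARE_TEST = auto()  # device runs its own self-tests and returns the results
    V54_X150_LOOPBACK = auto()     # internal/external loopbacks through modems or NTE
    PROTOCOL_TRACE = auto()        # monitor protocol activity on a specified link

def initiate_test(test: TestClass, source: str, target: str, **options) -> dict:
    """Build a test request for dispatch from the NMC into the network."""
    request = {"class": test.name, "source": source, "target": target, **options}
    print("initiating", request)
    return request

initiate_test(TestClass.TEST_CALL, source="master-nmc", target="pse-3")
initiate_test(TestClass.V54_X150_LOOPBACK, source="pse-3", target="modem-17", loop="external")
initiate_test(TestClass.PROTOCOL_TRACE, source="slave-nmc-1", target="link-42", formatted=True)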

CONCLUSION

A data networking architecture has been described that addresses the conflicting pressures arising from continual technological change coupled with the need to satisfy strict business criteria.

The architecture is characterized by highly configurable and modular 'building brick' components, an evolutionary approach to OSI, and comprehensive network management facilities. In combination, these factors provide a powerful toolkit with which to tackle both tactical and strategic corporate networking issues efficiently and effectively, and ensure the maintenance of network stability in the face of expansion, general maintenance activities, or the introduction of new technology.

Unified network management facilities, oriented towards the emerging OSI network management standards, enable a wide variety of devices to be controlled from a single NMC. This in particular holds undoubted attractions for the network administrator, faced until now with a multiplicity of proprietary device mechanisms and user interfaces.

REFERENCES

1 Leigh, B I 'An architecture for all seasons' Proc. Networks 89 Network Management Conf., Birmingham, UK (June 1989)

2 Nedzel, A E 'The costs of network ownership' Proc. Networks 89 Network Management Conf., Birmingham, UK (June 1989)
