
Comput. & Indus. Engng Vol. 9, No. 4, pp. 419-425, 1985. 0360-8352/85 $3.00 + .00. Printed in the U.S.A. © 1985 Pergamon Press Ltd.

PERFORMANCE MONITORING TOOLS FOR COMPUTER SYSTEMS: HARDWARE, SOFTWARE, FIRMWARE AND HYBRID CONTROLS

AVI RUSHINEK, Department of Accounting, University of Miami, Coral Gables, FL 33124, U.S.A.

and

SARA F. RUSHINEK, Department of Management Science and Computer Information Systems, University of Miami, Coral Gables, FL 33124, U.S.A.

Abstract--Performance monitors are devices used by system analysts and industrial engineers to measure the efficiency of a computer system. Their nature and applications in a computerized information system are the concern of this study.

A description is given of the characteristics of monitors, the types of measurement they perform and their four basic categories: hardware, software, firmware and hybrid. The advantages and disadvantages of each of the categories are analyzed.

The major problems facing the system analyst in a system where monitors are used are those of data integrity. The system analyst must ensure that monitors are correctly installed and that they fulfill their purpose as expected without altering the rest of the system's operations. Unauthorized use of monitors must be prevented, and the primary controls necessary to ensure data privacy, those of access control, must be implemented and enforced.

INTRODUCTION

In a computerized information system, proving that the system runs efficiently can be a difficult task. The analyst cannot simply walk around the organization's departments visually checking that all the individual procedures fit together in the best possible way. Even if the analyst were able to look inside the computer and observe its manipulation of data, it would be impossible to evaluate effectively whether the system was performing at its optimal level.

The analyst's complex task of evaluating the efficiency of the system can be accomplished through performance monitors, which are able to measure entities in the system that are not otherwise measurable. Any and all of the resources in a computer system may be measured to evaluate its performance, but generally the analyst will need to measure only a subset of the system's total resources. Examples of parameters measured by monitors are given by Lucas[1], who states that "CPU and Input/Output (I/O) time combined with wait and overlapped activities" are easily measured by hardware monitors, while software monitors typically provide such information as "statistics on I/O wait time, idle time," etc. Measurements can identify bottlenecks in the system, or underutilization of some resources. For example, it is very easy for monitors to report the exact amount of CPU utilization for particular processes or the exact way in which storage is being used.

Evaluating system efficiency was formerly considered the sole domain of the systems staff, according to Bates[2]. However, the analyst became involved in this task when management realized that a considerable amount of money was being spent on Electronic Data Processing (EDP) without a full understanding of where and why it was being spent. Since the analyst must determine whether an EDP facility is making the best use of its existing resources, it makes sense for the analyst to be involved in the monitoring process. It must be realized that this involvement entails greater technical knowledge of electronics and computer programming than was formerly required of analysts.



CHARACTERISTICS OF PERFORMANCE MONITORS

The five elements usually present in a performance monitor are the sensor, the selector, the processor, the recorder and the reporter. The sophistication and capabilities of these elements depend on the type of monitor. All monitors act as interfaces between the computer system and the system analyst, with the sensor taking the initial measurements from the computer and the reporter presenting information to the analyst in a suitable form. This interface concept is especially meaningful when it is noted that, without a monitor to interpret events in an intelligible form, the analyst could have very little idea of what occurs inside the computer. After the measurements are taken by the sensor, the selector receives and collates these data prior to their transfer to the processor. The processor manipulates the data into a form suited for storage. Finally, the reporter presents the data to the analyst in an understandable fashion. The reporter may be sophisticated, user-oriented software requiring minimal technical knowledge from the analyst, a simple hardware device whose output is not meaningful without a great deal of technical knowledge, or something between these two extremes.
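To make the division of labor among these five elements concrete, the following Python sketch models a toy monitor pipeline. It is purely illustrative: the class name, the event names and the measurement values are invented for this example and do not describe any particular monitor product.

import statistics
import time


class ToyMonitor:
    """Toy model of the five monitor elements: sensor, selector, processor, recorder, reporter."""

    def __init__(self, watched_events):
        self.watched_events = set(watched_events)  # the selector's filter criteria
        self.storage = []                          # the recorder's storage area

    def sense(self, event_name, duration):
        # Sensor: take a raw measurement from the "system".
        return {"event": event_name, "duration": duration, "taken_at": time.time()}

    def select(self, sample):
        # Selector: keep only the events the analyst asked to measure.
        return sample if sample["event"] in self.watched_events else None

    def process(self, sample):
        # Processor: reduce the raw sample to a form suited for storage.
        return (sample["event"], round(sample["duration"], 6))

    def record(self, item):
        # Recorder: store the processed data.
        self.storage.append(item)

    def report(self):
        # Reporter: present the stored data to the analyst in summary form.
        durations = [d for _, d in self.storage]
        mean = statistics.mean(durations) if durations else 0.0
        return {"samples": len(durations), "mean_duration": mean}


monitor = ToyMonitor(watched_events={"disk_io"})
for name, dur in [("disk_io", 0.012), ("cpu_burst", 0.002), ("disk_io", 0.020)]:
    kept = monitor.select(monitor.sense(name, dur))
    if kept is not None:
        monitor.record(monitor.process(kept))
print(monitor.report())  # e.g. {'samples': 2, 'mean_duration': 0.016}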

Resource consumption activities can be measured in five ways: trace, activity duration, relative activity, activity frequency and distribution. The trace informs the analyst of the motions through which the system has passed. In many computer systems the trace is built into the operating system or can be purchased as part of the applications software. It is usually made up of software and/or firmware instructions and can be switched on or off at will. For example, in one system, the command "TRON" before running a program causes a display of the program line numbers executed in order. "TROFF" switches the trace off; this type of trace can be useful for checking and debugging software. Activity duration is merely a measurement of the length of time taken by an activity. Relative activity is the ratio of duration to total elapsed time, which can aid the analyst in estimating such important entities as idle time, for example. The frequency of activities and their statistical distribution are also useful measures accomplished by monitors.
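The following short Python sketch illustrates, with invented numbers, how activity duration, relative activity and activity frequency would be computed from a hypothetical event log; it is an illustration of the definitions above, not a description of any actual monitor.

from collections import Counter

total_elapsed = 10.0                      # seconds of observed elapsed time
event_log = [                             # (activity, duration in seconds)
    ("cpu_busy", 1.2), ("io_wait", 0.8), ("cpu_busy", 2.1),
    ("idle", 3.0), ("io_wait", 1.4), ("cpu_busy", 1.5),
]

durations = Counter()   # total activity duration
frequency = Counter()   # activity frequency
for activity, duration in event_log:
    durations[activity] += duration
    frequency[activity] += 1

for activity in durations:
    relative = durations[activity] / total_elapsed   # relative activity
    print(f"{activity}: {durations[activity]:.1f} s total, "
          f"{relative:.0%} of elapsed time, {frequency[activity]} occurrences")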

TYPES OF MONITORS

There are four types of monitors: hardware, software, firmware and hybrid. Before deciding on the type of monitor to be used, Arndt and Oliver[3] suggest that "specific evaluation questions should be formulated, hardware and software to be monitored should be fully understood and the capabilities of monitoring systems studied." Hardware, software and firmware monitors have different capabilities and limitations as well as a variety of different costs.

There are seven features which influence the capabilities of monitors, but these features have varying effects depending on the type of monitor used. Hardware, software, firmware and hybrid monitors all have their strengths and weaknesses, and it is incumbent upon the Data Processing (DP) manager and the analyst to ensure that the monitor or monitors chosen best satisfy the needs of the particular computer system and organization. No type of monitor can be considered better than another in general; rather, each type has its own inherent characteristics that enable it to perform some tasks better than others.

The seven features affecting performance of monitors are: artifact, domain, resolution, input width, data reduction capability, data storage capacity and precision. The monitor artifact is the disturbance to the computer system caused by the actual measurement process.

The most significant artifact is encountered with software monitors, as they are programs needing storage space and CPU time. These effects are termed "space" and "time" artifacts. A good example of a time artifact is encountered with the UNIVAC 1100 system's software monitor for measuring utilization of CPU time. This monitor is activated by using the word "PIP"; it gives the exact CPU time used in executing a program. The accuracy of the result is not reliable for very small programs, as "PIP" itself takes up CPU time in execution. Another important feature is the domain of the monitor, which is its scope, i.e. the group of activities it can measure successfully. Usually hybrid monitors are the most specialized, having the smallest domain, whereas hardware monitors generally have the largest. The resolution of a monitor is the fastest rate at which it can accept input data on events accurately. Input width governs the number of bits of input data which the monitor can accept. Software monitors perform best in this respect, with almost unlimited input width.
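The time artifact described above can be illustrated with a small, purely hypothetical Python sketch: when the measured work is very small, the monitor's own bookkeeping becomes a noticeable part of the reported CPU time. The example simulates that overhead with an artificial loop; it does not reproduce the UNIVAC 1100 "PIP" facility.

import time


def tiny_program():
    # A very small measured workload.
    return sum(range(10))


def measure(fn, monitor_bookkeeping_iterations=0):
    start = time.process_time()
    fn()
    # Simulate the CPU time consumed by the monitor's own bookkeeping.
    for _ in range(monitor_bookkeeping_iterations):
        pass
    return time.process_time() - start


clean_reading = measure(tiny_program)
biased_reading = measure(tiny_program, monitor_bookkeeping_iterations=100_000)
print(f"reported CPU time without monitor overhead: {clean_reading:.6f} s")
print(f"reported CPU time with simulated overhead:  {biased_reading:.6f} s")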

A monitor's ability to summarize data before storage (a task performed by the processor) is known as its data reduction capability. Storage space accessible by the monitor for its own data is another major consideration when choosing a monitor. For some applications not much storage may be required, but generally, in a large system the analyst will need to measure many activities needing a great deal of monitor storage space. The final factor for consideration when choosing a monitor is precision, which relates to the number of digits available for each item of data.
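A minimal sketch of data reduction and precision, using invented sample values, might look as follows in Python: the processor condenses a stream of raw duration samples into a short summary before recording, keeping a fixed number of digits per item.

def reduce_samples(raw_samples, precision_digits=3):
    # Condense many raw samples into a few stored values (data reduction),
    # keeping precision_digits digits per item (precision).
    n = len(raw_samples)
    return {
        "count": n,
        "min": round(min(raw_samples), precision_digits),
        "max": round(max(raw_samples), precision_digits),
        "mean": round(sum(raw_samples) / n, precision_digits),
    }


raw = [0.01234, 0.01987, 0.01102, 0.02541, 0.01311]
print(reduce_samples(raw))  # four stored values instead of five raw samples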

HARDWARE MONITORS

Hardware monitors are actual "physical" devices connected with probes to the circuitry of a computer. Roderick[4] describes them as devices which "gather utilization data by sampling the computer's electronic activity." Some are built into the system and are known as fixed hardware monitors, while others are connected by probes and termed wired-program hardware monitors. The most sophisticated have their own minicomputers to process results and are called stored-program hardware monitors. The advantage of these is their improved "user-friendliness," requiring less expertise than simpler hardware monitors.

Engle[5] gives a detailed comparison between hardware and software monitors. With hardware monitors, measurements are made electronically, so that events of very short duration can be accurately detected; in addition, several events can be measured simultaneously, even events on separate units of hardware. They cause no artifact in time or space, as they are completely external to the system. Their physical properties mean that they can be used on any system, which makes them very portable. Both Roderick[4] and Menkus[6] refer particularly to the broad domain of hardware monitors, another great advantage.

On the negative side, hardware monitors cannot make any measurements relating to software activities, like the changes in a variable within a program for example. These hardware monitors are also very easily damaged, intentionally or otherwise. Damage may occur when connecting them, if probes are wrongly attached, as they are very sensitive instruments. When they are correctly installed, damage is still possible through rough handling or blows (accidental or intentional) to the probes. They require expert installation and operation, not only to prevent any harm being done to them, but also because they are sophisticated devices and usually have a complex output.

With all these strengths and weaknesses, Roderick[4] notes that hardware monitors can still provide "detailed information about all aspects of the efficiency of a computer system." Menkus[6] is especially concerned with costs, and states that hardware monitors can help reduce spiralling EDP costs through their ability to measure "the share of available computer resources used by particular programs in multiprogram environments." Hardware monitors have been in use for a long time, and are still relied upon heavily in the computer world.

SOFTWARE MONITORS

There are two main categories of software monitors, event-driven and time-driven (also known as "sampling monitors"). Engle[5] describes the time-driven monitor (TDM) as a "periodic sampling system activated at user-specified intervals." There is a timer, either connected to a clock-like device generating signals at constant intervals or programmed to generate signals at random intervals. These signals determine the time at which measurements take place by a statistically rigorous procedure, resulting in the term "sampling."

Event-driven monitors (EDM) are activated by the occurrence of a specific event within the system or a program. An example is a CPU interrupt being generated, but the simplest example is an "IF...THEN" type of statement in a program. Engle's study also notes that event-driven monitors usually interface with the operating system's interrupt-handling mechanism, giving them CPU control at a very basic level. EDMs are more expensive than TDMs, but can capture almost every occurrence of the event being studied. However, a good sampling procedure can make TDMs almost as effective in many systems.
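The contrast between the two categories can be sketched in Python as follows. The interval, the simulated workload and the event names are assumptions made only for this illustration: a background thread plays the role of a time-driven (sampling) monitor, while a hook function called on each occurrence plays the role of an event-driven monitor.

import random
import threading
import time

samples = []       # filled by the time-driven (sampling) monitor
event_count = 0    # filled by the event-driven monitor


def time_driven_monitor(interval_s, stop_flag):
    # Wake at fixed intervals and sample the (simulated) system state.
    while not stop_flag.is_set():
        state = "busy" if random.random() < 0.7 else "idle"
        samples.append((state, time.time()))
        time.sleep(interval_s)


def on_io_complete():
    # Event-driven hook: runs each time the watched event occurs.
    global event_count
    event_count += 1


stop_flag = threading.Event()
sampler = threading.Thread(target=time_driven_monitor, args=(0.01, stop_flag))
sampler.start()

for _ in range(50):        # simulated workload that raises the watched event
    on_io_complete()
    time.sleep(0.005)

stop_flag.set()
sampler.join()
busy_samples = sum(1 for state, _ in samples if state == "busy")
print(f"sampling monitor: {busy_samples}/{len(samples)} samples were busy")
print(f"event-driven monitor: {event_count} events captured")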

Among the many advantages encountered when using software monitors is their flexibility; they measure system events using instructions rather than the purely physical entities handled by hardware monitors. As already noted, they can accept an almost unlimited number of bits of input data, and the analyst can program as many checkpoints as required. They are easily installed, more so than other types of monitors. For example, software monitors may be typed at a keyboard by a typist with minimal supervision. Since they are internal coded devices, they cannot be easily damaged or altered. Of course the danger of damage by such physical factors as heat and external magnetic fields still remains, but these factors can damage the entire system and must be guarded against routinely.

Properly developed software monitors are almost impossible to damage accidentally in other ways, and intentional alterations would require a good knowledge of the system and of the monitor itself, as well as physical and operational access. Added to all these advantages is the fact that software monitors are often less expensive than hardware monitors. Among the weaknesses of software monitors is their restricted portability.

Generally, software monitors are written in the assembly language of the host computer and cannot run on most other computers. Although software monitors are easily installed and do not require much technical skill, the person ultimately responsible for their implementation and use must be completely familiar with the host system or program for proper installation and meaningful results. For example, incorrect punctuation (periods, commas, semicolons, etc.), typing a line in the wrong place and similar mistakes cause either incorrect results or, more usually, the inability even to run the program. Similarly, misunderstood output from monitors is at the very least useless and at worst disastrous. Software monitors normally cause by far the greatest time and space artifact.

A software monitor's instructions take up storage space, and their execution uses CPU time. Engle[5] estimates the storage space requirement to be typically about 6-200 K bytes of core, and the storage and processing overhead to be about 1-5%. The information gathered can be biased by the artifact. However, as already noted, this may not be a significant bias, depending on the type of measurements taken. Software monitors have limited scope, being able to measure only the information available through machine commands. Simultaneous measurement is not possible; however, the approach of frequent sampling can compensate for the lack of simultaneous measuring capability in many cases.
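A rough, back-of-the-envelope sketch of this time and space artifact, with figures invented purely for illustration (they are not Engle's measurements), might run as follows in Python.

hook_cost_us = 25.0        # CPU microseconds consumed each time a hook runs
hooks_per_second = 1_200   # how often the instrumented events fire
monitor_core_kb = 60       # resident size of the monitor code, in K bytes

cpu_overhead = hook_cost_us * 1e-6 * hooks_per_second   # fraction of one CPU
print(f"estimated CPU overhead (time artifact): {cpu_overhead:.1%}")   # 3.0%
print(f"core occupied by the monitor (space artifact): {monitor_core_kb} K bytes")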

There is a wide range of software monitors available for IBM 360 systems, so an organization with this type of computer has a large selection. For cost, simplicity and invulnerability, software monitors are very appealing; however, for more specialized needs, wider scope, lower artifact or use in more than one system, another type of monitor is necessary.

FIRMWARE MONITORS

The word firmware is difficult to define, as firmware is an overlap between hardware and software, in which coded instructions are used to manipulate microprocessors. In the past there was no such overlap: the areas handled by firmware today were then strictly hardware. With the tendency towards more sophisticated and much smaller hardware components, firmware instructions are now widely used.

Firmware monitors have the advantage of a smaller artifact than that encountered with software monitors. This is because firmware instructions use microcode, which takes less storage space and less execution time than ordinary instructions. The overlap between hardware and software enables firmware monitors to access some hardware indicators, so they can take over some of the functions of hardware monitors at less cost. Events can be input at a faster rate than with software monitors, yet still with a large input width.

However, some disadvantages are encountered with firmware monitors. The high input rate, which is an advantage, can lead to a slower execution speed for the host computer, causing time artifact. Firmware monitors have very simple processors, which cannot transform data as well as those of some software monitors.

Data is usually stored merely as a trace, for example. Firmware monitors are less widely used than the preceding two types and their main value lies in specialized ap- plications and needs. In some systems the subset of resource activities to be measured may exactly correspond to the capabilities of a firmware monitor. This is especially true when only one monitor is required; cost is a major factor and both hardware and software type activities are critical.

HYBRID MONITORS

Hybrid monitors, consisting of hardware, software and/or firmware elements, are also used in specialized applications. The typical hybrid monitor features an external hardware device operating in cooperation with internal software or firmware instructions. Advantages include a reduction in artifact, since hybrids cause less interference than software or firmware monitors, and easier implementation through well-defined processes. These well-defined processes are necessitated by the specialized nature of hybrid monitors. Their main disadvantage is that they can monitor only a small subset of events. However, when an organization chooses a hybrid monitor, it is normally chosen to accomplish a specific small group of measurement activities in the best possible way.

DATA INTEGRITY

When a performance monitor is used to evaluate the efficiency of the computer system, the analyst must take special care to assure data integrity. The monitor should not interfere with the correct functioning of the system, nor should it cause any alterations in the data used in the system. These problems may arise through accidental or intentional misuse of the performance monitor.

First, the analyst must ensure correct system instrumentation. Incorrect installation of the monitor can cause damage to the monitor or the system, and affect the validity of the information being output by the system. There are several options whereby the analyst can check for correct instrumentation. First, he can use test programs which can, for example, check for proper placement of hardware monitor probes. These test programs should be available from the suppliers of the monitor, or they can be developed in-house with the proper expertise. Another alternative is a special assembly program which executes loops of software or firmware instructions a specified number of times to check that results conform to a predetermined standard. Some monitors actually include controls to check their own installation. However, the analyst should understand exactly what these controls are capable of doing before relying on them. As noted by Ramamoorthy, Kim and Chen[7], software errors can have disastrous results. Complete validation is generally considered infeasible due to high costs, so most approaches aim at partial validation with high cost-effectiveness.
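The idea of the loop-based check can be sketched as follows; the sketch is in Python rather than assembly, and the hook shown is a stand-in for a real monitor's counting mechanism, assumed here only for illustration.

EXPECTED_ITERATIONS = 10_000
recorded_iterations = 0


def monitor_hook():
    # Stand-in for the monitor's event counter.
    global recorded_iterations
    recorded_iterations += 1


for _ in range(EXPECTED_ITERATIONS):   # the known test workload
    monitor_hook()

if recorded_iterations == EXPECTED_ITERATIONS:
    print("instrumentation check passed")
else:
    print(f"instrumentation check FAILED: expected {EXPECTED_ITERATIONS}, "
          f"recorded {recorded_iterations}")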

The second major problem facing the analyst is the possibility that monitors may be purposely misused to violate data privacy. Bach[8] asserts that all subjects should have the right of protection against data abuse; this right should reduce the possibility of obtaining data illegally, allow data subjects access to their records, ensure the quality of data collected, and define and enforce confidentiality. In many cases the organization must provide these rights by law. In the broadest sense, this principle of data privacy can be extended to encompass the accessibility of all types of data. In every organization there is some set of data which should not be generally accessible to every employee, and especially not to the general public. To prevent the intentional misuse of monitors, the most important controls are physical and operational access controls. Passwords, recognition of objects, recognition of characteristics and dialog procedures should all be extended to cover use of the monitor. Even increased physical controls may be necessary to ensure that the monitor is not damaged or misused.
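As a hypothetical illustration of extending access controls to the monitor itself, the following Python sketch refuses to activate a monitor for users who are not on an authorization list or who present the wrong credential; the user names and the credential check are invented and deliberately simplified.

AUTHORIZED_MONITOR_USERS = {"analyst01", "dp_manager"}


def start_monitor(user_id, credential, expected_credential="s3cret"):
    # Refuse to activate the monitor for unauthorized or unauthenticated users.
    if user_id not in AUTHORIZED_MONITOR_USERS:
        raise PermissionError(f"{user_id} is not authorized to use the monitor")
    if credential != expected_credential:
        raise PermissionError("credential rejected")
    print(f"monitor started for {user_id}")  # a real system would also log this


start_monitor("analyst01", "s3cret")      # allowed
# start_monitor("guest", "s3cret")        # would raise PermissionError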

Keliher[9] is concerned that "today it is not enough to establish a security and privacy system. Such a system must be managed as any other business project. As more members of the public gain access to terminals and computers, computer suppliers and users must ensure that the sophistication of safeguards grows along with this trend." Security has to be built into a system, especially in the area of access control.

CONCLUSIONS

Performance monitors are tools which measure the efficiency of a computer system. Each of the four major categories of performance monitors has strengths and weaknesses which must be evaluated before deciding which monitor to install. After choosing which type of monitor will be implemented, it is necessary to ensure that special controls are designed to preserve data integrity.

Several authors have made observations on the impact of monitors. Clawson[10] stresses the new insight they bring into the usage of a computer system. Menkus[6] is more interested in their capacity to offset costs through the definition of limits on the share of available computer resources for each operation or subsystem. Making the best use of existing resources is also a benefit afforded by performance monitors, according to Bates[2]. There is no doubt that familiarity with the system, cost control and optimization of resource usage are all highly desirable to the analyst. Performance monitors aid in accomplishing these goals, but they must be properly installed and controlled and their results correctly interpreted and acted upon.

The areas in which an analyst must be involved today continue to expand as the EDP industry reaches new heights of sophistication. There is no doubt that performance monitors bring great benefits, but along with these benefits come additional problems as well. They create a need for personnel with more technical expertise, and controls are necessary to prevent their misuse. It is generally accepted that advances in the computer industry have outstripped the necessary controls. In the realm of performance monitors, analysts must rely heavily on access controls, and these controls must continue to develop as the computer system becomes more complex, both for the prevention of computer crime and to ensure proper functioning of the system.

Acknowledgments--The authors gratefully acknowledge the anonymous referees for their helpful comments and suggestions. Thanks are due to Sheila Mihoubi for her typing of the revisions of this paper. For additional information, please write to the authors.

REFERENCES

1. H. C. Lucas, Jr., Performance evaluation and monitoring. Computing Surveys 3, 79-91 (1971).
2. C. F. Bates, Computer performance evaluation and the internal auditor. The Magazine of Bank Administration 52, 6, 12-16 (1976).
3. F. R. Arndt and Capt. A. M. Oliver, Hardware monitoring of real-time computer system performance. Digest of 1971 IEEE International Computer Society Conference, 123-125 (1971).
4. B. A. Roderick, How to audit the efficiency and economy of computer systems. Internal Auditor 34, 42-46 (1977).
5. W. B. Engle, Making the most of performance monitors. Computer Decisions 10, 50-63 (1978).
6. B. Menkus, EDP perspective--Four ways you can offset spiralling EDP costs. Adm. Management 36, 73 (1975).
7. C. V. Ramamoorthy, K. H. Kim and W. T. Chen, Optimal placement of software monitors aiding systematic testing. IEEE Transactions on Software Engineering SE-1, 403-411 (1975).
8. G. G. F. Bach, Data privacy--critical issue of the 80's. Telecommunications 14, 43-48 (1980).
9. M. J. Keliher, Computer security and privacy: A systems approach is needed. Vital Speeches 46, 662-666 (1980).
10. W. K. Clawson, Software physics shows managers what's happening. Computer World 11, 25 (1977).