
Editor’s Corner* Robert L. Glass

Software Metrics: of Lightning Rods and Built-up Tension

The speaker on Software Metrics had just finished his presentation. “Are there any questions?” he asked of the audience.

In the back of the room, a hand went up. From the tone of voice, you could sense immediately that the question was hostile.

“Why do you talk about Halstead metrics?” the questioner asked. “They have been pretty well discredited. They simply don’t validate in any meaningful way.”

“In fact,” the questioner added as if in afterthought, “talking about Halstead’s Software Science is about like teaching Alchemy in a college course on Chemistry.”

The room took on a stunned silence. That kind of direct confrontation is rare in technical circles. While the speaker fought both for composure and for an answer to the challenge, I thought about what was happening.

Software metrics, as you probably know, is about the measurement of things pertaining to software. The most famous measurements are about complexity of software. Some other measurements have to do with productivity, quality, estimation, and a lot of other things we software folk wish we knew more about. Software metrics is that corner of the software world that seeks quantized answers to what so far have been qualitative issues.

When people talk about software metrics, there are a lot of schools of thought. One of those schools of thought is the Halstead school, the school labeled “Software Science,” the one where basic measures of software are undertaken in order to form a unified theoretical approach to the measurement of the artifacts of software.
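For readers who have not met them, Halstead’s measures derive everything from four counts: distinct operators (n1), distinct operands (n2), and their total occurrences (N1, N2). Here is a minimal sketch in Python, assuming the tokens of a program have already been classified into operators and operands; the classification rules are themselves a point of contention, as we will see:

```python
import math

def halstead(operators, operands):
    """Compute the basic Halstead Software Science measures.

    operators, operands: lists of the operator and operand tokens
    appearing in a program, in order of occurrence.
    """
    n1 = len(set(operators))   # distinct operators
    n2 = len(set(operands))    # distinct operands
    N1 = len(operators)        # total operator occurrences
    N2 = len(operands)         # total operand occurrences

    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)    # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)          # D = (n1/2) * (N2/n2)
    effort = difficulty * volume               # E = D * V
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

# For the statement  z = x + y  one plausible classification is:
print(halstead(operators=["=", "+"], operands=["z", "x", "y"]))
```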

It seemed like a good idea when Halstead first came up with it. But, over the years, software researchers and developers trying to validate the metrics have encountered more and more frustration. The metric results are supersensitive to the items measured. Many have given up on the value of the work.
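To make that supersensitivity concrete, consider a hypothetical illustration (the conventions and numbers below are mine, not drawn from any published validation study): the same one-line statement yields a Halstead volume that differs by roughly half again, depending only on how one chooses to count a function call.

```python
import math

def volume(operators, operands):
    """Halstead volume V = N * log2(n): length times log of vocabulary."""
    n = len(set(operators)) + len(set(operands))   # vocabulary
    N = len(operators) + len(operands)             # length
    return N * math.log2(n)

# Two defensible tokenizations of the same statement:  y = f(x) + 1
# Convention A treats the call "f()" as a single operator.
v_a = volume(["=", "f()", "+"], ["y", "x", "1"])
# Convention B counts "f" as an operand and "(" and ")" as operators.
v_b = volume(["=", "(", ")", "+"], ["y", "f", "x", "1"])

print(f"V under convention A: {v_a:.1f}")   # ~15.5
print(f"V under convention B: {v_b:.1f}")   # 24.0
```

A gap that size on a single statement compounds across a whole module, which is one reason validation results swing with the counting rules.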

But while that has been happening, entrepreneurs have discovered metrics. There are now commercially available tools that calculate a variety of metrics, including Halstead’s, and tout those metrics as the answer to management’s plea for knowledge and control of the software development process. Those claims have escalated the emotional setting in which metrics already found itself.

The explosive scene where the speaker was challenged by hostile questions is not atypical of the late 1980s reaction to software metrics. As the speaker regained his composure and began responding as best he could, I thought further about what was really happening.

There is a tension between those who espouse computer science theory and those who practice software as something of an engineering discipline. As I thought, I began to realize that here, in software metrics, is the lightning rod for that tension. The harsh question from the audience was the lightning bolt, the inevitable result of that tension.

Metrics vary in difficulty from those that are easy to obtain to those that are hard. But, when looked at in the context of other computer science interest areas, even the hardest of the metrics is still easy to examine and evaluate. That is, it is much easier to try to validate a metric than to validate, for example, the quantitative value of Parnas’ principles of information hiding. No one doubts the value of Parnas’ work and the value of appropriate modularization, but no one really has any idea how much better software is when it has those characteristics than when it does not.
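What makes even the hardest metric comparatively easy to examine is that validation reduces to a statistical question: does the number track an outcome we care about? A sketch of the shape such a study takes, with purely invented data (nothing here comes from a real project):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented numbers, purely to show the shape of a validation study:
halstead_volume = [120.5, 340.2, 88.0, 510.7, 260.3]   # metric per module
defects_found   = [3, 9, 2, 11, 6]                      # outcome per module

r = pearson(halstead_volume, defects_found)
print(f"correlation between volume and defects: r = {r:.2f}")  # ~0.98 here
```

No such one-afternoon experiment exists for information hiding; that asymmetry is exactly why the metrics attract the lightning.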

So here, in software metrics, people can not only choose up sides about the value of academic computer science theory, but they can also point with either pride or disdain to the measurement of that value from easily obtained results.

In a very important way, that’s too bad. Because tangled in amongst the entrepreneurial hype and the theoretic advocacy, there is some important reality to software metrics. We may not have found the right ways to do it yet, but it is vital that we keep trying. If Halstead was wrong, what is right? Can we learn anything about building software in the future based on the numeric artifacts gathered from dusty past efforts?

Surely, the answer will eventually be “yes.” And when that happens, perhaps lightning will no longer strike at those who stand up and talk about software metrics.

*This editorial was reprinted with permission from the column “Software Reflections” by Robert L. Glass in System Development, P.O. Box 9280, Phoenix, AZ 85068.

The Journal of Systems and Software 10, 157–158 (1989). © 1989 Elsevier Science Publishing Co., Inc. 0164-1212/89/$3.50