WHAT IS SOFTWARE MEASUREMENT?
Put aside for a moment everything you know about software. Forget what you learned from Pressman and Kan. Ignore what the tool vendors have told you. With your slate clean, answer the following deceptively simple question: What are we measuring when we measure software, and why do we measure it?
On the face of it, the answer seems straightforward. We are measuring a process—the tasks involved in developing software. We are also measuring a thing—the software product’s functional “content” and its conformance with specifications and quality requirements. This answer, however, merely identifies the two major domains of inquiry: process and product. It tells us the areas we want to measure, but it doesn’t help us decide what exactly we want to measure, why we want to measure that instead of something else, or what we ought to do with the data once we have it.
Those two domains of inquiry are huge, and they span a host of interrelated components. So there won’t be a simple answer to the question. When we investigate “software,” we are examining design and development processes, validation processes, customer needs and expertise at various points in time, code, documents, specifications, online help, and so on. To make matters more interesting, very few of these components are actually tangible things.
For example, requirements drift is not a thing in itself—it’s a change, a delta. For convenience’s sake, we like to locate drift in the physical difference between a requirements document at time A and time B. That lexical difference isn’t the shift itself, however. It’s the symptom or the trace of the measurement target, which is the event of drift. And that event is very hard to analyze effectively. It might be that the customer simply changed its collective mind. It might be that the systems engineers neglected to probe customer requirements deeply enough to determine the real requirements. It might be that the requirements never really changed, but were instead inaccurately documented or inappropriately interpreted during the development cycle.
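To make the distinction concrete: the lexical delta between two document snapshots is easy to compute, and a minimal sketch is below. The function name, the drift metric (one minus a word-level similarity ratio), and the sample requirement text are all hypothetical illustrations, not a standard measure—and, as argued above, whatever number comes out is only a trace of the drift event, not the event itself.

```python
import difflib

def lexical_drift(version_a: str, version_b: str) -> float:
    """Return a crude drift score between two versions of a
    requirements document: 0.0 means lexically identical,
    1.0 means nothing in common (word-level comparison)."""
    a_words = version_a.split()
    b_words = version_b.split()
    matcher = difflib.SequenceMatcher(None, a_words, b_words)
    return 1.0 - matcher.ratio()

# Two hypothetical snapshots of the same requirements fragment,
# taken at time A and time B.
time_a = ("The system shall export reports in PDF format. "
          "Response time shall not exceed 2 seconds.")
time_b = ("The system shall export reports in PDF and CSV formats. "
          "Response time shall not exceed 1 second.")

print(round(lexical_drift(time_a, time_b), 2))  # → 0.25
```

The score tells us that a quarter of the word material changed, but nothing about why: the same 0.25 could come from a genuine change of mind, a clarified misunderstanding, or a documentation correction.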
Similarly, we often speak of the source code as the end product of software development. Source code isn’t a product in the typical sense of the term, and its transfer to a CD isn’t the end result of the process. The source code is a code: like any language, it is the result of experience and thinking and analyzing and communicating. Like any language, it only exists as a language when it is used or executed. The process isn’t complete even when the software is first used by the customer’s employees to successfully accomplish some task. It’s an ongoing process with many exit points and many decision milestones. Between the time the request for proposal arrives and the time the customer signs an end-of-warranty agreement, hundreds of factors are involved in specifying, designing, creating, testing, producing, distributing, using, and evaluating “software.”
If “software” is really a collection of multiple attributes evaluated by many people over a long period of time, just what are we supposed to measure? The simple answer is: We measure what will help us get our work done.
All measurement has a rationale, a purpose. It has an audience. It is a means to an end. Someone is going to use the data for some purpose. They will draw conclusions from it. They may change project plans or scope or cost estimates based on those conclusions. Those actions will in turn affect other aspects of the project, maybe even the business itself.