I realized recently that I hadn’t posted in a while.
I have a couple PA meetings to complete, and then we’ll start the goal-setting process for next year. Much of this is still very up in the air, because HR is still thinking about guidelines for us.
As I’ve told most people, I think we should be doing competency goals rather than task goals. There are several reasons for this, not least of which is that we don’t really yet know what we’re doing next year. I’ve asked people to think about numerical measures, and about how we might collect them.
I have a thought.
Somebody (somebody tell me who) pointed out that software is different because it’s a design activity. The construction cost for software is almost unmeasurable. So we should measure software quality by measuring the design outcomes that we want.
Leaving aside affective design elements, we want software that is more capable, more readily modified and maintained, and as low in complexity as possible. Since the appropriate level of complexity isn’t fixed (it depends on the feature set), adding features will add complexity, but fixing bugs should reduce it.
So, we should measure things like the completeness (or maybe initially just the presence) of interface specification (in the form, for example, of Javadoc) and internal commentary, and compliance with a standard coding form. We can measure complexity several ways, but the simplest is the McCabe number, which we can directly compute from the code. For each revision, we would examine whether the code got more or less complex, and prefer less complex to more complex results.
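As a concrete illustration of how directly computable the McCabe number is, here is a minimal sketch in Python using the standard-library `ast` module. It uses the usual approximation (complexity = decision points + 1) and, for simplicity, counts each boolean operator chain as a single decision point; the function name `mccabe` and the exact set of counted node types are my choices, not anything prescribed by the McCabe definition.

```python
# Minimal sketch: approximate cyclomatic (McCabe) complexity for
# top-level Python functions by counting decision points in the AST.
import ast

# Node types that introduce a branch in control flow.
# (Simplification: each BoolOp chain counts once, regardless of length.)
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                   ast.BoolOp, ast.IfExp)

def mccabe(source: str) -> dict:
    """Return {function_name: complexity} for each top-level function."""
    tree = ast.parse(source)
    results = {}
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            decisions = sum(isinstance(n, _DECISION_NODES)
                            for n in ast.walk(node))
            results[node.name] = decisions + 1  # straight-line code scores 1
    return results
```

Run against each revision, a script like this gives exactly the per-revision comparison described above: did the numbers go up or down?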
So, somebody tell me what’s wrong, if anything, with starting to collect those three numbers? They won’t measure productivity, but they are a simple set of quality measures. As I previously suggested, the corollary to the article I sent around is that organizations that don’t collect numbers probably suck. This would help us suck less.