Archive for September 2006

Where's Tom? (Tue/Thu)

September 13, 2006

Starting next week I’ll be taking a seven-week online PMP prep class on Tuesday and Thursday afternoons. (It was cheaper than the five-day course in NYC.)

While I’m online for the class, I’ll want to minimize interruptions, and I may do some of the sessions from home, where it’s harder to walk in on me. I will still be available by email and IM, though I may push you off if I’m doing something difficult in the class.

So if you start to wonder why I’m never around Tuesday and Thursday afternoons, that’s why.

tc>

HST MSR News

September 13, 2006

Yesterday was the HST MSR, and while there was little news, there is a little controversy brewing.

COS and WFC3 are making progress toward being flight-ready, though in the case of WFC3 they have some very odd things to deal with. The oddest is what appears to be small particulate contamination on the inner window of one of the detectors: the particles show up in images, but aren’t visible under a microscope.

Both teams are now thinking about what support they will need to be flight-ready on the data processing side. Both need support for Thermal Vacuum, Servicing Mission Ground Tests, and an Integrated Pipeline Test, and both would also like a separate test of association processing. None of the tests seems terribly challenging, though some of the WFC3 world coordinate system work looks somewhat difficult.

The problem is, as the WFC3 team suggested, that the OPUS people are mostly working on Kepler, and the DADS people are mostly working on HLA. That leaves very limited resources available to support any changes needed (either prior to or as a result of these tests) to OPUS and DADS until some time in May. Since the tests are all scheduled to be done by sometime in March, that’s a problem.

You all know we’re in the process of trying to find somebody to work on OPUS, and later on Calibration (what I’m now calling the Calibration Products Team, since the acronym is available) to provide a longer-term solution to this problem. However, we need to figure out some kind of short-term option. Mike Hauser suggested that if nothing else, we’d need to move resources back from Kepler to address the HST issues.

Anyway, there will be several follow-up meetings to work out the schedule, figure out which tests really need to be done early, and identify who might be able to help out, either from within ESS or from elsewhere inside or outside the Institute.

More news on this will certainly be forthcoming.

tc>

Serendipity

September 11, 2006

This morning I couldn’t decide what to listen to on what turned out to be a very long ride into work. So I set my iPod to Shuffle, and the very first song it picked was Creedence Clearwater Revival’s “Who’ll Stop the Rain.”

My first thought was to run out and get lottery tickets.

Of course, CCR’s music is (sometimes deliberately) ironic on multiple levels: a “Louisiana swamp band” from the San Francisco Bay Area, four middle-class Californians trying to sound like rednecks “Born on the Bayou.” Both Fogertys managed to avoid Vietnam despite being drafted, and thus wrote authoritatively about situations with which they had no experience. “Who’ll Stop the Rain” is apocryphally about the use of Agent Orange and its effects on American troops, and inspired a spectacularly good and under-appreciated Nick Nolte movie.

It was in fact written about the incessant rain at Woodstock, where CCR’s sets were all played wet.

Sometimes, however, irony works out.

tc>

Librarian replacement requirements exercise

September 7, 2006

Sarah Stevens-Rayburn is retiring, and the Institute is trying to figure out what to do about it.

Claus Leitherer is chairing a committee to define the future direction of the library, so that we can recruit the appropriate person to take us in that direction. Claus writes:
Most of the library users are scientists, but some other institute members use this resource as well. In order to have a broad perspective I am contacting different divisions to explore whether it makes sense to add members of these divisions to our committee.

and
I am seeking user input from somebody who actually uses the library rather frequently. Issues that will come up are, e.g., the balance between hardcover books vs. electronic subscriptions, balance of scientific vs. non-scientific literature, cost and services, the future of a “traditional library”, etc.

So, if you are a library user, and are willing to participate in a committee to write the new job description, please let me know. I’m not sure how ESS will supply input, but I’d like to see who is interested in contributing.

It could be worse….

September 4, 2006

http://community.livejournal.com/_dilbert_strip/117418.html

What should we measure?

September 3, 2006

I realized recently that I hadn’t posted in a while.

I have a couple PA meetings to complete, and then we’ll start the goal-setting process for next year. Much of this is still very up in the air, because HR is still thinking about guidelines for us.

As I’ve told most people, I think we should be doing competency goals, rather than task goals. There are several reasons for this, not least of which is that we don’t really yet know what we’re doing next year. I’ve asked people to think about numerical measures, and how we might collect numerical measures.

I have a thought.

Somebody (somebody tell me who) pointed out that software is different because it’s a design activity. The construction cost for software is almost unmeasurable. So we should measure software quality by measuring the design outcomes that we want.

Leaving aside affective design elements, we want software that is more capable, more readily modified and maintained, and as low in complexity as possible. Since the desired level of complexity is arbitrary (it depends on the feature set), adding features will add complexity, but fixing bugs should reduce it.

So, we should measure things like the completeness (or maybe, initially, just the presence) of interface specifications (in the form, for example, of Javadoc) and internal commentary, and compliance with a standard coding style. We can measure complexity in several ways, but the simplest is the McCabe number, which we can compute directly from the code. For each revision, we would examine whether the code got more or less complex, and prefer the less complex result.
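To make the third measure concrete, here’s a minimal sketch of the kind of per-file check I have in mind. It approximates the McCabe number by counting decision points (branch and loop keywords, catch clauses, ternaries, short-circuit operators) in the source text. This is a crude shortcut, not how a real measurement tool works; a proper tool parses the code, and the class name here is just for illustration.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Rough cyclomatic-complexity estimate for one source file: start at 1
    // (the single straight-line path) and add 1 per decision point. Counting
    // keywords that appear in comments or string literals is a known flaw of
    // this shortcut; a real tool parses the code instead.
    public class McCabeEstimate {

        // Decision points: branch/loop keywords, catch clauses, ternaries,
        // and short-circuit boolean operators.
        private static final Pattern DECISION =
            Pattern.compile("\\b(if|for|while|case|catch)\\b|\\?|&&|\\|\\|");

        static int estimate(String source) {
            int complexity = 1;
            Matcher m = DECISION.matcher(source);
            while (m.find()) {
                complexity++;
            }
            return complexity;
        }

        public static void main(String[] args) throws IOException {
            String source = new String(Files.readAllBytes(Paths.get(args[0])));
            System.out.println(args[0] + ": " + estimate(source));
        }
    }

Run against a file before and after a change, this gives exactly the comparison I described: prefer the revision that leaves the number the same or lower.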

So, somebody tell me: what, if anything, is wrong with starting to collect those three numbers? It won’t measure productivity, but it is a simple set of quality measures. As I previously suggested, the corollary to the article I sent around is that organizations that don’t collect numbers probably suck. This would help us suck less.

tc>

