Monthly Archives: August 2010

Risk

Another quote from Peopleware I’d like to share (pp. 189-190):

All the projects that carry real benefit carry real risks along with them. It is the project that has some novelty, some innovation or invention, that might grab the customer’s imagination and wallet. It’s even possible your company’s most famous disaster project—the one that came in a year over schedule, at 3.5 times the cost, had loads of problems getting through system test, and still needs engineers standing around with code defibrillators just to keep it running—is still the best project your organization has done in years.

I worry about the level of risk aversion at the University. As I posted on my personal blog last week (also prompted by a Peopleware quote), if you’re not free to fail, you’re not free.

Automated testing and risk

Eric S. Raymond on risk and verification, or how automated tests (as in Test-Driven Development) are changing software development:

Thirteen years ago I wrote that in the presence of a culture of decentralized peer review enabled by cheap communications, heavyweight traditional planning and management methods for software development start to look like pointless overhead. That has become conventional wisdom; but I think, perhaps, I see the next phase change emerging now. In the presence of sufficiently good automated verification, the heavyweight vetting, filtering, and review apparatus of open-source projects as we have known them also starts to look like pointless overhead.

The whole article is long but worth reading. As a bonus, the context is the reconstruction of the version history of C-INTERCAL in order to start using git to manage the source code!
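To make that concrete, here’s a tiny, hypothetical illustration of the kind of automated verification he’s describing; the function and tests are my own invention, not anything from the article. The point is that once a project accumulates thousands of checks like this, run on every change, the machine is doing vetting a human reviewer used to do:

# A hypothetical example of the kind of automated check a TDD-style
# suite accumulates: the test encodes the intended behavior, and every
# change re-runs it automatically.
import unittest

def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Automated Testing and Risk"),
                         "automated-testing-and-risk")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Mainframe   load "), "mainframe-load")

if __name__ == "__main__":
    unittest.main()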

Mainframe load

Adam asked me to post something about mainframe load—“what’s the biggest source of load, what’s next, where’s broker, ADABAS, etc.” Let me start by showing a ten second snapshot I just took (2:10 pm) of CPU usage by job:

CPU BUSY   94%    10 SEC SAMPLE      4 CPUS ACTIVE

JES  UTCOM7      26% OF CPU BUSY    E8  <-- PRIORITY
JES  UTPRD2      11% OF CPU BUSY    EA  <-- PRIORITY
JES  UTPRD1       5% OF CPU BUSY    EA  <-- PRIORITY
STC  UTETB1       4% OF CPU BUSY    EE  <-- PRIORITY
JES  UTQUA1       4% OF CPU BUSY    DC  <-- PRIORITY
JES  UTCOM4       4% OF CPU BUSY    E8  <-- PRIORITY
JES  UTPRD5       4% OF CPU BUSY    EA  <-- PRIORITY
STC  UTNDVT       3% OF CPU BUSY    F4  <-- PRIORITY
JES  UTCOM2       3% OF CPU BUSY    E8  <-- PRIORITY
JES  UTQUA5       3% OF CPU BUSY    DC  <-- PRIORITY
JES  UTCOM8       2% OF CPU BUSY    E8  <-- PRIORITY
JES  SGNWFL9D     2% OF CPU BUSY    C0  <-- PRIORITY
JES  EWNWEOMO     2% OF CPU BUSY    C1  <-- PRIORITY
JES  UTCOM1       1% OF CPU BUSY    E8  <-- PRIORITY
JES  UTQUA4       1% OF CPU BUSY    DC  <-- PRIORITY
JES  UTPRD4       1% OF CPU BUSY    EA  <-- PRIORITY
JES  NRNW2041     1% OF CPU BUSY    C8  <-- PRIORITY
JES  EINWAEVE     1% OF CPU BUSY    C0  <-- PRIORITY

This is fairly typical of what we’ve been seeing this week. UTCOMx jobs are the COM-PLETEs; UTCOM7 is where most production Broker servers run, UTCOM2 is “Fiscal”, etc. They typically represent about a quarter to a third of the CPU being used during prime shift. UTPRDx jobs are the production Adabas databases, and they typically take up a fifth to a quarter of the CPU usage. UTQUAx are the quality assurance Adabas databases; their usage is highly variable. (Test Adabas databases are named UTTSTx; none of them was using enough CPU to show up when I took this snapshot.)

UTETBx jobs are Broker; UTETB1 is the production Broker nucleus. This is where the big difference has been during this semester’s registration: before I installed EntireX version 8 over the summer, Broker used about the same amount of CPU as COM-PLETE, but once I installed the new version, it dropped down to between a fourth and a sixth of that. If we were still at version 7 of Broker, it would have been taking up between 20 and 25% of the CPU, instead of around 5%.
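For the arithmetic behind that claim (my own back-of-the-envelope check, using the figures above):

# Rough check of the version 7 vs. version 8 Broker figures quoted above.
v7_low, v7_high = 0.20, 0.25                 # v7 Broker: roughly COM-PLETE's share
v8_low, v8_high = v7_low / 6, v7_high / 4    # "between a fourth and a sixth of that"
print(f"v7 Broker: {v7_low:.0%} to {v7_high:.0%} of CPU")   # 20% to 25%
print(f"v8 Broker: {v8_low:.1%} to {v8_high:.1%} of CPU")   # about 3% to 6%

That lower range brackets the roughly 5% we’re actually seeing.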

I’ll also draw your attention to the priorities. (Those are hexadecimal numbers, by the way.) Broker has the highest priority, with production Adabas next and COM-PLETE after that. Batch jobs are at the very bottom. The MVS dispatcher always selects the highest-priority job that’s ready to use the CPU. If Broker were still using 20% or so of the CPU, the QA databases and the batch jobs would just be out of luck—they’d still be in the system, but they’d almost never get dispatched.
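The dispatching rule is simple enough to sketch. Here’s a toy simulation (my own, and nothing like real MVS internals) of strict-priority dispatching, using a few job names from the snapshot plus a made-up batch job, with invented readiness numbers; notice how little CPU trickles down to the bottom of the list:

import random

# Toy model of strict-priority dispatching: on every tick, the dispatcher
# runs the highest-priority job that is ready. Priorities are the hex
# values from the snapshot above; the readiness probabilities are invented.
jobs = [
    # (name, priority, probability the job is ready on a given tick)
    ("UTETB1", 0xEE, 0.25),   # Broker, wanting roughly a quarter of the CPU
    ("UTPRD1", 0xEA, 0.60),   # production Adabas
    ("UTCOM7", 0xE8, 0.80),   # COM-PLETE
    ("UTQUA1", 0xDC, 0.90),   # QA Adabas
    ("BATCH01", 0xC0, 1.00),  # hypothetical batch job: always ready, lowest priority
]
jobs.sort(key=lambda job: job[1], reverse=True)

ticks = 10_000
dispatched = {name: 0 for name, _, _ in jobs}
for _ in range(ticks):
    for name, _, p_ready in jobs:
        if random.random() < p_ready:   # highest-priority ready job wins the tick
            dispatched[name] += 1
            break

for name, count in dispatched.items():
    print(f"{name}: {count / ticks:.1%} of CPU")

Raise UTETB1’s demand to what version 7 of Broker needed and the QA and batch lines drop to almost nothing, which is exactly the starvation described above.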

I’ll watch for any questions in the comments, or you can email me and I’ll post the answers.

Champions

This might be long. Let’s start with a quote from an article in the Economist linked by Tim Bray:

“I hate programmers,” replies this dyed-in-the-wool entrepreneur. “They only cause trouble.”

The article, by the way, is about the appalling state of IT at major banks, with some “buy vs. build” discussion. Anyway, since I’ve been saying for quite a while that my job is to cause trouble, here I go again.

We can identify three basic paths the University could follow in moving to “open systems”:

  1. Convert to commercial products like PeopleSoft or Banner and stop developing our own administrative applications.
  2. Convert to shared-source applications like Kuali and Sakai and collaborate on application development with those communities.
  3. Select new development tools (programming languages, databases, etc.) and continue to develop UT-specific applications.

There are variations within these plans—especially the last one—but I think any roadmap we could come up with fits into one of these three categories. So what criteria should we use to pick one?


Being the best

When the Lawrence Hall Library on the 13th floor of the tower was closed, I went and looked through the books and took several. One of the ones I picked out was Peopleware by Tom DeMarco and Timothy Lister; I had heard about the book but never actually read it. Friday afternoon I finished the task I was working on and didn’t really have time to start anything new before leaving to catch my bus, so I picked it up and started reading. It’s just as good as I had heard.

I wanted to highlight this (from page 111):

The best organizations are not of a kind; they are more notable for their dissimilarities than for their likenesses. But one thing that they all share is a preoccupation with being the best. It is a constant topic in the corridors, in working meetings, and in bull sessions. The converse of this effect is equally true: In organizations that are not “the best,” the topic is rarely or never discussed.

The best organizations are constantly striving to be the best. This is a common goal that provides common direction, joint satisfaction, and a strong binding effect. There is a mentality of permanence about such places, the sense that you’d be dumb to look for a job elsewhere—people would look at you as though you were daft.

This reminds me of how things were when I first started working here. It’s been a while since I’ve felt that way, though.

Industrial strength

I’ve been thinking more about Nicholas Carr’s argument that IT has become a commodity. At first I thought his thesis was pretty compelling, except that I didn’t think IT had reached sufficient maturity to be a true commodity. A while ago, though, I had a thought that makes me wonder.

The two examples he uses of technologies that initially provided strategic advantage but matured into commodities are railroads and electricity, both signature technologies of the industrial revolution. One of the hallmarks of industrialization is mass production, in which large numbers of identical products are manufactured and distributed at significantly lower cost than customized items.

However, we’ve now moved into the next, post-industrial stage of society: the information age. (Of course this shift has been driven to a large degree by computers.) A current trend is an increasing ability to provide somewhat customized products at mass-production prices, through “build-to-order” options and the like. Will this apply to software too? When I’ve talked to people at shops that use commercial ERPs, they’ve often said they spend a lot of time and resources customizing the ERP to meet their needs, which makes me wonder whether the analogy between IT and industrial-age technologies like railroads and electricity is fundamentally incorrect.

Links

Some interesting links that have come my way:

Who pays the hidden cost of University research? This is specifically focused on the University of California, but I wonder how it plays out here.

Texas students could be required to seek off-campus learning options. Closer to home. I thought the best criticism in the article’s discussion was “why require this when it’s going to happen organically?” I think higher education in general has been too slow to exploit the options provided by modern communication technologies.

What Google Could Learn From Pixar, via Daring Fireball, which highlights this:

Despite an unbroken string of 11 blockbuster films, Catmull regularly says, “Success hides problems.”

Ain’t that the truth. I think the successes we had in the 1990s hid many of our problems well into this past decade.

SLAs

The real meaning of the service level agreement

In today’s context, however, the SLA is mostly used as a get-out-of-jail-free card: as a limitation on service expectations by DP people; as a critical element in getting an easy ride from the auditors by both DP executives and the senior people they report to; and as a barrier keeping DP people in a distant and clearly subordinate role by executive management.

Basically the problem is that an SLA commits DP to meeting specified expectations – and thus both relieves DP of any need to exceed those expectations and acts as a barrier separating those on each side of the agreement. As a result, its existence in an organization testifies to that organization’s ability to resist change by passing costs on to its customers – meaning that its existence is characteristic of government and monopoly, or near-monopoly, organizations, including industry-level IT monopolies in which the employers compete but all use the same, essentially interchangeable, IT people, tools, and methods.

Those of us who went through the old DP training here at UT have always resisted SLAs precisely because they feel like an unnatural barrier between us and the people we’re supposed to work with. We prefer to view “users” as colleagues rather than customers.

The more dangerous alternative is to go after real change now – but that’s extremely hard to do largely because you’ve got to pull off two miracles at once: change senior management perceptions, and change the way IT is run.

At the senior management level you’ll be dealing with people who mostly don’t want to hear it: and getting them to first internalize the reality that IT provides the organization’s “nervous system” and isn’t an arms length expense center at all, and then accept that DP’s relatively poor performance, organizational isolation, and freedom to escalate project costs have historically been due to the SLA centric management processes in place, is usually more of a challenge than most of us can handle.

We have a rather different history here, but I do get the feeling that a lot of senior managers don’t want to hear about IT issues.