Archive for May, 2010

RAS

May 28th, 2010  |  Published in Uncategorized

RAS is an acronym IBM likes to use a lot; it stands for “Reliability, Availability, Serviceability.” In general, it’s about how much you can count on a system to be up and running when you need it.

While we’d always prefer systems that were perfectly reliable and always available, getting there costs a lot of money. Part of designing a system involves trading off RAS characteristics against cost: if it’s OK for a service to be down for hours at a time, why spend the extra money for highly reliable hardware and software?
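To make that tradeoff concrete, here is a quick back-of-the-envelope sketch (the availability targets are illustrative, not our actual service levels) of how much downtime per year each “number of nines” allows:

```python
# Back-of-the-envelope arithmetic: how much downtime per year a given
# availability target allows. The targets below are illustrative only.
HOURS_PER_YEAR = 365.25 * 24

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = (1 - availability) * HOURS_PER_YEAR * 60
    print(f"{availability:.3%} available -> "
          f"about {downtime_minutes:,.1f} minutes of downtime per year")
```

Each additional nine cuts the allowed downtime by a factor of ten; roughly speaking, those last few nines are what the expensive hardware and software are buying.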

IBM’s z Series hardware and the z/OS operating system are designed for very high RAS, which is one of the reasons for their high prices. I think that we have many services on our mainframe that need this level of reliability, but there are lots of other services that are there because they have integration points with the mission-critical services. It would be really good if the business leaders of the University would try to categorize the various services provided by the mainframe according to how critical they are. Then, if we could solve some of the integration problems I mentioned in my last post, we could start running the less critical applications on less costly platforms.

This would also help us more immediately during registration and other times of peak capacity. Since we don’t have a big enough mainframe to meet the demand at these times, our only way of getting through is to stop some of the services. (By the way, why does last summer’s “mainframe efficiency initiative” keep getting touted as a success? We only made it through August registration because Jon turned off the Trim monitor on Adabas, which means we now have no way to diagnose database performance problems, and in January we did have to turn off services.) As systems administrators, we can’t really evaluate the relative priorities of different applications, so during crunch times we don’t know what to stop and what to try to keep running. In January we picked services that were timing out anyway (it seems safe to turn something off if it’s not working), but if that isn’t enough, we really shouldn’t be the ones trying to decide.

(As long as I’m talking about RAS, I should mention that one of the problems we had with the migration assessment plan is that the hardware recommended is from a lower reliability class than the current mainframe. Multiple hardware vendors had provided specifications for candidate systems, but the one that made it into the report was the one that didn’t meet what we felt were minimum reliability criteria. Also, the other high-reliability systems didn’t cost that much less than a z Series machine.)

Integration

May 26th, 2010  |  Published in Uncategorized

For administrative computing at the University, integration is one of our greatest strengths, but it also may be our greatest weakness.

Integration is a strength, because when different applications need to work together they do so easily.

Integration is a weakness, because when an application needs to change, those changes must be coordinated with the other applications it works with. This becomes especially apparent when we consider possible projects like the proposed mainframe migration.

When I first started working here, integration usually happened at the Adabas file level. If application A needed access to application B’s data, application A was given the appropriate database permissions. But then if application B wanted to change its file layout, the change had to be coordinated with application A. This arrangement also has clear security implications.
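As a minimal, hypothetical illustration of that coupling (the record layout and field names below are invented, not our actual Adabas files): when application A reads application B’s data directly, any change B makes to its layout ripples straight into A.

```python
# Hypothetical sketch of file-level coupling; the layout here is invented.

# Application B's record layout, as application A currently understands it.
record = {"student_id": "12345", "name": "J. Doe", "status": "ENROLLED"}

# Application A reaches directly into B's data.
def is_enrolled(rec):
    return rec["status"] == "ENROLLED"

# If B later splits "status" into separate enrollment and hold fields,
# this function fails (KeyError) even though A's own requirements never
# changed -- and every other consumer of B's layout breaks the same way.
print(is_enrolled(record))
```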

So we invented secured modules. This has helped with the security issues, but I don’t think it has decoupled applications as much as we might want.

Is there a next step, something to replace secured modules to decouple our applications more? Some way for applications to communicate more flexibly, so different areas can move in different directions more easily? Doing more with Broker might be one way, but at a significant cost in performance. Where one application is moving off the mainframe, Adabas Replicator might be a useful tool. Any other ideas?

More on “buy vs. build”

May 18th, 2010  |  Published in Uncategorized

In an earlier post I mentioned Nicholas Carr’s book Does IT Matter? and the argument that information technology has become a commodity, and therefore an organization like the University should buy off-the-shelf systems to meet IT needs. I think anyone who wants to engage in the “buy vs. build” argument should read this book in order to understand the reasoning, even if in the end you don’t agree with it.

In this book, Carr reviews how earlier technology advances, like the development of railroads and the harnessing of electricity, played out in business. At first, businesses that embraced the new technology had a strategic advantage over those that didn’t. After a while, though, all businesses were using it (or had gone out of business), so there was no strategic advantage to the technology; it had simply become part of the cost of doing business. Instead of devising a “railroad strategy” or an “electricity strategy,” businesses located near a rail line or hooked up to an electric utility and purchased standard equipment to use the technology. Carr argues that information technology has reached this same stage.

Carr applies his argument to both hardware and software. Now, I’m mostly a software guy, so my discussion will focus on that side. Software is unusual from an economic point of view: it costs a lot to create a program, but once it has been written it costs almost nothing to copy and distribute. This drives software toward commodity pricing: the more copies of the program you sell, the smaller the share of the fixed development cost each customer has to pay.
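As a toy illustration of that pricing dynamic (the dollar figures are invented), the development cost per customer shrinks quickly as the customer count grows, until the price approaches the near-zero cost of making one more copy:

```python
# Toy numbers, purely illustrative of how fixed costs get spread out.
FIXED_DEVELOPMENT_COST = 10_000_000  # one-time cost to write the software
MARGINAL_COST_PER_COPY = 1           # cost to copy and distribute one more unit

for customers in (10, 1_000, 100_000):
    cost_per_customer = FIXED_DEVELOPMENT_COST / customers + MARGINAL_COST_PER_COPY
    print(f"{customers:>7,} customers -> about ${cost_per_customer:,.0f} each")
```

At scale the development cost all but disappears from the price, which is the commodity dynamic Carr describes.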

On the other hand, a big part of commoditization is standardization. I can treat electricity as a commodity because every device plugs into a few standard outlets: rather than having to buy the same kind of, say, toaster as my last one because it draws power in some proprietary way, I can buy any brand. If I have a Ford car, I can replace it with a Chevy without changing the route I drive to work. With software we do not yet have nearly this degree of standardization. As Bryan Cantrill said in The Economics of Software:

The problem is that for all of the rhetoric about software becoming a “commodity”, most software is still very much not a commodity: one software product is rarely completely interchangeable with another. The lack of interchangeability isn’t as much of an issue for a project that is still being specified (one can design around the specific intricacies of a specific piece of software), but it’s very much an issue after a project has deployed: deployed systems are rife with implicit dependencies among the different software components. These dependencies — and thus the cost to replace a given software component — tend to increase over time. That is, your demand becomes more and more price inelastic as time goes on, until you reach a point of complete price inelasticity. Perhaps this is the point when you have so much layered on top of the decision, that a change is economically impossible. Or perhaps it’s the point when the technical talent that would retool your infrastructure around a different product has gone on to do something else — or perhaps they’re no longer with the company. Whatever the reason, it’s the point after which the software has become so baked into your infrastructure, the decision cannot be revisited.

This is another article you should definitely read if you are interested in this issue. If you do, you’ll see that his analysis has influenced mine. Cantrill’s conclusion is that the peculiar economics of software mean that in the long run open source provides the greatest benefits for both software producers and consumers.

So how does this apply to the University? We are definitely in a “vendor lock-in” situation, and the vendors that have us locked in are not just IBM (as some seem to be framing the issue) but also Software AG and, well, ourselves, with the software we’ve developed in-house. Any change is going to be costly, but also any change will most likely result in being locked in to a new set of vendors. Perhaps Cantrill is right, and we should be looking more closely at open source software if we want to have more flexibility in the future.

W3C audio

May 15th, 2010  |  Published in Uncategorized

W3C Launches Audio Incubator Group

I’m guessing this is motivated by the controversy over HTML5 codecs.

The right question

May 14th, 2010  |  Published in Uncategorized

I attended the AITL meeting yesterday. I found it encouraging and frustrating.

What was encouraging? The AITL is made up of intelligent people who are committed to making the University run more effectively and efficiently. They have a clear grasp of the issues we’re facing, and want to do what’s best for the University.

What frustrated me can be illustrated by a sentence from Brad’s “ITS Weekly Update”:

The BSC will use AITL input as it decides whether we “stay status quo with the mainframe” or “move to Open Systems.”

This is wrong on so many levels. We can change the status quo significantly while staying on the mainframe, and the move to Open Systems envisioned by the Mainframe Migration Assessment would preserve the status quo in every dimension except the hardware and operating system. “Status quo on the mainframe” and “move current Adabas/Natural applications to Open Systems” are not the only alternatives!

Dennis said at the beginning of the meeting that the purpose of the assessment was to see if the University could save money by moving our existing application portfolio off the mainframe. I think the report answered that question, and the answer is clearly “no.” But I thought when it was first proposed, and I still think now, that “Mainframe vs. Open Systems” is the wrong question. (Personally, I don’t see where there’s that big a difference.) The right question is something like “what should be the University’s long term strategy for administrative information technology?” If we can answer that correctly we will actually have a basis for tactical decisions like whether or how quickly we should migrate off the mainframe. Focusing on the hardware and operating system just distracts us from the real issues.

Context

May 12th, 2010  |  Published in Uncategorized

Sometimes while in the middle of big changes or decisions it’s easy to forget the big picture, so today I want to step back for a minute and look at the context of our technology decisions.

The mission of The University of Texas at Austin is to achieve excellence in the interrelated areas of undergraduate education, graduate education, research and public service.

This is the official mission statement of the University. Notice that it doesn’t say anything at all about information technologies. If the University could run without IT, it should, because that’s not what the University is about.

However, the University can’t run efficiently without IT. Students must be registered and their grades recorded, faculty and staff must be paid, supplies must be purchased—performing these and many other necessary activities without IT would impose a prohibitively high drain on the University’s resources.

If we truly want to have a university “of the first class,” we need a quality administrative IT infrastructure. The University will not be able to attract and retain top faculty and students as effectively if they are forced to deal with slow, buggy, or difficult to use administrative applications. Costs to the University will rise if staff are forced to spend excessive time or effort working with or even circumventing poorly designed applications.

The problem we’re trying to solve is to provide first-class IT services without consuming so much of the University’s resources that it detracts from fulfilling the University’s core mission.

Evaluating tools

May 11th, 2010  |  Published in Uncategorized

One of the things that becomes clear if you look at the Mainframe Migration Assessment report is that Software AG license fees form a major part of the University’s administrative computing costs. We need to either make sure we’re getting our money’s worth, or find less expensive/more effective tools.

I’ve felt for some time that Natural is not adequate for what we need to do. Twenty years ago it provided more than enough power, but no longer.

Natural and Problem Domains

Also, several things Software AG has done have convinced me that they no longer see Adabas and Natural as a source of future growth. We really need to explore new tools and expand our tool set. The PyPE project is a step in the right direction, but if we’re going to continue to write our own applications we need to do more.

(This post is long enough for now, but once we do develop more tools, the question of migrating our applications to them arises. All I’ll say for now is that I don’t see the Mainframe Migration Assessment telling us much of value about that.)

A third way

May 10th, 2010  |  Published in Uncategorized

I’ve talked a little in the last few posts about switching to a commercial ERP or continuing to build our own. There is a third alternative that sits somewhere between these two: switching to and participating in open source projects like Kuali or Sakai. This could relieve us of some of the costs of software development and help ensure we don’t deviate from industry-standard practices, while avoiding some of the problems of vendor lock-in and allowing us to maintain a committed developer community.

Again, the costs of migrating to this would be high (although I expect they would be smaller than for some of the other alternatives). If I were the one making the decisions about the University’s strategic direction, I would give it a lot of consideration.

~~~~~~~~~

Update: I took a walk at lunch and decided I should explain what I meant by “maintaining a committed developer community.” One of the dangers of switching to a commercial ERP is that it is likely to be a one-way transition. (I’m indebted to Adam Connor for this insight.) People who enjoy application development and are good at it are almost certain to leave during a migration to a commercial ERP, and if you later decide it’s not working out you will most likely have a staff without the skills needed for in-house development. Since part of the “price” of participation in an open source project is contributing code back, this shouldn’t be as big an issue.

Risk taking

May 7th, 2010  |  Published in Uncategorized

Even if we do continue to develop our own applications in-house, that doesn’t mean we can take a “business as usual” attitude. To keep up with our competition and continue serving the University, we have to constantly reevaluate our tools and practices.

One of the ways that our development culture seems to have changed over the past decade is that we now seem much more risk-averse than we were. I don’t see this change as positive: the only way to guarantee you won’t fail is to never try to do anything, but that’s just a failure of another sort. You can’t push ahead without going where you haven’t gone before. If we really want to continue to build our own administrative systems, we need to recover a tolerance for mistakes.

Outsourcing the vision

May 6th, 2010  |  Published in Uncategorized

(This is a continuation of the previous post. I will probably have more to say after this one, too.)

One way to avoid developing a vision for administrative IT would be, in effect, to outsource it by stopping in-house development and migrating to a commercial ERP package like PeopleSoft. There is a strong argument for doing this: Nicholas Carr wrote a whole book (Does IT Matter?) arguing that IT has become a commodity and that there’s no strategic advantage in building your own applications. While I think he overestimates the maturity and stability of the IT industry, he may be right.

On the other hand, we do have some evidence suggesting that building applications ourselves has benefits. In his post for staff appreciation week, Pres. Powers said:

The percentage of UT Austin’s budget spent on administrative costs (5.5%) is about half of the average for Texas public universities.

I’m sure there are many factors that contribute to these lower costs. UT Austin can undoubtedly take advantage of economies of scale that smaller universities can’t. Austin probably attracts more intelligent and creative people than most places, so UT has a higher-caliber talent pool to recruit staff from. But I also suspect that one of the things driving lower administrative costs is the caliber of the applications we’ve developed here; if we used the same software as everyone else, it seems likely that our costs would be more in line with everyone else’s.

Also, we should realize that the cost of moving to a commercial ERP would most likely be similar in scale to the costs outlined in the mainframe migration assessment, as much of the effort would be the same or similar: testing, integrating the migrated pieces with the parts not yet migrated, purchasing new hardware, retraining staff, and so on.

I’m not convinced that migrating to a commercial ERP would benefit the University, but if you believe the argument that IT is a commodity, it would make more sense than the proposed mainframe migration.
