I’ve told this story many times, but I don’t believe I’ve ever written it down. This is the story of how I came to write webAgent.
I’ll start in 1991, when Software AG ported Adabas and Natural to Unix. HP loaned us a small server, and we tried porting a small application (the online policy manual, as I recall) and it seemed to work fine, so we started learning more about Unix and working out what it would take to move all our applications. In the end, we decided the Unix data processing environment was not mature enough for us to migrate yet, but we did get a lot of experience.
There was still a lot of pressure to provide applications that used more than a green screen for user interaction. Next, Software AG came out with a product called Entire APPC that made it easy for Natural programs to serve as LU6.2 nodes. (IBM’s mainframe networking architecture, SNA, distinguishes between hardware, or physical units (PUs), and the code that runs on them, the logical units (LUs). Different types of LU have different capabilities; LU6.2 nodes can communicate as peers with other LU6.2 nodes using a protocol called APPC, for Advanced Program-to-Program Communication.) So if client workstations had applications with LU6.2 capabilities, Natural could communicate with them. The problem was that LU6.2 is really complicated and difficult to code against, especially on non-IBM platforms.
So a little later Software AG came out with another product, Entire Broker, which had a Natural interface similar to Entire APPC but didn’t use LU6.2. Instead, it used its own proprietary protocol, and Software AG provided Broker “stubs” that applications could link to in order to issue Broker calls. These stubs were available on a variety of platforms (there was even a HyperCard extension for Macs). So we started looking for ways to develop applications on desktop platforms that could use these stubs to communicate with Natural programs on our mainframe. This was complicated by the diversity of platforms we would need to support, and distributing these applications and keeping them up to date looked to be a logistical nightmare, but we kept trying different things.
In February of 1995 I attended a CAUCUS conference (CAUCUS was a Software AG user group exclusively for colleges and universities) and gave a presentation called “The Hunting of the Client.” The thesis of this presentation was that Broker seemed to solve the problem of client-server communication, but we still didn’t have a good solution for client application development and distribution. When I got home from that conference I found an email from Randy Ebeling, the director of Data Processing (I think we were still called Data Processing then, but we might have been renamed Administrative Computing Services already) saying he had a request from the library that we investigate providing an interface to the card catalog for “Mosaic”—the NCSA Mosaic browser that first popularized the World Wide Web. I had never heard of the World Wide Web, but I got back to the people at the library and they sent me the FTP site for downloading NCSA Mosaic and the NCSA httpd web server. A little research turned up a page describing the CGI protocol, and I realized that the web would solve our distribution issues and provide a plausible application development environment.
Our first plan was to use Natural on Unix as our CGI scripting environment, but that turned out not to work very well. Natural’s string-parsing facilities didn’t cope well with URL query strings, and its standard output always included terminal control characters. We got around this by writing a Perl script (Steve Buffum did the coding, as I recall) that would parse the CGI parameters and invoke Natural. Natural would write the HTML to disk (using WRITE WORK to avoid the control characters), and then the Perl script would read it and write it back through the CGI interface. This worked, but it did not seem like a practical long-term solution. (I should note, however, that Ducks Unlimited has used essentially this architecture, with REXX instead of Perl, for over a decade now.)
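The wrapper itself was simple in outline. Here is a rough sketch of the sort of thing it did; this isn’t the original script, and the Natural invocation, the environment-variable names, and the work-file path are all stand-ins:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read the query string (GET) or the request body (POST).
    my $raw;
    if (($ENV{REQUEST_METHOD} || '') eq 'POST') {
        read(STDIN, $raw, $ENV{CONTENT_LENGTH});
    } else {
        $raw = $ENV{QUERY_STRING} || '';
    }

    # Split the name=value pairs and undo the URL encoding -- exactly
    # the parsing that Natural couldn't do comfortably.
    my %form;
    foreach my $pair (split /&/, $raw) {
        my ($name, $value) = split /=/, $pair, 2;
        for ($name, $value) {
            next unless defined;
            tr/+/ /;
            s/%([0-9A-Fa-f]{2})/chr(hex($1))/eg;
        }
        $form{$name} = $value if defined $name;
    }

    # Hand the fields to Natural and run it in batch; the command and
    # the environment-variable convention here are placeholders.
    my $workfile = "/tmp/cgiout.$$";
    $ENV{WORKFILE} = $workfile;                  # where Natural should WRITE WORK
    $ENV{"FORM_$_"} = $form{$_} for keys %form;  # hypothetical field hand-off
    system('natural batchmode') == 0 or die "Natural run failed: $?\n";

    # Natural wrote clean HTML via WRITE WORK (no terminal control
    # characters); copy it back to the browser through CGI.
    print "Content-type: text/html\r\n\r\n";
    open(my $html, '<', $workfile) or die "cannot open $workfile: $!\n";
    print while <$html>;
    close $html;
    unlink $workfile;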
So we started casting around for alternatives. At this time most web development was being done in C or Perl, but we really didn’t want to have to teach our developers one of those languages. We wrote a white paper for Software AG suggesting changes to Natural that would make it more suitable for web development, but we knew that even if they acted upon our suggestions it would take months if not years, and we wanted to get started right away. In one of the meetings where we were discussing this I said, “You know, I could probably write a stupid little scripting language that could process form data, call Broker, and generate HTML, and make it enough like Natural that our developers could learn it quickly.” Randy said, “Do it!” and webAgent was born.
(I originally called it “webBroker” since my focus was on making it work with Broker. When it was done Software AG asked us to change the name to webAgent, so we did.)
I started writing webAgent 1 in June 1995 and by September 1995 it had progressed to the point where we could use it. By this time we already had a couple of applications in semi-production using the Perl/Natural scheme, and they were converted to webAgent.
webAgent 1 was written in C using lex and yacc (technically the GNU versions, flex and bison). It was a purely interpreted language: processing a script involved reading the source file directly. It allowed mixing code and HTML in a way similar to Natural ISPF macros: lines beginning with an at sign (‘@’) were interpreted as code, and lines that didn’t begin with one were treated as HTML output; by including a variable name prefixed by an at sign within an HTML line you could output the variable’s value.
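To give the flavor, a fragment might have looked something like this (an illustrative sketch based on the description above, not actual webAgent 1 code; the ‘@’ convention is as described, but the keywords and variable names are invented, modeled on Natural):

    <h1>Catalog search</h1>
    @ IF COUNT = 0
    <p>No titles matched your search.</p>
    @ ELSE
    <p>Found @COUNT titles matching “@TERM”.</p>
    @ END-IF

The HTML lines pass straight through to the output (with @COUNT and @TERM replaced by their values), while the lines beginning with ‘@’ are executed as code.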