
PLIHK: Machine code

(Cross-posted from my personal blog.)

My other coding experience in my senior Physics lab was in one of the more fun modules: a digital voltmeter. We had a breadboard with a simple processor chip, memory chip, EPROM, etc., along with another chip that was either a digital-to-analog converter or a digital signal processor (I don't remember which), and a bunch of other components like resistors and capacitors. We had to wire it all together, then write a program that would connect the capacitor to the voltage we wanted to test, let the capacitor charge, and then let it discharge through a known resistance. While it was discharging, the program went into a loop that incremented a counter; when the voltage fell to a low threshold, the DSP or D2A chip (whichever it was) would interrupt the processor, and you could use the counter to calculate the voltage, knowing the capacitance and resistance.

You had to write this program in the chip's machine code, and you entered it by pressing a toggle until the hexadecimal value for the next byte of the program appeared on a two-character display, then pressing a button that stored it in the next memory location. You also had to look up exactly how long each instruction in the loop took to execute so you could convert the counter value into seconds.
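The counter-to-voltage step comes straight from RC discharge: the capacitor voltage decays as V(t) = V0 · e^(−t/RC), so once you know how long it took to fall to the threshold, you can solve for the unknown starting voltage. Here's a minimal sketch in Python; the component values, threshold, and loop timing are made-up placeholders, since the post doesn't give the actual figures:

```python
import math

# Hypothetical values -- the real lab's component ratings and
# instruction timings are not given in the post.
R = 100e3          # discharge resistance, ohms (assumed)
C = 1e-6           # capacitance, farads (assumed)
V_THRESH = 0.5     # comparator threshold, volts (assumed)
T_LOOP = 10e-6     # seconds per counting-loop iteration (assumed)

def voltage_from_count(count):
    """Recover the unknown starting voltage from the loop counter.

    The capacitor discharges as V(t) = V0 * exp(-t / (R*C)), and the
    interrupt fires when V(t) hits V_THRESH at t = count * T_LOOP,
    so V0 = V_THRESH * exp(t / (R*C)).
    """
    t = count * T_LOOP
    return V_THRESH * math.exp(t / (R * C))
```

With these placeholder values a count of 10,000 corresponds to exactly one RC time constant (0.1 s), giving V0 = 0.5·e ≈ 1.36 V.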

I’m not being sarcastic when I say it was fun. Maybe I was meant to go into programming.

PLIHK: APL

(Cross-posted from my personal blog.)

I had encounters with programming in another BYU class, my senior Physics lab. This class was billed as “things about labs and such every Physics graduate should know”, but in practice seemed more like “how many ways can we torture Physics students one more time before giving them a degree?” It consisted of a series of modules that were only related in having something to do with Physics and something to do with labs.

Anyway, one of the modules involved writing a series of programs in APL, on a teletypewriter connected to a server computer via modem. So yes, you’d go into the lab, dial the server’s number on a telephone, and then place the handset on the modem attached to the teletypewriter and hope the connection worked.

APL stands for “A Programming Language”, which to be fair is correct as far as it goes. Now, I haven’t had anything to do with it for nearly half a century, so this is just how I remember it. It was very mathematically oriented and very concise. It required a special keyboard because most of the operations were specified with mathematical symbols. Some of these symbols were for things like matrix addition and multiplication, so you could write a program that did a lot of calculating in only one or two lines of code. The downside of this concision was poor readability: more than once I wrote a program, got it working, and then looked at it the next day and couldn’t figure out what it was doing or how it did it. I’ve jokingly referred to APL as a “write-only” language. On the other hand, it did have the idea of a “workspace” where your programs, data, and results were stored.
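To give a rough flavor of that whole-array style without the special keyboard, here's the same idea expressed in Python with NumPy (not APL itself, just a modern approximation of packing a lot of calculation into one line per operation):

```python
import numpy as np

# APL-style whole-array operations: no explicit loops anywhere.
a = np.arange(12).reshape(3, 4)   # a 3x4 matrix of 0..11

row_means = a.mean(axis=1)        # one operation over an entire axis
gram = a @ a.T                    # matrix multiplication in one symbol
```

In APL each of these would be a single expression built from dedicated symbols; NumPy borrows much of that array-at-a-time philosophy, minus the notation.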

PLIHK: FORTRAN

I’ve decided to write a series of blog posts in the category “Programming Languages I Have Known” (PLIHK). This is a cross-post from my personal blog. I will understand if you want to bail out now.

My first exposure to a programming language was in elementary school, when my father brought home some COBOL manuals for me to read. But since I didn’t have access to a computer to actually write and test programs, I didn’t really learn anything from them, and I will make no claims to understanding COBOL.

Instead, my first actual experience writing programs was when I took a Numerical Methods course as an undergraduate. I found out the week before class started that I was expected to know FORTRAN, but my brother Erick had taken a class in it and still had his textbook, and the Numerical Methods textbook (which I think I still have somewhere) had many examples, so it really wasn’t hard to pick up enough to do the assignments.

FORTRAN was an amazing accomplishment for its time, and it proved that programming in a level above machine code or assembler was not only possible but desirable. So I’m not going to bad-mouth FORTRAN at all. That being said, an awful lot has been learned about programming language design since then, and the only reasons I can see to use it now are “historical curiosity” or having a large body of existing code needing maintenance that might take more effort to rewrite than is justified.

In case you’re wondering, the main thing I remember from my Numerical Methods class is “floating point numbers are fiddly.”
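The classic demonstration of that fiddliness still works in any modern language. In Python:

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their
# sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)               # False
print(0.1 + 0.2)                      # 0.30000000000000004

# Comparing with a tolerance is the usual remedy.
print(math.isclose(0.1 + 0.2, 0.3))   # True
```

The same lesson applied in FORTRAN on 1970s hardware; only the details of the rounding have changed.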

Back from the cloud?

Basecamp-maker 37Signals says its “cloud exit” will save it $10M over 5 years

…when 37Signals decided to pull its seven cloud-based apps off Amazon Web Services in the fall of 2022, it didn’t do so quietly or without details. Back then, Hansson described his firm as paying “an at times almost absurd premium” for defense against “wild swings or towering peaks in usage.” In early 2023, Hansson wrote that 37Signals expected to save $7 million over five years by buying more than $600,000 worth of Dell server gear and hosting its own apps.

Late last week, Hansson had an update: it’s more like $10 million (and, he told the BBC, more like $800,000 in gear). By squeezing more hardware into existing racks and power allowances, estimating seven years’ life for that hardware, and eventually transferring its 10 petabytes of S3 storage into a dual-DC Pure Storage flash array, 37Signals expects to save money, run faster, and have more storage available.
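The arithmetic behind a "cloud exit" claim like this is simple: compare cumulative cloud spend against a one-time hardware purchase plus ongoing hosting costs. Here's a back-of-the-envelope sketch; only the $800,000 hardware figure and the five-year horizon come from the article, and the per-year costs are invented placeholders chosen to reproduce the reported $10M total:

```python
# All per-year figures are hypothetical; the article reports only
# the hardware spend (~$800k) and the five-year savings (~$10M).
YEARS = 5
hardware = 800_000            # reported one-time server purchase
colo_per_year = 1_000_000     # assumed colocation/operations cost
cloud_per_year = 3_160_000    # assumed prior AWS spend

own_total = hardware + colo_per_year * YEARS
cloud_total = cloud_per_year * YEARS
savings = cloud_total - own_total
```

Note the sensitivity: stretching the hardware's assumed life from five to seven years, as Hansson does, spreads that one-time cost thinner and pushes the estimate up.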

Learning from failure

Time to examine the anatomy of the British Library ransomware nightmare.

The Rhysida ransomware attack on the British Library last October didn’t have the visceral physical aspect that creates a folk memory, but it should for anyone who builds or runs enterprise IT. Five months on, not only are significant systems not restored, they’ve gone forever. Remedial work and rebuilding is going to drain cash reserves intended to last seven years. It was and is bad. What makes it even more exceptional is that we now know what happened and why.

The gory details are all in a substantial, detailed report released by the British Library itself. It’s a must-read if your life involves any risk of a 2am phone call demanding you drive to the datacenter, even more so if it’s the CEO pulling up the Teams meeting in ten minutes. Truth is, it’s worth much more than a read, once you realize what the report represents. To get there, let’s look at what the institution actually represents.

If you have any years on you in this game, you will have first-hand experience of some of the factors identified in the report as enabling the disaster. Legacy systems too old to be safe, too expensive in time and money to replace, while more pressing needs exist. People who are asked to do too much with too little. The deadly inertia of complexity. New projects that leave older systems to wither in the shade. Security that rigorously defends against the wrong thing. The report is, as befits the institution itself, a comprehensive catalogue of important stories.

I don’t pretend to have any skill at budgeting, but I fear that too often asking “how much will it cost to do this?” is not balanced with asking “how much might it cost if we don’t do this?”