Is Software Abstraction Killing Civilization?
Or: Jonathan Blow is wrong but also right and we still
need to prevent the collapse of civilization
Early 2021
I recently stumbled upon
a talk by game
development guru Jonathan Blow, about how software
abstraction will lead to the end of civilization. Quick summary:
- Information passed on between generations is diluted.
- Practice is better than theory for keeping skills alive.
- Software runs the world.
- Abstraction fosters ignorance about low level programming.
- If we forget low level stuff, civilization will fall apart since we won’t be able to keep vital software running.
It was one of those
talks that might at a first glance seem perfectly reasonable, not least
because similar ideas are regularly discussed in programmer circles by a
great number of
people. Then you start thinking about what’s said, and all the errors and
misconceptions presented make you feel troubled, because it’s both
tempting and easy to perpetuate the punchlines without
considering their implications. I agree with Blow that it’s
important to pass knowledge between generations. Therefore, the veracity
of that information is extremely important. I propose that
the information given in the very talk about the importance of
such information is in fact wrong on many counts.
That’s why I, in this rather long text, have tried to examine Blow’s
claims in detail. (TL;DR: Someone is wrong on the Internet – man with
ample spare time on his hands to the rescue!)
Some examples of collapsed civilizations and artefacts lost in time
I’m not a historian and will not comment on this first part of the talk. It doesn’t matter much, though; my gripe is mainly with the second part.
Five nines
Blow says we used to use the “five nines” (99.999% uptime) metric when selling computer systems. Since his laptop has the habit of rebooting when in sleep mode, it can never be a candidate for 99.999% uptime. This part is true: five nines means a total yearly downtime of just above five minutes. This is presented as proof that we have lost a rhetoric of quality. This part is false.
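For reference, the arithmetic behind the five-minute figure is simple enough to check. A minimal sketch in C, assuming a 365.25-day year:

    /* Allowed yearly downtime for one through six "nines" of availability. */
    #include <stdio.h>

    int main(void) {
        const double minutes_per_year = 365.25 * 24 * 60; /* 525960 minutes */
        double unavailability = 1.0;
        for (int nines = 1; nines <= 6; nines++) {
            unavailability /= 10.0; /* each extra nine cuts downtime tenfold */
            printf("%d nine(s): %.2f minutes of downtime per year\n",
                   nines, minutes_per_year * unavailability);
        }
        return 0;
    }

Five nines works out to roughly 5.26 minutes per year.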
In reality, five nines usually applies to things like emergency response switchboards (911), hospital systems, financial transaction processing, etc. It’s also a metric usually accompanied by long contracts detailing various downtime scenarios that couldn’t possibly count as downtime. It has never, ever been used to sell consumer laptops or the word processors we run on them.
It’s also incorrect that five nines isn’t used anymore: several companies, such as IBM and Amazon, sell such systems and services. Incidentally, IBM actually goes further and claims their Power9 platform can provide even higher availability. Provided, I’m sure, the network used to reach it is functioning and the client computers are working and, and, and… you get the point.
So, no, we haven’t lost this rhetoric of quality. We’ve simply never used it the way Blow claims.
Oh, and in case anyone’s wondering, my laptops never reboot unless I tell them to and one of them currently has an uptime of 55 days.
“The industry hasn’t produced robust software for decades”
What is robust? Is it my iPhone, going for weeks and months without a
reboot? Is it
the famous uptime of Novell’s file and printer servers,
with documented instances running for 16 years? Is it the multi-year
uptimes of a plethora of Unix, Windows and VMS machines? Is it the 365+
days (and counting) uptime of some random Linux web server I’ve got
access to? Is it turnkey systems like IBM i, offering continuous
availability?
Apparently none of the above, according to Blow.
“Tech companies are no longer about pushing tech forward”
Well, yes and no. I agree that the lifestyle app startups of Silicon Valley (and elsewhere) are hardly about pushing tech forward – but people more interested in making money than in amazing technology have always been around. Was Infocom’s umpteenth text adventure about pushing tech forward? Were any of the ten bazillion (a rough estimate) jumpy-shooty platform games produced? Was MS-DOS, single-tasking its way through 15 years of building Microsoft’s empire right up until its last breath in version 6.22?
There’s a lot of exciting stuff happening on the “hard tech” front. Blow does mention machine learning earlier in his talk, but “boring” things like file systems, web servers, databases and programming languages are also continually developed and improved upon. A lot of effort is also going into improving the various layers of abstraction, virtualization and containerization Blow is so averse to – but him disliking them doesn’t mean they’re not about pushing tech forward.
“Abstraction leads to loss of capability.”
Blow lists a number of programming languages, some of which have fallen out of fashion because of abstraction. His take basically boils down to learning assembly coding, memory management and pointers.
I agree that a large number of programmers today (myself included) are very happy with not having to deal with things like memory allocation and pointers. And yes, there are certainly horrifying examples of when abstraction gets the better of us. Plenty of web sites use a lot of completely unnecessary JavaScript framework voodoo to render a simple blog and plenty of supposed “desktop apps” are really running (very slowly) inside a bundled browser.
Despite that, I’m willing to bet a few bucks there are more people
around today (including youngsters) who can program in C than ever
before, and that more C and assembly code is being written than ever
before. Linux and NetBSD, for example, are continuously being ported to
pretty much everything that even vaguely resembles a CPU. Rust, a new
language with a heavy focus on robustness, does feature pointers and
lets the programmer manage memory. Harvard’s introduction to Computer
Science, CS50, is publicly available on YouTube and each year it
features a two-hour lecture on stuff like memory layout, pointers,
malloc() and free().
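For anyone who has never touched them, this is roughly the level of detail such a lecture deals with. A minimal sketch in C:

    /* Manual memory management in C: allocate a buffer on the heap,
       use it through a pointer, then hand it back ourselves. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf = malloc(32);      /* ask the allocator for 32 bytes */
        if (buf == NULL)             /* allocation can fail */
            return 1;
        strcpy(buf, "hello from the heap");
        printf("%s (stored at %p)\n", buf, (void *)buf);
        free(buf);                   /* forget this and you have a leak */
        buf = NULL;                  /* forget this and you risk a dangling pointer */
        return 0;
    }

Forgetting the free(), or using buf after it has been freed, is exactly the class of bug most higher level languages abstract away.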
I agree that the things mentioned here are good to know about and I think anyone seriously into programming should at least try them out some time, if for no other reason than to understand why a lot of us prefer to avoid them if given the option to do so.
Oh, and garbage collection and functional programming aren’t new
abstractions. Lisp did both in the late 1950s and it’s been used in a
lot of “hardcore” settings, including
NASA’s Jet Propulsion Lab.
And, unsurprisingly, a Lisp programmer will of course claim that Lisp
makes you much more productive than something with malloc() ever will.
Another early high level language, COBOL, is completely ignored by
Blow during his whole talk. It’s fairly fundamental for
our current level of civilization, considering it forms the backbone
of our banking and financial infrastructure – but I suppose a high
level language doing really critical work (as opposed to
games written in C) doesn’t fit Blow’s narrative.
“The productivity of Facebook employees is approaching zero”
Here we’re shown some graphs of the rising number of Facebook and Twitter employees and Blow states that since Facebook doesn’t really get that many new features each year, all those programmers mostly sit around and do nothing.
First of all, I’m pretty sure Facebook employs a wide range of staff who are not feature developers: lawyers, accountants, graphical
designers, sysadmins, researchers, HR, middle management, etc. When not working on Facebook the site, employees of Facebook the company
are also working on Instagram, WhatsApp and Oculus VR to name some of their other pursuits. I’d also argue that (measurable) individual
output in general drops as a result of a company growing. You need to reach a certain critical mass to have a whole team working hard on
a feature only to, by some managerial decision, drop it right before it’s finished. Things like that happen all the time in large
software companies and won’t show up in Blow’s rather flimsy metric.
More importantly, Blow constructs this argument around the assumption that Facebook’s product is Facebook, in the sense of the social platform. This is of course wrong. Facebook doesn’t have to add or change very many features on their social platform every year, they just have to keep things smooth enough for those who actually use those functions to stay content.
This is because Facebook’s real product is an ad delivery platform.
As such, it collects massive amounts of personal and private data and churns it into targeted ads, all the while sucking people in with cleverly designed attention-grabbing dark patterns. I’m sure plenty of work is done on this by a whole lot of programmers, but it’ll never appear as “features” to the user, only as corporate revenue. In that respect, Facebook programmers seem to be highly productive.
Ken Thompson’s “Three Week Unix”
First off, I’m not out to belittle Ken Thompson’s efforts here. Writing
an assembler, editor and basic kernel in three weeks is highly
respectable work by any standard. It’s also a great piece of computer
lore and fits Blow’s narrative perfectly – especially with Kernighan’s
little quip about productivity in the end. Of course, we don’t know how
“robust” Thompson’s software was at this stage, or how user friendly, or
what kind of features it had (Note that what’s discussed here isn’t First
Edition Unix or even PDP-7 Unix, for which there’s source code
available: it’s the first version of what was used to write the first version
of Unix). I’m going to boldly claim it would’ve been a hard sell
today, even if it did run on modern hardware. (For those not familiar with
the kind of user experience one could expect from this proto-Unix, open
up a Unix-type command line in Linux/MacOS/*BSD/WSL, type
“ed”
at the prompt
and see how far you get with your text editing.)
I’m also pretty sure Thompson dealt with little or nothing of the following during those three weeks: documentation, code reviews, daily standups, backlog grooming, user stories, unit tests, customer demands, A/B testing, writing commit messages and adhering to a corporate code formatting standard. We are after all talking about a guy who, when he could pick any names he wanted, opted for “creat”, “mv” and “cp”.
I’m also certain Thompson could work alone in a private office, rather than share an open floor plan with fifty colleagues incessantly shouting into cellphones, shuffling furniture around or having animated discussions about football two feet from his desk.
It’s important to remember that apart from hardware and software, the working conditions for most developers are vastly different now than they were 52 years ago. The TV series Mad Men is of course fiction, but it does give an idea of how white collar workplaces have changed since the conception of Unix.
In any case, Thompson’s feat is hardly proof that programmers in general were always more productive in the olden days. Thompson himself had previously worked on Multics, a system famous for its many delays. In fact, a lot of ambitious projects of this era (much like today) were constantly running out of both time and money. This prompted Frederick P. Brooks to write his book “The Mythical Man-month” in 1975.
Brooks drew heavily on his experiences from managing the production of IBM’s OS/360, another famously long-running project. Among other things, it took a bunch of those programmers more than half a year to construct an ALGOL compiler and I’m fairly certain they were wholly unburdened by abstractions, since those abstractions were in fact what they were tasked with creating.
“Productivity and robustness are declining”
Are they, really? As detailed above, I’ve not seen any conclusive evidence of this at all in Blow’s talk. Instead, I’m presented with a mix of misconceptions, blatant cherry picking and anecdotal evidence, all easily refuted. My view is rather the opposite: computers are, by and large, much more robust these days than a couple of decades ago. Likewise, programmers are at least as productive as before. In many cases we can be much more productive but in certain cases it’s less trivial to get started, which hampers productivity at least initially.
One important aspect here is the expectations end users have of software. Few people today are interested in learning RPN to perform simple arithmetic or writing curious troff directives when making a yard sale poster. Abstractions or not, convenient interfaces and advanced features add complexity and development time.
“The argument that software is advancing is obviously false”
I still regularly use my beloved Amiga computers. A couple of weeks ago,
I was copying a file to an Amiga’s hard drive when the computer suddenly
decided to crash on me. Not because I did something wrong, but because
old home computer OSes weren’t very stable. This resulted in a
corrupted hard drive partition and the OS failed to re-validate the file
system. My only option was, in the end, to re-format the partition.
It wasn’t the first time this happened to me and since I use my Amigas quite a lot, it probably won’t be the last. During the heyday of the Amiga, any serious user would have a program such as DiskSalv, specifically designed to do a low level churn through the sectors of a crashed hard drive, trying to rescue things that looked like files.
With a modern home computer OS, this would have been much more unlikely to happen. Even Windows 10 Home, despite all of its faults, has memory protection and a journaling file system. I’m pretty sure that’s an advancement. As much as I love old computers, I think Blow is recalling the past through a filter so rose tinted even Hello Kitty would be jealous.
(More examples of advancements follow below and
I have written at length
about this topic elsewhere.)
“You Can’t Just…”
Here, Blow lists a couple of things we “can’t just” do with computers anymore. I think a lot of them are thoroughly drenched in romantic nostalgia. Let’s examine!
Copy a program from one computer to another.
Well, yes you can, provided they’ve been compiled in such a way and for the same target architecture. I recently built the excellent slack-term on my Raspberry Pi and sure, the Go compiler produces an 11 meg statically linked binary which might seem hefty for a text mode chat client, but it works on all my friends’ Pis, too. It’s just that most developers don’t do this anymore. Then again, when was the last time we actually did?
Unless we’re counting smallish utilities (which can still often be downloaded as standalone executables), we basically have to go back to the days of the C64 and PC/XT to find self-contained programs that didn’t require installation. My copy of Deluxe Paint IV for Amiga (released in 1991), if not booted from floppy, requires a bunch of auxiliary files to be copied to various places on the hard drive to function. It also depends on several third party function libraries. In fact, even some C64 programs required you to “flip the disk” (quite literally: eject it from the drive, flip it over and re-insert it) in order to load data and access certain program features.
Games for the Amiga usually came on so-called “track loaded” floppies, meaning the program code, graphics and music stored on them bypassed the file system and were unreadable by the OS. Instead, the game loaded the data as needed by directly accessing given tracks on the floppy (hence the name). This was a sort of container or “flatpak” of its time, ensuring the programmer had very specific control over storage (and copy protection schemes to combat piracy). The downside was of course that even if you happened to own a hard drive, you couldn’t install the game onto it. Oh, and forget about such games multitasking: The Amiga’s OS was perfectly capable of it, but pretty much all games disabled it completely.
“Just loading a bunch of machine code into memory and pointing the
program counter at it will make it run on a Mac, a Windows PC,
a Linux box and a PS4, because they all use the same CPUs.”
Well, yes, at least in theory. Then, if you’d actually want to do something with the program, like display some graphics, play a sound, read user input or write something to disk, you’d be shit out of luck.
In theory, this was also true for all old home computers sharing, for example, a Z80 CPU (and they were numerous!). In practice it wasn’t possible, because just like with today’s machines, the rest of the hardware differed too much. It was in fact more likely that a fairly boring and graphically unimpressive program written in a higher level of abstraction, such as Basic, would be easily portable between the machines.
About a year after Blow’s talk was recorded, Apple launched their new line
of desktop machines with their own ARM-based silicon, rendering Blow’s
argument even less compelling. Of course he couldn’t have known that, but
that’s part of the point. Relying on abstraction instead of banging the
metal means less work in porting old software to the new CPUs.
“We mostly don’t want the operating system.
The OS removes capabilities from the CPU”
Like Spectre and Meltdown? Joking aside, though, I’m not exactly sure I follow Blow’s train of thought here – perhaps he reasons too much like a game developer and I don’t.
I’d argue that the OS does add to the capabilities of the CPU, such as file systems, networking and, importantly, multitasking. I doubt anyone would like to be without that, even though it adds quite a bit of complexity and abstraction. Of course you could argue that multitasking isn’t needed for playing games, but I’m sure quite a large number of Twitch streamers would disagree. Much like myself, they probably prefer their computers to share hardware resources between programs in a managed, predictable and, indeed, robust way.
Many old school Amiga and Atari users will remember that even the slightest upgrade or change of their hardware (such as adding memory or a hard drive) rendered certain games and even productivity software completely unusable, either by refusing to load or by behaving erratically.
Why? Because even though they knew a lot about assembler and memory management, the programmers simply didn’t adhere to the specifications and abstractions provided by the manufacturers. They knew exactly how the hardware worked and where things were stored in memory and went straight for it. When this changed by just a little bit, everything fell apart. Those who had taken proper care to follow guidelines and relied on abstractions could keep selling their software, because their software still worked.
“It used to be that if you wanted to compile a program for many
platforms you just needed some #ifdefs here and there”
Yes, if you keep your expectations of what a program is to some kind of console application – in which case this still holds true.
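To be fair, for a small console program it really can be that simple. A minimal, hedged sketch of the sort of thing that still ports with a couple of #ifdefs:

    /* A console program with one platform-specific detail,
       handled entirely by preprocessor conditionals. */
    #include <stdio.h>

    int main(void) {
    #if defined(_WIN32)
        const char *platform = "Windows";
    #elif defined(__APPLE__)
        const char *platform = "macOS";
    #elif defined(__linux__)
        const char *platform = "Linux";
    #else
        const char *platform = "something else entirely";
    #endif
        printf("Hello from %s\n", platform);
        return 0;
    }

The trouble starts, as noted below, once the program wants sound, graphics or a window to draw in.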
I’m not sure if this has ever been the case for games development, which is Blow’s area of expertise but not mine. Perhaps I simply misunderstand him, but I doubt something like this could be achieved without relying on large amounts of pre-existing platform-specific abstraction (such as a highly complex OS) for dealing with sound, graphics and I/O – precisely the things he argues against.
Consider the sound setup screen for Heretic (1994) below. Surely this requires a bit more than a few #ifdefs? In fact, I_SOUND.C in the Heretic source (freely available since 1999) is a 391 line wrapper for initializing the DMX sound abstraction. Heretic was by no means unique in this respect – pretty much all DOS games required similar configuration to function.
Draw pixels to the screen
I’m actually not sure what this means. I can draw pixels on my screen perfectly fine in any number of languages. It’s true that I’m not going to do it using Mode 13h, but that’s more about hardware companies trying to protect their IP and revenue rather than the fault of abstractions. In fact, Mode 13h was accessed through the VGA BIOS, an early hardware abstraction layer.
To tie in with Blow’s previous claims, the code written for such a system would of course not “just run” on any x86 machine: it required a very specific piece of hardware to be present apart from the CPU. It wasn’t portable, it wasn’t “just copyable”. Meanwhile, a program using Windows as an abstraction to draw graphics on screen would have worked perfectly fine on anything from Hercules to true-color XGA.
Sure, I understand where he’s coming from. You typically have to jump through some hoops and layers and APIs and probably don’t interface directly with a hardware framebuffer. Then again, it might’ve been easy to put some pixels on screen on a C64, but it certainly wasn’t equally trivial to display, for example, a Koala Paint picture. Today, through abstractions, I “can just” load a PNG and display it.
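To make “can just” concrete, here’s a minimal sketch that plots a few pixels through one of those layers – SDL2 in this case, a library choice that is mine and not something from Blow’s talk:

    /* Open a window and draw a dotted line, pixel by pixel, via SDL2.
       Build on a typical Linux box: cc pixels.c -o pixels `sdl2-config --cflags --libs` */
    #include <SDL.h>

    int main(void) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;
        SDL_Window *win = SDL_CreateWindow("pixels", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 320, 200, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);

        SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);        /* black background */
        SDL_RenderClear(ren);
        SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);  /* white pixels */
        for (int x = 0; x < 320; x += 10)
            SDL_RenderDrawPoint(ren, x, 100);
        SDL_RenderPresent(ren);

        SDL_PumpEvents();
        SDL_Delay(3000);                                  /* keep the window up briefly */
        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }

More hoops than poking a C64 screen address, certainly, but the same source compiles and runs on Linux, Windows and macOS.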
Below is a list of command line parameters for Monkey Island, LucasArts’ iconic game from 1990. This intuitive UI lets the user configure the game to work specifically with their graphics (and other) hardware. Quite a few ways to just put pixels on the screen!
Run an unsigned program
Yes you can. I’m doing it right now, to write this (it’s the excellent
WordGrinder).
I even compiled it myself, without
a manifest. In fact, many of Blow’s complaints in this section are less
about abstractions than they are about hardware and software vendors
locking systems down and disempowering the user. In other words, he
makes a great case for using so-called libre systems.
Language Server Protocol
There are still plenty of editors around that either do not use LSP or where usage of it is optional (such as in Emacs, the editor he mentions using in the talk). For the time being, I’m mostly on Blow’s side here even though he’s cherry picking: LSP solves more problems than just “clicking on a method and going to its definition.”
Full screen and full framerate games
I don’t play many modern games (the latest one being Thimbleweed Park from 2017, which I still count as “new”), but a lot of contemporary productivity apps suffer from poor performance and severe input lag. This is a problem and it’s in part due to abstractions but, I would argue, even more so due to badly written code and picking the wrong tool for the job. Different programs using the same platforms and UI toolkits can vary wildly in perceived performance on the same machine. I agree this needs to change.
As for multitasking, Blow’s example is a game that doesn’t restore to
its preferred resolution after he’s alt-tabbed away from it. Aside from
the fact that Blow contradicts himself here (Do we “mostly not need” an OS
that “removes capabilities from the CPU” or should we be able to
multitask perfectly between games and other programs?), I
agree this is bad and that we can and should do better. But is it really
worse than before?
Running full screen games in DOS was simpler and less error prone because they didn’t have to care about other processes. If you were running Windows 3.1 and wanted to enjoy a bit of Doom you had to save your work, close all your programs, quit Windows and then start the game. Even on a multitasking computer like the Amiga, you most likely had to actually reboot the computer, since most games were loaded from floppy (even if you had a hard drive) and completely took over the machine to the point where they couldn’t even quit gracefully back to the OS.
I’m not saying we shouldn’t expect great multitasking from games, I’m just saying that maybe things weren’t perfect in the olden days either and that we seem to be getting better, not worse, at doing multitasking games. The Twitch streamers I mentioned earlier will surely agree.
Complication accelerates knowledge loss
Deep knowledge is replaced by trivia
What’s trivia? Blow’s example is about sprite management in Unity, but he also (sort of) admits it mostly boils down to a change of pacing rather than abstraction. Knowing how to multiplex sprites and place them in the display border on a C64 wasn’t trivia, because the VIC chip remained sufficiently unchanged between 1982 and 1994 to make this knowledge applicable for more than a decade (How’s that for pushing tech forward?). These days you’ll miss the launch of a new Unity major and a few Nvidia cards if you linger with a cup of morning coffee.
I agree that changes in software (and hardware) today are often too fast to keep up with in a meaningful way. This is true for much more than just developer tools. The focus on releasing something every four weeks or so means it’s apparently very hard to give end users a consistent experience, perhaps in part because by changing the UI around, things at least look “new”. We’re too often battling the minutiae of a constantly changing interface instead of spending our energy on the task at hand.
This has little to do with abstraction itself and more about consistency over time and the delivery model of software. A regular “mixing things up” and redesign approach is, I guess, the manufacturers’ misguided way of making it seem like the subscribers are getting their money’s worth. I’m not saying this is the case with Unity, which I’ve never used. They might have a perfectly valid reason for changing sprite management, such as to keep up with changes in vendor hardware or firmware.
We can reduce complexity by simply deciding to do so and we get fooled
into thinking we’re saving time by adding abstractions.
Here, Blow really just pulls some numbers out of a hat and says, roughly, “Right, you’re gonna ship in five months so you can’t remove the complexity right now. But then in reality it takes two years to ship and you’ll still not have removed the complexity!”
For every made up war story about libraries and frameworks, there’s also at least one equally true success story. As a web developer, I can get massive help from a framework if I pick the right one and use it for the right thing. But, as a web developer, I’m also not entirely sure it’s a good idea to try to do just about everything from word processing to gaming in a browser or that we should immediately switch to whatever framework was released last week.
I have at times fought long and hard against introducing pointless complexity, resulting in some wins and some (well, sadly, mostly) losses. The sad truth is of course that when push comes to shove, I prefer not defaulting on my mortgage over not adding more software complexity. Yes, we programmers play a large role in this, but so does the market – and other, tangential stuff, too.
At some point in their careers, most programmers (and other office
dwellers) will have to deal with some or all of the following: office
politics, attending pointless meetings, using some arcane time reporting
software designed by doped-up chimpanzees, meeting a hard deadline set
by someone else, sharing a desk with someone who is constantly on the
phone, dealing with obnoxious customers with impossible demands, facing
strange managerial decisions, estimating time for highly abstract
demands, pointless administrative tasks, working on three different
projects simultaneously, debugging decades old legacy code and coding
below or above their current skill level. (Incidentally, I wonder how
much of this guys like Thompson, Ritchie, Pike, Kernighan et al had to
deal with at Bell Labs.)
All of this constitutes little fights, and sometimes we have to pick our
fights. Maybe one week there wasn’t quite enough energy left for the
fight about Yet Another Framework. I’m not saying we shouldn’t be held
responsible and that we shouldn’t do better, but there are more factors
at play here than Blow admits. In general, though, I agree: complexity
is a man made problem with an equally man made solution.
I honestly believe that reducing
workplace complexity would also, in the long run, reduce software
complexity.
Younger game devs have never written their own engines and we might soon
have collectively forgotten how to do that.
There’s some slippery slope type reasoning here that irks me, because it seems highly speculative. Granted, my own reasoning around this is also highly speculative and I might be completely wrong. You have now been properly warned, so let’s get down to brass tacks:
Fact: The majority of people who owned a C64, Amiga or 286 PC did not become low level developers. Most didn’t become programmers at all, though they likely had access to the tools they needed. Most of them were completely satisfied with learning just enough about the system so that they were capable of starting a game and maybe – just maybe – do a bit of homework. Others started exploring but never quite reached all the way to low level coding, because learning assembly language is hard and it takes time to get results. Maybe they poked around with Basic a little and then decided that enough was enough. Others still were drawn to graphics, music, storytelling or any other creative pursuit in which a computer is merely a tool like any other.
Speculation: I think there’s a finite number of people in each generation with the prerequisite desire, motivation and aptitude for making things like low level OS software or highly complex game engines. They are driven by equal parts curiosity (How does this work?) and competitiveness (I’m going to make this work!) in combination with the intelligence and approach to problem solving required to carry their ideas to fruition. It is of course important that those people are discovered and encouraged, but I think that with computers being cheaper than ever, this is more likely to happen these days than ever before.
For all the disgruntled Basic pokers, though, things like abstraction and pre-made game engines are great. Abstraction empowers them to be creative with their computers in new and exciting ways without necessarily possessing the qualities that lead to mastering memory management, pointers, algorithms and whatever else a framework might do for them. They “can’t just” put pixels on the screen – but they “can just” create real software that might even actually be interesting to a wider audience. That’s nothing to scoff at. Kids want to make games that resemble the AAA ones they can buy in the store. While that was reasonably simple to replicate with small amounts of abstraction on a C64 or an Amiga, it’s much harder to do from scratch today because our expectations on games are much higher.
Is this good for programming as a craft, as a discipline? Perhaps not. But I don’t think it’d be fair to deny anyone the possibility of maybe discovering that their true calling lies in lower level programming if they so desire. Today, that first step might be through a framework. Yesteryear, it was through Basic – a language which is now lauded by nostalgic and revered programmers as the tool that sparked their success, but at its height of popularity was derided as a toy language that would forever cement poor fundamentals and bad programming habits in anyone who touched it.
Since Blow seems fond of anecdotal evidence, let me provide these hopefully calming observations:
- The Open Source community seems to lure plenty of youngsters into its folds by way of Linux, encouraging newcomers to take an interest in systems languages like Rust, C and C++.
- A popular window manager among such budding hackers is dwm. It’s configured solely through modifying its C source code.
- I personally know people 20 and 30 years younger than Blow who will happily write both Z80 assembler and C.
- I also know about people in those age groups who roll their own Linux distros completely from scratch, or build their own hardware, or are hardcore C coders who spend their time getting fringe research OSes running on modern hardware.
- Software is eating the world but a lot of it is, in the end, completely inconsequential for maintaining civilization – including pretty much all computer games ever written. I’m willing to bet people would lead far more fulfilling and less stressful lives if they deleted most of the apps they have on their phones. The important thing here is that we maintain a skill base large enough to keep the critical systems going.
- There are several Open Source CPU and platform designs available – one of them forms the basis of IBM’s “five nines” offering mentioned above. Of course, in the case of a trade war, setting up production lines will take time. I’d say that’s something we in the west are more likely to have forgotten about a few decades from now.
- Plenty of Computer Science and Electrical Engineering courses still teach various fundamental skills, such as C and assembly programming and compiler design.
- Lots of amazing projects regularly crop up online and in the media (such as the MOnSter 6502), hopefully attracting the right kind of curious youngsters, inspiring them to learn.
- Access to programming tools and literature has never been cheaper or better. We also have free and easily accessible educational video material suitable for a wide variety of skill levels. Computerphile does a good job at popularizing explanations of, for example, how CPUs and compilers work. MIT OpenCourseWare has a massive amount of lectures for those who wish to dive deeper.
Final points
Blow’s bottom line is basically that of survivalism applied to technology: Sure, electricity is nice but if there’s a power outage in the middle of winter, it’s good if someone’s taught you how to make a fire. I certainly sympathize with that.
Our society as a whole depends on our ability to keep at least some of our programs running pretty much constantly. The consequences of failing to do so can be very dire indeed, such as the collapse of the global economy or the complete failure of entire national health care systems. Our historical and contemporary records are increasingly stored digitally and it’s vital that we can access those sources in the future, too.
I agree that complexity is fragile and that abstraction can lead to detrimental ignorance (and grumpy programmers). I also agree that some software relies on completely frivolous abstractions. We don’t need shadow DOMs to render simple blogs and we shouldn’t use a web browser in disguise to run what’s essentially an IRC client with images.
However, I think the completely artificial demand for constant churn is at least as big a culprit in creating fragility. “Agile” development means we shouldn’t have to release something that isn’t properly finished and tested, yet we still constantly do. And then, of course, a lot of programmers will see their code in the wild being combined with an ever-increasing number of ads, trackers and telemetry systems that are basically back doors by design, adding no small amount of fragility and insecurity. This too can be changed, but probably only by some kind of massive pandemonium within not only the software industry but several other ones as well.
Thus, I personally think there are far greater problems in the digital
world right now – such as privacy and freedom – but that’s not an
argument against Blow’s concerns. It’s just that if we don’t do
something about it then pretty soon there won’t be any hardware left for
us to actually interface directly with. Not because we opt for
abstractions but because that’s our only choice in a world of
increasingly locked down, remote controlled platforms.
What really bothers me with Blow’s talk is that instead of providing real examples and data to support his thesis, a lot of what he says is deceitful and comes off as a setup for complaining about some pains in his development tooling and the state of kids today. I can certainly relate to that – those are two of my favorite pastimes – but Blow has quite ironically had to selectively forget large parts of computer history in order to make his point.