Toothpicks and Bubblegum
Let’s talk about how the web is a complete clusterfuck being held together by the digital equivalent of this post’s titular items.
The web of today would probably be, on the surface, barely recognizable to someone magically transported here from 20 years ago. Back then, websites were mostly static (pace the blink tag) and uniformly terrible-looking. Concepts like responsiveness and interactivity didn’t really exist to any serious extent on the web. You could click on stuff, and that was about it; the application-in-a-browser concept that is Google Docs was hardly credible.
But on the other hand, that web would, under the surface, look quite familiar to our guest from 1993. The tags would have changed, of course, but the underlying concept of a page structured by HTML and animated by JavaScript* would have been pretty unsurprising. Although application-in-a-browser did not exist, there was nothing in the makeup of Web 1.0 to preclude it; all the necessary programming ingredients were already in place to make it happen. What was missing, however, was the architecture that would enable applications like Google Docs to be realized. And that’s what brings us to today’s remarks.
You see, I am of the strong opinion that the client-side development architecture of the web is ten different kinds of fucked up. A lot of this is the result of the “architecture by committee” approach that seems to be the MO of the W3C, and a lot more seems to be a plain lack of imagination. Most complaints about the W3C focus on the fact that its processes move at a snail’s pace and that it doesn’t enforce its standards, but I think the much larger problem with the web today is that it’s running on 20-year-old technology and standards that were codified before the current iteration of the web was even thought possible.
Let me explain that. As an aside, though, allow me to tell a relevant story: recently I went to a JavaScript programmers’ meetup where people were presenting on various web frameworks (Backbone, Ember, Angular, etc.). During the Backbone talk, the fellow giving it made a snarky comment about “not programming in C on the web.” This got a few laughs and was obviously meant as a dig of the “programming Fortran in any language” variety. What I found most revealing about the comment, though, was that it was made in reference not to any feature that JavaScript has and C doesn’t (objects, garbage collection), but to the notion that one’s program ought to be modularized and split over multiple files. This is apparently considered so ludicrous by a web programmer that the mere suggestion that one might want to do so is worthy of mockery.
At the same time, web programmers are no longer creating static pages with minimal responsiveness and some boilerplate JavaScript to do simple things like validation. In fact, the entire meetup was dedicated to people talking about frameworks that do precisely the sort of thing desktop developers have been doing since forever: writing large, complicated applications. The only difference is that those applications run in a web browser rather than on a desktop, which means they have to access server-side resources asynchronously. Other than that, Google Docs doesn’t look much different from Word. And you can’t do large-scale app development by writing all your code in three or four files. I mean, you can do that, but it would be a very bad idea. It’s a bad idea independent of the specific paradigm in which you’re developing, because modularization is a meta-architectural concern to begin with: it is the sine qua non of productive development, because it allows you to split your functionality into coherent and manageable units of work. It’s a principle that underlies any effective notion of software engineering as such; to deride it as “programming in C on the web” while wanting all the benefits that modularization delivers is to demand, with Agnes Skinner, that all the groceries go in one bag, but that the bag also not be heavy.
What’s the point of this story? It’s this: if large-scale modularization is a necessary feature of truly productive application programming, then how come we had to wait close to two decades for it to finally reach the web? In particular, why did it have to be bolted onto JavaScript after the fact by efforts such as the Asynchronous Module Definition (AMD) and Require.js?
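To make the “bolted-on” part concrete, here is a minimal sketch of what AMD-style modules look like under Require.js. The module names and functions (inventory, cart, inStock) are hypothetical, invented purely for illustration:

```javascript
// inventory.js: a hypothetical module with no dependencies of its own
define(function () {
  var stock = { "sku-42": 3 };
  return {
    inStock: function (id) { return (stock[id] || 0) > 0; }
  };
});

// cart.js: a hypothetical module that names its dependency explicitly
define(["inventory"], function (inventory) {
  var items = [];
  return {
    add: function (id) {
      if (inventory.inStock(id)) { items.push(id); }
      return items.length;
    }
  };
});

// main.js: the loader fetches and resolves the dependency graph for us
require(["cart"], function (cart) {
  cart.add("sku-42");
});
```

The particular loader isn’t the point; the shape of the thing is: each file declares what it needs, and something else worries about finding it, fetching it, and handing it over.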
I ask because the presence of these projects and efforts, and the fact that they have solved (to whatever extent) the problem of modularization on the web, is ipso facto evidence that the problem existed, demanded a solution, and that a solution was possible. Moreover, that this would be a problem for large-scale development would (and should) have been just as obvious to people 20 years ago as it is today. After all, the people responsible for the design and standardization of efforts like JavaScript were, for the most part, programmers themselves; it’s not credible to believe that even the original JavaScript engine was just one or two really long files of code. And yet somehow, the whole concept of breaking a project up into separate modules with explicit dependencies never became an architectural part of the language, despite the fact that this facility is present in virtually all modern programming languages.
To me, this is the first, and perhaps greatest, original sin of the client-side web. A language intended for use in the browser, one that can be (and now is being) used to develop large-scale client-side web applications, originally shipped without, and still lacks, any architectural feature designed to support breaking a program up into discrete pieces of code. This isn’t meant to denigrate the awesome work done by the guy behind Require, for example, but the fact remains that Require shouldn’t even have been necessary. AMD, in some form, should have been a first-class architectural feature of the language right out of the box. I should be able to simply write import(“foo.js”) in my file and have it work; instead, we were reduced to loading scripts in the “right order” in the header. This architectural mistake delayed the advent of web applications by years, and it still hampers the modularization of complex web applications. Laughing this off as “programming C on the web” is terribly shortsighted, especially as recent developments in JavaScript framework land have demonstrated exactly this need to break up your code. Google didn’t develop their Google Web Toolkit for shits and giggles; they did it because Java provides the kind of rigid structure for application development that JavaScript does not. Granted, they might have gone a bit too far in the other direction (I hate programming in Java, personally), but it’s obvious why they did it: because you can’t do large-scale development without an externally imposed architecture that dictates the flow of that development.
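For contrast, here is a sketch of the “right order” regime the paragraph above complains about; the file names and globals are hypothetical, but the failure mode is not: nothing declares its dependencies, so correctness hangs entirely on the order of the script tags.

```javascript
// In the page header, the tags have to appear in exactly this order:
//   <script src="inventory.js"></script>
//   <script src="cart.js"></script>
//   <script src="main.js"></script>

// inventory.js: dumps its API onto the global object
var inventory = {
  inStock: function (id) { return true; }
};

// cart.js: silently assumes a global named `inventory` exists; nothing in
// this file says so, and nothing checks
var cart = {
  add: function (id) {
    if (inventory.inStock(id)) { /* add it to the cart */ }
  }
};

// main.js: this works only because every script above it happened to be
// included, and finished loading, first. Move its tag up the header and you
// get a ReferenceError at runtime instead of a missing-dependency error up
// front.
cart.add("sku-42");
```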
*: Yes, I know JavaScript didn’t appear until 1995. Shut up.