Finding the Software Industrial Revolution


In November of 1990, IEEE Software published an article by Brad Cox titled “Planning the Software Industrial Revolution.” Early this year, I rediscovered it and carried a copy around with me for weeks. As I read and reread it, my copy grew tattered and thoroughly marked up. It’s an essay worth revisiting.

The first industrial revolution was envisioned long before it took place. Its influence on gunsmithing was foreseen by Thomas Jefferson in 1785, but it took fifty years for armory practice to change from cut-to-fit production to the assembly of guns from standardized parts.

The change was driven by consumers and not producers. The experts of the day kept to their ways until the world superseded them.

Is that happening in software today?


What do we mean by “software industrial revolution”?

Here’s how Cox defined it:

…transforming programming from a solitary cut-to-fit craft into an organizational enterprise like manufacturing. This means letting consumers at every level of an organization solve their own software problems just as home owners solve plumbing problems: by assembling their own solutions from a robust commercial market in off-the-shelf subcomponents, which are in turn supplied by multiple lower level echelons of producers.

So what would make me, a solo programmer, any different from the cut-to-fit craftsmen that the industrial revolution displaced? The answer is a shift in perspective. Instead of seeing myself as a sole manufacturer, I become like the plumber or homeowner, a craftsman skilled both in selecting components and in putting them together to solve problems.

There were two essential elements that allowed the industrial revolution to happen. Cox claimed that these same two were needed for a software industrial revolution.

Assembly Technology

First, there must be an assembly technology so that standard parts can be assembled into special-purpose configurations. Cox called this process “binding” and described two kinds of binding, “static” and “dynamic”. He compared static binding to forging and casting, manufacturing technologies that are usually performed by a supplier. In contrast, screwing, bolting, welding, soldering, pinning, and riveting are assembly technologies typically used by consumers. This is what Cox called dynamic binding. What is important is that with dynamic binding, it is the consumer who is in control.

Most software sold today is statically bound. My own experience has been in an industry (electronic design automation) where solutions are controlled by three or four major providers, and where virtually none of their products can interoperate except in the crudest ways. From the consumer’s perspective, the problem is that those products are statically bound, which prevents consumers from mixing and upgrading components in their design systems. But to the provider, this is no problem at all; to providers, vendor lock-in is a feature.

To simplify dynamic binding, Cox developed a structured way of writing C programs that he called Objective-C. Objective-C adds a thin layer of messaging primitives to C so that C structures can be treated as objects that send and receive messages based on minimal compile-time information. This allows “open world” programming; software objects can be used with other objects that may have been completely unknown when the original objects were compiled. At most, object classes may be required to implement predefined interfaces called protocols, but in general, Objective-C allows any message to be sent to any object at any time.
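For readers who haven’t written Objective-C, here’s a minimal sketch of that open-world style; the Greeting protocol, the Greeter class, and the greet selector are names I’ve made up for illustration, not anything taken from Cox’s paper.

```objc
// A minimal sketch of open-world messaging in Objective-C.
// Greeting, Greeter, and greet are invented names for illustration.
#import <Foundation/Foundation.h>

// A protocol is the most a class can be asked to promise at compile time.
@protocol Greeting
- (void)greet;
@end

@interface Greeter : NSObject <Greeting>
@end

@implementation Greeter
- (void)greet { NSLog(@"Hello from a dynamically bound object."); }
@end

int main(void) {
    @autoreleasepool {
        // The caller holds a plain 'id' and asks at run time whether the
        // object understands a message before sending it.
        id something = [[Greeter alloc] init];
        if ([something respondsToSelector:@selector(greet)]) {
            [something greet];
        }
    }
    return 0;
}
```

Nothing here requires the caller to know the Greeter class at all; any object that answers greet would do, which is what lets components compiled years apart be snapped together.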

This kind of openness can be frightening, and that’s led most of the world to prefer C++, which is as notable for what it prevents as for what it allows. That’s because the emphasis of C++ is on ensuring correctness at compile time. In other words, C++ is a language for static binding. But that isn’t always an advantage for developers. Programs written in C++ require complex build tools, and development teams often find themselves rebuilding large parts of their products after making small changes. One example of this is the “fragile base class” problem, in which a change to a class near the root of an inheritance hierarchy triggers rebuilding of everything below it.

C++ software development tends to be labor intensive. Perhaps as a result, pundits worry about a shortage of skilled programmers. Cox’s answer to the “programmer shortage” was simple and shocking:

The programmer shortage can be solved as the telephone-operator shortage was solved: by making every computer user a programmer.

That’s a different definition of “programmer”, for sure. But we’ve taken a few steps toward it. Some software tools graphically guide users through configuration steps that previously would have been considered “programming”: systems for video and music production, graphics, mathematical analysis, database organization, and many other applications all have visual environments for choosing and combining components in custom ways.

But for this to spread further, our concept of “component” also must change and become more precise. This leads to the second essential element of industrial revolution.

Specification Technology

Cox recognized that the crucial innovation of the industrial revolution was not a manufacturing technology at all; it came from John Hall,

who realized that implementation tools were insufficient unless supplemented by specification tools capable of determining whether parts complied to specification within tolerance.

Without John Hall’s inspection gauges, parts could never have been made to the precise tolerances that allowed standardized and custom assemblies.

For software developers, it is similar. Few of us keep making the same thing over and over again, but we often draw from a pool of components and combine them in new ways:

Programming is not an assembly-line business but a build-to-order one, more akin to plumbing than gun manufacturing. But the principles of standardization and interchangeability pioneered for standard products apply directly to build-to-order industries like plumbing. They enabled the markets of today where all manner of specialized problems can be solved by binding standardized components into new and larger assemblies.

Which is more complex, a software project or a house? If you said “software”, take another look around your house. Imagine everything built to order, with no standards for electrical connections, plumbing, flooring, heating, structural construction, windows, appliances, insulation, paint… imagine a house in which everything was reinvented from first principles.

Mature industries like plumbing are less complex than ours, not because software is intrinsically more complicated, but because they - and not we - have solved their complexity, nonconformity, and changeability problems by using a producer/consumer hierarchy to distribute these problems across time and organizational space.

This leads us to specification technologies for software, and there we find one of the biggest advantages of dynamic binding. Dynamically bound systems can be pulled apart into pieces that can be tested and qualified independently, and this can be done with production software. Statically bound systems require special builds for component testing. Those builds require planning and overhead, so they are often neglected until it’s too late and problems have already appeared.
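As a concrete illustration, here’s a minimal sketch of what qualifying a dynamically bound part might look like, assuming a shipping program that contains a class named Invoice with a total method (both names are hypothetical):

```objc
// A minimal sketch of probing a production component through dynamic
// binding. The Invoice class and the -total selector are hypothetical.
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Look up the class at run time; no special test build is needed,
        // only the production binary that already contains it.
        Class invoiceClass = NSClassFromString(@"Invoice");
        if (invoiceClass == Nil) {
            NSLog(@"Invoice is not present in this build");
            return 1;
        }

        id invoice = [[invoiceClass alloc] init];

        // Qualify the part: does it still answer the messages we rely on?
        if (![invoice respondsToSelector:@selector(total)]) {
            NSLog(@"Invoice no longer responds to -total");
            return 1;
        }
        NSLog(@"Invoice passed its gauge");
    }
    return 0;
}
```

The same check against a statically bound library would require linking a special test build; here the production binary is probed as-is.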

There’s lots of research into static specification technology for software, and I’d love to see it bear fruit. But we can already make great progress simply by testing the software components we write more thoroughly. For this we can find help in dynamic glue languages, usually interpreted ones, because we can use them to interactively probe and query our software as we debug it and as we develop new tests. This kind of specification relies less on compilers and more on test authors. But compile-time checks are only as good as the information they are given, and a lot of knowledge can only be expressed as tests. Even static specification tools have regression tests.

There’s a famous phrase used to describe dynamic type checking. It’s called “duck typing” and it means that if something behaves like it’s supposed to, then that’s good enough. We’ll probably never know exactly who invented that phrase. Some credit Dave Thomas, others say Alex Martelli, but here’s what Brad Cox wrote in 1990:

For example, a putative duck is an acceptable duck upon passing the isADuck gauge. The specification compiler builds this gauge by assembling walksLikeADuck and quacksLikeADuck test procedures from the library.
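Here’s a rough sketch of how such a gauge might read in present-day Objective-C; the walksLikeADuck and quacksLikeADuck selectors come from Cox’s example, but the code itself is only my illustration:

```objc
// A rough sketch of Cox's "isADuck gauge": the candidate passes if it
// behaves like a duck, regardless of its declared class. The selector
// names follow Cox's example; everything else is invented.
#import <Foundation/Foundation.h>

static BOOL isADuck(id candidate) {
    return [candidate respondsToSelector:@selector(walksLikeADuck)] &&
           [candidate respondsToSelector:@selector(quacksLikeADuck)];
}
```

The gauge inspects behavior rather than declared type, which is the whole point of duck typing.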

A Progress Report

So how are we doing? At the conclusion of his paper, Cox listed a few steps that he thought were in the right direction, and so perhaps they would be signs of progress:

A robust software components market that would provide an alternative to implementing everything from first principles.

I don’t think we’ve made much progress in this area. There are a lot of languages, libraries, and frameworks, but they seldom interoperate and it’s rare to find any one solution that is plug-compatible with any other. Also, earlier Cox wrote that “specification/testing languages could lead to less reliance on source code”. That runs against our strong trend toward open source software. Open source is the most extreme form of dynamic binding, but it leaves all integration problems to the consumer and leaves vendors struggling to find ways to get paid for their work.

Object-oriented programming and specification technologies.

Since 1990, we’ve been doing a lot of object-oriented programming. Unfortunately, a lot of that has been done in tightly-coupled statically bound systems that haven’t allowed much reuse. Some have learned the hard way about specification technology and now obsessively test their code, but often still with home-grown tools and methods. Also, unfortunately, most testing efforts have been made by producers rather than consumers of code. Without more emphasis on consumer-side acceptance testing, consumers will continue to lack the power to drive revolutionary change.

Ultra high level object-oriented languages that provide nonprocedural objects that nonprogrammers can manipulate.

Here we’ve seen progress in some specialized areas. In many cases, these are visual languages, but visual languages aren’t always a step forward. We know that alphabets evolved from pictographs and not the other way around. As the much-bandied Sapir-Whorf hypothesis claims, language is a tool of thought, and when we develop the words to describe something we become better at thinking about it.

To another point about languages, Objective-C has been a much more important part of Apple’s success than Apple publicly admits. I recently heard Bertrand Serlet answer the question, “why Objective-C?” His answer: Objective-C solves a problem that C++ still hasn’t solved, that of binary compatibility between releases. That’s not Objective-C’s only advantage. For a software company, the base of Maslow’s pyramid is its ability to maintain control of its product. As discussed, Objective-C’s dynamic binding allows software to be pulled apart and tested much more easily than software written in C or C++, and that advantage comes without the performance and machine-level access sacrifices that come with Java and the many interpreted languages. That modularity allows teams to stay small, and for software teams (and their management), small is beautiful.

It’s been 17 years since Cox’s paper was published, and the first industrial revolution took 50. What do you think: have the seeds of the software industrial revolution been sown? If so, where have they taken root?


I know this entry is old, but seeing as there are no comments yet, I’d like to abuse your blog as a springboard to spout off about my own inconsistent beliefs.

I find the concept interesting, but unlikely. One piece that stood out from your entry was Cox’s comment that the shortage can be solved by making every user a programmer.

What I find especially interesting about this is that serious attempts, and successes, occurred before Cox’s statements. For one thing, in the age of BASIC and “microcomputers,” everyone had to be a programmer. As an example, the wildly popular Apple ][ (and other computers of that era) ran on operating systems that were themselves programming languages. This is perhaps even more true of UNIX’s bash shell, et al. However, the world has passed that era by; very few computer users these days are “programmers,” depending on how you interpret the definition, and the percentage probably continues to shrink.

Another interesting twist on this idea was Bill Atkinson’s HyperCard, very much ahead of its time. While HyperCard was designed to be easy enough that anyone familiar with MacPaint could, at the very least, create stacks of their artwork, it still only enjoyed popularity in a relatively small share of the market. HyperCard, in Bill’s words, was “programming for the rest of us,” an IDE that the average computer user could use. Of course, its popularity declined as Apple allowed it to age, and it never even made the jump to color (without 3rd-party extensions).

I guess my point is that this idea isn’t really anything new, and while it does hold some nice appeal, its main flaw is that most computer users don’t want to be programmers. Don’t get me wrong, I love Objective-C and have great respect for Brad; however, your average user is satisfied when the job is done, and frustrated when a new learning curve is presented to them. Most people, I think, regard computers as a tool meant for accomplishing things, not as the intellectual playground that they are for us coders.

If a truly powerful programming experience “for the rest of us” appears anytime soon, my guess is it will have been created, like HyperCard (and, indeed, your and my own software), as a project by one rogue individual with a dream. I’m just not going to hold my breath.

George Bezel — December 5, 2007

It’s available on-line from the author, Brad Cox. Interesting article. I think George Bezel should consider how widespread the use of Excel is and how many programs it has replaced by end users writing code. Bonnie Nardi’s A Small Matter of Programming addresses some of the key barriers to end user programming. Many modern products embrace end user programmability as a way to avoid the need for many custom variations. EDA as an industry has been held back by the need for external glue scripting that was distinct from any affordances in the tools offered by the major vendors. I think their “user lock in” has proven to be illusory, as Cadence’s recent results vs. Synopsys certainly would indicate. Thanks for the pointer; I will have to re-read this article with some care.

Sean Murphy — October 28, 2008