One of the two articles of faith that Eric Kriss and Peter Quinn embraced in drafting their evolving Enterprise Technical Reference Model (ETRM) was this: products built to “open standards” are more desirable than those that aren’t. Superficially, the concept made perfect sense – only buy products that you can mix and match. That way, you can take advantage of both price competition and a wide selection of alternative products from multiple vendors, each with its own value-adding features. And if things don’t work out, well, you’re not locked in, and can swap out the loser and shop for a winner.
But did that make as much sense with routers and software as it did with light bulbs and lamps? And in any event, if this was such a great idea, why hadn’t their predecessors been demanding open standards-based products for years? Finally, what exactly was that word “open” supposed to mean?
To answer these questions properly requires a brief hop, skip and jump through the history of standards, from their origins up to the present. And that’s what this chapter is about.
At a high level, standards have been around for millennia. The basic concept is that when people agree upon a common definition for something once, they can save a great deal of time and trouble in the future, because they never need to describe that “something” again. This holds true in an astonishingly wide range of situations – so wide, in fact, that we take the concept of standards entirely for granted. Language, for example, is one of the earliest examples of a standards-based system. If we both agree to use the word “deer” for the same animal, then conveying the news that supper can be found nearby becomes much easier and faster. Of course, each person must agree to give up his right to call a deer something else, but that’s a small price to pay in comparison to the mutual benefit to be obtained – in this case, survival.

Stated another way, a standard is a voluntary agreement that is reached because of the mutual benefit that adopters expect to gain from honoring that agreement. That’s really all there is to it, although the specific situations, benefits and challenges vary. The concept is almost infinitely extensible, embracing not just what we think of as technical standards, but also professional credentials, moral systems, laws, and much more. Indeed, the concept of consensus-based standards is one of the bedrocks of society.
One of the earliest sets of formal standards evolved to quantify the physical characteristics of objects, and the genealogy of what eventually became technical standards can be traced back to this root. In each case, the specifics of the standards adopted (e.g., feet, bushels, gallons, and pounds for length, solid volume, liquid volume, and weight, respectively, in the English system of measurement) were totally arbitrary, in the sense that any specific measurement would serve just as well as a standard as any other. And, in each early society, it did. Through the use of such measures, meaningful orders and deliveries could be made, and value more easily compared and assessed. As the same weights and measures became more widely adopted, or at least familiar (especially around the Mediterranean), trade was dramatically facilitated and commerce could become far more sophisticated.
Coinage was a logical next step, and recognized the need to standardize another physical property: the purity of precious metals with intrinsic, but easily adulterated, value. Impressing a coin with the seal of a monarch applied another key concept that would remain fundamental to the usefulness of standards down through the ages: certification. Just as an Underwriters Laboratories logo tells us today that a consumer product meets relevant safety standards, the King’s mint mark certified that the metal comprising a coin had been tested to be of a certain minimum purity, and therefore value. (Today, you can buy hundreds of different standardized “reference materials” from NIST, which certifies their purity; some of these materials are quite surprising.)
As time went on, standards were assigned to most other important characteristics that could be measured, some of which were non-physical, and therefore presented new and unique challenges. Time, for example, proved to be particularly challenging, once you got past the concept of “day,” because a standard that can’t be easily measured has little utility. Only with the discovery and application of astronomical principles and the development of hourglasses, sundials, and eventually clocks could date and time standards become precise and reliable in application.
There things more or less stayed until comparatively recent times, when the next great leap in standards development occurred. As has so often been the case throughout human history, the driver of the next great innovation was war.
The problem to be solved was this: over the years, the arts of the armourer had progressed to the point where the initial gunpowder-powered engines of war (cannons) could be miniaturized (becoming guns). Once miniaturized, infantry could carry these new devices, resulting in a vastly larger market for the product. Still, creating a flintlock musket was in some respects more challenging than casting a one-piece, cast-iron cannon barrel. A musket used multiple small parts that needed to fit together precisely, and which could be damaged relatively easily. Special skills and tools were therefore required to fabricate or repair a gun, unlike the carriage of a cannon, which was more robust, and which any wagon maker could repair. This made a musket not only expensive to fabricate, but also resulted in each piece becoming effectively one of a kind. A musket was also very slow to produce, because a gunsmith and his apprentice typically made every part. If a musket was damaged in battle, it might therefore be weeks before an opportunity could be found to repair it unless skilled gunsmiths, and all of the tools of their trade, were part of the baggage train. The obvious solution would be to carry spare parts, but since the fit of the parts in each gun was slightly different, this would require carrying spare parts for each gun. Eventually a less obvious solution occurred to someone: it would be highly advantageous if the parts of a gun were made exactly alike, and therefore interchangeable. Repairs could then be made quickly in the field by semiskilled camp followers using only light tools.
While the concept of interchangeable parts would eventually lead to Henry Ford’s production lines, it did not immediately result in the birth of technical standards, because the spare parts created were unique to the manufacturer of the gun. But with the dawn of the Industrial Revolution, technologies and manufacturing processes each began to rapidly evolve, and three things changed that set the stage for wide adoption of technical standards: first, machines became much more complex, requiring more and more parts, often similar, and requiring periodic replacement. Second, the costs of producing machine parts began to drop, as new manufacturing devices were designed that could do more and more of the work of fabrication. And third, manufacturing gradually became more specialized, such that vendors might no longer need to manufacture every single part of their final products.
This last development followed inevitably when it became more cost-effective to specialize at both the high and the low ends of manufacture, and as the separate elements of design, assembly and sales became more valuable in their own right, allowing greater profit margins at the high end than the low. If you manufactured power looms, for example, why not buy the simpler parts, if available, leaving fewer custom parts to create in order to create a salable product? At the low end, why not specialize in a limited number of products fabricated with a high degree of efficiency, allowing you to offer lower prices and undercut your competitors? Manufacturing thus became more layered and granular: instead of making a business out of cooperage, blacksmithing or spar making, a concern could make a business out of supplying individual parts that were needed in high volume by upstream manufacturers. This strategy, of course, would favor suppliers able to manufacture uniform parts of consistent quality over those that were only able to fabricate rough parts that had to be finally sized by the customer prior to installation. The continuing evolution of manufacturing techniques soon made this possible.
The next major step in the evolution of modern standards therefore involved a humble item at the very bottom of the production pyramid: the screw fastener. By the 1800s, the screw had evolved from a tour de force of the blacksmith’s art to a fully machine-produced item that was in dramatically increasing demand. Since manufacturing screws at an economically feasible price required very sophisticated machinery and resulted in finished products that had no features unique to individual machine shops, they were a perfect example of a product destined to be made by companies specializing in fasteners of all types, rather than by the makers of the products that consumed them. They were also ideal to become the inspiration for modern technical standards as we know them today.
Consider, if you will, what makes a screw a screw: its value lies as much in its total uniformity of design as in its ability to hold two things together. Everything about it is specified: length, bore, number of threads per inch, shape of head, and type of screwdriver slot. Even the material (brass, stainless steel, galvanized, etc.) is standardized today. Moreover, a screw is part of a larger system (nuts, drill bits, taps and dies) that mirrors the same standards. Indeed, a screw is as much a collection of standardized features as it is a physical object.
Because the precise dimensions of a screw (just like a weight or measure) were arbitrary and easily adjusted, the customer could specify those measurements and put the order out to bid, rather than manufacture the screws itself. But unlike a nail, the screw did have one unique element: its threads. If a screw were to have a nut, or if it were to be used to fasten metal to metal, the threads of the screw needed to be identical to those in the nut or the machined hole. And unless the screw manufacturer was willing to also manufacture and sell drills, taps and dies (or the manufacturers of those tools were willing to sell a different set of their tools for use with the wares of each manufacturer of screws), then the threads of all screws would need to be uniform.
Moreover, unless every screw manufacturer wanted to sell direct to every single one of its customers, it would need to offer a standardized product that could be sold at retail. Ultimately, the value of an individual screw was too low, and the design too obvious, to avoid commoditization under such tremendous pressure from practical considerations. The threads of the screw, along with its other dimensions, therefore made it the perfect progenitor for standards, resulting in what we would now call an “interoperable product.” Standardization of other products followed, resulting in lower prices per piece, but in far larger markets for the standardized products as well.
Railways provided the impetus for another major advance in standards. As the landscape became increasingly divided among the new joint stock companies created to build railroads, an obvious need developed for the rails of one system to be placed exactly as far apart as those of its neighbors. Otherwise, the long distance shipment of goods would require unloading the freight from one set of cars and the loading of another each time a commercial border was passed. At first, standardizing railway gauges required government intervention. But soon the benefit to the railway owner became apparent as well, because the value of each railway as a stepping-stone for high volumes of freight and passengers going long distances was far greater than the value of a railway carrying low volumes within its discrete island of transportation.
The standardization and linking of the railroads resulted in one of the first and most powerful examples of the “network effect.” By the late 1800s, life had been transformed, as goods and people began to travel quickly, easily and economically over long distances. Not only could goods arrive faster, but perishable goods could access markets they could never reach before. Just as the value of the Internet and the Web increases as the nodes and information that are part of it expand, so too did the value of the railway system increase incrementally, mile by interlinked mile, and station by newly added station.
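The "network effect" described above can be sketched quantitatively. A common (if rough) way to model it – my illustration, not the author's – is Metcalfe's-law-style reasoning: a network's value scales with the number of possible pairwise connections, which grows much faster than the number of nodes. The sketch below, with invented node counts, shows why each newly linked station adds disproportionate value:

```python
# Illustrative sketch of the network effect (Metcalfe's-law-style model).
# Node counts below are hypothetical, chosen only to show the growth curve.

def possible_connections(nodes: int) -> int:
    """Number of distinct pairwise links among `nodes` stations."""
    return nodes * (nodes - 1) // 2

# Each added station links to every existing one, so value grows
# roughly quadratically, not linearly, as the network expands.
for n in (2, 10, 100):
    print(f"{n:>3} stations -> {possible_connections(n)} possible routes")
```

Doubling the stations from 10 to 20 does not double the possible routes; it roughly quadruples them, which is why each railway found more value inside the linked network than as an isolated "island of transportation."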
The effects on standardization were equally profound, as access to rail transport provided benefits that offset the costs of standardizing other goods and packaging in order to be loaded more efficiently in boxcars and on other rolling stock. Eventually, the effect was extended across oceans as well, with the popularization of railway car sized containers. Even the measurement of time standards was affected. Before the railways, local time was whatever the town clock said, which was usually different from what the clock the next town over indicated. And why not? With travel between the two taking so long, minor differences were irrelevant. Once the railways ran, however, time needed to be synchronized over long distances, or you would miss your train. And as the railway lines became longer, time needed to be divided into zones as well – all so that railway schedules could become reliable.
Standards of all types rapidly proliferated as the evolution of manufacturing, technology and society accelerated in modern times. Boilers that were apt to explode were harder to sell and uninsurable, so boiler manufacturers banded together to develop, adopt, maintain and promote design standards that would make their products safe – a process that required forming some of the first standards associations. Other types of safety standards followed.
The same realities that led to standardizing the design of screws soon drove the standardization and commoditization of more, and over time more sophisticated, products. Initially, smaller manufacturers would simply copy the dimensions of the products of larger manufacturers, but eventually the process became institutionalized in organizations created for that purpose. The increasing prevalence of domestic systems utilizing multiple parts that needed to be connected together, such as plumbing and electrical systems, further drove standards. And the rise of an urban middle class increased the market for consumer products that logically demanded standards as well, leading (for example) to standards for light bulb bases and light sockets.
Some new types of products led to the need for new types of standards, especially “performance” standards, such as the power demand of light bulbs (measured in watts) and the light output of the same bulbs (measured in lumens), so that not only were the bulbs of various manufacturers interoperable in fact, but their prices could be meaningfully and easily compared as well. Of course, more and more associations were needed to develop, maintain and promote all of these standards, because government had neither the resources nor the inclination to provide them.
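The price comparison that performance standards enable can be made concrete with a small sketch. The bulb names, prices, and lumen figures below are entirely invented for illustration; the point is only that once output is measured in a standardized unit, dissimilar products can be ranked on a common footing:

```python
# Hypothetical bulbs: once light output is standardized in lumens,
# buyers can compute dollars per lumen and compare value directly.
# All names and numbers here are invented for illustration.

bulbs = {
    "Bulb A": {"price": 2.00, "lumens": 800},
    "Bulb B": {"price": 3.50, "lumens": 1600},
}

# Cost per lumen: meaningless without a standardized unit of output.
cost_per_lumen = {name: b["price"] / b["lumens"] for name, b in bulbs.items()}

# The cheaper sticker price is not necessarily the better value.
best_value = min(cost_per_lumen, key=cost_per_lumen.get)
```

In this made-up example the pricier bulb delivers more light per dollar – exactly the kind of comparison that is impossible until every manufacturer measures output against the same standard.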
Initially, these associations were by definition national. But the emergence of telecommunications and the laying of trans-oceanic cables led to the formation of the first truly global standards body (the ITU). With time, the increasing volume of international trade led to others. Not long after the end of the Second World War, all three “Big I” organizations were in place, with the addition of the ISO and IEC.
While the ITU became a treaty organization, the ISO and IEC did not. But the quasi-governmental role of the ISO, IEC and the many niche and national bodies setting standards was also recognized. Process concepts were therefore developed to ensure that all of those affected by a standard, as well as those that would directly benefit as vendors (“stakeholders,” in standards parlance), could have a say in the development of a standard. These concepts were general rather than codified in specific rules of process. The standards that emerged from this system became known as “de jure” standards (Latin for “by law”), despite the fact that use of these standards remained consensual and market driven rather than required by law, unless incorporated into law by legislatures – something that occurred with increasing frequency as the value of private sector standards became recognized. Standards that emerged in the marketplace purely as a result of the market dominance of one or a few vendors, in contrast, came to be known as “de facto” standards, indicating that although widely adopted “in fact,” they had not been developed in a process open to all stakeholders.
In modern times in the IT industry, standards created through the de jure process or through well-respected consortia sometimes began to be referred to as “open standards,” although the exact definition of those two simple words has recently become the subject of spirited disagreement – in part because specific standards have become increasingly important to the business strategies of individual vendors, and in part due to the growing popularity of open source software and the restrictions and requirements of many of the licenses under which such software is made available.
This rapid romp through the history of standards provides not only the background for a discussion of standards in the IT industry, but also the basis for a number of important insights into why open standards in the computer world have not followed as neatly, or been adopted so universally for software and servers, as those implemented in light bulbs and lamps.
The first insight is that the IT industry is, by historical measures, still a new industry. Standards tend to follow, rather than lead, innovation for a number of reasons, some obvious and others less so. When new technologies emerge, vendors need to design and manufacture the unique elements of their new products above the level of existing commoditized parts. Such products can therefore frequently be proprietary, because a significant portion of their new value is, by definition, new. Stated another way, technology has to exist before a need can develop for it to become standardized. Moreover, since many technologies fail in the marketplace and because standards require time and resources to create, products tend to come first and standards second, and only for those products that gain traction. Also, unless a technology is likely to be useful over a long period of time, the effort of standardization may not seem warranted. Finally, standards do have costs beyond time and effort in development: inevitably, they restrict the degree of design freedom that a vendor can exercise. Standardization purely for standardization’s sake therefore does not make sense. For all these reasons, throughout modern history the standardization of new technologies tends to arise after an era of rapid, unrestricted innovation. This process allows multiple experiments to succeed or fail in the marketplace before one is anointed as the candidate to be fixed in the amber of standardization.
The result of all of these forces is that, until the 1990s (as discussed in earlier chapters), the first decades of the computer era were typified by proprietary vendors selling high margin, sophisticated products to customers that the vendors sought to hold on to for as long as possible once they had been secured. Most customers therefore lived in proprietary “silos,” and entrenched vendors had far more to lose than to gain if switching costs were to decline. Gradually, standards did become pervasive in ways that allowed vendors to buy the modern equivalent of very sophisticated screws (think disk drives) at extremely low prices. But the proliferation of standards could be a two-edged sword if the result was a level of interoperability that would allow customers to mix and match hardware and software as if they were home stereo components, or to migrate easily to a competitor’s products entirely.
As IBM found to its sorrow with its ground-breaking line of PCs, this could even occur at the product design level. The good news for IBM was that it had succeeded in setting “the standard” for desktop computers, in the sense that the PC design became wildly popular in the marketplace. Unfortunately, the same processors as well as the operating system IBM had selected for use in its PCs were available to its competitors as well, permitting other vendors to build systems capable of running software originally developed for IBM PCs. The bad news was therefore that IBM’s PC architecture became not only a commercial benchmark, but an available de facto standard as well. Soon, scores of competitors were selling competing, and cheaper, desktop computers upon which the same software would run.
To be fair, the slow process of achieving interoperability at the product level through deliberate standard setting efforts was not purely the result of vendor-proprietary impulses. Just as the industrial revolution brought about the need for entirely new types of standards, so also did the IT revolution (and how could it not be so?). Unlike screw threads, which are easily implemented with complete fidelity, it is sometimes only feasible to create a standard for software that, in a given case, will at best enable two products to become close to interoperable. After that, tinkering and testing are necessary to accomplish the final “fit.” Similarly, the cost to innovation of achieving true “plug and play” interoperability, when that result is feasible, may be unacceptably high, leading to a decision to create a standard that (like ODF) locks in only a very significant amount of functionality, rather than complete uniformity (as OOXML strives to achieve).
Until recently, then, while vendors would frequently extol the virtues of “open standards” for marketing purposes, their commitment to actually deliver them was often selective at best. As with previous industries, this position began to change with time: as the industry matured and became more multifaceted, as production became more layered and internationalized, as those seeking to compete with entrenched incumbents became more hungry for their share of the end-user pie, and – most importantly – as the Internet truly did begin to change everything.
It would require far more than this single chapter to fully analyze the causes underlying the increasing allure of open standards, but the influence of the Internet cannot be ignored, if only to share one final insight derived from this brief review of the history of standard setting. Just as the joining of the railways created an explosion in value that radically changed the cost/benefit ratio of standardizing not only railway gauges, but everything from time zones to the dimensions of all manner of freight items, the value of the Internet is reordering almost everything in the IT industry to some degree.
At the macro level, the ability to connect to the Internet is all important to customers, and trumps any vendor’s desire to keep its customers trapped in a silo. Once connected, however, customers become exposed to new kinds of competition, such as the provisioning of software as a service (SaaS). Moreover, the Internet enables entirely new platforms to exist that initially are not owned by anyone, such as today’s increasingly powerful smartphones and other mobile devices. With so enormous a market at stake, vendors can easily conclude that they are better off pushing for open standards in order to ensure that they have a chance at securing some piece of the pie, rather than rolling the dice in a high stakes gamble to achieve monopoly power and ending up completely out of the game. The success of Linux in the mobile marketplace and the recent opening up of platforms by major telecom carriers are current evidence of just such a market conclusion.
Looking back a decade from now, we will see few products, few services – and few vendors – that do not show the dramatic effects of this transition. Resisting the power of the network effect of the Internet will be like resisting the impact of the railways. Vendors that do not embrace that reality, including by adopting the standards that will continue to be developed to serve the Internet, will find themselves left in the dust as absolutely as a 19th century town that the railway passed by.