Practical Technology

for practical people.

The Birth and Rise of Ethernet: A History


Nowadays, we take Ethernet for granted. We plug a cable into a jack in the wall or into a switch and we get the network. What’s to think about?

But it didn’t start that way. In the 60s and 70s, networks were ad hoc hodgepodges of technologies with little rhyme and less reason. Then Robert “Bob” Metcalfe was asked to create a local area network (LAN) for Xerox’s Palo Alto Research Center (PARC). His creation, Ethernet, changed everything.

Back in 1972, Metcalfe, David Boggs, and other members of the PARC team assigned to the networking problem weren’t thinking of changing the world. They only wanted to enable PARC’s Xerox Altos (the first personal workstations with a graphical user interface and the Mac’s spiritual ancestor) to connect to and use the world’s first laser printer, the Scanned Laser Output Terminal.

It wasn’t an easy problem. The network had to connect hundreds of computers simultaneously and be fast enough to drive a very fast (for the time) laser printer.

Metcalfe didn’t try to create his network from whole cloth. He used previous work for his inspiration.

In particular, Metcalfe looked to Norman Abramson’s 1970 paper about the ALOHAnet packet radio system, which was used for data connections between the Hawaiian Islands. Unlike ARPANet, in which communications relied on dedicated connections, ALOHAnet’s transmissions used a shared medium: UHF radio frequencies.

ALOHAnet addressed one important issue: how the technology coped when packets collided because two radios were broadcasting at the same time. The nodes would rebroadcast these “lost in the ether” packets after waiting a random interval of time. While this primitive form of collision recovery worked relatively well, Abramson’s original design showed that ALOHAnet would saturate at only 17% of its theoretical capacity.

In graduate school, Metcalfe had worked on this problem and discovered that, with the right packet-queuing algorithms, you could reach 90% of the potential traffic capacity. His work would become the basis of Ethernet’s media access control (MAC) rules: Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
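The idea behind CSMA/CD is simple enough to sketch in a few lines. This is a toy Python illustration, not Metcalfe’s algorithm or the 802.3 state machine: the `channel_busy` and `detect_collision` hooks are hypothetical stand-ins for the carrier-sense and collision-detect hardware, and the backoff follows the truncated binary exponential scheme that Ethernet later standardized.

```python
import random

def backoff_slots(attempt, max_exp=10):
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times drawn from 0 .. 2^min(n, 10) - 1."""
    return random.randint(0, 2 ** min(attempt, max_exp) - 1)

def send_frame(channel_busy, detect_collision, max_attempts=16):
    """Simplified CSMA/CD send loop.

    channel_busy() and detect_collision() are hypothetical stand-ins
    for the physical carrier-sense and collision-detect circuitry.
    Returns the attempt number on which the frame got through.
    """
    for attempt in range(1, max_attempts + 1):
        while channel_busy():        # carrier sense: defer while busy
            pass
        if not detect_collision():   # transmit; no collision means success
            return attempt
        # collision detected: back off a random number of slot times
        _ = backoff_slots(attempt)
    raise RuntimeError("excessive collisions, frame dropped")
```

Randomizing the wait is what breaks the deadlock: two stations that collided are unlikely to pick the same backoff, and doubling the window after each collision adapts the retry rate to how congested the cable is.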

For PARC, though, a wireless solution wasn’t practical. Instead, Metcalfe turned to coaxial cable. But rather than call it CoaxNet or stick with the original name, the Alto Aloha Network, Metcalfe borrowed an obsolete term from 19th-century scientific history: ether. In 19th-century physics, “luminiferous ether” was the name used for the medium through which light traveled.

In a 2009 interview, Metcalfe explained, “The whole concept of an omnipresent, completely passive medium for the propagation of electromagnetic waves didn’t exist. It was fictional. But when David [Boggs] and I were building this thing at PARC, we planned to run a cable up and down every corridor to actually create an omnipresent, completely passive medium for the propagation of electromagnetic waves. In this case, data packets.” Appropriately enough, the first nodes on the first Ethernet were named Michelson and Morley, after the scientists who had disproved the existence of the ether.

On May 22, 1973, Metcalfe wrote a memo to management explaining how it would work. Once approved, the coaxial cable was laid in PARC’s corridors, and the first computers were attached to this bus-style network on November 11, 1973. Within PARC’s halls, Ethernet, with its 3Mbps (megabits per second) speed, was immediately successful.

Metcalfe’s first Ethernet sketch.

For the next few years, Ethernet remained a closed, in-house system. Then, in 1976, Metcalfe and Boggs published a paper titled “Ethernet: Distributed Packet-Switching for Local Computer Networks.” Xerox patented the technology, but unlike so many modern companies, Xerox was open to the idea of opening up Ethernet to others.

Metcalfe, who left Xerox to form 3Com in 1979, shepherded this idea and got DEC, Intel, and Xerox to agree to commercialize Ethernet. The consortium, which became known as DIX, had its work cut out for it. Aside from internal conflicts (gosh, we’ve never seen any of those since then, have we?), the IEEE 802 committee, which DIX hoped would make Ethernet a standard, wasn’t about to rubber-stamp it. It took years, but on June 23, 1983, the IEEE 802.3 committee approved Ethernet as a standard. That is to say, Ethernet’s CSMA/CD was approved; there were some slight differences between 802.3 and what had by then evolved into Ethernet II (aka DIX 2.0).

By now, Ethernet had reached a speed of 10Mbps and was on its way to becoming wildly popular. (At least among networking geeks, the people who could name the seven layers of the OSI model off the top of their head. Our sort of folks, that is.) In part, that was because the physical design was improving. The first Ethernet used 9.5mm-thick coaxial cable, also called ThickNet, or as we used to curse it as we tried to lay out the cables: the Frozen Yellow Snake. To attach a device to this 10Base5 physical medium, you had to drill a small hole in the cable itself to place a “vampire tap.” It was remarkably hard to deploy.


Back in the 80s, connecting to a 10Base5 (thicknet) network was a big job.

10Base2, a.k.a. thinnet, which used cable TV-style RG-58A/U cable, made it much easier to lay out network cable. In addition, you could now easily attach a computer to the network with T-connectors. But 10Base2 did have one major problem: if the cable was interrupted anywhere, the entire network segment went down. In a large office, tracking down the busted connection that had taken down the entire network was a real pain in the rump. I speak from experience.

By the late 1980s, both 10Base5 and 10Base2 began to be replaced by unshielded twisted-pair (UTP) cabling. This technology, 10BaseT, and its many descendants (such as 100BASE-TX and 1000BASE-T) are what most of us use today.

But Ethernet and these cabling choices were not without alternatives. In the early 80s, Ethernet faced serious competition from two other networking technologies: Token Bus, championed by GM for factory networking, and IBM’s far more popular Token Ring, IEEE 802.5.

Token Ring used its bandwidth more efficiently. Its larger packet sizes — Token Ring at 4Mbps had a maximum packet size of 4,550 bytes, compared to 10Mbps Ethernet’s 1,514 bytes — made it effectively faster than Ethernet for some workloads. And 16Mbps Token Ring was clearly faster to (relative) laymen who couldn’t get their heads around true line speed.
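A rough back-of-the-envelope calculation shows why bigger frames mattered: moving the same data in larger frames means fewer frames, and so less time lost to per-frame headers and trailers. The frame sizes below come from the figures above; the per-frame overhead byte counts are rough textbook-style assumptions for illustration, not measured values.

```python
import math

DATA = 1_000_000  # bytes to transfer

# (name, max frame size in bytes, assumed header+trailer overhead per frame)
for name, max_frame, overhead in [("Ethernet (1,514-byte frames)", 1514, 18),
                                  ("Token Ring (4,550-byte frames)", 4550, 21)]:
    payload = max_frame - overhead         # usable bytes per frame
    frames = math.ceil(DATA / payload)     # frames needed for the transfer
    wire = frames * max_frame              # total bytes put on the wire
    print(f"{name}: {frames} frames, {wire / DATA:.3f}x bytes on the wire")
```

Under these assumptions, Token Ring needs roughly a third as many frames as Ethernet to move a megabyte, which is the efficiency argument its proponents made — though, as the article notes, raw line speed told a different story.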

Another Ethernet challenger was the Attached Resource Computer NETwork, better known as ARCnet. It was created in the 70s as a proprietary network by Datapoint Corp. Like Ethernet and Token Ring, ARCnet was opened up in the 80s. ARCnet was also a token-based networking protocol, but it used a bus rather than a ring architecture. In its day, its simple bus-based architecture and 2.5Mbps speed made it attractive.

Several things assured that Ethernet would win. First, as Urs von Burg describes in his book, The Triumph of Ethernet, DEC decided early on to support Ethernet. This gave the fledgling networking technology significant support in the IEEE standardization process.

Ethernet was also a far more open standard. IBM’s Token Ring was, in theory, open; but Metcalfe has said that in reality non-IBM Token Ring equipment seldom worked with IBM computers. Ethernet soon had over 20 companies supporting it, and their cost-competitive standards-based products worked together. (Mostly. With late 1980s networks, most of us tended to choose one hardware vendor for Ethernet cards and stick to that brand.) ARCnet, which only moved up to 20Mbps in 1992 with ARCnet Plus, was slower than both by the late 80s and early 90s.

In no small part because it was both open and had many developers working on it, Ethernet also quickly closed the technology gap between it and Token Ring.

In particular, 10BaseT, which became an IEEE standard in 1990, allowed the use of hubs and switches. This freed Ethernet from its often cumbersome bus architecture in favor of a flexible star topology. The change let network administrators manage their networks much more easily and gave users far more flexibility in where they could place their PCs.

It also didn’t help Token Ring that by the early 90s, 10BaseT Ethernet was simply much cheaper, no matter what metric you used. The final straw came with the widespread introduction of Ethernet switching and 100Mbps Ethernet.

Today, there may still be old Token Ring networks running somewhere, but they’re historical curiosities. True, 802.11n and other Wi-Fi technologies have become immensely popular. But to supply those Wi-Fi access points with network connectivity, and for businesses that don’t want their secrets constantly broadcast to the world, Ethernet will always have a role.

This story first appeared in HP Input/Output
