If you do not happen to be an expert in computer networking algorithms, the headline description of US Patent #7339900 may seem a little mysterious to you: “Method and apparatus for preventing spanning tree loops during traffic overload conditions.” The lay reader might be forgiven for thinking it involves some kind of innovation to help city planners route automobile traffic around fallen trees. But in fact, Patent #7339900—one of more than a hundred held by pioneering network designer Radia Perlman— is a tool for stabilizing information traffic, not automotive. When you connect your local Ethernet network—your computers and smart TVs and game consoles—to the wider Internet, and the information flows reliably between those two networks, that connection is quietly made possible by innovations like those described in Patent #7339900.
Our understanding of networks—how to make them stable, self-correcting, efficient—is a remarkably young science, given how much the modern world now relies on the consistent flow of information across them. We tend to focus on the glittery applications that we interact with directly—TikTok and Chrome and Spotify—but all those apps would be worthless without the quiet miracle of stable network architecture, seamlessly connecting servers and routers all around the world. That architecture was the product of many minds, each contributing small pieces to the puzzle that over time has given us the network stability that the entire world depends on. Radia Perlman contributed more than her fair share of those pieces over her long career—and probably did as much as anyone to explain how stable networks worked, teaching a whole generation of network designers in her wake.
But if US Patent #7339900 is only part of the story of the network revolution—one of many patented breakthroughs that have long been neglected by the very people who benefit from them—it does stand out from the pack in one crucial respect: it is probably the only software patent on record that features a poem.
The fact that Radia Perlman is now a member of the Internet Hall of Fame, and is widely considered to be one of the pioneers behind the networking revolution, would have likely surprised Perlman herself growing up in suburban New Jersey in the 1960s. “There's this stereotype of engineers that they always took things apart,” she says now with a smile. “I never took anything apart. It never occurred to me to do that. I would have assumed that I would break it or get electrocuted or something.”
We sometimes forget that while software became a field dominated by men in the 80s and 90s—now thankfully trending back towards a more gender-diverse mix—many influential earlier programmers in the 1960s were women, like Grace Hopper, a driving force behind the COBOL programming language. But while Perlman got a taste of programming in her high school years, when a teacher encouraged her to take an outside class on computer science at the Stevens Institute of Technology, the initial encounter was not exactly an encouraging one.
In addition to assisting with the creation of COBOL, Hopper—who was both a Navy Rear Admiral and a computer scientist—also devised one of the very first compilers, a now essential resource that enables programmers to translate human-readable instructions to the zeroes and ones of binary code. But Hopper was only one of many female coders from that period. Six women were in charge of writing the software for ENIAC, the first programmable digital computer, developed in the 1940s. More than a quarter of all programmers in the 1960s were women. (By comparison, only about 4% of lawyers were women back then.) The percentage of female coders peaked in 1990, and since then has retreated to the level it was back in Grace Hopper’s day.
“Everyone was bragging about how they had built ham radios when they were seven. I had no idea what a ham radio was,” she recalls. “And then they were asking questions with fancy words, like input. I had no idea what that meant, and so my mind shut down. I got nothing out of that class because I felt I was so many years behind, I’d never catch up. If someone had asked me what I was interested in doing, I would have said: Oh anything, as long as it's reasonably interesting—and as long as it doesn't involve computers,” she adds. “When I learned to program a few years later, I realized it was fun and not difficult, but this experience made me understand the sense of panic that can cause someone perfectly capable of learning something to fail because they believe they cannot do it.”
Shortly after that unpromising taste of programming at Stevens, Perlman enrolled as an undergrad at MIT, one of only 50 women in a freshman class of roughly a thousand. In an interview with The Atlantic several years ago, Perlman recalled that women were so scarce on the MIT campus that she’d find herself startled when she encountered another woman in her classes. “I’d notice that it kind of looked weird…this other gender person looking curiously out of place in the crowd. I’d have to remind myself that I was also that 'other gender.'"
MIT only began a serious effort to attract and retain female students in the early 1960s. At the start of the decade, the first-year cohort was roughly 2% women, and only about half of them managed to graduate. In part, this was due to the fact that the university had no on-campus housing for women, which both restricted the number who could be admitted, and made attending the school more challenging for those who enrolled. But the first all-female dorms were built in the mid-sixties, and by the time Perlman enrolled, female MIT students were actually more likely to complete their degrees than men.
While Perlman was majoring in math, her years at MIT gave her another pass at computer science. “I was taking a physics class and the TA said: I have a project and I need a programmer. Would you like to be my programmer? And I said: I don't know how to program. And he said: yes, I know that's why I'm asking you because I have no money to pay you.” Before long, Perlman had taken a job at nearby BBN Technologies, one of the core organizations responsible for the development of ARPANet, the early predecessor of the Internet.
One of the earliest networks was ARPANet, established by the Advanced Research Projects Agency (ARPA) of the US Department of Defense. The project was initiated in the 1960s, at a time when computers were large machines connected via dedicated links. Before ARPANet, highly centralized systems followed the model of the circuit-switched telephone network, in which a dedicated communication channel connects two nodes for as long as they are talking (just as in the early analog telephone system). That model was fragile and unreliable, and ARPANet's designers faced a pointed challenge: build a network that could keep functioning even after portions of it were removed or destroyed (a real concern given the threat of nuclear war). It took a few years to build, but in 1971 the network was declared operational.
It was at BBN that Perlman found her true calling, exploring the new science of network protocols. Software programs were linear, prescriptive; they worked because they operated in an environment that was utterly predictable, just you and your code in dialogue with a CPU that followed a fixed logic. Network protocols were about something different: the flow of information, and the strange eddies and whorls that could form in those networks. A key part of designing the protocols involved building systems that were resilient when confronted with unpredictable behavior. A protocol designed to last for decades has to be flexible enough to interact with computers that haven’t even been dreamt of yet, much less manufactured. And it has to be able to adapt to new network architectures that might emerge, new ways of connecting new machines.
“BBN was really the dawn of networking,” Perlman says now. “I was designing routing algorithms, and I discovered that I loved it. You have all these little things, all doing their own little thing based on local information. And somehow it all fits together, like a beautiful symphony. I can think that way, whereas a lot of people can only think in a linear way.”
The Cambridge-based company BBN played a crucial role in the development of networking technology in the 1960s and 1970s. Its name derives from the last names of its founders—Bolt, Beranek, and Newman—though over time the organization would be associated with even more famous names from the history of computing: AI pioneers Marvin Minsky and John McCarthy were consultants for the firm, while early Internet architects like Bob Taylor and Ray Tomlinson (who popularized the use of the @ symbol in email messages) were full-time employees. In the late 1960s, the government research agency ARPA contracted BBN to design the very first routers used in ARPANET, the predecessor of the modern Internet. In 2009, the company was acquired by the defense contractor Raytheon.
One of Perlman’s first published papers identified a crucial vulnerability in the design of ARPANet. “A few mangled packets could cause the network to go down forever,” she explains. “If your PC gets into a bad state, you just reboot it. But you can’t reboot a network.” In those days, rebooting the ARPANet would have meant calling all the computer science labs around the country connected to ARPANet and asking them to individually turn their routers off and on again in sync with everyone else. This was an unacceptable burden even back in the 1970s, when the total number of nodes on the ARPANet was still relatively small; on today’s Internet it would be unthinkable.
Diagnosing that vulnerability made it clear to Perlman that what networks required was what she calls “self-stabilization”—the ability to recover from mangled information or bad actors or crashed computers. One of the great under-appreciated miracles of modern life is the fact that the Internet itself—this massive network of networks connecting billions of devices—is staggeringly reliable. That stability emerged out of thousands of specific design tweaks to the underlying protocols, but it also, more generally, emerged out of the pursuit of scalability and self-stabilization as principal goals of network architecture. Perlman wanted to make network routing easier to manage, an essential step in making networks usable by real people, not just computer science researchers. “The spanning tree algorithm grew out of an industry mistake, thinking that Ethernet was a network, rather than a link in a network,” Perlman points out.
The original Ethernet was just a shared link, where any message would be received by all the attached computers. Ethernet's innovation was to let many computers share that link without talking over one another and garbling each other's messages. But it had its limitations: the first applications would only work on a single link rather than on an entire multilink network, which made it impossible for a computer on one Ethernet to talk to a computer on another. Perlman's work on “layer 3” made it possible for the network to forward messages between links and find a path of many links across the network, a process known as “routing.”
The Spanning Tree Protocol that Perlman designed in the early 1980s is a marvelous case study in network protocol design. It’s not a feature that users experience directly—like a cloud-based word processor or a hypertext browser—but without the resilience it creates, our experience of those more obvious features would be seriously impaired.
Perlman’s first major work after joining Digital Equipment Corporation (DEC) in the early 1980s was designing routing for DECnet, the company's networking protocol suite; the link-state routing she developed there became the basis of IS-IS, a protocol still widely deployed on the Internet today. Her contributions and innovations made networks more robust, easier to manage, and more scalable. But there were still certain limitations. As Perlman explains it, “when Ethernet was introduced, the industry made the mistake of thinking that Ethernet was ‘the network’ instead of a link in a network. Just as the post office depends on people putting a letter into an envelope saying where it’s supposed to go, routers depend on computers that transmit data adding information that the routers can read.”
When people built applications assuming the entire network was a single Ethernet, routers were not able to forward data from one Ethernet to another. The basic concept of a bridge was to listen to all transmitted packets and forward them onto the bridge’s other ports. But that would be a disaster if there were loops. The solution was the Spanning Tree Protocol.
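Why a loop is so disastrous for bridges can be seen in a toy simulation (a simplified sketch with an invented four-bridge topology, not real bridge code): a bridged Ethernet frame carries no hop count, so once a broadcast enters a looped topology, the copies multiply without bound.

```python
def flood(links, start, rounds=5):
    """Count in-flight copies per round of naive bridge flooding:
    each bridge re-forwards a frame out of every port except the
    one it arrived on, and frames carry no hop count."""
    frames = [(start, None)]  # (current bridge, port it arrived from)
    counts = []
    for _ in range(rounds):
        frames = [
            (neighbor, bridge)
            for bridge, came_from in frames
            for neighbor in links[bridge]
            if neighbor != came_from  # forward on all other ports
        ]
        counts.append(len(frames))
    return counts

# Invented topology: four bridges, fully meshed (every loop you could want).
mesh = {b: [o for o in ("B1", "B2", "B3", "B4") if o != b]
        for b in ("B1", "B2", "B3", "B4")}
print(flood(mesh, "B1"))  # copies double every round: [3, 6, 12, 24, 48]
```

In a loop-free topology the same flooding rule terminates on its own, which is exactly the condition the Spanning Tree Protocol guarantees by deactivating redundant links.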
Digital Equipment Corporation, or DEC, was one of the major American computer companies, once a powerful enough player to be dubbed "the nation's second-largest computer company, after IBM." But DEC failed to notice the market shift towards personal computers, and it is often cited, alongside Kodak and Nokia, as an example of a big enterprise that disregarded the need to keep innovating. Nevertheless, DEC left a rich legacy in software, hardware, and networking that many tech giants have benefited from.
A network is a kind of relay race, with packets of information passed along instead of batons. A computer sends a packet of information out into the network with an ultimate destination attached to it, and it is relayed from one node to another along the network until it reaches that destination. Imagine a city mail system, where individual letters are passed along from mailbox to mailbox, slowly making their way towards the recipient’s address: Mailbox #1 doesn’t know how to get the message all the way to the final destination, but it does know to send it to Mailbox #2, which is closer to the recipient’s address. And Mailbox #2 knows enough to send it to Mailbox #3. The problem with a system like this is that loops occasionally appear in the network. A looping message will not circulate forever, because it is discarded after a certain number of forwards, but it may never reach its destination.
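That hop-count safeguard can be sketched in a few lines of Python (a toy model; the topology, names, and hop limit are invented for illustration):

```python
# Each "mailbox" (router) knows only the next hop toward the destination.
# A packet carries a TTL (time to live) that is decremented at every hop,
# so a looping packet is eventually discarded instead of circulating forever.

next_hop = {"A": "B", "B": "C", "C": "D", "D": "C"}  # C <-> D is a loop

def forward(start, dest, ttl=8):
    """Relay a packet hop by hop; return the path, or None if TTL expires."""
    path, node = [start], start
    while node != dest:
        if ttl == 0:
            return None  # discarded: too many forwards (a loop, most likely)
        node = next_hop[node]
        path.append(node)
        ttl -= 1
    return path

print(forward("A", "C"))  # ['A', 'B', 'C']: delivered normally
print(forward("A", "E"))  # None: the packet bounces between C and D
```

The discard keeps the network from choking on undeliverable traffic, but as the article notes, it does nothing to get the message to its recipient; for that, the loop itself has to go.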
Without getting into technical details: existing routers could not forward packets between links unless the end nodes added extra addressing information to the data. A bridge, by contrast, would forward information even though the end nodes behaved as if they were attached to a single link. But a bridged network only works if the topology contains no loops.
The bridging problem was handed to Perlman by her manager at DEC. “Now, he mentioned this on a Friday and he was on vacation the following week. This was before email or cell phones, so he was going to be absolutely unreachable,” she says. It seemed, initially, to be a daunting task, perhaps an impossible one. But it was also exactly the sort of problem that Radia Perlman had been wrestling with since those initial days at BBN: how to make networks self-stabilize in unpredictable conditions.
“So that night I realized: oh, wait a minute—it’s trivial. I know just how to do it. I can prove it works,” she recalls. The key insight behind the solution came from a metaphor of sorts: imagining the network not as a series of mailboxes or superhighways, but instead as a tree.
Even though she describes spanning tree bridging as a kludge to rescue the industry from a mistake, it is very widely deployed today. The original shared-link Ethernet could only support a few hundred nodes. Spanning tree bridging has replaced the original shared-link concept of Ethernet, and what people know of as “Ethernet” today can support very large networks.
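The heart of the idea can be sketched as follows (a simplified, centralized illustration with an invented topology; the real protocol computes the same tree distributively, with bridges exchanging configuration messages rather than any one node seeing the whole graph): elect the bridge with the lowest ID as the root, keep the links that form shortest paths to it, and block everything else.

```python
from collections import deque

def spanning_tree(links):
    """Elect the lowest-ID bridge as root, then keep one shortest-path
    link per bridge toward the root (breadth-first search). Every other
    link is blocked, leaving a loop-free topology."""
    root = min(links)                      # lowest bridge ID wins the election
    parent = {root: None}
    queue = deque([root])
    while queue:
        bridge = queue.popleft()
        for neighbor in sorted(links[bridge]):  # deterministic tie-breaking
            if neighbor not in parent:
                parent[neighbor] = bridge
                queue.append(neighbor)
    active = {tuple(sorted((b, p))) for b, p in parent.items() if p}
    return root, active

# Invented topology with a loop: a B1-B2-B3 triangle plus a spur to B4.
links = {"B1": ["B2", "B3"], "B2": ["B1", "B3"],
         "B3": ["B1", "B2", "B4"], "B4": ["B3"]}
root, active = spanning_tree(links)
print(root, sorted(active))
# B1 is root; the B2-B3 link is blocked, breaking the loop
```

The blocked link is not wasted: if an active link fails, the protocol recomputes the tree and a blocked link comes back into service, which is the self-stabilization the article describes.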