Grandma COBOL: How Grace Hopper made computers more human

BY STEVEN JOHNSON

Other computing pioneers might have made our digital machines more efficient or powerful. Grace Hopper made them more like us.

Illustration: Krzysztof Nowak
Grace Hopper

The history of progress in computing can be measured across two primary axes. The first often goes by the name Moore’s Law: the exponential increase in processing power and efficiency that now puts more computing power in our pockets than existed in the entire world in the 1960s. But there is another, equally important transformation that has been more a matter of art and design than raw engineering: our computers have become increasingly human in the way they interact with us. We’ve gone from switchboards and punch cards and inscrutable machine code to user-friendly graphic interfaces and talking assistants.

Grace Hopper, 1940 © Unknown author, Public domain, via Wikimedia Commons

That shift towards more human interfaces is even transforming the world of programming itself. Of all the intriguing new developments in artificial intelligence of the past few years, one of the most striking—to anyone who has spent any time developing software—is the rise of “AI-based programming assistants”: AI agents that can actually write functional code on their own. Instead of forcing you to learn the intricacies of a language like C++ or Python, the latest tools offer a radical new proposition: you simply tell the computer in ordinary language what you want your new program to do—create a to-do-list manager, re-create the classic video game Pong—and the AI agent will write the code for you automatically.

AI Agents

AI-based tools for programming—including software like GitHub Copilot and OpenAI’s Codex—have proliferated over the past few years as Large Language Models have matured enough to actually perform useful tasks. Interestingly, the programming skills of the AI models were an accidental discovery. When the team at OpenAI originally developed their flagship language model GPT-3, they did not specifically design it to be able to write code. But as it happened, the vast training data that they used to build the model—a significant slice of the entire Internet—contained countless examples of working code, accompanied by explanations of what the code was actually doing. In their early explorations with the model after training, they found, to their surprise, that the software had acquired rudimentary programming skills. AI innovators later refined those skills with deliberate training sets focused on code examples, creating the array of products now available on the market.

Grace Hopper’s legacy lies in the simple fact that she made these remarkable new machines more human.

Programming “co-pilots” may be one of the hottest areas in AI research right now, but the underlying idea behind them belongs to a tradition that dates back more than 80 years to the very origins of the computer age. The central premise is that digital computers require layers of translation, converting human concepts and instructions into the underlying zeroes and ones that all digital computation depends on. A machine that can only “think” in machine language is not nearly as powerful as a machine that can process and express itself in statements that resemble human languages. That idea seems almost obvious to us now, but it was a controversial proposition when it was first proposed by one of the most charismatic and influential figures in the early years of software: Grace Hopper. Her career involved a number of milestones in technological history; she programmed one of the very first computers ever made, and helped design one of the first programming languages. But more than anything else, Grace Hopper’s legacy lies in the simple fact that she made these remarkable new machines more human.

In pursuit of a career in the Navy

Grace Hopper upon graduation from Midshipman's School, June 27, 1944. © Grace Murray Hopper Collection, Archives Center, National Museum of American History, Smithsonian Institution

Hopper’s career as a software pioneer was set in motion by two developments: an unhappy marriage and the onset of war. Born to a well-to-do Manhattan family, the daughter of an insurance executive and a mathematician, Hopper displayed an early propensity for math, majoring in the subject at Vassar and going on to complete a doctorate at Yale in 1934—the eleventh woman to receive a math PhD in the school’s history. She landed a job teaching at Vassar and married a literature professor named Vincent Hopper. Even in those early years, her professorial style gave a hint of what was to come in her career. According to her biographer Kurt Beyer, “In her probability course, she began with a lecture on one of her favorite mathematical formulas and asked her students to write an essay about it. These she would mark for clarity of writing and style.”

Time for change. Illustration: Krzysztof Nowak

“It was no use trying to learn math,” she would tell her perplexed students when they complained that they were not taking an English composition course, “unless you can communicate it with other people.”

In the early 1940s, Hopper found herself increasingly bored with the routines of teaching and unsatisfied in her marriage. After the Japanese attacked Pearl Harbor and the United States entered World War II, Hopper decided the time was right for a change. She parted ways with her husband and began looking for a way to make herself useful to the war effort. One promising opening appeared in the middle of 1942, when Congress passed the Navy Women’s Reserve Act, which opened up a range of non-combat positions in the Navy for female recruits.

Initially, Hopper was rejected from the service for physical reasons: she weighed only 105 pounds, a full 15 pounds below the Navy’s minimum for a woman of her height. But her brainpower ultimately secured her a special exemption. A new class of machines with odd names, like the Automatic Sequence Controlled Calculator, was suddenly becoming crucial to the war effort. These machines promised to perform complex calculations like logarithms and trigonometric functions, crucial for creating ballistic tables or solving engineering problems like the propagation of radio waves. But the machines were so new they lacked anything even resembling a manual—not to mention a customer support desk—and figuring out how to operate them required advanced mathematical skills. A math PhD—even one who weighed only 105 pounds—could be invaluable to the Allied cause. And that is how Navy Lieutenant Grace Hopper found herself assigned to the Bureau of Ships Computation Project at Harvard University. There she would work on one of the very first computers ever made, the Mark I.

Lt. Hopper with her comrades from the Navy, 1944 or 1945. She was 15 pounds below the Navy’s minimum weight for a woman of her height, but her brainpower secured her a special exemption. © Grace Murray Hopper Collection, Archives Center, National Museum of American History, Smithsonian Institution

Although it was far more powerful than any existing calculator, the Mark I was only a distant relative of today’s computing technology. To begin with, it had an enormous number of moving parts, in the form of electromechanical relays, then commonly deployed in telephone switchboards, that used magnetism to pull two contacts together or keep them apart. (Later machines—some of which Hopper would work on as well—would replace those relays with vacuum tubes and then, eventually, the integrated circuits of modern microprocessors.)

Mark I computer. © Grace Murray Hopper Collection, Archives Center, National Museum of American History, Smithsonian Institution
A little woman and a big machine. Illustration: Krzysztof Nowak

The Mark I was also enormous: eight feet high and 51 feet long, with 530 miles of internal wiring connecting its relays. It weighed almost five tons and featured 750,000 moving parts. “The massive machine’s most noteworthy feature,” Beyer notes, “was a paper-tape mechanism. The tape was pre-coded with step-by-step instructions that dictated the machine’s operation and guided it toward a solution without the need for further human intervention.” Those rolls of paper-tape instructions didn’t look like much at the time: just random sequences of holes punched into a long roll of paper, like something you might have seen attached to a cash register. But they were a kind of signal from the future. Understanding how to structure those paper-tape instructions so that they elicited the desired behavior from the machine required an entirely new form of expertise, one that would soon become one of the most lucrative and influential skills in the world: programming.

Tape-based mechanism

The Mark I’s rolling tape mechanism was related to a key medium for digital information that persisted well into the 1970s: the punch card. Small holes punched in a series of index cards were used both to store information and to convey instructions to the computer. Punch cards had a curious history: they were originally developed for automated weaving machines back in the 19th century, the invention of a brilliant French engineer named Joseph Marie Jacquard. Inspired by the mechanical dolls and music boxes that entertained the French elite during that period, Jacquard hit upon the idea of controlling a mechanical loom with a kind of code imprinted on a punched card, with the holes on the card conveying instructions about the pattern the loom would weave. Despite its origins in the textile industry, the Jacquard Loom is considered an important early technology in the pre-history of computing, because the inventor Charles Babbage adapted Jacquard’s punch cards for his pioneering Analytical Engine—one of the first designs for a true programmable computer—several decades later.

Mark I computer. © Grace Murray Hopper Collection, Archives Center, National Museum of American History, Smithsonian Institution

Shaping modern programming

“There was no such thing as a programmer at that point. We had a code book for the machine and that was all,” Hopper would recall in an interview more than four decades later. “It listed the codes and what they did, and we had to work out all the beginning of programming.” Because the Mark I relied on such mechanical complexity, Hopper soon realized that she would have to augment her mathematical skills with practical engineering.

“There was no such thing as a programmer at that point.”

She studied the machine’s blueprints and circuit diagrams, trying to build a mental model of the functions behind each switch and relay. Working closely with the Mark I’s inventor, Howard Aiken, and a colleague, Richard Bloch, Hopper improvised a set of ingenious coding strategies to cajole the temperamental computer into generating reliable calculations. Because the Mark I was astonishingly slow by modern standards—it could only execute three instructions per second, billions of times slower than the microprocessor in a modern smartwatch—Hopper and Bloch were forced to invent techniques to maximize the efficiency of their programmed instructions. They began borrowing snippets of each other’s code. “If I needed a sine routine, angles less than pi over four,” Hopper later recalled, “I’d call over Dick Bloch and say, ‘Can I copy your sine routine?’” Over time the practice would develop into the now ubiquitous custom of shared software “libraries,” where common functions can be easily inserted into a specific program without having to write every single instruction from scratch.
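The practice is easy to recognize in modern terms. Here is a minimal sketch in Python (the module and routine names are hypothetical, purely for illustration): write the sine routine once in a shared module, and let every program that needs it borrow it rather than rebuild it.

    # shared_math.py -- a tiny shared "library" of routines, in the
    # spirit of the code-sharing Hopper and Bloch improvised on the Mark I.
    import math

    def sine(x: float, terms: int = 7) -> float:
        """Approximate sin(x) with a truncated Taylor series;
        most accurate for small angles (e.g. |x| < pi/4)."""
        return sum(
            (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
            for n in range(terms)
        )

    # ballistics_report.py -- any other program simply borrows the
    # routine instead of rewriting it from scratch:
    #
    #     from shared_math import sine
    #     print(sine(0.5))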

Howard Aiken and beginnings of programming

Aiken had long envisioned a large-scale computing device that would take over the tedious calculations in his own research, but at first he failed to attract any support for his futuristic idea. After obtaining a PhD in physics from Harvard, and inspired by a concept Charles Babbage had described 100 years earlier, he finally got his project approved and its funding secured in February 1939. Five years later, the Mark I, the first operating machine that could execute long computations automatically, was completed and installed at Harvard, proving that a machine could be programmed to execute a sequence of predetermined operations. Aiken’s work was an important milestone in extending the application of computers beyond mathematical problems. He was also the father of the world’s first graduate program in computer science, at Harvard.

Howard Aiken © Public domain, via Wikimedia Commons
The dawn of programming. Illustration: Krzysztof Nowak

Even in those early days, Hopper’s skills as a conceptual translator came in handy. A military officer would relay a new problem that required the Mark I’s computational prowess to solve, but wouldn’t be able to express the nature of the problem in the form of precise equations. “I learned languages of oceanography, of this whole business of minesweeping, of detonators, of proximity fuses, of biomedical stuff,” Hopper said in an interview. “We had to learn their vocabularies in order to be able to run their problems.” The pressure was relentless. While they were far from the front lines, feeding paper tape into the whirring mechanical behemoth in Cambridge, Hopper knew that the calculations they were generating were translating directly into lives and deaths on the battlefield. There was little time to sit back and reflect on the technological revolution they were witnessing first-hand. “The whole thing was the war, the end of the war, getting a job done, terrific pressure,” Hopper recalled near the end of her life. “We didn’t think ahead at all.”

Ironically, the one moment during that period when Hopper began to contemplate the long-term implications of the computing revolution came during a brief holiday leave in December of 1944. Back on the Upper West Side with her family, she described her work on the Mark I to her insurance executive father, who instantly grasped that the corporate world would find immense value in such a machine—particularly a data-intensive business like insurance, dependent as it was on calculating probabilities. “Now whether he had gone to New York and mentioned that to any of the insurance companies, I’ll never know, but that was the first mention that I ever heard of using the computer in industry,” Hopper recalled. “It was from my own father.”

Commander Aiken and Lieutenant Hopper working together, 1944 and 1946. © Grace Murray Hopper Collection, Archives Center, National Museum of American History, Smithsonian Institution

Eventually, Aiken decided that Hopper’s language skills made her the ideal candidate to write a proper instruction manual for using the Mark I. While Hopper resisted initially, she ultimately threw herself into the project, creating a document that was both an intellectual history and a how-to guide. She introduced the Mark I—and the notion of programmable computers generally—via a sweeping survey of calculating devices, from the abacus technology developed in the Tigris-Euphrates valley to the philosopher Blaise Pascal’s “arithmetic machine.” She also paid tribute to the Victorian inventor Charles Babbage, whose Analytical Engine—first sketched out in the 1830s but never built—was a close analogue of the Mark I itself. With that context established, Hopper then delivered another 300 pages of detailed instructions, with chapter headings like “How the electromechanical counters work” and “Interpolators and value tapes.” Today there are entire sections of bookstores devoted to how-to guides for the latest programming platforms: TensorFlow, Unreal Engine, blockchain, JavaScript. Hopper’s manual for the Mark I marked the origin of that publishing niche—and to this day, it remains one of the most literate, historically grounded examples of the genre.

Analytical Engine

The Analytical Engine was the brainchild of the maverick British inventor Charles Babbage, and is now considered the predecessor of the modern programmable computer. Like the punch card inventor Jacquard, Babbage had been inspired by the “automata” of the 18th century: mechanical toys that could be instructed to move in surprisingly lifelike ways. His childhood experiences with those devices ultimately led him to the idea of a machine that might be capable of thinking. Designed in the 1830s but never fully built, the “engine” anticipated many of the core architectural features of digital-age machines: a central processing unit that Babbage called the “Mill”; a working memory unit called the “Store”; and programs that would be fed into the machine using punched cards. While Babbage was never able to build a functional version of the machine, in the early 1990s the London Science Museum did manage to construct a working version of another, slightly less ambitious machine that Babbage had designed, the Difference Engine No. 2.

Legend has it that Hopper’s time at Harvard also generated another now-familiar component of the digital world: the idea of a computer “bug.” While working on the Mark II, the successor to Aiken’s original computer, Hopper and Bloch found themselves in a Harvard lab with no screens on the windows. “We were working on it at night, of course,” Hopper recalled, “and all the bugs in the world came in. And, one night the Mark II conked out.” When they went looking through the electromechanical relays to determine what part had failed, they found a large moth had gotten trapped in the device, thus preventing it from executing its commands. “We took it out and put it in the log book and pasted Scotch tape over it,” Hopper recalled. While the term “bug” had long been used by mechanical inventors to describe failure points in their designs, after the night of the moth, Hopper and Bloch began using a genuinely new term to describe their efforts to fix flaws in the software, one that remains in use today. They called it debugging.

The First "Computer Bug" detected and debugged on September 9, 1947. Moth found trapped between points on the Mark II Aiken Relay Calculator while it was being tested at Harvard University. © Courtesy of the Naval Surface Warfare Center, Dahlgren, VA, 1988. Public domain, via Wikimedia Commons

From the creation of compilers to the birth of COBOL

On its own, Grace Hopper’s role as one of the world’s first software programmers would have earned her a prominent spot in the pantheon of computer history. But her tenure with Bloch and Aiken at Harvard was only the beginning. Influenced by her father’s early insight about the commercial potential of digital machines, Hopper left the Navy in 1949, and after turning down offers from IBM, Honeywell and RCA, she took a job as “Senior Mathematician” at the Eckert-Mauchly Computer Corporation, later absorbed into the Sperry Rand Corporation. Once again, her fixation on making code intelligible to humans would lead to breakthrough ideas that changed computing forever.

Grace Hopper in front of UNIVAC magnetic tape drives, holding a COBOL programming manual. © Courtesy of the Computer History Museum

Inspired by the work of another woman programmer named Betty Holberton, who had created a small software program that could actually write a program of its own, Hopper began thinking about a kind of translation layer that would allow programmers to use symbolic mathematical code and have it automatically converted into machine language. Part of Hopper’s goal was simply eliminating the inevitable errors that would result when programmers tried to translate their instructions by hand into machine language. “You had to copy these darn things,” she recalled, “and I sure found out fast that programmers cannot copy things correctly.” Working with the powerful new UNIVAC computer—the first American computer built for commercial use, constructed entirely with electronic components rather than electromechanical relays—Hopper devised a scheme where programmers could simply invoke any number of subroutines with simple “call numbers.” The system was dubbed the A-0 compiler, and it inaugurated a way of interacting with computers that would become ubiquitous: writing code in one language, and then “compiling” that code into machine language.
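A-0’s details were more intricate than this, but the heart of the call-number idea can be sketched in a few lines of modern Python. The catalog, routines, and sample program below are invented purely for illustration: a “program” is just a sequence of call numbers with arguments, and the translation layer looks each number up and runs the corresponding subroutine.

    # A toy illustration of the call-number idea behind Hopper's A-0
    # compiler (not the actual A-0 design). Subroutines live in a
    # numbered catalog; a "program" is a list of call numbers plus
    # arguments, which the translation layer expands and executes.
    import math

    CATALOG = {
        1: ("sine", math.sin),
        2: ("cosine", math.cos),
        3: ("square root", math.sqrt),
    }

    program = [(3, 2.0), (1, 0.5)]  # call numbers with their arguments

    for call_number, argument in program:
        name, routine = CATALOG[call_number]
        print(f"call {call_number} ({name}) -> {routine(argument):.6f}")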

UNIVAC

UNIVAC was the first American computer designed at the outset for business and administration, making it a direct competitor to punch-card machines. Hopper worked on the team developing the machine, and found it remarkably difficult to convince her colleagues that a programming language could be built entirely out of English words; she was told that “computers didn’t understand English.” She persisted, arguing that “it’s much easier for most people to write an English statement than it is to use symbols.” Data processors, who were not typically mathematicians or engineers, would be more comfortable using word-based languages. Her idea was accepted only three years later, and the first computer language “compiler” was born.

The compiler marked an enormous step forward in making programming easier and less error-prone, but it also opened up a radical new possibility that would not become apparent for several years: the idea that you could write a single program and then compile it to run on entirely different hardware. “Her instinct that programming should be machine-independent was a reflection of her preference for collegiality; even machines, she felt, should work well together,” the tech historian Walter Isaacson wrote in his book, The Innovators. “It also showed her early understanding of a defining fact of the computer age: that hardware would become commoditized and that programming would be where the true value resided. Until Bill Gates came along, it was an insight that eluded most of the men.”

“Grace Hopper’s instinct that programming should be machine-independent was a reflection of her preference for collegiality; even machines, she felt, should work well together.”
Computer whisperer. Illustration: Krzysztof Nowak

Emboldened by her success, Hopper embarked on a new project designed for the kind of back-office accounting uses that her father had initially envisioned. She created a compiler—eventually called FLOW-MATIC—that could trigger instructions based on 20 English-language statements, mostly revolving around common calculations used in billing and payroll. “The systems and procedures analysts, the accountants, operating management can use the UNIVAC FLOW-MATIC SYSTEM with little training,” the user manual promised. “Familiarity with detailed computer coding is not necessary.” FLOW-MATIC offered a glimpse of a future where software was integrated into every aspect of corporate life, a world where you didn’t have to have an advanced math degree to get something useful out of a digital computer.
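The word-oriented idea at FLOW-MATIC’s core, plain English statements driving billing arithmetic, can be loosely illustrated in modern Python. To be clear, this is not FLOW-MATIC’s actual grammar; the statements and ledger below are invented purely for illustration.

    # A loose sketch of a word-oriented language: English-like
    # statements are split into words and dispatched to simple
    # billing operations (not FLOW-MATIC's real syntax).
    ledger = {"SUBTOTAL": 120.00, "TAX": 9.60}

    def execute(statement: str) -> None:
        words = statement.split()
        if words[0] == "ADD":      # "ADD TAX TO SUBTOTAL GIVING TOTAL"
            ledger[words[5]] = ledger[words[1]] + ledger[words[3]]
        elif words[0] == "PRINT":  # "PRINT TOTAL"
            print(f"{words[1]} = {ledger[words[1]]:.2f}")

    execute("ADD TAX TO SUBTOTAL GIVING TOTAL")
    execute("PRINT TOTAL")         # prints: TOTAL = 129.60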

Grace Murray Hopper in her office in Washington DC, 1978. © Lynn Gilbert, CC BY-SA 4.0, via Wikimedia Commons

The success of that project sparked an even more ambitious idea: creating an entire programming language that would use intelligible English phrases. The resistance from her superiors was immediate. "I was told very quickly that I couldn't do this because computers didn't understand English," she recalled. But Hopper persisted. “I decided there were two kinds of people in the world who were trying to use these things,” she said. “One was people who liked using symbols – mathematicians and people like that. There was another bunch of people who were in data processing who hated symbols, and wanted words, word-oriented people very definitely. And that was the reason I thought we needed two languages.”

Eventually, that “second language” would emerge in the form of COBOL, which Hopper helped create as an advisor and evangelist in the late 50s and early 60s, and which went on to become one of the most influential programming languages ever created, and among the first to use natural-language phrases. We often think of the invention of the graphic interface in the 1970s as the watershed moment in the evolution of human-computer interaction, but you can make the argument that COBOL was an equivalent advance, turning abstract symbols into human-readable words. Hopper’s vision was evident in the name COBOL itself, which was short for Common Business-Oriented Language. If we are indeed about to enter a new world where complex software can be designed just by describing the program you want in natural-language sentences, it was Grace Hopper—more than anyone—who first grasped that such a thing was even possible.

Grace Hopper standing in front of a COBOL class. © Courtesy of the Computer History Museum
The idea that a machine that can only “think” in machine language is not nearly as powerful as a machine that can process and express itself in statements that resemble human languages seems almost obvious to us now, but it was a controversial proposition when it was first proposed by one of the most charismatic and influential figures in the early years of software: Grace Hopper.
Grace Hopper with three other programmers and the operator's console of the Univac I computer, 1957. © Courtesy of the Computer History Museum
Commodore Grace M. Hopper, 1984. © James S. Davis, Public domain, via Wikimedia Commons

Computer whisperer

“For some reason [I’ve] been able to explain things to people without necessarily using a technical vocabulary,” Hopper once observed. “I could switch my vocabulary and speak highly technical for the programmers, and then tell the same things to the managers a few hours later but with a totally different vocabulary. So I guess I’ve always been innately a teacher.” That skill made her a brilliant software designer, and helped inspire a generation of women programmers who played a crucial role in the decades that followed: women like Gladys West, Katherine Johnson, and Radia Perlman. But in the 1940s and 1950s, computer pioneers—particularly the female ones—did not generally get the public recognition they do today. Hopper’s contributions were even less celebrated because so much of the work at Harvard had been classified. It wasn’t until 1983, after the American news program 60 Minutes aired an interview with her, that she began to receive the public accolades her contributions warranted. That broadcast triggered a wave of honors: a special Congressional bill elevated her rank in the Navy to Commodore, and a new military data center was named after her.

But perhaps the most fitting honor came in 1997, when the Navy commissioned a new state-of-the-art guided-missile destroyer. Hopper’s calculations had once quietly guided the missiles fired on the front lines of World War II. Now, more than 50 years later, the Navy’s computer operators—the descendants of Hopper and Bloch crunching the numbers with the Mark I at Harvard—were calculating missile trajectories on the deck of the USS Hopper.

It was Grace Hopper—more than anyone—who first grasped that designing complex software just by describing the program you want in natural-language sentences was even possible.

Steven Johnson is the bestselling author of 13 books, including Where Good Ideas Come From. He’s the host of the PBS/BBC series Extra Life and How We Got to Now. He regularly contributes to The New York Times Magazine and has written for Wired, The Guardian, and The Wall Street Journal. His TED Talks on the history of innovation have been viewed more than ten million times.
