The history of progress in computing can be measured along two primary axes. The first often goes by the name Moore’s Law: the exponential increase in processing power and efficiency that now puts all the computing power that existed in the world in the 1960s into the phone in our pockets. But there is another, equally important transformation that has been more a matter of art and design than raw engineering: our computers have become increasingly human in the way they interact with us. We’ve gone from switchboards, punch cards, and inscrutable machine code to user-friendly graphic interfaces and talking assistants.
That shift towards more human interfaces is even transforming the world of programming itself. Of all the intriguing new developments in artificial intelligence of the past few years, one of the most striking—to anyone who has spent any time developing software—is the rise of “AI-based programming assistants,” AI agents that can actually write functional code on their own. Instead of forcing you to learn the intricacies of a language like C++ or Python, the latest tools offer a radical new proposition: you simply tell the computer in ordinary language what you want your new program to do—create a to-do-list manager, re-create the classic video game Pong—and the AI agent will write the code for you automatically.
AI-based tools for programming—including software like GitHub Copilot and OpenAI’s Codex—have proliferated over the past few years as Large Language Models have matured enough to perform genuinely useful tasks. Interestingly, the programming skills of the AI models were an accidental discovery. When the team at OpenAI originally developed their flagship language model GPT-3, they did not specifically design it to be able to write code. But as it happened, the vast training data that they used to build the model—a significant slice of the entire Internet—contained countless examples of working code, accompanied by explanations of what the code was actually doing. In their early explorations with the model after training, they found, to their surprise, that the software had acquired rudimentary programming skills. AI innovators later refined those skills with deliberate training sets focused on code examples, creating the array of products now available on the market.
Programming “co-pilots” may be one of the hottest areas in AI research right now, but the underlying idea behind them belongs to a tradition that dates back more than 80 years to the very origins of the computer age. The central premise is that digital computers require layers of translation, converting human concepts and instructions into the underlying zeroes and ones that all digital computation depends on. A machine that can only “think” in machine language is not nearly as powerful as a machine that can process and express itself in statements that resemble human languages. That idea seems almost obvious to us now, but it was a controversial proposition when it was first proposed by one of the most charismatic and influential figures in the early years of software: Grace Hopper. Her career involved a number of milestones in technological history; she programmed one of the very first computers ever made, and helped design one of the first programming languages. But more than anything else, Grace Hopper’s legacy lies in the simple fact that she made these remarkable new machines more human.
In pursuit of a career in the Navy
Hopper’s career as a software pioneer was set in motion by two developments: an unhappy marriage and the onset of war. Born to a well-to-do Manhattan family, the daughter of an insurance executive and a mathematician, Hopper displayed an early propensity for math, majoring in the subject at Vassar and going on to complete a doctorate at Yale in 1934—the eleventh woman to receive a math PhD in the school’s history. She landed a job teaching at Vassar and married a literature professor named Vincent Hopper. Even in those early years, her professorial style gave a hint of what was to come in her career. According to her biographer Kurt Beyer, “In her probability course, she began with a lecture on one of her favorite mathematical formulas and asked her students to write an essay about it. These she would mark for clarity of writing and style.”
“It was no use trying to learn math,” she would tell her perplexed students when they complained that they were not taking an English composition course, “unless you can communicate it with other people.”
In the early 1940s, Hopper found herself increasingly bored with the routines of teaching and unsatisfied in her marriage. After the Japanese attacked Pearl Harbor and the United States entered World War II, Hopper decided the time was right for a change. She parted ways with her husband and began looking for a way to contribute to the war effort. One promising opening appeared in the middle of 1942, when Congress passed the Navy Women’s Reserve Act, which opened up a range of non-combat positions in the Navy for female recruits.
Initially, Hopper was rejected from the service for physical reasons: she weighed only 105 pounds, a full 15 pounds below the Navy’s minimum for a woman of her height. But her brainpower ultimately secured her a special exemption. New machines with odd names, like the Automatic Sequence Controlled Calculator, were suddenly becoming crucial to the war effort. These machines promised to perform complex calculations like logarithms and trigonometric functions, crucial for creating ballistic tables or solving engineering problems like the propagation of radio waves. But the machines were so new that they lacked anything even resembling a manual—not to mention a customer support desk—and figuring out how to operate them required advanced mathematical skills. A math PhD—even one who weighed only 105 pounds—could be invaluable to the Allied cause. And that is how Navy Lieutenant Grace Hopper found herself assigned to the Bureau of Ships Computation Project at Harvard University. There she would work on one of the very first computers ever made, the Mark I.
Although it was far more powerful than any existing calculator, the Mark I was only a distant relative of today’s computing technology. To begin with, it had an enormous number of moving parts, in the form of electromechanical relays, then commonly deployed in telephone switchboards, that used magnetism to pull two contacts together or keep them apart. (Later machines—some of which Hopper would work on as well—would replace those relays with vacuum tubes and then, eventually, the integrated circuits of modern microprocessors.)
The Mark I was also enormous: eight feet high and 51 feet long, with 530 miles of internal wiring connecting its relays. It weighed almost five tons and featured 750,000 moving parts. The massive machine’s “most noteworthy feature,” Beyer notes, “was a paper-tape mechanism. The tape was pre-coded with step-by-step instructions that dictated the machine’s operation and guided it toward a solution without the need for further human intervention.” Those rolls of paper-tape instructions didn’t look like much at the time: just seemingly random sequences of holes punched into a long roll of paper, like something you might have seen attached to a cash register. But they were a kind of signal from the future. Understanding how to structure those paper-tape instructions so that they elicited the desired behavior from the machine required an entirely new form of expertise, one that would soon become one of the most lucrative and influential skills in the world: programming.
The Mark I’s rolling tape mechanism was related to a key medium for digital information that persisted well into the 1970s: the punch card. Small holes punched into a series of stiff cards were used both to store information and to convey instructions to the computer. Punch cards had a curious history: they were originally developed for automated weaving machines back in the 19th century, the concoction of a brilliant French engineer named Joseph Marie Jacquard. Inspired by the mechanical dolls and music boxes that entertained the French elite during that period, Jacquard hit upon the idea of controlling a mechanical loom with a kind of code imprinted on a punched card, with the holes on the card conveying instructions about the pattern the loom would weave. Despite its initial use in the textile industry, the Jacquard Loom is considered an important early technology in the pre-history of computing, because the inventor Charles Babbage adapted Jacquard’s punch cards for his pioneering Analytical Engine—one of the first designs for a true programmable computer—several decades later.