Intelligent Agent. How Pattie Maes almost invented social media


Belgian-born computer scientist Pattie Maes was inventing the core principles behind the social media age when Mark Zuckerberg was still in kindergarten. Why have her contributions been neglected?

Illustration: Patryk Hardziej
Pattie Maes

Anyone who was around for the early days of the World Wide Web, before the Netscape IPO and the dotcom boom, knows that there was a strange quality to the medium back then – in many ways the exact opposite of the way the Web works today. It was oddly devoid of people. Tim Berners-Lee had conjured up a radically new way of organizing information through the core innovations of hypertext and URLs, which created a standardized way of pointing to the location of documents. But almost every Web page you found yourself on back in those frontier days was frozen in the form that its author had originally intended. The words on the screen couldn’t adapt to your presence and your interests as you browsed. Interacting with other humans and having conversations – all that was still what you did with email or usenet or dial-up bulletin boards like The Well. The original Web was more like a magic library, filled with pages that could connect to other pages through miraculous wormholes of links. But the pages themselves were fixed, and everyone browsed alone.

Pattie Maes 2008. Courtesy of Pattie Maes

Developed in the late-70s, USENET was the more sociable sibling of the email protocols that had emerged a few years earlier. USENET served for many years as the primary venue for distributed public dialogue, a place where strangers could talk to one another in thematically organized “newsgroups” like sci.physics and alt.politics. If you wanted to strike up a digital conversation with a specific friend or group of colleagues, email was your platform. But if you wanted to get up on a soapbox or find new friends who might share your interests, USENET was the way to go. It was also, it should be noted, a complete cesspit of porn and hate speech. The phrase “flame wars” had to be invented to describe the default tone of some newsgroups, and the first spam message in history appeared there. Though most online conversation ultimately shifted to the Web — and then to social media — USENET newsgroups remain active to this day.

HOMR, first known as “Ringo,” may not have looked like much, but it used the innovative idea of collaborative filtering. Courtesy of Pattie Maes

One of the first signs that the Web might eventually escape those confines arrived in the last months of 1994, with the release of an intriguing (albeit bare-bones) prototype called HOMR, short for the Helpful Online Music Recommendation service.

HOMR was one of a number of related projects that emerged in the early-to-mid-90s out of the MIT lab of the Belgian-born computer scientist Pattie Maes, projects that eventually culminated in a company that Maes co-founded, called Firefly. HOMR pulled off a trick that was genuinely unprecedented at the time: it could make surprisingly sophisticated recommendations of music that you might like. It seemed to be capable of learning something about you as an individual. Unlike just about everything else on the Web back then, HOMR’s pages were not one-size-fits-all. They suggested, perhaps for the first time, that this medium was capable of conveying personalized information. Firefly would then take that advance to the next level: not just recommending music, but actually connecting you to other people who shared your tastes.

Maes called the underlying approach “collaborative filtering”, but looking back on it with more than two decades’ worth of hindsight, it’s clear that what we were experiencing with HOMR and Firefly was the very beginnings of a new kind of software platform that would change the world in the coming decades, for better and for worse: social networks.

HOMR’s pages suggested, for perhaps the first time, that this medium was capable of conveying personalized information.

Rise of the Software Agent

Maes was born in the early 1960s in Brussels, the child of a doctor and a dentist. “I always tell people, I was not the type of kid who took apart radios and built robots,” she says now, looking back on those early years. “I emphasize that because when I was growing up, whenever I read an article about a computer scientist, that’s what they would say. But that wasn’t me. I was playing with Barbies—and Legos.”

Arriving as an undergrad at the Free University of Brussels during the late-1970s oil crisis, Maes initially gravitated towards a computer science major for entirely practical reasons. “There were no jobs for kids leaving college,” she says, “and though I wanted to either study architecture or biology, I eventually ended up choosing computer science, really for two reasons. I did realize that computers were going to be important in any domain, so I could still do biology or architecture in the future. But the other reason was purely practical: I’d definitely have a job when I graduated.” It wasn’t until she enrolled in a class on artificial intelligence that Maes found herself intellectually engaged with the material.

“AI was all about modeling human intelligence back then,” she recalls. “I thought: wow, this relates to people.” Within a few years she earned a PhD in Artificial Intelligence, and moved to the US to do post-graduate work at MIT, studying with AI pioneers like Rodney Brooks and Marvin Minsky. “I came to visit first for like two months and then for a year, and then the year became two years,” she says. The move was a bold one for more than just geographic reasons. In the late eighties, the extended AI Lab at MIT consisted of around forty scholars. Maes was the only woman in the entire group.

“AI was all about modeling human intelligence back then. I thought: wow, this relates to people.”
Pattie’s pro-human approach to AI quickly put her on the map. Courtesy of Pattie Maes

The late 80s and early 90s belonged to a longer period in AI research often referred to as the “AI Winter”—a frustrating stretch of time where the field appeared to make little progress, after an early wave of hype in the 60s and 70s. Ultimately, Maes came to believe that AI back then was “just creating intelligent systems for the sake of making more intelligent systems,” she says now. “I was always much more interested in helping people—thinking about how technology could help us with decision-making and communication and finding other people that we might want to talk to. Or how it could augment our memories.”


Artificial intelligence first emerged as a field through a flurry of thrilling developments in the 1950s. John McCarthy, who would later spend most of his career at Stanford, coined the phrase in the mid-50s, and by the end of the decade, computers were learning how to play elemental games like checkers. The Perceptron, the first “neural net” modeled on the architecture of the human brain — the ancestor of modern AI superstars like GPT-3 — was developed in 1958. At the time, the rapid early progress in the field suggested that simulating open-ended intelligence and problem solving was within reach. Reporting on the launch of the Perceptron, The New Yorker claimed that the machine “was capable of original thought. Indeed, it strikes us as the first serious rival to the human brain ever devised.” But genuine “rivals” to the human brain ended up requiring far more computational power than the technology world possessed in the ensuing decades. The field went through a decades-long stretch without making serious progress — now known as the “AI winter” — until the 2010s, when the emergence of organizations like DeepMind and OpenAI finally began to deliver on the original vision.

Working with a handful of grad students in a lab she called the “Software Agents” group, Maes began exploring the ways that shared social information could generate helpful recommendations. “We started this work actually before browsers existed,” Maes says now, with a chuckle. The first iteration revolved around science fiction novels, and was entirely email-based. You sent off an email with the names of sci-fi books you liked, and the software emailed back some suggestions for further reading, based on your tastes. A student of Maes’ named Carl Feynman—son of the legendary physicist Richard Feynman—created an email recommendation system for music, called RINGO. When Feynman left MIT, another grad student, Upendra Shardanand, began working on the browser-based version, HOMR, under Maes’ supervision. “The whole idea was really to kind of simulate the joy of going to a record store and browsing,” Shardanand says now, looking back on that original project. “There was something brain-tickling about the whole thing. It was all about the joy of discovery and exploration.”

A digital magic trick. Illustration: Patryk Hardziej

The interaction was simple: the software offered you a random sampling of artists to rate on a scale of 1-7: Arrested Development, Nirvana, Van Morrison, The Sex Pistols. Once you submitted the ratings, the software would recommend a list of albums that you might like, given your tastes. In a medium defined by static information, HOMR offered something different: it seemed, in a slightly uncanny way, to know a little bit about you, to have a feel for something as inchoate as musical taste. The page it served up with those music recommendations was composed on the fly – you weren’t just reading through the same archived page that a thousand other people had read. Some of the artists it recommended were invariably ones you already knew, and that was impressive enough given that you were getting these recommendations from an algorithm. But the real trick was getting a recommendation that you hadn’t come across before, a musician who did turn out to be in your wheelhouse once you tracked down one of their albums. HOMR wasn’t just a digital magic trick – it was surprisingly useful.

10 years before Spotify even existed, HOMR worked like magic. Courtesy of Pattie Maes

Part of the magic lay in the fact that HOMR’s aesthetic sensibility was not hard-coded in advance. A programmer somewhere didn’t just create a pre-existing database of artists, organized by explicitly defined sub-genres. Instead, the association between different artists—Pearl Jam with Soundgarden, Joni Mitchell with Neil Young—was emerging from the bottom-up out of thousands of ratings sets that had been submitted by early users. Over time, the software learned to detect clusters of musical taste in all that data, a kind of transitivity principle of taste. If you liked the Pet Shop Boys, and someone else liked the Pet Shop Boys and Simple Minds, there was a higher probability that you might like Simple Minds as well.

It wasn’t explicit in the software yet, but HOMR was predicated on another latent implication: if you shared some overlapping set of cultural tastes or references with someone, perhaps you might want to form a deeper connection with that person. Relationships between individuals, in other words, could be organized and mapped statistically, using databases and computer algorithms.

Social Media Origins

The conventional history dates the origins of social networking back to the late 1990s and early aughts, marked by the launch of services like Friendster and the photo-sharing site Flickr. But many of the core ideas that would shape the social media revolution—minus the advertising model that would ultimately cause so much trouble—originated with Maes’ research at the Media Lab well before then.


“A lot of what we did was model people and collect data,” Maes says now with a wry smile. “It sounds terrible now but we thought of this as a positive thing. We were a little bit naive I guess back then about how this would all be used in the long run. But we thought: well, if we know a little bit more about people and their interests, then we can help them.”

Originally, Maes called the technique social filtering. “But then somebody said ‘social filtering?—that sounds like Nazis.’” For a while, they tried adding the word “information” to the phrase to make it more palatable: “social information filtering.” But eventually Maes settled on a new name, one that briefly became a catchphrase of mid-90s Internet culture: collaborative filtering.

Early icons of the soon-to-come Internet’s social revolution
Social bubbles. Illustration: Patryk Hardziej

A paper Maes published in 1995, co-authored with Shardanand, laid out the approach in clear language, free of the usual jargon of academic prose. “We need technology to help us wade through all the information to find the items we really want and need, and to rid us of the things we do not want to be bothered with.” You could sift through that information with traditional approaches like keyword filtering, but keywords were useless for more subtle assessments, like the ones at play when we like or dislike certain kinds of music. Other researchers were exploring automated ways of detecting meaning in text documents, using approaches like latent semantic indexing. But even if those techniques might be able to detect connections between articles online, they would be useless with other forms of media. Collaborative filtering took a different approach. As Maes and Shardanand wrote in the 1995 paper, the technique “essentially automates the process of ‘word-of-mouth’ recommendations: items are recommended to a user based upon values assigned by other people with similar taste. The system determines which users have similar taste via standard formulas for computing statistical correlations.” (The paper has now been cited almost five thousand times in the scholarly literature that followed in its wake.)
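The recipe the paper describes can be sketched in a few lines of Python. This is a toy illustration of user-based collaborative filtering with Pearson correlation, not Ringo’s or HOMR’s actual code; the users, artists, and ratings below are all invented:

```python
import math

# Toy ratings on HOMR's 1-7 scale, keyed by user, then artist (invented data).
ratings = {
    "ann":   {"Nirvana": 7, "Pearl Jam": 6, "Soundgarden": 7, "Van Morrison": 2},
    "bob":   {"Nirvana": 7, "Pearl Jam": 6, "Soundgarden": 7, "Simple Minds": 6},
    "carol": {"Van Morrison": 7, "Joni Mitchell": 6, "Nirvana": 2},
}

def pearson(u, v):
    """Correlation between two users over the artists both have rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][a] for a in common) / len(common)
    mv = sum(ratings[v][a] for a in common) / len(common)
    num = sum((ratings[u][a] - mu) * (ratings[v][a] - mv) for a in common)
    du = math.sqrt(sum((ratings[u][a] - mu) ** 2 for a in common))
    dv = math.sqrt(sum((ratings[v][a] - mv) ** 2 for a in common))
    return num / (du * dv) if du and dv else 0.0

def recommend(user):
    """Score artists the user hasn't rated, weighting other users' ratings
    by how well their taste correlates with this user's."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = pearson(user, other)
        if w <= 0:  # ignore users with dissimilar or unknown taste
            continue
        for artist, r in ratings[other].items():
            if artist not in ratings[user]:
                scores.setdefault(artist, []).append(w * r)
    return sorted(((sum(v) / len(v), a) for a, v in scores.items()), reverse=True)

print(recommend("ann"))  # Simple Minds surfaces via Bob's closely matching taste
```

With this data, Ann’s ratings correlate strongly with Bob’s, so Bob’s fondness for Simple Minds becomes a recommendation for Ann, while Carol’s opposite tastes are ignored. Production systems add safeguards this sketch omits: minimum-overlap thresholds, rating normalization, and far larger neighborhoods of users.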

“Pattie back then—like Pattie now—as an academic advisor was really a zen master,” Shardanand says. “She was thoughtful, really listened to you, and had very insightful reactions.”

From Firefly to FaceMash

Before long, Maes and her students realized that collaborative filtering was useful for much more than simply recommending new artists or novels; once you’d given the computer a sense of your personal interests—and your connections to other people—all sorts of new possibilities emerged. “You can tell people how unique their interests are, like how rare are the books that they’re interested in,” Maes explains. “Or: who are the other people who like the same books or the same music that you like?”


What Maes’ research began to suggest was the possibility of organizing information around people: their likes and dislikes, their interests, their social circles. This seems obvious to us now, given that some of the most valuable companies in the world are predicated on this model, but in the mid-nineties Maes had a hard time convincing anyone that this could be a viable platform for a business.

Ultimately Maes and a team of students from MIT—including Shardanand as CTO and a recent Harvard Business School grad named Nick Grouf as CEO—decided to take matters into their own hands and start a company themselves. “The whole idea of the Media Lab was always that we do the research and then these big companies take what we invent and commercialize it,” Maes says. “But [the big companies] weren’t doing that or they weren’t ready for it. So we started Firefly.”

Looking back from our contemporary perspective, the Firefly site—which launched in October of 1995—seems like a kind of time machine, anticipating a whole set of advances that would become mainstream more than a decade later. “Recommendations were a big part of it, but in order to support recommendations, we started doing profile pages, and then messaging, then groups. And so we inadvertently built this social network,” Shardanand recalls. Thanks to those profile pages, you weren’t just using the service to discover new bands to follow; you were using it to find interesting like-minded people, and get into conversations with them. Online communities had existed before Firefly of course: there were bulletin boards like The Well in the 1980s, and chat rooms at AOL. But Firefly was one of the first—if not the very first—to map connections between people using a structured database built on personal information: in its case, information about your taste in music or books or movies.


Many Americans of a certain vintage had their first exposure to networked life through America Online, but for the younger readers out there (and those from outside the US), it may require a brief introduction. AOL was, at least initially, a closed network that you joined for a monthly fee, which you accessed through special phone numbers that would connect directly with their servers. Once connected, you could access chat rooms, news and information services, and email. Its signature audio feature — a digitized voice cheerily announcing, “You’ve got mail!” — inspired the rom-com classic by the same name. Eventually, AOL opened up a portal to the wider Internet and for a while, it was right up there with Yahoo, Netscape, and Amazon as one of the brightest stars of the original boom.

“And so we inadvertently built this social network”

Soon the Firefly team began to see evidence of a phenomenon that would become commonplace in the next decade: virtual connections leading to real-world relationships. “We had all these marriages that came out of Firefly actually,” Maes says, “because people were so excited to find other people that were into the same obscure stuff.”

Firefly never really took off as a business, in large part because the advertising model that would later support social networks like Facebook or Twitter simply didn’t exist in the late 90s. But even in those early days, the potential for a new, personalized model of advertising was visible. A profile of the company in BusinessWeek noted: “Marketers say the software agents developed by Firefly could move them closer to their Holy Grail by providing a way to predict what customers are likely to want next – and the means to reach them with a customized pitch that could cost a tenth of more traditional direct-marketing programs.” Firefly struck licensing deals with Barnes & Noble, Yahoo, and Rolling Stone, while adding new social features to the core site. Maes’ visionary descriptions of future “intelligent agents” built on collaborative filtering were published in Wired. Magazines like Time and Newsweek put her on their lists of the most influential “cyber-elite.” (She even found her way into People magazine’s “50 Most Beautiful People” issue in 1997, undoubtedly a first for an MIT computer scientist.)

Pattie Maes’ profile in “Red Herring” magazine.
Courtesy of Pattie Maes

How do Facebook, Instagram, and Twitter make money? By serving millions of precisely targeted ads to billions of users. The advertising model that emerged in the 2000s, and that drove social platforms’ revenues to unprecedented levels, is built on the social connections within those platforms. Detailed knowledge of users’ behavior, preferences, interests, and even personality types makes it possible to serve highly effective contextual ads, tailored to individuals like never before. And it all rests on the same ideas and frameworks that Maes and her teams pioneered. Today those ideas underpin the industry’s spectacular commercial success, as well as many of the complaints about the companies’ privacy abuses.

Tim Berners-Lee (co-creator of the WWW) and Pattie Maes styled as Mulder And Scully from the cult TV series “The X-Files,” Improper Bostonian 10/1997. Clockwise from bottom right: strategy+business magazine 20/2000, PC/Computing magazine 2/2000, Red Herring magazine 3/2000. Courtesy of Pattie Maes

Slowly, the core ideas behind collaborative filtering began to trickle out into the marketplace, most famously in Amazon’s “People who bought this book also bought this other book” algorithm, which was one of its key features in the early days of online shopping. Soon the idea of receiving cultural recommendations based on collective data would become ubiquitous. Every time Netflix recommends a new show for you to watch, or Spotify generates an automated playlist based on your recent listening history, you are enjoying the descendants of HOMR and Firefly.
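Amazon’s “people who bought this also bought” feature flips the same insight around: instead of matching similar users, it relates items that co-occur in purchase histories. Here is a minimal co-occurrence sketch in Python; the baskets and titles are invented, and this is not Amazon’s actual algorithm, which is far more sophisticated:

```python
from collections import Counter
from itertools import combinations

# Invented purchase baskets, one set of book titles per customer.
baskets = [
    {"Emergence", "Gödel, Escher, Bach", "The Tipping Point"},
    {"Emergence", "Gödel, Escher, Bach"},
    {"Emergence", "The Tipping Point"},
    {"Gödel, Escher, Bach", "Chaos"},
    {"Emergence", "Gödel, Escher, Bach", "Chaos"},
]

# Count how often each ordered pair of items shows up in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def also_bought(item, n=3):
    """Titles most frequently bought alongside `item`."""
    hits = Counter({b: c for (a, b), c in pair_counts.items() if a == item})
    return [title for title, _ in hits.most_common(n)]

print(also_bought("Emergence"))
# ['Gödel, Escher, Bach', 'The Tipping Point', 'Chaos']
```

Item-to-item counting like this scales well because the item catalog is usually far smaller and more stable than the user base, one reason the approach became the workhorse of e-commerce recommendations.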


No company represents the commercial impact of the collaborative filtering approach more decisively than Netflix, through the success of its recommendation engines. We now think of Netflix primarily as a platform distinguished by its original productions—Stranger Things, House of Cards. But the company rose to prominence back in the days when it was sending DVDs to subscribers through the mail, leveraging its ability to suggest interesting new movies to watch, movies that seemed brilliantly tailored to an individual subscriber’s tastes. Perhaps the greatest illustration of how important Netflix considered its recommendation engine is the fact that it held a series of open competitions—known as the Netflix Prize—offering $1 million to the outside programmers who could devise the best upgrade to its in-house collaborative filtering system, Cinematch. After three years of staged competitions, the Grand Prize was awarded in 2009 to a team with the memorable name “BellKor’s Pragmatic Chaos.”

Collecting personal information, of course, posed significant privacy problems, many of which did not have robust technical solutions in the late 90s. The team at Firefly began building sophisticated back-end user privacy and identity tools to accompany the recommendation services. Ultimately, Firefly was purchased by Microsoft in 1998, largely for those user ID technologies, which became the foundation of Microsoft Passport.


A “Microsoft account” is a single sign-on for Microsoft’s online services. Its predecessor, Microsoft Passport, launched in 1999 and drew considerable criticism, mainly for what identity experts described as violations of the “laws of identity.”

The Firefly site itself was shut down in 1999, but by that time the seeds of social networking had begun to take root. SixDegrees had launched in 1997, followed by Friendster and MySpace. Just three years after Firefly went dark, a Harvard freshman named Mark Zuckerberg started toying with the idea of a social network he originally called FaceMash. The rest, as they say, is history.

In Pursuit of Augmented Memory

More than two decades later, Maes continues to work at the Media Lab, running the “Fluid Interfaces” group that focuses on using technology for cognitive enhancement. While the core ideas behind collaborative filtering have transformed society—in positive and negative ways alike—since she first began dreaming them up in the early 90s, a number of other fascinating projects she developed have still not been implemented at scale. Perhaps the most provocative idea is what Maes calls “remembrance agents,” software that would seamlessly augment your memory by dynamically gathering together bits of information from all your various applications, based on your current task.

Remembrance agents. Illustration: Patryk Hardziej

“It would take all of your data, your emails, your files, even conversations that you may have taken notes during,” Maes explains. “And every time you were in a certain new context—like I’m talking to you now—it would bring up all the notes from previous conversations that we had when we last met, previous emails that you may have sent to me and so on. Bringing all the relevant context, and making that available proactively so that it's a little bit easier to switch contexts, and be efficient and effective.”

One reason that the original vision of the “remembrance agent” hasn’t been built is that for the system to work across multiple apps, it would require a shared social database of identity, so that your email client would know that your colleague Marcia was the same Marcia who had sent you a text yesterday, and who collaborated with you on that Google doc last week. But today, most of the persistent shared social graphs belong to big, advertising-supported companies like Facebook or LinkedIn. If you wanted to build an underlying foundation to connect all your social interactions, to augment your memory in a seamless way, you’d have to pay the price of admission of watching a bunch of ads for mattresses and diet pills.

“All of these ideas—I think they make so much sense,” Maes says. “But because of this whole advertising model and because developers ultimately don't have people's interests in mind, I think that these things don't get introduced or don't get built.”

The remembrance agent—like her pioneering work on collaborative filtering in the 90s—has all the hallmarks of the Pattie Maes vision of computing, grounded in the human side of the human-computer interaction. “Looking back, it was a benefit for me that I wasn't one of these typical [computer scientists] who are really in love with machines in general,” she says. “I'm in love with people not machines!”

Steven Johnson is the bestselling author of 13 books, including Where Good Ideas Come From. He’s the host of the PBS/BBC series Extra Life and How We Got to Now. He regularly contributes to The New York Times Magazine and has written for Wired, The Guardian, and The Wall Street Journal. His TED Talks on the history of innovation have been viewed more than ten million times.

Don't miss a good story

Sign up to uncover the stories of Hidden Heroes with us