Tom Griffiths is decoding intelligence — both human and artificial — to think differently about how we think


Photo by Sameer Khan

It takes Tom Griffiths just 15 minutes to walk from the Computer Science Building on Olden Street to Peretsman Scully Hall, home of Princeton’s psychology department. Jointly appointed in both departments, Griffiths makes the jaunt every weekday, and along the way he can sometimes feel the different sides of his own brain negotiating the two fields’ competing perspectives.

“In computer science, humans are the best example of what we want our computers to do, which is to act intelligently in the world,” Griffiths said. “In psychology, there’s been a view that humans are not particularly good at making decisions due to bias and irrationality.”

“It’s a paradox,” Griffiths said. “How can humans be both these error-prone decision-makers and our paradigm example of an intelligent system that we’re still aspiring to create? I sometimes joke that my view on human cognition changes as I walk from one side of the campus to the other.”

Every time Griffiths cuts through Scudder Plaza, crosses Washington Road and glimpses Pardee Field on his way to his office in Peretsman Scully Hall, the walk stitches together his “life of the mind”: the computer algorithms on one side, and on the other the psychological tools used to study human and artificial intelligence (AI). As the Henry R. Luce Professor of Information Technology, Consciousness, and Culture of Psychology and Computer Science, Griffiths takes ideas from computer science and applies them to psychology, and vice versa. “Finding those kinds of connections is the most intellectually exciting part of the work,” he said.

Where some psychologists see human cognition as often filled with distortion, bias and error, Griffiths sees it as elegantly efficient. As AI models become increasingly capable of doing tasks that previously only humans could do, Griffiths is interested in understanding why humans are still better at so many things — and how we can use that knowledge to both build better machines and help people make better decisions.

“That means trying to understand how it is people learn, make decisions and plan — all the things that characterize how minds work,” he said. “And it means using tools from AI, machine learning, statistics and computer science to help make sense of human behavior.”

As the inaugural director of the Princeton Laboratory for Artificial Intelligence (AI Lab, for short), Griffiths is helping to shape the future of AI research at Princeton. The AI Lab is an incubator that provides resources — from technical expertise to GPU clusters — to allow Princeton researchers across disciplines to explore AI’s potential impact in their fields.

“The AI Lab was created to support and expand AI research on campus, identifying areas of research where there’s an opportunity to make impact in a way that’s nimbler than a normal university unit,” Griffiths said. “And it’s for faculty and researchers who want to try some of these methods in their field before finding other kinds of funding for that research.”

Griffiths’ dual perspective, seeing human cognition through the lens of computer science and computing through the lens of psychology, makes him an excellent director for the AI Lab. “There are meaningful problems in the sciences and humanities that we can tackle using AI and the cognitive tools we’ve developed,” he said. “We can take a question that comes from a discipline that’s maybe quite far away from computer science and then build a bridge that makes it possible to use some of these ideas.”

By making that investment, Princeton can accelerate research across a wide range of fields. For instance, Ellen Zhong, assistant professor of computer science, is constructing intricate three-dimensional images of proteins and other biomolecules using machine learning and computer vision tools. The algorithms developed by Zhong and her colleagues process millions of raw two-dimensional pictures captured by powerful cryogenic electron microscopes to carefully assemble high-resolution 3D representations that help them better understand how genes and proteins function at the atomic level.

For Griffiths, the goal is discovering ways to change how we think about thinking itself. It’s a perspective that could only come from someone who has spent decades walking between psychology and computer science and finding insight in the space between.

Wizarding up

Griffiths grew up in Perth, a remote city located on the western coast of Australia. When he was 13, he contracted a series of viruses that led to a chronic fatigue condition, forcing him to miss two years of school. For many teenagers, such isolation would’ve been devastating. For Griffiths, it became a peculiar kind of sabbatical that would shape his future.

During those homebound years, he immersed himself in text-based online role-playing games, connecting via dial-up modem. One game proved particularly transformative. After advancing to a certain stage, players could level up to “wizard” and access powers to modify the virtual world of the game using object-oriented programming. Griffiths, who had been writing code on his father’s Epson computer since he was 8, collaborated with other players to expand the game’s environment and even began coding new games.

“I received really good training in computer science, probability and statistics through those games,” said Griffiths, who eventually began teaching himself probability theory and various algorithms by writing programs to simulate different gaming strategies and model the probability of different outcomes. “In my research now, I use some of those same simulation concepts to try and make sense of the world we actually live in and how it is that human minds respond to the challenges posed by the kinds of events that fill our lives.”
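A minimal Python sketch in that spirit, built around a hypothetical push-your-luck dice game rather than anything Griffiths actually wrote, shows how simulation can estimate the probability that one strategy beats another:

```python
import random

# Hypothetical example: a push-your-luck dice game. You keep rolling and
# banking the total, but a roll of 1 wipes out the round. A "strategy" is
# simply the banked total at which you choose to stop.
def play_round(stop_at):
    total = 0
    while total < stop_at:
        roll = random.randint(1, 6)
        if roll == 1:
            return 0          # bust: the round is worth nothing
        total += roll
    return total

# Estimate the probability that strategy A outscores strategy B by
# simulating many independent rounds and counting wins.
def win_probability(stop_a, stop_b, trials=100_000):
    wins = sum(play_round(stop_a) > play_round(stop_b) for _ in range(trials))
    return wins / trials

print(f"P(stop-at-20 beats stop-at-10) ≈ {win_probability(20, 10):.3f}")
```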

When it came time for Griffiths to choose his university path in his final year of high school, he made a surprising choice. Despite his strong skills in math and computer science, he opted to major in psychology at the University of Western Australia, with additional classes in philosophy and anthropology — fields where fundamental questions remained unanswered. “I wanted to learn about the things I hadn’t had the chance to learn about and work in areas where there are still mysteries,” he said, “rather than trying to figure out the very last pieces of a theory that was already mostly developed.”

These early experiences shaped his unique approach to cognitive science. While others studied either human psychology or computer science, Griffiths pursued both, searching for connections between silicon and gray matter and developing an intellectual framework that would inform his later work. It was during his undergraduate studies that Griffiths learned of the bridge between his computational interests and his fascination with human cognition. “I was reading a book in one of my philosophy classes, and it had a chapter about neural networks,” he said. “I was amazed to learn that I could use math to describe how minds work.”

“Modern neural networks started in psychology,” Griffiths said. As he explains it, the key insight that made deep learning — the ability to train multi-layer neural networks — and the current AI boom possible emerged not from a tech company but from a psychology lab in the 1980s. Over the decades, neural network concepts have ping-ponged between psychology and computer science, with each field advancing them in crucial ways. “When computer scientists lost interest in neural networks over poor results, psychologists came up with another way of making them work,” he said. “And that then led to a lot of success in computer science.”

In 2012, that success sparked AI’s Big Bang moment, when a research team from the University of Toronto used powerful graphics processing units (GPUs) to train a neural network on a massive online dataset, teaching computers to recognize images with incredible accuracy. That dataset was ImageNet, the groundbreaking project launched by Fei-Fei Li ’99 in 2007 when she was an assistant professor at Princeton. Jia Deng *08 and Olga Russakovsky worked with Li on ImageNet as graduate students. Both are now associate professors in computer science at Princeton: Deng is the director of the Princeton Vision and Learning Lab, and Russakovsky is the director of the Visual AI Lab and associate director of the new AI Lab.

Back in 2012, Griffiths was director of the Institute of Cognitive and Brain Sciences at the University of California, Berkeley, where the success of ImageNet inspired him to start running large-scale automated cognitive science experiments, using machine learning to sift through vast amounts of behavioral data for insights that lead to better decisions. “When that [University of Toronto] paper first came out, it was such a big leap that people didn’t really believe it until they were able to try it themselves,” Griffiths said.

This game-changing computer vision model laid the foundation for the AI revolution and led to advancements in areas such as self-driving cars, facial recognition systems and medical imaging technologies.

Teaching a computer to catch a baseball

When Griffiths began attending machine learning conferences in 2000, they were often intimate gatherings of 500 people or fewer. Today, such events have exploded in popularity, drawing crowds of more than 15,000. “I got to witness this trajectory of more and more people becoming interested in machine learning and more and more things that these AI models are able to do,” he said. “At the same time, I think that there are some challenges for the field that have come out of that.”

While the growth is fueling remarkable progress, Griffiths is concerned that the intense focus on neural networks might be overlooking other valuable approaches to AI. Among other things, Griffiths’ Computational Cognitive Science Lab (CoCoSci), which applies mathematical modeling, computer simulation and behavioral experiments to study various aspects of human cognition, uses psychological concepts to make sense of what AI systems are doing and to figure out how to make them better. “We have these powerful AI systems, but we really don’t understand them,” he said. “And in many cases, the only thing that we have access to is their behavior, because they’re locked behind some proprietary firewall.”

It turns out that psychology is an excellent toolbox for understanding intelligent systems based on their behavior. Where generative AI has focused on scaling up — more data, more computing power and bigger models — human intelligence works differently. “With human cognition, we only get to learn from a limited amount of data because we only live for so long,” Griffiths said. “All of the things that we do, from composing symphonies to putting a diaper on a baby, we do with just the computational resources inside our heads.”

At a time when AI seems to advance daily, Griffiths’ work suggests that understanding human cognition — with all its constraints and quirks — might be key to developing smarter AI systems. “As we start to hit the limits of what’s possible by scaling data and compute, we’re going to need to develop other systems that learn quickly or think more efficiently,” he said. “And that’s something that we can get from thinking about people.”

Peter Ramadge, Tom Griffiths, and Olga Russakovsky
From left, Princeton professors Peter Ramadge, the Gordon Y.S. Wu Professor of Engineering and director of the Center for Statistics and Machine Learning, Tom Griffiths and Olga Russakovsky at the “Genesis: Artificial Intelligence, Hope, and the Human Spirit” launch event.

Griffiths’ CoCoSci Lab also studies what he calls resource rationality: how people make intelligent decisions about how much mental computation to engage in, and how they develop efficient cognitive strategies. “We’re really good at efficiently using our cognitive resources and recognizing that a problem has the same structure as one we’ve seen before because we just don’t have the luxury of being able to put as much compute into solving a problem as we can.”

For one thing, he said, we apply heuristics — simple rules of thumb that simplify calculations — that allow us to solve problems without having to think too much.

Pointing to his desk in Peretsman Scully Hall, Griffiths recalled an example from German psychologist Gerd Gigerenzer, long-time director at the Max Planck Institute for Human Development: a heuristic for catching a baseball that replaces the computationally intense job of calculating its trajectory. “If you maintain that angle of your eyes when you’re fixating on the ball as you’re moving your feet, you’re going to end up in a position where you catch it,” he said. “Since the ball is coming in at a constant angle as you’re moving towards it, it’s going to end up hitting your hand.”
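Here is a rough Python sketch of that gaze heuristic, with made-up numbers and a fielder idealized as able to move however fast the rule demands:

```python
# Toy simulation of the gaze heuristic (illustrative numbers, not from the
# article): the fielder never computes the ball's trajectory; at each instant
# they simply stand wherever keeps the gaze angle to the ball unchanged.
dt = 0.001                 # time step (seconds)
g = 9.8                    # gravity (m/s^2)

bx, by = 0.0, 25.0         # ball position (m), starting near its apex
vx, vy = 18.0, 0.0         # ball velocity (m/s), drifting toward the fielder

fx = 40.0                  # fielder's position on the ground (m)
tan_gaze = by / (fx - bx)  # tangent of the initial gaze angle

while by > 0:
    # Ballistic update for the ball.
    bx += vx * dt
    vy -= g * dt
    by += vy * dt
    # Heuristic update for the fielder: hold the gaze angle constant.
    if by > 0:
        fx = bx + by / tan_gaze

print(f"ball lands near x = {bx:.2f} m; fielder ends up at x = {fx:.2f} m")
```

The trick works because holding the gaze angle fixed keeps the horizontal gap to the ball proportional to its height, so the gap shrinks to zero exactly when the ball reaches the ground.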

While methods like this won’t help your laptop shag pop flies any time soon, they can help develop techniques that make AI less computationally intensive. “Looking at systems that are more efficient in their use of cognitive resources can help reduce the massive amounts of money and energy going into creating data sets and providing computation for training AI models,” he said. “They might even lead to systems that are able to learn from less data.”

For Griffiths, understanding human intelligence might be just as crucial as replicating it. The next breakthrough in AI might come not from building bigger networks, but from better understanding how humans learn, collaborate and think.

A beautiful mind

In the “Computational Models of Cognition” class Griffiths often teaches, he finds that more and more of his students are neither psychology nor computer science majors. Currently at Princeton, one-third of undergraduates have taken or are taking an advanced AI course. “The future of computer science and AI are much more integrated into the foundation of a college education,” he said. “These are methods that are starting to touch all of the different academic disciplines that we have on campus.”

Tom Griffiths giving a lecture

Griffiths loves seeing the projects his students devise. “They’re taking these concepts and applying them to whatever they’re interested in — from building AI-assisted DJs to doing publishable research in cognitive science that gives us new ideas about how human minds work.”

The common thread isn’t the technology itself but the fundamental literacy it provides — a toolset for navigating a rapidly changing world. “We’re exploring the ways we can think about building bridges between what we’ve learned in modern machine learning and the things we need to know in order to understand how human minds work,” Griffiths said.

To study these abilities, Griffiths gathers huge datasets, often from online games and social media, and uses machine learning to model how people learn concepts, make decisions and solve problems. The scale of the data, enabled by the internet, allows his lab to test theories with a rigor rarely seen in psychology. One recent study analyzed millions of online chess games to understand how people allocate their cognitive resources to discover new problem-solving techniques. “We used AI to look at millions of human decisions surrounding a complex cognitive task like deciding what move to make next,” he said. “That gives us a resolution that we’re able to then look for the kinds of questions that would otherwise be really hard to study in the lab, such as: ‘Are people thinking more in situations that are more challenging? And are they making good use of their cognitive resources?’”
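The study’s methods aren’t detailed here, but the flavor of that kind of analysis can be sketched in a few lines of Python; the per-move times, the engine-style difficulty scores and the data below are all hypothetical stand-ins:

```python
import statistics

# Hypothetical per-move records: (seconds spent deciding, difficulty of the
# position on a 0-1 scale, e.g., derived from an engine evaluation).
moves = [
    (2.1, 0.10), (15.4, 0.72), (4.0, 0.25), (31.8, 0.90),
    (1.2, 0.05), (22.5, 0.81), (6.3, 0.33), (12.9, 0.60),
]

think_time = [t for t, _ in moves]
difficulty = [d for _, d in moves]

# Treat thinking time as a proxy for cognitive effort: a strong positive
# correlation would suggest players spend their limited deliberation where
# the position actually demands it.
r = statistics.correlation(think_time, difficulty)  # Pearson's r (Python 3.10+)
print(f"time-vs-difficulty correlation: r = {r:.2f}")
```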

Griffiths’ CoCoSci Lab also studies distributed computing to understand how people pool their resources to solve problems. “If we want to do something that requires more data than we can get in a single human lifetime or more computational resources than are contained in a single human brain, we use cultural mechanisms to do that,” he said. “We create things like science that allows us to have ideas, collect data and record results in a way that allows other people to build on our findings.”

“And we create institutions and form companies that allow us to work together, pool our knowledge and pool our cognitive resources. But understanding how it is that humans do those things is a whole other set of research questions,” Griffiths said. “But as I tell my students, in cognitive science you’re not going to learn the answers to the questions. You’re going to learn what’s the best way to ask the questions that we’ve come up with so far.”

It’s an understanding that Griffiths pursues through regular treks between his two academic homes. “Because I live in these two different worlds, sometimes there’s an idea in computer science that gives us a way to understand this thing that people do,” he said. “We can then use the idea to explain other things in psychology, and then bring those concepts back as new challenges for computer science.”

These connections offer a new vista in which the most profound insights often emerge not from choosing between different approaches to understanding intelligence, but from embracing the creative tension between them.

After more than two decades in cognitive science, Griffiths remains fascinated by how the human mind acquires knowledge, makes decisions and plans ahead. These open mysteries are not unlike the ones that drew him to the field of psychology in the first place and captivated him during his early days of programming. “In the long term, these questions offer insight into the nature of our own intelligence and how it might differ from the other kinds of systems that we interact with in the world,” he said. “They can lead to a better understanding of what it means to be a human being.”

   
