AI on the Brain: A Q&A with Richard Zemel

ARNI Director Zemel discusses Columbia’s new NSF-funded AI institute and how it will connect major progress made in AI systems to the revolution in our understanding of the brain.

Nov 09 2023 | By Holly Evarts | Photo Credit: Timothy Lee

Since he was a kid, Richard Zemel has been fascinated with artificial intelligence (AI). Now, as the lead principal investigator of Columbia’s new institute focused on AI and the brain, he is working with a large team of leading researchers across the country to advance our understanding of the brain and build better algorithms of machine intelligence that can adapt to unpredictable scenarios.

“I got into the field in high school and by sheer luck,” says Zemel, Trianthe Dakolias Professor of Engineering and Applied Science in the Department of Computer Science and director of the AI Institute for ARtificial and Natural Intelligence (ARNI). As a young student growing up in Pittsburgh, Zemel was first introduced to AI by the late Hans Berliner, a renowned chess champion and programmer who also turned out to be Zemel’s neighbor.

“Berliner wrote one of the original AI chess programs, and also developed the best AI-based backgammon player back then. I was cutting his lawn and talked to him about what he was doing, and he offered me a summer position at Carnegie Mellon,” says Zemel, who is also a member of Columbia's Data Science Institute. As an undergraduate student at Harvard, Zemel wrote a thesis on the history of AI, supervised by Seymour Papert, one of the founders of the MIT AI lab and a pioneer in computers and education.

“I interviewed legends in the field – Herb Simon, Allen Newell, Ed Feigenbaum, and Marvin Minsky. I worked after undergrad at an AI startup, and got into neural networks in grad school, and have been working in the area ever since.”

In September, Zemel went to Washington, where he and his co-PI, Xaq Pitkow, associate professor of computational neuroscience at Carnegie Mellon University, took part in a Congressional Showcase held by the National Science Foundation (NSF). At this Capitol Hill event, which featured all 25 of NSF’s National Artificial Intelligence Research Institutes, all eyes were on the researchers leading the way in this evolving frontier of technology.

ARNI, led by Columbia, won a $20 million NSF grant in May 2023. Part of a $500 million investment spanning more than 500 funded and collaborating institutions across the U.S. and around the world, ARNI is already drawing together leading academic and industry partners focused on connecting the major progress made in AI systems to the revolution in our understanding of the brain. We sat down with Zemel on the eve of his trip to D.C. to learn more about how ARNI may transform our lives.

What are ARNI's goals?

ARNI aims to find common principles of artificial and natural intelligence, to use these shared principles to better understand the brain, and to use neuroscience and cognitive science to create better algorithms of machine intelligence. We can evaluate our understanding by being able to make better scientific predictions, and will judge our algorithms by their ability to generalize to new situations.

Why is now the time to launch an artificial intelligence institute?

We have entered an exciting phase of AI development, where the promises of machine learning (ML) approaches have come to fruition. For example, we now have language models that can effectively pass the Turing test – they can pass for humans in a text-based conversation.

However, our systems face Moravec’s paradox, which roughly states that the things animals find challenging are easy for AI, while the things humans find easy are hard for it. So the domains where AI has shown the greatest recent triumphs are all “higher” areas like language and difficult games like Go and chess, whereas basic tasks that we consider “lower,” like running, jumping, and navigation, are areas where people still greatly outperform AI and robots. Think of how a person can talk hands-free on the phone while they follow a recipe: chopping vegetables and monitoring the sauce on the stove. This gap between natural and artificial intelligence is one of the greatest impediments to the adoption of AI and implies a missing link between AI and neuroscience.

How are ARNI researchers addressing that missing link?

One project we are pursuing is developing a simulation of a rodent engaged in realistic tasks that rats find trivial but that current AI systems would struggle with. The simulated rodent uses a brain-inspired architecture to process what it sees and to drive low-level motor commands, combining a low-level motor controller similar to the brain stem with a perception system based on neurons. The investigators hypothesize that ML methods for training this system with brain-inspired mechanisms will lead to progress on these challenging “lower” tasks.
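To make the shape of such a system concrete, here is a minimal sketch of an illustrative two-module design. The module names, layer sizes, and input dimensions are placeholders for illustration only, not the project's actual architecture: a perception network encodes what the simulated animal sees, and a separate low-level controller turns those features, together with body state, into motor commands.

```python
# Illustrative sketch (not ARNI's actual model) of the two-part design:
# a perception network feeding a low-level motor controller.
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Encodes a camera image into a compact feature vector."""
    def __init__(self, feature_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.encoder(image)

class MotorController(nn.Module):
    """Maps perceptual features plus proprioception to joint torques."""
    def __init__(self, feature_dim: int = 64, proprio_dim: int = 12, n_joints: int = 8):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(feature_dim + proprio_dim, 128), nn.Tanh(),
            nn.Linear(128, n_joints),
        )

    def forward(self, features: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        return self.policy(torch.cat([features, proprio], dim=-1))

# One control step: image and body state in, motor command out.
perception, controller = PerceptionNet(), MotorController()
image = torch.randn(1, 3, 64, 64)   # simulated camera frame
proprio = torch.randn(1, 12)        # simulated joint angles and velocities
torques = controller(perception(image), proprio)
print(torques.shape)                # torch.Size([1, 8])
```

The point of the split is simply that the low-level controller can be trained or swapped independently of the perception system, loosely mirroring the brain-stem-like controller and neural perception system described above.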

Another project in this vein aims to combine sophisticated new methods for reconstructing a 3D environment from a few images with recordings of neurons in the brain of a rodent as it carries out a task. An important challenge in this project is to consider two interacting rodents, modeling each animal’s behavior as well as its perception of the other’s actions and intentions.

The work at the institute focuses on AI and the brain, specifically. How will AI impact cognitive science and neuroscience? What does each discipline have to learn from the other?

There are two major ways that AI can impact the study of natural intelligence: tools and models. The mathematical tools from AI help us find important structures within massive amounts of data. As the tools get better, we can more easily find the hidden structures that tell us what the brain is doing. At the same time, those tools are derived from models of intelligent computation — models that give reasons why AI should look for certain structures in its data, and direction for how to do so. If these models are practically useful for artificial neural networks, they could be good mental models for biological neural networks, too. So in this way, AI can provide concepts and testable hypotheses for understanding the brain and its thoughts.

At the same time, we can make great strides in AI by constructing systems that gain inspiration from how brains process sensory information, make decisions, and execute motor actions. Innovations such as optogenetic recordings provide precise data about neural responses. More generally, neuroscience and cognitive science give us insight into what is happening in the brain as humans and animals explore and act in their environments.

AI and ML systems are having a major impact on multiple fields and industries. Can you talk a bit about the progress made and what lies ahead?

Machine learning has made incredible progress in the last decade, but that progress has been fueled by the enormous scale of data and computational resources. For example, large language models like ChatGPT are trained on most of the available text on the internet, and we will soon run out of fresh data, so we need a different paradigm to continue making progress. Progress in ML systems is centered on the concept of generalization, which refers to the ability of a system to go beyond the data on which it was trained and respond appropriately to novel inputs and situations. For example, in computer vision, an ML system can recognize a hippopotamus in an image even when it is mostly submerged in a pond and the scene is full of shadows. People may not realize how hard this is: the hippo can take on a variety of shapes and orientations, much of its body can be hidden underwater, and the lighting can vary dramatically. Yet, incredibly, we still know it is a hippo and not a cow or just random pond contents.

One way of measuring generalization is by how little data we need to reach a given performance. If we can use brain-like inferences to draw good conclusions from minimal data, then that would reflect a revolutionary leap in intelligence.
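As a rough illustration of that measurement, the sketch below traces a learning curve: the same model is trained on progressively larger slices of a dataset and evaluated on held-out data. The dataset and classifier (scikit-learn’s digits data and a logistic regression) are stand-ins chosen only to keep the example self-contained, not anything ARNI uses.

```python
# Illustrative sketch: estimate sample efficiency with a learning curve,
# i.e., held-out accuracy as a function of how much training data is used.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (50, 100, 200, 400, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])          # train on a slice of the data
    acc = model.score(X_test, y_test)            # evaluate on held-out data
    print(f"{n:5d} training examples -> held-out accuracy {acc:.3f}")
# A more sample-efficient learner reaches a target accuracy at a smaller n.
```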

We are attempting to understand and build systems that can approach our incredible brains, which make complex decisions all the time. We can move around without bumping into things while carrying on a conversation, all while using tiny amounts of energy compared to computers. How is this possible?

Also, how do we learn new things over time? How does the brain avoid forgetting more than it does, and how does it combine old knowledge with new? How can we recognize someone we knew as a child when we see them again 10 years later? This is the continual learning problem, also known as lifelong learning, and it is a chief focus of ARNI.

What projects is the institute already working on?

We have already funded 15 projects, for a total of $1.5M in the first year of our institute. The projects are wide-ranging; they include testing whether artificial neural networks can gain robustness and flexibility by incorporating biological adaptation mechanisms, and creating language models that learn sequences using neurons that adjust their connections according to biological plasticity rules.

We have researchers who are working on the understanding and generation of artistic intent. The aim is to generate descriptions of art such as paintings, sculpture, and dance that can address questions like whether the artist intended to convey the masculinity of the subject. This project, for instance, could lead to an online tool for describing art and dance that helps the visually impaired in their home environments.

Another research direction concerns the interplay of learning and memory. A long-standing theory holds that memories in humans and other animals are consolidated through a mechanism known as replay, which involves rehearsing past experiences in order to facilitate learning and generalization. ARNI neuroscientists have developed a method for decoding, from neural responses, the statistics of what an animal replays during sleep. This makes it possible to test theories of what should be replayed in order to prevent forgetting of past events and tasks: we are studying what would be optimal for combating forgetting, and we aim to compare that with what we can infer is actually replayed.
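For readers who want a concrete picture of replay on the machine-learning side, here is a minimal sketch assuming the standard experience-replay recipe used in continual learning; the network, buffer size, and toy tasks are illustrative placeholders, not ARNI’s models or the decoding method described above. A small buffer of stored examples from earlier tasks is interleaved with new data so that old knowledge keeps being rehearsed.

```python
# Minimal illustrative sketch of replay for continual learning: mix stored
# examples from earlier tasks into each training step on a new task.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
replay_buffer = []          # stores (x, y) pairs from earlier tasks
BUFFER_SIZE = 500

def train_task(data, replay_k=4):
    """Train on one task's data while rehearsing a few stored old examples."""
    for x, y in data:
        xs, ys = [x], [y]
        # Rehearse: add a few examples sampled from previous tasks.
        if replay_buffer:
            for bx, by in random.sample(replay_buffer, min(replay_k, len(replay_buffer))):
                xs.append(bx)
                ys.append(by)
        loss = F.cross_entropy(model(torch.stack(xs)), torch.stack(ys))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Remember a subset of the new task for future rehearsal.
        if len(replay_buffer) < BUFFER_SIZE:
            replay_buffer.append((x.detach(), y.detach()))

# Two toy "tasks" with different input statistics.
task_a = [(torch.randn(20), torch.tensor(0)) for _ in range(200)]
task_b = [(torch.randn(20) + 2.0, torch.tensor(1)) for _ in range(200)]
train_task(task_a)   # learn task A
train_task(task_b)   # learn task B while replaying stored task-A examples
```

Without the rehearsal step, training on the second task tends to overwrite what was learned on the first; the buffer is the crude machine-learning analogue of the replay mechanism studied in the sleep experiments.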

What are the limitations and challenges of current machine learning systems?

The successful current ML systems that have been gaining so much attention depend on having enormous computer farms operating on massive amounts of data. An important challenge is to achieve similar performance with much less data and processing resources.

Another chief challenge of these systems is making them trustworthy. Many people who use them currently trust them too much, thinking that if the computer tells them something it must be true. Yet these systems are not able to generate consistent and correct responses. We need to create AI systems that are more transparent about the information they rely on in their responses, and more open about their own confidence and limitations.

Some of your own research focuses on how to make ML systems trustworthy. Can you tell us more about how ARNI will address this?

This is top of mind for our institute. It applies to many aspects of AI, such as whether the responses a system produces are factual, or whether they reflect particular cultural, political, or social viewpoints. We are also interested in the ethical issues surrounding the use of biodata, particularly data privacy, as well as copyright issues for generative AI. We plan to collaborate with others on these issues, including our colleagues at the Law School, Columbia University Libraries, and our industrial partners.

What did you tell Congress at the Sept. 19 Capitol Hill session?

Keep giving us money, please.

Where do you think ARNI will be in five years? What do you think you’ll discover?

Our hope is that after five years of close interactions between neuroscientists, cognitive scientists, and computer scientists, we will have new brain-inspired algorithms that generalize better than current computer algorithms while using less data. These systems might add brain-like modules for memory, assessing value, and cogitating to the powerful feature-processing networks used today. Those new modules might use smarter, biologically inspired microcircuits that need far less energy than today’s power-hungry computers. And as we learn how to build these new types of computers using neuroscience and cognitive science, we will also be learning what is special about how our own brains work.

Learn more about Richard Zemel's research:

Richard Zemel is a professor of computer science and the Trianthe Dakolias Professor of Engineering and Applied Science. His research focuses on machine learning and artificial intelligence.
