
Panelists including USC’s Adam Russell, far right, attend the AI Safety Summit. (Photo/Kirsty O’Connor, No. 10 Downing St. Used under CC BY 2.0; cropped from original)


Beyond ChatGPT: USC’s Adam Russell explains how universities can chart AI’s future

Russell, AI division director at the USC Information Sciences Institute, discusses his impressions of the recent U.K. AI Safety Summit.

November 16, 2023 By Paul McQuiston

USC scientists joined some of Big Tech’s most influential minds at Bletchley Park in the United Kingdom earlier this month for the inaugural AI Safety Summit, a major international conference focused on AI safety, ethics and risk mitigation.

The conference brought together experts and officials from government, industry, nonprofits and academia to discuss making AI safety as high a priority as AI innovation. Emerging from the summit was the Bletchley Declaration, a policy statement that emphasized the importance of cooperation in understanding and mitigating AI risks.

Adam Russell, director of the AI division at the USC Information Sciences Institute, attended the first day, along with representatives of the world’s most influential tech companies including Google, Amazon, Meta, Microsoft and OpenAI, the creator of ChatGPT. World leaders such as U.S. Vice President Kamala Harris and U.K. Prime Minister Rishi Sunak met on the second day to discuss a way forward. In all, 28 countries were represented.

Russell, a sociocultural anthropologist hired in July to head one of the nation’s largest and most accomplished academic AI research departments, spoke to us about how USC can help guide AI safety toward becoming what he called a “mature science.”

Adam Russell is the director of the AI division at the USC Information Sciences Institute. (Photo/Courtesy of Adam Russell)

What was your role at the summit?

Russell: To start with, I have to caveat that the summit was conducted under [the] Chatham House Rule, so we can’t attribute what was said to the people who said it. But you’ve seen who was in attendance — we had some really important folks in the AI space there. In the panels I sat on, we talked about things like the risks of losing control of AI, and what the scientific community can do to address some of the risks associated with cutting-edge AI as it’s increasingly deployed into the real world.

One takeaway for me from these discussions is, unsurprisingly, anthropological. Cultures tend to categorize things in the world as good or bad, right or wrong, us or them, that sort of thing. So, I noticed some interesting categorical tensions: open source vs. proprietary models, innovation vs. risk. What I heard was a tension between developing and deploying AI — which is happening at the speed of engineering — and the need to tackle AI safety and characterize risk, which is happening at the speed of science. The latter is much slower than the former. Very smart people are trying to figure out how we can thread this needle of realizing AI’s promise at the speed of engineering, while also recognizing that we have to move faster on building the science of AI safety — because by the time some of these risks materialize, given their potential magnitude, it may to some degree be too late.

I was also heartened to hear people beginning to realize that the notion that you have to focus on either real risk happening today or on potential risk that could occur tomorrow is a false dichotomy. AI safety, as a science, ought to be able to tackle both. This is why one of our strategic R&D areas at ISI is called “AI in the Wild.” We need to develop capabilities to understand what AI is doing for us, as well as what AI is doing to us.

How can USC be a leader in these efforts?

Russell: To meet these kinds of challenges, universities are going to have to evolve. AI safety is not just economics, it is not just computer science, it is not just public policy, it is not just anthropology — it is all those things, and more. There’s a nicely provocative 2020 article titled “Why computing belongs within the social sciences” that argues exactly that: computer science belongs in the social sciences if we’re going to tackle these problems. As a social scientist, I would never do that to computer scientists. But the point is you are no longer in the realm where you can think of this as just computer science. You have to think much more from a sociological perspective, which is really about systems thinking. If we focus on the word “socio-technical,” USC has all the components needed to get after truly socio-technical AI safety. It’s just a question of how we bring them together.

Our scientists can bring much-needed empirical research to the topic of AI safety. I’ve been noodling hard on what a research center on accelerating and scaling the science of AI safety might look like and how it could be most impactful. We need to bring even more experts and communities together to help mature a science of AI safety that can keep pace with AI engineering — and to be smart about using AI policy as a key lever.

USC was one of only a few U.S. universities represented at the summit. What role does academia play in helping AI realize its promise?

Russell: There are two big spaces where universities are going to be really important. The first is that it’s becoming more widely acknowledged that generative AI — which is what powers ChatGPT — has a limited architecture. There is still foundational research to be done in AI, and people are realizing that if we’re not careful, the fever pitch around commercial generative AI may lead us to disregard those additional investments that should be made. Ultimately, you want your utility function to be things like social good, knowledge and truth, as opposed to [just] engagement and business licensing. That is very much what universities can and should be doing.

The second is that we need to take AI safety much more seriously as a science and push it toward becoming a mature science. AI safety will need to include not just things like ethics, but also technical questions like how to align AI with different communities’ values and how to create AI that is able to learn what the right thing to do is. When you look at the hallmarks of mature sciences, they have things like common infrastructure, standardization, methods, protocols, implications that may inform policy, a body of confirmatory research — all based on strong theoretical foundations. Right now, there’s very little of that in AI safety. It’s almost like the early days of chemistry, when people were just throwing things together to see what happened. That’s a good start, but we don’t yet have a periodic table of AI safety where we can begin to robustly design AI safety into systems. It’s going to take universities to really help do that.