
When the history of AI is written, Steven Adler may end up being remembered as a Paul Revere, or at least one of them, when it comes to safety.
Last month Adler, who spent four years in various safety roles at OpenAI, wrote an op-ed for The New York Times with a rather alarming headline: "I led product safety at OpenAI. Don't trust its claims about 'erotica.'" In the piece, he laid out the problems OpenAI faced when it came to letting users have erotic conversations with chatbots while also protecting them from any effects those interactions could have on their mental health. "No one wanted to be the morality police," he wrote, "but we lacked ways to carefully measure and manage erotic use. We decided that AI erotica would have to wait."
Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow "erotica for verified adults." In response, Adler wrote that he had "big questions" about whether OpenAI had done enough to, in Altman's words, "mitigate" the mental health concerns around how users interact with the company's chatbots.
After reading Adler's article, I wanted to talk to him. He graciously accepted an offer to come to WIRED's offices in San Francisco, and on this episode of The Big Interview, he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenges facing companies that are bringing chatbots to the world.
This interview has been edited for length and clarity.
Katie Drummond: Before we get started, I want to make two things clear. First off, you’re, unfortunately, not the same Steven Adler who played drums in Guns N’ Roses, are you?
Steven Adler: Absolutely true.
Well, that’s not you. And secondly, you’ve had a very long career in technology, more specifically in AI. So, before we get into things, tell us a little about your career, background, and what you’ve worked on.
I've worked across the AI industry, with a particular focus on safety. Most recently, I spent four years at OpenAI. I've worked on basically every dimension of safety issues you can imagine: How do we make products better for users and mitigate the risks that are already happening today? And, looking a little further into the future, how will we know if AI systems are becoming truly dangerous?