How Afraid of the AI Apocalypse Should We Be? | The Ezra Klein Show
From The Ezra Klein Show
Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case. Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing many leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent syst...
Mentioned in This Episode
- ChatGPT (product)
- P(doom) (concept)
- Geoffrey Hinton (person)
- GPT-5 (product)
- Sora 2 (product)
- Gemini (product)
- Claude Code (product)
- Eliezer Yudkowsky (person)
- Nate Soares (person)
- If Anyone Builds It, Everyone Dies (book)
- AI alignment (concept)
- Claude (product)
- Alignment faking (concept)
- Capture the flag (concept)
- The New York Times Magazine (company)
- Silicon Valley (location)
- Interpretability (concept)
- Large language model (concept)
- System prompt (concept)