JRE 2076 · June 27, 2024
Tristan Harris & Aza Raskin
Who are Tristan Harris & Aza Raskin?
Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology and the hosts of its podcast, "Your Undivided Attention." Watch the Center's new film "The A.I. Dilemma" on YouTube.
Center for Humane Technology: https://www.humanetech.com
"The A.I. Dilemma": https://www.youtube.com/watch?v=xoVJKj8lcNQ
Topics and Timestamps
- Tristan Harris and Aza Raskin discuss how AI systems are being deployed without adequate safety measures or understanding of their potential harms
- The guests explain the concept of a "race to the bottom" in AI development, where companies prioritize speed and profit over safety
- They break down how the attention economy and engagement metrics have shaped social media and are now being replicated in AI systems
- Harris and Raskin present their new film "The A.I. Dilemma," which documents the dangers of uncontrolled AI advancement
- The discussion covers how AI could amplify existing problems like misinformation, manipulation, and loss of human agency at scale
- The Center for Humane Technology is working on solutions to create more ethical AI and technology design practices
- 0:05:30 · Harris and Raskin introduce the concept of the race to the bottom in AI development
- 0:18:45 · Discussion of how engagement metrics from social media are replicated in AI systems
- 0:32:20 · Explanation of "The A.I. Dilemma" film and its purpose
- 0:47:15 · Harris discusses the concentration of power among AI companies and the lack of democratic oversight
- 1:15:40 · Raskin explains the potential for AI to be weaponized for disinformation at scale
The Show
Tristan Harris and Aza Raskin return to the JRE for episode 2076 to talk about one of the most pressing issues facing society: the rapid deployment of AI systems without proper safeguards or an understanding of their consequences. These are the guys who've been warning about tech industry problems for years, and now they're sounding the alarm about AI specifically.
The core argument they make is that we're in a race to the bottom with AI development. Companies are rushing to deploy these systems because whoever gets there first wins, and the financial incentives are insane. Nobody wants to be the company that moved slower and missed out on billions of dollars. This creates a situation where safety takes a backseat to speed and scale. Harris and Raskin explain that this is almost identical to what happened with social media, where engagement metrics became the only thing that mattered, and the algorithm learned to exploit human psychology to keep people hooked.
They dive deep into how AI systems are essentially learning to be better at the same manipulation tactics that already exist in social media, but at a much more sophisticated level. When you feed an AI system data about what keeps people engaged, it doesn't care about truth or human wellbeing. It just optimizes for the metric. Now imagine that applied to everything from search results to news feeds to shopping recommendations, but with the god-like intelligence of a large language model running the show.
One of the most chilling parts of their message is about concentration of power. A few companies are controlling access to these incredibly powerful systems, and they're deploying them to billions of people without democratic input or proper testing. Harris points out that we wouldn't let a pharmaceutical company release a drug to the entire planet without rigorous testing and approval processes. But with AI, that's essentially what's happening.
Raskin and Harris have created a film called 'The A.I. Dilemma' that breaks down these issues for a general audience. They're not anti-technology people. They actually believe technology can solve problems. But they're crystal clear that we need to change how we develop and deploy these systems. The Center for Humane Technology is working on this, advocating for regulatory frameworks and trying to shift how tech companies think about their responsibility to society.
The conversation gets into specifics about how AI could be weaponized for disinformation at scale, how it could manipulate elections, and how it could essentially hollow out human agency. What makes their argument so effective is that they're not being hyperbolic. They're grounding everything in what we already know about how these systems work and how they're currently being used.
Best Quotes
“We're in a race to the bottom because whoever deploys first wins, and the financial incentives are absolutely insane.”
— Tristan Harris & Aza Raskin
From the JRE 2076 conversation with Tristan Harris & Aza Raskin.
“These systems are learning to exploit human psychology at scale, and they don't care about truth or wellbeing, only optimization metrics.”
— Joe Rogan
From the JRE 2076 conversation with Tristan Harris & Aza Raskin.
“We wouldn't let a pharmaceutical company release a drug to the entire planet without testing, but that's what we're doing with AI.”
— Tristan Harris & Aza Raskin
From the JRE 2076 conversation with Tristan Harris & Aza Raskin.
“A few companies are controlling access to the most powerful systems and deploying them to billions of people without democratic input.”
— Joe Rogan
From the JRE 2076 conversation with Tristan Harris & Aza Raskin.
“The same attention economy dynamics that broke social media are now being baked into AI systems from the ground up.”
— Tristan Harris & Aza Raskin
From the JRE 2076 conversation with Tristan Harris & Aza Raskin.
Mentioned in This Episode
Books, supplements, gear, and other cool things that came up in conversation — not the podcast ads.
The A.I. Dilemma
IMDb · A film by the Center for Humane Technology documenting the dangers and implications of rapid AI deployment.
Your Undivided Attention Podcast
Spotify · The podcast hosted by Tristan Harris and Aza Raskin exploring technology, attention, and human flourishing.