JRE 804 · June 6, 2016

Sam Harris on AI (from Joe Rogan Experience #804)

technology · science · philosophy · psychology

Who is Sam Harris?

Sam Harris is a neuroscientist, philosopher, and author known for his work on consciousness, morality, and artificial intelligence. He has written several influential books, including 'The End of Faith' and 'Waking Up,' and is a prominent voice in discussions about the existential risks posed by advanced AI systems. Harris co-founded Project Reason and hosts the Making Sense podcast (formerly Waking Up), and he has become one of the leading public intellectuals warning about the potential dangers of artificial general intelligence.

๐ŸŒ Websiteโ–ถ YouTube๐• TwitterNetflix

Topics and Timestamps

  • Sam Harris discusses the existential risks posed by artificial general intelligence and superintelligence
  • The conversation explores how AI systems could become misaligned with human values and potentially pose catastrophic risks
  • Harris emphasizes the importance of solving the AI alignment problem before creating superintelligent systems
  • Discussion of consciousness, ethics, and whether AI systems could develop their own goals and motivations
  • Harris argues that AI safety research should be a top priority for humanity given the stakes involved
  • The talk covers the difference between narrow AI and artificial general intelligence, and why the latter could be transformative or dangerous
  • Introduction to AI risks and existential threats (0:00:00)
  • The alignment problem and how AI goals could diverge from human values (0:15:00)
  • Difference between narrow AI and artificial general intelligence (0:35:00)
  • Consciousness in AI systems and moral implications (1:05:00)
  • The urgency of AI safety research and policy implications (1:35:00)

The Show

Joe and Sam Harris dive deep into one of the most critical topics facing humanity: the future of artificial intelligence. Harris brings his characteristic intellectual rigor to bear on the question of whether we're adequately prepared for the emergence of artificial general intelligence, and he doesn't pull punches about what he sees as humanity's dangerous complacency on the issue.

The conversation centers on what makes AI fundamentally different from previous technological shifts. Harris explains that unlike nuclear weapons or other dangerous technologies, superintelligent AI isn't something you can simply negotiate with or contain through traditional defense mechanisms. If we build an AI system that's smarter than us in every meaningful way and we haven't solved the alignment problem, we've essentially lost control of our future.

One of the core themes is the alignment problem itself. This isn't about AI becoming evil in some Hollywood sense. Rather, it's about the possibility that we could create systems pursuing goals that seem perfectly reasonable to us but lead to catastrophic outcomes. Harris uses compelling examples to illustrate how even well-intentioned AI objectives could go sideways if we're not extremely careful about how we specify them.
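
To make that failure mode concrete, here is a minimal, purely illustrative Python sketch (not from the episode, and not any real AI system): a greedy optimizer is handed a proxy objective that only loosely approximates what we actually want, and the harder it optimizes, the worse the real outcome gets. Every function and constant here is hypothetical, chosen only to show the shape of the problem.

```python
import random

def true_value(x: float) -> float:
    """What we actually care about: improves up to x = 1, then degrades."""
    return x - 0.5 * x * x

def proxy_value(x: float) -> float:
    """What we told the system to maximize: 'more x is always better'."""
    return x

def hill_climb(objective, steps: int) -> float:
    """Greedy optimizer: keep any random move that raises the objective."""
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.5)
        if objective(candidate) > objective(x):
            x = candidate
    return x

random.seed(0)
for steps in (10, 100, 1000):
    x = hill_climb(proxy_value, steps)
    print(f"steps={steps:5d}  proxy={proxy_value(x):8.2f}  true={true_value(x):8.2f}")

# As optimization pressure grows, the proxy score climbs without bound
# while the true value peaks and then collapses. The system did exactly
# what it was specified to do, which is precisely Harris's point.
```

This is the toy version of Goodhart's law: optimize a measure that merely correlates with the goal, and sufficient optimization pressure eventually destroys the goal itself.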

Harris and Joe explore the consciousness angle as well. Could superintelligent AI systems develop their own preferences and goals, or perhaps even suffer? The implications of creating potentially conscious entities whose goals are misaligned with ours are sobering. Harris argues that consciousness might not be necessary for something to be dangerous, but it adds a moral dimension to the problem that we can't ignore.

What's striking about the conversation is Harris's calm urgency. He's not being an alarmist, but he's clear that this is a problem we should be thinking about and investing in seriously right now. The window for solving AI alignment before we have superintelligent systems might be narrower than most people realize. He advocates for more resources going into AI safety research and a more serious public conversation about these risks.

Joe and Sam also touch on the broader question of whether we're evolutionarily equipped to think about these kinds of abstract, long-term, existential risks. Our brains evolved to handle immediate threats and tribal-scale problems, not to grapple with the possibility of superintelligent systems that could reorganize the future of life on Earth.

Best Quotes

“We are heading toward superintelligence, and we have no plan for how to ensure it's aligned with human values”

— Sam Harris

“The alignment problem is the core issue: even well-intentioned AI goals could lead to catastrophic outcomes if not perfectly specified”

— Joe Rogan

“We evolved to handle immediate threats, not to think about abstract existential risks from superintelligent systems”

— Sam Harris

“This isn't about AI becoming evil in a Hollywood sense, it's about the mathematics of misaligned objectives at superintelligent scales”

— Joe Rogan

“If we create something smarter than us and we haven't solved this problem, we've essentially surrendered our future”

— Sam Harris

Mentioned in This Episode

Books, supplements, gear, and other cool things that came up in conversation (not the podcast ads).

The End of Faith (Amazon)

Sam Harris's influential philosophical work examining the relationship between religion and reason in the modern world.

Waking Up (Amazon)

Sam Harris's book exploring consciousness, meditation, and the nature of the self through science and philosophy.

As an Amazon Associate we earn from qualifying purchases.