Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in quantum computation. He also holds a chair in computer science at UT Austin, but is currently on leave to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He has authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust".
This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more.
It was really great to get these three in the same virtual room, and I think you'll find this conversation brings something fresh to a topic that has admittedly been beaten to death in certain corners of the internet.
Pre-order my book:
"The End of Race Politics: Arguments for a Colorblind America" - [ Ссылка ]
FOLLOW COLEMAN:
Check out my Album: AMOR FATI - [ Link ]
Substack - [ Link ]
Join the Unfiltered Community - [ Link ]
YouTube - [ Link ]
Twitter - [ Link ]
Facebook - [ Link ]
Instagram - [ Link ]
Podcast - [ Link ]
Website - [ Link ]
Chapters:
00:00:00 Intro
00:03:45 The Uncertainty Of ChatGPT's Potential Threats
00:05:50 The Need To Understand And Align Machine Values
00:09:01 What Does AI Want In The Future?
00:14:44 Universal Threat Of Superintelligence: A Global Concern
00:17:13 Inadequacy Of Bombing Data Centers And The Pace Of Technological Advancements
00:20:48 Current Machines Lack General Intelligence
00:25:46 Leveraging AI As A Partner For Complex Tasks
00:29:46 Improving GPT's Knowledge Gap: From GPT-3 To GPT-4
00:32:00 The Unseen Brilliance Of Artificial Intelligence
00:37:27 Introducing A Continuous Spectrum Of Artificial General Intelligence
00:39:54 The Possibility Of Smarter Future AI: Surprising Or Expected?
00:42:19 The Importance Of Superintelligence's Intentions And Potential Threat To Humanity
00:47:20 The Evolution Of Optimism And Cynicism In Science
00:52:17 The Importance Of Getting It Right The First Time
00:53:53 Concerns Over Artificial Intelligence And Its Potential Threat To Humanity
00:57:39 Importance Of Global Coordination For Addressing Concerns About Superintelligence
00:59:04 Exploring The Potential Of Superintelligent AI For Human Happiness
01:03:32 The Potential Of AI To Solve Humanity's Problems
01:05:45 The Uncertain Impact Of GPT-4
01:08:30 The Future Of Utility And The Dangers Ahead
01:15:04 The Challenge Of Internalized Constraints And Jailbreaking
01:19:04 The Need For Diverse Approaches In Alignment Theory
01:23:47 The Importance Of Legible Warning Bells And Capability Evaluations
01:26:31 Exploring Hypotheses And Constraints For Robot Behavior
01:27:44 Lack Of Will And Obsession With LLMs Hinders Progress In Streetlight Installation
01:33:20 The Challenges Of Developing Knowledge About The Alignment Problem
#ConversationswithColeman #CWC #ColemanHughes #Podcast #Politics #society #Colemanunfiltered #Unfiltered #Music #Philosophy #BlackCulture #Intellectual #podcasting #podcastersofinstagram #Youtube #podcastlife #radio #comedy #podcastshow #spotifypodcast #newpodcast #interview #motivation #art #covid #history #republicans #blacklivesmatter #follow #libertarian #socialism #communism #democracy #woke #wokepolitics #media #chatgpt #AI #EliezerYudkowsky #GaryMarcus #ScottAaronson