
For Humanity: An AI Risk Podcast

The AI Risk Network

Available episodes

5 of 122 results
  • The RAISE Act: Regulating Frontier AI | For Humanity | EP 71
    In this episode of For Humanity, John speaks with New York Assemblymember Alex Bores, sponsor of the groundbreaking RAISE Act, one of the first state-level bills in the U.S. designed to regulate frontier AI systems. They discuss:
    • Why AI poses an existential risk, with researchers estimating up to a 10% chance of extinction.
    • The political challenges of passing meaningful AI regulation at the state and federal level.
    • How the RAISE Act could require safety plans, transparency, and limits on catastrophic risks.
    • The looming jobs crisis as AI accelerates disruption across industries.
    • Why politicians are only beginning to grapple with AI’s dangers, and why the public must speak up now.
    This is a candid, urgent conversation about AI risk, regulation, and what it will take to secure humanity’s future.
    📌 Learn more about the RAISE Act.
    👉 Subscribe for more conversations on AI risk and the future of humanity.
    Duration: 1:04:28
  • Young Voices on AI Risk: Jobs, Community & the Fight for Our Future | FHP Ep. 70
    What happens when AI collides with the next generation? In this episode of For Humanity #70, Young People vs. Advancing AI, host John Sherman sits down with Emma Corbett, Ava Smithing, and Sam Heiner from the Young People’s Alliance to explore how artificial intelligence is already shaping the lives of students and young leaders. From classrooms to job applications to AI “companions,” the next generation is facing challenges that older policymakers often don’t even see. This episode digs into what young people really think about AI, and why their voices are critical in the fight for a safe and human future.
    In this episode we cover:
    • Students’ on-the-ground views of AI in education and daily life
    • How AI is fueling job loss, hiring barriers, and rising anxiety about the future
    • The hidden dangers of AI companions and the erosion of real community
    • Why young people feel abandoned by “adults in the room”
    • The path from existential dread → civic action → hope
    🎯 Why watch? Because if AI defines the future, young people will inherit it first. Their voices, fears, and leadership could decide whether AI remains a tool or becomes an existential threat.
    👉 Subscribe for more conversations on AI, humanity, and the choices that will shape our future.
    #AI #AIsafety #ForHumanityPodcast #YoungPeople #FutureofWork
    Duration: 1:06:44
  • Big Tech Under Pressure: Hunger Strikes and the Fight for AI Safety | For Humanity EP69
    Get 40% off Ground News’ unlimited access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.
    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act
    In Episode 69 of For Humanity: An AI Risk Podcast, we explore one of the most striking acts of activism in the AI debate: hunger strikes aimed at pushing Big Tech to prioritize safety over speed.
    Michael and Dennis, two AI safety advocates, join John from outside DeepMind’s London headquarters, where they are staging hunger strikes to demand that frontier AI development be paused. Inspired by Guido’s protest in San Francisco, they are risking their health to push tech leaders like Demis Hassabis to make public commitments to slow down the AI race.
    This episode looks at how ordinary people are taking extraordinary steps to demand accountability, why this form of protest is gaining attention, and what history tells us about the power of public pressure.
    In this conversation, you’ll discover:
    • Why hunger strikers believe urgent action on AI safety is necessary
    • How Big Tech companies are responding to growing public concern
    • The role of parents, workers, and communities in shaping AI policy
    • Parallels with past social movements that drove real change
    • Practical ways you can make your voice heard in the AI safety conversation
    This isn’t just about technology: it’s about responsibility, leadership, and the choices we make for future generations.
    🔗 Key Links
    👉 AI Pause Petition: https://safe.ai/act
    👉 Follow the movement on X: https://x.com/safeai
    👉 Learn more and get involved: GuardRailNow.org
    Duration: 58:17
  • Forcing Sunlight Into OpenAI | For Humanity: An AI Risk Podcast | EP68
    Get 40% off Ground News’ unlimited access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.
    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act
    Tyler Johnston, Executive Director of The Midas Project, joins John to break down the brand-new open letter demanding that OpenAI answer seven specific questions about its proposed corporate restructuring. The letter, published on 4 August 2025 and coordinated by the Midas Project, already carries the signatures of more than 100 Nobel laureates, technologists, legal scholars, and public figures.
    What we cover:
    • Why transparency matters now: OpenAI is “making a deal on humanity’s behalf without allowing us to see the contract.” (themidasproject.com)
    • The Seven Questions the letter poses, ranging from whether OpenAI will still prioritize its nonprofit mission over profit to whether it will reveal the new operating agreement that governs AGI deployment. (openai-transparency.org, themidasproject.com)
    • Who’s on board: Signatories include Geoffrey Hinton, Vitalik Buterin, Lawrence Lessig, and Stephen Fry, underscoring broad concern across science, tech, and public life. (themidasproject.com)
    • Next steps: How you can read the full letter, add your name, and help keep the pressure on for accountability.
    🔗 Key Links
    • Read / Sign the Open Letter: https://www.openai-transparency.org/
    • The Midas Project (official site): https://www.themidasproject.com/
    • Follow The Midas Project on X: https://x.com/TheMidasProj
    👉 Subscribe for weekly AI-risk conversations → http://bit.ly/ForHumanityYT
    👍 Like • Comment • Share, because transparency only happens when we demand it.
    Duration: 53:47
  • Right Wing AI Risk Alarm | For Humanity | EP67
    🚨 RIGHT-WING AI ALARM | For Humanity #67
    Steve Bannon, Tucker Carlson, and other conservative voices are sounding fresh warnings on AI extinction risk. John breaks down what’s real, what’s hype, and why this moment matters.
    ⏰ WHAT’S INSIDE
    • The ideological shift that’s bringing the right into the AI-safety fight
    • New bills on the Hill that could shape model licensing & oversight
    • Action steps for parents, policymakers, and technologists
    • A first look at the AI Risk Network (five shows, one mission: get the public ready for advanced AI)
    🔗 TAKE ACTION & LEARN MORE
    • Alliance for Secure AI: https://secureainow.org (X / Twitter: https://x.com/secureainow)
    • AI Policy Network: https://theaipn.org (LinkedIn: https://www.linkedin.com/company/theaipn)
    📡 JOIN THE NEW AI RISK NETWORK
    Subscribe here ➜ [insert channel URL]
    Turn on alerts so you never miss an episode, short, or live Q&A.
    👍 If you learned something, hit Like, drop a comment, and share this link with one person who should be watching. Every click helps wake up the world to AI risk.
    Duration: 1:16:23

More Culture & Society podcasts

About For Humanity: An AI Risk Podcast

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and look at what you can do to help save humanity. theairisknetwork.substack.com
Podcast website

Listen to For Humanity: An AI Risk Podcast, PoretCast di Giacomo Poretti, and many other podcasts from around the world with the radio.it app

Download the free radio.it app

  • Save your favorite stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features

For Humanity: An AI Risk Podcast: Related podcasts