
The Valmy

Peter Hartree

Available episodes

5 results
  • Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
    Podcast: The Trajectory
    Episode: Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
    Release date: 2025-04-25
    This is an interview with Richard Ngo, AGI researcher and thinker, with extensive stints at both OpenAI and DeepMind.
    This is an additional installment of our "Worthy Successor" series, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
    This episode refers to the following other essays and resources:
    -- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
    -- Richard's exploratory fiction writing: https://narrativeark.xyz/
    Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ
    See the full article from this episode: https://danfaggella.com/ngo1
    ...
    There are three main questions we cover here on The Trajectory:
    1. Who are the power players in AGI, and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?
    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    --------  
    1:46:15
  • AI, data centers, and power economics, with Azeem Azhar
    Podcast: Complex Systems with Patrick McKenzie (patio11)
    Episode: AI, data centers, and power economics, with Azeem Azhar
    Release date: 2025-02-27
    Patrick McKenzie (patio11) is joined by Azeem Azhar, writer of the Exponential View newsletter, to discuss the massive data center buildout powering AI and its implications for our energy infrastructure. The conversation covers the physical limitations of modern data centers, the challenges of electricity generation, the societal ripples from historical large-scale infrastructure investments like railways and telecommunications, and the future of energy, including solar, nuclear, and geothermal power. Through their discussion, Patrick and Azeem explain why our mental models for both computing and energy systems need to be updated.
    Full transcript available here: www.complexsystemspodcast.com/ai-llm-data-center-power-economics/
    Sponsors: SafeBase | Check
    Ready to save time and close deals faster? Inbound security reviews shouldn't slow down your team or your sales cycle. Leading companies use SafeBase to eliminate up to 98% of inbound security questionnaires, automate workflows, and accelerate pipeline. Go to safebase.io/podcast
    Check is the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.
    Recommended in this episode:
    Azeem's newsletter: https://www.exponentialview.co/
    Azeem Azhar's guest essay, "The 19th-Century Technology That Threatens A.I.": https://www.nytimes.com/2024/12/28/opinion/ai-electricity-power-plants.html
    Electric Twin: https://www.electrictwin.com/
    Video of Elon Musk's Colossus: https://www.youtube.com/watch?v=Tw696JVSxJQ
    Complex Systems with Travis Dauwalter on the electrical grid: https://open.spotify.com/episode/5JY8e84sEXmHFlc8IR2kRb?si=35ymIC0UQ5SKdV8rrBcgIw
    Complex Systems with Austin Vernon on fracking: https://open.spotify.com/episode/0YDV1XyjUCM2RtuTcBGYH9?si=YshjUXPEQBiScNxrNaI-Gw
    Complex Systems with Casey Handmer on direct capture of CO2 to turn into hydrocarbons: https://open.spotify.com/episode/0GHegWgLSubYxvATmbWhQu?si=xNYBjn0ZTX2IT_pAZ5Ozsg
    Twitter: @azeem, @patio11
    Timestamps:
    (00:00) Intro
    (00:27) The power economics of data centers
    (01:12) Historical infrastructure rollouts
    (04:58) The telecoms bubble
    (06:22) Unprecedented enterprise spend on AI capabilities
    (11:12) Let's have your LLM talk to my LLM
    (16:44) Is there a saturation point?
    (19:25) Sponsors: SafeBase | Check
    (21:55) What's in a data center?
    (24:52) The challenges of data centers
    (29:40) Geographical considerations for data centers
    (36:53) Energy consumption and future needs
    (40:48) Challenges in building transmission lines
    (41:35) The solar power learning curve
    (43:51) Small modular nuclear reactors
    (51:26) Geothermal energy and fracking
    (01:01:34) The future of AI and energy systems
    (01:12:57) Wrap
    --------  
    1:13:53
  • #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
    Podcast: 80,000 Hours Podcast
    Episode: #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
    Release date: 2025-02-14
    Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through.
    That’s how today’s guest Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.
    Links to learn more, highlights, video, and full transcript.
    This dynamic played out dramatically in 1853, when US Commodore Perry sailed into Tokyo Bay with steam-powered warships that seemed magical to the Japanese, who had spent centuries deliberately limiting their technological development. With far greater military power, the US was able to force Japan to open itself to trade. Within 15 years, Japan had undergone the Meiji Restoration and transformed itself in a desperate scramble to catch up.
    Today we see hints of similar pressure around artificial intelligence. Even companies, countries, and researchers deeply concerned about where AI could take us feel compelled to push ahead — worried that if they don’t, less careful actors will develop transformative AI capabilities at around the same time anyway.
    But Allan argues this technological determinism isn’t absolute. While broad patterns may be inevitable, history shows we do have some ability to steer how technologies are developed, by whom, and what they’re used for first.
    As part of that approach, Allan has been promoting efforts to make AI more capable of sophisticated cooperation, and improving the tests Google uses to measure how well its models could do things like mislead people, hack and take control of their own servers, or spread autonomously in the wild.
    As of mid-2024 they didn’t seem dangerous at all, but we’ve learned that our ability to measure these capabilities, while good, is imperfect. If we don’t find the right way to ‘elicit’ an ability, we can miss that it’s there.
    Subsequent research from Anthropic and Redwood Research suggests there’s even a risk that future models may play dumb to avoid their goals being altered.
    That has led DeepMind to a “defence in depth” approach: carefully staged deployment starting with internal testing, then trusted external testers, then limited release, then watching how models are used in the real world. By not releasing model weights, DeepMind is able to back up and add additional safeguards if experience shows they’re necessary.
    But with much more powerful and general models on the way, individual company policies won’t be sufficient by themselves.
    Drawing on his academic research into how societies handle transformative technologies, Allan argues we need coordinated international governance that balances safety with our desire to get the massive potential benefits of AI in areas like healthcare and education as quickly as possible.
    Host Rob and Allan also cover:
    • The most exciting beneficial applications of AI
    • Whether and how we can influence the development of technology
    • What DeepMind is doing to evaluate and mitigate risks from frontier AI systems
    • Why cooperative AI may be as important as aligned AI
    • The role of democratic input in AI governance
    • What kinds of experts are most needed in AI safety and governance
    • And much more
    Chapters:
    Cold open (00:00:00)
    Who's Allan Dafoe? (00:00:48)
    Allan's role at DeepMind (00:01:27)
    Why join DeepMind over everyone else? (00:04:27)
    Do humans control technological change? (00:09:17)
    Arguments for technological determinism (00:20:24)
    The synthesis of agency with tech determinism (00:26:29)
    Competition took away Japan's choice (00:37:13)
    Can speeding up one tech redirect history? (00:42:09)
    Structural pushback against alignment efforts (00:47:55)
    Do AIs need to be 'cooperatively skilled'? (00:52:25)
    How AI could boost cooperation between people and states (01:01:59)
    The super-cooperative AGI hypothesis and backdoor risks (01:06:58)
    Aren’t today’s models already very cooperative? (01:13:22)
    How would we make AIs cooperative anyway? (01:16:22)
    Ways making AI more cooperative could backfire (01:22:24)
    AGI is an essential idea we should define well (01:30:16)
    It matters what AGI learns first vs last (01:41:01)
    How Google tests for dangerous capabilities (01:45:39)
    Evals 'in the wild' (01:57:46)
    What to do given no single approach works that well (02:01:44)
    We don't, but could, forecast AI capabilities (02:05:34)
    DeepMind's strategy for ensuring its frontier models don't cause harm (02:11:25)
    How 'structural risks' can force everyone into a worse world (02:15:01)
    Is AI being built democratically? Should it? (02:19:35)
    How much do AI companies really want external regulation? (02:24:34)
    Social science can contribute a lot here (02:33:21)
    How AI could make life way better: self-driving cars, medicine, education, and sustainability (02:35:55)
    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Camera operator: Jeremy Chevillotte
    Transcriptions: Katy Moore
    --------  
    2:44:07
  • Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward Hughes
    Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis Episode: Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward HughesRelease date: 2025-02-12Get Podcast Transcript →powered by Listen411 - fast audio-to-text and summarizationIn this episode, Edward Hughes, researcher at Google DeepMind, and Aron Vallinder, an independent researcher and PIBBSS fellow discuss their pioneering research on cultural evolution and cooperation among large language model agents. The conversation delves into the study's design, exploring how different AI models exhibit cooperative behavior in simulated environments, the implications of these findings for future AI development, and the potential societal impacts of autonomous AI agents. They elaborate on their experimental setup involving different LLMs like Claude, Gemini 1.5, and GPT-4.0 in a cooperative donor-recipient game, shedding light on how various AI models handle cooperation and their potential societal impacts. Key points include the importance of understanding externalities, the role of punishment and communication, and future research directions involving mixed-model societies and human-AI interactions. The episode invites listeners to engage in this fast-growing field, stressing the need for more hands-on research and empirical evidence to navigate the rapidly evolving AI landscape.Link to Aron & Edward's research paper "Cultural Evolution of Cooperation among LLMAgents"SPONSORS:Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitiveNetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitiveShopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. 
Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitiveCHAPTERS:(00:00) Teaser(00:42) About the Episode(03:26) Introduction(03:40) The Rapid Evolution of AI(04:58) Human Cooperation and Society(07:03) Cultural Evolution and Stories(08:39) Mechanisms of Cultural Evolution (Part 1)(20:56) Sponsors: Oracle Cloud Infrastructure (OCI) | NetSuite(23:35) Mechanisms of Cultural Evolution (Part 2)(27:07) Experimental Setup: Donor Game (Part 1)(37:35) Sponsors: Shopify(38:55) Experimental Setup: Donor Game (Part 2)(44:32) Exploring AI Societies: Claude, Gemini, and GPT-4(45:50) Striking Graphical Differences(48:08) Experiment Results and Implications(50:54) Prompt Engineering and Cooperation(57:40) Mixed Model Societies(01:00:35) Future Research Directions(01:03:10) Human-AI Interaction and Influence(01:05:20) Complexifying AI Games(01:18:14) Evaluations and Feedback Loops(01:20:50) Open Source and AI Safety(01:23:23) Reflections and Future Work(01:30:04) Outro
    --------  
    1:32:52
  • AI in 2030, Scaling Bottlenecks, and Explosive Growth
    Podcast: Epoch After Hours
    Episode: AI in 2030, Scaling Bottlenecks, and Explosive Growth
    Release date: 2025-01-16
    In our first episode of Epoch After Hours, Ege, Tamay, and Jaime dig into what they expect AI to look like by 2030; why economists are underestimating the likelihood of explosive growth; the startling regularity in technological trends like Moore's Law; Moravec’s paradox, and how we might overcome it; and much more!
    --------  
    2:02:22
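
For readers unfamiliar with the setup, the donor game referenced in the Vallinder & Hughes episode above works roughly like this: agents are paired at random, the donor may give up some of its resources, and the recipient receives a multiple of what was given, so sustained cooperation grows the total pie. Below is a minimal Python sketch of that structure. It is not the study's actual code: the paper has LLM agents generate and refine their own strategies across generations, while this toy version hard-codes a simple reputation rule, and the population size, multiplier, and threshold values here are illustrative assumptions.

# Toy donor game: a minimal sketch of the cooperation setup described above.
# Illustrative assumptions (not from the paper): 10 agents, 100 rounds,
# donations are doubled for the recipient, and each agent follows a fixed
# reputation rule instead of an LLM-generated strategy.
import random

MULTIPLIER = 2.0         # each unit donated yields this much for the recipient
DONATION_FRACTION = 0.5  # fraction of current resources a cooperator gives

class Agent:
    def __init__(self, name):
        self.name = name
        self.resources = 10.0
        self.reputation = 1.0  # fraction donated on this agent's last turn as donor

    def decide_donation(self, recipient):
        # Reciprocity rule: give generously only to recipients who have
        # themselves been generous recently.
        if recipient.reputation >= 0.3:
            return self.resources * DONATION_FRACTION
        return 0.0

def play(agents, rounds=100):
    for _ in range(rounds):
        donor, recipient = random.sample(agents, 2)
        gift = donor.decide_donation(recipient)
        # Record generosity before transferring, then apply the transfer.
        donor.reputation = gift / donor.resources if donor.resources else 0.0
        donor.resources -= gift
        recipient.resources += gift * MULTIPLIER  # cooperation grows the pie
    return sorted(agents, key=lambda a: -a.resources)

if __name__ == "__main__":
    random.seed(0)
    society = [Agent(f"agent-{i}") for i in range(10)]
    for a in play(society):
        print(f"{a.name}: {a.resources:.1f}")

Because each donation is multiplied for the recipient, a population that sustains giving ends up collectively richer than one that defects; the question the episode explores is whether societies of LLM agents discover and culturally transmit such norms on their own.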


About The Valmy

https://thevalmy.com/
Podcast website
