Beyond Transformers: Maxime Labonne on Post-Training, Edge AI, and the Liquid Foundation Model Breakthrough
The transformer architecture has dominated AI since 2017, but it’s not the only approach to building LLMs - and new architectures are bringing LLMs to edge devices.

Maxime Labonne, Head of Post-Training at Liquid AI and creator of the 67,000+ star LLM Course, joins Conor Bronsdon to challenge the AI architecture status quo. Liquid AI’s hybrid architecture, combining transformers with convolutional layers, delivers faster inference, lower latency, and dramatically smaller footprints without sacrificing capability. This alternative architectural philosophy creates models that run effectively on phones and laptops without compromise.

But reimagined architecture is only half the story. Maxime unpacks the post-training reality most teams struggle with: the challenges and opportunities of synthetic data, how to balance helpfulness against safety, Liquid AI’s approach to evals, RAG architectural approaches, how he sees AI on edge devices evolving, hard-won lessons from shipping LFM 1 through 2, and much more. If you're tired of surface-level AI takes and want to understand the architectural and engineering decisions behind production LLMs from someone building them in the trenches, this is your episode.

Connect with Maxime Labonne:
LinkedIn – https://www.linkedin.com/in/maxime-labonne/
X (Twitter) – @maximelabonne
About Maxime – https://mlabonne.github.io/blog/about.html
HuggingFace – https://huggingface.co/mlabonne
The LLM Course – https://github.com/mlabonne/llm-course
Liquid AI – https://liquid.ai

Connect with Conor Bronsdon:
X (Twitter) – @conorbronsdon
Substack – https://conorbronsdon.substack.com/
LinkedIn – https://www.linkedin.com/in/conorbronsdon/

00:00 Intro — Welcome to Chain of Thought
00:27 Guest Intro — Maxime Labonne of Liquid AI
02:21 The Hybrid LLM Architecture Explained
06:30 Why Bigger Models Aren’t Always Better
11:10 Convolution + Transformers: A New Approach to Efficiency
18:00 Running LLMs on Laptops and Wearables
22:20 Post-Training as the Real Moat
25:45 Synthetic Data and Reliability in Model Refinement
32:30 Evaluating AI in the Real World
38:11 Benchmarks vs Functional Evals
43:05 The Future of Edge-Native Intelligence
48:10 Closing Thoughts & Where to Find Maxime Online
--------
52:30
Architecting AI Agents: The Shift from Models to Systems | Aishwarya Srinivasan, Fireworks AI Head of AI Developer Relations
Most AI agents are built backwards, starting with models instead of system architecture.

Aishwarya Srinivasan, Head of AI Developer Relations at Fireworks AI, joins host Conor Bronsdon to explain the shift required to build reliable agents: stop treating them as model problems and start architecting them as complete software systems. Benchmarks alone won't save you. Aish breaks down the evolution from prompt engineering to context engineering, revealing how production agents demand careful orchestration of multiple models, memory systems, and tool calls. She shares battle-tested insights on evaluation-driven development, the rise of open source models like DeepSeek v3, and practical strategies for managing autonomy with human-in-the-loop systems. The conversation addresses critical production challenges, ranging from LLM-as-judge techniques to navigating compliance in regulated environments.

Connect with Aishwarya Srinivasan:
LinkedIn: https://www.linkedin.com/in/aishwarya-srinivasan/
Instagram: https://www.instagram.com/the.datascience.gal/

Connect with Conor: https://www.linkedin.com/in/conorbronsdon/

00:00 Intro — Welcome to Chain of Thought
00:22 Guest Intro — Aish Srinivasan of Fireworks AI
02:37 The Challenge of Responsible AI
05:44 The Hidden Risks of Reward Hacking
07:22 From Prompt to Context Engineering
10:14 Data Quality and Human Feedback
14:43 Quantifying Trust and Observability
20:27 Evaluation-Driven Development
30:10 Open Source Models vs. Proprietary Systems
34:56 Gaps in the Open-Source AI Stack
38:45 When to Use Different Models
45:36 Governance and Compliance in AI Systems
50:11 The Future of AI Builders
56:00 Closing Thoughts & Follow Aish Online

Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
--------
53:25
The accidental algorithm: Melisa Russak, AI research scientist at WRITER
This week, we're doing something special and sharing an episode from another podcast we love: The Humans of AI by our friends at Writer. We're huge fans of their work, and you might remember Writer's CEO, May Habib, from the inaugural episode of our own show.

From The Humans of AI:

Learn how Melisa Russak, lead research scientist at WRITER, stumbled upon fundamental machine learning algorithms, completely unaware of existing research — twice. Her story reveals the power of approaching problems with fresh eyes and the innovative breakthroughs that can occur when constraints become catalysts for creativity.

Melisa explores the intersection of curiosity-driven research, accidental discovery, and systematic innovation, offering valuable insights into how WRITER is pushing the boundaries of enterprise AI. Tune in to learn how her journey from a math teacher in China to a pioneer in AI research illuminates the future of technological advancement.

Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash

Follow Today's Guest(s)
Check out Writer’s YouTube channel to watch the full interviews.
Learn more about WRITER at writer.com.
Follow Melisa on LinkedIn
Follow May on LinkedIn

Check out Galileo
Try Galileo
Agent Leaderboard
--------
21:09
If Code Generation is Solved, What's Next? | Graphite’s Greg Foster
The incredible velocity of AI coding tools has shifted the critical bottleneck in software development from code generation to code reviews. Greg Foster, Co-Founder & CTO of Graphite, joins the conversation to explore this new reality, outlining the three waves of AI that are leading to autonomous agents spawning pull requests in the background. He argues that as AI automates the "inner loop" of writing code, the human-centric "outer loop"—reviewing, merging, and deploying—is now under immense pressure, demanding a complete rethinking of our tools and processes.

The conversation then gets tactical, with Greg detailing how a technique called "stacking" can break down large code changes into manageable units for both humans and AI. He also identifies an emerging hiring gap where experienced engineers with strong architectural context are becoming "lethal" with AI tools. This episode is an essential guide to navigating the new bottlenecks in software development and understanding the skills that will define the next generation of high-impact engineers.

Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash

Follow Today's Guest(s)
Connect with Greg on LinkedIn
Follow Greg on X
Graphite Website: graphite.dev

Check out Galileo
Try Galileo
Agent Leaderboard
--------
54:39
Vercel's Playbook for AI Agents: From Vibe Check to Production | Malte Ubl
What’s the first step to building an enterprise-grade AI tool? Malte Ubl, CTO of Vercel, joins us this week to share Vercel’s playbook for agents, explaining how agents are a new type of software for solving flexible tasks. He shares how Vercel's developer-first ecosystem, including tools like the AI SDK and AI Gateway, is designed to help teams move from a quick proof-of-concept to a trusted, production-ready application.

Malte explores the practicalities of production AI, from the importance of eval-driven development to debugging chaotic agents with robust tracing. He offers a critical lesson on security, explaining why prompt injection requires a totally different solution - tool constraint - than traditional threats like SQL injection. This episode is a deep dive into the infrastructure and mindset, from sandboxes to specialized SLMs, required to build the next generation of AI tools.

Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash

Follow Today's Guest(s)
Connect with Malte on LinkedIn
Follow Malte on X (formerly Twitter)
Learn more about Vercel

Check out Galileo
Try Galileo
Agent Leaderboard
Introducing Chain of Thought, the weekly podcast for software engineers and leaders that demystifies artificial intelligence.
Join host Conor Bronsdon each week as we tell the stories of the people building the AI revolution, unravel actionable strategies for agents, and share practical techniques for building effective Generative AI applications.