Dive into the realities of AI-assisted coding, the origins of modern fine-tuning, and the cognitive science behind machine learning with fast.ai founder Jeremy Howard. In this episode, we unpack why AI might be turning software engineering into a slot machine and how to maintain true technical intuition in the age of large language models.
GTC, the premier AI conference, is coming — a great opportunity to learn about AI. NVIDIA and its partners will showcase breakthroughs in physical AI, AI factories, agentic AI, and inference, exploring the next wave of AI innovation for developers and researchers. Register for virtual GTC for free using my link for a chance to win an NVIDIA DGX Spark (https://nvda.ws/4qQ0LMg)
Jeremy Howard is a renowned data scientist, researcher, entrepreneur, and educator. As the co-founder of fast.ai, former President of Kaggle, and the creator of ULMFiT, Jeremy has spent decades democratizing deep learning. His pioneering work laid the foundation for modern transfer learning and the pre-training and fine-tuning paradigm that powers today's language models.
Key Topics and Main Insights Discussed:
- The Origins of ULMFiT and Fine-Tuning
- The Vibe Coding Illusion and Software Engineering
- Cognitive Science, Friction, and Learning
- The Future of Developers
RESCRIPT: https://app.rescript.info/public/share/BhX5zP3b0m63srLOQDKBTFTooSzEMh_ARwmDG_h_izk
Jeremy Howard:
https://x.com/jeremyphoward
https://www.answer.ai/
---
TIMESTAMPS:
00:00:00 Introduction & GTC Sponsor
00:03:00 ULMFiT & The Birth of Fine-Tuning
00:08:30 Intuition & The Mechanics of Learning
00:13:30 Abstraction Hierarchies & AI Creativity
00:19:30 Claude Code & The Interpolation Illusion
00:24:30 Coding vs. Software Engineering
00:30:00 Cosplaying Intelligence: Dennett vs. Searle
00:36:30 Automation, Radiology & Desirable Difficulty
00:42:30 Organizational Knowledge & The Slope
00:48:00 Vibe Coding as a Slot Machine
00:54:00 The Erosion of Control in Software
00:59:00 Interactive Programming & REPL Environments
01:05:00 The Notebook Debate & Exploratory Science
01:12:00 AI Existential Risk & Power Centralization
01:17:30 Current Risks, Privacy & Enfeeblement
---
REFERENCES:
Blog Post:
[00:03:00] fast.ai Blog: Self-Supervised Learning
https://www.fast.ai/posts/2020-01-13-self_supervised.html
[00:13:30] DeepMind Blog: Gemini Deep Think
https://deepmind.google/blog/accelerating-mathematical-and-scientific-discovery-with-gemini-deep-think/
[00:19:30] Modular Blog: The Claude C Compiler Analysis
https://www.modular.com/blog/the-claude-c-compiler-what-it-reveals-about-the-future-of-software
[00:19:45] Anthropic Engineering Blog: Building C Compiler
https://www.anthropic.com/engineering/building-c-compiler
[00:48:00] Cursor Blog: Scaling Agents
https://cursor.com/blog/scaling-agents
[01:05:15] fast.ai Blog: nbdev Git Merge Driver
https://www.fast.ai/posts/2022-08-25-jupyter-git.html
[01:17:30] Jeremy Howard: Response to AI Risk Letter
https://www.normaltech.ai/p/is-avoiding-extinction-from-ai-really
Book:
[00:08:30] M. Chirimuuta: The Brain Abstracted
https://mitpress.mit.edu/9780262548045/the-brain-abstracted/
[00:30:00] Daniel Dennett: Consciousness Explained
https://www.amazon.com/Consciousness-Explained-Daniel-C-Dennett/dp/0316180661
[00:42:30] Cesar Hidalgo: Infinite Alphabet / Laws of Knowledge
https://www.amazon.com/Infinite-Alphabet-Laws-Knowledge/dp/0241655676
Archive Article:
[00:13:45] MLST Archive: Why Creativity Cannot Be Interpolated
https://archive.mlst.ai/read/why-creativity-cannot-be-interpolated
Research Study:
[00:24:30] METR Study: AI OS Development
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Paper:
[00:24:45] Fred Brooks: No Silver Bullet
https://www.cs.unc.edu/techreports/86-020.pdf
[00:30:15] John Searle: Minds, Brains, and Programs
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/minds-brains-and-programs/DC644B47A4299C637C89772FACC2706A
<trunc, see ReScript/YT>