
Technically Speaking with Chris Wright

Red Hat

Available episodes

5 results
  • Driving healthcare discoveries with AI ft. Jianying Hu
    We often hear about AI's potential to generate text and code, but its application in healthcare and life sciences promises a more profound impact: solving our most fundamental challenges in disease and medicine. This isn't just about applying existing LLMs; it's about building entirely new foundation models that understand the complex language of biology itself. To explore this new frontier, Red Hat CTO Chris Wright speaks with Dr. Jianying Hu, an IBM Fellow, Global Science Leader of AI for Health, and Director of Healthcare and Life Sciences Research at IBM. Dr. Hu shares her extensive insights and expertise on:
    • The fundamental shift from general-purpose LLMs to domain-specific AI foundation models that learn the complex, multimodal language of biology.
    • How these new models power "in silico" simulations to accelerate drug discovery, validate new experiments, and test therapeutic efficacy.
    • The critical role of collaboration in applying AI, as seen in the IBM & Cleveland Clinic Discovery Accelerator, to create models that lead to personalized medicine.
    • Why open science and shared community benchmarks are essential for driving the next wave of medical breakthroughs.
    Join in as they explore how this foundational, collaborative work is creating a new paradigm for scientific discovery. And discover why the future of medicine isn't just about a single AI model, but about building an open, iterative ecosystem that combines deep domain expertise with the power of AI to understand biology from the ground up.
    --------  
    27:30
  • Security for the AI supply chain ft. Aeva Black
    The software supply chain has always been a critical battleground, but AI introduces an exponential increase in scale and complexity. We are no longer just securing lines of code; we are now responsible for securing the models that generate it. This is a new reality that shifts the entire attack surface for distributed IT systems. To explore this new frontier, Red Hat CTO Chris Wright speaks with Æva Black, an open source security and policy expert. Æva Black shares their extensive insights and expertise on:
    • The fundamental shift from securing code to securing the data, training processes, and models that make up the AI supply chain.
    • New, emerging attack vectors, such as exploiting model quantization, and how they are analogous to hardware threats like Spectre and Meltdown.
    • The growing burden of low-quality, AI-generated contributions on open source communities and the risk this poses to project sustainability and security.
    • Why the concept of “model provenance” is essential for building trust in AI systems.
    • The rising importance of public policy and government funding to protect and sustain open source as the critical digital infrastructure it has become.
    Listen in as they explore how the foundational principles of open source, including transparency, collaboration, and community-driven governance, offer our most promising path forward. And discover why the health and sustainability of open source communities are directly tied to the security of our AI-powered future, and what enterprise leaders can do to move from passive consumers to active contributors in this critical ecosystem.
    --------  
    21:46
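The quantization attack vector mentioned in this episode can be illustrated with a toy example: rounding weights down to int8 perturbs them, so a model can behave differently before and after quantization, and a carefully placed weight can even vanish entirely. This is a simplified sketch of the mechanism only, not an actual exploit; the function names are illustrative.

```python
# Toy illustration of why model quantization is an attack surface:
# symmetric int8 quantization rounds each weight to a multiple of a
# shared scale, so small weights can silently change -- or disappear.

def quantize_int8(w, scale):
    """Symmetric int8 quantization: round(w / scale), clamped to [-127, 127]."""
    q = round(w / scale)
    return max(-127, min(127, q))

def dequantize(q, scale):
    """Map an int8 value back to a float weight."""
    return q * scale

scale = 0.02                  # one scale shared by the whole tensor
w = 0.009                     # full-precision weight: positive, nonzero
q = quantize_int8(w, scale)   # round(0.45) -> 0
w_hat = dequantize(q, scale)  # 0.0: the weight vanished after quantization

print(w, "->", w_hat)
```

Because the full-precision and quantized models disagree on weights like this, behavior that passed evaluation in fp32 is not guaranteed to survive deployment in int8, which is what makes the quantization step itself worth auditing.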
  • Taming AI agents with observability ft. Bernd Greifeneder
    As modern IT systems grow too complex for humans to effectively manage, there's growing interest in turning to autonomous AI agents for operations. While powerful, these agents introduce new challenges around trust, reliability, and control. To explore how to solve for this, Red Hat CTO Chris Wright speaks with Bernd Greifeneder, Founder and CTO of Dynatrace (https://www.dynatrace.com), a company that has long focused on managing complexity with AI. Bernd Greifeneder shares his industry insights and expertise on:
    • The evolution from simple application architectures to massive, ephemeral microservices environments that are beyond human-scale to manage.
    • Balancing creative, generative AI with fact-based, causal AI to create a reliable and deterministic foundation for autonomous operations.
    • The critical need for high-quality, real-time data to create a "digital twin" of a production environment, enabling true root cause analysis instead of just correlation.
    • Why building trust is the biggest challenge for agentic AI and how a "human-in-the-loop" approach is essential before handing over the keys.
    Tune in for an in-depth discussion on the future of autonomous IT. This conversation is critical for any SRE, developer, or technology leader preparing to manage not just their systems, but the AI agents that will run them.
    --------  
    30:15
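The distinction this episode draws between correlation and causal root-cause analysis can be sketched with a toy service topology: given a known dependency graph (a crude stand-in for a "digital twin"), a root cause is found by walking the call graph upstream from the failing service rather than by correlating alarms. The topology and function below are hypothetical; a production causal engine is far richer.

```python
# Toy causal root-cause analysis over a known service dependency graph.
# Instead of asking "which alarms fired together?", we ask "which
# unhealthy dependency is deepest on the failing service's call path?"

DEPENDS_ON = {                    # service -> services it calls (hypothetical)
    "checkout": ["payments", "cart"],
    "payments": ["db"],
    "cart": [],
    "db": [],
}

def root_causes(failing, unhealthy):
    """Deepest unhealthy dependencies reachable from the failing service."""
    causes = []
    for dep in DEPENDS_ON.get(failing, []):
        if dep in unhealthy:
            deeper = root_causes(dep, unhealthy)
            causes += deeper or [dep]   # blame the deepest unhealthy node
    return causes

# checkout, payments, and db all alarm, but the graph says db is the cause:
print(root_causes("checkout", {"checkout", "payments", "db"}))
```

The deterministic output is the point: an autonomous agent acting on this answer can be audited against the graph, which is the kind of fact-based foundation the episode argues agentic operations need.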
  • Inside distributed inference with llm-d ft. Carlos Costa
    Scaling LLM inference for production isn't just about adding more machines; it demands new intelligence in the infrastructure itself. In this episode, we're joined by Carlos Costa, Distinguished Engineer at IBM Research, a leader in large-scale compute and a key figure in the llm-d project. We discuss how to move beyond single-server deployments and build the intelligent, AI-aware infrastructure needed to manage complex workloads efficiently. Carlos Costa shares insights from his deep background in HPC and distributed systems, including:
    • The evolution from traditional HPC and large-scale training to the unique challenges of distributed inference for massive models.
    • The origin story of the llm-d project, a collaborative, open-source effort to create a much-needed "common AI stack" and control plane for the entire community.
    • How llm-d extends Kubernetes with the specialization required for AI, enabling state-aware scheduling that standard Kubernetes wasn't designed for.
    • Key architectural innovations like the disaggregation of prefill and decode stages and support for wide parallelism to efficiently run complex Mixture of Experts (MoE) models.
    Tune in to discover how this collaborative, open-source approach is building the standardized, AI-aware infrastructure necessary to make massive AI models practical, efficient, and accessible for everyone.
    --------  
    26:23
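The prefill/decode disaggregation discussed in this episode can be sketched in miniature: one worker runs the compute-bound pass over the whole prompt and hands its KV cache to a second, memory-bandwidth-bound worker that generates tokens one step at a time. The classes below are illustrative stand-ins for that split, not llm-d APIs.

```python
# Minimal sketch of disaggregated LLM inference: prefill and decode run
# on separate workers, coupled only by the transferred KV cache.

class PrefillWorker:
    """Compute-bound stage: attends over the full prompt in one pass."""
    def run(self, prompt_tokens):
        # A real prefill produces one key/value entry per prompt token;
        # here each entry is just a labeled placeholder.
        return [("kv", t) for t in prompt_tokens]

class DecodeWorker:
    """Bandwidth-bound stage: emits one token per step, reusing the cache."""
    def run(self, kv_cache, steps):
        out = []
        for _ in range(steps):
            token = len(kv_cache)        # stand-in for the model's next token
            kv_cache.append(("kv", token))  # decode extends the cache
            out.append(token)
        return out

cache = PrefillWorker().run([101, 102, 103])  # prompt of 3 tokens
tokens = DecodeWorker().run(cache, steps=2)
print(tokens)
```

Because the two stages stress hardware differently, splitting them lets a scheduler scale and place each one independently, which is the kind of state-aware decision stock Kubernetes was not designed to make.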
  • Building more efficient AI with vLLM ft. Nick Hill
    Explore what it takes to run massive language models efficiently with Red Hat's Senior Principal Software Engineer in AI Engineering, Nick Hill. In this episode, we go behind the headlines to uncover the systems-level engineering making AI practical, focusing on the pivotal challenge of inference optimization and the transformative power of the vLLM open-source project. Nick Hill shares his experiences working in AI, including:
    • The evolution of AI optimization, from early handcrafted systems like IBM Watson to the complex demands of today's generative AI.
    • The critical role of open-source projects like vLLM in creating a common, efficient inference stack for diverse hardware platforms.
    • Key innovations like PagedAttention that solve GPU memory fragmentation and manage the KV cache for scalable, high-throughput performance.
    • How the open-source community is rapidly translating academic research into real-world, production-ready solutions for AI.
    Join us to explore the infrastructure and optimization strategies making large-scale AI a reality. This conversation is essential for any technologist, engineer, or leader who wants to understand the how and why of AI performance. You’ll come away with a new appreciation for the clever, systems-level work required to build a truly scalable and open AI future.
    --------  
    20:52
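The PagedAttention idea mentioned in this episode can be sketched as a block allocator: the KV cache is carved into fixed-size blocks handed out on demand, so a growing sequence never needs one large contiguous region, and a finished request returns its blocks to the pool immediately. This is a toy sketch of the allocation scheme only, not the vLLM implementation.

```python
# Toy paged KV-cache allocator: fixed-size blocks avoid the memory
# fragmentation caused by reserving one contiguous region per sequence.

class PagedKVCache:
    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))  # pool of free block ids
        self.block_tables = {}                      # seq_id -> list of block ids
        self.lengths = {}                           # seq_id -> tokens stored

    def append_token(self, seq_id):
        """Reserve cache space for one new token of a sequence."""
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:                # last block full: grab one
            block = self.free_blocks.pop()
            self.block_tables.setdefault(seq_id, []).append(block)
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        """Sequence finished: its blocks return to the pool for reuse."""
        self.free_blocks += self.block_tables.pop(seq_id, [])
        self.lengths.pop(seq_id, None)

kv = PagedKVCache(num_blocks=8, block_size=4)
for _ in range(6):                       # 6 tokens need ceil(6/4) = 2 blocks
    kv.append_token("req-1")
print(len(kv.block_tables["req-1"]))
kv.release("req-1")
print(len(kv.free_blocks))
```

Allocating per block rather than per worst-case sequence length is what lets many concurrent requests share GPU memory with little waste, which is the throughput win the episode describes.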


About Technically Speaking with Chris Wright

Struggling to keep pace with the ever-changing world of technology? For experienced tech professionals, making sense of this complexity to find real strategic advantages is key. This series offers a clear path, featuring insightful, casual conversations with leading global experts, innovators, and key voices from Red Hat, all cutting through the hype. Drawing from Red Hat's deep expertise in open source and enterprise innovation, each discussion delves into new and emerging technologies: from artificial intelligence and the future of cloud computing to cybersecurity, data management, and beyond. The focus is on understanding not just the 'what,' but the important 'why' and 'how': exploring how these advancements can shape long-term strategic developments for your organization and your career. Gain an insider’s perspective that humanizes complex topics, helping you anticipate what’s next and make informed decisions. Equip yourself with the knowledge to turn today's emerging tech into valuable, practical strategies and apply innovative thinking in your work. Tune in for forward-looking discussions that connect the dots between cutting-edge technology and real-world application, leveraging a rich understanding of the enterprise landscape. Learn to navigate the future of tech with confidence.