
Data Science Decoded

Mike E
Latest episode

Available episodes

5 results
  • Data Science #30 - The Bootstrap Method (1979)
    In the 30th episode we review the bootstrap, a non-parametric resampling technique introduced by Bradley Efron in 1979 that approximates a statistic's sampling distribution by repeatedly drawing with replacement from the observed data, allowing estimation of standard errors, confidence intervals, and bias without strong distributional assumptions. Its ability to quantify uncertainty cheaply and flexibly underlies many staples of modern data science and AI: it powers model evaluation and feature-stability analysis, inspired ensemble methods like bagging and random forests, and informs uncertainty calibration for deep-learning predictions, making contemporary models more reliable and robust. A minimal resampling sketch in Python appears after the episode list. Efron, B. "Bootstrap methods: Another look at the jackknife." The Annals of Statistics 7.1 (1979): 1-26.
    41:05
  • Data Science #29 - The Chi-squared Automatic Interaction Detection (CHAID) algorithm (1979)
    In the 29th episode, we go over the 1979 paper by Gordon Vivian Kass that introduced the CHAID algorithm. CHAID (Chi-squared Automatic Interaction Detection) is a tree-based partitioning method for exploring large categorical data sets: it iteratively splits records into mutually exclusive, exhaustive subsets based on the most statistically significant predictors rather than maximal explanatory power. Unlike its predecessor, AID, CHAID embeds each split in a chi-squared significance test (with Bonferroni-corrected thresholds), allows multi-way divisions, and handles missing or "floating" categories gracefully. In practice, CHAID proceeds by merging the predictor categories that are least distinguishable (stepwise grouping) and then testing whether any compound categories merit a further split, ensuring parsimonious, stable groupings without overfitting. Through its significance-driven, multi-way splitting and built-in bias correction against predictors with many levels, CHAID yields intuitive decision trees that highlight the strongest associations in high-dimensional categorical data. In modern data science, CHAID's core ideas underpin contemporary decision-tree algorithms (e.g., CART, C4.5) and ensemble methods like random forests, where statistical rigor in splitting criteria and robust handling of missing data remain critical. Its emphasis on automated, hypothesis-driven partitioning has influenced automated feature selection, interpretable machine learning, and scalable analytics workflows that turn raw categorical variables into actionable insights. A simplified split-selection sketch appears after the episode list.
    41:03
  • Data Science #28 - The Bloom filter algorithm (1970)
    In the 28th episode, we go over Burton Bloom's Bloom filter from 1970, a groundbreaking data structure that enables fast, space-efficient set-membership checks by allowing a small, controllable rate of false positives. Unlike traditional methods that store the full data, Bloom filters use a compact bit array and multiple hash functions, trading exactness for speed and memory savings. This idea transformed modern data science and big-data systems, powering tools like Apache Spark, Cassandra, and Kafka, where fast filtering and memory efficiency are critical for performance at scale. A minimal pure-Python sketch appears after the episode list.
    39:15
  • Data Science #27 - The History of Least Squares (1877)
    Mansfield Merriman's 1877 paper traces the historical development of the Method of Least Squares, crediting Legendre (1805) for introducing the method, Adrain (1808) for the first formal probabilistic proof, and Gauss (1809) for linking it to the normal distribution. He evaluates multiple proofs, including Laplace’s (1810) general probability-based derivation, and highlights later refinements by various mathematicians. The paper underscores the method’s fundamental role in statistical estimation, probability theory, and error minimization, solidifying its place in scientific and engineering applications.
    32:09
  • Data Science #26 - The First Gradient Descent Algorithm by Cauchy (1847)
    In this episode, we review Cauchy's 1847 paper, which introduced an iterative method for solving simultaneous equations by minimizing a function using its partial derivatives. Instead of elimination, he proposed progressively reducing the function's value through small updates, forming an early version of gradient descent. His approach allowed systematic approximation of solutions and influenced numerical optimization. This work laid the foundation for machine learning and AI, where gradient-based methods are essential: modern stochastic gradient descent (SGD) and deep-learning training algorithms follow Cauchy's principle of stepwise minimization, powering optimization in neural networks and making AI training efficient and scalable. A minimal gradient-descent sketch appears after the episode list.
    33:14
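
Below is a minimal sketch of the bootstrap from episode #30, assuming a NumPy environment; the toy data, the resample count, and the choice of the sample mean as the statistic are illustrative, not taken from Efron's paper.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=100)  # toy sample (assumption)

    n_resamples = 2000
    boot_means = np.empty(n_resamples)
    for i in range(n_resamples):
        # draw n observations with replacement from the observed data
        resample = rng.choice(data, size=data.size, replace=True)
        boot_means[i] = resample.mean()

    std_error = boot_means.std(ddof=1)  # bootstrap estimate of the standard error
    ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])  # percentile 95% CI
    print(f"SE={std_error:.3f}, 95% CI=({ci_low:.3f}, {ci_high:.3f})")

The percentile interval shown here is the simplest of several bootstrap confidence intervals; it requires no distributional assumptions beyond the resampling itself.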
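
For episode #29, here is a simplified sketch of CHAID's split selection, assuming pandas and SciPy: each categorical predictor is scored with a chi-squared test against the target, with a crude Bonferroni correction over the candidate predictors. Kass's actual correction counts the possible category groupings, and the stepwise category merging and tree recursion are omitted here.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # toy categorical data (assumption)
    df = pd.DataFrame({
        "region":  ["N", "S", "N", "E", "S", "E", "N", "S"],
        "channel": ["web", "web", "store", "web", "store", "store", "web", "web"],
        "bought":  ["yes", "no", "yes", "yes", "no", "no", "yes", "no"],
    })

    predictors = ["region", "channel"]
    best, best_p = None, 1.0
    for col in predictors:
        table = pd.crosstab(df[col], df["bought"])  # contingency table vs. target
        _, p, _, _ = chi2_contingency(table)
        p_adj = min(1.0, p * len(predictors))       # simplified Bonferroni correction
        if p_adj < best_p:
            best, best_p = col, p_adj

    print(f"split on {best!r} (adjusted p = {best_p:.3f})")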
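
A minimal pure-Python Bloom filter for episode #28; salting SHA-256 stands in for k independent hash functions, and for clarity the "bit array" spends a whole byte per bit rather than packing bits.

    import hashlib

    class BloomFilter:
        def __init__(self, m=1024, k=3):
            self.m, self.k = m, k      # bit-array size and number of hashes
            self.bits = bytearray(m)   # one byte per bit, for clarity

        def _indexes(self, item):
            for salt in range(self.k):  # simulate k independent hash functions
                digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.m

        def add(self, item):
            for i in self._indexes(item):
                self.bits[i] = 1

        def __contains__(self, item):
            # never a false negative; false positives at a small, tunable rate
            return all(self.bits[i] for i in self._indexes(item))

    bf = BloomFilter()
    bf.add("alice")
    print("alice" in bf, "bob" in bf)  # True, and almost certainly False

Production systems derive m and k from the expected number of items and the target false-positive rate; the structure never stores the items themselves, which is the source of its memory savings.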
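
Finally, a toy version of Cauchy-style gradient descent from episode #26, minimizing f(x, y) = (x - 3)^2 + (y + 1)^2 by stepping against the partial derivatives. The fixed step size is a simplification; Cauchy chose the step at each iteration.

    def grad_f(x, y):
        # partial derivatives of f(x, y) = (x - 3)**2 + (y + 1)**2
        return 2 * (x - 3), 2 * (y + 1)

    x, y = 0.0, 0.0   # initial guess
    lr = 0.1          # fixed step size (assumption)
    for _ in range(100):
        gx, gy = grad_f(x, y)
        x, y = x - lr * gx, y - lr * gy  # small update reducing f

    print(round(x, 4), round(y, 4))  # converges toward the minimizer (3, -1)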

More Science podcasts

About Data Science Decoded

We discuss seminal mathematical papers (sometimes really old 😎) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective. We will discuss the contribution of these papers not just from a pure math aspect but also how they influenced the discourse in the field, which areas were opened up as a result, and so on. Our podcast episodes are also available on our YouTube channel: https://youtu.be/wThcXx_vXjQ?si=vnMfs
Podcast website

Listen to Data Science Decoded, Il gorilla ce l'ha piccolo and many other podcasts from around the world with the radio.it app

Download the free radio.it app

  • Save your favorite radio stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many more app features