Coredump Sessions

Memfault

Coredump Sessions is a podcast for embedded engineers and product teams building connected devices. Hosted by the team at Memfault, each episode features real-world stories and technical deep dives with experts across the embedded systems space. From Bluetooth pioneers and OTA infrastructure veterans to the engineers who built Pebble, we explore the tools, techniques, and tradeoffs that power reliable, scalable devices. If you're building or debugging hardware, this is your go-to for embedded insights.

Top 10 Coredump Sessions Episodes

Goodpods has curated a list of the 10 best Coredump Sessions episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Coredump Sessions for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Coredump Sessions episode by adding your comments to the episode page.

Coredump Sessions - #005: The Current Realities of Cellular IoT

03/19/25 • 59 min

In today’s Coredump Session, we zoom in on the rapidly evolving world of cellular IoT—what’s working, what’s changing, and what developers should know. With expert insight from Fabien Korheim of ONES, the conversation breaks down MVNOs vs MNOs, dives into certification hurdles, explores connectivity trade-offs like NB-IoT vs LTE-M, and unpacks why cellular is quietly powering more devices than you think. Whether you're building metering devices or baby monitors, this one hits the full stack—from tech to business models.

Key Takeaways:

  • MVNOs simplify global IoT deployments by abstracting regional carrier relationships and reducing SKU complexity.
  • LTE-M is currently the safest bet for low-power cellular applications, with 5G RedCap positioned as a future alternative.
  • Certification processes are lighter with MVNOs, especially when using pre-approved modules.
  • Cellular IoT is ideal where Wi-Fi isn’t guaranteed, like basements, forests, and mobile tracking.
  • Consumer IoT has huge untapped potential—cellular can dramatically improve usability and reduce returns.
  • Battery life and data costs are major design considerations, especially when scaling fleets globally.
  • Multiradio devices and smart fallback strategies (e.g. BLE/Wi-Fi + Cellular) are becoming more common.
  • Debugging tools and observability platforms are essential for maintaining reliability across networks, devices, and regions.
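The battery-life and data-cost takeaway above is ultimately fleet arithmetic. A back-of-envelope sketch, using hypothetical payload sizes, an assumed protocol-overhead factor, and a notional per-MB rate rather than real carrier pricing:

```python
# Illustrative fleet-scale data-cost arithmetic. The payload size, uplink
# cadence, overhead factor, and $/MB rate below are all made-up numbers,
# not real carrier pricing or measurements.

def monthly_data_mb(payload_bytes, uplinks_per_day, overhead=1.3):
    """Approximate per-device monthly data use, inflated for protocol overhead."""
    return payload_bytes * uplinks_per_day * 30 * overhead / 1e6

per_device = monthly_data_mb(payload_bytes=512, uplinks_per_day=24)
fleet_cost = 100_000 * per_device * 0.10   # 100k devices at a notional $0.10/MB
print(round(per_device, 2), "MB/device/month;", round(fleet_cost), "USD/month")
```

Even a modest hourly uplink adds up across a large fleet, which is why payload trimming and batching show up so early in cellular design reviews.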

Chapters:

00:00 Episode Teasers & Intro

02:34 MVNO vs MNO: What’s the Difference?

06:28 Certifications, SIMs & Simplifying Deployment

12:31 NB-IoT, LTE-M, LoRaWAN & Satellite—Explained

23:43 5G for IoT: Hype or Here?

27:14 Top Use Cases: Meters, Trackers & Wildlife

33:28 The Big Opportunity: Cellular in Consumer Devices

36:33 Business Models: Who Pays for Cellular?

37:49 Getting Started: Kits, SIMs & Copy-Paste Firmware

41:59 Common Mistakes & What to Watch in the Field

47:15 What to Measure: Observability That Scales

49:13 Q&A: Prioritization, Firmware Updates, RedCap & More

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

In today’s Coredump Session, the team dives deep into the world of over-the-air (OTA) updates—why they matter, how they break, and what it takes to get them right. From horror stories involving IR updates in a snowstorm to best practices for deploying secure firmware across medical devices, this conversation covers the full stack of OTA: device, cloud, process, and people. It's equal parts cautionary tale and technical masterclass.

Key Takeaways:

  • OTA is essential for modern hardware—without it, even small bugs can require massive field operations.
  • Good OTA starts early, ideally at the product design and architecture phase.
  • Bootloaders, memory maps, and security keys must be carefully planned to avoid long-term issues.
  • Staged rollouts and cohorts help mitigate fleet-wide disasters.
  • Signing keys and root certificates should be treated like firmware—versioned, updatable, and secure.
  • Real-world constraints (medical, smart home, etc.) make OTA more complex—but not optional.
  • Testing both the update and the update mechanism itself is critical before going live.
  • When OTA fails, fallback plans (like dual banks or A/B slots) can be the difference between a patch and a catastrophe.
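The dual-bank / A/B-slot fallback in the final takeaway can be sketched in a few lines. This is an illustrative model (the slot flags, attempt budget, and function names are invented), not any particular bootloader's logic:

```python
# Minimal sketch of A/B-slot fallback: each slot carries a "confirmed" flag
# and a boot-attempt budget. A new image gets a few trial boots; if it never
# confirms itself, the bootloader rolls back to the other, known-good slot.

MAX_ATTEMPTS = 3

def select_slot(slots, active):
    """Return the slot index to boot, applying fallback if needed."""
    s = slots[active]
    if s["confirmed"]:
        return active                      # known-good image: boot it
    if s["attempts"] < MAX_ATTEMPTS:
        s["attempts"] += 1                 # trial boot of an unconfirmed image
        return active
    other = 1 - active
    if slots[other]["confirmed"]:
        return other                       # update never confirmed: roll back
    raise RuntimeError("no bootable image")

slots = [
    {"confirmed": True,  "attempts": 0},   # slot 0: old, known-good firmware
    {"confirmed": False, "attempts": 0},   # slot 1: freshly written update
]

# Three trial boots of slot 1 fail to confirm; the fourth falls back to slot 0.
boots = [select_slot(slots, 1) for _ in range(4)]
print(boots)  # [1, 1, 1, 0]
```

The key design point is that the running image, not the bootloader, marks itself confirmed, so a firmware build that boots but cannot reach that code path still gets rolled back.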

Chapters:

00:00 Episode Teasers & Intro

03:29 Meet the Guests + OTA Gut Reactions

05:33 Why OTA Is Non-Negotiable

09:31 Building OTA into Hardware from Day One

16:49 Cloud-Side OTA: Cohorts, Load, and Timing

21:53 OTA in Regulated Industries

30:10 When OTA Breaks Itself

34:44 Minimizing OTA Risk: The Defensive Playbook

41:18 OTA and the Matter Standard

47:17 Networking Stacks, Constraints, and Reliability

51:11 Security, Scale, and the OTA Future

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

REGISTER FOR PART 2 OF THE PEBBLE CONVERSATION ON APRIL 15TH

In this episode of Coredump, three former Pebble engineers reunite to dive deep into the technical quirks, philosophies, and brilliant hacks behind Pebble OS. From crashing on purpose to building a single codebase that powered every watch, they share war stories, bugs, and what made Pebble’s firmware both rare and remarkable. If you love embedded systems, software-forward thinking, or startup grit— this one’s for you.

Key topics:

  • Pebble intentionally crashed devices to collect core dumps and improve reliability.
  • All Pebble devices ran on a single codebase, which simplified development and updates.
  • The open-sourcing of Pebble OS is a rare opportunity to study real, commercial firmware.
  • A platform mindset—supporting all devices and apps consistently—shaped major engineering decisions.
  • Pebble’s app sandbox isolated bad code without crashing the OS, improving developer experience.
  • The team built a custom NOR flash file system to overcome constraints in size and endurance.
  • Core dumps and analytics were essential for tracking bugs, deadlocks, and field issues.
  • Collaborations between hardware and firmware engineers led to better debugging tools and smoother development.
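"Crashing is a feature" boils down to trapping the failure, capturing a structured record for later upload, and rebooting. A miniature illustration (the record fields are invented for the sketch; Pebble's real core-dump format is in the open-sourced OS):

```python
# Conceptual sketch of deliberate crash capture: instead of hanging silently
# on an assertion failure, save a small structured record that can be
# uploaded and analyzed later. Field names here are invented.

import traceback

def run_with_coredump(fn):
    """Run fn; on an assertion failure, return a minimal crash record."""
    try:
        fn()
        return None
    except AssertionError as exc:
        return {
            "reason": type(exc).__name__,
            "message": str(exc),
            "backtrace": traceback.format_exc().splitlines()[-2:],
        }

def buggy_task():
    battery_mv = -1                        # an impossible sensor reading
    assert battery_mv >= 0, "battery_mv must be non-negative"

dump = run_with_coredump(buggy_task)
print(dump["reason"], "-", dump["message"])
```

On real hardware the equivalent is a fault handler that snapshots registers and stack into flash before rebooting, but the flow is the same: crash loudly, record, recover.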

Chapters:

00:00 Episode Teasers & Intro

01:10 Meet the Team: Pebble Engineers Reunite

01:13 Meet the Hosts + Why Pebble Still Matters

03:47 Why Open-Sourcing Pebble OS Is a Big Deal

06:20 The Startup Firmware Mentality

08:44 One OS, All Devices: Pebble’s Platform Bet

12:30 App Compatibility and the KEMU Emulator

14:51 Sandboxing, Syscalls, and Crashing with Grace

20:25 Pebble File System: Built from Scratch (and Why)

23:32 From Dumb to Smart: The Iterative Codebase Ethos

26:09 Core Dumps: Crashing Is a Feature

30:45 How Firmware Shaped Hardware Decisions

33:56 Rust, Easter Eggs, and Favorite Bugs

36:09 Wear-Level Failures, Security Exploits & Font Hacks

39:42 Why We Chose WAF (and Regret Nothing?)

42:41 What We’d Do Differently Next Time

47:00 Final Q&A: Open Hardware, Protocols, and Part Two?

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

In today’s Coredump Session, we dive into the fast-evolving world of Edge AI and its real implications for device makers. From robots that detect humans to welding machines that hear errors, we explore the rise of intelligent features at the hardware level. The conversation spans practical tools, common developer traps, and why on-device AI might be the most underrated revolution in embedded systems today.

Key Takeaways:

  • Edge AI means real-time inference on embedded devices, not just “AI at the edge of the network.”
  • Privacy, latency, and power efficiency are core reasons to use Edge AI over cloud processing.
  • Hardware accelerators like the Cortex-M55 + U55 combo have unlocked GPU-like performance in microcontrollers.
  • Battery-powered AI devices are not only possible—they're already shipping.
  • Data collection and labeling are major bottlenecks, especially in real-world form factors.
  • Start projects with data acquisition firmware and plan ahead for memory, power, and future use cases.
  • Edge AI applications are expanding in healthcare, wearables, and consumer robotics.
  • Business models are shifting, with AI driving recurring revenue and service-based offerings for hardware products.
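The advice above to plan ahead for memory reduces to a budget check early in the project. A sketch with hypothetical part budgets (the flash/RAM numbers are invented, not from any datasheet):

```python
# Rough sizing check for an Edge AI project: will a quantized model fit in
# flash alongside the application, and does its inference arena fit in RAM
# after the rest of the firmware takes its share? All budgets are hypothetical.

def fits(model_kb, app_kb, arena_kb, flash_kb=1024, ram_kb=256, ram_reserved_kb=96):
    """True if both the flash and RAM budgets hold."""
    flash_ok = model_kb + app_kb <= flash_kb           # code + weights in flash
    ram_ok = arena_kb <= ram_kb - ram_reserved_kb      # inference arena in RAM
    return flash_ok and ram_ok

print(fits(model_kb=300, app_kb=512, arena_kb=128))    # True
print(fits(model_kb=700, app_kb=512, arena_kb=128))    # False: flash overflow
```

Running this kind of arithmetic before committing to silicon is much cheaper than discovering mid-project that the model needs a bigger part.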

Chapters:

00:00 Episode Teasers & Intro

02:57 What Is Edge AI Anyway?

06:42 Tiny Models, Tiny Devices, Big Impact

10:15 The Hardware Leap: From M4 to M55 + U55

15:21 Real-World Use Cases: From ECGs to Welding Bots

17:47 Spec’ing Your Hardware for AI

24:15 Firmware + Inference Frameworks: How It Actually Works

26:07 Why Data Is the Hard Part

34:21 Where Edge AI Will—and Won’t—Take Off First

37:40 Hybrid Edge + Cloud Models

40:38 Business Model Shifts: AI as a Service

44:20 Live Q&A: Compatibility, Labeling, On-Device Training

56:48 Final Advice: Think of AI as Part of the Product

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

In today’s Coredump Session, we unpack the full story of Bluetooth—from its PDA-era beginnings to its rising role in cloud-connected devices. With insights from Memfault’s Chris Coleman and François Baldassari, along with Blecon’s Simon Ford, this wide-ranging conversation explores how Bluetooth Low Energy has evolved, where it thrives (and doesn’t), and why it’s often the right tool, even if it’s not a perfect one. Expect history, hot takes, and practical guidance for building better Bluetooth-powered products.

Key Takeaways:

  • Bluetooth Low Energy (BLE) and Bluetooth Classic are fundamentally different—and BLE was never just a “lite” version.
  • BLE's strength lies in its low power consumption and quick connection setup, making it ideal for peripheral devices that sleep most of the time.
  • Use cases like audio, asset tracking, and cloud sync continue to shape BLE’s evolution, and new specs like LE Audio and PAwR are expanding its reach.
  • Bluetooth wins not because it’s perfect—but because it’s practical: globally adopted, low-cost, and well-supported.
  • Debugging Bluetooth at scale requires collecting connection parameters, analyzing retries, and understanding phone ecosystem quirks.
  • BLE Mesh adoption has been underwhelming, with real-world complexity often outweighing its theoretical benefits.
  • Expect to see BLE turn up in more places, including MEMS sensors and energy-harvesting devices, not just consumer gadgets.
  • Designers should understand trade-offs in connection intervals, latency, and power draw when choosing Bluetooth for cloud or local connectivity.
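The connection-interval trade-off in the last takeaway can be made concrete with duty-cycle arithmetic. All currents and timings below are invented round numbers for illustration, not measurements from any real BLE chip, and the model assumes one connection event per interval:

```python
# Duty-cycle estimate of average current for a BLE peripheral: a short burst
# of radio activity per connection interval, sleep the rest of the time.
# The current figures are illustrative placeholders, not chip measurements.

def avg_current_ua(interval_ms, event_ms=2.0, active_ua=5000.0, sleep_ua=2.0):
    """Average current (uA) assuming one connection event per interval."""
    return (active_ua * event_ms + sleep_ua * (interval_ms - event_ms)) / interval_ms

# Longer connection intervals trade latency for battery life:
for interval in (30, 100, 1000):
    print(f"{interval:5d} ms interval -> {avg_current_ua(interval):7.1f} uA avg")
```

The same arithmetic run against a device's real current profile is how connection parameters get tuned in practice: stretch the interval until latency, not power, becomes the binding constraint.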

Chapters:

00:00 Episode Teasers & Intro

01:10 Meet the Guests: Bluetooth Roots at Pebble, Fitbit, and Blecon

06:51 BLE’s Breakthrough: The iPhone 4S Moment

10:22 BLE vs Classic: Why It Took Off

14:39 Specs That Shifted Everything: Packet Length, Coded PHY & LE Audio

21:41 Is BLE Still Interoperable? And Does It Matter?

28:22 The BLE Cloud Puzzle: Gateways, Phones & Golden Gate

38:40 BLE’s Sweet Spot: Power, Latency & When It Just Works

47:12 Operating BLE Devices in the Wild: What to Track & Why

57:40 Mesh Ambitions vs Reality

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

In today's Coredump Session, Memfault’s François Baldassari and Chris Coleman unpack the sweeping impact of new IoT security regulations like the CRA and the Cyber Trust Mark. From shocking real-world exploits to smart compliance strategies, they explore what these changes mean for hardware teams and the future of connected devices. If you ship firmware or build IoT products, this one’s essential listening.

Key takeaways:

  • IoT security is no longer optional—new regulations like the CRA and Cyber Trust Mark make it mandatory.
  • Most connected devices today are still dangerously undersecured, with outdated stacks and poor OTA support.
  • Open source platforms like Zephyr can make compliance easier by pooling security resources across companies.
  • OTA (over-the-air) updates are now a requirement in both US and EU regulations.
  • The CRA introduces SBOM (Software Bill of Materials) requirements to track vulnerabilities in dependencies.
  • Observability, encryption, and secure boot need to be built in from the start—not as last-minute add-ons.
  • Compliance will vary based on device criticality, but self-certification will be the norm for most companies.
  • Ignoring security costs more in the long run—both in reputation and risk.
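The SBOM requirement in the takeaways is, at its core, a machine-readable component list that can be diffed against vulnerability feeds. A minimal sketch (the field names loosely echo CycloneDX but are illustrative, and the versions and advisory entries are made up):

```python
# Toy SBOM check: list the firmware's components and versions, then flag any
# that appear in a known-vulnerable set. Components, versions, and the
# advisory entry are all invented for illustration.

import json

sbom = {
    "components": [
        {"name": "zephyr", "version": "3.6.0"},
        {"name": "mbedtls", "version": "3.5.2"},
        {"name": "app-firmware", "version": "1.4.1"},
    ]
}

# Pretend advisory feed: (component, version) pairs with known CVEs.
known_vulnerable = {("mbedtls", "3.5.2")}

flagged = [c["name"] for c in sbom["components"]
           if (c["name"], c["version"]) in known_vulnerable]

print(json.dumps(flagged))
```

Real tooling automates exactly this loop against live CVE databases, which is why the CRA treats the SBOM as the foundation for ongoing vulnerability management rather than a one-time artifact.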

Chapters:

00:00 Episode Teasers & Intro

01:03 Meet the Hosts: François and Chris from Memfault

03:40 Why IoT Security Is Still So Behind

07:15 Vulnerabilities, Legacy Chips, and Who’s to Blame

10:12 Wireless Protocols: Still a Huge Attack Surface

13:28 If You Ship Without OTA, You're Asking for Trouble

20:50 Introducing the CRA and Cyber Trust Mark

23:38 What the CRA Actually Requires

31:45 Reconciling Security Monitoring with GDPR

34:07 Cyber Trust Mark vs CRA: US vs EU Approaches

41:05 What You Can Do Today to Prepare

46:33 How Long Do You Have to Support a Device?

52:19 Attack Surfaces: Even a Projector Isn't Safe

56:06 Lifecycle Support and Product Lifespan Realities

58:51 Observability in Low-Resource Devices

1:00:34 Connected Architectures & Multichip Compliance

1:01:43 IoT Devices with Limited Bandwidth & OTA Constraints

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

In today's Coredump Session, we dive into a wide-ranging conversation about the intersection of AI, open source, and embedded systems with the teams from Memfault and Golioth. From the evolution of AI at the edge to the emerging role of large language models (LLMs) in firmware development, the panel explores where innovation is happening today — and where expectations still outpace reality. Listen in as they untangle the practical, the possible, and the hype shaping the future of IoT devices.

Speakers:

  • François Baldassari: CEO & Founder, Memfault
  • Thomas Sarlandie: Field CTO, Memfault
  • Jonathan Beri: CEO & Founder, Golioth
  • Dan Mangum: CTO, Golioth

Key Takeaways:

  • AI has been quietly powering embedded devices for years, especially in edge applications like voice recognition and computer vision.
  • The biggest gains in IoT today often come from cloud-based AI analytics, not necessarily from AI models running directly on devices.
  • LLMs are reshaping firmware development workflows but are not yet widely adopted for production-grade embedded codebases.
  • Use cases like audio and video processing have seen the fastest real-world adoption of AI at the edge.
  • Caution is warranted when integrating AI into safety-critical systems, where determinism is crucial.
  • Cloud-to-device AI models are becoming the go-to for fleet operations, anomaly detection, and predictive maintenance.
  • Many promising LLM-based consumer products struggle because hardware constraints and cloud dependence create friction.
  • The future of embedded AI may lie in hybrid architectures that balance on-device intelligence with cloud support.

Chapters:

00:00 Episode Teasers & Welcome

01:10 Meet the Panel: Memfault x Golioth

02:56 Why AI at the Edge Isn’t Actually New

05:33 The Real Use Cases for AI in Embedded Devices

08:07 How Much Chaos Are You Willing to Introduce?

11:19 Edge AI vs. Cloud AI: Where It’s Working Today

13:50 LLMs in Embedded: Promise vs. Reality

17:16 Why Hardware Can’t Keep Up with AI’s Pace

20:15 Building Unique Models When Public Datasets Fail

36:14 Open Source’s Big Moment (and What Comes Next)

42:49 Will AI Kill Open Source Contributions?

49:30 How AI Could Change Software Supply Chains

52:24 How to Stay Relevant as an Engineer in the AI Era

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

In today’s Coredump Session, the team reunites to unpack the behind-the-scenes lessons from their time building firmware at Pebble. This episode dives into the risks, decisions, and sheer grit behind a near-disastrous OTA update—and the ingenious hack that saved a million smartwatches. It’s a candid look at the intersection of rapid development, firmware stability, and real-world consequences.

Key Takeaways:

  • Pebble’s open approach to developer access often came at the cost of security best practices, reflecting early startup trade-offs.
  • A critical OTA update bug almost bricked Pebble devices—but the team recovered using a clever BLE-based stack hack.
  • Lack of formal security measures at the time (e.g., unsigned firmware) unintentionally enabled recovery from a serious update failure.
  • Static analysis and test automation became top priorities following the OTA scare to prevent repeat incidents.
  • The story reveals how firmware constraints (like code size and inline functions) can lead to high-stakes bugs.
  • Investing in robust release processes—including version-to-version OTA testing—proved vital.
  • Real security risks included impersonation on e-commerce platforms and potential ransom via malicious OTA compromise.
  • The importance of "hiring your hackers" was humorously noted as a de facto security strategy.
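The version-to-version OTA testing mentioned above amounts to enumerating upgrade paths, including skip-level ones, and exercising each in CI. A sketch with an invented release list:

```python
# Enumerate the older->newer upgrade paths an OTA release matrix should
# exercise, including skip-level updates (e.g. 2.9.1 straight to 3.2.0).
# The version numbers are illustrative.

from itertools import combinations

releases = ["2.9.1", "3.0.0", "3.1.0", "3.2.0"]

# Every ordered older->newer pair is a candidate OTA path to test.
paths = [(a, b) for a, b in combinations(releases, 2)]
print(len(paths), "upgrade paths, e.g.", paths[0], "and", paths[-1])
```

The pair count grows quadratically with release history, which is why teams usually cap how far back an update must be supported and test that window exhaustively.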

Chapters:

00:00 Episode Teasers & Welcome

01:22 Why Pebble’s Firmware Was Open (and Unsigned)

05:01 The Security Tradeoffs That Enabled Speed

11:00 The OTA Bug That Could Have Bricked Everything

15:26 Hacking Our Way Out with BLE Stack Overflow

17:47 Lessons Learned: Test Automation & Static Analysis

26:30 How Pebble Built a Developer Ecosystem

29:56 CloudPebble, Watchface Generator & Developer Tools

42:55 Backporting Pebble 3.0 to Legacy Hardware

49:02 The Bootloader Rewrite & Other Wild Optimizations

53:31 Simulators, Robot Arms & Debugging in CI

56:40 Firmware Signing, Anti-Rollback & Secure Update

1:06:10 Coding in Rust? What We’d Do Differently Today

1:08:28 Where to Start with Open Source Pebble Development

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

Apple Podcasts

iHeartRadio

Amazon Music

GoodPods

Castbox

Visit our website

In this episode of Coredump: Embedded Insights, the Memfault founders—François Baldassari and Chris Coleman—are joined by Brad Murray, former Pebble firmware lead, to explore the now open-sourced Pebble OS. They share war stories from the early days of embedded development, unpack why Pebble’s firmware architecture was years ahead of its time, and highlight the lessons embedded engineers can take from a real, production-grade consumer device.

Topics include:

  • Why open-sourcing Pebble OS is a big deal
  • The platform strategy behind a single codebase for multiple hardware SKUs
  • Custom file systems, app sandboxing, and crash recovery in the real world
  • The debugging hacks, performance tricks, and developer tools they wish they had built sooner

This is a rare peek behind the scenes of one of the most iconic embedded products ever shipped.



FAQ

How many episodes does Coredump Sessions have?

Coredump Sessions currently has 9 episodes available.

What topics does Coredump Sessions cover?

The podcast is about Podcasts and Technology.

What is the most popular episode on Coredump Sessions?

The episode title 'COREDUMP #003: Pebble's Code is Free: Three Former Pebble Engineers Discuss Why It's Important' is the most popular.

What is the average episode length on Coredump Sessions?

The average episode length on Coredump Sessions is 59 minutes.

How often are episodes of Coredump Sessions released?

Episodes of Coredump Sessions are typically released every 21 days.

When was the first episode of Coredump Sessions?

The first episode of Coredump Sessions was released on Sep 17, 2024.
