#004: The Future of Edge AI and What it Means for Device Makers

02/25/25 • 58 min

Coredump Sessions

In today’s Coredump Session, we dive into the fast-evolving world of Edge AI and its real implications for device makers. From robots that detect humans to welding machines that hear errors, we explore the rise of intelligent features at the hardware level. The conversation spans practical tools, common developer traps, and why on-device AI might be the most underrated revolution in embedded systems today.

Key Takeaways:

  • Edge AI means real-time inference on embedded devices, not just “AI at the edge of the network” (see the sketch after this list).
  • Privacy, latency, and power efficiency are core reasons to use Edge AI over cloud processing.
  • Hardware accelerators like the Cortex-M55 + U55 combo have unlocked GPU-like performance in microcontrollers.
  • Battery-powered AI devices are not only possible—they're already shipping.
  • Data collection and labeling are major bottlenecks, especially in real-world form factors.
  • Start projects with data acquisition firmware and plan ahead for memory, power, and future use cases.
  • Edge AI applications are expanding in healthcare, wearables, and consumer robotics.
  • Business models are shifting, with AI driving recurring revenue and service-based offerings for hardware products.
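
To make the first takeaway concrete, here is a minimal sketch of an on-device inference loop using TensorFlow Lite for Microcontrollers. The model header (model_data.h / g_model_data), the registered ops, the input size, and the tensor arena size are illustrative assumptions, not details from the episode:

```cpp
// Minimal on-device inference sketch with TensorFlow Lite for Microcontrollers.
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

#include "model_data.h"  // hypothetical header exporting g_model_data[]

namespace {
constexpr int kArenaSize = 32 * 1024;          // tune to the model and MCU RAM
alignas(16) uint8_t tensor_arena[kArenaSize];  // all interpreter working memory
}  // namespace

// Runs one inference over a window of sensor samples and writes class scores.
void RunInference(const float* samples, int num_samples, float* scores) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops the model actually uses to keep flash usage down.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return;

  // Copy the sensor window into the input tensor, run, then read the output.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < num_samples; ++i) input->data.f[i] = samples[i];

  if (interpreter.Invoke() == kTfLiteOk) {
    TfLiteTensor* output = interpreter.output(0);  // assumed shape [1, N]
    for (int i = 0; i < output->dims->data[1]; ++i) scores[i] = output->data.f[i];
  }
}
```

The embedded-specific choice here is the statically sized tensor arena: all of the interpreter's working memory comes from one fixed buffer, so RAM use is known at build time instead of being discovered in the field.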

Chapters:

00:00 Episode Teasers & Intro
02:57 What Is Edge AI Anyway?
06:42 Tiny Models, Tiny Devices, Big Impact
10:15 The Hardware Leap: From M4 to M55 + U55
15:21 Real-World Use Cases: From ECGs to Welding Bots
17:47 Spec’ing Your Hardware for AI
24:15 Firmware + Inference Frameworks: How It Actually Works
26:07 Why Data Is the Hard Part
34:21 Where Edge AI Will—and Won’t—Take Off First
37:40 Hybrid Edge + Cloud Models
40:38 Business Model Shifts: AI as a Service
44:20 Live Q&A: Compatibility, Labeling, On-Device Training
56:48 Final Advice: Think of AI as Part of the Product

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

  • Apple Podcasts
  • iHeartRadio
  • Amazon Music
  • Goodpods
  • Castbox

Visit our website


Previous Episode

#003: Pebble's Code is Free: Three Former Pebble Engineers Discuss Why It's Important (PART 1/2)

REGISTER FOR PART 2 OF THE PEBBLE CONVERSATION ON APRIL 15TH

In this episode of Coredump, three former Pebble engineers reunite to dive deep into the technical quirks, philosophies, and brilliant hacks behind Pebble OS. From crashing on purpose to building a single codebase that powered every watch, they share war stories, bugs, and what made Pebble’s firmware both rare and remarkable. If you love embedded systems, software-forward thinking, or startup grit, this one’s for you.

Key topics:

  • Pebble intentionally crashed devices to collect core dumps and improve reliability (see the sketch after this list).
  • All Pebble devices ran on a single codebase, which simplified development and updates.
  • The open-sourcing of Pebble OS is a rare opportunity to study real, commercial firmware.
  • A platform mindset—supporting all devices and apps consistently—shaped major engineering decisions.
  • Pebble’s app sandbox isolated bad code without crashing the OS, improving developer experience.
  • The team built a custom NOR flash file system to overcome constraints in size and endurance.
  • Core dumps and analytics were essential for tracking bugs, deadlocks, and field issues.
  • Collaborations between hardware and firmware engineers led to better debugging tools and smoother development.
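
As a rough illustration of the first point, the “crash on purpose” pattern usually looks something like the sketch below: on an unrecoverable condition, persist a small crash record and reset, so the failure can be inspected or uploaded later. Every helper name here (flash_log_write, system_reboot, FIRM_ASSERT) is a hypothetical stand-in, not a Pebble or Memfault API:

```cpp
// Sketch of an assert that records a crash and reboots instead of limping on.
#include <cstddef>
#include <cstdint>
#include <cstring>

struct CrashRecord {
  uint32_t magic;       // marks a valid record after reboot
  uint32_t pc;          // address of the failing call site
  char     reason[48];  // short human-readable cause (the failed expression)
};

// Hypothetical platform hooks.
void flash_log_write(const void* data, size_t len);
[[noreturn]] void system_reboot();

[[noreturn]] void Croak(const char* reason, uint32_t pc) {
  CrashRecord rec{};
  rec.magic = 0xC0DEDEADu;
  rec.pc = pc;
  std::strncpy(rec.reason, reason, sizeof(rec.reason) - 1);
  flash_log_write(&rec, sizeof(rec));  // persist before RAM contents are lost
  system_reboot();                     // deliberate reset; the record survives
}

// Fail loudly rather than continuing in a corrupted state.
#define FIRM_ASSERT(expr)                                              \
  do {                                                                 \
    if (!(expr)) {                                                     \
      Croak(#expr, (uint32_t)(uintptr_t)__builtin_return_address(0));  \
    }                                                                  \
  } while (0)
```

A real core dump would also capture registers and stack contents, but the shape of the pattern is the same: treat the crash as data to be collected, not an event to be hidden.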

Chapters:

00:00 Episode Teasers & Intro
01:10 Meet the Team: Pebble Engineers Reunite
01:13 Meet the Hosts + Why Pebble Still Matters
03:47 Why Open-Sourcing Pebble OS Is a Big Deal
06:20 The Startup Firmware Mentality
08:44 One OS, All Devices: Pebble’s Platform Bet
12:30 App Compatibility and the KEMU Emulator
14:51 Sandboxing, Syscalls, and Crashing with Grace
20:25 Pebble File System: Built from Scratch (and Why)
23:32 From Dumb to Smart: The Iterative Codebase Ethos
26:09 Core Dumps: Crashing Is a Feature
30:45 How Firmware Shaped Hardware Decisions
33:56 Rust, Easter Eggs, and Favorite Bugs
36:09 Wear-Level Failures, Security Exploits & Font Hacks
39:42 Why We Chose WAF (and Regret Nothing?)
42:41 What We’d Do Differently Next Time
47:00 Final Q&A: Open Hardware, Protocols, and Part Two?

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

  • Apple Podcasts
  • iHeartRadio
  • Amazon Music
  • Goodpods
  • Castbox

Visit our website

Next Episode

#005: The Current Realities of Cellular IoT

In today’s Coredump Session, we zoom in on the rapidly evolving world of cellular IoT—what’s working, what’s changing, and what developers should know. With expert insight from Fabien Korheim of ONES, the conversation breaks down MVNOs vs MNOs, dives into certification hurdles, explores connectivity trade-offs like NB-IoT vs LTE-M, and unpacks why cellular is quietly powering more devices than you think. Whether you're building metering devices or baby monitors, this one hits the full stack—from tech to business models.

Key Takeaways:

  • MVNOs simplify global IoT deployments by abstracting regional carrier relationships and reducing SKU complexity.
  • LTE-M is currently the safest bet for low-power cellular applications, with 5G RedCap positioned as a future alternative.
  • Certification processes are lighter with MVNOs, especially when using pre-approved modules.
  • Cellular IoT is ideal where Wi-Fi isn’t guaranteed, like basements, forests, and mobile tracking.
  • Consumer IoT has huge untapped potential—cellular can dramatically improve usability and reduce returns.
  • Battery life and data costs are major design considerations, especially when scaling fleets globally.
  • Multiradio devices and smart fallback strategies (e.g. BLE/Wi-Fi + Cellular) are becoming more common; a fallback sketch follows this list.
  • Debugging tools and observability platforms are essential for maintaining reliability across networks, devices, and regions.
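
A minimal sketch of the multiradio fallback idea mentioned above: prefer a cheap local link when it is available and fall back to cellular otherwise, buffering data when neither is up. All function names (wifi_try_connect, cellular_try_attach, send_over, queue_for_retry) are hypothetical stand-ins for whatever SDK or modem HAL a real product would use:

```cpp
// Sketch: try Wi-Fi first, fall back to cellular, queue data if both fail.
#include <cstddef>
#include <cstdint>

enum class Link { kNone, kWifi, kCellular };

// Hypothetical radio/transport hooks.
bool wifi_try_connect(uint32_t timeout_ms);
bool cellular_try_attach(uint32_t timeout_ms);  // e.g. an LTE-M attach
bool send_over(Link link, const uint8_t* payload, size_t len);
void queue_for_retry(const uint8_t* payload, size_t len);

Link bring_up_link() {
  // Wi-Fi first: no per-byte cost and usually lower power when in range.
  if (wifi_try_connect(/*timeout_ms=*/5000)) return Link::kWifi;
  // Otherwise pay for cellular; an LTE-M attach can take tens of seconds.
  if (cellular_try_attach(/*timeout_ms=*/30000)) return Link::kCellular;
  return Link::kNone;
}

void upload_telemetry(const uint8_t* payload, size_t len) {
  Link link = bring_up_link();
  if (link == Link::kNone || !send_over(link, payload, len)) {
    // No connectivity (basement, transit, outage): buffer and retry later
    // rather than burning battery on immediate retries.
    queue_for_retry(payload, len);
  }
}
```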

Chapters:

00:00 Episode Teasers & Intro
02:34 MVNO vs MNO: What’s the Difference?
06:28 Certifications, SIMs & Simplifying Deployment
12:31 NB-IoT, LTE-M, LoRaWAN & Satellite—Explained
23:43 5G for IoT: Hype or Here?
27:14 Top Use Cases: Meters, Trackers & Wildlife
33:28 The Big Opportunity: Cellular in Consumer Devices
36:33 Business Models: Who Pays for Cellular?
37:49 Getting Started: Kits, SIMs & Copy-Paste Firmware
41:59 Common Mistakes & What to Watch in the Field
47:15 What to Measure: Observability That Scales
49:13 Q&A: Prioritization, Firmware Updates, RedCap & More

Join the Interrupt Slack

Watch this episode on YouTube

Follow Memfault

Other ways to listen:

  • Apple Podcasts
  • iHeartRadio
  • Amazon Music
  • Goodpods
  • Castbox

Visit our website
