Jeremy Howard on Building 5,000 AI Products with 14 People (Answer AI Deep-Dive)

05/15/25 • 55 min

The MAD Podcast with Matt Turck

What happens when you try to build the “General Electric of AI” with just 14 people? In this episode, Jeremy Howard reveals the radical inside story of Answer AI — a new kind of AI R&D lab that’s not chasing AGI, but instead aims to ship thousands of real-world products, all while staying tiny, open, and mission-driven.

Jeremy shares how open-source models like DeepSeek and Qwen are quietly outpacing closed-source giants, and why some of the best new AI is coming out of China. You’ll hear the surprising truth about the so-called “DeepSeek moment,” why efficiency and cost are the real battlegrounds in AI, and how Answer AI’s “dialogue engineering” approach is already changing lives—sometimes literally.

We go deep on the tools and systems powering Answer AI’s insane product velocity, including Solve It (the platform that’s helped users land jobs and launch startups), ShellSage (AI in your terminal), and FastHTML (a new way to build web apps in pure Python). Jeremy also opens up about his unconventional path from philosophy major and computer game enthusiast to world-class AI scientist, and why he believes the future belongs to small, nimble teams who build for societal benefit, not just profit.
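
For context on FastHTML, a minimal app looks roughly like this. This sketch follows the library's published quick-start pattern; the route, page title, and text are illustrative placeholders, not something discussed in the episode:

```python
# Minimal FastHTML app: routes return Python components that render as HTML.
from fasthtml.common import *

# fast_app() returns the ASGI app plus a route decorator.
app, rt = fast_app()

@rt("/")
def get():
    # Titled wraps the content in a page with a <title> and <h1>.
    return Titled("Hello", P("A web app in pure Python."))

serve()  # starts a local dev server
```

Routes return Python objects (Titled, P, and so on) that render directly to HTML, which is what "pure Python" web apps refers to here.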

Fast.ai

Website - https://www.fast.ai

X/Twitter - https://twitter.com/fastdotai

Answer.ai

Website - https://www.answer.ai/

X/Twitter - https://x.com/answerdotai

Jeremy Howard

LinkedIn - https://linkedin.com/in/howardjeremy

X/Twitter - https://x.com/jeremyphoward

FIRSTMARK

Website - https://firstmark.com

X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)

LinkedIn - https://www.linkedin.com/in/turck/

X/Twitter - https://twitter.com/mattturck

(00:00) Intro

(01:39) Highlights and takeaways from ICLR Singapore

(02:39) Current state of open-source AI

(03:45) Thoughts on Microsoft Phi and open source moves

(05:41) Responding to OpenAI’s open source announcements

(06:29) The real impact of the DeepSeek ‘moment’

(09:02) Progress and promise in test-time compute

(10:53) Where we really stand on AGI and ASI

(15:05) Jeremy’s journey from philosophy to AI

(20:07) Becoming a Kaggle champion and starting Fast.ai

(23:04) Answer.ai mission and unique vision

(28:15) Answer.ai’s business model and early monetization

(29:33) How a small team at Answer.ai ships so fast

(30:25) Why the Devin AI agent isn't that great

(33:10) The future of autonomous agents in AI development

(34:43) Dialogue Engineering and Solve It

(43:54) How Answer.ai decides which projects to build

(49:47) Future of Answer.ai: staying small while scaling impact

Previous Episode

Why Influx Rebuilt Its Database for the IoT and Robotics Explosion

InfluxDB just dropped its biggest update ever — InfluxDB 3.0 — and in this episode, we go deep with the team behind the world’s most popular open-source time series database.

You’ll hear the inside story of how InfluxDB grew from 3,000 users in 2015 to over 1.3 million today, and why the company decided to rewrite its entire architecture from scratch in Rust, ditching Go and moving to object storage on S3.

We break down the real technical challenges that forced this radical shift: the “cardinality problem” that choked performance, the pain of linking compute and storage, and why their custom query language (Flux) failed to catch on, leading to a humbling embrace of SQL as the industry standard. You’ll learn how InfluxDB is positioning itself in a world dominated by Databricks and Snowflake, and the hard lessons learned about monetization when 1.3 million users only yield 2,600 paying customers.

InfluxData

Website - https://www.influxdata.com

X/Twitter - https://twitter.com/InfluxDB

Evan Kaplan

LinkedIn - https://www.linkedin.com/in/kaplanevan

X/Twitter - https://x.com/evankaplan

FIRSTMARK

Website - https://firstmark.com

X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)

LinkedIn - https://www.linkedin.com/in/turck/

X/Twitter - https://twitter.com/mattturck

Foursquare:

Website - https://foursquare.com

X/Twitter - https://x.com/Foursquare

IG - instagram.com/foursquare

(00:00) Intro

(02:22) The InfluxDB origin story and why time series matters

(06:59) The cardinality crisis and why Influx rebuilt in Rust

(09:26) Why SQL won (and Flux lost)

(16:34) Why InfluxData bets on FDAP

(22:51) IoT, Tesla Powerwalls, and real-time control systems

(27:54) Competing with Databricks, Snowflake, and the “lakehouse” world

(31:50) Open Source lessons, monetization, & what’s next

Next Episode

AI Eats the World: Benedict Evans on What Really Matters Now

What if the “AI revolution” is actually... stuck in the messy middle? In this episode, Benedict Evans returns to tackle the big question we left hanging a year ago: Is AI a true paradigm shift, or just another tech platform shift like mobile or cloud? One year later, the answer is more complicated — and more revealing — than anyone expected.

Benedict pulls back the curtain on why, despite all the hype and model upgrades, the core LLMs are starting to look like commodities. We dig into the real battlegrounds: distribution, brand, and the race to build sticky applications. Why is ChatGPT still topping the App Store charts while Perplexity and Claude barely register outside Silicon Valley? Why did OpenAI just hire a CEO of Applications, and what does that signal about the future of AI products?

We go deep on the “probabilistic” nature of LLMs, why error rates are still the elephant in the room, the future of consumer AI (is there a killer app beyond chatbots and image generators?), the impact of generative content on e-commerce and advertising, and whether “AI agents” are the next big thing — or just another overhyped demo.

And we ask: What happened to AI doomerism? Why did the existential risk debate suddenly vanish, and what risks should we actually care about?

Benedict Evans

LinkedIn - https://www.linkedin.com/in/benedictevans

Threads - https://www.threads.net/@benedictevans

FIRSTMARK

Website - https://firstmark.com

X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)

LinkedIn - https://www.linkedin.com/in/turck/

X/Twitter - https://twitter.com/mattturck

(00:00) Intro

(01:47) Is AI a Platform Shift or a Paradigm Shift?

(07:21) Error Rates and Trust in AI

(15:07) Adapting to AI’s Capabilities

(19:18) Generational Shifts in AI Usage

(22:10) The Commoditization of AI Models

(27:02) Are Brand and Distribution the Real Moats in AI?

(29:38) OpenAI: Research Lab or Application Company?

(33:26) Big Tech’s AI Strategies: Apple, Google, Meta, AWS

(39:00) AI and Search: Is ChatGPT a Search Engine?

(42:41) Consumer AI Apps: Where’s the Breakout?

(45:51) The Need for a GUI for AI

(48:38) Generative AI in Social and Content

(51:02) The Business Model of AI: Ads, Memory, and Moats

(55:26) Enterprise AI: SaaS, Pilots, and Adoption

(01:00:08) The Future of AI in Business

(01:05:11) Infinite Content, Infinite SKUs: AI and E-commerce

(01:09:42) Doomerism, Risks, and the Future of AI
