
Building Production Workflows for AI Applications
06/14/24 • 43 min
In this episode, Inngest cofounder and CEO Tony Holdstock-Brown joins a16z partner Yoko Li, as well as Derrick Harris, to discuss the reality and complexity of running AI agents and other multistep AI workflows in production. Tony also explains why developer tools for generative AI — and their founders — might look very similar to previous generations of these products, and where there are opportunities for improvement.
Here's a sample of the discussion, where Tony shares some advice for engineers looking to build for AI:
"We almost have two parallel tracks right now as engineers. We've got the CPU track, in which we're all like, 'Oh yeah, CPU-bound, big O notation. What are we doing on the application-level side?' And then we've got the GPU side, in which people are doing crazy things in order to make numbers faster, in order to make differentiation better and smoother, in order to do gradient descent in a nicer and more powerful way. The two disciplines right now are working together, but are also very, very, very different from an engineering point of view.
"This is one interesting part to think about for like new engineers, people that are just thinking about what to do if they want to go into the engineering field overall. Do you want to be on the side using AI, in which you take all of these models, do all of this stuff, build the application-level stuff, and chain things together to build products? Or do you want to be on the math side of things, in which you do really low-level things in order to make compilers work better, so that your AI things can run faster and more efficiently? Both are engineering, just completely different applications of it."
Learn more:
The Modern Transactional Stack
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Previous Episode

The Future of Image Models Is Multimodal
In this episode, Ideogram CEO Mohammad Norouzi joins a16z General Partner Jennifer Li, as well as Derrick Harris, to share his story of growing up in Iran, helping build influential text-to-image models at Google, and ultimately cofounding and running Ideogram. He also breaks down the differences between transformer models and diffusion models, as well as the transition from researcher to startup CEO.
Here's an excerpt where Mohammad discusses the reaction to the original transformer architecture paper, "Attention Is All You Need," within Google's AI team:
"I think [lead author Ashish Vaswani] knew right after the paper was submitted that this is a very important piece of the technology. And he was telling me in the hallway how it works and how much improvement it gives to translation. Translation was a testbed for the transformer paper at the time, and it helped in two ways. One is the speed of training and the other is the quality of translation.
"To be fair, I don't think anybody had a very crystal clear idea of how big this would become. And I guess the interesting thing is, now, it's the founding architecture for computer vision, too, not only for language. And then we also went far beyond language translation as a task, and we are talking about general-purpose assistants and the idea of building general-purpose intelligent machines. And it's really humbling to see how big of a role the transformer is playing into this."
Learn more:
Investing in Ideogram
Denoising Diffusion Probabilistic Models
Next Episode

Developer Tool UX in the Age of Generative AI
In this episode, design engineer Alasdair Monk joins a16z's Yoko Li and Derrick Harris to discuss how generative AI is changing how developers — and those building for developers — interact with the tools of their trade. Alasdair's journey includes stints at dev-centric companies such as Heroku/Salesforce, and he's presently designing the user experience for Poolside, an AI programming startup.
Here's a sample of Alasdair discussing the future of the prompt bar in generative coding tools:
"When interacting with machine learning models, we've almost thrown away 30 years of human-computer interaction knowledge and kind of reverted to using a terminal circa 1980 to interact with the computer, or the prompt bar. This very plain-text way to interact with AI is really interesting.
"I think it's very different when you can't predict what a user interface is going to look like. What an LLM can spit out is basically unpredictable or non-deterministic, and so how do you design for that or how do you design around the guardrails for that are the really interesting things that I think everyone who works in the industry right now is trying to figure out. And I think it's pretty clear to a lot of people that sometimes you want to chat to the computer as if it's like the rubber duck.
"I think a lot of where AI is going to really help us, particularly with engineering, is going to be in the interactions that aren't that at all, and will actually probably look much more like interacting with traditional software today, where I interact with it via windows and buttons and all sorts of GUI elements."