AI + a16z

a16z

Artificial intelligence is changing everything from art to enterprise IT, and a16z is watching all of it with a close eye. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.


Top 10 AI + a16z Episodes

Goodpods has curated a list of the 10 best AI + a16z episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to AI + a16z for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite AI + a16z episode by adding your comments to the episode page.

AI + a16z - The Best Way to Achieve AGI Is to Invent It

11/04/24 • 38 min

Longtime machine-learning researcher and University of Washington professor emeritus Pedro Domingos joins a16z General Partner Martin Casado to discuss the state of artificial intelligence, whether we're really on a path toward AGI, and the value of expressing unpopular opinions. It's a very insightful discussion as we head into an era of mainstream AI adoption, and ask big questions about how to ramp up progress and diversify research directions.

Here's an excerpt of Pedro sharing his thoughts on the increasing cost of frontier models and whether that's the right direction:

"If you believe the scaling laws hold and the scaling laws will take us to human-level intelligence, then, hey, it's worth a lot of investment. That's one part, but that may be wrong. The other part, however, is that to do that, we need exploding amounts of compute.

"If I had to predict what's going to happen, it's that we do not need a trillion dollars to reach AGI at all. So if you spend a trillion dollars reaching AGI, this is a very bad investment."

Learn more:

The Master Algorithm

2040: A Silicon Valley Satire

The Economic Case for Generative AI and Foundation Models

Follow everyone on X:

Pedro Domingos

Martin Casado

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

AI + a16z - The Future of Image Models Is Multimodal

06/07/24 • 37 min

In this episode, Ideogram CEO Mohammad Norouzi joins a16z General Partner Jennifer Li, as well as Derrick Harris, to share his story of growing up in Iran, helping build influential text-to-image models at Google, and ultimately cofounding and running Ideogram. He also breaks down the differences between transformer models and diffusion models, as well as the transition from researcher to startup CEO.

Here's an excerpt where Mohammad discusses the reaction to the original transformer architecture paper, "Attention Is All You Need," within Google's AI team:
"I think [lead author Ashish Vaswani] knew right after the paper was submitted that this is a very important piece of the technology. And he was telling me in the hallway how it works and how much improvement it gives to translation. Translation was a testbed for the transformer paper at the time, and it helped in two ways. One is the speed of training and the other is the quality of translation.

"To be fair, I don't think anybody had a very crystal clear idea of how big this would become. And I guess the interesting thing is, now, it's the founding architecture for computer vision, too, not only for language. And then we also went far beyond language translation as a task, and we are talking about general-purpose assistants and the idea of building general-purpose intelligent machines. And it's really humbling to see how big of a role the transformer is playing into this."

Learn more:
Investing in Ideogram

Imagen

Denoising Diffusion Probabilistic Models

Follow everyone on X:

Mohammad Norouzi

Jennifer Li

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.


In this archive episode from 2015, a16z's Sonal Chokshi, Frank Chen, and Steven Sinofsky discuss DeepMind's breakthrough AlphaGo system, which mastered the ancient Chinese game Go and introduced the public to reinforcement learning.

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.


In this episode, Inngest cofounder and CEO Tony Holdstock-Brown joins a16z partner Yoko Li, as well as Derrick Harris, to discuss the reality and complexity of running AI agents and other multistep AI workflows in production. Tony also explains why developer tools for generative AI — and their founders — might look very similar to previous generations of these products, and where there are opportunities for improvement.

Here's a sample of the discussion, where Tony shares some advice for engineers looking to build for AI:
"We almost have two parallel tracks right now as engineers. We've got the CPU track in which we're all like, 'Oh yeah, CPU-bound, big O notation. What are we doing on the application-level side?' And then we've got the GPU side, in which people are doing crazy things in order to make numbers faster, in order to make differentiation better and smoother, in order to do gradient descent in a nicer and more powerful way. The two disciplines right now are working together, but are also very, very, very different from an engineering point of view.

"This is one interesting part to think about for like new engineers, people that are just thinking about what to do if they want to go into the engineering field overall. Do you want to be on the side using AI, in which you take all of these models, do all of this stuff, build the application-level stuff, and chain things together to build products? Or do you want to be on the math side of things, in which you do really low-level things in order to make compilers work better, so that your AI things can run faster and more efficiently? Both are engineering, just completely different applications of it."
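The "GPU side" Tony describes centers on optimization math like gradient descent. As a generic illustration of that loop (not code from the episode), here is gradient descent minimizing a one-variable function in plain Python:

```python
# Toy gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
# A generic sketch of the optimization loop discussed above, not code
# from the episode or from Inngest.

def grad(w):
    # Derivative of (w - 3)^2 with respect to w.
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # step against the gradient
    return w

w_star = gradient_descent(w0=0.0)
print(round(w_star, 4))  # converges toward 3.0
```

Production training replaces the hand-written derivative with automatic differentiation over millions of parameters, which is exactly where the GPU-side engineering Tony mentions comes in.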

Learn more:

The Modern Transactional Stack

The LLM App Stack

Follow everyone on X:

Tony Holdstock-Brown

Yoko Li

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.


a16z partners Guido Appenzeller and Matt Bornstein join Derrick Harris to discuss the state of the generative AI market, about 18 months after it really kicked into high gear with the release of ChatGPT — everything from the emergence of powerful open source LLMs to the excitement around AI-generated music.

If there's one major lesson to learn, it's that although we've made some very impressive technological strides and companies are generating meaningful revenue, this is still a very fluid space. As Matt puts it during the discussion:
"For nearly all AI applications and most model providers, growth is kind of a sawtooth pattern, meaning when there's a big new amazing thing announced, you see very fast growth. And when it's been a while since the last release, growth kind of can flatten off. And you can imagine retention can be all over the place, too . . .

"I think every time we're in a flat period, people start to think, 'Oh, it's mature now, the gold rush is over. What happens next?' But then a new spike almost always comes, or at least has over the last 18 months or so. So a lot of this depends on your time horizon, and I think we're still in this period of, like, if you think growth has slowed, wait a month and see it change."

Follow everyone on X:

Guido Appenzeller

Matt Bornstein

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

AI + a16z - Scoping the Enterprise LLM Market

04/12/24 • 44 min

Naveen Rao, vice president of generative AI at Databricks, joins a16z's Matt Bornstein and Derrick Harris to discuss enterprise usage of LLMs and generative AI. Naveen is particularly knowledgeable about the space, having spent years building AI chips first at Qualcomm and then as the founder of AI chip startup Nervana Systems back in 2014. Intel acquired Nervana in 2016.

After a stint at Intel, Rao re-emerged with MosaicML in 2021. This time, he focused on the software side of things, helping customers train their own LLMs, and also fine-tune foundation models, on top of an optimized tech stack. Databricks acquired Mosaic in July of 2023.

This discussion covers the gamut of generative AI topics — from basic theory to specialized chips — although we focus on how the enterprise LLM market is shaping up. Naveen also shares his thoughts on why he prefers finally being part of the technology in-crowd, even if it means he can’t escape talking about AI outside of work.

More information:

LLMs at Databricks

Mosaic Research

More AI content from a16z

Follow everyone on X:

Naveen Rao

Matt Bornstein

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.


In this episode of AI + a16z, Replicate cofounder and CEO Ben Firshman, and a16z partner Matt Bornstein, discuss the art of building products and companies that appeal to software developers. Ben was the creator of Docker Compose, and Replicate has a thriving community of developers hosting and fine-tuning their own models to power AI-based applications.

Here's an excerpt of Ben and Matt discussing the difference in the variety of applications built using multimedia models compared with language models:

Matt: "I've noticed there's a lot of really diverse multimedia AI apps out there. Meaning that when you give someone an amazing primitive, like a FLUX API call or a Stable Diffusion API call, and Replicate, there's so many things they can do with it. And we actually see that happening — versus with language, where all LLM apps look kind of the same if you squint a little bit.

"It's like you chat with something — there's obviously code, there's language, there's a few different things — but I've been surprised that even today we don't see as many apps built on language models as we do based on, say, image models."

Ben: "It certainly maps with what we're seeing, as well. I think these language models, beyond just chat apps, are particularly good at turning unstructured information into structured information. Which is actually kind of magical. And computers haven't been very good at that before. That is really a kind of cool use case for it.

"But with these image models and video models and things like that, people are creating lots of new products that were not possible before — things that were just impossible for computers to do. So yeah, I'm certainly more excited by all the magical things these multimedia models can make."

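The "unstructured into structured" pattern Ben describes usually means prompting a model for JSON and parsing it out of the free-text reply. Here is a minimal sketch with a stand-in model function (hypothetical; not Replicate's API or any code from the episode):

```python
import json
import re

def fake_llm(prompt):
    # Stand-in for a real model call (hypothetical); returns chatty
    # text wrapped around the JSON we asked for.
    return 'Sure! Here is the record: {"name": "Ada Lovelace", "year": 1815}'

def extract_record(text):
    # Prompt for JSON, then pull the first {...} span out of the reply.
    reply = fake_llm(f"Extract name and year as JSON from: {text}")
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model reply")
    return json.loads(match.group(0))

record = extract_record("Ada Lovelace was born in 1815.")
print(record["name"], record["year"])  # Ada Lovelace 1815
```

The parsing step matters because, as in the sketch, models often wrap the structured payload in conversational text.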
Follow everyone on X:

Ben Firshman

Matt Bornstein

Derrick Harris

Learn more:

Replicate's AI model hub

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.


In this episode of AI + a16z, a16z General Partner Martin Casado and Rasa cofounder and CEO Alan Nichol discuss the past, present, and future of AI agents and chatbots. Alan shares his history working to solve this problem with traditional natural language processing (NLP), expounds on how large language models (LLMs) are helping to dull the many sharp corners of natural-language interactions, and explains how pairing them with inflexible business logic is a great combination.

Learn more:

Task-Oriented Dialogue with In-Context Learning

GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications

CALM Summit

Follow everyone on X:

Alan Nichol

Martin Casado

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

AI + a16z - How GPU Access Helps AI Startups Be Agile

10/23/24 • 39 min

In this episode of AI + a16z, General Partner Anjney Midha explains the forces that lead to GPU shortages and price spikes, and how the firm mitigates these concerns for portfolio companies by supplying them with the GPUs they need through a program called Oxygen. The TL;DR version of the problem is that competition for GPU access favors large incumbents who can afford to outbid startups and commit to long contracts; when startups do buy or rent in bulk, they can be stuck with lots of GPUs and — absent training runs or ample customer demand for inference workloads — nothing to do with them.

Here is an excerpt of Anjney explaining how training versus inference workloads affect what level of resources a company needs at any given time:

"It comes down to whether the customer that's using them . . . has a use that can really optimize the efficiency of those chips. As an example, if you happen to be an image model company or a video model company and you put a long-term contract on H100s this year, and you trained and put out a really good model and a product that a lot of people want to use, even though you're not training on the best and latest cluster next year, that's OK. Because you can essentially swap out your training workloads for your inference workloads on those H100s.

"The H100s are actually incredibly powerful chips that you can run really good inference workloads on. So as long as you have customers who want to run inference of your model on your infrastructure, then you can just redirect that capacity to them and then buy new [Nvidia] Blackwells for your training runs.

"Who it becomes really tricky for is people who bought a bunch, don't have demand from their customers for inference, and therefore are stuck doing training runs on that last-generation hardware. That's a tough place to be."

Learn more:

Navigating the High Cost of GPU Compute

Chasing Silicon: The Race for GPUs

Remaking the UI for AI

Follow on X:

Anjney Midha

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

AI + a16z - Augmenting Incident Response with LLMs

07/26/24 • 40 min

In this episode of the AI + a16z podcast, Command Zero cofounder and CTO Dean de Beer joins a16z's Joel de la Garza and Derrick Harris to discuss the benefits of training large language models on security data, as well as the myriad factors product teams need to consider when building on LLMs.

Here's an excerpt of Dean discussing the challenges and concerns around scaling up LLMs:

"Scaling out infrastructure has a lot of limitations: the APIs you're using, tokens, inbound and outbound, the cost associated with that — the nuances of the models, if you will. And not all models are created equal, and they oftentimes are very good for specific use cases and they might not be appropriate for your use case, which is why we tend to use a lot of different models for our use cases . . .

"So your use cases will heavily determine the models that you're going to use. Very quickly, you'll find that you'll be spending more time on the adjacent technologies or infrastructure. So, memory management for models. How do you go beyond the context window for a model? How do you maintain the context of the data, when given back to the model? How do you do entity extraction so that the model understands that there are certain entities that it needs to prioritize when looking at new data? How do you leverage semantic search as something to augment the capabilities of the model and the data that you're ingesting?

"That's where we have found that we spend a lot more of our time today than on the models themselves. We have found a good combination of models that run our use cases; we augment them with those adjacent technologies."
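One of the adjacent problems Dean lists — going beyond a model's context window — is commonly handled by splitting documents into overlapping chunks. A minimal, generic sketch of that idea (word counts stand in for tokens; this is not Command Zero's implementation):

```python
# Split a long document into overlapping chunks that each fit a
# model's context window. Sizes are in words for simplicity; real
# systems count tokens with the model's tokenizer.

def chunk_text(text, max_words=200, overlap=20):
    words = text.split()
    step = max_words - overlap  # advance less than a full chunk
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

doc = ("word " * 450).strip()  # a 450-word stand-in document
chunks = chunk_text(doc, max_words=200, overlap=20)
print(len(chunks))  # 3 overlapping chunks
```

The overlap keeps sentences that straddle a boundary visible in at least one chunk, which helps preserve the context Dean talks about maintaining when data is handed back to the model.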

Learn more:

The Cuckoo's Egg

1995 Citigroup hack

Follow everyone on social media:

Dean de Beer

Joel de la Garza

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.



FAQ

How many episodes does AI + a16z have?

AI + a16z currently has 37 episodes available.

What topics does AI + a16z cover?

The podcast is about Venture Capital, Entrepreneurship, Startups, Podcasts, Technology, Business, Artificial Intelligence and Machine Learning.

What is the most popular episode on AI + a16z?

The episode title 'Building Production Workflows for AI Applications' is the most popular.

What is the average episode length on AI + a16z?

The average episode length on AI + a16z is 41 minutes.

How often are episodes of AI + a16z released?

Episodes of AI + a16z are typically released every 7 days.

When was the first episode of AI + a16z?

The first episode of AI + a16z was released on Apr 8, 2024.

