
Advanced AI Accelerators and Processors with Andrew Feldman of Cerebras Systems

06/22/23 • 60 min

Gradient Dissent: Conversations on AI

On this episode, we’re joined by Andrew Feldman, Founder and CEO of Cerebras Systems. Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.

We discuss:

The advantages of using large chips for AI work.

Cerebras Systems’ process for building chips optimized for AI.

Why traditional GPUs aren’t the optimal machines for AI work.

Why efficiently distributing computing resources is a significant challenge for AI work (a short illustrative sketch follows this list).

How much faster Cerebras Systems’ machines are than other processors on the market.

Reasons why some ML-specific chip companies fail and what Cerebras does differently.

Unique challenges for chip makers and hardware companies.

Cooling and heat-transfer techniques for Cerebras machines.

How Cerebras approaches building chips that will fit the needs of customers for years to come.

Why the strategic vision for what data to collect for ML needs more discussion.
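
As background for the distribution point above, here is a minimal, generic data-parallel sketch in Python/NumPy. It is my own illustration, not Cerebras' software stack, and the worker and parameter counts are arbitrary; the only point is that when training is split across devices, gradients the size of the whole model must be exchanged every step.

# Toy illustration of why distributing AI work is hard: with data parallelism,
# every worker must share its gradients with the others on every step, so the
# communication volume scales with model size.
import numpy as np

n_workers = 8
n_params = 1_000_000  # tiny by modern standards

# Each worker computes gradients on its own shard of the batch.
local_grads = [np.random.randn(n_params).astype(np.float32) for _ in range(n_workers)]

# All-reduce: average gradients so every model copy takes the same update.
avg_grad = np.mean(local_grads, axis=0)
assert avg_grad.shape == (n_params,)

# Naive estimate: each worker sends its full gradient to every peer.
bytes_moved = n_params * 4 * (n_workers - 1)
print(f"~{bytes_moved / 1e6:.1f} MB exchanged per worker per step")

The sketch is deliberately simple; the takeaway is only that this coordination cost grows with model and cluster size, which is what makes efficient distribution a hard problem.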

Resources:

Andrew Feldman - https://www.linkedin.com/in/andrewdfeldman/

Cerebras Systems | LinkedIn - https://www.linkedin.com/company/cerebras-systems/

Cerebras Systems | Website - https://www.cerebras.net/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML


Previous Episode


Enabling LLM-Powered Applications with Harrison Chase of LangChain

On this episode, we’re joined by Harrison Chase, Co-Founder and CEO of LangChain. Harrison and his team at LangChain are on a mission to make the process of creating applications powered by LLMs as easy as possible.

We discuss:

What LangChain is and examples of how it works (a short illustrative sketch follows this list).

Why LangChain has gained so much attention.

When LangChain started and what sparked its growth.

Harrison’s approach to community-building around LangChain.

Real-world use cases for LangChain.

What parts of LangChain Harrison is proud of and which parts can be improved.

Details around evaluating effectiveness in the ML space.

Harrison's opinion on fine-tuning LLMs.

The importance of detailed prompt engineering.

Predictions for the future of LLM providers.
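
As a concrete reference for the "what LangChain is" item above, here is a minimal sketch of the chain abstraction LangChain is known for. It follows the classic PromptTemplate + LLMChain interface from around the time of this episode; current releases differ, and the prompt text is made up for illustration.

# A minimal "chain": a prompt template plus an LLM call wrapped into one
# reusable component. Requires OPENAI_API_KEY in the environment.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain

# Template with a named input variable.
prompt = PromptTemplate.from_template(
    "Summarize the following podcast topic in one sentence: {topic}"
)

llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(topic="evaluating LLM-powered applications"))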

Resources:

Harrison Chase - https://www.linkedin.com/in/harrison-chase-961287118/

LangChain | LinkedIn - https://www.linkedin.com/company/langchain/

LangChain | Website - https://docs.langchain.com/docs/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML

Next Episode


Exploring PyTorch and Open-Source Communities with Soumith Chintala, VP/Fellow of Meta, Co-Creator of PyTorch

On this episode, we’re joined by Soumith Chintala, VP/Fellow of Meta and Co-Creator of PyTorch. Soumith and his colleagues’ open-source work shaped both the development process and the end-user experience of what became PyTorch.

We discuss:

The history of PyTorch’s development and TensorFlow’s impact on development decisions.

How a symbolic execution model affects the implementation speed of an ML compiler (a short illustrative sketch follows this list).

The strengths of different programming languages in various development stages.

The importance of customer engagement as a measure of success instead of hard metrics.

Why community-guided innovation offers an effective development roadmap.

How PyTorch’s open-source nature cultivates an efficient development ecosystem.

The role of community building in consolidating assets for more creative innovation.

How to protect community values in an open-source development environment.

The value of an intrinsic organizational motivation structure.

The ongoing debate between open-source and closed-source products, especially as it relates to AI and machine learning.
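
For the execution-model item above, here is a small sketch, my own illustration rather than something from the episode, contrasting PyTorch's default eager mode with its graph-capturing torch.compile path (PyTorch 2.x). The toy function and tensor shapes are arbitrary.

# Eager mode runs each operation immediately, which keeps debugging close to
# ordinary Python; torch.compile traces the same function into a graph that a
# compiler can optimize.
import torch

def f(x, w):
    return torch.relu(x @ w).sum()

x = torch.randn(128, 64)
w = torch.randn(64, 32, requires_grad=True)

eager_out = f(x, w)                  # eager: ops dispatch one by one
compiled_f = torch.compile(f)        # graph capture + codegen on first call
compiled_out = compiled_f(x, w)

print(torch.allclose(eager_out, compiled_out, atol=1e-5))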

Resources:

Soumith Chintala - https://www.linkedin.com/in/soumith/

Meta | LinkedIn - https://www.linkedin.com/company/meta/

Meta | Website - https://about.meta.com/

PyTorch - https://pytorch.org/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML
