Intel Conversations in the Cloud

Intel

Intel Conversations in the Cloud is a weekly podcast featuring IT leaders who are driving the future of data centers built on software-defined infrastructure. With members of the Intel Builders programs, Intel experts, and industry analysts, this recurring podcast series covers delivering, deploying, and managing cloud computing, technology, and services in the data center and the enterprise.


Top 10 Intel Conversations in the Cloud Episodes

Goodpods has curated a list of the 10 best Intel Conversations in the Cloud episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Intel Conversations in the Cloud for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Intel Conversations in the Cloud episode by adding your comments to the episode page.

In this Intel Conversations in the Cloud audio podcast: Jayachandran Ramachandran, Senior Vice President of AI Labs at Course5 Intelligence, joins host Jake Smith to discuss Course5 Intelligence's Discovery AI-augmented analytics platform. Jay highlights how Discovery gives enterprises real-time access to conversational insights and cognitive answers from their data. He talks about how Discovery uses composable and composite AI to let enterprises gain insights and receive AI-generated answers in seconds. Jay and Jake also discuss how Intel optimizations are driving even faster inference and higher throughput on the platform, improving customer experience and time to insights for Course5's enterprise customers. Jay especially highlights how Course5 used Cnvrg.io to help automate and accelerate its testing and ML model creation. Jake and Jay also dive into the importance of natural language understanding (NLU) and illustrate how it enables the Discovery platform to derive causal analysis when helping enterprise customers understand their data.

For more information, visit:
course5i.com/course5-discovery

Follow Jake on Twitter at:
twitter.com/jakesmithintel


Daniel Chang, CEO of ENERZAi, joins host Jake Smith to discuss ENERZAi's vision of delivering the best AI experience on everything for everyone, and how the company is doing this by overcoming the constraints of edge devices through optimized AI models. He highlights how ENERZAi's Automated Model Compression Optimization Toolkit enables AI models to maintain high accuracy while minimizing latency, size, and power consumption for successful edge deployment, as proven in a recent collaboration with Intel. Daniel further explains how that collaboration, through the Intel AI Builders program, optimized ENERZAi's state-of-the-art 3D hand pose estimation model for Intel Xeon processors and achieved impressive performance results, using the Intel OpenVINO toolkit to improve latency and inference times without compromising the model's accuracy. The optimization project also paved the way for customers aiming to run ENERZAi's 3D hand pose estimation solution for driver monitoring, AR/VR, or other systems on Intel processors in a highly performant way. Jake and Daniel also chat about a customer use case in which ENERZAi helped a SaaS customer migrate from expensive GPU instances to more cost-efficient Intel CPU instances while preserving model accuracy. Lastly, they dive into the future of AI and how solving the constraints of edge devices can truly enable AI to be deployed everywhere in the world.

For more information, visit:
enerzai.com

Follow Jake on Twitter at:
twitter.com/jakesmithintel


In this Intel Conversations in the Cloud audio podcast: Stephen Gold, CMO of SparkCognition, joins host Jake Smith to discuss SparkCognition's vision of delivering the best AI solutions for the critical challenges facing organizations across industries: predictive maintenance, asset management, cost efficiency, worker safety, and cybersecurity. He discusses how SparkCognition leverages its robust IP portfolio to create practical AI applications that solve real-world industry problems. Stephen highlights how the compute power and resources enabled by Intel architecture have accelerated AI's evolution and made today's AI solutions possible. Specifically, SparkCognition's Model Studio algorithm was optimized on Intel Xeon Scalable processors and achieved up to a 45x increase in performance. Jake and Stephen also discuss SparkCognition's presence at the Intel Vision conference and how the partnership helps build connections with customers in the marketplace. Stephen also shares his perspective on the future of AI: he believes AI will permeate every aspect of business operations and integrate so seamlessly into a company's products and services that it simply becomes an integral part of business and life, rather than a separate category of technology.

For more information, visit:
sparkcognition.com

Follow Jake on Twitter at:
twitter.com/jakesmithintel


Anup Mehta, Co-Founder and CEO of DeepEdge, joins host Jake Smith to discuss DeepEdge's vision of enabling new edge applications by bringing the power of deep learning and computer vision into the hands of users. He chats about how DeepEdge's deep learning operations platform supports the entire machine learning lifecycle, from onboarding data to deploying edge-optimized models on Intel hardware. Anup talks about how the Intel OpenVINO toolkit is seamlessly integrated with the platform for model conversion, optimization, and deployment. He also illustrates how the DeepEdge platform leverages other tools from the OpenVINO toolkit, including the post-training optimization tool and the Neural Network Compression Framework. Anup also highlights how the platform enables customers to migrate their workloads from GPU-based infrastructure to Intel architecture-based platforms at a fraction of the cost. Lastly, Jake and Anup dive into the future of AI, how it can solve many everyday problems people face today, and how it can deliver value to everyone in the world.

For more information, visit:
deepedge.ai

Follow Jake on Twitter at:
twitter.com/jakesmithintel


In this Intel Conversations in the Cloud audio podcast: Vik Anantha, Chief Product and Strategic Marketing Officer at Caresyntax, joins host Jake Smith to discuss how Caresyntax's data-driven surgery platform is helping make surgeries smarter and safer. He describes how, by applying AI and analytics automation, the Caresyntax platform enables clinicians to focus only on the necessary variations for a surgery, which in turn optimizes clinical, operational, and financial outcomes. Vik talks about how the Caresyntax platform can reduce unwarranted surgical variation from pre-operation to post-operation by helping surgeons and interventionalists make better decisions in real time using important data captured during surgery. He also points out that the platform is optimized for Intel architecture to bring performance and flexibility to customers. Lastly, Jake and Vik talk about what the future holds for AI and how Caresyntax, in collaboration with Intel, is accelerating AI's ability to solve real business problems in healthcare.

For more information, visit:
caresyntax.com

Follow Jake on Twitter at:
twitter.com/jakesmithintel


Lorenzo Fornaciari, Vice President of Strategy at iGenius, joins host Jake Smith to discuss how iGenius is reimagining data analytics and taking it to the next level with its platform, Crystal. Lorenzo describes how Crystal, by auto-classifying business requests in multiple languages, enables businesses to derive real-time insights and access key findings from their data without any training or data-literacy skills. He highlights how iGenius worked with Intel to optimize its natural language processing, and discusses the future of business analytics through the idea of the augmented consumer: a consumer who can use new augmented analytics technologies to easily explore and derive insights from data. He describes how these augmented, non-technical data consumers represent an untapped user base that owns many decision-making processes and relies on smart business intelligence platforms like iGenius. Lastly, Lorenzo discusses how iGenius is using AI to remove the barriers between businesspeople and data, democratizing the insights hidden in that data to boost decision-making in every enterprise.

For more information, visit:
igenius.ai

Follow Jake on Twitter at:
twitter.com/jakesmithintel


Vamshi Ambati, Founder & CEO at Predera, joins host Jake Smith to discuss how Predera is working to democratize AI by making it truly accessible to everyone, with companies able to manage AI and not just build it. He highlights how Predera's AIQ platform enables enterprises to drastically cut down on the challenges they face today in building, deploying, and managing machine learning models. The platform makes it simple and easy to take models into production and then manage them into the future. Vamshi talks about how Predera collaborated with Intel to optimize its workloads with the Intel oneAPI AI Analytics Toolkit, resulting in improvements in both training and inference. This has also enabled Predera and its customers to run most workloads on CPUs, reducing the need for expensive GPUs. Lastly, Jake and Vamshi discuss how advancements in algorithms and compute will drive the future of machine learning, allowing AI to make an impact in many new aspects of our lives.

For more information, visit:
https://predera.com/

Follow Jake on Twitter at:
https://twitter.com/jakesmithintel


Ankit Narayan Singh, Co-Founder & CTO at ParallelDots, joins host Jake Smith to discuss how artificial intelligence and image recognition are transforming consumer-packaged goods (CPG) and retail operations in brick-and-mortar stores. He describes the challenge retailers face in identifying and replenishing out-of-stock products on shelves quickly and efficiently. Ankit highlights how ParallelDots' ShelfWatch platform analyzes photos captured in stores and uses computer vision models to send real-time alerts that let retailers rapidly analyze and refill any shortages on their shelves. Ankit also illustrates how ParallelDots collaborated closely with Intel, leveraging Intel's ecosystem program to optimize its deep learning models for OpenVINO. The collaboration significantly increased inference throughput and enabled ParallelDots to run some of its models on CPUs, avoiding the need for more expensive GPUs. Lastly, Jake and Ankit talk about the digitization of brick-and-mortar stores and how that process is building the stores of the future: computer vision and artificial intelligence are exposing more data and greater capabilities for retailers to provide better customer experiences while increasing efficiency.

For more information, visit:
paralleldots.com

Follow Jake on Twitter at:
twitter.com/jakesmithintel


On today’s episode of Conversations in the Cloud, Alex Liu, Chief Technology Officer of BasicAI, stops by to talk about the company’s efforts to evolve the training data platform. Xtreme1 is the world’s first open-source platform for multisensory training data and can rapidly accelerate the processing and management of training data.

AI engineers spend most of their time preparing training data, and through advanced AI-powered annotation tools, Xtreme1 improves modeling. Capable of handling various time-sensitive online/offline tasks, distributed data analysis computation, model training, evaluation, and inference, Xtreme1 is fully compatible with CPU-only runtime environments. In particular, model inference has been optimized for Intel Xeon CPUs to provide higher throughput; inference throughput on the Intel Xeon 8380 is vastly improved with the help of the Intel Extension for PyTorch and the OpenVINO toolkit.

For more information go to:
basic.ai
xtreme1.io


In this Intel Conversations in the Cloud audio podcast: Nelson Lee, Head of Partnerships at Synergies, joins host Jake Smith to discuss the importance of data science and AI in the manufacturing industry. Nelson talks about how Synergies works to close the gap for manufacturing organizations that rely on outdated data and limited data science expertise in their decision-making processes. He highlights how the company's platform, JarviX, accomplishes this by visualizing a company's data, analyzing it, and generating insights triggered by simple queries. Nelson also talks about the deep collaboration between Synergies and Intel, which has resulted in impressive performance improvements for the platform. Jake and Nelson wrap up the episode by discussing the future of AI and how it will continue to be a differentiator for businesses going forward.

For more information, visit:
synergies.ai

Follow Jake on Twitter at:
twitter.com/jakesmithintel



FAQ

How many episodes does Intel Conversations in the Cloud have?

Intel Conversations in the Cloud currently has 191 episodes available.

What topics does Intel Conversations in the Cloud cover?

The podcast is about News, Business News, Tech News, Cloud Computing, Podcasts, Technology, Artificial Intelligence and Machine Learning.

What is the most popular episode on Intel Conversations in the Cloud?

The episode title 'Why Agile Networks Are More Important Than Ever Before – Conversations in the Cloud – Episode 207' is the most popular.

What is the average episode length on Intel Conversations in the Cloud?

The average episode length on Intel Conversations in the Cloud is 16 minutes.

How often are episodes of Intel Conversations in the Cloud released?

Episodes of Intel Conversations in the Cloud are typically released every 23 hours.

When was the first episode of Intel Conversations in the Cloud?

The first episode of Intel Conversations in the Cloud was released on Oct 3, 2017.
