Data Science at Home

Francesco Gadaleta

Technology, AI, machine learning and algorithms. Come join the discussion on Discord! https://discord.gg/4UNKGf3

Top 10 Data Science at Home Episodes

Best episodes, ranked by most listens from Goodpods users

06/29/20 • 24 min

In this episode I go through a non-exhaustive list of machine learning tools and frameworks written in Rust. Not all of them are mature enough for production environments, but I believe that community effort can change this very quickly.

To draw a comparison with the Python ecosystem, I cover the Rust counterparts of frameworks for linear algebra (NumPy), dataframes (pandas), off-the-shelf machine learning (scikit-learn), deep learning (TensorFlow) and reinforcement learning (OpenAI). A minimal Rust sketch of the linear-algebra piece follows the references below.

Rust is the language of the future. Happy coding!

References
  1. BLAS linear algebra https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms
  2. Rust dataframe https://github.com/nevi-me/rust-dataframe
  3. Rustlearn https://github.com/maciejkula/rustlearn
  4. Rusty machine https://github.com/AtheMathmo/rusty-machine
  5. Tensorflow bindings https://lib.rs/crates/tensorflow
  6. Juice (machine learning for hackers) https://lib.rs/crates/juice
  7. Rust reinforcement learning https://lib.rs/crates/rsrl
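
For a taste of the linear-algebra side of this comparison, here is a minimal sketch using the ndarray crate, a commonly used NumPy-style array library in Rust. The crate choice and version are assumptions for illustration, not part of the episode's list.

    // Assumed dependency in Cargo.toml: ndarray = "0.15"
    use ndarray::{array, Array1, Array2, Axis};

    fn main() {
        // Two small matrices, roughly what you would build with numpy.array in Python.
        let a: Array2<f64> = array![[1.0, 2.0], [3.0, 4.0]];
        let b: Array2<f64> = array![[0.5, 0.0], [0.0, 0.5]];

        // Matrix product, element-wise sum and a column mean,
        // mirroring NumPy's dot(), + and mean(axis=0).
        let product = a.dot(&b);
        let elementwise_sum = &a + &b;
        let col_means: Array1<f64> = a.mean_axis(Axis(0)).unwrap();

        println!("product =\n{}", product);
        println!("sum =\n{}", elementwise_sum);
        println!("column means = {}", col_means);
    }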

06/22/20 • 23 min

In the third episode of Rust and machine learning I speak with Alec Mocatta. Alec is a professional programmer with more than 20 years of experience, who has been working at the intersection of distributed systems and data analytics. He is the founder of two startups in the distributed-systems space and the author of Amadeus, an open-source framework that encourages you to write clean and reusable code that works, regardless of data scale, locally or distributed across a cluster.

On June 24th only: LDN *Virtual* Talks June 2020 with Bippit, with Alec speaking about Amadeus.

06/19/20 • 27 min

In the second episode of Rust and machine learning I speak with Luca Palmieri, who has spent a large part of his career at the intersection of machine learning and data engineering. Luca has also contributed to several Rust projects close to the machine learning community. Linfa is an ambitious project that definitely deserves the attention of the data science community (and it's written in Rust, with Python bindings! How cool is that?!).
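
To give a flavour of the kind of workflow Linfa aims for, here is a rough k-means sketch. The crate versions, method names and signatures below are assumptions based on Linfa's documented examples and may differ between releases; treat it as a sketch rather than the project's official API.

    // Assumed dependencies: linfa, linfa-clustering, ndarray.
    use linfa::prelude::*;
    use linfa_clustering::KMeans;
    use ndarray::array;

    fn main() {
        // A tiny toy dataset: four 2-D points forming two obvious clusters.
        let records = array![[1.0, 2.0], [1.1, 1.9], [8.0, 8.0], [8.2, 7.9]];
        let dataset = DatasetBase::from(records);

        // Fit k-means with two clusters, then label each point.
        // NOTE: exact builder/trait names are assumptions and vary across Linfa versions.
        let model = KMeans::params(2)
            .fit(&dataset)
            .expect("k-means fitting failed");
        let labeled = model.predict(dataset);

        println!("cluster assignments: {:?}", labeled.targets());
    }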

06/17/20 • 22 min

This is the first episode of a series about the Rust programming language and the role it can play in the machine learning field.

Rust is one of the most beautiful languages I have ever studied. I personally come from the C programming language, though for professional machine learning work I had to switch to the loved and hated Python.

This episode clearly does not provide an exhaustive list of Rust's benefits or capabilities. For that, check the references and start getting familiar with what I think is going to be the language of the next 20 years.
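
As one tiny, self-contained illustration of the kind of guarantee Rust offers (an illustrative sketch, not taken from the episode), here is the ownership rule that rules out whole classes of memory bugs at compile time:

    fn sum(values: Vec<f64>) -> f64 {
        values.iter().sum()
    }

    fn main() {
        // `data` owns its heap allocation.
        let data = vec![1.0_f64, 2.0, 3.0];

        // Passing `data` by value moves ownership into `sum`;
        // the buffer is freed exactly once, with no garbage collector.
        let total = sum(data);
        println!("total = {}", total);

        // Uncommenting the next line is a compile-time error
        // ("borrow of moved value"), not a runtime crash:
        // println!("{:?}", data);
    }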

Sponsored

This episode is supported by Pryml Technologies. Pryml offers secure and cost-effective data privacy solutions for your organisation. It generates a synthetic alternative to your data without disclosing the confidential original.

06/15/20 • 16 min

In this episode I have a chat with Sandeep Pandya, CEO at Everguard.ai, a company that uses sensor fusion, computer vision and more to provide safer working environments for workers in heavy industry. Sandeep is a senior executive with a great talent for hiding the complexity of the topic.

This episode is supported by Pryml.io. Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment. Test ideas. Launch new products. Fast. Secure.

06/01/20 • 15 min

As a continuation of the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet fantastic approach that can lead to much smaller models which still perform as well as the original.

Don't forget to join our Slack channel and discuss previous episodes or propose new ones.

This episode is supported by Pryml.io. Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment.

References

Comparing Rewinding and Fine-tuning in Neural Network Pruning https://arxiv.org/abs/2003.02389
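
The paper above studies how to recover accuracy after pruning; the pruning step itself can be as simple as zeroing out the smallest-magnitude weights. Here is a minimal, framework-free sketch of that baseline (an illustration using the ndarray crate, not code from the paper or the episode):

    use ndarray::Array2;

    // Zero out the fraction `sparsity` of weights with the smallest magnitude.
    // This is plain magnitude pruning, the baseline the referenced paper builds on.
    fn magnitude_prune(weights: &mut Array2<f32>, sparsity: f32) {
        // Sort absolute values to find the cut-off threshold.
        let mut magnitudes: Vec<f32> = weights.iter().map(|w| w.abs()).collect();
        magnitudes.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let cutoff = ((magnitudes.len() as f32) * sparsity) as usize;
        let threshold = magnitudes[cutoff.min(magnitudes.len() - 1)];

        // Everything below the threshold is pruned (set to zero).
        weights.mapv_inplace(|w| if w.abs() < threshold { 0.0 } else { w });
    }

    fn main() {
        let mut layer = Array2::from_shape_fn((4, 4), |(i, j)| (i as f32 - j as f32) / 4.0);
        magnitude_prune(&mut layer, 0.5);
        println!("pruned layer:\n{}", layer);
    }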

05/20/20 • 22 min

Using large deep learning models on limited hardware or edge devices is definitely prohibitive. There are methods to compress large models by orders of magnitude and maintain similar accuracy during inference.

In this episode I explain one of the first such methods: knowledge distillation.
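
To make the idea concrete, here is a short sketch of the key ingredient (an illustrative example, not the episode's code): the teacher's logits are softened with a temperature and used as targets for the much smaller student.

    use ndarray::{array, Array1};

    // Temperature-scaled softmax: a higher temperature spreads probability mass
    // over more classes, exposing the teacher's "dark knowledge" to the student.
    fn soft_targets(logits: &Array1<f32>, temperature: f32) -> Array1<f32> {
        let scaled = logits.mapv(|z| (z / temperature).exp());
        let total = scaled.sum();
        scaled / total
    }

    // Cross-entropy between the teacher's soft targets and the student's softened
    // predictions: the distillation term of the student's training loss.
    fn distillation_loss(teacher: &Array1<f32>, student: &Array1<f32>, t: f32) -> f32 {
        let p_teacher = soft_targets(teacher, t);
        let p_student = soft_targets(student, t);
        -(&p_teacher * &p_student.mapv(f32::ln)).sum()
    }

    fn main() {
        // Made-up logits for a 3-class problem.
        let teacher = array![4.0_f32, 1.0, 0.2];
        let student = array![2.5_f32, 0.8, 0.1];
        println!("distillation loss at T = 3: {}", distillation_loss(&teacher, &student, 3.0));
    }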

Come join us on Slack

05/08/20 • 20 min

Covid-19 is an emergency. True. Let's just not prepare another emergency, this time about privacy violations, for when this one is over.

Join our new Slack channel

This episode is supported by Proton. You can check them out at protonmail.com or protonvpn.com

04/19/20 • 14 min

Whenever people reason about the probability of events, they tend to consider average values between two extremes. In this episode I explain, with a numerical example, why this way of approximating is wrong and dangerous.
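
The exact numbers from the episode are not reproduced here; the following made-up illustration is in the same spirit: for a non-linear quantity, the outcome at the "average" scenario is nothing like the average of the extreme outcomes.

    // Probability that at least one of `n` independent components fails.
    fn at_least_one_failure(per_component_p: f64, n: i32) -> f64 {
        1.0 - (1.0 - per_component_p).powi(n)
    }

    fn main() {
        let n = 50;
        let low = 0.001; // optimistic per-component failure probability
        let high = 0.10; // pessimistic per-component failure probability
        let mid = (low + high) / 2.0;

        // Evaluating at the midpoint of the two extremes...
        println!("P(failure) at midpoint p = {:.4}: {:.4}", mid, at_least_one_failure(mid, n));

        // ...gives a very different answer from averaging the two extreme outcomes.
        let avg_of_outcomes =
            (at_least_one_failure(low, n) + at_least_one_failure(high, n)) / 2.0;
        println!("average of the extreme outcomes:   {:.4}", avg_of_outcomes);
    }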

We are moving our community to Slack. See you there!

07/03/20 • 36 min

In this episode I speak with Filip Piekniewski about some of the most noteworthy findings in AI and machine learning in 2019. As a matter of fact, the entire field of AI has been inflated by hype and claims that are hard to believe. Many of the promises made a few years ago have turned out to be quite hard to achieve, if not impossible. Let's stay grounded and realistic about the potential of this amazing field of research, so as not to create disillusionment in the near future.

Join our Discord channel to discuss your favorite episodes and propose new ones.

This episode is brought to you by ProtonMail.

Click on the link in the description or go to protonmail.com/datascience and get 20% off their annual subscription.

FAQ

How many episodes does Data Science at Home have?

Data Science at Home currently has 201 episodes available.

What topics does Data Science at Home cover?

The podcast is about News, Tech News, Podcasts and Technology.

What is the most popular episode on Data Science at Home?

The episode title 'Rust and machine learning #4: practical tools (Ep. 110)' is the most popular.

What is the average episode length on Data Science at Home?

The average episode length on Data Science at Home is 26 minutes.

How often are episodes of Data Science at Home released?

Episodes of Data Science at Home are typically released every 7 days, 2 hours.

When was the first episode of Data Science at Home?

The first episode of Data Science at Home was released on Oct 16, 2017.


Comments

Rating: 0.0 out of 5