
Idea Machines

Benjamin Reinhardt

Idea Machines is a deep dive into the systems and people that bring innovations from glimmers in someone's eye all the way to tools, processes, and ideas that can shift paradigms. We see the outputs of innovation systems everywhere but rarely dig into how they work. Idea Machines digs below the surface into crucial but often unspoken questions to explore themes of how we enable innovations today and how we could do it better tomorrow. Idea Machines is hosted by Benjamin Reinhardt.


Top 10 Idea Machines Episodes

Goodpods has curated a list of the 10 best Idea Machines episodes, ranked by the number of listens and likes each episode has garnered from our listeners. If you are listening to Idea Machines for the first time, there's no better place to start than with one of these standout episodes. If you are a fan of the show, vote for your favorite Idea Machines episode by adding your comments to the episode page.

Idea Machines - MACROSCIENCE with Tim Hwang [Idea Machines #49]

11/27/23 • 57 min

A conversation with Tim Hwang about historical simulations, the interaction of policy and science, analogies between research ecosystems and the economy, and so much more.

Topics

  • Historical Simulations
  • Macroscience
  • Macro-metrics for science
  • Long science
  • The interaction between science and policy
  • Creative destruction in research
  • “Regulation” for scientific markets
  • Indicators for the health of a field or science as a whole
  • “Metabolism of Science”
  • Science rotation programs
  • Clock speeds of Regulation vs Clock Speeds of Technology

References

Transcript

[00:02:02] Ben: Wait, so tell me more about the historical LARP that you're doing.

[00:02:07] Tim: Oh, yeah. So this comes from something I've been thinking about for a really long time. You know, in high school I did Model UN and Model Congress, and, you know — actually, this is still on my to-do list: to look into the back history of what it was in American history where we decided, this is going to become an extracurricular, we're going to model the UN. It has all the vibe of: after World War II, the UN is a new thing, we've got to teach kids about international institutions.

Anyways, it started as a joke where I was telling my [00:02:35] friend, we should have, like, a model administrative agency. You know, kids should do model EPA. We're gonna do a rulemaking, kids need to submit comments, and, you know, there'll be Chevron deference and you can challenge the rule.

And, like, do that whole thing. Anyways, it kind of led me down this idea that our notion of simulation, particularly for institutions, is interestingly narrow, right? Particularly when it comes to historical simulation: we have Civil War reenactors, they're kind of a weird dying breed, but they're there, right?

But we don't have other types of historical reenactments, even though it might be really valuable and interesting to create communities around that. And so, like I was saying before we started recording, I really want to do one that's a simulation of the Cuban Missile Crisis. But a serious one, like you would a historical reenactment, right?

Yeah. It's like everybody would really know their characters. You know, if you're McNamara, you really know what your motivations are and your background. And literally the dream would be a weekend simulation where you have three teams. One would be the Kennedy administration. The other would be, you know, Khrushchev [00:03:35] and the Presidium.

And the final one would be the Cuban government. And to really just, blow by blow, simulate that entire thing. You know, the players would attempt to not blow up the world; that would be the idea.

[00:03:46] Ben: I guess that's actually the thing to poke at, in contrast to Civil War reenactment.

[00:03:51] Tim: Sure, like, you know how that's gonna end. Right.

[00:03:52] Ben: And I think that's the difference, maybe, between a simulation and a reenactment in my head, where I could imagine a simulation going differently.

[00:04:01] Tim: Sure, right.

[00:04:03] Ben: Right, and maybe, like, is the goal to make sure the same thing happens that did happen, or is the goal to act as faithfully to the character as possible?

[00:04:14] Tim: Yeah, I think that's right, and I think both are interesting and valuable, right? But one of the things I'm really interested in is, you know, I want to simulate all the characters, but I think one of the most interesting things reading the historical record is just operating under deep uncertainty about what's even going on, right?

Like, for a period of time, the American [00:04:35] government is not even sure what's going on in Cuba, and there's this whole question of, well, do we preemptively bomb Cuba? We don't even know if the warheads on the island are active. And I would want to create similar uncertainty, because I think that's where the strategic vision comes in, right?

That you have the full pressure of: maybe there are bombs on the island, maybe there's not even...


In this episode I talk to Craig Montouri about nonprofits and politics. Specifically their constraints and possibilities for enabling innovations.

Craig is the executive director at Global EIR - a nonprofit focused on connecting non-U.S. founders with universities so that they can get the visas they need to build their companies in America. Craig's perspective is fascinating because, contrary to the common wisdom that innovation happens by doing an end run around politics, he focuses on enabling innovations through the political system. It's an eye-opening conversation about two worlds I knew little about, so I hope you enjoy this conversation with Craig Montouri.

Key Takeaways:

  • There is a lot of valuable human capital and knowledge left on the table both by the US immigration system and the university tech transfer system.
  • Nonprofits need to find product-market fit just as much as for-profit companies making products. And just like the world of products, there's often a big difference between what people say their problems are and what their problems actually are.
  • Political innovation is different from other domains for several reasons: it has both shorter and longer timelines than other domains, and in contrast to the world of startups, politics needs to focus on downside mitigation instead of maximizing upside.
Resources

Global EIR

Craig on Twitter(@craig_montouri)

NPR piece on Global EIR

Intro

In this episode I talk to Joel Chan about cross-disciplinary knowledge transfer, Zettelkasten, and too many other things to enumerate. Joel is a professor in the University of Maryland's College of Information Studies and a member of their Human-Computer Interaction Lab. His research focuses on understanding and creating generalizable configurations of people, computing, and information that augment human intelligence and creativity. Essentially, how can we expand our knowledge frontier faster and better.

This conversation was also an experiment. Instead of a normal interview that's mostly the host directing the conversation, Joel and I let the conversation be directed by his notes. We both use a note-taking system called a Zettelkasten that's based around densely linked notes, and we realized that it might be interesting to record a podcast where the structure of the conversation is Joel walking through his notes around where his main lines of research originated.
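For readers unfamiliar with the format, a Zettelkasten is essentially a graph: many small notes, each carrying dense links out to related notes, which you traverse to rediscover lines of thought. A minimal sketch in Python (the note IDs and contents below are invented placeholders, not Joel's actual notes):

```python
from typing import Dict, Iterator, List, Tuple

# Toy Zettelkasten: each note is an id -> (text, ids of linked notes).
notes: Dict[str, Tuple[str, List[str]]] = {
    "20200101a": ("Wing warping solved control", ["20200102b"]),
    "20200102b": ("Control is the core problem of flight", ["20200103c"]),
    "20200103c": ("Analogies transfer mechanisms, not surfaces", []),
}

def follow(start: str, notes: Dict[str, Tuple[str, List[str]]]) -> Iterator[str]:
    """Walk outgoing links depth-first, yielding each note's text once."""
    seen, stack = set(), [start]
    while stack:
        nid = stack.pop()
        if nid in seen:
            continue
        seen.add(nid)
        text, links = notes[nid]
        yield text
        stack.extend(links)

print(list(follow("20200101a", notes)))
```

Walking the links from one entry point is roughly what the episode does in audio form: the structure of the conversation is the structure of the graph.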

For those of you who just want to hear a normal podcast, don't worry - this episode listens like any other episode of Idea Machines. For those of you who are interested in the experiment, I've put a longer-than-normal post-pod at the end of the episode.

Key Takeaways
  • Context and synthesis are two critical pieces of knowledge transfer that we don’t talk or think about enough.
  • There is so much exciting progress to be made in how we could generate and execute on new ideas.
Show Notes

More meta-experiments: An entry point to Joel’s Notes from our conversation

Wright brothers - Wing warping - Control is core problem - Boxes have nothing to do with flying - George de Mestral - Velcro

scite.ai - Canonical way you’re supposed to do scientific literature - Even good practice - find the people via the literature - Incubation Effect - Infrastructure has no way of knowing whether a paper has been contradicted - No way to know whether paper has been Refuted, Corroborated or Expanded - Incentives around references - Herb Simon, Allen Newell - problem solving as searching in space - Continuum from ill structured problems to well structured problems - Figuring out the parameters, what is the goal state, what are the available moves - Cyber security is both cryptography and social engineering - How do we know what we know? - Only infrastructure we have for sharing is via published literature - Antedisciplinary Science - Consequences of science as a career - Art in science - As there is more literature fragmentation it’s harder to synthesize and actually figure out what the problem is - Canonical unsolved problems - List of unsolved problems in physics - Review papers are: Hard to write and Career suicide - Formulating a problem requires synthesis - Three levels of synthesis 1. Listing citations 2. Listing by idea 3. Synthesis - Bloom’s taxonomy - Social markers - yes I’ve read X it wasn’t useful - Conceptual flag citations - there may actually be no relation between claims and claims in paper - Types of knowledge synthesis and their criteria - If you’ve synthesized the literature you’ve exposed fractures in it - To formulate a problem you need to synthesize, to synthesize you need to find the right pieces, finding the right pieces is hard - Individual synthesis systems: - Zettelkasten - Tinderbox system - Roam

Graveyard of systems that have tried to create a centralized knowledge repository - The memex as the philosopher’s stone of computer science - Semantic web - Shibboleth words - Open problem - “What level of knowledge do you need in a discipline” - Feynman sense of knowing a word - Information work at interdisciplinary boundaries - Carol Palmer - Different modes of interdisciplinary research - “Surface areas of interaction” - Causal modeling in the Judea Pearl sense - Sensemaking is moving from unstructured things towards more structured things and the tools matter
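One concrete thread in these notes is Simon and Newell's framing of problem solving as search through a problem space: figure out the parameters, the goal state, and the available moves, then search. A minimal sketch of that framing (the toy arithmetic problem at the bottom is invented for illustration):

```python
from collections import deque

def solve(start, moves, is_goal):
    """Breadth-first search over a problem space, per Newell & Simon's
    framing: a start state, a function giving the available moves from
    any state, and a goal test. Returns the path of states, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path + [state]
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [state]))
    return None  # no path: the moves don't reach the goal

# Toy well-structured problem: reach 10 from 1 using "+1" or "x2" moves.
print(solve(1, lambda s: [s + 1, s * 2] if s < 10 else [], lambda s: s == 10))
```

In this framing, an ill-structured problem is one where the parameters, goal test, or move set are themselves unknown, which is exactly the synthesis work the notes above circle around.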

Idea Machines - NASA vs DARPA with Mark Micire [Idea Machines #1]

12/07/18 • 58 min

My guest this week is Mark Micire, group lead for the Intelligent Robotics Group at NASA’s Ames Research Center. Previously Mark was a program manager at DARPA, an entrepreneur, and a volunteer firefighter.

The topic of this conversation is how DARPA works and why it’s effective at generating game-changing technologies, the Intelligent Robotics Group at NASA, and developing Robotics and technology in high-stakes scenarios.

Links

Intelligent Robotics Group

DARPA

Camp Fire

DARPA Defense Sciences Office

First DARPA Grand Challenge Footage - looks like a blooper reel

FEMA Robotics

Transcript

Ben: [00:00:00] Mark, welcome to the show. Let's start by talking about the Camp Fire.

[00:00:04] Camp Fire

[00:00:04] So we have an unprecedented fire, the Camp Fire, going on right now. It's basically being fought primarily with people. I know you have a lot of experience dealing with natural disasters and robotics for emergency situations. So I guess the big question is: why don't we have more robots fighting the Camp Fire right now?

[00:00:26] Mark: [00:00:26] Well, so, believe it or not, there are a lot of efforts happening right now to bring robotics to bear on those kinds of problems. Menlo Park Fire especially has one of the nation's leading groups. It's a small squad of folks at Menlo Park Fire, absolutely career firefighters, who are now learning how to leverage this technology.

[00:00:57] They're [00:01:00] using a lot of UAVs to do aerial reconnaissance. It's been used on multiple disasters. We had the dam breakage up in almost the same area as the Camp Fire, and they were using the UAVs to do reconnaissance for those kinds of things. So the ability for fire rescue to begin adopting these new technologies is always slow. The inroads that I have seen in the last, say, five years are that they like that it has cameras.

[00:01:32] They like that it can get overhead and give them a view they wouldn't have been able to see otherwise. The fact that now you can get these UAVs that have thermal imaging cameras is frighteningly useful, especially for structure fires. So those are the baby steps that we've taken. Where we haven't gone yet, and what I'm hopeful we'll eventually see, is the idea that you actually have some of [00:02:00] these robots deploying suppressant.

[00:02:01] So the idea that they are helping to, you know, provide water and help put out the fire. That's a long leap from where we are right now, but I would absolutely see that being within the realm of the possible. Gosh, back in 2008, so about 10 years ago, NASA was leveraging a Predator B that it had, with some

[00:02:27] imagery technology that was up underneath it, to help with the fire that was down in Big Sur, and I helped with that a little bit. Back then I was just an intern here at NASA, and that's, I think, a really good example of the fire service leveraging larger government facilities and capabilities to use robotics and UAVs and other things in a way that the fire service itself frankly doesn't have the budget or R&D [00:03:00] resources to really do on their own.

[00:03:00] Ben: So you think it's primarily a resources thing?

[00:00:00] Mark: [00:00:00] It's a couple of factors. There's resources: so, you know, outside of DHS (Homeland Security has a science and technology division that does some technology development), there's not a whole lot of organizations outside of commercial entities doing R&D for fire rescue. It just doesn't exist.

[00:00:28] So that's your first problem. The second problem is that, culturally, the fire service is just very slow to adopt new technology. And it's one part, you know, "well, my daddy didn't need it and my daddy's daddy didn't need it, so why the heck do I need it?" It's easy to blame it on that.

[00:00:49] What I guess I've learned over [00:04:00] time, after working within the fire service, is that everything is life-critical. There are very few things that you're doing when you're in the field providing that service, in this case wildfire response, where lives don't kind of hang in the balance.

[00:01:09] And so the technologies that you bring to bear have to be proven b...


My guest this week is Malcolm Handley, General Partner and Founder of Strong Atomics.

The topic of this conversation is fusion power: how it's funded now, why we don't have it yet, and how he's working on making it a reality. We touch on funding long-term bets in general, incentives inside of venture capital, and more.

Show Notes

Strong Atomics

Malcolm on Twitter (@malcolmredheron)

Fusion Never Plot

Fusion Z-Pinch Experiment.

ARPA-e Alpha Program

ITER - International Thermonuclear Experimental Reactor.

NIF - National Ignition Facility

ARPA-e

Office of Fusion Energy Science

Sustainable Energy without the Hot Air

Transcript

[00:00:00] In this podcast I talk to Malcolm Handley about fusion funding, long-term bets, incentives inside of venture capital, and more. Malcolm is the managing partner of Strong Atomics, a venture capital firm that exists solely to invest in a portfolio of fusion projects selected based on their potential to create net positive energy and lead to plausible reactors. Before starting Strong Atomics,

Malcolm was the first employee at the software company Asana. I love talking to Malcolm because he's somewhat of a fanatic about making fusion energy a reality, but at the same time he remains an intense pragmatist; in some ways, he's even more pragmatic than I am. He thinks deeply about everything he does,

so we go very deep on some topics. I hope you enjoy the conversation as much as I did.

Intro

Ben: Malcolm, would you introduce yourself?

Malcolm: Sure. So I'm Malcolm Handley. I founded Strong [00:01:00] Atomics after 17 years as a software engineer because I was looking for the most important thing that I could work on and concluded that that was climate change. That was before democracy fell off the rails, and so it was the obvious most important thing.

My thesis is that climate change is a real problem and the typical ways that we are addressing it are insufficient. For example, even if you ignore the climate deniers, most people seem to be of the opinion that we're on track, that renewables and storage for renewable energy are going to save the day. And my fear, as I looked into this more deeply, is that this is not sufficient, that we are in fact not on track, and that we need to be looking at more possible ways of responding to [00:02:00] climate change.

So I found an area, nuclear fusion, that has the potential to help us solve climate change and that, in my opinion, is underinvested. So I started Strong Atomics to invest in those companies and to support them in other ways. And that's what I'm doing these days.

What did founding Strong Atomics entail?

Ben: Can you dig a little bit more into what founding Strong Atomics entailed? You can't just snap your fingers and bring it into being.

Malcolm: I almost did, because I was extremely lucky. But in general, Silicon Valley has a pretty well-worn model for how people start startups, and I think even the people getting out of college actually know a surprising amount about how to start a company. When you look at fusion companies getting started, you realize just how much knowledge we take for granted in Silicon Valley.

On the other hand, as far as I can tell, the way [00:03:00] that every VC fund gets started, and the way that everyone becomes a VC, is unique. There's really one story for how you start a company, and there are n stories for how funds get started. So in my case, I wasn't sure that I wanted to start a fund. More precisely,

it hadn't even occurred to me that I would start a fund. I was a software engineer looking for what I could do about climate change, just assuming that I was looking for a technical way to be involved. I was worried because my only technical skill is software engineering, but I figured, hey, with software you can do many things.

There must be a way that a software engineer can help. So I made my way to the ARPA-e Summit in DC at the beginning of 2016 and went around and talked to a whole lot of people at their different booths about what they were doing. My question for myself was: does what you're doing matter? My question for them was: how might a software engineer help? [00:04:00] And to a first approximation, even at a wonderful conference like the ARPA-e Summit,

...

In this episode I talk to Evan Miyazono about tackling metaresearch questions, how novel physical phenomena go from "oh that's cool" to devices that harness cutting edge physics, and how we could better incentivize the creators of innovations where traditionally it's hard to capture value, like open-source software and early-stage research.

Evan is a research scientist at Protocol Labs where he helps lead their research efforts - coordinating researchers both inside and outside the company. Protocol Labs is best known for Filecoin: a blockchain application for distributed storage. At the same time they also have a much larger mission that we get into in the podcast. Before joining Protocol Labs, Evan did his PhD at Caltech where he worked on turning crazy physics into practical devices for cryptography.

Key Takeaways
  • There might be ways to demystify both intuition and "big-H Hard" research in order to improve our systems for breakthrough discoveries. It's still super speculative but worth thinking about.
  • Observations about physical phenomena and the world are at the core of many innovations, but most of the process is driven from the top down by the problem, rather than bottom-up by the solution. On top of that, the process of solving the problem can actually feed back and increase our understanding of the underlying phenomena.
  • Finally, there might also be new legal structures we could put in place to encourage more open-source development and fundamental research by allowing people to access more of the value they create in those activities.
Resources

Protocol Labs

Evan on Twitter

A quick talk on Protocol Labs research

Metascience

Cloud Seeding - From the abstract: "The intent of glaciogenic seeding of orographic clouds is to introduce aerosol into a cloud to alter the natural development of cloud particles and enhance wintertime precipitation in a targeted region. ... Despite numerous experiments spanning several decades, no direct observations of this process exist."

SourceCred - a tool to help open source contributors capture the value of their contributions.

Evan on Google Scholar if you want to go really deep. Try saying "Coupling of erbium dopants to yttrium orthosilicate photonic crystal cavities for on-chip optical quantum memories" three times fast.


My guest this week is Brian Nosek, co-founder and Executive Director of the Center for Open Science. Brian is also a professor in the Department of Psychology at the University of Virginia, doing research on the gap between values and practices, such as when behavior is influenced by factors other than one's intentions and goals.

The topic of this conversation is how incentives in academia lead to problems with how we do science, how we can fix those problems, the center for open science, and how to bring about systemic change in general.

Show Notes

Brian’s Website

Brian on Twitter (@BrianNosek)

Center for Open Science

The Replication Crisis

Preregistration

Article in Nature about preregistration results

The Scientific Method

If you want more, check out Brian on Econtalk

Transcript

Intro

[00:00:00] In this podcast I talk to Brian Nosek about innovating on the very beginning of the innovation pipeline: research. I met Brian at the Dartmouth 60th anniversary conference and loved his enthusiasm for changing the way we do science. Here's his official biography: Brian Nosek is a co-founder and the executive director of the Center for Open Science. COS is a nonprofit dedicated to enabling open and reproducible research practices worldwide.

Brian is also a professor in the Department of Psychology at the University of Virginia. He received his PhD from Yale University in 2002. In 2015 he was on Nature's 10 list and the Chronicle of Higher Education Influence list. Some quick context about Brian's work and the Center for Open Science:

There's a general consensus in academic circles that there are glaring problems in how we do research today. The way research works is generally like this: researchers, usually based at a university, do experiments; then, when they have a [00:01:00] result, they write it up in a paper; that paper goes through the peer-review process; and then a journal publishes it.

The number of journal papers you've published and their popularity make or break your career. They're the primary consideration for getting a position, receiving tenure, getting grants, and prestige in general. That system evolved in the 19th century, when many fewer people did research and grants didn't even exist; we get into how things have changed in the podcast.

You may also have heard of what's known as the replication crisis. This is the fairly alarming name for a recent phenomenon in which people have tried and failed to replicate many well-known studies. For example, you may have heard that power posing will make you act bolder, or that self-control is a limited resource.

Both of the studies that originated those ideas failed to replicate. Since replicating findings is a core part of the scientific method, unreplicated results becoming part of canon is a big deal. Brian has been heavily involved in the [00:02:00] crisis, and several of the Center for Open Science's initiatives target replication.

So with that, I invite you to join my conversation with Brian Nosek.

How does open science accelerate innovation and what got you excited about it?

Ben: So the theme that I'm really interested in is: how do we accelerate innovations? Just to start off with, I'd love to ask you a really broad question: in your mind, how does having a more open science framework help us accelerate innovations? And, I guess parallel to that, what got you excited about it in the first place?

Brian: Yeah, yeah. So this is really the core of why we started the Center for Open Science: to figure out how we can maximize the progress of science, given that we see a number of different friction points to the pace and progress of [00:03:00] science.

And so there are a few things, I think, in how openness accelerates innovation, and I guess you can think of it as multiple stages. At the opening stage, openness in terms of planning (pre-registering what your study is about, why you're doing the study, that the study exists in the first place) is a mechanism of helping to improve innovation by increasing the credibility of the outputs.

Particularly in making a clear distinction between the things that we planned in advance, the hypotheses and ideas that we have where we're acquiring data in order to test those ideas, and the exploratory results, the things that we learn once we've observed the data and we get insights bu...


In this episode I talk to Gary Bradski about the creation of OpenCV, Willow Garage, and how to get around institutional roadblocks.

Gary is perhaps best known as the creator of OpenCV - an open source tool that has touched almost every application that involves computer vision - from cat-identifying AI, to strawberry-picking robots, to augmented reality. Gary has been part of Intel Research, Stanford (where he worked on Stanley, the self-driving car that won the first DARPA Grand Challenge), and Magic Leap, and has started his own startups. On top of that, Gary was early at Willow Garage - a private research lab that produced two huge innovations in robotics: the open source Robot Operating System (ROS) and the PR2 robot. Gary has a track record of seeing potential in technologies long before they appear on the hype radar - everything from neural networks to computer vision to self-driving cars.

Key Takeaways

  • Aligning incentives inside of organizations is both essential and hard for innovation. Organizations are incentivized to focus on current product lines instead of Schumpeterian long shots. Gary basically had to do incentive gymnastics to get OpenCV to exist.
  • In research organizations there's an inherent tension between pressure to produce and exploration. I love Gary's idea of a slowly decreasing salary.
  • Ambitious projects are still totally dependent on a champion. At the end of the day, it means that every ambitious project has a single point of failure. I wonder if there's a way to change that.
Notes

Gary on Twitter

The Embedded Vision Alliance

Video of Stanley winning the DARPA Grand Challenge

A short history of Willow Garage


In this episode I talk to Errol Arkilic about different systems involved in turning research into companies.

Errol has been helping research make the jump from the lab to the market for more than fifteen years: he was a program manager at the National Science Foundation's (NSF) Small Business Innovation Research (SBIR) program, where he awarded grants to hundreds of companies commercializing research. He started the NSF Innovation Corps, a program that gives researchers the tools they need to make the transition to running a successful business. Currently he is a partner at M34 Capital, where he focuses exclusively on projects that are being spun out of labs. Seeing the often rocky tech transition from so many sides has given him a nuanced view of the whole system.

Key Takeaways
  • While there are some best practices around commercializing research, like business model canvases, many pieces like assembling a team and finding complementary technologies are still completely bespoke.
  • The commercial value of research is a tricky thing. Some is valuable, but not quite valuable enough to form an organization around. Other research could be incredibly valuable if the world were in a slightly different state. Different approaches are needed in each situation.
  • The mental model of MIST vs. TISM - market in search of technology and technology in search of market.

Links

M34 Capital

The SBIR Program

Business Model Canvases

Errol on How the NSF Works

Pasteur's Quadrant

NSF Innovation Corps

Topics

What is the pathway to commercialization

How do you have an iterative process when people don't know what they want

What do the best researchers do to pull out core problems to work on?

How do you address the tension of people wanting to apply their hammers?

What are examples of people who have applied very specific technologies?

How do you assemble a team around a technology?

How do you systematize assembling teams?

How do you systematize finding technologies that can plug a technological hole?

What do you think about patents?

Patents, trade secrets

Technology that isn't venture fundable

Valuable ideas that aren't valuable enough to pursue

Systematizing finding whether value could be harvested

Where is the role of SBIRs in today's world

SBIR decision making process

Legendary SBIR successes

Push vs. Pull out of lab

How do you find MIST projects

Are there labs in unintuitive programs

Next steps outside of local ecosystems?

Does any new innovation need a champion?

What should people be thinking about that they're not?

TISM vs MIST


In this episode I talk to Mark Hammond about how Deep Science Ventures works, why the linear commercialization model leaves a lot on the table, and the idea of venture-focused research. Mark is the founder of Deep Science Ventures, an organization with a fascinating model for launching science-based companies. Mark has many crisply articulated theses about holes in the current system by which research becomes useful innovations and what we might do to fill them.

Key Takeaways:

  1. There are many places where innovation is slow and incremental because everybody is focused on individual pieces: batteries are a great example here.
  2. The idea that deep/frontier/hard tech companies are riskier and take longer to provide returns may in fact be more grounded in popular perception than fact.
  3. The factors that make translational research so expensive may not be inherent but instead driven by administrative overhead and the fact that much of it is pointed in the wrong direction.

Resources

Deep Science Ventures

Mark on Twitter (@iammarkhammond)

Systematised ‘quant’ venture in the sciences.

LifeSciVC on biotech returns



FAQ

How many episodes does Idea Machines have?

Idea Machines currently has 49 episodes available.

What topics does Idea Machines cover?

The podcast is about Investing, Future, Podcasts, Finance, Technology and Business.

What is the most popular episode on Idea Machines?

The episode title 'MACROSCIENCE with Tim Hwang [Idea Machines #49]' is the most popular.

What is the average episode length on Idea Machines?

The average episode length on Idea Machines is 59 minutes.

How often are episodes of Idea Machines released?

Episodes of Idea Machines are typically released every 20 days, 12 hours.

When was the first episode of Idea Machines?

The first episode of Idea Machines was released on Dec 7, 2018.
