
E4: Deep Learning
01/01/16 • 77 min
Previous Episode

E3: Neuromorphic Computing
For our third episode, we cover "neuromorphic computing": the attempt to build hardware that functions like neurons, a fairly new field of research. We discuss how building neurons on a chip is possible, how it compares to standard computing and to standard neural modeling, and the design principles that make something "neuromorphic". We also ask whether any of this is worth it, either for engineering or for neuroscience.
Next Episode

E5: Neural Oscillations
For our 5th episode, we get into braaaiiiiinnnwaaaaaavessss. By which we mean neural oscillations. By which we really mean, it turns out, a lot of different things. For this, we bring in special guest Nancy Padilla, who actually puts electrodes into animals to study these things. We define the vocabulary of the field, and then Nancy tells us how she uses these measurements in her own work. Then, with the help of this paper, we get into what can reasonably be concluded from extracellular-oscillation-style studies, and the seemingly seductive nature of oscillations as an explanation for everything. Throughout, you're gonna hear a lot about LFPs (local field potentials), including Conor's lament about their undefinable nature. And Josh is going to demand that Nancy explain how oscillations could be of use to us computational types. Finally, we wrap up with a bit of redemption and common ground surrounding this paper on "ephaptic coupling".