Brain Inspired

Paul Middlebrooks

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence.

Available episodes

5 of 99
  • BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness
    Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. This is one in a periodic series of episodes with Alex Gomez-Marin, exploring how the arts and humanities can impact (neuro)science. Artistic creations, like cinema, have the ability to momentarily lower our ever-critical scientific mindset and allow us to imagine alternate possibilities and experience emotions outside our normal scientific routines. Might this feature of art potentially change our scientific attitudes and perspectives? Frauke Sandig and Eric Black recently made the documentary film AWARE: Glimpses of Consciousness, which profiles six researchers studying consciousness from different perspectives. The film is filled with rich visual imagery and conveys a sense of wonder and awe in trying to understand subjective experience, while diving deep into the reflections of the scientists and thinkers approaching the topic from their various perspectives. This isn't a "normal" Brain Inspired episode, but I hope you enjoy the discussion!
    AWARE: Glimpses of Consciousness
    Umbrella Films
    0:00 - Intro
    19:42 - Mechanistic reductionism
    45:33 - Changing views during lifetime
    53:49 - Did making the film alter your views?
    57:49 - ChatGPT
    1:04:20 - Materialist assumption
    1:11:00 - Science of consciousness
    1:20:49 - Transhumanism
    1:32:01 - Integrity
    1:36:19 - Aesthetics
    1:39:50 - Response to the film
    2 June 2023
    1:54:42
  • BI 167 Panayiota Poirazi: AI Brains Need Dendrites
    Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Panayiota Poirazi runs the Poirazi Lab at the FORTH Institute of Molecular Biology and Biotechnology, and Yiota loves dendrites, those branching tree-like structures sticking out of all your neurons, and she thinks you should love dendrites, too, whether you study biological or artificial intelligence. In neuroscience, the old story was that dendrites just reach out and collect incoming signals for the all-important neuron cell body to process. Yiota, and people like Matthew Larkum, with whom I chatted in episode 138, are continuing to demonstrate that dendrites are themselves computationally complex and powerful, doing many varieties of important signal transformation before signals reach the cell body. For example, in 2003, Yiota showed that because of dendrites, a single neuron can act as a two-layer artificial neural network, and since then others have shown single neurons can act as deeper and deeper multi-layer networks (a toy sketch of the two-layer idea follows this episode list). In Yiota's opinion, an even more important function of dendrites is increased computing efficiency, something evolution favors and something artificial networks need to favor as well moving forward.
    Poirazi Lab
    Twitter: @YiotaPoirazi
    Related papers:
    Drawing Inspiration from Biological Dendrites to Empower Artificial Neural Networks.
    Illuminating dendritic function with computational models.
    Introducing the Dendrify framework for incorporating dendrites to spiking neural networks.
    Pyramidal Neuron as Two-Layer Neural Network.
    0:00 - Intro
    3:04 - Yiota's background
    6:40 - Artificial networks and dendrites
    9:24 - Dendrites special sauce?
    14:50 - Where are we in understanding dendrite function?
    20:29 - Algorithms, plasticity, and brains
    29:00 - Functional unit of the brain
    42:43 - Engrams
    51:03 - Dendrites and nonlinearity
    54:51 - Spiking neural networks
    56:02 - Best level of biological detail
    57:52 - Dendrify
    1:05:41 - Experimental work
    1:10:58 - Dendrites across species and development
    1:16:50 - Career reflection
    1:17:57 - Evolution of Yiota's thinking
    27 May 2023
    1:27:43
  • BI 166 Nick Enfield: Language vs. Reality
    Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Nick Enfield is a professor of linguistics at the University of Sydney. In this episode we discuss topics in his most recent book, Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists. A central question in the book is: what is language for? What's the function of language? You might be familiar with the debate about whether language evolved for each of us thinking our wonderful human thoughts, or for communicating those thoughts between each other. Nick would be on the communication side of that debate, but if by communication we mean simply the transmission of thoughts or information between people - I have a thought, I send it to you in language, and that thought is now in your head - then Nick wouldn't take either side of that debate. He argues the function of language goes beyond the transmission of information, and instead is primarily an evolved solution for social coordination - coordinating our behaviors and attention. When we use language, we're creating maps in our heads so we can agree on where to go. For example, when I say, "This is brain inspired," I'm pointing you to a place to meet me on a conceptual map, saying, "Get ready, we're about to have a great time again!" In any case, with those four words, "This is brain inspired," I'm not just transmitting information from my head into your head. I'm providing you with a landmark so you can focus your attention appropriately. From that premise, that language is about social coordination, we talk about a handful of topics in his book, like the relationship between language and reality, and the idea that all language is framing; that is, how we say something influences how we think about it. We discuss how our language changes in different social situations, the role of stories, and of course, how LLMs fit into Nick's story about language.
    Nick's website
    Twitter: @njenfield
    Book: Language vs. Reality: Why Language Is Good for Lawyers and Bad for Scientists.
    Papers: Linguistic concepts are self-generating choice architectures
    0:00 - Intro
    4:23 - Is learning about language important?
    15:43 - Linguistic Anthropology
    28:56 - Language and truth
    33:57 - How special is language
    46:19 - Choice architecture and framing
    48:19 - Language for thinking or communication
    52:30 - Agency and language
    56:51 - Large language models
    1:16:18 - Getting language right
    1:20:48 - Social relationships and language
    9 May 2023
    1:27:12
  • BI 165 Jeffrey Bowers: Psychology Gets No Respect
    Check out my free video series about what's missing in AI and Neuroscience. Support the show to get full episodes and join the Discord community. Jeffrey Bowers is a psychologist and professor at the University of Bristol. As you know, many of my previous guests are in the business of comparing brain activity to the activity of units in artificial neural network models, when humans or animals and the models are performing the same tasks (a generic sketch of one such comparison method follows this episode list). And a big story that has emerged over the past decade or so is that there's a remarkable similarity between the activities and representations in brains and models. This was originally found in object categorization tasks, where the goal is to name the object shown in a given image, and where researchers have compared the activity in the models good at doing that to the activity in the parts of our brains good at doing that. It's been found in various other tasks using various other models and analyses, many of which we've discussed on previous episodes, and more recently a similar story has emerged regarding a similarity between language-related activity in our brains and the activity in large language models. Namely, the ability of our brains to predict an upcoming word can be correlated with the models' ability to predict an upcoming word. So the word is that these deep-learning-type models are the best models of how our brains and cognition work. However, this is where Jeff Bowers comes in and raises the psychology flag, so to speak. His message is that these predictive approaches to comparing artificial and biological cognition aren't enough, and can mask important differences between them. What we need to do is start performing more hypothesis-driven tests like those performed in psychology, for example, to ask whether the models are indeed solving tasks like our brains and minds do. Jeff and his group, among others, have been doing just that, and are discovering differences between models and minds that may be important if we want to use models to understand minds. We discuss some of his work and thoughts in this regard, and a lot more.
    Website
    Twitter: @jeffrey_bowers
    Related papers:
    Deep Problems with Neural Network Models of Human Vision.
    Parallel Distributed Processing Theory in the Age of Deep Networks.
    Successes and critical failures of neural networks in capturing human-like speech recognition.
    0:00 - Intro
    3:52 - Testing neural networks
    5:35 - Neuro-AI needs psychology
    23:36 - Experiments in AI and neuroscience
    23:51 - Why build networks like our minds?
    44:55 - Vision problem spaces, solution spaces, training data
    55:45 - Do we implement algorithms?
    1:01:33 - Relational and combinatorial cognition
    1:06:17 - Comparing representations in different networks
    1:12:31 - Large language models
    1:21:10 - Teaching LLMs nonsense languages
    12 April 2023
    1:38:45
  • BI 164 Gary Lupyan: How Language Affects Thought
    Support the show to get full episodes and join the Discord community. Check out my free video series about what's missing in AI and Neuroscience. Gary Lupyan runs the Lupyan Lab at the University of Wisconsin, Madison, where he studies how language and cognition are related. In some ways, this is a continuation of the conversation I had last episode with Ellie Pavlick, in that we partly continue to discuss large language models. But Gary is more focused on how language - naming things, categorizing things - changes our cognition related to those things. How does naming something change our perception of it, and so on? He's interested in how concepts come about and how they map onto language. So we talk about some of his work and ideas related to those topics. And we actually start the discussion with some of Gary's work related to the variability of individual humans' phenomenal experience, and how that affects our individual cognition. For instance, some people are more visual thinkers, others are more verbal, and there seems to be an appreciable spectrum of differences that Gary is beginning to experimentally test.
    Lupyan Lab
    Twitter: @glupyan
    Related papers:
    Hidden Differences in Phenomenal Experience.
    Verbal interference paradigms: A systematic review investigating the role of language in cognition.
    Gary mentioned Richard Feynman's Ways of Thinking video.
    Gary and Andy Clark's Aeon article: Super-cooperators.
    0:00 - Intro
    2:36 - Words and communication
    14:10 - Phenomenal variability
    26:24 - Co-operating minds
    38:11 - Large language models
    40:40 - Neuro-symbolic AI, scale
    44:43 - How LLMs have changed Gary's thoughts about language
    49:26 - Meaning, grounding, and language
    54:26 - Development of language
    58:53 - Symbols and emergence
    1:03:20 - Language evolution in the LLM era
    1:08:05 - Concepts
    1:11:17 - How special is language?
    1:18:08 - AGI
    1 April 2023
    1:31:54
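
An aside for the technically curious: the "two-layer" claim in the BI 167 notes above comes from the paper Pyramidal Neuron as Two-Layer Neural Network. Below is a minimal, purely illustrative Python sketch of that idea: each dendritic branch sums its own synaptic inputs and applies a branch-local nonlinearity, and the soma combines the branch outputs through a second nonlinearity. The sigmoid nonlinearities, weights, and sizes here are placeholder assumptions, not the paper's fitted model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_layer_neuron(x, branch_weights, soma_weights):
    """Rate-based caricature of a pyramidal neuron as a two-layer network.

    Layer 1: each dendritic branch sums its own synaptic inputs and
    passes the result through a branch-local nonlinearity.
    Layer 2: the soma weights and sums the branch outputs and applies
    its own output nonlinearity (a stand-in for the firing rate).
    All details here are illustrative assumptions.
    """
    branch_out = sigmoid(branch_weights @ x)   # one value per branch
    return sigmoid(soma_weights @ branch_out)  # scalar "firing rate"

rng = np.random.default_rng(0)
n_synapses, n_branches = 100, 10
W_branch = rng.normal(size=(n_branches, n_synapses))  # hypothetical synaptic weights
w_soma = rng.normal(size=n_branches)                  # hypothetical branch-to-soma weights
x = rng.random(n_synapses)                            # presynaptic input rates
print(two_layer_neuron(x, W_branch, w_soma))
```

Structurally this is just a small multilayer perceptron, which is the point: branch-local nonlinearities give a single "neuron" the expressive power of a hidden layer rather than that of a single linear-threshold unit.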
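
Similarly, the BI 165 notes describe comparing activity in brains and models without naming a specific analysis. One widely used approach is representational similarity analysis (RSA), sketched below with random arrays standing in for brain recordings and model activations; the data shapes and the choice of RSA itself are assumptions for illustration, not the methods of any paper mentioned above.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activity):
    """Representational dissimilarity matrix (condensed form): pairwise
    correlation distance between the activity patterns (rows) evoked by
    each stimulus."""
    return pdist(activity, metric="correlation")

rng = np.random.default_rng(1)
n_stimuli = 50
# Made-up stand-ins: 200 "voxels" and 512 model units responding
# to the same 50 stimuli.
brain = rng.normal(size=(n_stimuli, 200))
model = rng.normal(size=(n_stimuli, 512))

# Compare the two systems in a common space by correlating their RDMs.
rho, p = spearmanr(rdm(brain), rdm(model))
print(f"brain-model RDM correlation: rho={rho:.3f} (p={p:.3f})")
```

The RDMs abstract away the incomparable raw unit and voxel spaces: two systems count as similar if they treat the same pairs of stimuli as similar or different, which is what makes a direct brain-to-model comparison possible (with random inputs, as here, the correlation is expectedly near zero).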

More science podcasts

About Brain Inspired

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Podcast website
