Apollonian Intelligence: Why Do Tech Bros Have the Worst Ideas?
Peter Thiel is speaking this month in San Francisco about the antichrist. Nicole Shanahan, ex-wife of Google co-founder Sergey Brin and former vice-presidential running mate of Robert F. Kennedy Jr., called the annual Burning Man festival "demonic" last week. These are the most recent developments in the rise of techno-Christianity, a reaction in part to transhumanism and Effective Altruism. Thiel has also questioned the viability of democracy, which brings us to the "Dark Enlightenment" of Curtis Yarvin and Nick Land, which advocates for anti-democratic CEO-kings. A fringe idea for many years, it has now gained traction with Silicon Valley billionaires like Balaji Srinivasan, who advocate for the "Exit," seasteading, and "network states." It feels like we live in the strangest of times, shaped by powerful people with the worst ideas. Because technology reflects the consciousness of the people creating it, I am deeply concerned about the people creating our technologies, especially artificial intelligence.

In my new podcast series, I begin trying to understand the philosophical and psychological underpinnings of the strange ideologies coming out of Silicon Valley, ideologies very much shaping technology innovation today. Inspired by Nietzsche and Iain McGilchrist, I'm calling the imbalanced, left-hemisphere-heavy thinking behind all of this "Apollonian Intelligence" or the "Apollonian Mind."

Peter Thiel on Ross Douthat's New York Times podcast raving about the antichrist
Thiel's four-part lecture series about the antichrist: https://luma.com/antichrist
Nicole Shanahan's rant about demonic Burning Man
New Yorker coverage of Curtis Yarvin's "Dark Enlightenment"
Iain McGilchrist's wonderful book The Matter with Things
--------
22:40
Will Artificial Intelligence Make Us More or Less Wise?
In this episode I explore the complex relationship between artificial intelligence (AI) and wisdom, focusing in particular on discernment. I argue that while AI can hinder discernment by perpetuating biases and misinformation, it also holds some potential for cultivating it through tools that aid meditation and self-reflection. I also emphasize the importance of truth and self-awareness in this "age of AI." Ultimately, I argue that discernment is a uniquely human quality that requires ongoing effort and vigilance, whether aided by AI or not. This one was a long time coming, so I hope you get as much out of listening as I did out of writing it!

More on OpenAI's recent decision to remove some of the content guardrails on ChatGPT.

This episode was adapted from a guest post on Michael Spencer's AI Supremacy newsletter.
--------
28:39
Zombies, Transhumanists, and the Worldview Crisis
Continuing our deep dive into what it means to be human in the age of AI, and inspired by the physicalist yet transcendent worldviews of the transhumanists, in this episode I start to explore the concept of worldviews. I focus first on physicalism, or scientific materialism (the idea that matter and energy are fundamental), and how that particular metaphysics rests on a series of assumptions that were never empirically proven. We contextualize all of it via the postmodern zombie mythology.

I then turn to unexplained anomalies from quantum physics and how they undermine the physicalist worldview, before exploring why physicalism is so intractable as a worldview. All of this sets the table for an exploration of idealism (the idea that mind or consciousness is fundamental) in my next episode. Then we can finally turn our attention to transhumanism and human faculties.

You can find a YouTube version of this episode here.
Sam Altman's "Intelligence Age" post
Quantum measurement explained: https://www.youtube.com/watch?v=IHDMJqJHCQg and https://www.youtube.com/watch?v=-kxmR82QMN8
Quantum entanglement explained: https://www.youtube.com/watch?v=rqmIVeheTVU and https://www.youtube.com/watch?v=ZuvK-od647c
John Vervaeke on being rational and spiritual
John Vervaeke on Zombies
The famous Einstein-Bergson debate of 1922
My video on the hurdles to AGI
--------
46:30
What Psychedelic States of Consciousness Tell Us about AI
The irony of applying the word "hallucination" to LLMs making mistakes is that they are completely incapable of having psychedelic experiences. Why does that matter?

In this mind-bending exploration, we dive into the fascinating intersection of artificial intelligence and expanded states of consciousness. We examine how imagination, creativity, and innovation seem to arise more frequently in altered or "holotropic" states of consciousness, such as those reached through meditation, breathwork, dreams, dancing, or psychedelics. I argue that current approaches to AI may never be truly inventive or creative, because they cannot model the abductive reasoning and intuitive leaps that often occur in these holotropic states. To support this thesis, we explore historical examples of scientific and philosophical breakthroughs that emerged from dreams, visions, and other non-ordinary states of consciousness. In short, I challenge the narrative that AI will soon surpass human intelligence, suggest there may be profound mysteries of the human mind that AI cannot replicate, and offer a more sober and realistic view of the limitations facing AI research in attempting to model the wonders of human cognition and consciousness.

This video is part of a series about the myths, hype, and ideologies surrounding AI.

YouTube version
Support Chad
Stan Grof's collected works
My previous video on the impediments to AGI
Paper using LLMs to model abductive reasoning
Willis Harman's Higher Creativity
Effects of conscious connected breathing on cortical brain activity, mood and state of consciousness in healthy adults
--------
32:25
Impediments to Creating Artificial General Intelligence (AGI)
Artificial general intelligence, or superintelligence, is not right around the corner, as AI companies want you to believe, because intelligence is really hard. Major AI companies like OpenAI and Anthropic (as well as Ilya Sutskever's new company) have the explicit goal of creating artificial general intelligence (AGI), and claim to be very close to doing so using technology that doesn't seem capable of getting us there.

So let's talk about intelligence, both human and artificial. What is artificial intelligence? What is intelligence? Are we going to be replaced or killed by superintelligent robots? Are we on the precipice of a techno-utopia, or some kind of singularity?

These are the questions I explore in order to offer a layman's overview of why we're far from AGI and superintelligence. Among other things, I highlight the limitations of current AI systems, including their lack of trustworthiness, reliance on bottom-up machine learning, and inability to provide true reasoning and common sense. I also introduce abductive inference, a rarely discussed type of reasoning. Why do smart people want us to think that they've solved intelligence when they are smart enough to know they haven't? Keep that question in mind as we go.

YouTube version originally recorded July 1, 2024.

...

Support Chad
James Bridle's Ways of Being (book)
Ezra Klein's comments on AI & capitalism
How LLMs work
Gary Marcus on the limits of AGI
More on induction and abduction
NYTimes deep dive into AI data harvesting
Sam Altman acknowledging that they've reached the limits of LLMs
Mira Murati saying the same thing last month
Google's embarrassing AI search experience
AI Explained's perspective on AGI
LLMs Can't Plan paper
Paper on using LLMs to tackle abduction
ChatGPT is Bullshit paper
Philosophize This on nostalgia and pastiche

Please leave a comment with your thoughts, and anything I might have missed or gotten wrong. More about me over here.
Welcome to Cosmic Intelligence (formerly Spiritual But Not Ridiculous), a podcast that explores philosophy (Western and Vedic), consciousness, cosmology, spirituality, and technologies in the broadest sense: technologies of the sacred, of transformation, and of the mundane. As we enter this age of artificial intelligence (AI), we focus in particular on AI and its implications for humanity, questions of consciousness, AI safety and alignment, what it means to be human in the 21st century, and AI's impact on our shared worldview. Since worldviews create worlds, we will always keep one eye on our shifting worldview, hoping to encourage it along from materialism to idealism.

In terms of consciousness and spirituality, we also explore spiritual practices and other ways to expand consciousness, the importance of feeling our feelings, how to cultivate compassion and empathy, how to find balance, and how to lean into fear as a practice. Sometimes we have guests. We approach all subjects from a grounded and discerning perspective.

Your host is Chad Jayadev Woodford, a philosopher, cosmologist, master yoga teacher, Vedic astrologer, lawyer, and technologist.