Adam Becker’s latest work, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, examines the complex and often unsettling ideological underpinnings of the tech elite. The book dissects a belief system that, according to Becker, is a potent cocktail of misconstrued science fiction, flawed cognitive psychology, a profound fear of death, and a drive to rationalize immense wealth. The analysis resonates all the more given the reviewer’s own background in cognitive psychology and speculative fiction, which lends a personal as well as critical lens to the subject matter.
Unpacking the Genesis of a Modern Ideology
Becker’s exploration begins with Eliezer Yudkowsky, a prominent figure often described as an AI prophet and a foundational architect of the Rationalist movement. Yudkowsky’s conviction, stemming from early encounters with the concept of the "Singularity"—a hypothetical point at which artificial superintelligence will irrevocably alter human civilization—is central to the narrative. His perspective posits an urgent dichotomy: successfully aligning superintelligent AI with human values and ethics could usher in an era of utopia, while failure risks human extinction. This framing positions AI as a potential messianic or apocalyptic force, demanding immediate and significant attention.
This intellectual current is not isolated; it is a confluence of belief systems that have coalesced to shape the modern tech oligarchy. Becker highlights Yudkowsky’s early engagement with the transhumanist and extropian movements, with their aspirations to technologically augmented immortality and enhanced cognition. These ideas will be familiar to readers who encountered them through niche gaming supplements like GURPS Transhuman Space, the speculative fiction of authors like Spider Robinson, or, most notably, Vernor Vinge’s Zones of Thought series, and they represent a significant influence on the movement.
The Rationalist movement itself, deeply intertwined with the pursuit of superintelligent AI, draws heavily on research into cognitive biases; the stated aim is to mitigate personal biases and sharpen predictions about future technological developments. Yudkowsky’s Harry Potter and the Methods of Rationality fan fiction, which introduced these concepts to a broader audience, is cited as a key vector for their popularization, with a parallel drawn to the esoteric, mind-altering play at the center of Robert W. Chambers’s The King in Yellow.
The Influence of Effective Altruism and Longtermism
Another crucial component of this ideological edifice is Effective Altruism (EA). Initially focused on maximizing the impact of philanthropic endeavors, EA’s philosophical extensions have evolved significantly. While basic utilitarian ethics suggest that all individuals, regardless of proximity, deserve equal consideration, EA’s "longtermism" extends this moral calculus to hypothetical future beings. This perspective prioritizes the potential well-being of a vast, distant future over the immediate needs of current populations. The core tenet often becomes the maximization of "utility"—a quantifiable measure of happiness or well-being—where the sheer quantity of future utility, even if diluted, outweighs the concrete benefits to present-day individuals.
This framework can justify substantial resource allocation toward speculative future scenarios, such as AI alignment, on the grounds that even a minuscule probability of securing immense utility for trillions of hypothetical future minds yields an enormous expected value. This contrasts sharply with investing in immediate, high-probability interventions like disease eradication programs, which relieve tangible, present-day suffering for a smaller but actually existing population. The underlying assumption, which Becker scrutinizes, is that future utility is both measurable and infinitely scalable.
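The expected-value arithmetic driving this reasoning can be made concrete with a small sketch. All of the numbers below are deliberately invented for illustration; they do not come from Becker’s book or from any actual EA cost-benefit analysis:

```python
# Illustrative expected-value comparison in the longtermist style.
# Every figure here is an invented placeholder, chosen only to show
# how a tiny probability times an astronomical payoff can dominate.

# Speculative bet: a minuscule chance that an intervention matters
# to a vast number of hypothetical future minds.
p_alignment_success = 1e-10     # assumed probability the work changes the outcome
future_minds = 1e20             # hypothetical future beings who would benefit
ev_speculative = p_alignment_success * future_minds   # 1e10 "lives" in expectation

# Concrete bet: a near-certain benefit to people alive today.
p_program_works = 0.99          # assumed success rate of a disease program
lives_saved_now = 1e6           # people helped in the present
ev_concrete = p_program_works * lives_saved_now       # ~9.9e5 lives in expectation

# On this arithmetic the speculative bet wins by orders of magnitude,
# which is precisely the move Becker scrutinizes: it treats future
# utility as measurable, commensurable, and boundlessly scalable.
print(ev_speculative > ev_concrete)
```

The point of the sketch is not that the multiplication is wrong but that the inputs are unconstrained: by choosing a large enough number of hypothetical beneficiaries, any present-day good can be made to look negligible.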
Critiquing the Foundations: Science, Psychology, and Ethics
Becker systematically dismantles the scientific, mathematical, and ethical assumptions underpinning these tech-driven futures. He argues that proponents often misrepresent the actual state of nanotechnology and neuroscience, and that space travel is frequently imbued with an almost spiritual transcendence, a trope that has filtered into speculative fiction. A particularly troubling throughline he identifies runs from historical eugenics (the idea of a singular, measurable, and breedable intelligence) to the contemporary assumption that a sufficiently advanced AI must inevitably create ever-more-intelligent iterations of itself. Becker also notes allegations that some Effective Altruism philanthropists have weighed ethnic groups’ perceived potential to contribute to AI alignment research when deciding how to allocate aid.
The book also highlights the reported involvement of Jeffrey Epstein, a convicted sex offender and financier, in some of these circles, underscoring the complex and sometimes morally compromised networks that can emerge around fringe ideologies.
The consequences of these beliefs, Becker contends, are palpable in political discourse, in technological development, and in the persistent narrative that advanced language models are on the cusp of godhood. More broadly, he argues, these ideologies have diverted significant societal resources away from pressing, well-characterized existential threats such as climate change, pandemics, and asteroid impacts, and toward speculative AI scenarios. Further ethical questions arise from pro-natalist movements that prioritize creating more humans over ensuring their well-being, and from the push to decouple wealth generation from human labor, which points toward a future potentially built on non-human exploitation.

Becker posits that humanity has a long history of adopting counterproductive belief systems. What makes the current era unique, he stresses, is the unprecedented power wielded by billionaires, which lets them enact these beliefs on a global scale. His proposed remedy, taxing billionaires so radically that the class effectively ceases to exist, is presented as a concrete, achievable measure to improve the future for humanity.
The Personal Resonance of a Dystopian Vision
The author reveals a personal connection to the intellectual currents explored in the book. Having engaged with transhumanist ideas early in his career, including considering cryonics and participating in early online forums about immortality, and having studied cognitive psychology with an interest in mitigating biases, the reviewer found himself adjacent to these movements. His prior enjoyment of Harry Potter and the Methods of Rationality, along with later academic work touching on nanotechnology and the psychology of technology, brought him into proximity with the very ideas the book critiques.
He recounts instances where funding for non-AI-related research became increasingly difficult, attributing this shift in part to the influence of individuals deeply involved in Effective Altruism. This pattern echoed his earlier experiences with the hype surrounding nanotechnology, where a pervasive optimism obscured realistic assessments of technological capabilities. The realization that the very discourse he had engaged with was being used to construct a dystopian future was a significant turning point.
A moment of intellectual excitement came when the author recalled a conference panelist’s statement: "We have no reason to believe that the brain is a Turing Machine." The assertion had stayed with him for years as a touchstone for thinking about the likelihood of Artificial General Intelligence (AGI), and Becker’s book finally let him trace it to Dr. Rodney Brooks, who appears in its pages, grounding a long-held intellectual curiosity with a proper citation.
Seeds of Speculative Fiction: Rethinking Future Narratives
Becker suggests a departure from the pervasive "Silicon Valley Inevitable Future" trope in speculative fiction. He argues that while these narratives (the Singularitarian-Rationalist project, the promise of AI gods, the existential stakes of AI alignment) have fueled countless stories, it is crucial to examine their origins and applications. By understanding how these tropes are deployed, speculative fiction writers can detach their work from the orthodoxies of the tech industry and explore more nuanced and critical futures. The goal is not to discard these ideas entirely, but to reimagine them with a deeper awareness of their context and potential consequences.
Further Reading and Alternative Futures
For readers seeking to delve deeper into these themes, Becker offers a curated list of recommended works. A City on Mars by Kelly and Zach Weinersmith is suggested for its exploration of space colonization. Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI provides a focused case study on a prominent AI organization. Shannon Vallor’s The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking offers a counterpoint to the uncritical embrace of AI, focusing on human agency. Gary Marcus’s newsletter is recommended for ongoing critical analysis of LLM development. Stephen Jay Gould’s The Mismeasure of Man remains a seminal work on the problematic history and scientific fallacies of quantifying intelligence.
To explore alternative technological development models and the role of speculative fiction in envisioning community survival, Becker recommends Uneven Futures: Strategies for Community Survival from Speculative Fiction, edited by Ida Yoshinaga, Sean Guynes, and Gerry Canavan. Beautiful Solutions: A Toolbox for Liberation, edited by Elandria Williams et al., and Civic Media: Technology, Design, and Practice by Eric Gordon and Paul Mihailidis, alongside Power to the Public: The Promise of Public Interest Technology by Tara Dawson McGuinness and Hana Schank, are presented as valuable resources for understanding different approaches to technology development.
Max Gladstone’s Craft Sequence series is highlighted as a significant contemporary work of AI-skeptical science fiction, depicting a fantasy world where Big Tech’s ambitions are all too real, fought with "necromantic lawyers." Gladstone’s Empress of Forever is also lauded as a critical post-Singularity story. The article also revisits Vernor Vinge’s Zones of Thought series and Benjamin Rosenbaum’s The Unraveling as important contributions to post-Singularity narratives, exploring themes of transhuman stagnation and societal categorization.
The examination of these influential belief systems underscores the urgent need for critical engagement with the narratives shaping our technological future. Becker’s work serves as a vital resource for understanding the complex interplay of ideology, technology, and power, and for inspiring a more thoughtful and human-centered approach to the challenges and opportunities that lie ahead.
