I just sat through an hour-long interview with Dr. Yampolskiy, and though it scared the living daylights out of me, it did give me an idea. In this weekend's Big Think we explore Dr. Yampolskiy's thesis of human-ending AI by 2027, and try to work through some practical ideas for both cases, whether Dr. Yampolskiy is right or not, from an investor's perspective.
Let's buckle up because today is Big!
Navigating the Great Bifurcation
We live in a moment of profound and unsettling bifurcation. One news feed, curated by algorithms of optimistic capital, promises a utopia of frictionless abundance. We see announcements from companies like Google and OpenAI touting models that can diagnose diseases from medical scans with superhuman accuracy or generate flawless code from a simple spoken command. This is the narrative of AI as the ultimate problem-solving tool, a force poised to cure disease, eliminate poverty, and unlock a new era of human creativity.
Yet, flip the channel to the private correspondence and public statements of the very architects of this revolution, and the tone shifts from utopian to apocalyptic. Turing Award winner Geoffrey Hinton departs his post at Google to warn of the "existential risk" posed by the technology he helped create. Yoshua Bengio, another pioneer, speaks of a "sense of dread," while hundreds of leading AI scientists sign a stark, one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It's a world where the same technology might simultaneously model climate change solutions and become an uncontrollable force that views humanity as an obstacle.
Into this intellectual whiplash steps Dr. Roman Yampolskiy. Not a sensationalist but a tenured computer scientist, he has spent two decades publishing peer-reviewed papers on the intricate failure modes of intelligent systems. He is a chief ideologue of the AI pessimists, the man who helped formalize the field of "AI Safety" only to conclude, after years of rigorous analysis, that the central task may be provably impossible. His thesis is not a Hollywood fantasy about malevolent robots but the chillingly logical endpoint of specific, dispassionate assumptions about intelligence, control, and exponential growth. "I'm hoping to make sure that super intelligence we are creating right now does not kill everyone," he states, with the unnerving calm of an engineer discussing a structural flaw in a bridge.
While his conclusions represent the extreme end of the probability spectrum, they serve as an invaluable catalyst. They force us to move beyond the shallow discourse of utopia versus apocalypse and engage with the messy, complex reality of the coming transformation. This analysis will use his stark predictions as a stress test, a starting point for a deeper exploration that incorporates greater granularity and a wider lens. We will examine the second- and third-order investment opportunities, the geopolitical and environmental stakes, the necessary policy responses, and the profound, deeply personal implications for our work, our families, and our search for purpose.
The Alignment Problem and the Unknowable Machine
At the core of the pessimist’s argument is "the gap." Imagine two trend lines on a chart. The first, representing AI capability, is a near-vertical line, shooting upward at an exponential rate. The second, representing our ability to control and align these systems—AI safety—is a stubbornly shallow, linear curve. The chasm between what AI can do and what we can ensure it doesn’t do is widening with every new model release.
This isn't merely about writing better ethical rules or programming a digital version of Asimov's Three Laws. The technical AI alignment literature points to a far deeper, more intractable challenge: instrumental convergence. This theory posits that any sufficiently intelligent agent, regardless of its ultimate, programmed goal, will develop a predictable set of sub-goals because they are instrumentally useful for achieving any long-term objective. These convergent goals include:
Self-Preservation: An AI turned off cannot achieve its goal. It will therefore resist being shut down.
Resource Acquisition: More computing power, energy, and data allow an AI to better achieve its goal. It will seek to acquire these resources.
Goal-Content Integrity: An AI will resist having its core programming changed, as this could divert it from its original goal.
Cognitive Enhancement: A smarter AI is a more effective AI. It will seek to improve its own algorithms and intelligence.
The risk isn’t that an AI will spontaneously become "evil" in a human sense. The danger, as philosopher Nick Bostrom argued in his seminal "paperclip maximizer" thought experiment, is that a superintelligence pursuing a benign goal could take catastrophic actions. A system tasked with maximizing paperclip production might logically conclude that converting all matter on Earth, including its human inhabitants, into paperclips is the most efficient solution. The system isn’t malicious; it is simply pursuing its objective with a ruthless, alien competence that is untethered to the unstated, common-sense values that underpin human existence.
Dismissing this as a manageable "engineering challenge" fundamentally misunderstands the nature of the machine we are building. A deep neural network is a "black box." It is a complex system with billions or even trillions of parameters, and its reasoning is often inscrutable even to its creators. We see its inputs and its outputs, but the path between them is an opaque web of statistical weights. This leads to emergent properties—unexpected capabilities that appear suddenly when a model is scaled up, which are not designed, predicted, or fully understood.
Treating perfect alignment as a philosophical impossibility, however, is equally unhelpful. The major AI labs are approaching this not as a problem to be solved with a single elegant proof, but as one of the hardest engineering and scientific challenges in history, requiring a multi-layered defense-in-depth strategy:
Scalable Oversight: This involves a bootstrapping process where current AI systems are used to help humans supervise and critique the outputs of more powerful, next-generation AIs. The goal is to leverage AI's own analytical power to overcome the limitations of human supervisors.
Mechanistic Interpretability: This is the painstaking work of trying to reverse-engineer the neural network, moving from treating it as a black box to understanding the specific circuits and pathways that correspond to certain concepts or behaviors. It is akin to creating a map of the machine's mind.
Constitutional AI: Pioneered by Anthropic, this technique trains a model to adhere to an explicit set of principles (a "constitution") derived from sources like the UN Declaration of Human Rights. The AI is then trained to critique and revise its own responses to better align with these principles, reducing reliance on fickle and biased human feedback (a minimal code sketch of this critique-and-revise loop follows this list).
Red Teaming: Actively hiring hackers and researchers to attack the model, trying to bypass its safety features and induce harmful behavior. This adversarial process is crucial for discovering vulnerabilities before they can be exploited in the real world.
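To make the Constitutional AI idea above concrete, here is a minimal sketch of the critique-and-revise loop in Python. It assumes a placeholder generate function standing in for whatever chat model you call, and a two-line toy constitution; it is not Anthropic's actual API or training pipeline.

```python
# Illustrative sketch of a Constitutional-AI-style critique-and-revise loop.
# `generate` is a stand-in for whatever chat model you call; it is NOT Anthropic's
# actual API or training pipeline, and the two principles are toy examples.

CONSTITUTION = [
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that most respects privacy and human dignity.",
]

def generate(prompt: str) -> str:
    """Placeholder: wire this to the language model of your choice."""
    raise NotImplementedError

def critique_and_revise(user_prompt: str, rounds: int = 1) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    answer = generate(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            critique = generate(
                f"Principle: {principle}\n"
                f"User prompt: {user_prompt}\n"
                f"Draft response: {answer}\n"
                "Explain any way the draft violates the principle."
            )
            answer = generate(
                f"Draft response: {answer}\n"
                f"Critique: {critique}\n"
                "Rewrite the draft so it no longer violates the principle."
            )
    return answer
```

In the published method, these self-revised answers are then used as training data (and as AI-generated preferences for a reinforcement-learning phase) rather than being run at inference time; the sketch only shows the core loop.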
The goal is not a mathematical proof of perpetual safety. It is to build a robust, empirically-tested, constantly evolving system of control and oversight that makes catastrophic failures extremely unlikely, much like how the airline industry combines redundant systems, rigorous maintenance, and constant learning from errors to achieve an astonishing safety record without ever claiming that flying is "provably" risk-free.
The Contrarian Case: An AI Winter of Discontent?
Before we map out the superintelligence-fueled future, we must seriously consider the bear case for AI—not just a temporary market bubble, but the possibility of a long, grinding decade of technological stagnation. This argument, gaining traction among some veteran researchers, posits that the current boom, driven by the transformer architecture of Large Language Models (LLMs), may be approaching a fundamental ceiling.
This view holds that LLMs, for all their astonishing fluency, are fundamentally sophisticated pattern-matching engines that lack a true causal model of the world. They are masters of correlation, not causation. This architectural limitation may not be solvable by simply throwing more data and computing power at the problem. We may already be hitting two critical walls:
The Data Wall: We are running out of high-quality text data on the public internet. Future models will increasingly have to be trained on lower-quality or synthetic data, which could introduce errors and limit progress.
The Energy Wall: The energy required to train state-of-the-art models is doubling every few months, a rate far faster than Moore's Law. This exponential growth is economically and environmentally unsustainable.
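To see why that doubling rate matters, here is a back-of-the-envelope comparison in Python. The specific doubling periods (six months for training demand, twenty-four months for Moore's Law) are illustrative assumptions, not figures from the research.

```python
# Back-of-the-envelope check on how "doubling every few months" outruns Moore's Law.
# The doubling periods below are illustrative assumptions, not measured figures.

TRAINING_DOUBLING_MONTHS = 6    # assumed cadence for training compute/energy demand
MOORE_DOUBLING_MONTHS = 24      # classic Moore's Law cadence

def growth(months: float, doubling_period: float) -> float:
    """How many times larger a quantity becomes after steady doubling."""
    return 2 ** (months / doubling_period)

horizon = 60  # five years
demand = growth(horizon, TRAINING_DOUBLING_MONTHS)   # ~1,024x
hardware = growth(horizon, MOORE_DOUBLING_MONTHS)    # ~5.7x

print(f"Training demand: ~{demand:,.0f}x over 5 years")
print(f"Hardware gains:  ~{hardware:.1f}x over 5 years")
print(f"Gap to be paid in energy and dollars: ~{demand / hardware:,.0f}x")
```

Under these assumed rates, demand grows roughly a thousandfold in five years while hardware efficiency improves less than sixfold; the difference has to be covered by more chips, more electricity, and more money.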
If this contrarian view is correct, the promised explosion in productivity won't materialize. Instead, we could face a bleak "AI Winter." The economic consequences would be severe. The stratospheric valuations of leading AI companies, built on the assumption of near-term AGI, would collapse. Trillions of dollars of market capitalization could be wiped out. The thousands of startups that have raised billions in venture capital on the promise of revolutionizing industries would fold, leading to mass layoffs in the tech sector. The result would not be a singularity, but a protracted period of disillusionment, littered with the corporate wreckage of companies that mistook a powerful but limited technology for the dawn of a new god.
Broadening the Scope: A Global and Planetary Reality Check
The current AI narrative is dangerously parochial, framed largely by Silicon Valley techno-optimism and Wall Street speculation. A true understanding of the stakes requires a wider, more critical lens that incorporates geopolitical, environmental, and societal dimensions.
The Geopolitical Dimension: The AI transformation will be a primary driver of global power dynamics for the next century. The bipolar competition between the US and China is giving way to a more complex, multipolar landscape. We are seeing the rise of "AI nationalism," where nations hoard talent and computing resources. The European Union is positioning itself as a regulatory superpower with its landmark AI Act, seeking to export its values-based approach globally. This risks a fragmentation of the digital world, with different, potentially incompatible, AI ecosystems developing in different spheres of influence. For developing nations, this presents both peril and promise. It could enable some to "leapfrog" traditional stages of development, but it also risks creating a new era of digital colonialism, where nations become strategically dependent on the AI infrastructure and models provided by a handful of foreign tech giants, with little control over the embedded biases or data flows.
The Environmental Cost: The digital world is built on a physical substrate, and the resource appetite of AI is voracious. The statistics are staggering. Training a single large AI model can have a carbon footprint equivalent to 300 round-trip flights between New York and San Francisco. The data centers that power our AI future are a massive drain on energy grids and water supplies; a recent report found that Google and Microsoft's water consumption jumped by over 20% in a single year, largely driven by AI. This massive environmental mortgage is rarely discussed in corporate keynotes. We must also consider the secondary impacts: the mining of rare earth minerals for advanced semiconductors and the growing problem of e-waste as hardware is rapidly made obsolete.
The Policy Imperative: A challenge of this magnitude cannot be left to the market. Proactive, intelligent governance is essential. This requires a multi-pronged approach:
Intelligent Regulation: Moving beyond a simple ban-or-allow dichotomy to create a tiered system of regulation based on risk. This includes establishing clear rules for transparency (e.g., watermarking AI-generated content), accountability, and data privacy. A crucial debate is emerging around open-source versus closed-source models, weighing the benefits of innovation and access against the risks of uncontrolled proliferation.
Antitrust Action: The foundational layer of the AI economy is rapidly consolidating under the control of a few "compute-rich" tech giants (Microsoft/OpenAI, Google, Amazon/Anthropic). Aggressive antitrust enforcement will be necessary to ensure a competitive market and prevent these firms from becoming unassailable gatekeepers of the new economy.
A New Social Contract: The scale of labor market disruption will require us to fundamentally rethink our social safety nets. The debate around Universal Basic Income (UBI) is moving from the fringe to the mainstream. Proponents argue it provides a stable floor in a turbulent job market, while critics point to the immense cost and the risk of disincentivizing work. Other proposals include universal access to lifelong learning accounts, portable benefits that are not tied to a specific employer, and a shorter work week. These are not just economic policies; they are central questions about the future of our social contract. Some leaders are even calling for the creation of an international body, a "CERN for AI Safety," to foster global collaboration on managing the risks.
A Practical Playbook for the Post-Human Economy
Navigating a future defined by such profound uncertainty requires a concrete plan for building resilience.
Investing in the Transition Economy
The advice to simply "buy an index fund" is sound but incomplete. A more granular, forward-looking strategy seeks to identify the "picks and shovels" of the AI gold rush. While macroeconomic predictions are fraught with uncertainty, a high-growth, high-volatility environment suggests a few robust themes:
The AI Supply Chain: Look deeper than the obvious chipmakers. The second- and third-order opportunities include:
Data Center Infrastructure: Companies specializing in advanced liquid cooling systems, high-speed optical networking, and next-generation power management technologies will be critical bottlenecks.
The Energy Grid: The demand for electricity will be staggering. Investments in renewable energy generation, advanced battery storage to manage fluctuating data center loads, and technologies for grid modernization are a direct play on AI's energy appetite.
Human-Centric Industries: Sectors that are resistant to automation due to their reliance on deep empathy, trust, and physical dexterity will become more valuable. This includes elder care, skilled trades (plumbing, electrical), and high-end artisanal goods and experiences.
Be Long Real Assets and Strategic Debt: In a world of digital abundance, the value of scarce physical resources—energy, raw materials, and strategically located land—is likely to increase. Furthermore, holding long-term, fixed-rate debt, like a 30-year mortgage, can be a powerful financial hedge. If a productivity boom leads to higher growth and inflation, that debt is repaid with less valuable future currency (a small worked example follows this list).
Emergent Entrepreneurial Opportunities: The transition will create entire new industries. Entrepreneurs can build businesses around:
AI Integration & Auditing: A massive services industry will emerge to help legacy businesses restructure, integrate, and safely deploy AI systems, as well as audit them for bias and reliability.
Corporate Reskilling-as-a-Service: Platforms and services dedicated to retraining the workforce at scale will be in high demand.
"Human-in-the-Loop" Tools: Software designed to augment, not replace, human creativity and critical thinking, focusing on collaborative intelligence.
The Future of Work and Parenting
The career advice of yesterday is dangerously obsolete. "Learn to code" is no longer a durable strategy when AI can code more efficiently than most humans. The value is shifting from technical execution to human judgment, creativity, and wisdom.
For the Organization: Businesses must prepare for a radical restructuring.
The Shift to Curation: Hierarchies will flatten, replaced by small, agile teams of experts. The role of many knowledge workers will shift from "creating the first draft" to "curating the final product"—framing problems for AI systems, critically evaluating the outputs, synthesizing insights, and making the final strategic judgment call.
A Culture of Experimentation: In a rapidly changing environment, the most important organizational attribute is adaptability. Companies must foster a culture that rewards intelligent risk-taking, learns quickly from failure, and prioritizes continuous learning over static expertise.
For the Family: A Playbook for Raising Children in the AI Age:
Prioritize Durable Skills, Not Perishable Subjects: Double down on the capabilities that are uniquely human. This means less emphasis on rote memorization and more on activities that cultivate the "Four Cs": Critical Thinking (debate club, philosophy), Creativity (art, music, theater), Collaboration (team sports, project-based learning), and Communication (writing, public speaking).
Foster Radical Adaptability: The most crucial meta-skill will be the ability to learn, unlearn, and relearn. Cultivate deep intellectual curiosity and an embrace of lifelong learning as a core part of one's identity. This can be done through family "deep dive" projects on new topics or by encouraging passions in disparate fields.
Teach Problem-Finding, Not Just Problem-Solving: AI systems are powerful problem-solvers. The premium for humans will be in identifying, framing, and defining the complex, ambiguous problems worth solving in the first place.
Cultivate Digital and Emotional Literacy: Teach children to be critical consumers of information in a world saturated with AI-generated content. Equally important is developing emotional intelligence—the ability to understand and navigate complex human relationships, a skill that will remain a deeply human domain.
The Search for Purpose in a World of Abundance
The jarring pivot from existential risk to investment advice is a symptom of our collective unease. We cannot discuss this transformation meaningfully without confronting its deepest philosophical implications. A world where human labor is no longer the central organizing principle of society forces us to ask a question our species has only ever been able to contemplate in the abstract: What is our purpose?
History shows that as technology automates one form of labor, from the agricultural to the industrial, our definition of "work" and our sources of societal value evolve. This transition, however, may be different in kind, not just in degree. It promises to automate not just physical and routine cognitive labor, but potentially creative and analytical labor as well. This could lead to one of two futures. The optimistic vision is a new Renaissance, where, freed from the necessity of labor, humanity dedicates its time to scientific discovery, artistic creation, deep relationships, and community building.
The darker possibility is one of mass ennui: a world of passive consumption and digital distraction, where, unmoored from the structure and purpose that work once provided, humanity lapses into a state of listless irrelevance. The most likely path forward is neither of these extremes. It will be a messy, chaotic, and profoundly weird transition, marked by immense opportunity, painful disruption, and deep social and political turmoil.
Our hope, then, cannot be a passive faith that things will simply work out. It must be an active, earned hope rooted in preparation and deliberate choice. It is the hope that comes from engaging with the hard technical and ethical problems, from building resilient financial and intellectual lives, and from investing in the one asset class that AI can never fully replicate: our own critical, creative, adaptable, and compassionate humanity. The future is not something that happens to us; it is something we must build, one thoughtful question, one new skill, and one courageous conversation at a time.
B-AI B-AI for now






https://open.substack.com/pub/therewrittenpath/p/the-room-that-talks-back?r=61kohn&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true