Saturday, February 14, 2026


Just in the last few months, the development of Artificial Intelligence has reached a new stage. We appear to be at an inflection point for humanity and for the immediate future of post-industrial civilization. Perhaps this sounds hyperbolic, but I do not think it is. People throughout the AI industry are hailing the release of GPT-5.3 Codex and Claude Opus 4.6 (as well as Claude Code and Cowork) as harbingers of a massive shift: AI is on the verge of becoming recursively self-improving. This means it may quickly accelerate beyond human capacity not in a single domain like chess or mathematics, but in all domains at once.
As I explore creating AI videos, I am finding that this resonates. The tools are getting better week by week, almost day by day. I have a vertiginous sense of limitless possibility, which gives me a burning desire to move as quickly as possible. I understand why employees who work with AI are suffering from burnout. You have the sudden sensation that anything you ever wanted to do in media, software, product design, and so on is now possible. The rush is dangerously addictive.
I am grateful that, in my first books, I carefully explored Terence McKenna's ideas about the Singularity, as well as the concept of the Noosphere that Jose Arguelles developed, following thinkers like Teilhard de Chardin and Vladimir Vernadsky. This gives me some theoretical and even emotional preparation for what may be coming, although I think it remains very unpredictable and difficult for any of us to fully fathom. Elon Musk recently posted on X that we have entered the Singularity. If we are not there quite yet, we are very close.
Mustafa Suleyman, co-founder of DeepMind and CEO of Microsoft AI, recently told the Financial Times that, within 12 to 18 months, AI will achieve human-level performance (or better) on most white-collar tasks. This includes the potential automation of professions such as law, accounting, project management, and marketing. As Sam Harris discussed in a recent podcast, if this prognosis is accurate, our society is standing on the precipice of total disruption, with almost no preparation for what is coming. Personally, I find it ironic (and also, as I will explore, amorphously hopeful) that the first jobs being programmed out of existence are white-collar jobs. The robots are not coming for the janitors, plumbers, or nurses; at least initially, they will be fine. Instead, AI poses an immediate existential threat to high-status, high-income professions.
We are already seeing a collapse in entry-level positions in white-collar professions. A Stanford study found a 13% decline in employment for early-career workers in AI-exposed occupations, and the real figure may be considerably higher, depending on the sector. Young people, who may have just spent hundreds of thousands of dollars on an advanced degree and gone into serious debt, are finding themselves stuck. The whole point of that education was to reach the first rung of the corporate ladder. But if the cognitive tasks they were trained to master are now automated, those degrees are essentially worthless. And beyond that, the entire ladder may be evaporating.
Software engineering is the current “proof of concept” for this transition. The role of the software engineer has fundamentally shifted in just the last six months. Rather than writing code line by line, engineers now function as “architects” or “debuggers,” reviewing and strategizing over code produced by AI. This meta-role allows a single individual to do the work that previously required a team, effectively signaling the end of the traditional coding hierarchy.
Just a few days ago, Matt Shumer, a 26-year-old AI CEO (co-founder of OthersideAI and HyperWrite), published a short essay, “Something Big Is Happening,” that went viral, racking up over 60 million views on X. Shumer believes we are no longer in the phase of “incremental improvement.” Instead, we have entered the era of an intelligence explosion. He writes that AI has crossed a threshold and that everything has shifted in recent months:
For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.
Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
With the latest Codex model, Shumer notes, the AI was instrumental in its own creation. It debugged its own training, managed its own deployment, and diagnosed its own failures. When intelligence is applied to the development of intelligence, the feedback loop accelerates beyond human control. What this implies for the broader economy is a “general substitute for cognitive work.”
Shumer believes this is unlike previous waves of automation. When factories automated, workers moved to offices; when the internet disrupted retail, workers moved to services. AI, however, does not leave a “convenient gap” to move into, because it is improving at every cognitive task simultaneously. From legal document review to medical diagnosis to complex software architecture, the “ladder” of white-collar advancement is dissolving. As Karl Marx once wrote of capitalism, “all that is solid melts into air.”
Harris is usually a stable, middle-of-the-road thinker, not someone I consider an alarmist. But even he believes we have reached a threshold of imminent civilizational disruption with almost zero preparation for what is coming. In fact, we could not have worse leadership for this meta-crisis at this moment, with Trump and his avaricious lackeys and slavish apparatchiks. But I also doubt the Democrats would have any idea how to handle this challenge.
Many have forecast a scenario where the economy bifurcates violently: a tiny cohort of founders and tech leaders becomes stratospherically wealthy, alongside a small layer of early adopters who manage to secure the last remaining positions, while everybody below this elite tier is left without much to do or hope for. The danger is not just that new jobs won’t be created, but that the mechanisms for wealth distribution will fundamentally break down. Harris notes that a world where a single founder can run a billion-dollar company with barely any employees outside of AI agents is “totally untenable” without a radical redesign of the economic system. (But then you also have to ask who will buy the products of that billion-dollar company if there is no consumer base.)
If the timeline of 12 to 18 months is even remotely accurate, the time to anticipate this transition is now. The traditional link between labor and survival is likely to break; it may already be breaking down. If AI makes human labor “optional at best,” we must rewrite the social contract to ensure that the abundance generated by synthetic superintelligence benefits the people as a whole.
That doesn’t look likely at this point!
Harris concludes by noting that the scenario of economic upheaval—mass unemployment, the obsolescence of human cognition, and extreme wealth concentration—is what happens if everything goes right. This scenario represents the “success case” of AI development, where the technology works as intended. But this ignores the “failure modes” of AI, such as cyber-terrorism, engineered pandemics, or the hacking of critical infrastructure. The current emergency is not about AI going rogue; it is about AI doing its job so well that it breaks the economic foundations of modern society. Wild times!
I also appreciate David Shapiro’s ongoing investigation of the AI inflection point. In a recent video, Shapiro proposes that the coming of AGI will quickly dismantle the traditional hierarchies of knowledge elites. Once again, I see some very positive potential in this... (read the rest on Substack, see link in first comment)
