I do not usually make bold macro calls. I often have an opinion, but my opinions are usually not strong enough for me to make a call. This time is different. On April 1st, 2026, an article titled "AI Just Hacked One of the World's Most Secure Operating Systems" described a researcher using an Anthropic model to hack FreeBSD in hours. That was an awakening for me: as the book The Infinite Machine says, AI is a great event in human history, and it may be the last one.
There are a lot of ways humanity can end. Meteorites crashing into Earth, nuclear wars, global warming shutting down ocean circulation, aliens, global pandemics: all good themes for movies. However, I believe Artificial Intelligence will be the one that truly ends us, well before any other extinction-level event. It will not come through a rogue superintelligence, but through the closer, disruptive force of the large language models and world models that are already here. It's not that I don't see superintelligence as a threat; it's that I see us being ended by something earlier, something that may not even require reaching Artificial General Intelligence (AGI). The danger comes in two kinds of threats.
I. The Lesser Threat: Societal Destabilization via Job Replacement
The first threat is the one people talk about the most these days: AI will replace jobs at a pace rapid enough to destabilize society. The first wave is the displacement of white-collar jobs; anything that can be solved by software will be commoditized.
In the past, technological revolutions replaced jobs at a pace humans could catch up with by spending more time on learning and training. People did not need to go to school to earn a living in the distant past. Then, at some point, most people needed some education in reading and writing to earn a decent living; then high school became the minimum; and nowadays, for white-collar jobs, a bachelor's degree is almost the baseline. I would have expected a PhD to become the norm some day in my lifetime, given the vast amount of knowledge humanity has accumulated for new generations to learn. But the AI revolution changed things so drastically that it does not merely raise the baseline; it keeps moving the baseline up faster than humans have any hope of keeping up. We are already seeing signs of this in 2026, and I believe it will become a big problem within 5 years.
The second wave of job replacement will come when robotics is good enough to replace blue-collar jobs: manufacturing, building, repairing, servicing. This wave will arrive somewhat later than the first, so my guess is in 5-10 years.
It's not that there will be no jobs for humans; the problem is that there will not be enough decent-paying jobs for the vast majority of people. That is deeply destabilizing, and yet this threat is the lesser of my worries, because there is hope for a fix.
The fix will be Elon Musk's vision of sustainable abundance: a post-scarcity future where advanced technology, AI, robotics, and renewable energy create unlimited, accessible goods and services without environmental destruction. It aims to move beyond the traditional trade-off between growth and sustainability, toward a "win-win" scenario that improves quality of life for everyone. Humans are lazy. If the government can give everyone free decent shelter, decent food, electricity, water, and basic utilities of all sorts, so that everyone can watch Netflix or play with their phones all day long without working, I think a lot of people will be fine with that. That is doable, and that is why I worry less about the job issue.
II. The Greater Threat: Existential Risk via Malicious AI Actors
I am more worried about the second kind of AI disruption. It is the one I mentioned at the beginning: AI allows "bad actors" to automate the discovery of zero-day exploits and to scale supply chain attacks from a handful of incidents into continuous, personalized threats against the global infrastructure that safeguards our identity information, our banking information, and all our other personal information today. With AI, the attacks can come at a scale that is very hard for the white hats (the good guys) to keep up with. Supply chain attacks (malware injected into common, important open-source libraries used by much of the world's software, e.g. Axios, LiteLLM) are just a starter. Anthropic recognized the danger and started Project Glasswing to bring experts in the field together to prevent something catastrophic from happening. I do not have much confidence in this. There are just way too many people out there, and with AI it only takes a handful of bad geniuses to cause a lot of damage to society. As Demis Hassabis said, curiosity is inherent to human nature. People will keep inventing the next powerful and dangerous thing until one of them blows us all up.
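To make the supply chain vector a bit more concrete, here is a minimal sketch in Python of the basic defense it has to defeat: pin the exact cryptographic hash of every third-party artifact you depend on, and refuse anything that does not match. The manifest file name `pinned-hashes.json` and the `vendor` directory are hypothetical, for illustration only; real ecosystems ship this idea natively (npm lockfiles, pip's `--require-hashes`). The point is that a tampered upstream release only gets caught if someone pinned the hash before the compromise, and at AI scale the attackers can probe far more projects than the defenders can audit.

```python
# Minimal sketch of hash-pinned dependency verification.
# Assumes a hypothetical manifest "pinned-hashes.json" mapping
# artifact filename -> expected SHA-256 hex digest, e.g.
#   {"axios-1.7.0.tgz": "abc123..."}
import hashlib
import json
import sys
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large artifacts are fine."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(artifact_dir: Path, manifest_path: Path) -> bool:
    """Return True only if every pinned artifact exists and matches its digest."""
    pinned = json.loads(manifest_path.read_text())
    ok = True
    for name, expected in pinned.items():
        artifact = artifact_dir / name
        if not artifact.exists():
            print(f"MISSING  {name}")
            ok = False
        elif sha256_of(artifact) != expected:
            # A tampered upstream release (the supply chain attack)
            # shows up here as a digest mismatch.
            print(f"TAMPERED {name}")
            ok = False
    return ok


if __name__ == "__main__":
    if not verify(Path("vendor"), Path("pinned-hashes.json")):
        sys.exit(1)  # fail the build rather than ship a compromised dependency
```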
The bad actors can come from governments that do not treat others kindly. There are quite a few authoritarian governments in the world with twisted minds, eager to destroy others when they get the chance. Those governments are usually able to get the smartest people in their countries to work for them. On the other hand, I do not think the smartest people in AI in the Western world work for the government. They are in private companies like Google, OpenAI, Anthropic, X, etc. Some of those people, for ideological reasons, do not like to work with the government. And that is dangerous. Western governments tend to be more benign toward other countries; they believe in lifting all boats for all humans in the world. The sad thing is that without the smartest people working for them, Western governments may not be able to fend off the rogue states. In the past, becoming the greatest military power in the world required a great societal system that maximized everybody's potential, which in turn cultivated the logistics to manufacture great weapons. In the modern world of mighty AI, it only takes a few geniuses to conquer the world, and it is just a matter of time before those geniuses come from the rogue states. There is a great chance that the rogue states will be able to penetrate the intelligence of the countries with the greatest weapons, e.g. the United States. Either we (assuming we are not living in those rogue states) will get annihilated by the rogue states, or the world will end when preemptive nuclear strikes set off World War III, a variant of what we saw in Mission: Impossible – The Final Reckoning.
Either way, I see a non-trivial probability that current human civilization will end in my lifetime, say within 10-30 years. This is why I am borrowing the game title "The Last of Us" to describe this sad prediction.
So what?
Does it change any of my investment decisions? Yes. With the expectation of a shorter horizon, there is less need for a long investing time horizon. Originally, I did not plan to start withdrawing money from the Tai Family Fund for at least 10-15 more years, with a fair chance of not needing to withdraw at all for 20+ years. Now I expect, at the very least, to stop adding money in 7-10 years.
With a shorter time horizon, immediate income becomes more attractive than growth from capital appreciation.
Beyond that, I do not have any good ideas for now. Gold? That does not do anything when the world ends. It is not about protecting my investment. It is more about enjoying the present than saving up sex for old age. It is about using money to spend more time with loved ones instead of worrying too much about retirement. I have become more pessimistic about mankind, but I have also become more chill. This is the end, and I may as well make the most of it.