Monday, April 20, 2026

AI May Be the Last of Us

I do not usually make bold macro calls. I often have an opinion, but my opinions are usually not strong enough to act on. This time is different. On April 1st, 2026, an article titled AI Just Hacked One Of The World's Most Secure Operating Systems described a researcher using an Anthropic model to hack FreeBSD in hours. That was an awakening for me: as the book The Infinite Machine says, AI is a great event in human history, and it may be the last one.

There are many ways humanity could end. Meteorites crashing into Earth, nuclear wars, global warming shutting down ocean circulation, aliens, global pandemics, etc. are all popular movie themes. However, I believe artificial intelligence will be the one that truly ends us, well before any other extinction-level event. It will not come through a rogue superintelligence, but through the closer, disruptive force of large language models and world models that are already here. It's not that I don't see superintelligence as a threat; it's that I see us being ended by something earlier, something that may not even need to reach Artificial General Intelligence (AGI). It will come in two kinds of threats.

I. The Lesser Threat: Societal Destabilization via Job Replacement

The first flavor is the one people have talked about the most recently: AI will replace jobs at a pace rapid enough to destabilize society. The first wave is the displacement of white collar jobs. Anything that can be solved by software will be commoditized.

In the past, technology revolutions replaced jobs at a pace humans could match by spending more time in learning and training. People did not need to go to school to earn a living in the distant past. Then, at some point, most people needed some education in reading and writing to earn a decent living; then a high school education became the minimum; and nowadays, for white collar jobs, a bachelor's degree is almost the baseline. I would have expected a PhD to become the norm some day in my lifetime, given the vast amount of knowledge humanity has accumulated for new generations to learn. But the AI revolution changed things so drastically that it did not merely raise the baseline; it keeps moving the baseline up faster than humans have any hope of keeping up. We are already seeing signs of this in 2026. I believe this will become a big problem within 5 years.

The second wave of job replacement will come when robotics is good enough to replace blue collar jobs like manufacturing, building, repairing, and servicing. This wave will arrive somewhat later, so my guess is in 5-10 years.

It's not that there will be no jobs for humans; the problem is that there will not be enough decent-paying jobs for the vast majority of people. That is very destabilizing, and yet this flavor is the lesser of my worries, because there is hope for a fix.

The fix is Elon Musk's vision of sustainable abundance: a post-scarcity future where advanced technology, AI, robotics, and renewable energy create unlimited, accessible goods and services without environmental destruction. It aims to move beyond the traditional trade-off between growth and sustainability toward a "win-win" scenario that improves quality of life for everyone. Humans are lazy. If the government can give everyone free decent shelter, decent food, electricity, water, and basic utilities of all sorts, so that everyone can watch Netflix or play with their phones all day long without working, I think a lot of people will be fine with that. That is doable, and that's why I worry less about the job issue.

II. The Greater Threat: Existential Risk via Malicious AI Actors

I am more worried about the second flavor of AI disruption. It is the one I mentioned in the beginning: AI allows bad actors to automate the discovery of zero-day exploits and to scale supply chain attacks from a handful of incidents to continuous, personalized threats against the global infrastructure that safeguards our identities, our banking, and all of our other personal information. With AI, attacks can come at a scale that is very hard for the white hats (the good guys) to keep up with. Supply chain attacks (malware planted in important, widely used open-source libraries, e.g. Axios, LiteLLM) are just a starter. Anthropic recognized the danger and started Project Glasswing to bring experts in the field together to prevent something catastrophic from happening. I do not have much confidence in this. There are just way too many people out there, and with AI it only takes a handful of bad geniuses to cause enormous damage to society. As Demis Hassabis said, curiosity is an inherent part of human nature. People will keep inventing the next powerful and dangerous thing until it blows us all up.

The bad actors can come from governments that do not treat others kindly. There are quite a few authoritarian governments in the world with a twisted eagerness to destroy others when given the chance, and those governments are usually able to get the smartest people in their countries to work for them. In the Western world, by contrast, I do not think the smartest people in AI work for the government. They are in private companies like Google, OpenAI, Anthropic, X, etc., and some of them, for ideological reasons, refuse to work with the government. That is dangerous. Western governments tend to be more benign towards other countries; they believe in lifting all boats. The sad thing is that, without the smartest people working for them, Western governments may not be able to fend off the rogue states. In the past, it took a great societal system that maximized everybody's potential to cultivate the logistics and manufacture the weapons needed to be the greatest military power in the world. In the modern world of mighty AI, it only takes a few geniuses to conquer the world, and it's just a matter of time before those geniuses come from the rogue states. There is a great chance the rogue states will be able to penetrate the intelligence of the countries with the greatest weapons, e.g. the United States. Either we (assuming we are not living in those rogue states) get annihilated by the rogue states, or the world ends when preemptive nuclear strikes kick off World War 3, a variant of what we saw in Mission: Impossible – The Final Reckoning.

Either way, I see a non-trivial probability that the current human civilization will end within my lifetime, say in 10-30 years. This is why I am borrowing the game title, "The Last of Us", to describe this sad prediction.

So what?

Does it change any of my investment decisions? Yes. With the expectation of a shorter life horizon, there is less need for a long investing time horizon. Originally, I did not plan to start withdrawing money from the Tai Family Fund for at least 10-15 more years, with a fair chance of not needing to withdraw any money at all for 20+ years. Now I expect, at the very least, to stop adding more money in 7-10 years.

With a shorter time horizon, immediate income becomes more attractive than growth from capital appreciation.

Other than that, I do not have any other good ideas for now. Gold? That doesn't do anything when the world ends. It's not about protecting my investments. It's more about enjoying the present than saving up sex for old age: using money to spend more time with loved ones instead of worrying too much about retirement. I have become more pessimistic about mankind, but I have also become more chill. This is the end, and I may as well make the most of it.


Saturday, April 18, 2026

2026-04-17 Portfolio Update – Covered Margin Balance and Disabled Margin

Put in $1,000 to cover my margin balance. This will allow me to earn some income from stock lending. I had been taking advantage of the interest-free first $1,000 of margin for a while, but my account value is now large enough that this small leverage no longer matters. Hence, I also disabled margin on my account.

My portfolio has recovered substantially since my last update. Alternative asset managers bounced back nicely from the lows, with OWL up over 20% from its trough of $7.95 on April 2nd. That said, there is still a long way to go to my cost basis of around $13.5 per share, let alone my fair value estimate of at least $20 per share. There is not much news, except that Blackstone successfully closed a $10 billion private credit fund (source), Ares targeted a small $20 billion private credit fund launching in the summer (source), Blackstone BCRED sold a $450 million private credit CLO (source), upsized from $400 million due to excess demand, and Pimco bought the entire $400 million Blue Owl Capital bond at a 6.75% interest rate due in 2028 (source). The calm, normal default rate in private credit ensures the sentiment is not getting worse; more details in GFC 2.0 - Or A Golden Buying Opportunity? What The Data Says Is Really Happening In Private Credit.

My portfolio IRR flipped above that of a hypothetical SPY-only portfolio after trailing it for a while. While my alternative asset managers bet (~45% of the portfolio) has yet to pay off fully, the tech companies in the portfolio (~30% of the portfolio, mostly AMZN, META, TSM, NVDA) performed very well and helped me beat the market.

On the US-Iran war: while disagreements remain over how much enriched uranium Iran can keep, how much enrichment capacity it can retain, and control of the Strait of Hormuz, the worst of the war is over (latest update: Iran says Strait of Hormuz is closed again as vessel attempting to cross comes under gunfire). The oil price has come back down over 20% from the peak. There is light at the end of the tunnel. I hope the war will end soon.

Pershing Square just kicked off the roadshow for its PSUS IPO a few days ago (news). For every 5 shares of PSUS purchased through the IPO, the purchaser gets one share of PS (Pershing Square management). While I am a Bill Ackman fan, both the fund (Pershing Square USA) and the management company carry huge key man risk. PSUS is expected to trade at a substantial discount to NAV given its 2% annual management fee, which is what is happening with a similar fund trading on the London Stock Exchange. I may initiate a position in PSUS when it starts trading on the NYSE, or just keep adding to HHH for my conviction in Ackman. It depends on the discount to NAV.
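Whether the position is worth initiating comes down to simple arithmetic on that discount. A minimal sketch, with hypothetical numbers (not actual PSUS figures):

```python
def discount_to_nav(price, nav_per_share):
    """Fraction below NAV the shares trade at (negative means a premium)."""
    return 1 - price / nav_per_share

# Hypothetical figures for illustration only:
print(f"{discount_to_nav(42.50, 50.00):.1%}")  # prints 15.0%
```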

Transactions



Recent and upcoming dividend distributions



Portfolio performance snapshot

Total return:



One-year return:


Portfolio IRR (calculation): 18.42%

Approximated IRR for an SPY-only portfolio: 18.16%
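As a sketch of the math behind the IRR figures above, here is a minimal dated-cash-flow IRR (XIRR-style) found by bisection on net present value. The cash flows are hypothetical, not my actual fund contributions:

```python
from datetime import date

def xirr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Annualized IRR for dated cash flows, found by bisection on NPV.
    cash_flows: list of (date, amount) pairs; negative amounts are
    contributions, positive ones are withdrawals or the ending value."""
    t0 = cash_flows[0][0]

    def npv(rate):
        return sum(amount / (1 + rate) ** ((d - t0).days / 365.0)
                   for d, amount in cash_flows)

    # For the usual pattern (contributions first, ending value last),
    # NPV is positive below the IRR and negative above it.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical cash flows: invest $10,000, add $5,000 a year later,
# portfolio worth $17,500 a year after that.
flows = [(date(2024, 4, 17), -10_000),
         (date(2025, 4, 17), -5_000),
         (date(2026, 4, 17), 17_500)]
print(round(xirr(flows), 4))  # prints 0.0963, i.e. ~9.6% annualized
```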


Individual holdings:




Breakdown by categories (real-time):


Total returns for individual holdings:


Last prices:


Portfolio holdings conviction

The convictions in the table below reflect my current opinions and will guide future contributions of additional investment to existing holdings. Stocks not in the table have subpar return on equity and are very unlikely to receive contributions from new money (there can be exceptions for very cheap stocks).



Stock | Conviction in long-term prospect | Valuation | Price
XYZ | weak | neutral | $71.26
PYPL | weak | undervalued | $50.81
META | moderate | neutral | $688.55
BRK.B | strong | neutral | $474.58
AMZN | strong | slightly undervalued | $250.56
PLTR | moderate | greatly overvalued | $146.39
OWL | strong | greatly undervalued | $9.85
APO | strong | undervalued | $124.62
ARES | strong | slightly undervalued | $117.78
BN | strong | slightly undervalued | $46.59
BAM | strong | slightly overvalued | $49.32
BX | strong | neutral | $129.08
MAIN | strong | neutral | $54.82
BABA | moderate | neutral | $141.01
NNN | moderate | neutral | $45.14
TSLA | moderate | neutral | $400.62
BIDU | moderate | neutral | $126.13
NVDA | moderate | slightly undervalued | $201.68
TSM | moderate | neutral | $370.50
HASI | moderate | neutral | $40.64
HHH | moderate | slightly undervalued | $65.94
UNH | moderate | neutral | $324.63
HOOD | moderate | overvalued | $90.75
MCD | moderate | neutral | $311.36


Conviction in long-term prospect indicates how strongly I believe a company will match or outperform the market (e.g. the S&P 500) in the long run. Valuation matters, so the conviction generally assumes a neutral Valuation rating. It has the following ratings: weak, moderate, strong


Valuation: greatly overvalued, overvalued, slightly overvalued, neutral, slightly undervalued, undervalued, greatly undervalued
