The Corner | Markets

Nvidiaaagh

Nvidia and DeepSeek logos (Dado Ruvic/Illustration/Reuters)

It’s been a tough day for Nvidia in the markets. The stock closed down some 17 percent (a loss of nearly $600 billion in market value), reacting to the news that DeepSeek, a Chinese AI company, appears to be demonstrating some “second mover” advantage by developing an impressive AI product very cheaply (or so it would seem).

Wall Street Journal:

A Chinese artificial-intelligence company has Silicon Valley marveling at how its programmers nearly matched American rivals despite using inferior chips.

AI models from DeepSeek . . . have zoomed to the global top 10 in performance, according to a popular ranking, suggesting Washington’s export curbs are having difficulty blocking rapid advances in China.

On Jan. 20, DeepSeek introduced R1, a specialized model designed for complex problem-solving.

“Deepseek R1 is one of the most amazing and impressive breakthroughs I’ve ever seen,” said Marc Andreessen, the Silicon Valley venture capitalist who has been advising President Trump, in an X post on Friday. . . .

Specialists said DeepSeek’s technology still trails that of OpenAI and Google. But it is a close rival despite using fewer and less-advanced chips, and in some cases skipping steps that U.S. developers considered essential.

DeepSeek said training one of its latest models cost $5.6 million, compared with the $100 million to $1 billion range cited last year by Dario Amodei, chief executive of the AI developer Anthropic, as the cost of building a model.

Economist John Cochrane argues that the fall in cost was predictable, although it came more quickly than he expected. He sees a shift in where the value will be created in AI: It will move, he argues, from the creators of AI to its users.

AI and machine learning are throw-spaghetti-at-the-wall computer programming. Previous generations of computer programming were really careful about efficient algorithms, using as little memory and compute power as possible. Memory and compute power became really cheap, so it became feasible to just throw computer power at a fitting and forecasting problem. That proved fantastically successful, in that these highly inefficient models times huge computational capacity are able to do amazing things. But now that we’re spending in the hundreds of millions per training, all that free computing isn’t so free any more. It was obvious that a huge amount of attention would pour into the next generation of faster computing, and into more efficient algorithms. After all, the human brain does it with about 20 watts. It seems like DeepSeek did it with more efficient algorithms, so much so that it could use less powerful chips. Faster computing will be harder, but it will come next. The rewards to faster computing had fallen for a decade or so. They are back on again.

The first iteration of anything is fantastically expensive. Then the cost cutting gets to work. From Apollo to Starship. In two years.

“Second mover advantage,” again. Perhaps it’s worth adding that any claims of technological triumph coming from China must be treated with a certain degree of caution. Time will tell.

Cochrane:

It will be tempting among commentators to add up this loss of stock market “wealth” and say it’s a terrible thing. But no. The winners will not be the producers of AI, which looks to become a marginal cost commodity with remarkable speed, but the users of AI. And the benefit will be on quantities, not on monopoly or fixed-cost rents. (I say this because the stocks of companies that can potentially use cheap AI did not jump up. Maybe they will, once more people figure out how to use cheap AI to make profits for a while. Maybe those companies haven’t been founded yet. That is now the Wild West.) The stock market measures the present value of profits, not the present value of social benefits. The profit, and ultimate benefit, of railroads was not so much in the railroad itself, but in the wheat fields of Kansas.

But even if DeepSeek’s numbers are reliable, that need not mean that investors will give up on advanced chipmakers.

The Financial Times quotes the Wall Street firm Bernstein Research, flagging (a road to my heart) the Jevons paradox:

If we acknowledge that DeepSeek may have reduced costs of achieving equivalent model performance by, say, 10x, we also note that current model cost trajectories are increasing by about that much every year anyway (the infamous “scaling laws…”) which can’t continue forever. In that context, we NEED innovations like this (MoE, distillation, mixed precision etc) if AI is to continue progressing. And for those looking for AI adoption, as semi analysts we are firm believers in the Jevons paradox (i.e. that efficiency gains generate a net increase in demand), and believe any new compute capacity unlocked is far more likely to get absorbed due to usage and demand increase vs impacting long term spending outlook at this point, as we do not believe compute needs are anywhere close to reaching their limit in AI.

Following this argument, the chip build-out continues.
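The Jevons-paradox arithmetic behind that argument can be sketched with a few hypothetical numbers. Suppose the cost of a unit of AI compute falls 10x (per the DeepSeek claim) and demand responds with a constant price elasticity of -1.3; both figures are illustrative assumptions, not anything Bernstein or DeepSeek has published:

```python
# A minimal sketch of the Jevons-paradox arithmetic with hypothetical numbers.
# Assumptions (not from the source): a 10x fall in the price of a unit of AI
# compute, and a constant-elasticity demand curve with elasticity -1.3.

cost_drop = 10.0    # efficiency gain: each unit of compute now costs 1/10 as much
elasticity = -1.3   # hypothetical price elasticity of demand for compute

old_price = 1.0
new_price = old_price / cost_drop

# Constant-elasticity demand: quantity is proportional to price ** elasticity.
old_quantity = 1.0
new_quantity = old_quantity * (new_price / old_price) ** elasticity

old_spend = old_price * old_quantity
new_spend = new_price * new_quantity

print(f"Demand multiplies by {new_quantity / old_quantity:.1f}x")   # ~20x
print(f"Total spend multiplies by {new_spend / old_spend:.2f}x")    # ~2x
```

With any elasticity greater than 1 in magnitude, cheaper compute means total spending on compute goes up, not down: demand here roughly twenty-folds while spending doubles. That is the mechanism behind Bernstein's claim that capacity unlocked by efficiency gains "is far more likely to get absorbed."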

We will see. And we will see how the added uncertainty implied by those three words will affect the valuation put on stocks in this sector (and beyond).

Working out (guessing) the winners and losers from the DeepSeek news will take a while. If today is any indication, it could well mean quite a bit of turbulence (please note: We don’t make investment recommendations at Capital Matters), and the turbulence is likely to be amplified by profit-taking (AI plays have done very well). And, then of course, the heavy weighting of tech stocks within the S&P could trigger a cascading panic, especially if too many investment firms have bet too much of the ranch on AI (the short interest in Nvidia was not much more than 1 percent). The S&P 500 was off 1.5 percent. NASDAQ, which has an even heavier tech exposure, was down 3 percent.
