Chinese AI Advancement Shakes Up Industry

Chinese artificial intelligence company DeepSeek upends America’s technology industry. Read the transcript here.


John (00:00):

The stock market opened just a few minutes ago, and it's not pretty: down big, especially the tech-heavy Nasdaq. CNN's Matt Egan is here to explain what we're seeing and why. Matt?

Matt Egan (00:12):

Yeah, John, we've got a bit of a tech sell-off this morning, and it's being caused by earth-shattering developments in the AI space. So let's take a look at this. The Dow is down more than 140 points right now. The S&P is solidly lower, but the real action is over here in the Nasdaq: 600 points lower, nearly 3%, on track for one of its worst days in the past two years. And here's why. There's a Chinese startup that few people had ever heard of until the past few days, and it has emerged as a real player in the AI arms race. It's called DeepSeek, and investors, and I would imagine officials in Washington, are stunned to learn that DeepSeek has developed an AI model that can actually be competitive with OpenAI and Google and xAI and all of these more established players. It's only been around for a bit, but DeepSeek has already vaulted to the top of Apple's App Store as the most downloaded app, passing ChatGPT, which is pretty shocking.

(01:12)
Look at that: number one. Billionaire Marc Andreessen, a legendary tech investor, said that DeepSeek is "one of the most amazing and impressive breakthroughs" he has ever seen. Now, the most stunning thing here isn't necessarily that China has developed a pretty good AI app; it's how cheap it is. DeepSeek says that its AI model cost only $5.6 million. Now, we don't know that, but if it's true, that is pretty stunning. Anthropic, one of the leading AI companies, has said that it costs about $100 million to $1 billion to develop an AI model. We know that Mark Zuckerberg, the Meta CEO, says that his company plans to spend $65 billion on AI. So John, look, this is all really questioning the foundation of the AI boom: the assumption that it requires a lot of spending, and that the US is running away with the AI arms race. And it's also questioning some of the big positives that had pushed markets to record highs.
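The gap between the figures quoted here is easy to put in perspective with a quick back-of-the-envelope comparison. This is a rough sketch using only the numbers reported on air; each figure is a claim (DeepSeek's cost in particular is unverified), not audited accounting, and the figures measure different things (one training run versus total budgets).

```python
# Cost figures as quoted in the segment (claims, not audited numbers).
deepseek_claimed_cost = 5.6e6                 # DeepSeek's claimed training cost
anthropic_low, anthropic_high = 100e6, 1e9    # Anthropic's cited range to develop a model
meta_ai_budget = 65e9                         # Meta's reported planned AI spending

# How many times smaller is DeepSeek's claimed cost?
vs_anthropic_low = anthropic_low / deepseek_claimed_cost
vs_meta = meta_ai_budget / deepseek_claimed_cost

print(f"~{vs_anthropic_low:.0f}x below the low end of Anthropic's range")
print(f"~{vs_meta:.0f}x below Meta's planned AI budget")
```

Even against the low end of Anthropic's range, the claimed cost is roughly an 18x difference; against Meta's planned spending it is four orders of magnitude, which is why the claim rattled markets.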

John (02:20):

A lot of people have put a lot of money into AI, and now they're wondering if that money is needed the way that some of these American companies have said it is.

Matt Egan (02:28):

Exactly.

John (02:29):

We just don't know if the claims being made by these Chinese companies-

Matt Egan (02:31):

Yeah, this is moving so fast. It's stunning.

John (02:33):

All right, we're going to stay on top of this. As we said, big drops, especially in the Nasdaq.

Speaker 3 (02:38):

We're talking a lot more about DeepSeek. It's one of these big stories that has, it seems, really changed the game in just a couple of days: without warning, a big, unknown Chinese firm rattling the US market. Shelly Palmer is CEO of the Palmer Group and a professor of advanced media. He joins me now from New York. Shelly, great to have you with us.

Shelly Palmer (03:01):

Great to be here.

Speaker 3 (03:03):

It seems like there's a bit of a coup happening in the AI space. Tell me about DeepSeek, what you know about it. I mean, it's a startup, a one-year-old company. A lot of people are downloading it, and the question is: is it taking on the big guys in the United States?

Shelly Palmer (03:17):

So DeepSeek is a company out of China. It is a startup and, as you said, it's about a year old. They have a model called DeepSeek-V3. In the last little bit, they released R1, which is a reasoning engine, and it's open source under an MIT license. You can download it. The magic here is that instead of taking months and hundreds of millions of dollars to train, it took under two months and cost, according to the company, under $6 million. When you download it, you can post-train it (pre-trained transformer models need to be post-trained in order to be valuable), and the compute cost is a fraction.

(03:58)
We're talking a few percentage points, a fraction of what it costs to use the large language models from the big hyperscalers and foundational model builders like OpenAI or Anthropic. So this cost savings, along with the light weight of the model, meaning you can download it and run it on a laptop, has sent shivers down spines, because what this really means is there's a possible future where algorithmic efficiency beats brute-force computation. The entire AI industry, in the United States and around the world, has been thinking about high-level computation and big data centers and a lot of energy use and chips, and now they're thinking: maybe not.

Speaker 3 (04:50):

So it's a really good one to look at in terms of the numbers because you're saying, and DeepSeek's saying that they spent just $5.6 million on this new AI model. Meta last week said it would spend upward of $65 billion. How come DeepSeek was able to crack this code and do it so much more efficiently and the big guys in the US unable to do so?

Shelly Palmer (05:13):

First of all, it's hard to speculate, but let's just say the following. There is a cliché that I believe is a cliché for a reason: there are more honors students in China than there are students in the United States. You're talking about a very deep pool of very smart people. That's thing one. Two, they've been restricted from getting all of the tools they need from the West, including, specifically, the kind of chips that would let them do the advanced computation they need. This is a mathematical solution to a hardware problem. It's called algorithmic efficiency. They've actually written code that's more efficient than the code being used generally in the space throughout the world. So how did they do it? It's open source. You can see their research; they've been very good and transparent about publishing it. They haven't exactly shown the secret sauce, but it is something that we'll be able to replicate.
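"Algorithmic efficiency" as Palmer uses it, a mathematical fix for a hardware constraint, is a general idea worth making concrete. The toy sketch below is emphatically not DeepSeek's technique (their actual methods are described in their published research); it just illustrates the principle that a smarter algorithm can produce the same answer as a brute-force one with far less compute, here counting pairs that sum to a target in O(n) instead of O(n²).

```python
import random

def count_pairs_naive(nums, target):
    """Brute force, O(n^2): check every pair. More compute, same answer."""
    count = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                count += 1
    return count

def count_pairs_fast(nums, target):
    """Algorithmically efficient, O(n): one pass with a frequency table."""
    seen = {}   # value -> how many times we've seen it so far
    count = 0
    for x in nums:
        count += seen.get(target - x, 0)   # every earlier complement forms a pair
        seen[x] = seen.get(x, 0) + 1
    return count

random.seed(0)
data = [random.randint(0, 100) for _ in range(2000)]
# Identical result; the fast version does ~2,000 steps instead of ~2,000,000.
assert count_pairs_naive(data, 100) == count_pairs_fast(data, 100)
```

The point of the analogy: when you can't buy more hardware, squeezing the same result out of fewer operations is the other lever, and it scales multiplicatively with whatever hardware you do have.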

(06:03)
And by the way, the markets are reacting in a knee-jerk way, and most people don't understand any of what I'm about to say to you. No one in the United States, no American, no company anywhere on earth, to be fair, is going to lie down and die because a couple of engineers figured out a better algorithm. Math is math and engineering is engineering, and these algorithmic-efficiency approaches will be figured out worldwide. Now, what does that mean for Nvidia's chip prices? What does that mean for nuclear power plants? For data centers? For future data centers? The jury's still out. And by the way, no one has replicated this; for all we know, this is the Sputnik moment, or this is the cyber Pearl Harbor where they faked the whole thing and it really wasn't done for five and a half million dollars in two months. We don't know. My guess is it was done for $5 million to $6 million, and it was done in two months. That's my guess, and we're all going to have to rise to the challenge. And by the way, I am confident that everybody will.

Speaker 3 (07:06):

Yeah, this is the thing. We were talking about the race, the AI race, and I think this is pretty much indicative of a race that has truly just begun. As you say, you can crack this code; if it's a mathematical solution, then you've got people who can work on it. But you touched on something in terms of the hardware, the GPUs, Nvidia and so forth, and even what it means for electricity consumption for data centers. If what we're hearing from DeepSeek is in fact all factual, what does it actually… Does this change the calculation completely for AI?

Shelly Palmer (07:38):

So at the moment, it is much, much less expensive to run DeepSeek-R1, locally or even in the cloud, than it is to run any of the other models. So the question is not: is this better or worse? The question going forward is: is algorithmic efficiency, this ability to run smaller models more efficiently, going to be a better path than the brute-force, massive-compute models that we've used so far at OpenAI and Anthropic, at Google Gemini, and at Meta's Llama? We don't know the answer to that right now. It points toward two futures: one is algorithmic efficiency, the other is brute-force compute. There's no one in the world who can answer this question right now. It's all speculation.

(08:25)
Time will tell, and as you've just said, it's very early days; we're at the beginning of this race, nowhere near the end. We've never seen anything come this quickly. We've never seen exponential improvement hit us at this speed. So it's accelerating, and we're all going to learn together. That's just the way it's going to go, and the markets are going to follow along, because markets do that.
