DeepSeek, a Hangzhou-based AI startup, has sent shockwaves through the global tech industry with the release of its R1 reasoning model. Promising performance on par with OpenAI’s and Google’s AI systems at a fraction of the cost, R1 challenges the status quo of high-budget AI development, says James Disney-May.
Efficiency Over Expensive AI Models
DeepSeek’s R1 model was reportedly trained using just $5.6 million worth of computing resources, a fraction of the more than $100 million OpenAI is reported to have spent training GPT-4. That efficiency, achieved on a cluster of 2,048 Nvidia H800 GPUs, challenges the conventional belief that AI innovation requires near-limitless resources.
Market Reactions and Industry Fallout
The release of R1 triggered a sell-off that erased roughly $1 trillion in tech stock value, with Nvidia’s shares falling 17%. Meanwhile, DeepSeek’s AI assistant app shot to the top of Apple’s download charts in several major markets.
While some industry players view R1 as a threat to existing AI models, Nvidia’s CEO praised it as a breakthrough that could drive further demand for AI hardware.
The Future of AI in a Changing Landscape
As US regulators investigate whether DeepSeek obtained export-restricted Nvidia chips, Western AI firms remain wary. Even so, DeepSeek’s rise underscores a fundamental shift in AI’s trajectory: one in which smaller players can compete through efficiency rather than brute computational force.