DeepSeek just shook up the artificial intelligence (AI) world in the biggest way since OpenAI launched ChatGPT in late 2022. The Chinese company's new R1 large language model (LLM) reportedly matches or beats OpenAI's o1 model on some benchmarks.
Nvidia RTX series delays. There is strong demand online for Nvidia's new RTX 50 series graphics cards, but that doesn't necessarily translate to big sales. That's because Nvidia …
Nvidia called DeepSeek’s R1 model “an excellent AI advancement,” despite the Chinese startup’s emergence causing the chipmaker’s stock price to plunge 17%.
Cerebras has launched a 70B DeepSeek R1 AI model on its wafer-scale processor, reportedly delivering 57x faster speeds than GPU solutions and challenging Nvidia's AI chip dominance with U.S.-based inference processing.
U.S. officials are investigating whether Chinese AI startup DeepSeek sourced advanced Nvidia (NASDAQ: NVDA) processors through Singapore distributors to bypass U.S. sanctions, Bloomberg reported.
So, let's consider a few facts for a moment. Reuters reports that DeepSeek's development entailed 2,000 of Nvidia's H800 GPUs and a training budget of just $6 million, while CNBC claims that R1 "outperforms" the best LLMs from the likes of OpenAI and others.
Despite the negative financial impact, Nvidia praised DeepSeek’s breakthrough. “DeepSeek is an excellent A.I. advancement and a perfect example of test time scaling,” a company spokesperson told Observer in a statement.
Development of the first DeepSeek R1 clone may have begun with the announcement of the Open-R1 open-source project.
The DeepSeek R1 model was trained on Nvidia H800 AI GPUs, while inference is reportedly run on Chinese-made chips from Huawei: the new 910C AI chip.
DeepSeek R1 is available as an Nvidia Inference Microservice (NIM) preview on the company's website, Nvidia said in a statement. NIM packages AI models as containerized microservices that developers can deploy and call on Nvidia GPU infrastructure, whether hosted remotely by Nvidia or running on their own systems.
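As a rough sketch of how a developer might call the hosted preview: Nvidia's hosted NIM services expose an OpenAI-compatible chat-completions API. The endpoint URL and model identifier below are assumptions based on Nvidia's hosted API conventions, not confirmed details from the statement; check Nvidia's site for the actual values.

```python
import json
import os
import urllib.request

# Assumed endpoint and model ID for the hosted NIM preview; verify
# these against Nvidia's documentation before use.
NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_ID = "deepseek-ai/deepseek-r1"

def build_request(prompt, api_key):
    """Build an OpenAI-compatible chat-completions request for the NIM preview."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = "Bearer " + api_key
    return urllib.request.Request(
        NIM_ENDPOINT, data=json.dumps(payload).encode(), headers=headers
    )

req = build_request("Summarize test-time scaling in one sentence.",
                    os.environ.get("NVIDIA_API_KEY"))
# Sending the request requires a valid API key; uncomment to call the service:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The request shape (a `model` field plus a `messages` list) follows the OpenAI chat-completions convention that Nvidia's hosted APIs mirror, so existing OpenAI client code can typically be pointed at the NIM endpoint unchanged.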