Qualcomm has unveiled two new AI accelerator chips for the booming data center market, taking direct aim at GPU king Nvidia's AI market dominance. The company also secured Saudi Arabia's Humain as its first customer for the new chips.
The semiconductor company, which has thus far focused on chips for mobile and wireless devices, said its AI200 and AI250 chips will deliver rack-scale performance with a new memory architecture for enhanced AI inference at lower cost. The AI200 is slated for commercial availability in 2026, with the AI250 following in 2027.
Booming AI demand has spurred a global race to outfit data centers with more AI processing power. According to research firm MarketsandMarkets, the global AI data center market is projected to grow from $236 billion in 2025 to more than $933 billion by 2030. Nvidia holds a 92% share of the current data center market, according to IoT Analytics.
Most of Nvidia's dominance has come from AI training, where its high-powered GPUs are the preferred hardware for handling those workloads. Nvidia is on track to generate more than $180 billion in revenue from data center operations this year.
But experts see an opportunity to challenge Nvidia in inference, as compute demand shifts from training models to running them. Qualcomm's new chips will combine Oryon CPUs, Hexagon NPU acceleration, and LPDDR memory with liquid cooling, scaling up over PCIe and scaling out over Ethernet.
"Qualcomm is serious about data center inference efficiency," Patrick Moorhead, chief analyst and CEO of Moor Insights & Strategy, said in a LinkedIn post. "If it executes, it could evolve from being known for mobile and edge efficiency to becoming a leader in rack-scale AI performance-per-watt - a big shift in how the market sees Qualcomm's role in the broader AI ecosystem."
Qualcomm said the new chips will be part of a multi-generational data center AI inference roadmap. The company's AI software stack supports machine learning frameworks, inference engines, and generative AI frameworks, along with inference optimization techniques like disaggregated serving.
Durga Malladi, Qualcomm's senior vice president and general manager for technology planning, edge solutions and data center, said the solutions offer cost and flexibility advantages over competitors.
"We're redefining what's possible for rack-scale AI inference," he said in a statement, adding that the company's software stack and open ecosystem support "make it easier than ever for developers and enterprises to integrate, manage, and scale already trained AI models on our optimized AI inference solutions."
Wall Street seemed to welcome the news: Qualcomm shares rose more than 20% in Monday trading, the stock's biggest rally since 2019.
Saudi Arabia's AI startup Humain plans to deploy 200 megawatts worth of the new chips starting in 2026, Qualcomm said.
Daniel Newman, analyst and CEO at Futurum Group, said Qualcomm's new AI chips will "catapult" the company into the AI arms race. "We see this as a big inflection with more than $10 billion in potential revenue for the company over the next few years and significant upside if it executes in key markets," he wrote in a LinkedIn post.