
Nvidia faces growing competition from major tech firms

Following Google, Amazon Web Services (AWS) has launched its own energy-efficient artificial intelligence (AI) chip, signaling potential shifts in a market long dominated by Nvidia. As demand for training massive AI models surges, rising costs, power consumption, and supply chain constraints have prompted companies to develop their own computing architectures. Industry analysts are now watching closely to see whether these in-house AI chips, optimized for performance per watt, can challenge Nvidia's long-standing dominance.

● Tech giants develop custom AI chips

On Dec. 2, at its annual "AWS re:Invent 2025" event in Las Vegas, AWS officially unveiled its custom AI chip, Trainium 3. The company also announced ultra servers capable of housing up to 144 Trainium 3 chips, which are available for immediate use.

According to AWS, Trainium 3 delivers four times the computational performance of the company's previous-generation chips while consuming 40 percent less power. The company added that using Trainium 3 could cut AI model training and operational costs by up to 50 percent compared with systems using comparable graphics processing units (GPUs). During his keynote, AWS CEO Matt Garman highlighted that Trainium 3 offers the industry's best cost efficiency for AI training and inference.
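Taken at face value, AWS's two figures compound: a sketch of the implied performance-per-watt gain, assuming (the article does not say) that both claims refer to the same workload.

```python
# Rough arithmetic implied by AWS's stated Trainium 3 figures,
# assuming both claims apply to the same workload -- an assumption
# for illustration, not something stated in the announcement.
perf_gain = 4.0          # 4x the compute of the previous generation
power_ratio = 1 - 0.40   # 40 percent less power drawn

perf_per_watt_gain = perf_gain / power_ratio
print(f"Implied performance-per-watt gain: about {perf_per_watt_gain:.1f}x")
# 4 / 0.6, i.e. roughly 6.7x
```

If the claims hold jointly, the performance-per-watt improvement is larger than either headline number alone suggests.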

Google's custom-developed tensor processing units (TPUs) offer similar advantages, including low power consumption and reduced operational costs. The TPUs powered the training and deployment of Google's recently unveiled AI model, Gemini 3, and were developed in collaboration with U.S. fabless semiconductor company Broadcom. AI startup Anthropic plans to use up to one million TPUs for model development, while Meta is reportedly adopting Google TPUs in its own data centers. OpenAI is also working with Broadcom to co-develop custom AI chips for training and running its models, including ChatGPT.

● Addressing GPU shortages

The primary reason tech giants are developing custom AI chips is to secure a stable supply and reduce costs. Nvidia GPUs, which can process massive amounts of data simultaneously, are essential to the AI ecosystem but remain chronically scarce, even for companies with ample funding.

As global AI investment continues to grow, Nvidia, the first company to bring GPUs to market, has emerged as the dominant player. The company controls roughly 90 percent of the GPU-based AI chip market. Each GPU costs between $30,000 and $40,000, or about 44 million to 59 million won. When energy expenses are factored in, companies have concluded that purpose-built chips optimized for specific computations are more efficient over the long term.
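The won figures quoted above can be checked against the dollar prices; the conversion below assumes an exchange rate of roughly 1,470 won per U.S. dollar, the rate implied by the article's own numbers rather than any official quote.

```python
# Back-of-the-envelope check of the article's currency conversion.
KRW_PER_USD = 1_470  # assumed rate, implied by the figures quoted

low_usd, high_usd = 30_000, 40_000   # per-GPU price range in dollars
low_krw = low_usd * KRW_PER_USD      # 44.1 million won
high_krw = high_usd * KRW_PER_USD    # 58.8 million won
print(f"About {low_krw / 1e6:.1f}M to {high_krw / 1e6:.1f}M won per GPU")
```

At that rate the range works out to roughly 44 million to 59 million won, matching the figures in the text.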

The specialized needs of individual companies also drive the development of custom AI chips. AWS requires chips optimized for cloud services, while Google needs processors specifically for training large language models such as Gemini. Although general-purpose GPUs can handle most computations, chips built for a company's unique model architecture can perform the same tasks using less power.

Still, industry analysts warn that Nvidia's dominance is unlikely to be challenged immediately. The global AI research and development ecosystem is heavily centered on Nvidia GPUs and its CUDA software platform. Given the scale of existing infrastructure investments and the costs of switching, companies are unlikely to replace Nvidia hardware in the near term. Nvidia recently acknowledged Google's advancements in AI but maintained that its products remain a generation ahead of the competition.

By Min-a Lee, staff reporter (omg@donga.com)
