Over the next five years, the deployment of artificial intelligence (AI) data centers, the commercialization of AI, and the rising performance requirements of large AI models will push the AI chip market past $400 billion.
In its recent report AI Chips for Data Centers and Cloud 2025-2035: Technologies, Market, Forecasts, IDTechEx projects that the AI chip market will reach $453 billion by 2030, a CAGR of 14 percent between 2025 and 2030.
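For readers who want to sanity-check the projection, the 14 percent CAGR and the $453 billion 2030 figure together imply a 2025 base of roughly $235 billion. This is a back-of-envelope calculation, not a number stated in the report:

```python
def implied_base(final_value: float, cagr: float, years: int) -> float:
    """Discount a final value back by a compound annual growth rate."""
    return final_value / (1 + cagr) ** years

# Report figures: $453B by 2030 at a 14% CAGR over 2025-2030 (5 years).
base_2025 = implied_base(453e9, 0.14, 5)
print(f"Implied 2025 market size: ${base_2025 / 1e9:.0f}B")  # ~ $235B
```

The same function run forward (multiplying instead of dividing) confirms that a ~$235 billion market growing 14 percent annually lands at $453 billion in five years.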
The report cautions, however, that the underlying technology must evolve to keep pace with demand for more efficient computation, lower costs, higher performance, massively scalable systems, faster inference, and domain-specific computation.
Chips Driving Growth
The report notes that frontier AI has persistently attracted global investment as governments and hyperscalers race to lead in domains like drug discovery and autonomous infrastructure.
Graphics processing units (GPUs) and other AI chips have been instrumental in driving the growth in performance of top AI systems, providing the compute needed for deep learning within data centers and cloud infrastructure. But with the growing capacity of global data centers and investments reaching hundreds of billions of dollars, concerns about the energy efficiency and costs of current hardware have increasingly come into the spotlight.
Data Centers and Hyperscalers
The largest AI systems tend to be hyperscaler AI data centers and supercomputers, both of which can deliver performance on-premise or over distributed networks. While high-performance GPUs have been integral to training AI models, they face several limitations: high total cost of ownership (TCO), vendor lock-in risks, and low utilization for AI-specific operations. They can also be overkill for certain inference workloads.
Because of this, the report's authors note that an emerging strategy among hyperscalers is to adopt custom AI application-specific integrated circuits (ASICs) from designers such as Broadcom and Marvell.
These custom AI ASICs have purpose-built cores for AI workloads, cost less per operation, are specialized for particular systems, and offer energy-efficient inference. They also give hyperscalers and cloud service providers (CSPs) the opportunity for full-stack control and differentiation without sacrificing performance.
Alternative AI Chips
This market growth comes as large vendors and AI-focused chip startups release alternative AI chips. Built on both established and novel architectures, these chips are designed to be better suited to AI workloads than incumbent GPU technologies, with the aim of lowering costs and making AI computation more efficient.
Large chip vendors such as Intel, Huawei, and Qualcomm have designed AI accelerators using heterogeneous arrays of compute units (similar to GPUs) but purpose-built to accelerate AI workloads. As a result, these chips offer a balance of performance, power efficiency, and flexibility for specific application domains.
Needing to Meet Demand
AI chip-focused startups often take a different approach, deploying cutting-edge architectures and fabrication techniques such as dataflow-controlled processors, wafer-scale packaging, spatial AI accelerators, processing-in-memory (PIM) technologies, and coarse-grained reconfigurable arrays (CGRAs).
The breadth of technologies involved in designing and manufacturing these chips leaves wide scope for future innovation across the semiconductor industry supply chain.
The report concludes that government policy and heavy investment, both public and private, demonstrate sustained interest in pushing frontier AI to new heights, which will require exceptional volumes of AI chips within AI data centers to meet demand.