Why DELL Could Be a Big Winner in the AI Cloud Spending Boom

As the tech world grapples with the ebb and flow of generative AI hype, one thing remains clear: the major players are doubling down on their investments. Despite a nearly 15% drop in the Nasdaq since July’s highs and concerns about a potential repeat of the dot-com bubble, the tech giants aren’t flinching.

The second-quarter earnings season revealed that major technology companies like Amazon.com, Inc. (AMZN), Microsoft Corporation (MSFT), Alphabet Inc. (GOOGL), and Meta Platforms, Inc. (META) are more bullish than ever, continuing to fuel their AI ambitions with hefty investments. Together, these companies have poured around $40 billion into cloud computing, with a significant portion allocated for GPUs and other AI-related tech.

For example, the partnership between Microsoft and OpenAI has sparked a massive capital expenditure (CAPEX) buildout and triggered a surge in demand for GPUs. So far, enterprise adoption of generative AI has mostly involved exploratory projects within the public cloud.

Following the release of second-quarter results by these tech behemoths, Susquehanna analyst Mehdi Hosseini raised his 2024 global capital expenditure forecast for the top 12 cloud computing providers by 3%, bringing the total to $192 billion, up by 55% from last year. And if that wasn’t robust enough, Hosseini predicts spending will rise by another 40% to 42% in 2025.
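As a back-of-the-envelope check on these figures, here is a minimal Python sketch; the $192 billion total and the 55% and 40%–42% growth rates come from the forecast above, and everything else is simple arithmetic, not part of Susquehanna's model:

```python
# Implied capex trajectory for the top 12 cloud providers,
# using the forecast figures cited above.
capex_2024 = 192.0              # $ billions, Hosseini's raised 2024 estimate
capex_2023 = capex_2024 / 1.55  # 2024 is up 55% from last year
low_2025 = capex_2024 * 1.40    # +40% scenario for 2025
high_2025 = capex_2024 * 1.42   # +42% scenario for 2025

print(f"Implied 2023 base: ${capex_2023:.0f}B")           # ≈ $124B
print(f"2025 range: ${low_2025:.0f}B-${high_2025:.0f}B")  # ≈ $269B-$273B
```

In other words, if the 2025 forecast holds, top-12 cloud capex would more than double off the 2023 base in just two years.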

Amid this surge in AI investment, Dell Technologies Inc. (DELL) is emerging as an unexpected contender. Traditionally recognized for its personal computing products, Dell is now aggressively expanding its footprint in AI and cloud computing. With the growing need for data centers and advanced cloud solutions, Dell’s strategic shift positions it well to benefit from this boom.

So, could DELL be a major winner in the AI revolution? Let’s find out.

Dell’s Strategic Position in the AI Server Market

Dell Technologies has evolved far beyond its origins as a producer of Windows-powered PCs. While high-end laptops and gaming stations remain significant, Dell’s focus has increasingly shifted toward becoming a leading player in the AI and cloud infrastructure space.

The company’s extensive portfolio includes everything from data centers to edge computing solutions, positioning it as a versatile player in the tech world. DELL’s infrastructure solutions are particularly noteworthy, as they cater to the growing demand for advanced AI computing power. The company has built a strong reputation for assembling efficient, high-performance data centers, a crucial asset as AI and machine learning drive demand for robust computing infrastructure.

Moreover, Dell’s partnerships with major cloud providers and tech giants like NVIDIA Corporation (NVDA) underscore its critical role in the AI ecosystem. NVDA’s endorsement of Dell as a premier solution for building data centers is a testament to its capabilities. The “AI Factory” initiative, highlighted by Nvidia CEO Jensen Huang, marks DELL as a leading player in the transition to AI-accelerated computing environments.

The company’s infrastructure solutions segment, which generated $4.3 billion in operating income last year, stands to benefit immensely from the accelerating demand for advanced AI computing systems. This growth potential is reinforced by the company’s strategic focus on high-performance servers and storage solutions tailored for AI applications.

In the first quarter ended May 3, 2024, DELL’s net revenue increased 6% year-over-year to $22.24 billion, exceeding the analysts’ expectations of $21.65 billion. Its Infrastructure Solutions Group’s (ISG) revenue stood at $9.23 billion, up 22% year-over-year. Thanks to strong demand across AI and traditional servers, the company’s servers and networking revenue grew 42% from the year-ago value to $5.47 billion.

On the bottom line, DELL’s net income and EPS came in at $955 million and $1.32, indicating an increase of 65% and 67% from the prior year. The company returned $1.10 billion to shareholders through share repurchases and dividends, ending the quarter with $7.30 billion in cash and investments.

Dell’s consistent ability to meet or exceed expectations, coupled with its aggressive cash returns to shareholders, has proven to be a winning strategy. This, along with its strong positioning in AI, has driven the stock to nearly double over the past twelve months: shares of DELL have surged more than 45% year-to-date and nearly 95% over the past year.

As companies invest more in AI computing systems, the company’s infrastructure solutions are expected to see substantial growth. With tens of billions, potentially even hundreds of billions of dollars up for grabs, DELL is well-positioned to capture a significant share of this expanding market. If it continues to leverage its partnerships and infrastructure expertise, it could emerge as a major beneficiary of the AI boom, making it an intriguing stock for investors to consider.

NVDA’s Blackwell Delay: Is It Time to Rotate Into AMD?

NVIDIA Corporation (NVDA), the AI darling, recently hit a rough patch. A report from The Information revealed that Nvidia’s highly anticipated Blackwell series chips are delayed due to design flaws, causing a sharp 15% drop in the stock over the past week. Even with this dip, the stock is still up more than 170% over the past year, but as we know, past performance isn’t a guarantee of future returns.

So, what’s going on with Nvidia? And more importantly, is it time to consider alternatives?

Dark Clouds Are Looming Over the Future of Nvidia

Back in March, NVDA announced its Blackwell series, boasting capabilities that promised to build and operate real-time generative AI on trillion-parameter large language models at a fraction of the cost and energy consumption of its predecessor. But fast forward a few months, and the picture isn't as rosy.

According to the report, the company has informed major customers, including tech giants like Alphabet Inc. (GOOGL) and Microsoft Corporation (MSFT), that shipments of its Blackwell AI accelerator will be delayed by at least three months due to design flaws. It appears to involve Taiwan Semiconductor Manufacturing's new packaging technology, which NVDA is one of the first to use, and issues with the placement of bridge dies connecting two GPUs.

This isn’t just a minor hiccup. The delay could throw off the plans of customers such as Microsoft and Meta Platforms, Inc. (META), who have invested billions in Nvidia’s new GPUs to drive their AI services. The worry is that these delays might prevent these companies from deploying large clusters of the new chips in their data centers by the first quarter of 2025, as they had hoped.

Design flaws aren’t something that can be fixed overnight, which explains the significant delay. Nvidia, for its part, hasn’t outright confirmed or denied the delays but did say that “production is on track to ramp later in 2024.” However, with only a few months left in the year, this sounds more like an early 2025 release.

The delay has led tech companies to look for alternatives from NVDA’s competitors, such as Advanced Micro Devices, Inc. (AMD). MSFT and GOOGL, for example, are already working on next-generation products with AMD.

While Nvidia still dominates the data center GPU market, the Blackwell delay could weigh on its stock price and reputation. It’s arguably the most significant setback NVDA has faced since the AI boom began, and it might just be the moment for AMD to shine.

The Future of Advanced Micro Devices

With a market cap of $3.18 trillion, NVDA’s growth prospects seem more limited than those of AMD, which could see its valuation double from its current $250 billion as it gains momentum in the data center space.

In the second quarter, AMD’s data center revenue surged 115% year-over-year to $2.83 billion, accounting for nearly half of its total revenue. The MI300 series brought in over $1 billion in quarterly revenue for the first time, with its customer base expanding as Microsoft became the first cloud provider to offer general availability for the Instinct MI300X.

The significant increase in AMD’s data center sales, driven by AI applications, is expected to boost profits further, as this segment typically yields higher margins. Additionally, the company's recent acquisition of Silo AI, Europe's largest private AI lab, will enhance its capabilities in generative AI, including inference, training, and large language models.

Furthermore, Advanced Micro Devices’ client revenue rose 49% year-over-year to $1.49 billion, though with slimmer margins than its data center business. The recent slump in the gaming and embedded segments will likely bottom out soon, potentially lifting overall results. Even modest gains could significantly boost AMD’s bottom line. The company reported net income of $265 million, or $0.16 per share, up from $27 million, or $0.02 per share, a year earlier.

Investors are keen to see AMD challenge NVDA with its MI300X AI chip and demonstrate growth in its data center AI business. Meanwhile, the Street expects its revenue and EPS for the current year (ending December 2024) to increase 12.9% and 27.6% year-over-year to $25.62 billion and $3.38, respectively. If AMD can exceed expectations, the stock could experience significant gains in the coming months. Earlier this year, the company projected $4 billion in AI chip sales for 2024, representing about 15% of its expected revenue.

Is It Time to Ditch NVDA and Buy AMD?

Delays in the Blackwell chips could impact NVDA’s market share and growth. If the delay is short, it might have minimal impact on the company’s fiscal 2025 results. However, if it extends beyond three months, it could weigh heavily on the stock, especially as some analysts were anticipating a quicker resolution.

Additionally, concerns about whether the design flaw could lead to chip failures or affect production yields add to the uncertainty. Nvidia's decision to pause production and address the issue is a smart move, but it highlights the risks of its aggressive development timeline, which has been shortened from two years to one. While this strategy could pay off, it also increases the risk of errors or delays.

On the other hand, AMD is well-positioned to benefit from NVDA's ongoing headwinds. With its MI300X AI chip gaining traction and strong data center growth, Advanced Micro Devices could capture some market share from Nvidia. Given this backdrop, it might be the right time to consider rotating out of NVDA and into AMD, especially for investors looking to capitalize on the AI-driven growth in the semiconductor sector.

Big Tech’s In-House AI Chips: A Threat to Nvidia’s Data Center Revenue

Nvidia Corporation (NVDA) has long been the dominant player in the AI-GPU market, particularly in data centers, where high-compute capabilities are paramount. According to Germany-based IoT Analytics, NVDA owns a 92% market share in data center GPUs.

Nvidia’s strength extends beyond semiconductor performance to its software capabilities. Launched in 2006, CUDA, its development platform, has been a cornerstone for AI development and is now utilized by more than 4 million developers.

The chipmaker’s flagship AI GPUs, including the H100 and A100, are known for their high performance and are widely used in data centers to power AI and machine learning workloads. These GPUs are integral to Nvidia’s dominance in the AI data center market, providing unmatched computational capabilities for complex tasks such as training large language models and running generative AI applications.

Additionally, NVDA announced its next-generation Blackwell GPU architecture for accelerated computing, unlocking breakthroughs in data processing, engineering simulation, quantum computing, and generative AI.

Led by Nvidia, U.S. tech companies dominate multiple facets of the burgeoning market for generative AI, with market shares of 70% to over 90% in chips and cloud services. Generative AI has surged in popularity since the launch of ChatGPT in 2022. Statista projects the AI market to grow at a CAGR of 28.5%, resulting in a market volume of $826.70 billion by 2030.

However, NVDA’s dominance is under threat as major tech companies like Microsoft Corporation, Meta Platforms, Inc. (META), Amazon.com, Inc. (AMZN), and Alphabet Inc. (GOOGL) develop their own in-house AI chips. This strategic shift could weaken Nvidia’s grip on the AI GPU market, significantly impacting the company’s revenue and market share.

Let’s analyze how these in-house AI chips from Big Tech could reduce reliance on Nvidia’s GPUs and examine the broader implications for NVDA, guiding how investors should respond.

The Rise of In-house AI Chips From Major Tech Companies

Microsoft Azure Maia 100

Microsoft Corporation’s (MSFT) Azure Maia 100 is designed to optimize AI workloads, such as large language model training and inference, within its vast cloud infrastructure. The new Azure Maia AI chip is built in-house at Microsoft and paired with a comprehensive overhaul of its entire cloud server stack to enhance performance, power efficiency, and cost-effectiveness.

Microsoft’s Maia 100 AI accelerator will handle some of the company’s largest AI workloads on Azure, including those associated with its multibillion-dollar partnership with OpenAI, where Microsoft powers all of OpenAI’s workloads. The software giant has been working closely with OpenAI during the design and testing phases of Maia.

“Since first partnering with Microsoft, we’ve collaborated to co-design Azure’s AI infrastructure at every layer for our models and unprecedented training needs,” stated Sam Altman, CEO of OpenAI. “Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers.”

By developing its own custom AI chip, MSFT aims to enhance performance while reducing costs associated with third-party GPU suppliers like Nvidia. This move will allow Microsoft to have greater control over its AI capabilities, potentially diminishing its reliance on Nvidia’s GPUs.

Alphabet Trillium

In May 2024, Google parent Alphabet Inc. (GOOGL) unveiled Trillium, the latest chip in its AI data center family, which is about five times as fast as its previous version. The Trillium chips are expected to provide powerful, efficient AI processing explicitly tailored to GOOGL’s needs.

Alphabet’s effort to build custom chips for AI data centers offers a notable alternative to Nvidia’s leading processors that dominate the market. Coupled with the software closely integrated with Google’s tensor processing units (TPUs), these custom chips will allow the company to capture a substantial market share.

The sixth-generation Trillium chip will deliver 4.7 times better computing performance than the TPU v5e and is designed to power the tech that generates text and other media from large models. Also, the Trillium processor is 67% more energy efficient than the v5e.

The company plans to make this new chip available to its cloud customers in “late 2024.”

Amazon Trainium2

Amazon.com, Inc.’s (AMZN) Trainium2 represents a significant step in its strategy to own more of its AI stack. AWS, Amazon’s cloud computing arm, is a major customer for Nvidia’s GPUs. However, with Trainium2, Amazon can internally enhance its machine learning capabilities, offering customers a competitive alternative to Nvidia-powered solutions.

AWS Trainium2 will power the highest-performance compute on AWS, enabling faster training of foundation models at reduced costs and with greater energy efficiency. Customers utilizing these new AWS-designed chips include Anthropic, Databricks, Datadog, Epic, Honeycomb, and SAP.

Moreover, Trainium2 is engineered to provide up to 4 times faster training compared to the first-generation Trainium chips. It can be deployed in EC2 UltraClusters with up to 100,000 chips, significantly accelerating the training of foundation models (FMs) and large language models (LLMs) while enhancing energy efficiency by up to 2 times.

Meta Training and Inference Accelerator

Meta Platforms, Inc. (META) is investing heavily in developing its own AI chips. The Meta Training and Inference Accelerator (MTIA) is a family of custom-made chips designed for Meta’s AI workloads. This latest version demonstrates significant performance enhancements compared to MTIA v1 and is instrumental in powering the company’s ranking and recommendation ads models.

MTIA is part of Meta’s expanding investment in AI infrastructure, designed to complement its existing and future AI infrastructure to deliver improved and innovative experiences across its products and services. It is expected to complement Nvidia’s GPUs and reduce META’s reliance on external suppliers.

Bottom Line

The development of in-house AI chips by major tech companies, including Microsoft, Meta, Amazon, and Alphabet, represents a transformative shift in the AI-GPU landscape. This move is poised to reduce these companies’ reliance on Nvidia’s GPUs, potentially impacting the chipmaker’s revenue, market share, and pricing power.

So, investors should consider diversifying their portfolios by increasing their exposure to tech giants such as MSFT, META, AMZN, and GOOGL, as they are developing their own AI chips and have diversified revenue streams and strong market positions in other areas.

Given the potential for reduced revenue and market share, investors should re-evaluate their holdings in NVDA. While Nvidia is still a leader in the AI-GPU market, the increasing competition from in-house AI chips by major tech companies poses a significant risk. Reducing exposure to Nvidia could be a strategic move in light of these developments.

Are Nvidia’s GPUs a Game-Changer for Investors?

NVIDIA Corporation (NVDA), a tech giant advancing AI through its cutting-edge graphics processing units (GPUs), became the third U.S. company to exceed a staggering market capitalization of $3 trillion in June, after Microsoft Corporation (MSFT) and Apple Inc. (AAPL). This significant milestone marks nearly a doubling of its value since the start of the year. Nvidia’s stock has surged more than 159% year-to-date and around 176% over the past year.

What drives the company’s exceptional growth, and how do Nvidia GPUs translate into significant financial benefits for cloud providers and investors? This piece will explore the financial implications of investing in NVIDIA GPUs, the impressive ROI metrics for cloud providers, and the company’s growth prospects in the AI GPU market.

Financial Benefits of NVDA’s GPUs for Cloud Providers

During the Bank of America Securities 2024 Global Technology Conference, Ian Buck, Vice President and General Manager of NVDA’s hyperscale and HPC business, highlighted the substantial financial benefits for cloud providers by investing in NVIDIA GPUs.

Buck illustrated that for every dollar spent on NVIDIA GPUs, cloud providers can generate five dollars over four years. This return on investment (ROI) becomes even more impressive for inferencing tasks, where the profitability rises to seven dollars per dollar invested over the same period, with this figure continuing to increase.
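Taking Buck's "$5 per $1 over four years" claim at face value, a short sketch can translate those multiples into implied annualized returns (a simple compounding exercise; no discounting or cost assumptions, and not a figure Nvidia itself quoted):

```python
# Implied annualized return from the ROI figures quoted above.
def annualized_return(multiple: float, years: float) -> float:
    """Convert a total return multiple into a compound annual growth rate."""
    return multiple ** (1 / years) - 1

general = annualized_return(5.0, 4)    # $5 per $1 over four years
inference = annualized_return(7.0, 4)  # $7 per $1 for inferencing tasks

print(f"General GPU workloads: {general:.1%} annualized")  # ≈ 49.5%
print(f"Inference workloads:   {inference:.1%} annualized")  # ≈ 62.7%
```

On these assumptions, the inference figure works out to roughly 63% compounded per year, which helps explain why cloud providers keep buying despite the chips' price tags.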

This compelling ROI is driven by the superior performance and efficiency of Nvidia’s GPUs, which enable cloud providers to offer enhanced services and handle more complex workloads, particularly in the realm of AI. As AI applications expand across various industries, the demand for high-performance inference solutions escalates, further boosting the financial benefits for cloud providers that utilize NVIDIA’s technology.

NVDA’s Progress in AI and GPU Innovations

NVIDIA’s commitment to addressing the surging demand for AI inference is evident in its continuous innovation and product development. The company introduced cutting-edge products like NVIDIA Inference Microservices (NIMs), designed to support popular AI models such as Llama, Mistral, and Gemma.

These optimized inference microservices for deploying AI models at scale facilitate seamless integration of AI capabilities into cloud infrastructures, enhancing efficiency and scalability for cloud providers.

In addition to NIMs, NVDA is also focusing on its new Blackwell GPU, engineered particularly for inference tasks and energy efficiency. The upcoming Blackwell model is expected to ship to customers later this year. While there may be initial shortages, Nvidia remains optimistic. Buck noted that each new technology phase brings supply and demand challenges, as they experienced with the Hopper GPU.

Furthermore, the early collaboration with cloud providers on the forthcoming Rubin GPU, slated for a 2026 release, underscores the company’s strategic foresight in aligning its innovations with industry requirements.

Nvidia’s GPUs Boost its Stock Value and Earnings

The financial returns of investing in Nvidia GPUs benefit cloud providers considerably and have significant implications for NVDA’s stock value and earnings. With a $4 trillion market cap within sight, the chip giant’s trajectory suggests continued growth and potential for substantial returns for investors.

NVDA’s earnings for the first quarter of fiscal 2025 topped analysts’ expectations and exceeded the high bar set by investors, as Data Center sales rose to a record high amid booming AI demand. For the quarter that ended April 28, 2024, the company posted a record revenue of $26 billion, up 262% year-over-year. That compared to the consensus revenue estimate of $24.56 billion.

The chip giant’s quarterly Data Center revenue was $22.60 billion, an increase of 427% from the prior year’s quarter. Its non-GAAP operating income rose 492% year-over-year to $18.06 billion. NVIDIA’s non-GAAP net income grew 462% from the prior year’s quarter to $15.24 billion. In addition, its non-GAAP EPS came in at $6.12, up 461% year-over-year.

“Our data center growth was fueled by strong and accelerating demand for generative AI training and inference on the Hopper platform. Beyond cloud service providers, generative AI has expanded to consumer internet companies, and enterprise, sovereign AI, automotive and healthcare customers, creating multiple multibillion-dollar vertical markets,” said Jensen Huang, CEO of NVDA.

“We are poised for our next wave of growth. The Blackwell platform is in full production and forms the foundation for trillion-parameter-scale generative AI. Spectrum-X opens a brand-new market for us to bring large-scale AI to Ethernet-only data centers. And NVIDIA NIM is our new software offering that delivers enterprise-grade, optimized generative AI to run on CUDA everywhere — from the cloud to on-prem data centers and RTX AI PCs — through our expansive network of ecosystem partners,” Huang added.

According to its outlook for the second quarter of fiscal 2025, Nvidia’s revenue is anticipated to be $28 billion, plus or minus 2%. The company expects its non-GAAP gross margins to be 75.5%. For the full year, gross margins are projected to be in the mid-70% range.
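The "plus or minus 2%" phrasing translates into a concrete dollar range, as this trivial sketch shows (the $28 billion midpoint is from the guidance above; the rest is arithmetic):

```python
# Revenue range implied by guidance of "$28 billion, plus or minus 2%".
midpoint = 28.0  # $ billions, fiscal Q2 2025 revenue guidance
low = midpoint * 0.98
high = midpoint * 1.02

print(f"Guidance range: ${low:.2f}B to ${high:.2f}B")  # $27.44B to $28.56B
```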

Analysts also appear highly bullish about the company’s upcoming earnings. NVDA’s revenue and EPS for the second quarter (ending July 2024) are expected to grow 110.5% and 135.5% year-over-year to $28.43 billion and $0.64, respectively. For the fiscal year ending January 2025, Street expects the chip company’s revenue and EPS to increase 97.3% and 111.1% year-over-year to $120.18 billion and $2.74, respectively.

Robust Future Growth in the AI Data Center Market

The exponential growth of AI use cases and applications across various sectors—ranging from healthcare and automobile to retail and manufacturing—highlights the critical role of GPUs in enabling these advancements. NVIDIA’s strategic investments in AI and GPU technology and its emphasis on collaboration with cloud providers position the company at the forefront of this burgeoning AI market.

As Nvidia’s high-end server GPUs are essential for training and deploying large AI models, tech giants like Microsoft and Meta Platforms, Inc. (META) have spent billions of dollars buying these chips. Meta CEO Mark Zuckerberg has said his company is “building an absolutely massive amount of infrastructure” that will include 350,000 H100 GPUs delivered by NVDA by the end of 2024.

NVIDIA’s GPUs are sought after by several other tech companies for superior performance, including Amazon, Microsoft Corporation (MSFT), Alphabet Inc. (GOOGL), and Tesla, Inc. (TSLA).

Notably, NVDA owns a 92% market share in data center GPUs. Led by Nvidia, U.S. tech companies dominate the burgeoning market for generative AI, with market shares of 70% to over 90% in chips and cloud services.

According to a Markets and Markets report, the data center GPU market is projected to exceed $63 billion by 2028, growing at an impressive CAGR of 34.6% during the forecast period (2024-2028). The rapidly rising adoption of data center GPUs across cloud providers should bode well for Nvidia.
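Working backwards from the report's figures gives a rough sense of the starting point (a sketch assuming four full years of compounding from 2024 to 2028; the report's exact convention may differ):

```python
# Rough implied 2024 base for the data center GPU market,
# from the forecast figures above ($63B by 2028 at a 34.6% CAGR).
end_value = 63.0  # $ billions, projected 2028 market size
cagr = 0.346
years = 4         # 2024 -> 2028

start_value = end_value / (1 + cagr) ** years
print(f"Implied 2024 market size: ${start_value:.1f}B")  # ≈ $19.2B
```

That is, the forecast implies the market more than tripling over the period.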

Bottom Line

NVDA’s GPUs represent a game-changer for both cloud providers and investors, driven by superior performance and a compelling return on investment (ROI). The attractive financial benefits of investing in NVIDIA GPUs underscore their value, with cloud providers generating substantial profits from enhanced AI capabilities. This high ROI, particularly in AI inferencing tasks, positions Nvidia as a pivotal player in the burgeoning AI data center market, reinforcing its dominant market share and driving continued growth.

Moreover, Wall Street analysts remain bullish about this AI chipmaker’s prospects. TD Cowen analyst Matthew Ramsay increased his price target on NVDA stock from $140 to $165, while maintaining the Buy rating. “One thing remains the same: fundamental strength at Nvidia,” Ramsay said in a client note. “In fact, our checks continue to point to upside in data center (sales) as demand for Hopper/Blackwell-based AI systems continues to exceed supply.”

“Overall we see a product roadmap indicating a relentless pace of innovation across all aspects of the AI compute stack,” Ramsay added.

Meanwhile, KeyBanc Capital Markets analyst John Vinh reiterated his Overweight rating on NVIDIA stock with a price target of $180. “We expect Nvidia to deliver higher results and higher guidance” with its second-quarter 2025 report, Vinh said in a client note. He added solid demand for generative AI will drive the upside.

As AI applications expand across various key industries, NVIDIA’s continuous strategic innovations and product developments, such as the Blackwell GPU and NVIDIA Inference Microservices, ensure the company remains at the forefront of technological advancement. With a market cap nearing $4 trillion and a solid financial outlook, NVIDIA is well-poised to deliver substantial returns for investors, solidifying its standing as a leader in the AI and GPU technology sectors.

Intel's $8.5 Billion Gamble: Can It Rival Nvidia?

Intel Corporation (INTC), a leading player in the semiconductor industry, is making headlines with its ambitious plans to transform its operations, spurred by a substantial $8.5 billion boost from the CHIPS and Science Act. The roughly $280 billion legislative package, signed into law by President Joe Biden in 2022, aims to bolster U.S. semiconductor manufacturing and research and development (R&D) capabilities.

CHIPS Act funding will help advance Intel’s commercial semiconductor projects at key sites in Arizona, New Mexico, Ohio, and Oregon. Also, the company expects to benefit from a U.S. Treasury Department Investment Tax Credit (ITC) of up to 25% on over $100 billion in qualified investments and eligibility for federal loans up to $11 billion.

In addition to the CHIPS Act funding, INTC has announced plans to invest more than $100 billion in the U.S. over five years to expand chipmaking capacity critical to national security and the advancement of cutting-edge technologies, including artificial intelligence (AI).

Notably, Intel is the sole American company that both designs and manufactures leading-edge logic chips. Its strategy focuses on three pillars: achieving process technology leadership, constructing a more resilient and sustainable global semiconductor supply chain, and developing a world-class foundry business. These goals align with the CHIPS Act’s objectives to restore manufacturing and technological leadership to the U.S.

The federal funding represents a pivotal opportunity for INTC to reclaim its position as a chip manufacturing powerhouse, potentially rivaling giants like NVIDIA Corporation (NVDA) and Advanced Micro Devices, Inc. (AMD).

Intel’s Strategic Initiatives to Capitalize on AI Boom

At Computex 2024, INTC introduced cutting-edge technologies and architectures that are well-poised to significantly accelerate the AI ecosystem, from the data center, cloud, and network to the edge and PC.

The company launched Intel® Xeon® 6 processors with E-core (Efficient-core) and P-core (Performance-core) SKUs, delivering enhanced performance and power efficiency for high-density, scale-out workloads in the data center. The first of the Xeon 6 processors to debut is the Intel Xeon 6 E-core (code-named Sierra Forest), available beginning June 4. The Xeon 6 P-core chips (code-named Granite Rapids) are expected to launch next quarter.

Beyond the data center, Intel is expanding its AI footprint in edge computing and PCs. With over 90,000 edge deployments and 200 million CPUs distributed across the ecosystem, the company has consistently enabled enterprise choice for many years. INTC revealed the architectural details of Lunar Lake, the flagship processor for the next generation of AI PCs.

Lunar Lake is set to make a significant leap in graphics and AI processing capabilities, emphasizing power-efficient compute performance tailored for the thin-and-light segment. It promises up to a 40% reduction in System-on-Chip (SoC) power and over three times the AI compute. It is scheduled for release in the third quarter of 2024, in time for the holiday shopping season.

Also, Intel unveiled pricing for Intel® Gaudi® 2 and Intel® Gaudi® 3 AI accelerator kits, which provide high performance at up to one-third lower cost than competitive platforms. A standard AI kit, including Intel Gaudi 2 accelerators with a universal baseboard (UBB), is offered to system providers at $65,000. Integrating Xeon processors with Gaudi AI accelerators in a system presents a robust solution to make AI faster, cheaper, and more accessible.

Intel CEO Pat Gelsinger said, “Intel is one of the only companies in the world innovating across the full spectrum of the AI market opportunity – from semiconductor manufacturing to PC, network, edge and data center systems. Our latest Xeon, Gaudi and Core Ultra platforms, combined with the power of our hardware and software ecosystem, are delivering the flexible, secure, sustainable and cost-effective solutions our customers need to maximize the immense opportunities ahead.”

On May 1, INTC achieved a significant milestone of surpassing 500 AI models running optimized on new Intel® Core™ Ultra processors due to the company’s investment in client AI, the AI PC transformation, framework optimizations, and AI tools like OpenVINO™ toolkit. These processors are the industry’s leading AI PC processors, offering enhanced AI experiences, immersive graphics, and optimized battery life.

Solid First-Quarter Performance and Second-Quarter Guidance

During the first quarter that ended March 30, 2024, INTC’s net revenue increased 8.6% year-over-year to $12.72 billion, primarily driven by growth in its personal computing, data center, and AI business. Revenue from the Client Computing Group (CCG), through which Intel continues to advance its mission to bring AI everywhere, rose 31% year-over-year to $7.50 billion.

Furthermore, the company’s non-GAAP operating income was $723 million, compared to an operating loss of $294 million in the previous year’s quarter. Its non-GAAP net income and non-GAAP earnings per share came in at $759 million and $0.18, compared to a net loss and loss per share of $169 million and $0.04, respectively, in the same quarter of 2023.

For the second quarter of fiscal 2024, Intel expects its revenue to come between $12.5 billion and $13.5 billion, and its non-GAAP earnings per share is expected to be $0.10.

Despite its solid financial performance and ambitious plans, INTC’s stock has plunged more than 38% over the past six months and nearly 40% year-to-date.

Competing with Nvidia: A Daunting Task

Despite INTC’s solid financial health and strategic moves, the competition with NVDA is fierce. Nvidia’s market performance has been stellar lately, driven by its global leadership in graphics processing units (GPUs) and its foray into AI and machine learning markets. The chip giant has built strong brand loyalty among developers and enterprise customers, which could be challenging for Intel to overcome.

Over the past year, NVIDIA has experienced a significant surge in sales due to high demand from tech giants such as Alphabet Inc. (GOOGL), Microsoft Corporation (MSFT), Meta Platforms, Inc. (META), and OpenAI, which have invested billions of dollars in its advanced GPUs essential for developing and deploying AI applications.

Shares of the prominent chipmaker surged approximately 150% over the past six months and more than 196% over the past year. Moreover, NVDA’s stock is up around 2,938% over the past five years. Notably, after Apple and Microsoft, Nvidia recently became the third U.S. company with a market value surpassing $3 trillion.

As a result, NVDA commands a dominant market share of about 92% in the data center GPU market. Nvidia’s success stems from its cutting-edge semiconductor performance and software prowess. The CUDA development platform, launched in 2006, has emerged as a pivotal tool for AI development, with a user base exceeding 4 million developers.

Bottom Line

The proposed $8.5 billion in funding, along with an investment tax credit and eligibility for CHIPS Act loans, is pivotal in Intel’s bid to regain semiconductor leadership in the face of intense competition, particularly from Nvidia. This substantial federal funding will enhance Intel’s manufacturing and R&D capabilities across its key sites in Arizona, New Mexico, Ohio, and Oregon.

While INTC possesses the resources, technological expertise, and strategic vision to challenge NVDA, the path forward is fraught with challenges. Despite Intel’s recent strides in the AI ecosystem, from the data center to edge and PC with products like Xeon 6 processors and Gaudi AI accelerators, Nvidia’s dominance in data center GPUs remains pronounced, commanding a significant market share.

Future success will depend on Intel’s ability to leverage its strengths in manufacturing, introducing innovative product lines, and cultivating a compelling ecosystem of software and developer support. As Intel advances its ambitious plans, industry experts and stakeholders will keenly watch how these developments unfold, redefining the competitive landscape in the AI and data center markets.