Big Tech’s In-House AI Chips: A Threat to Nvidia’s Data Center Revenue

Nvidia Corporation (NVDA) has long been the dominant player in the AI-GPU market, particularly in data centers that demand high-performance computing. According to Germany-based IoT Analytics, NVDA owns a 92% market share in data center GPUs.

Nvidia’s strength extends beyond semiconductor performance to its software capabilities. Launched in 2006, CUDA, its development platform, has been a cornerstone for AI development and is now utilized by more than 4 million developers.

The chipmaker’s flagship AI GPUs, including the H100 and A100, are known for their high performance and are widely used in data centers to power AI and machine learning workloads. These GPUs are integral to Nvidia’s dominance in the AI data center market, providing unmatched computational capabilities for complex tasks such as training large language models and running generative AI applications.

Additionally, NVDA announced its next-generation Blackwell GPU architecture for accelerated computing, which the company says will unlock breakthroughs in data processing, engineering simulation, quantum computing, and generative AI.

Led by Nvidia, U.S. tech companies dominate multiple facets of the burgeoning market for generative AI, with market shares of 70% to over 90% in chips and cloud services. Generative AI has surged in popularity since the launch of ChatGPT in 2022. Statista projects the AI market to grow at a CAGR of 28.5%, resulting in a market volume of $826.70 billion by 2030.
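
As a rough sanity check on those figures, the stated CAGR and the 2030 endpoint imply a present-day market size; below is a minimal sketch, assuming the projection window runs from 2024 to 2030 (the base year is not stated in the source, so the six-year window is an assumption):

```python
# Back out the base-year AI market size implied by Statista's projection:
# $826.70 billion by 2030 at a 28.5% CAGR, assuming a 2024-2030 window.
end_value_bn = 826.70
cagr = 0.285
years = 2030 - 2024  # assumed six-year compounding window

implied_base_bn = end_value_bn / (1 + cagr) ** years
print(f"Implied 2024 base: ${implied_base_bn:.0f} billion")  # ≈ $184 billion
```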

However, NVDA’s dominance is under threat as major tech companies like Microsoft Corporation, Meta Platforms, Inc. (META), Amazon.com, Inc. (AMZN), and Alphabet Inc. (GOOGL) develop their own in-house AI chips. This strategic shift could weaken Nvidia’s grip on the AI GPU market, significantly impacting the company’s revenue and market share.

Let’s analyze how these in-house AI chips from Big Tech could reduce reliance on Nvidia’s GPUs, examine the broader implications for NVDA, and consider how investors should respond.

The Rise of In-house AI Chips From Major Tech Companies

Microsoft Azure Maia 100

Microsoft Corporation’s (MSFT) Azure Maia 100 is designed to optimize AI workloads, such as large language model training and inference, within its vast cloud infrastructure. The new Azure Maia AI chip is built in-house at Microsoft, alongside a comprehensive overhaul of its entire cloud server stack to enhance performance, power efficiency, and cost-effectiveness.

Microsoft’s Maia 100 AI accelerator will handle some of the company’s largest AI workloads on Azure, including those associated with its multibillion-dollar partnership with OpenAI, where Microsoft powers all of OpenAI’s workloads. The software giant has been working closely with OpenAI during the design and testing phases of Maia.

“Since first partnering with Microsoft, we’ve collaborated to co-design Azure’s AI infrastructure at every layer for our models and unprecedented training needs,” stated Sam Altman, CEO of OpenAI. “Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers.”

By developing its own custom AI chip, MSFT aims to enhance performance while reducing costs associated with third-party GPU suppliers like Nvidia. This move will allow Microsoft to have greater control over its AI capabilities, potentially diminishing its reliance on Nvidia’s GPUs.

Alphabet Trillium

In May 2024, Google parent Alphabet Inc. (GOOGL) unveiled Trillium, the latest chip in its AI data center chip family, which it says is about five times as fast as its previous version. The Trillium chips are expected to provide powerful, efficient AI processing explicitly tailored to GOOGL’s needs.

Alphabet’s effort to build custom chips for AI data centers offers a notable alternative to Nvidia’s market-leading processors. Coupled with software closely integrated with Google’s tensor processing units (TPUs), these custom chips could help the company capture a substantial market share.

The sixth-generation Trillium chip will deliver 4.7 times better computing performance than the TPU v5e and is designed to power the technology that generates text and other media from large models. The Trillium processor is also 67% more energy-efficient than the v5e.

The company plans to make this new chip available to its cloud customers in “late 2024.”

Amazon Trainium2

Amazon.com, Inc.’s (AMZN) Trainium2 represents a significant step in its strategy to own more of its AI stack. AWS, Amazon’s cloud computing arm, is a major customer for Nvidia’s GPUs. However, with Trainium2, Amazon can internally enhance its machine learning capabilities, offering customers a competitive alternative to Nvidia-powered solutions.

AWS Trainium2 will power the highest-performance compute on AWS, enabling faster training of foundation models at reduced costs and with greater energy efficiency. Customers utilizing these new AWS-designed chips include Anthropic, Databricks, Datadog, Epic, Honeycomb, and SAP.

Moreover, Trainium2 is engineered to provide up to 4 times faster training compared to the first-generation Trainium chips. It can be deployed in EC2 UltraClusters with up to 100,000 chips, significantly accelerating the training of foundation models (FMs) and large language models (LLMs) while enhancing energy efficiency by up to 2 times.

Meta Training and Inference Accelerator

Meta Platforms, Inc. (META) is investing heavily in developing its own AI chips. The Meta Training and Inference Accelerator (MTIA) is a family of custom-made chips designed for Meta’s AI workloads. The latest version demonstrates significant performance gains over MTIA v1 and is instrumental in powering the company’s ranking and recommendation models for ads.

MTIA is part of Meta’s expanding investment in AI infrastructure, designed to work alongside its existing and future AI infrastructure to deliver improved and innovative experiences across its products and services. It is expected to complement Nvidia’s GPUs while reducing META’s reliance on external suppliers.

Bottom Line

The development of in-house AI chips by major tech companies, including Microsoft, Meta, Amazon, and Alphabet, represents a transformative shift in the AI-GPU landscape. This move is poised to reduce these companies’ reliance on Nvidia’s GPUs, potentially impacting the chipmaker’s revenue, market share, and pricing power.

So, investors should consider diversifying their portfolios by increasing their exposure to tech giants such as MSFT, META, AMZN, and GOOGL, as they are developing their own AI chips and have diversified revenue streams and strong market positions in other areas.

Given the potential for reduced revenue and market share, investors should re-evaluate their holdings in NVDA. While Nvidia is still a leader in the AI-GPU market, the increasing competition from in-house AI chips by major tech companies poses a significant risk. Reducing exposure to Nvidia could be a strategic move in light of these developments.

Are Nvidia’s GPUs a Game-Changer for Investors?

NVIDIA Corporation (NVDA), a tech giant advancing AI through its cutting-edge graphics processing units (GPUs), became the third U.S. company to exceed a staggering market capitalization of $3 trillion in June, after Microsoft Corporation (MSFT) and Apple Inc. (AAPL). This significant milestone marks nearly a doubling of its value since the start of the year. Nvidia’s stock has surged more than 159% year-to-date and around 176% over the past year.

What drives the company’s exceptional growth, and how do Nvidia GPUs translate into significant financial benefits for cloud providers and investors? This piece will explore the financial implications of investing in NVIDIA GPUs, the impressive ROI metrics for cloud providers, and the company’s growth prospects in the AI GPU market.

Financial Benefits of NVDA’s GPUs for Cloud Providers

During the Bank of America Securities 2024 Global Technology Conference, Ian Buck, Vice President and General Manager of NVDA’s hyperscale and HPC business, highlighted the substantial financial benefits cloud providers can reap by investing in NVIDIA GPUs.

Buck illustrated that for every dollar spent on NVIDIA GPUs, cloud providers can generate five dollars over four years. This return on investment (ROI) becomes even more impressive for inferencing tasks, where the profitability rises to seven dollars per dollar invested over the same period, with this figure continuing to increase.
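
In annualized terms, those multiples are steep; here is an illustrative sketch that converts them into a simple geometric annual rate (it ignores the timing of cash flows within the four years, so treat it as a rough gauge rather than a true IRR):

```python
# Convert "dollars generated per dollar invested over four years" into a
# simple geometric annualized rate for the two figures Buck cited.
def annualized_rate(total_multiple: float, years: int) -> float:
    return total_multiple ** (1 / years) - 1

for label, multiple in [("General GPU workloads", 5.0), ("Inference workloads", 7.0)]:
    print(f"{label}: {multiple:.0f}x over 4 years ≈ {annualized_rate(multiple, 4):.1%}/year")
# General GPU workloads: 5x over 4 years ≈ 49.5%/year
# Inference workloads: 7x over 4 years ≈ 62.7%/year
```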

This compelling ROI is driven by the superior performance and efficiency of Nvidia’s GPUs, which enable cloud providers to offer enhanced services and handle more complex workloads, particularly in the realm of AI. As AI applications expand across various industries, the demand for high-performance inference solutions escalates, further boosting the financial benefits for cloud providers utilizing NVIDIA’s technology.

NVDA’s Progress in AI and GPU Innovations

NVIDIA’s commitment to addressing the surging demand for AI inference is evident in its continuous innovation and product development. The company introduced cutting-edge products like NVIDIA Inference Microservices (NIMs), designed to support popular AI models such as Llama, Mistral, and Gemma.

These optimized inference microservices for deploying AI models at scale facilitate seamless integration of AI capabilities into cloud infrastructures, enhancing efficiency and scalability for cloud providers.

In addition to NIMs, NVDA is also focusing on its new Blackwell GPU, engineered particularly for inference tasks and energy efficiency. The upcoming Blackwell model is expected to ship to customers later this year. While there may be initial shortages, Nvidia remains optimistic. Buck noted that each new technology phase brings supply and demand challenges, as they experienced with the Hopper GPU.

Furthermore, the early collaboration with cloud providers on the forthcoming Rubin GPU, slated for a 2026 release, underscores the company’s strategic foresight in aligning its innovations with industry requirements.

Nvidia’s GPUs Boost its Stock Value and Earnings

The financial returns of investing in Nvidia GPUs benefit cloud providers considerably and have significant implications for NVDA’s stock value and earnings. With a $4 trillion market cap within sight, the chip giant’s trajectory suggests continued growth and potential for substantial returns for investors.

NVDA’s first-quarter fiscal 2025 earnings topped analysts’ expectations and exceeded the high bar set by investors, as Data Center sales rose to a record high amid booming AI demand. For the quarter that ended April 28, 2024, the company posted record revenue of $26 billion, up 262% year-over-year, compared to the consensus revenue estimate of $24.56 billion.

The chip giant’s quarterly Data Center revenue was $22.60 billion, an increase of 427% from the prior year’s quarter. Its non-GAAP operating income rose 492% year-over-year to $18.06 billion. NVIDIA’s non-GAAP net income grew 462% from the prior year’s quarter to $15.24 billion. In addition, its non-GAAP EPS came in at $6.12, up 461% year-over-year.

“Our data center growth was fueled by strong and accelerating demand for generative AI training and inference on the Hopper platform. Beyond cloud service providers, generative AI has expanded to consumer internet companies, and enterprise, sovereign AI, automotive and healthcare customers, creating multiple multibillion-dollar vertical markets,” said Jensen Huang, CEO of NVDA.

“We are poised for our next wave of growth. The Blackwell platform is in full production and forms the foundation for trillion-parameter-scale generative AI. Spectrum-X opens a brand-new market for us to bring large-scale AI to Ethernet-only data centers. And NVIDIA NIM is our new software offering that delivers enterprise-grade, optimized generative AI to run on CUDA everywhere — from the cloud to on-prem data centers and RTX AI PCs — through our expansive network of ecosystem partners,” Huang added.

According to its outlook for the second quarter of fiscal 2025, Nvidia’s revenue is anticipated to be $28 billion, plus or minus 2%. The company expects its non-GAAP gross margins to be 75.5%. For the full year, gross margins are projected to be in the mid-70% range.
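
In dollar terms, the ±2% band around the $28 billion midpoint works out as follows (a quick arithmetic sketch of the stated guidance):

```python
# Nvidia's fiscal Q2 2025 revenue guidance: $28 billion, plus or minus 2%.
midpoint_bn = 28.0
band = 0.02

low_bn, high_bn = midpoint_bn * (1 - band), midpoint_bn * (1 + band)
print(f"Guidance range: ${low_bn:.2f}B to ${high_bn:.2f}B")  # $27.44B to $28.56B
```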

Analysts also appear highly bullish about the company’s upcoming earnings. NVDA’s revenue and EPS for the second quarter (ending July 2024) are expected to grow 110.5% and 135.5% year-over-year to $28.43 billion and $0.64, respectively. For the fiscal year ending January 2025, Street expects the chip company’s revenue and EPS to increase 97.3% and 111.1% year-over-year to $120.18 billion and $2.74, respectively.

Robust Future Growth in the AI Data Center Market

The exponential growth of AI use cases and applications across various sectors, ranging from healthcare and automotive to retail and manufacturing, highlights the critical role of GPUs in enabling these advancements. NVIDIA’s strategic investments in AI and GPU technology and its emphasis on collaboration with cloud providers position the company at the forefront of this burgeoning AI market.

As Nvidia’s high-end server GPUs are essential for training and deploying large AI models, tech giants like Microsoft and Meta Platforms, Inc. (META) have spent billions of dollars buying these chips. Meta CEO Mark Zuckerberg stated his company is “building an absolutely massive amount of infrastructure” that will include 350,000 Nvidia H100 GPUs delivered by the end of 2024.

NVIDIA’s GPUs are sought after by several other tech companies for superior performance, including Amazon, Microsoft Corporation (MSFT), Alphabet Inc. (GOOGL), and Tesla, Inc. (TSLA).

Notably, NVDA owns a 92% market share in data center GPUs. Led by Nvidia, U.S. tech companies dominate the burgeoning market for generative AI, with market shares of 70% to over 90% in chips and cloud services.

According to a Markets and Markets report, the data center GPU market is projected to be worth more than $63 billion by 2028, growing at an impressive CAGR of 34.6% during the forecast period (2024-2028). The rapidly rising adoption of data center GPUs across cloud providers should bode well for Nvidia.

Bottom Line

NVDA’s GPUs represent a game-changer for both cloud providers and investors, driven by superior performance and a compelling return on investment (ROI). The attractive financial benefits of investing in NVIDIA GPUs underscore their value, with cloud providers generating substantial profits from enhanced AI capabilities. This high ROI, particularly in AI inferencing tasks, positions Nvidia as a pivotal player in the burgeoning AI data center market, reinforcing its dominant market share and driving continued growth.

Moreover, Wall Street analysts remain bullish about this AI chipmaker’s prospects. TD Cowen analyst Matthew Ramsay increased his price target on NVDA stock from $140 to $165, while maintaining the Buy rating. “One thing remains the same: fundamental strength at Nvidia,” Ramsay said in a client note. “In fact, our checks continue to point to upside in data center (sales) as demand for Hopper/Blackwell-based AI systems continues to exceed supply.”

“Overall we see a product roadmap indicating a relentless pace of innovation across all aspects of the AI compute stack,” Ramsay added.

Meanwhile, KeyBanc Capital Markets analyst John Vinh reiterated his Overweight rating on NVIDIA stock with a price target of $180. “We expect Nvidia to deliver higher results and higher guidance” with its second-quarter 2025 report, Vinh said in a client note, adding that solid demand for generative AI will drive the upside.

As AI applications expand across various key industries, NVIDIA’s continuous strategic innovations and product developments, such as the Blackwell GPU and NVIDIA Inference Microservices, ensure the company remains at the forefront of technological advancement. With a market cap nearing $4 trillion and a solid financial outlook, NVIDIA is well-poised to deliver substantial returns for investors, solidifying its standing as a leader in the AI and GPU technology sectors.

Intel's $8.5 Billion Gamble: Can It Rival Nvidia?

Intel Corporation (INTC), a leading player in the semiconductor industry, is making headlines with its ambitious plans to transform its operations, spurred by a substantial $8.5 billion boost from the CHIPS and Science Act. The roughly $280 billion legislative package, signed into law by President Joe Biden in 2022, aims to bolster U.S. semiconductor manufacturing and research and development (R&D) capabilities.

CHIPS Act funding will help advance Intel’s commercial semiconductor projects at key sites in Arizona, New Mexico, Ohio, and Oregon. Also, the company expects to benefit from a U.S. Treasury Department Investment Tax Credit (ITC) of up to 25% on over $100 billion in qualified investments and eligibility for federal loans up to $11 billion.

Alongside the CHIPS Act funding announcement, INTC said it plans to invest more than $100 billion in the U.S. over five years to expand chipmaking capacity critical to national security and the advancement of cutting-edge technologies, including artificial intelligence (AI).

Notably, Intel is the sole American company that both designs and manufactures leading-edge logic chips. Its strategy focuses on three pillars: achieving process technology leadership, constructing a more resilient and sustainable global semiconductor supply chain, and developing a world-class foundry business. These goals align with the CHIPS Act’s objectives to restore manufacturing and technological leadership to the U.S.

The federal funding represents a pivotal opportunity for INTC to reclaim its position as a chip manufacturing powerhouse, potentially rivaling giants like NVIDIA Corporation (NVDA) and Advanced Micro Devices, Inc. (AMD).

Intel’s Strategic Initiatives to Capitalize on AI Boom

At Computex 2024, INTC introduced cutting-edge technologies and architectures that are well-poised to significantly accelerate the AI ecosystem, from the data center, cloud, and network to the edge and PC.

The company launched Intel® Xeon® 6 processors with E-core (Efficient-core) and P-core (Performance-core) SKUs, delivering enhanced performance and power efficiency for high-density, scale-out workloads in the data center. The first Xeon 6 processor to debut is the Intel Xeon 6 E-core (code-named Sierra Forest), available beginning June 4, while the Xeon 6 P-core processors (code-named Granite Rapids) are expected to launch next quarter.

Beyond the data center, Intel is expanding its AI footprint in edge computing and PCs. With over 90,000 edge deployments and 200 million CPUs distributed across the ecosystem, the company has consistently enabled enterprise choice for many years. INTC revealed the architectural details of Lunar Lake, the flagship processor for the next generation of AI PCs.

Lunar Lake is set to make a significant leap in graphics and AI processing capabilities, emphasizing power-efficient compute performance tailored for the thin-and-light segment. It promises up to a 40% reduction in System-on-Chip (SoC) power and over three times the AI compute. It is scheduled for release in the third quarter of 2024, in time for the holiday shopping season.

Also, Intel unveiled pricing for Intel® Gaudi® 2 and Intel® Gaudi® 3 AI accelerator kits, providing high performance at up to one-third lower cost compared to competitive platforms. A standard AI kit, including Intel Gaudi 2 accelerators with a universal baseboard (UBB), is offered to system providers at $65,000. Integrating Xeon processors with Gaudi AI accelerators in a system presents a robust solution to make AI faster, cheaper, and more accessible.
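
Taken at face value, “up to one-third lower cost” implies a comparable competing kit would run near $97,500; a back-of-the-envelope sketch, assuming the full one-third discount applies to the $65,000 list price:

```python
# If Intel's $65,000 Gaudi 2 kit is up to one-third cheaper than competing
# platforms, back out the implied price of a comparable competitor kit.
gaudi2_kit_price = 65_000
discount = 1 / 3  # "up to one-third lower"; the actual gap may be smaller

implied_competitor_price = gaudi2_kit_price / (1 - discount)
print(f"Implied competitor price: ${implied_competitor_price:,.0f}")  # ≈ $97,500
```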

Intel CEO Pat Gelsinger said, “Intel is one of the only companies in the world innovating across the full spectrum of the AI market opportunity – from semiconductor manufacturing to PC, network, edge and data center systems. Our latest Xeon, Gaudi and Core Ultra platforms, combined with the power of our hardware and software ecosystem, are delivering the flexible, secure, sustainable and cost-effective solutions our customers need to maximize the immense opportunities ahead.”

On May 1, INTC achieved a significant milestone, surpassing 500 AI models running optimized on new Intel® Core™ Ultra processors, thanks to the company’s investment in client AI, the AI PC transformation, framework optimizations, and AI tools like the OpenVINO™ toolkit. These processors are the industry’s leading AI PC processors, offering enhanced AI experiences, immersive graphics, and optimized battery life.

Solid First-Quarter Performance and Second-Quarter Guidance

During the first quarter that ended March 30, 2024, INTC’s net revenue increased 8.6% year-over-year to $12.72 billion, primarily driven by growth in its personal computing, data center, and AI business. Revenue from the Client Computing Group (CCG), through which Intel continues to advance its mission to bring AI everywhere, rose 31% year-over-year to $7.50 billion.

Furthermore, the company’s non-GAAP operating income was $723 million, compared to an operating loss of $294 million in the previous year’s quarter. Its non-GAAP net income and non-GAAP earnings per share came in at $759 million and $0.18, compared to a net loss and loss per share of $169 million and $0.04, respectively, in the same quarter of 2023.

For the second quarter of fiscal 2024, Intel expects its revenue to come in between $12.5 billion and $13.5 billion, with non-GAAP earnings per share of $0.10.

Despite its solid financial performance and ambitious plans, INTC’s stock has plunged more than 38% over the past six months and nearly 40% year-to-date.

Competing with Nvidia: A Daunting Task

Despite INTC’s solid financial health and strategic moves, the competition with NVDA is fierce. Nvidia’s market performance has been stellar lately, driven by its global leadership in graphics processing units (GPUs) and its foray into AI and machine learning markets. The chip giant has built strong brand loyalty among developers and enterprise customers, which could be challenging for Intel to overcome.

Over the past year, NVIDIA has experienced a significant surge in sales due to high demand from tech giants such as Amazon.com, Inc. (AMZN), Alphabet Inc. (GOOGL), Microsoft Corporation (MSFT), Meta Platforms, Inc. (META), and OpenAI, which have invested billions of dollars in its advanced GPUs essential for developing and deploying AI applications.

Shares of the prominent chipmaker surged approximately 150% over the past six months and more than 196% over the past year. Moreover, NVDA’s stock is up around 2,938% over the past five years. Notably, after Microsoft and Apple, Nvidia recently became the third U.S. company with a market value surpassing $3 trillion.

As a result, NVDA commands a dominant market share of about 92% in the data center GPU market. Nvidia’s success stems from its cutting-edge semiconductor performance and software prowess. The CUDA development platform, launched in 2006, has emerged as a pivotal tool for AI development, with a user base exceeding 4 million developers.

Bottom Line

Proposed funding of $8.5 billion, along with an investment tax credit and eligibility for CHIPS Act loans, is pivotal in Intel’s bid to regain semiconductor leadership in the face of intense competition, particularly from Nvidia. This substantial federal funding will enhance Intel’s manufacturing and R&D capabilities across its key sites in Arizona, New Mexico, Ohio, and Oregon.

While INTC possesses the resources, technological expertise, and strategic vision to challenge NVDA, the path forward is fraught with challenges. Despite Intel’s recent strides in the AI ecosystem, from the data center to edge and PC with products like Xeon 6 processors and Gaudi AI accelerators, Nvidia’s dominance in data center GPUs remains pronounced, commanding a significant market share.

Future success will depend on Intel’s ability to leverage its manufacturing strengths, introduce innovative product lines, and cultivate a compelling ecosystem of software and developer support. As Intel advances its ambitious plans, industry experts and stakeholders will keenly watch how these developments unfold, redefining the competitive landscape in the AI and data center markets.

How Micron Technology Is Poised to Benefit from AI Investments

Artificial Intelligence (AI) continues revolutionizing industries worldwide, including healthcare, retail, finance, automotive, manufacturing, and logistics, driving demand for advanced technology and infrastructure. Among the companies set to benefit significantly from this AI boom is Micron Technology, Inc. (MU), a prominent manufacturer of memory and storage solutions.

MU’s shares have surged more than 70% over the past six months and nearly 104% over the past year. Moreover, the stock is up approximately 12% over the past month.

This piece delves into the broader market dynamics of AI investments and how MU is strategically positioned to capitalize on these trends, offering insights into how investors might act now.

Broader Market Dynamics of AI Investments

According to Grand View Research, the AI market is expected to exceed $1.81 trillion by 2030, growing at a CAGR of 36.6% from 2024 to 2030. This robust market growth is propelled by the rapid adoption of advanced technologies in numerous industry verticals, increased generation of data, developments in machine learning and deep learning, the introduction of big data, and substantial investments from government and private enterprises.

AI has emerged as a pivotal force in the modern digital era. Tech giants such as Amazon.com, Inc. (AMZN), Alphabet Inc. (GOOGL), Apple Inc. (AAPL), Meta Platforms, Inc. (META), and Microsoft Corporation (MSFT) are heavily investing in research and development (R&D), thereby making AI more accessible for enterprise use cases.

Moreover, several companies have adopted AI technology to enhance customer experience and strengthen their presence in the AI-driven Industry 4.0 landscape.

Big Tech has spent billions of dollars on the AI revolution. So far in 2024, Microsoft and Amazon have collectively allocated over $40 billion for investments in AI-related initiatives and data center projects worldwide.

DA Davidson analyst Gil Luria anticipates these companies will spend over $100 billion this year on AI infrastructure. According to Luria, spending will continue to rise in response to growing demand. Meanwhile, Wedbush analyst Daniel Ives projects continued investment in AI infrastructure by leading tech firms: “This is a $1 trillion spending jump ball over the next decade.”

Micron Technology’s Strategic Position

With a $156.54 billion market cap, MU is a crucial player in the AI ecosystem because it focuses on providing cutting-edge memory and storage products globally. The company operates through four segments: Compute and Networking Business Unit; Mobile Business Unit; Embedded Business Unit; and Storage Business Unit.

Micron’s dynamic random-access memory (DRAM) and NAND flash memory are critical components in AI applications, offering the speed and efficiency required for high-performance computing. The company has consistently introduced innovative products, such as HBM2E, which debuted as the industry’s fastest, highest-capacity high-bandwidth memory (HBM), designed to advance generative AI innovation.

This month, MU announced it is sampling its next-generation GDDR7 graphics memory with the industry’s highest bit density. With more than 1.5 TB/s of system bandwidth and four independent channels to optimize workloads, Micron GDDR7 memory allows faster response times, smoother gameplay, and reduced processing times. The best-in-class capabilities of Micron GDDR7 will optimize AI, gaming, and high-performance computing workloads.

Notably, Micron recently reached an industry milestone as the first to validate and ship 128GB DDR5 32Gb server DRAM to address the increasing demands for rigorous speed and capacity of memory-intensive Gen AI applications.

Furthermore, MU has forged strategic partnerships with prominent tech companies like NVIDIA Corporation (NVDA) and Intel Corporation (INTC), positioning the company at the forefront of AI technology advancements. In February this year, Micron started mass production of its HBM3E solution for use in Nvidia’s latest AI chip. Micron’s 24GB 8H HBM3E will be part of NVIDIA H200 Tensor Core GPUs, expected to begin shipping in the second quarter.

Also, Micron's 128GB RDIMMs are ready for deployment on the 4th and 5th Gen Intel® Xeon® platforms. In addition to Intel, Micron’s 128GB DDR5 RDIMM memory will be supported by a robust ecosystem, including Advanced Micro Devices, Inc. (AMD), Hewlett Packard Enterprise Company (HPE), and Supermicro, among many others.

Further, in April, MU qualified a full suite of its automotive-grade memory and storage solutions for Qualcomm Technologies Inc.’s Snapdragon Digital Chassis, a comprehensive set of cloud-connected platforms designed to power data-rich, intelligent automotive services. This partnership is aimed at helping the ecosystem build next-generation intelligent vehicles powered by sophisticated AI.

Robust Second-Quarter Financials and Upbeat Outlook

Solid AI demand and constrained supply accelerated Micron’s return to profitability in the second quarter of fiscal 2024, which ended February 29, 2024. MU reported revenue of $5.82 billion, beating analysts’ estimate of $5.35 billion. That compares to $4.74 billion for the previous quarter and $3.69 billion for the same period in 2023.

The company’s non-GAAP gross margin was $1.16 billion, versus $37 million in the prior quarter and negative $1.16 billion for the previous year’s quarter. Micron’s non-GAAP operating income came in at $204 million, compared to an operating loss of $955 million and $2.08 billion for the prior quarter and the same period last year, respectively.

MU posted non-GAAP net income and earnings per share of $476 million and $0.42 for the second quarter, compared to non-GAAP net loss and loss per share of $2.08 billion and $1.91 a year ago, respectively. The company’s EPS also surpassed the consensus loss per share estimate of $0.24. During the quarter, its operating cash flow was $1.22 billion versus $343 million for the same quarter of 2023.

“Micron delivered fiscal Q2 results with revenue, gross margin and EPS well above the high-end of our guidance range — a testament to our team’s excellent execution on pricing, products and operations,” said Sanjay Mehrotra, MU’s President and CEO. “Our preeminent product portfolio positions us well to deliver a strong fiscal second half of 2024. We believe Micron is one of the biggest beneficiaries in the semiconductor industry of the multi-year opportunity enabled by AI.”

For the third quarter of fiscal 2024, the company expects revenue of $6.6 billion ± $200 million, and its gross margin is projected to be 26.5% ± 1.5%. Also, Micron expects its non-GAAP earnings per share to be $0.45 ± $0.07.
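
Expressed as a band around the midpoint, that revenue guidance implies roughly ±3%; a quick check of the stated figures:

```python
# Micron's fiscal Q3 2024 guidance: revenue of $6.6 billion, plus or minus
# $200 million. Express the dollar band as a percentage of the midpoint.
revenue_mid_bn = 6.6
revenue_band_bn = 0.2

pct_band = revenue_band_bn / revenue_mid_bn
print(f"Range: ${revenue_mid_bn - revenue_band_bn:.1f}B to "
      f"${revenue_mid_bn + revenue_band_bn:.1f}B (±{pct_band:.1%})")
# Range: $6.4B to $6.8B (±3.0%)
```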

Bottom Line

MU is strategically positioned to benefit from the burgeoning AI market, driven by its diversified portfolio of advanced memory and storage solutions, strategic partnerships and investments, robust financial health characterized by solid revenue growth and profitability, and expanding market presence.

The company’s recent innovations, including HBM3E and DDR5 RDIMM memory, underscore its commitment to advancing capabilities across AI and high-performance computing applications.

Moreover, the company’s second-quarter 2024 earnings beat analysts' expectations, supported by the AI boom. Also, Micron offered a rosy guidance for the third quarter of fiscal 2024. Investors eagerly await insights into MU’s financial performance, strategic updates, and outlook during the third-quarter earnings conference call scheduled for June 26, 2024.

Baird Senior Research Analyst Tristan Gerra upgraded MU stock from “Neutral” to “Outperform” and increased the price target from $115 to $150, citing meaningful upside opportunities. Gerra stated that DRAM chip pricing has been rising while supply is anticipated to slow. Morgan Stanley also raised its rating on Micron from “Underweight” to “Equal-Weight.”

As AI investments from numerous sectors continue to grow, Micron stands to capture significant market share, making it an attractive option for investors seeking long-term growth in the semiconductor sector.

The Future of NVIDIA: Post-Split Valuation and Growth Projections

NVIDIA Corporation (NVDA), a prominent force in the AI and semiconductor technology industries, announced a 10-for-1 forward stock split of the company’s issued common stock during its last earnings release in May. Shareholders of record as of June 6 received nine additional shares for each share held after the close on Friday, June 7. Trading will commence on a split-adjusted basis at market open on June 10.

This strategic move is poised to reshape the landscape for Nvidia investors and the broader tech market.

Post-Split Valuation

NVDA was already a leading AI stock in the market, but investor interest in the chipmaker skyrocketed as its 10-for-1 stock split took effect after the market’s close on June 7, multiplying the share count of the hottest stock on the S&P 500 tenfold while cutting its per-share price to a tenth.

Moreover, NVIDIA’s stock has gained more than 158% over the past six months and nearly 222% over the past year. Notably, the stock is up over 3,222% over the past five years. During this remarkable run, Nvidia’s market cap of around $3 trillion surpassed those of Amazon (AMZN) and Alphabet Inc. (GOOGL). Before the 10-for-1 split, the stock traded at a lofty $1,209.
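
Mechanically, a 10-for-1 forward split multiplies the share count by ten and divides the per-share price by ten, leaving the value of an existing position unchanged; a minimal illustration using the $1,209 pre-split price cited above (the 100-share position is hypothetical):

```python
# Illustrate 10-for-1 forward split mechanics for a hypothetical holder
# of 100 shares at the $1,209 pre-split price.
split_ratio = 10
pre_split_price = 1209.00
shares_held = 100  # hypothetical position size

post_split_price = pre_split_price / split_ratio    # $120.90
post_split_shares = shares_held * split_ratio       # 1,000 shares

# The split itself leaves the position's market value unchanged.
assert abs(shares_held * pre_split_price - post_split_shares * post_split_price) < 1e-6
print(f"{post_split_shares} shares at ${post_split_price:.2f}")  # 1000 shares at $120.90
```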

The chip giant’s strategic decision to split its stock follows a broader trend among tech giants to make stock ownership more affordable and appealing to retail investors. With more individual investors gaining access to Nvidia’s shares post-split, trading activity and demand could increase, potentially driving the share price higher.

According to data from BofA research, companies announcing stock splits have historically delivered total returns of about 25% in the 12 months following the split, versus 12% for the S&P 500. Thus, stock splits are seen as a bullish signal, often accompanied by positive investor sentiment and increased buying activity.

Solid Earnings and a Healthy Outlook

The stock split isn’t the only reason for NVDA’s latest bull run. The company also reported better-than-expected revenue and earnings in the fiscal 2025 first quarter, driven by robust demand for its AI chips. During the quarter that ended April 28, 2024, Nvidia’s revenue rose 262% year-over-year to $26.04 billion. That surpassed the consensus revenue estimate of $24.59 billion.

The company’s largest business segment, Data Center, which includes its AI chips and several additional parts required to run big AI servers, reported a record revenue of $22.60 billion, up 427% year-over-year.

“Our data center growth was fueled by strong and accelerating demand for generative AI training and inference on the Hopper platform. Beyond cloud service providers, generative AI has expanded to consumer internet companies, and enterprise, sovereign AI, automotive and healthcare customers, creating multiple multibillion-dollar vertical markets,” said Jensen Huang, founder and CEO of NVDA.

“We are poised for our next wave of growth. The Blackwell platform is in full production and forms the foundation for trillion-parameter-scale generative AI,” Huang added. During a call with analysts, the CEO mentioned that there would be significant Blackwell revenue this year and that the new chip would be deployed in data centers by the fourth quarter.

The chipmaker’s non-GAAP gross profit grew 328.2% from the previous year’s quarter to $20.56 billion. NVDA’s non-GAAP operating income was $18.06 billion, an increase of 491.7% year-over-year. Its non-GAAP net income rose 461.7% year-over-year to $15.24 billion. Also, it posted a non-GAAP EPS of $6.12, compared to analysts’ estimate of $5.58, and up 461.5% year-over-year.

Furthermore, NVIDIA’s cash, cash equivalents and marketable securities were $31.44 billion as of April 28, 2024, compared to $25.98 billion as of January 28, 2024.

According to its outlook for the second quarter of 2025, the company expects revenue to be $28 billion, plus or minus 2%. Its non-GAAP gross margin is expected to be 75.5%, plus or minus 50 basis points. NVDA’s non-GAAP operating expenses are anticipated to be approximately $2.8 billion.

Raised Dividends

NVDA raised its dividend payouts to reward shareholders and demonstrate confidence in its financial strength and growth prospects. The company increased its quarterly cash dividend by 150% from $0.04 per share to $0.10 per share of common stock. The dividend is equivalent to $0.01 per share on a post-split basis and will be paid on June 28 to all shareholders of record on June 11.
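
The arithmetic behind the raise and the split adjustment, as a quick check of the stated figures:

```python
# Quarterly dividend raised from $0.04 to $0.10 per pre-split share,
# equivalent to $0.01 per share on a post-split (10-for-1) basis.
old_dividend, new_dividend = 0.04, 0.10
split_ratio = 10

increase = new_dividend / old_dividend - 1          # 1.5, i.e., a 150% raise
post_split_dividend = new_dividend / split_ratio    # $0.01 per share
print(f"Increase: {increase:.0%}; post-split dividend: ${post_split_dividend:.2f}")
# Increase: 150%; post-split dividend: $0.01
```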

While Nvidia's dividend yield is modest compared to its tech peers, its considerable cash flow and strong balance sheet provide ample room for growth.

Dominance in AI and Data Center Markets Fuels Unprecedented Growth Opportunities

NVDA is strategically positioned at the forefront of the AI and data center markets, with high demand for its AI chips for data processing, training, and inference from large cloud service providers, GPU-specialized cloud providers, enterprise software firms, and consumer internet companies. In addition, vertical industries, led by automotive, financial services, and healthcare, drive the demand.

Statista projects the generative AI (GenAI) market to reach $36.06 billion in 2024, with the U.S. accounting for the largest market size of $11.66 billion. Further, the GenAI market is expected to total $356.10 billion by 2030, expanding at a CAGR of 46.5% from 2024 to 2030.
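
Those two endpoints are consistent with the stated growth rate; a quick verification, assuming the 46.5% CAGR compounds annually over the six years from 2024 to 2030:

```python
# Check Statista's GenAI projection: $36.06 billion in 2024, growing at a
# 46.5% CAGR, should land near the stated $356.10 billion by 2030.
base_2024_bn = 36.06
cagr = 0.465
years = 2030 - 2024

projected_2030_bn = base_2024_bn * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected_2030_bn:.1f} billion")  # ≈ $356.5 billion
```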

Over the past year, Nvidia has experienced a significant surge in sales due to robust demand from tech giants like Google, Microsoft Corporation (MSFT), Meta Platforms, Inc. (META), Amazon, and OpenAI, which have invested billions of dollars in Nvidia’s advanced GPUs essential for developing and deploying AI applications. In January, META announced a sizable order of 350,000 high-end H100 graphics cards from Nvidia.

As a result, NVDA holds a market share of about 92% in the data center GPU market for generative AI applications.

Bottom Line

NVDA’s recent 10-for-1 stock split has significantly impacted its valuation and market appeal. This strategic move not only made Nvidia's shares more accessible to retail investors but also fueled increased trading activity and demand. When the split took effect on Friday, the share count increased tenfold while the per-share price adjusted proportionally lower, making the stock more affordable amid heightened investor interest.

NVIDIA's strong financial performance, as evidenced by the fiscal 2025 first-quarter report, further solidifies its position in the AI and data center market. The company reported more than threefold revenue growth, driven by the massive demand for its AI processors from major tech companies, including Microsoft, Meta, Amazon, Google, and OpenAI.

The chipmaker’s remarkable growth has propelled it to the third-largest market capitalization globally, surpassing peers such as AMZN and META.

Further, the company’s revenue and EPS for the fiscal year ending January 2025 are expected to grow 97.9% and 108.9% year-over-year to $120.55 billion and $27.07, respectively. For fiscal 2026, analysts expect its revenue and EPS to increase 32.4% and 32.6% from the prior year to $159.55 billion and $35.90, respectively. With a healthy outlook, NVDA continues to attract investors looking for long-term growth opportunities.

Moreover, the recent decision to raise dividends by 150% showcases NVDA's confidence in its financial strength and growth prospects, making it more attractive to income-oriented investors. This move, coupled with the stock split, appeals to different investor demographics and reflects NVDA's commitment to rewarding shareholders while positioning itself for future growth in the AI and semiconductor sectors.