NVIDIA (NVDA) vs. Advanced Micro Devices (AMD): Which Stock Is Proving to Be the Better Long-Term AI Buy

After its earnings release on May 24, the Santa Clara-based graphics chip maker NVIDIA Corporation (NVDA) captured the spotlight by becoming the first semiconductor company to hit a valuation of $1 trillion.

NVDA has also blown away Street expectations ahead of its quarterly earnings release on August 23, with profits for the current quarter expected to come in at least 50% higher than analyst estimates and the momentum expected to continue for the foreseeable future.

On the other hand, Advanced Micro Devices, Inc. (AMD) has come a long way since its humble beginnings as a supplier for Intel Corporation (INTC). In its second-quarter earnings release, despite persistent weakness in the PC market, the company’s results topped analyst estimates.

NVDA carved out its niche and cornered a significant share of the GPU domain through advancements in parallel (and consequently accelerated) computing, which began back in 2006 with the release of a software toolkit called CUDA. At AMD, Chair and CEO Dr. Lisa Su is widely credited with the company’s turnaround: once widely dismissed due to performance issues and delayed releases, AMD is now the only company in the world to design both CPUs and GPUs at scale.

The New (Perhaps Only) Game in Town

Like general-purpose technologies before it, such as the steam engine and electricity, Artificial Intelligence (AI) already touches and influences all facets of our lives, including how we shop, drive, date, entertain ourselves, manage our finances, take care of our health, and much more.

However, late in November of last year, when OpenAI opened its artificial intelligence chatbot, ChatGPT, to the general public, all hell broke loose. The application took the world by storm. It amassed 1 million users in five days and 100 million monthly active users only two months into its launch to become the fastest-growing application in history.

The generative AI-powered application’s capability to provide (surprisingly) human-like responses to user requests has both fascinated and concerned individuals, businesses, and institutions about the possibilities of the technology. ChatGPT is powered by a large language model (LLM), which gives the application the ability to understand human language and provide responses based on the large body of information on which the model has been trained.

NVDA is now reaping the rewards of all the invisible work it did in parallel computing, which proved ideal for the deep learning behind artificial neural networks. Thanks to that head start in the AI tech race, its A100 chips, which power LLMs like ChatGPT, have become indispensable for Silicon Valley tech giants.

To put things into context, the supercomputer behind OpenAI’s ChatGPT needed 10,000 of NVDA’s famous chips. With each chip costing $10,000, a single algorithm that’s fast becoming ubiquitous is powered by semiconductors worth $100 million.
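
As a quick sanity check, here is the arithmetic behind that headline figure, using the chip count and unit price cited above:

```python
# Back-of-the-envelope cost of the GPUs behind ChatGPT's supercomputer,
# using the figures cited above: 10,000 chips at roughly $10,000 each.
num_chips = 10_000
price_per_chip_usd = 10_000

total_usd = num_chips * price_per_chip_usd
print(f"${total_usd / 1e6:.0f} million")  # → $100 million
```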

However, AMD isn’t too far behind either. According to Dr. Su, Data Center is the most strategic piece of business as far as high-performance computing is concerned. AMD underscored this commitment with the recent acquisition of data center optimization startup Pensando for $1.9 billion.

At its Data Center & AI Technology Premiere, AMD made its ambitions to capitalize on the AI boom loud and clear with the launch of the MI300X (a GPU-only chip) as a direct competitor to NVDA’s H100. The chip includes 8 GPU chiplets (5nm GPUs with 6nm I/O) with 192GB of HBM3 and 5.2TB/s of memory bandwidth.

AMD believes this will allow LLMs’ inference workloads that require substantial memory to be run using fewer GPUs, which could improve the TCO (Total Cost of Ownership) compared to the H100.
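
The memory math behind that claim can be sketched in a few lines. This is an illustrative calculation, not AMD's methodology: the 192GB figure comes from the text, the 80GB capacity is the H100's published HBM size, and the 70B-parameter model, fp16 precision (2 bytes per weight), and the `min_gpus` helper are assumptions for illustration; real deployments also need memory for the KV cache and activations.

```python
import math

# Minimum GPUs needed just to hold a model's weights in memory:
# ceil(weight_gb / per-GPU memory). Ignores KV-cache/activation overhead.
def min_gpus(params_billions: float, gpu_mem_gb: int, bytes_per_param: int = 2) -> int:
    weight_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB
    return math.ceil(weight_gb / gpu_mem_gb)

# A hypothetical 70B-parameter model in fp16 needs ~140GB of weights:
print(min_gpus(70, 192))  # MI300X-class card (192GB) → 1
print(min_gpus(70, 80))   # H100-class card (80GB)   → 2
```

Fewer cards per model is what drives the total-cost-of-ownership argument: power, rack space, and interconnect all scale with GPU count.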

The Road Ahead

The optimism surrounding both companies is justified.

Beyond its presence in data centers, cloud computing, and AI, NVDA’s chips are making their way into self-driving cars and into Omniverse, its engine for creating digital twins that can be used to run simulations and train AI algorithms for various applications.

On the other hand, AMD has also trained its guns on the burgeoning AI accelerator market, projected to exceed $30 billion in 2023 and potentially top $150 billion in 2027.
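
Those two endpoints imply a steep compound annual growth rate, which a one-line check makes explicit (projection figures as cited above):

```python
# Implied CAGR of the AI accelerator market: ~$30B (2023) to ~$150B (2027).
start_usd_b, end_usd_b, years = 30, 150, 4
cagr = (end_usd_b / start_usd_b) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 49.5% per year
```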

AMD is one of the few companies making high-end GPUs needed for artificial intelligence. With AI being seen as a tailwind that could drive PC sales, the company announced plans to launch new Radeon 7000 desktop GPUs at its quarterly earnings release. It is being speculated that the GPU will come with two 8-pin PCIe power connectors and four video out ports, including three DisplayPort 2.1 and one HDMI 2.1.

Caveats

AMD existed as both a chip designer and manufacturer, at least until 2009. However, significant capex requirements associated with manufacturing, amid financial troubles in the wake of the Great Recession, compelled the company to spin off its fabs to form GlobalFoundries Inc. (GFS), which has focused on manufacturing low-end chips ever since.

Today, both NVDA and AMD operate as fabless chip companies. Hence, both companies face risks of backward integration by companies such as Apple Inc. (AAPL), Amazon.com, Inc. (AMZN), and Tesla Inc. (TSLA) with the wherewithal to develop the intellectual capital to design their own chips.

Moreover, almost all of their manufacturing has been outsourced to Taiwan Semiconductor Manufacturing Company Ltd. (TSM), which has yet to diversify significantly outside Taiwan and has become a bone of contention between the world's two leading superpowers.

With geopolitical risk a potential Achilles' heel for both companies, their efforts toward geographical diversification are receiving much-needed political encouragement through the Chips and Science Act.

Dr. Su, who also serves on President Biden's Council of Advisors on Science and Technology, pushed hard for the passage of the Act, which aims to on-shore and de-risk semiconductor manufacturing in the interest of national security by setting aside $52 billion to incentivize companies to manufacture semiconductors domestically.

Bottom Line

Given its massive importance and cornucopia of applications, it’s hardly surprising that Zion Market Research forecasts the global AI industry to grow to $422.37 billion by 2028. Hence, this field has understandably garnered massive attention from investors who are reluctant to miss the bus on such a watershed development in the history of humankind.

Therefore, in view of product diversification, increasing traction in the GPU segment, and relatively greater valuation comfort, investors in AMD could enjoy more sustained upside potential than those in NVDA.

Intel Corporation (INTC) Races to Dominate the AI Market – Will It Succeed?

In our June 3 post, we concluded that the resurgent Intel Corporation (INTC), which had weathered back-to-back quarterly losses amid softening PC demand, consequent surplus inventory, and realignment toward GPU-heavy and AI-centered enterprise demand, could be worth more than what was being suggested by its market price at that time.

By announcing its return to profitability during last week’s earnings release, the pioneer of modern computing lived up to our expectations while exceeding the Street's. INTC posted a net income of $1.5 billion, compared to a net loss of $454 million during the previous-year quarter. Its adjusted EPS came in at $0.13, compared to the adjusted loss of $0.03 per share expected by Wall Street.

While the market greeted the news with a 7% surge in the stock price in extended trading and a further 5% gain the following morning, INTC’s revenue, despite exceeding low expectations, declined 15.7% year-over-year to $12.9 billion, marking the sixth consecutive quarter of sales decline.
In view of a subdued topline, much of the outperformance in the quarterly results can be attributed to the progress INTC had made in cutting $3 billion in costs this year.

Since CEO Pat Gelsinger rejoined INTC, the company has exited nine lines of business for combined annual savings of more than $1.7 billion; it has also slashed its dividend and announced plans to save $10 billion per year by 2025, including through layoffs.

With INTC’s Client Computing Group, which includes the company’s laptop and desktop processor shipments, and its server chip division, reported as Data Center and AI, posting year-over-year declines of 12% and 15%, respectively, Pat Gelsinger forecasted “persistent weakness” in all segments of its business through year-end and said server chip sales won’t recover until the fourth quarter.

With upside through cost optimization capped and customers prioritizing GPUs over CPUs to handle ever-increasing AI/ML workloads, INTC is eager to join the race currently being led by NVIDIA Corporation (NVDA) and Advanced Micro Devices, Inc. (AMD). The company is also working on the manufacturing front, where it still depends significantly on Taiwan Semiconductor Manufacturing Company Ltd. (TSM).

By doubling down on the fab business, INTC aims to match TSM’s chip-manufacturing capabilities by 2026, enabling it to bid to make the most advanced mobile processors for other companies, a strategy the company calls “five nodes in four years.”

To that end, INTC is pursuing an aggressive IDM 2.0 road map with new manufacturing facilities in Oregon, New Mexico, Arizona, Ireland, and Israel in the pipeline to augment the capabilities of 15 fabs worldwide and facilities to assemble and test the manufactured chips in Vietnam, Malaysia, Costa Rica, China, and the U.S.

Among those, the new Arizona facilities will manufacture chips for the company and for customers such as Amazon, Qualcomm, and others as part of Intel Foundry Services. While the company still depends on TSMC for the 5nm chips used in AI applications, it aims to take a quantum leap in that direction with its even smaller 18A process node.

While companies such as Amazon.com, Inc. (AMZN) are turning to chips designed in-house to support their cloud infrastructure, INTC, in addition to manufacturing both wafers and packaging for AI accelerators, also competes directly with its Gaudi chips.

The company’s efforts are also receiving much-needed political encouragement in the form of the Chips and Science Act, which is aimed at on-shoring and de-risking semiconductor manufacturing in the interest of national security.

Recently, Pat and his team at INTC upped the ante by unveiling ambitious new plans to incorporate AI into every product the company creates. This announcement comes as INTC’s upcoming Meteor Lake chips are rumored to feature a built-in neural processor specifically designed for handling machine learning tasks.

With an objective to “democratize AI,” Pat was loud and clear about INTC’s plans to make it a ubiquitous and integral feature of its products designed to cater to all segments of the computing ecosystem, including “at the edge in the Client, in the enterprise, as well as in the cloud.”
The upbeat CEO forecasted that AI would permeate all business domains, including the client-facing consumer electronics market, enterprise data centers, and even manufacturing, and make its way into personal devices, such as hearing aids and personal computers. AI is already present as Copilot for Windows 11, allowing users to type questions and perform specific actions, and it could play a significant role in the next iteration of Windows.

Bottom Line

For the third quarter of the fiscal year, INTC expects adjusted earnings of 20 cents per share on revenue of $13.4 billion at the midpoint of guidance.
Whether or not INTC manages to meet or exceed the above target could go a long way in helping investors determine if its ambitious turnaround is on track to restore the company to its former glory.

3 AI Stocks Dominating the Market: How to Trade Them

Artificial Intelligence (AI) is an umbrella term that denotes a series of programs and algorithms designed to mimic human intelligence and perform cognitive tasks efficiently with little to no human intervention.

AI, in its various forms and applications, can analyze large volumes of data generated during the entire course of our increasingly digital existence and identify trends and exceptions to help us develop better insights and make more effective decisions.

Unlike other next-big things, such as nuclear fusion, quantum computing, and flying cars, which are practically (and literally) pies in the sky, AI has been around for quite some time, influencing how we shop, drive, date, entertain ourselves, manage our finances, take care of our health, and much more.
However, the technology came into the limelight late last year with the release of ChatGPT, which in its own description, is “an AI-powered chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. It uses deep learning techniques to generate human-like responses to text inputs in a conversational manner.”

ChatGPT, which took the world by storm by signing up 1 million users in five days and amassing 100 million monthly active users only two months into its launch, is one of several use cases of generative AI, the subset of algorithms that creates and returns content, such as human-like text, images, and videos, based on the user's written instructions (prompts).

Given its massive importance, it’s hardly surprising that Zion Market Research forecasts the global AI industry to grow to $422.37 billion by 2028. Hence, this field has understandably garnered massive attention from investors who are reluctant to miss the bus on such a watershed development in the history of humankind.

Although OpenAI, the creator of ChatGPT, is not a publicly listed company, Microsoft Corporation (MSFT) has bet big on the company with the announcement of a multiyear, multibillion-dollar investment deal. At the World Economic Forum held in Davos this year, CEO Satya Nadella discussed how the underlying technology would eventually be ubiquitous across MSFT’s products. The process has already begun with updates to its Bing search engine.

However, more recently, the company making headlines as its stock shot up on widespread public interest in AI is NVIDIA Corporation (NVDA). Following its earnings release on May 24, the Santa Clara-based graphics chip maker became the first semiconductor company to hit a valuation of $1 trillion.

NVDA’s A100 chips, powering LLMs like ChatGPT, have become indispensable for Silicon Valley tech giants. To put things into context, the supercomputer behind OpenAI’s ChatGPT needed 10,000 of Nvidia’s famous chips. With each chip costing $10,000, a single algorithm that’s fast becoming ubiquitous is powered by semiconductors worth $100 million.

Earlier this year, Advanced Micro Devices, Inc. (AMD) made history by surpassing Intel Corporation (INTC) in terms of market cap for the first time ever. Chair and CEO Dr. Lisa Su is widely credited with AMD's turnaround and transition from being widely dismissed due to performance issues and delayed releases to being the only company in the world to design both CPUs and GPUs at scale.

According to Dr. Su, Data Center is the most strategic piece of business as far as high-performance computing is concerned. AMD underscored this commitment with the recent acquisition of data center optimization startup Pensando for $1.9 billion.

AMD has made its ambitions to capitalize on the AI boom loud and clear with the launch of MI300X (a GPU-only chip) as a direct competitor to NVDA’s H100. The chip includes 8 GPUs (5nm GPUs with 6nm I/O) with 192GB of HBM3 and 5.2TB/s of memory bandwidth.

AMD believes this will allow LLMs’ inference workloads that require substantial memory to be run using fewer GPUs, which could improve the TCO compared to the H100.

Lastly, the company aims to address the growing AI accelerator market, projected to be over $30 billion in 2023 and potentially exceed $150 billion in 2027.

The Catch

While the chip and software companies at the cutting edge of the AI arms race have contributed to a melt-up in the markets that has seen the Nasdaq Composite gain more than 36% year-to-date, investors would be wise to be aware of the limitations and loopholes of investing in the technology before FOMO drives them to inflate a "baby bubble" growing in plain sight.

LLM-based generative AI chatbots are auto-complete on steroids, trained on vast data. While they are really good (and continually getting better) at predicting what the next word is going to be and extrapolating that ability to generate extensive literature, they lack contextual understanding.
Consequently, the algorithms struggle with nuances such as sarcasm, irony, satire, and analogies. This also leads to a propensity to “hallucinate” and generate responses even when those are factually and logically incorrect.

Moreover, since, in the words of Morgan Housel, “things that have never happened before happen all the time,” it could be challenging for any AI tool to deal with tails, exceptions, and outliers in the shifting sands of business, economy, and society.
Even AAPL co-founder Steve Wozniak, who knows more than a thing or two about technology, agrees with the ‘A’ and not the ‘I’ of Artificial Intelligence.

Stick to Basics

Just as we have learned during the dot-com, cryptocurrency, real estate, and numerous other bubbles through the ages, markets can stay irrational longer than investors can stay solvent.

Big tech mega caps (mentioned earlier in the article) are involved in providing the infrastructure and computing horsepower required to make the data and power-hungry AI algorithms work. Moreover, since AI is well-embedded into their business operations and market offerings and AI as a service is (still) a small portion of their revenue, concentration risks can be more easily managed.

Therefore, rather than getting too carried away and stretching a worthwhile and useful innovation to frothy excesses with unrealistic expectations, it could be wise and safe for investors to add to their positions in the aforementioned stocks on dips.

Chips and AI: Advanced Micro Devices, Inc. (AMD)'s Next-Level Breakthroughs!

Last month, we gauged the prospects of two semiconductor giants, NVIDIA Corporation (NVDA) and Intel Corporation (INTC), which have carved out their niches and cornered a significant share of the GPU and CPU domains, respectively. In this article, we look at another chip company and its agile efforts to grab the best of both worlds while cultivating a widespread following of its own.

Founded in 1968 by a group of 8 men led by the larger-than-life Jerry Sanders, Advanced Micro Devices, Inc. (AMD) released its first product in 1970 and went public in 1972. Despite starting life as a supplier for INTC, AMD parted ways with its client in the mid-80s, and by the late 80s, it reverse-engineered INTC’s products to make its own chips that were compatible with INTC’s software.

AMD existed as both a chip designer and manufacturer, at least until 2009. However, significant capex requirements associated with manufacturing, amid financial troubles in the wake of the Great Recession, compelled the company to spin off its fabs to form GlobalFoundries Inc. (GFS), which has focused on manufacturing low-end chips ever since.

With the acquisition of ATI, a major fabless chip company, in 2006, AMD began shifting its focus toward chip designing and turned to Taiwan Semiconductor Manufacturing Company Ltd. (TSM) as its exclusive chip manufacturer.

With manufacturing no longer weighing it down, AMD started catching up with INTC through its Zen line of CPUs. Earlier this year, the former made history by surpassing the latter’s market cap for the first time ever. Chair and CEO Dr. Lisa Su is widely credited with the turnaround and transition from being widely dismissed due to performance issues and delayed releases to being the only company in the world to design both CPUs and GPUs at scale.

We look at how the unwavering focus of Dr. Su and her team on great products, customer relations, and a simplified company structure that can respond to dynamic business conditions with agility is shaping AMD’s offerings in each product category.

CPU Portfolio

Despite a conservative outlook, AMD believes its Genoa server CPUs are superior to competitive offerings in terms of performance and efficiency across diverse workloads, including AI. During the recent AMD Data Center & AI Technology Premiere, the company expanded its EPYC server CPU portfolio by launching the highly anticipated Bergamo EPYC CPUs optimized for cloud environments.

Given the focus on single-threaded performance and energy efficiency, Meta Platforms, Inc. (META), which has collaborated with AMD to customize the design of the Bergamo server, reported seeing 2.5 times greater performance than AMD's previous generation Milan CPUs and notable improvements in total cost of ownership (TCO).

In addition, AMD introduced Genoa-X, another workload-optimized alternative to Genoa aimed at faster general-purpose computing and technical computing tasks. The company also reported that its upcoming server CPU, Turin, has shown promising initial results and remains on schedule for a 2024 release.

Data Center Portfolio

According to Dr. Su, Data Center is the most strategic piece of business as far as high-performance computing is concerned. AMD underscored this commitment with the recent acquisition of data center optimization startup Pensando for $1.9 billion.

At the premiere, AMD’s ambitions to capitalize on the AI boom were loud and clear, with the launch of MI300X (a GPU-only chip) as a direct competitor to NVDA’s H100. The chip includes 8 GPUs (5nm GPUs with 6nm I/O) with 192GB of HBM3 and 5.2TB/s of memory bandwidth.

AMD believes this will allow LLMs’ inference workloads that require substantial memory to be run using fewer GPUs, which could improve the TCO compared to the H100.

Lastly, the company aims to address the growing AI accelerator market, projected to be over $30 billion in 2023 and potentially exceed $150 billion in 2027.

Gaming and Other Applications

While INTC and NVDA control most of the CPU and GPU market, respectively, AMD dominates gaming by designing 83% of gaming console processors.

The recently launched AMD Ryzen 5 5600X3D is equipped with AMD’s 3D V-Cache technology. While its specifications are close to both the Ryzen 7 5800X3D and the non-3D Ryzen 5 5600X, it carries far more L3 cache than the latter, giving it an edge in gaming performance.

Moreover, with Moore’s Law, the engine of computer chip advancement, showing visible signs of slowing and the five-decade-old x86 architecture slowly but surely ceding ground to ARM, general-purpose computing using CPUs is making way for more customized solutions.

That prompted AMD to acquire Xilinx for $49 billion in one of the biggest acquisitions in semiconductor history. Xilinx is known for its reprogrammable adaptive chips, called Field-Programmable Gate Arrays (FPGAs), which have diverse applications in robotics, telecommunications, agriculture, and space exploration.

As a result, AMD is expanding its footprint from PCs and supercomputers to Teslas and the Mars rover.

Road Ahead

Despite its future readiness, geopolitical tensions between the U.S. and China could turn out to be the Achilles heel for AMD since all of its chips are made in China and Taiwan. Also, Mainland China accounts for roughly 30% of the company’s revenues.

Dr. Su also serves on President Biden’s Council of Advisors on Science and Technology, which pushed hard for the recent passage of the Chips and Science Act, aimed at on-shoring and de-risking semiconductor manufacturing in the interest of national security by setting aside $52 billion to incentivize companies to manufacture semiconductors domestically.

Geographical diversification, as a result of this Act, could act as a hedge against geopolitical tensions for AMD by reducing reliance on Asian manufacturing.

Bottom Line

As AMD continues to advance its x86 computing chips while diversifying into high-performance and customized computing, its more than 70% stock price gain since the beginning of the year (coinciding with the AI wave) could indicate a company poised to gain market share and capitalize on expanding demand for AI technology across industries.