Amazon Web Services (AMZN) isn’t putting all its chips on one AI model. Instead, it aims to win by keeping friends close and potential rivals closer.
While other hyperscalers such as Microsoft (MSFT), Google (GOOG, GOOGL), and Meta (META) compete on large language models, AWS hopes to offer customers whatever models they want. The cloud giant is positioning itself not just as an artificial intelligence platform to train models but as a marketplace to sell them.
“There’s not going to be one model that’s going to rule them all,” AWS CEO Matt Garman told Yahoo Finance at the Goldman Sachs 2024 Communacopia and Technology Conference. “The best model today may not be the best model tomorrow. … If you have a platform that has a bunch of models available, it’s actually relatively easy to take advantage of new ones or add new capabilities from other providers as they come along.”
Amazon CEO Andy Jassy mentioned this strategy on Amazon’s latest earnings call. Jassy noted that Amazon’s Bedrock service has the “broadest selection of LLMs” and offers a variety of leading foundational models to AWS customers, who can then build their own AI applications using Anthropic’s Claude 3 models, Meta’s Llama 3 models, and Amazon’s Titan models.
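Garman’s point about model portability can be made concrete. Below is a minimal sketch, assuming Python with boto3 and Bedrock’s Converse API; the model IDs are illustrative examples (available models and exact versions vary by account and region), but the idea from the article holds: switching providers is largely a matter of changing a model identifier string.

```python
# Minimal sketch of swapping foundation models via Bedrock's Converse API.
# Model IDs below are illustrative; availability varies by account and region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Candidate models -- switching providers is just a different modelId string.
MODEL_IDS = [
    "anthropic.claude-3-sonnet-20240229-v1:0",  # Anthropic Claude 3
    "meta.llama3-8b-instruct-v1:0",             # Meta Llama 3
    "amazon.titan-text-express-v1",             # Amazon Titan
]

def ask(model_id: str, prompt: str) -> str:
    """Send one user message to the given model and return its reply text."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    question = "Summarize the benefits of a multi-model AI platform in one sentence."
    for model_id in MODEL_IDS:
        print(model_id, "->", ask(model_id, question))
```

Because the request and response shapes are uniform across providers, a customer who wants to try a newly added model only changes the identifier, not the application code built around it.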
The latest example of this strategy emerged on Monday when AWS and Oracle announced a partnership despite 15 years of competition between the two cloud providers.
Cooperation, not direct competition, is how AWS plans to diversify its revenue streams and monetize AI. AWS is projected to bring in $105 billion in revenue this year, about 17% of Amazon’s total revenue, according to estimates.
AWS not ‘rushing in’ to chatbots
When asked about the perception that Amazon is falling behind on AI — particularly when compared to Microsoft — Garman was quick to mention that for Microsoft, “it’s not their own technology. … OpenAI was listed as a competitor of theirs, which is an interesting dynamic for them. … We like to partner, not necessarily compete.”
In other words, AWS’s perceived sluggishness in AI adoption is a strategic feature, not a bug.
“We felt that it was more important to actually build a baseline platform that customers could really go build real enterprise value into their applications on as opposed to rushing in and quickly getting out chatbot technology that … looks cool and allows people to write haikus,” Garman said. “That’s not really the eventual place that the value is going to come from this technology.”
That focus on AI infrastructure has helped buoy the stock amid a tech pullback. Year to date, Amazon stock has gained 18%, outperforming the Nasdaq index and other hyperscalers, such as Microsoft and Google.
And AWS is not just offering multiple LLMs to customers; the company also offers chips from Nvidia (NVDA), AMD (AMD), and Intel (INTC), as well as its own.
“If you talk to customers out there, still, the large majority of usage is on Nvidia chips,” Garman said. “They have a great platform … and they’re very popular with customers.”
That popularity explains why AWS has avoided framing its chips as direct competitors to Nvidia’s. While Garman would not share details about the ratio of Nvidia chips to AWS chips on internal workloads, he agreed that relying more heavily on in-house silicon over time would help protect margins.
“I don’t know that we’ll ever be fully reliant on [in-house chips] because I think that the Nvidia team builds some great processors and some great products, and they execute really well,” he said.
In his remarks at the conference, Garman walked a fine line, promoting AWS’s chips while reminding the audience that AWS offers Nvidia GPUs as well, again casting customer choice as AWS’s key differentiator.
Nvidia CEO Jensen Huang is set to speak at the Goldman conference on Wednesday morning. Garman quipped that Huang would have plenty to say about Nvidia’s partnership with AWS.