Meta has announced the newest addition to its Llama family of generative AI models: Llama 3.3 70B.
In a post on X, Ahmad Al-Dahle, VP of generative AI at Meta, said that the text-only Llama 3.3 70B delivers the performance of Meta’s largest Llama model, Llama 3.1 405B, at lower cost.
“By leveraging the latest advancements in post-training techniques … this model improves core performance at a significantly lower cost,” Al-Dahle wrote.
Al-Dahle published a chart showing Llama 3.3 70B outperforming Google’s Gemini 1.5 Pro, OpenAI’s GPT-4o, and Amazon’s newly released Nova Pro on a number of industry benchmarks, including MMLU, which evaluates a model’s ability to understand language. Via email, a Meta spokesperson said that the model should deliver improvements in areas like math, general knowledge, instruction following, and app use.
"Introducing Llama 3.3 – a new 70B model that delivers the performance of our 405B model but is easier & more cost-efficient to run. By leveraging the latest advancements in post-training techniques including online preference optimization, this model improves core performance at…" — Ahmad Al-Dahle (@Ahmad_Al_Dahle), December 6, 2024
Llama 3.3 70B, which is available for download from the AI dev platform Hugging Face and other sources, including the official Llama website, is Meta’s latest play to dominate the AI field with “open” models that can be used and commercialized for a range of purposes.
Meta’s terms constrain how certain developers can use its Llama models; platforms with more than 700 million monthly users must request special permission from the company. But for many devs and companies, it’s immaterial that Llama models aren’t “open” in the strictest sense. According to Meta, its Llama models have racked up more than 650 million downloads.
Meta has leveraged Llama for its own ends, as well. Meta AI, the company’s AI assistant, which is powered entirely by Llama models, now has nearly 600 million monthly active users, according to an Instagram post by CEO Mark Zuckerberg on Friday. Zuckerberg claims that Meta AI is on track to be the most-used AI assistant in the world.
The open nature of Llama has been both a blessing and a curse for Meta.
In November, a report emerged that Chinese military researchers had used a Llama model to develop a defense chatbot. Meta responded by making its Llama models available to U.S. defense partners.
Meta has also expressed concerns about its ability to comply with the AI Act, the EU law that establishes a legal and regulatory framework for AI — calling the law’s implementation “too unpredictable.” At issue for the company are related provisions in the GDPR, the EU’s privacy law, pertaining to AI training. Meta trains AI models on the public data of Instagram and Facebook users who haven’t opted out — data that in Europe is subject to GDPR guarantees.
EU regulators earlier this year requested that Meta halt training on European user data while they assessed the company’s GDPR compliance. Meta relented, while at the same time endorsing an open letter calling for “a modern interpretation” of GDPR that doesn’t “reject progress.”
Meta, not immune to the technical challenges other AI labs are facing, is ramping up its compute infrastructure to train and serve future generations of Llama models. The company announced Wednesday that it will build a $10 billion AI data center in Louisiana, its largest AI data center yet.
Zuckerberg said on Meta’s Q2 2024 earnings call that to train the next major set of Llama models, Llama 4, the company will need 10x more compute than what was needed to train Llama 3.
Training large language models can be a costly business. Meta’s capital expenditures rose nearly 33% to $8.5 billion in Q2 2024, from $6.4 billion a year earlier, driven by investments in servers, data centers and network infrastructure.