Llama 4: Smarter, Faster, More Efficient Than Ever
The first models in the Llama 4 herd, available today in Azure AI Foundry and Azure Databricks, let users create more personalised multimodal experiences.
The release lets developers apply Llama 4 models in applications that work with large volumes of unlabelled text, image, and video data.
The models are optimised for straightforward deployment, cost-effectiveness, and performance that scales to billions of users.
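As a brief sketch of what calling such a deployment can look like, the snippet below assumes a Llama 4 chat deployment in Azure AI Foundry and the `azure-ai-inference` Python package; the endpoint, API key, and model name are placeholders to be replaced with values from your own project.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholders: use the endpoint and key from your own Azure AI Foundry deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-foundry-resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    model="Llama-4-Scout-17B-16E-Instruct",  # placeholder; check the exact name in the model catalogue
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="Summarise the Llama 4 model family in two sentences."),
    ],
    temperature=0.7,
    max_tokens=256,
)
print(response.choices[0].message.content)
```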
Meta claims that Llama 4 Scout, which fits on a single H100 GPU, is more powerful than its Llama 3 models and among the best multimodal models in its class.
Offering good quality at a lower cost than Llama 3.3 70B, Llama 4 Maverick is a general-purpose mixture-of-experts LLM with 17 billion active parameters, 128 experts, and 400 billion total parameters.
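To make the active-versus-total distinction concrete, here is a minimal toy sketch of mixture-of-experts routing. It is not Meta's implementation; the expert count, dimensions, and top-k value are illustrative assumptions. Each token runs through only the experts its router selects, so per-token compute tracks the 17 billion active parameters rather than the full 400 billion stored in the model.

```python
# Toy mixture-of-experts routing (illustrative only, not Meta's implementation).
# Each token is sent to its top_k experts, so only a fraction of the stored
# ("total") parameters are exercised per token -- the "active" parameters.
import numpy as np

def moe_layer(x, experts, router, top_k=1):
    scores = x @ router                               # (tokens, n_experts) routing scores
    chosen = np.argsort(scores, axis=-1)[:, -top_k:]  # indices of the top_k experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        gate = np.exp(scores[t, chosen[t]])
        gate /= gate.sum()                            # softmax over the selected experts only
        for g, e in zip(gate, chosen[t]):
            out[t] += g * (x[t] @ experts[e])         # only the selected experts run
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 64, 8, 4                       # tiny stand-ins for Maverick's 128 experts
experts = rng.standard_normal((n_experts, d, d)) * 0.02
router = rng.standard_normal((d, n_experts)) * 0.02
x = rng.standard_normal((tokens, d))
print(moe_layer(x, experts, router).shape)            # (4, 64): output shape is unchanged
```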
It is the main conversation model of the Meta Llama 4 family; think of it as a multilingual, multimodal, ChatGPT-style assistant.
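Because the model accepts images alongside text, a single chat turn can mix both. The snippet below sketches this with the `azure-ai-inference` package, again assuming an Azure AI Foundry deployment; the image URL, prompt, endpoint, and model name are placeholders.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    UserMessage, TextContentItem, ImageContentItem, ImageUrl,
)
from azure.core.credentials import AzureKeyCredential

# Placeholders: supply your own Azure AI Foundry endpoint and key.
client = ChatCompletionsClient(
    endpoint="https://<your-foundry-resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

# One user turn combining text and an image reference (URL is a placeholder).
response = client.complete(
    model="Llama-4-Maverick-17B-128E-Instruct-FP8",  # placeholder; check the model catalogue
    messages=[
        UserMessage(content=[
            TextContentItem(text="Describe this photo and translate any visible text into English."),
            ImageContentItem(image_url=ImageUrl(url="https://example.com/photo.jpg")),
        ]),
    ],
)
print(response.choices[0].message.content)
```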
Also previewed is a still-in-training early glimpse of the Llama 4 teacher model used to distil Scout and Maverick.