Google's new Gemini 1.5 model

New advances in AI could help billions of people in the future. Since launching Gemini 1.0, Google has been testing, refining, and adding new capabilities to the model.

A new Mixture-of-Experts (MoE) architecture makes Gemini 1.5 more efficient to train and serve.
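The core idea of an MoE layer is that a learned router activates only a small subset of the model's "expert" sub-networks for each input, so compute stays low even as total parameters grow. The sketch below is a minimal top-1 routing example; the expert count, dimensions, and gating scheme are illustrative assumptions, not Gemini's actual design.

```python
# Minimal Mixture-of-Experts routing sketch (top-1 gating).
# All sizes and the gating scheme are illustrative assumptions.
import math
import random

random.seed(0)

NUM_EXPERTS = 4
DIM = 8

# Each "expert" is a small feed-forward weight matrix.
experts = [[[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]
# The router scores each expert for a given token.
router = [[random.gauss(0, 0.1) for _ in range(NUM_EXPERTS)] for _ in range(DIM)]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_layer(token):
    # Router produces a probability distribution over experts for this token.
    logits = [sum(token[i] * router[i][e] for i in range(DIM))
              for e in range(NUM_EXPERTS)]
    probs = softmax(logits)
    # Top-1 routing: only the highest-scoring expert runs, so per-token
    # compute stays constant no matter how many experts exist in total.
    e = max(range(NUM_EXPERTS), key=lambda k: probs[k])
    out = [sum(token[i] * experts[e][i][j] for i in range(DIM))
           for j in range(DIM)]
    # Scale the expert output by its gate probability, as in standard MoE layers.
    return [probs[e] * v for v in out], e

token = [random.gauss(0, 1) for _ in range(DIM)]
output, chosen = moe_layer(token)
print(f"token routed to expert {chosen}; output dim = {len(output)}")
```

Because only one expert's weights are used per token, a model can hold many experts' worth of parameters while keeping inference cost close to that of a single dense layer.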

First up is Gemini 1.5 Pro, the first Gemini 1.5 model released for early testing. Its standard context window holds 128,000 tokens.

Google's latest model-architecture innovations help Gemini 1.5 learn complex tasks more quickly, maintain quality, and train and serve more efficiently.

Google's machine learning innovations have expanded 1.5 Pro's context window well beyond Gemini 1.0's 32,000 tokens, up to 1 million tokens for a limited group of early testers.

This means Gemini 1.5 Pro can process 1 hour of video, 11 hours of audio, 30,000 lines of code, or 700,000 words in a single pass.
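As a rough sanity check, the 700,000-word figure lines up with a 1-million-token window if we assume an average of about 1.4 tokens per English word; that ratio is a common rule-of-thumb assumption, not an official figure.

```python
# Back-of-envelope check: how many words fit in a 1,000,000-token window?
# TOKENS_PER_WORD is an assumed rough average for English text.
CONTEXT_TOKENS = 1_000_000
TOKENS_PER_WORD = 1.4  # assumption, not an official tokenizer statistic

words = int(CONTEXT_TOKENS / TOKENS_PER_WORD)
print(f"approx. words that fit: {words:,}")
```

At that ratio the window holds roughly 714,000 words, consistent with the stated capacity of about 700,000 words per prompt.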

Gemini 1.5 Pro can efficiently analyze, classify, and summarize large amounts of content within a single prompt.

When given a prompt containing more than 100,000 lines of code, it can reason across examples, suggest helpful modifications, and explain how different parts of the code work.

1.5 Pro outperforms 1.0 Pro on 87% of the benchmarks Google uses to develop its large language models, spanning text, code, image, audio, and video evaluations.

Google has prepared 1.5 Pro for responsible deployment by conducting extensive evaluations for content safety and representational harms, as it did for Gemini 1.0.