Gemini 1.5 Pro updates, 1.5 Flash debut and 2 new Gemma models
The Gemini and Gemma model families are getting updates: Gemini 1.5 Pro has been improved, a smaller model, Gemini 1.5 Flash, has been introduced, and two new Gemma models have been announced.
Gemini 1.5 Pro and 1.5 Flash
- Gemini 1.5 Pro has received quality improvements across key use cases, including translation, coding, and reasoning.
- Gemini 1.5 Flash, a smaller Gemini model, is optimized for narrower or high-frequency tasks where the model’s response time is crucial.
- Both models are available in more than 200 countries and territories in preview and will be generally available in June.
- Both 1.5 Pro and 1.5 Flash come with a 1 million token context window and accept text, images, audio, and video as inputs. To access 1.5 Pro with a 2 million token context window, join the waitlist in Google AI Studio or, for Google Cloud customers, in Vertex AI.
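A minimal sketch of calling either model with mixed text and image input, assuming the google-generativeai Python SDK (`pip install google-generativeai`); the API key placeholder, model choice, and file name are illustrative:

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # key created in Google AI Studio

model = genai.GenerativeModel("gemini-1.5-flash")

# The long context window accepts interleaved text, images, audio, and video;
# an inline PIL image is the simplest multimodal case.
image = PIL.Image.open("chart.png")  # illustrative file name
response = model.generate_content(["Summarize the trends shown in this chart.", image])
print(response.text)
```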
New developer features and pricing options for the Gemini API
- New API features include video frame extraction and parallel function calling, which lets the model request more than one function call at a time (see the sketch after this list). Context caching will be added to Gemini 1.5 Pro in June.
- Gemini API access remains free of charge through Google AI Studio in eligible regions, and the new pay-as-you-go service offers higher rate limits.
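A hedged sketch of parallel function calling with the google-generativeai Python SDK; the tool functions and prompt below are made up for illustration:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

def get_weather(city: str) -> str:
    """Return the current weather for a city (stubbed for the example)."""
    return f"Sunny in {city}"

def get_local_time(city: str) -> str:
    """Return the local time for a city (stubbed for the example)."""
    return f"12:00 in {city}"

# Plain Python functions can be registered as tools; the SDK builds the
# function declarations from their signatures and docstrings.
model = genai.GenerativeModel("gemini-1.5-pro", tools=[get_weather, get_local_time])

# With automatic function calling enabled, the SDK executes the function calls
# the model requests (more than one per turn when they are independent) and
# feeds the results back before returning the final text answer.
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("What's the weather and the local time in Tokyo?")
print(response.text)
```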
Additions to the Gemma family
- PaliGemma, the first vision-language model in the Gemma family, is available today as an open model and is optimized for image captioning, visual Q&A, and other image labeling tasks (a local-inference sketch follows this list).
- Gemma 2, the next generation of Gemma, launches in June and is built for industry-leading performance at the most useful developer sizes. The new Gemma 27B model outperforms some models that are more than twice its size and will run efficiently on GPUs or a single TPU host in Vertex AI.
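Because PaliGemma is released as an open model, it can also be run locally. The sketch below assumes the Hugging Face transformers integration and the illustrative `google/paligemma-3b-mix-224` checkpoint; both are assumptions, not part of the announcement above.

```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image

model_id = "google/paligemma-3b-mix-224"  # assumed mixed-task checkpoint name
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("street.jpg")  # illustrative file name
# PaliGemma mixed checkpoints take short task prompts such as "caption en"
# for captioning, or a natural-language question for visual Q&A.
inputs = processor(text="caption en", images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)

# Decode only the newly generated tokens (the prompt tokens come first).
generated = outputs[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```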
Gemini API Developer Competition
- The first-ever Gemini API Developer Competition has been announced. The grand prize is a custom electric DeLorean. The deadline for project submissions is August 12.