Google DeepMind Presents the Gemma 2 Model with 2B Parameters

The second generation of Google DeepMind’s Gemma AI models, known as Gemma 2, has been released with 2 billion (2B) parameters.

Built on the same technology as Google Gemini, Gemma is a series of open, lightweight text-to-text models aimed at researchers and developers. It was first introduced in February of this year.

In June, DeepMind announced Gemma 2, which comes in two sizes: 27 billion parameters (27B) and 9 billion parameters (9B).

According to DeepMind, the new 2B model punches above its weight by distilling knowledge from larger models, delivering results disproportionate to its size. The company also says it outperforms all GPT-3.5 models on the LMSYS Chatbot Arena leaderboard.

Gemma 2 2B is compatible with a broad variety of hardware, including laptops, edge devices, and cloud deployments using Google Kubernetes Engine (GKE) and Vertex AI. Furthermore, its compact size lets it run on the free tier of NVIDIA T4 deep learning accelerators.

ShieldGemma and Gemma Scope

ShieldGemma and Gemma Scope are two new additions that DeepMind is introducing to the family.

ShieldGemma is a family of safety classifiers designed to detect and filter harmful content in AI model inputs and outputs.
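DeepMind does not spell out the integration details here, but the described pattern of screening both inputs and outputs can be sketched as follows. This is a toy illustration only: `classify`, `guarded_generate`, the term list, and the threshold are hypothetical stand-ins, not the actual ShieldGemma API.

```python
# Hypothetical sketch of wrapping a text model with a safety classifier,
# in the spirit of ShieldGemma. A real classifier is itself a model;
# here it is a trivial keyword check for illustration only.
UNSAFE_TERMS = {"exploit-howto", "attack-recipe"}   # toy policy, not a real one

def classify(text: str) -> float:
    """Return a toy 'harm' score in [0, 1]."""
    return 1.0 if any(term in text for term in UNSAFE_TERMS) else 0.0

def guarded_generate(prompt: str, generate, threshold: float = 0.5) -> str:
    # Filter the input before it reaches the model...
    if classify(prompt) >= threshold:
        return "[prompt blocked by safety filter]"
    response = generate(prompt)
    # ...and filter the output before it reaches the user.
    if classify(response) >= threshold:
        return "[response blocked by safety filter]"
    return response

# Stand-in for the underlying model's generate call.
echo_model = lambda p: f"model reply to: {p}"
print(guarded_generate("hello", echo_model))
print(guarded_generate("exploit-howto please", echo_model))
```

The point of the pattern is that the same classifier screens both directions of the conversation, so a single policy covers prompts and completions alike.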

Gemma Scope emphasizes transparency. The tool is made up of sparse autoencoders (SAEs): specialized neural networks that break down the intricate internal mechanisms of the Gemma 2 models and present their information processing and decision-making in a more interpretable form.

More than 400 publicly accessible SAEs that span every layer of Gemma 2 2B and 9B are available. The objective is to empower scientists to develop AI systems that are more trustworthy and transparent.
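To make the idea concrete, the sketch below shows the basic shape of a sparse autoencoder: a dense activation vector is mapped to a wider, mostly-zero feature vector and then reconstructed. The dimensions, random weights, and ReLU sparsity mechanism are simplified assumptions for illustration, not the actual Gemma Scope implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small model activation expanded into a wider
# feature space (real SAEs use far larger dimensions).
d_model, d_features = 8, 32

# Untrained random weights; a real SAE is trained on model activations.
W_enc = rng.normal(0.0, 0.1, (d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0.0, 0.1, (d_features, d_model))
b_dec = np.zeros(d_model)

def encode(x: np.ndarray) -> np.ndarray:
    # ReLU zeroes out weakly activated features, yielding a sparse code
    # whose active entries are easier to interpret individually.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f: np.ndarray) -> np.ndarray:
    # The decoder reconstructs the original activation from the sparse code.
    return f @ W_dec + b_dec

activation = rng.normal(size=d_model)   # stand-in for one layer's activation
features = encode(activation)
reconstruction = decode(features)

print(features.shape)          # wider, sparse feature vector
print(reconstruction.shape)    # back to the original activation size
```

Each of the 400+ released SAEs plays this role for one site in the network, which is what lets researchers inspect layer-by-layer what the model is representing.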

Developers and researchers can test Gemma 2 2B in Google AI Studio or download it from Hugging Face, Vertex AI Model Garden, and Kaggle starting today.

Komal Patil