Google Rolls Out Imagen 4 in Gemini API and Introduces Gemma 3 270M

Created on August 15 | Last edited on August 15
Google has released Imagen 4, its most advanced text-to-image model family, to general availability through the Gemini API and Google AI Studio. The family includes three models: Imagen 4 Fast for high-speed, low-cost generation at $0.02 per output image; the standard Imagen 4, balancing speed and quality; and Imagen 4 Ultra for maximum prompt alignment and detail. Both the standard and Ultra models now support 2K-resolution output, enabling richer visuals for creative and professional use cases.
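As a rough sketch of what calling these models looks like, the snippet below uses the google-genai Python SDK to request an image from one of the three tiers. The model IDs, the `GenerateImagesConfig` fields shown, and the `GEMINI_API_KEY` environment variable are assumptions based on the announcement and the SDK's conventions, not confirmed identifiers.

```python
# Hedged sketch: generating an image with an Imagen 4 tier via the
# google-genai SDK. Model IDs below are assumptions, not confirmed.
import os

MODEL_IDS = {
    "fast": "imagen-4.0-fast-generate-001",      # hypothetical ID
    "standard": "imagen-4.0-generate-001",       # hypothetical ID
    "ultra": "imagen-4.0-ultra-generate-001",    # hypothetical ID
}

def generate_image(prompt: str, tier: str = "fast"):
    """Request a single image from the chosen Imagen 4 tier."""
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_images(
        model=MODEL_IDS[tier],
        prompt=prompt,
        config=types.GenerateImagesConfig(number_of_images=1),
    )
    # Each generated image carries raw bytes that can be written to disk.
    return response.generated_images[0].image

# Only attempt a network call when an API key is actually configured.
if os.getenv("GEMINI_API_KEY"):
    img = generate_image("a vintage sci-fi poster of a city on Mars")
    with open("poster.png", "wb") as f:
        f.write(img.image_bytes)
```

Swapping `tier` between `"fast"`, `"standard"`, and `"ultra"` is the only change needed to trade cost against prompt fidelity.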


Demonstrations of Imagen 4 Fast

To showcase the performance of Imagen 4 Fast, Google shared examples such as photorealistic landscapes, retro-style comic strips, and vintage sci-fi posters. Each image is watermarked with SynthID for responsible AI use. Developers can experiment with the model through documentation and ready-made cookbooks available in AI Studio.


Gemma 3 270M: A compact AI model for efficiency

Alongside the image generation news, Google introduced Gemma 3 270M, a 270-million-parameter model engineered for fine-tuning on well-defined tasks while maintaining strong instruction-following ability. Of those parameters, 170 million are embeddings supporting a 256k-token vocabulary, which lets the model handle rare and domain-specific terms. An INT4-quantized version runs with minimal power draw, consuming just 0.75% of a Pixel 9 Pro's battery across 25 conversations.


Specialization and deployment scenarios

Gemma 3 270M is designed for scenarios where small, targeted models outperform large general-purpose ones, such as sentiment analysis, entity extraction, and on-device data processing. It inherits the architecture of the larger Gemma models but is optimized for rapid fine-tuning, making it easier to create task-specific AI that runs on inexpensive infrastructure or entirely offline. Google highlights real-world successes with specialized Gemma models, including multilingual content moderation outperforming larger models.
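To make the fine-tuning scenario concrete, here is a minimal sketch of adapting a small model to sentiment analysis with Hugging Face `transformers`. The Hub ID `google/gemma-3-270m`, the two-label scheme, and the training hyperparameters are illustrative assumptions; the function only builds the trainer, so nothing is downloaded until it is called.

```python
# Hedged sketch: fine-tuning a compact model (assumed Hub ID below) for
# sentiment analysis using the Hugging Face Trainer API.
MODEL_ID = "google/gemma-3-270m"  # assumed Hub ID, not confirmed
LABELS = {0: "negative", 1: "positive"}

def build_trainer(train_dataset):
    """Return a Trainer wired for binary sentiment classification.

    `train_dataset` is a datasets.Dataset with "text" and "label" columns.
    """
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_ID, num_labels=len(LABELS))

    def tokenize(batch):
        # Pad/truncate to a fixed length so batches stack cleanly.
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=128)

    args = TrainingArguments(output_dir="gemma-sentiment",
                             per_device_train_batch_size=8,
                             num_train_epochs=1)
    return Trainer(model=model, args=args,
                   train_dataset=train_dataset.map(tokenize, batched=True))
```

Because the base model is so small, a run like this fits on a single consumer GPU, which is exactly the "inexpensive infrastructure" case the release targets.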

Getting started with Gemma 3 270M

The model is available in pretrained and instruction-tuned variants through platforms like Hugging Face, Ollama, Kaggle, and LM Studio, with compatibility for popular inference frameworks. Developers can fine-tune it with tools such as Hugging Face, Unsloth, and JAX, then deploy it locally or to cloud environments like Google Cloud Run.
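For local experimentation, the instruction-tuned variant can be driven through the `transformers` text-generation pipeline with chat-style messages. The Hub ID `google/gemma-3-270m-it` and the `RUN_LOCAL_GEMMA` opt-in flag are assumptions for illustration; the call only executes when the flag is set, since it triggers a model download.

```python
# Hedged sketch: local inference with the instruction-tuned variant via
# the transformers text-generation pipeline. Hub ID is an assumption.
import os

MODEL_ID = "google/gemma-3-270m-it"  # assumed instruction-tuned Hub ID

def chat(prompt: str) -> str:
    """Send one user turn to the model and return its reply text."""
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID)
    messages = [{"role": "user", "content": prompt}]
    out = generator(messages, max_new_tokens=64)
    # The pipeline appends the assistant turn to the message list.
    return out[0]["generated_text"][-1]["content"]

# Opt in explicitly, since this downloads weights on first run.
if os.getenv("RUN_LOCAL_GEMMA"):
    print(chat("Extract the city from: 'Flight lands in Tokyo at 9am.'"))
```

The same function works offline once the weights are cached, matching the on-device deployment scenario described above.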

Tags: ML News