---
title: "Gemini 3 Flash Preview"
route_path: "/model/gemini-3-flash-preview"
canonical_url: "https://www.pipellm.ai/model/gemini-3-flash-preview"
markdown_path: "/llms/models/gemini-3-flash-preview.md"
markdown_url: "https://www.pipellm.ai/llms/models/gemini-3-flash-preview.md"
content_type: "model-detail-page"
description: "Machine-readable detail page for Gemini 3 Flash Preview."
generated_at: "2026-03-27T06:53:30.752Z"
---
Canonical page: https://www.pipellm.ai/model/gemini-3-flash-preview
Markdown mirror: https://www.pipellm.ai/llms/models/gemini-3-flash-preview.md
Content type: model-detail-page
Generated at: 2026-03-27T06:53:30.752Z
# Gemini 3 Flash Preview
## Query Intents
- Understand pricing, provider availability, context window, and capabilities for Gemini 3 Flash Preview.
- Compare Gemini 3 Flash Preview against other models available through PipeLLM.
- Find the canonical model identifier to use in SDK or API requests.
## Overview
Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited to interactive development, long-running agent loops, and collaborative coding tasks. Compared with Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability. The model supports a 1M-token context window and multimodal inputs including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.
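This page does not document PipeLLM's request schema, so the following is only a minimal sketch assuming an OpenAI-compatible chat-completions payload. The canonical model ID comes from the metadata below; the `reasoning_effort` field as a carrier for the thinking level (minimal, low, medium, high) is a hypothetical mapping, not a documented parameter.

```python
import json

# Canonical model identifier from the Model Metadata section.
MODEL_ID = "gemini-3-flash-preview"

def build_chat_request(prompt: str, thinking_level: str = "low") -> dict:
    """Sketch of an OpenAI-compatible chat-completions payload.

    `reasoning_effort` carrying the thinking level is an assumption;
    consult PipeLLM's API reference for the actual field name.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": thinking_level,  # hypothetical parameter
    }

payload = build_chat_request("Summarize this diff.", thinking_level="medium")
print(json.dumps(payload, indent=2))
```

The only part of this sketch the page itself guarantees is the model ID string; everything else should be checked against PipeLLM's SDK or API documentation.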
## Model Metadata
- Display name: Gemini 3 Flash Preview
- Model ID: gemini-3-flash-preview
- Provider family: Google
- Release date: 2025-11-17T00:00:00.000Z
- Context window: 1000K
- Max output: 65K
- Input modalities: text, image
- Output modalities: text
- Tool use support: Yes
- Computer use support: Yes
- Cache control support: Yes
## Official Pricing (per 1M tokens)
| Metric | <=200K Context | >200K Context |
| --- | --- | --- |
| Input Price | $0.05 | — |
| Output Price | $3 | $0 |
| Cache Read | $0 | — |
| Image Input | $0 | — |
| Image Output | $0 | — |
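Using the rates in the table above ($0.05 per 1M input tokens and $3 per 1M output tokens in the <=200K-context tier, with cache reads priced at $0), a back-of-the-envelope per-request cost can be sketched as:

```python
# Per-1M-token rates from the official pricing table (<=200K context tier).
INPUT_PER_M = 0.05
OUTPUT_PER_M = 3.00
CACHE_READ_PER_M = 0.00

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Estimate USD cost for one request in the <=200K context tier.

    Cached tokens are billed at the cache-read rate instead of the
    fresh-input rate.
    """
    fresh_input = input_tokens - cached_tokens
    return (
        fresh_input * INPUT_PER_M / 1_000_000
        + output_tokens * OUTPUT_PER_M / 1_000_000
        + cached_tokens * CACHE_READ_PER_M / 1_000_000
    )

# 100K input tokens and 4K output tokens:
# 100_000 * 0.05 / 1e6 = $0.005 input, 4_000 * 3 / 1e6 = $0.012 output
print(f"${estimate_cost(100_000, 4_000):.4f}")  # → $0.0170
```

Note this covers only the listed per-token rates; it does not model the >200K-context tier, for which the table above leaves most entries unspecified.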

## Provider Availability
| Provider | Region | Context Window | Max Output | Input Price | Output Price | Cache Read | Cache Write |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GCP Vertex | — | 1000K | 65K | $0.05 | $3 | $0 | — |
