---
title: "GPT-5 mini"
route_path: "/model/gpt-5-mini"
canonical_url: "https://www.pipellm.ai/model/gpt-5-mini"
markdown_path: "/llms/models/gpt-5-mini.md"
markdown_url: "https://www.pipellm.ai/llms/models/gpt-5-mini.md"
content_type: "model-detail-page"
description: "Machine-readable detail page for GPT-5 mini."
generated_at: "2026-03-27T06:53:30.752Z"
---
Canonical page: https://www.pipellm.ai/model/gpt-5-mini
Markdown mirror: https://www.pipellm.ai/llms/models/gpt-5-mini.md
Content type: model-detail-page
Generated at: 2026-03-27T06:53:30.752Z
# GPT-5 mini
## Query Intents
- Understand pricing, provider availability, context window, and capabilities for GPT-5 mini.
- Compare GPT-5 mini against other models available through PipeLLM.
- Find the canonical model identifier to use in SDK or API requests.
## Overview
GPT-5 mini is a compact version of GPT-5, designed for lighter-weight reasoning tasks. It provides the same instruction-following and safety-tuning benefits as GPT-5, but with reduced latency and cost. GPT-5 mini is the successor to OpenAI's o4-mini model.
## Model Metadata
- Display name: GPT-5 mini
- Model ID: gpt-5-mini
- Provider family: OpenAI
- Release date: Unknown
- Context window: 128K
- Max output: 16K
- Input modalities: text
- Output modalities: text
- Tool use support: Yes
- Computer use support: No
- Cache control support: Yes
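The model ID above is the canonical identifier to reference in SDK or API requests. A minimal sketch of building an OpenAI-style chat completions payload with that ID (the helper name and default parameters are illustrative, not part of any official SDK):

```python
import json

# Canonical model identifier from the metadata above.
MODEL_ID = "gpt-5-mini"

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat completions payload for GPT-5 mini."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize this changelog in one sentence.")
print(json.dumps(payload, indent=2))
```

The same payload shape works with any OpenAI-compatible endpoint; only the base URL and API key differ per provider.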
## Official Pricing (per 1M tokens)
| Metric | ≤200K Context | >200K Context |
| --- | --- | --- |
| Input Price | $0.25 | — |
| Output Price | $2.00 | — |
| Cache Read | $0.025 | — |
| Image Input | $0 | — |
| Image Output | $0 | — |
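The per-1M-token rates above translate into per-request cost with simple arithmetic; cached input tokens bill at the cache-read rate instead of the input rate. A small estimator (the function name is illustrative):

```python
# Per-1M-token rates from the pricing table above (USD).
INPUT_PER_M = 0.25
OUTPUT_PER_M = 2.00
CACHE_READ_PER_M = 0.025

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimate request cost in USD for GPT-5 mini.

    Cached input tokens are billed at the cache-read rate;
    the remainder of the input is billed at the full input rate.
    """
    uncached = input_tokens - cached_input_tokens
    return (
        uncached * INPUT_PER_M
        + cached_input_tokens * CACHE_READ_PER_M
        + output_tokens * OUTPUT_PER_M
    ) / 1_000_000

# 10K fresh input tokens + 2K output tokens:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0065
```

For example, a fully cache-hit prompt of 10K tokens with no output costs 10,000 × $0.025 / 1M = $0.00025, a 10× saving on the input side.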

## Provider Availability
| Provider | Region | Context Window | Max Output | Input Price | Output Price | Cache Read | Cache Write |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OpenAI | — | 128K | 16K | $0.25 | $2.00 | $0.025 | — |
