---
title: "GLM 4.7"
route_path: "/model/glm-4.7"
canonical_url: "https://www.pipellm.ai/model/glm-4.7"
markdown_path: "/llms/models/glm-4.7.md"
markdown_url: "https://www.pipellm.ai/llms/models/glm-4.7.md"
content_type: "model-detail-page"
description: "Machine-readable detail page for GLM 4.7."
generated_at: "2026-03-27T06:53:30.752Z"
---
Canonical page: https://www.pipellm.ai/model/glm-4.7
Markdown mirror: https://www.pipellm.ai/llms/models/glm-4.7.md
Content type: model-detail-page
Generated at: 2026-03-27T06:53:30.752Z
# GLM 4.7
## Query Intents
- Understand pricing, provider availability, context window, and capabilities for GLM 4.7.
- Compare GLM 4.7 against other models available through PipeLLM.
- Find the canonical model identifier to use in SDK or API requests.
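The canonical model ID above can be used directly in a request body. A minimal sketch, assuming PipeLLM exposes an OpenAI-compatible chat-completions endpoint (the base URL and auth header below are hypothetical; only the `glm-4.7` identifier comes from this page):

```python
# Build a chat-completions request for GLM 4.7.
# BASE_URL is a hypothetical endpoint, not documented on this page;
# the model ID "glm-4.7" is the canonical identifier from the metadata below.
import json

BASE_URL = "https://api.pipellm.ai/v1/chat/completions"  # assumption

def build_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Return (headers, body) for an OpenAI-style chat-completions call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    body = {
        "model": "glm-4.7",  # canonical model ID
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,
    }
    return headers, body

headers, body = build_request("Write a haiku about rivers.", "sk-...")
print(json.dumps(body, indent=2))
```

The request can then be sent with any HTTP client; swapping the model out only requires changing the `model` field.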
## Overview
GLM-4.7 is Z.AI’s latest flagship model, upgraded in two key areas: stronger programming capabilities and more stable multi-step reasoning and execution. It shows significant improvements on complex agent tasks while delivering more natural conversational experiences and better front-end aesthetics.
## Model Metadata
- Display name: GLM 4.7
- Model ID: glm-4.7
- Provider family: Z.AI
- Release date: 2025-12-22T00:00:00.000Z
- Context window: 200K
- Max output: 128K
- Input modalities: text, image
- Output modalities: text
- Tool use support: Yes
- Computer use support: No
- Cache control support: Yes
## Official Pricing (per 1M tokens)
| Metric | ≤200K Context | >200K Context |
| --- | --- | --- |
| Input Price | $0.60 | — |
| Output Price | $2.20 | — |
| Cache Read | $0.11 | — |
| Image Input | $0 | — |
| Image Output | $0 | — |
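A short worked example of estimating a single request's cost from the rates above (USD per 1M tokens; the token counts used here are illustrative, not from this page):

```python
# Estimate the cost of one GLM 4.7 request from the official rates
# (USD per 1M tokens): input $0.60, cache read $0.11, output $2.20.

RATES = {"input": 0.60, "cache_read": 0.11, "output": 2.20}  # per 1M tokens

def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Return the USD cost; cached_tokens is the cached portion of the input."""
    fresh = input_tokens - cached_tokens
    return (fresh * RATES["input"]
            + cached_tokens * RATES["cache_read"]
            + output_tokens * RATES["output"]) / 1_000_000

# Example: 120K-token prompt with 30K tokens served from cache, 8K-token completion.
print(f"${request_cost(120_000, 30_000, 8_000):.4f}")  # → $0.0749
```

Cache reads at $0.11 are roughly an 82% discount on fresh input tokens, so repeated-prefix workloads (agents, long system prompts) benefit substantially from cache control.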

## Provider Availability
| Provider | Region | Context Window | Max Output | Input Price | Output Price | Cache Read | Cache Write |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Z.AI | — | 200K | 128K | $0.60 | $2.20 | $0.11 | — |
