
Z.ai (Zhipu AI) · Chat / LLM · 355B Parameters (32B Active) · 128K Context

Streaming · Reasoning · Agentic Coding · Long Context · Code · Tool Orchestration

Overview
GLM-4.7-FP8 is the flagship model from Z.ai (formerly Zhipu AI), a Chinese AI research company focused on building large-scale open-source foundation models for reasoning, coding, and agentic workflows. With 355B total parameters and 32B active per forward pass, it introduces three novel thinking paradigms (Interleaved Thinking, Preserved Thinking, and Turn-level Thinking) that let the model reason before every action and maintain coherent reasoning state across long coding sessions. It achieves 95.7% on AIME 2025, 73.8% on SWE-bench, and 87.4% on τ²-Bench, delivering frontier-level mathematical and software engineering performance at open-source scale. Served instantly via the Qubrid AI Serverless API.

🏆 95.7% AIME 2025. 73.8% SWE-bench. 355B MoE. Interleaved Thinking. Deploy on Qubrid AI with no infrastructure required.
Model Specifications
| Field | Details |
|---|---|
| Model ID | zai-org/GLM-4.7-FP8 |
| Provider | Z.ai (formerly Zhipu AI) |
| Kind | Chat / LLM |
| Architecture | Sparse MoE Transformer — 355B total / 32B active per token, FP8 native quantization |
| Parameters | 355B total (32B active per forward pass) |
| Context Length | 128,000 Tokens |
| MoE | Yes (sparse MoE: 355B total, 32B active per token) |
| Release Date | December 2025 |
| License | MIT (commercial use allowed) |
| Training Data | Large-scale multilingual dataset with code, math, reasoning, and agentic workflows |
| Function Calling | Not Supported |
| Image Support | N/A |
| Serverless API | Available |
| Fine-tuning | Coming Soon |
| On-demand | Coming Soon |
| State | 🟢 Ready |
Pricing
💳 Access via the Qubrid AI Serverless API with pay-per-token pricing. No infrastructure management required.
| Token Type | Price per 1M Tokens |
|---|---|
| Input Tokens | $0.60 |
| Input Tokens (Cached) | $0.30 |
| Output Tokens | $2.20 |
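At these rates, per-request cost is simple arithmetic over the table above. A minimal sketch (the prices are from this page; the example token counts are illustrative):

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "input": 0.60,
    "input_cached": 0.30,
    "output": 2.20,
}

def request_cost(input_tok: int, output_tok: int, cached_tok: int = 0) -> float:
    """Total USD cost of one request; cached_tok is the cached subset of input."""
    uncached = input_tok - cached_tok
    return (uncached * PRICES["input"]
            + cached_tok * PRICES["input_cached"]
            + output_tok * PRICES["output"]) / 1_000_000

# e.g. 100K input (40K of it cached) + 20K output:
# (60_000*0.60 + 40_000*0.30 + 20_000*2.20) / 1e6 = $0.092
```

Cached-input pricing matters most in long agentic sessions, where the growing conversation prefix is re-sent on every turn.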
Quickstart
Prerequisites
- Create a free account at platform.qubrid.com
- Generate your API key from the API Keys section
- Replace `QUBRID_API_KEY` in the code below with your actual key
💡 Thinking mode: enable_thinking=true by default — the model reasons before every response and tool call. Toggle per request for precise control over reasoning depth and latency.
Code examples are available in Python, JavaScript, Go, and cURL.
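The Python variant can be sketched as follows, assuming the Serverless API is OpenAI-compatible (as noted under "Why Qubrid AI?" below). The base URL shown is an assumption; confirm the exact endpoint at docs.platform.qubrid.com:

```python
def build_request(prompt: str) -> dict:
    """Chat-completion payload for GLM-4.7-FP8 using the defaults from this page."""
    return {
        "model": "zai-org/GLM-4.7-FP8",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,   # lower values recommended for reasoning and coding
        "max_tokens": 4096,
        "stream": True,
    }

def stream_completion(api_key: str, prompt: str) -> None:
    """Stream a completion token by token. Requires `pip install openai`."""
    from openai import OpenAI
    client = OpenAI(
        base_url="https://api.qubrid.com/v1",  # hypothetical endpoint: check the docs
        api_key=api_key,                       # your key from platform.qubrid.com
    )
    stream = client.chat.completions.create(**build_request(prompt))
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
```

Calling `stream_completion("QUBRID_API_KEY", "...")` prints tokens as they arrive; set `"stream": False` in the payload to receive a single response object instead.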
Live Example
Prompt: Write a Python function to find all prime numbers up to n using the Sieve of Eratosthenes
Response:
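For reference, a correct answer to the prompt looks like the following (an illustrative implementation, not captured model output):

```python
def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes: return all primes <= n."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Start marking at p*p: smaller multiples of p were already
            # crossed out by smaller prime factors.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

# primes_up_to(30) -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```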
Playground Features
The Qubrid AI Playground lets you interact with GLM-4.7-FP8 directly in your browser: no setup, no code, no cost to explore.
🧠 System Prompt
Define the model's role, coding style, and reasoning constraints before the conversation begins. Particularly powerful for agentic coding sessions and long-horizon tool orchestration.
Set your system prompt once in the Qubrid Playground and it applies across every turn, including preserved reasoning state across multi-turn coding sessions.
🎯 Few-Shot Examples
Establish your preferred code style and output format with concrete examples, no fine-tuning required. Especially powerful for consistent structured outputs in agentic pipelines.
| User Input | Assistant Response |
|---|---|
| Write a function to check if a binary tree is balanced | `def is_balanced(root) -> bool: def height(node): if not node: return 0; l, r = height(node.left), height(node.right); if l == -1 or r == -1 or abs(l-r) > 1: return -1; return max(l,r)+1; return height(root) != -1` |
| Refactor this: `for i in range(len(arr)): result.append(arr[i]*2)` | `result = [x * 2 for x in arr]  # list comprehension: cleaner and faster` |
💡 Stack multiple few-shot examples in the Qubrid Playground to establish coding style, language preference, and output structure — no fine-tuning required.
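Expanded into runnable form, the first few-shot target above is the standard bottom-up balance check, where a height of -1 propagates "already unbalanced" up the recursion:

```python
class Node:
    """Minimal binary-tree node for the example."""
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_balanced(root) -> bool:
    """True if every subtree's child heights differ by at most 1."""
    def height(node):
        if not node:
            return 0
        l, r = height(node.left), height(node.right)
        # -1 means a subtree below was already unbalanced; short-circuit.
        if l == -1 or r == -1 or abs(l - r) > 1:
            return -1
        return max(l, r) + 1
    return height(root) != -1
```

This visits each node once, so it runs in O(n) rather than the O(n log n) of naively recomputing heights at every node.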
Inference Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| Streaming | boolean | true | Enable streaming responses for real-time output |
| Temperature | number | 0.6 | Controls randomness. Lower values recommended for reasoning and coding |
| Max Tokens | number | 4096 | Maximum number of tokens to generate |
| Top P | number | 1 | Controls nucleus sampling |
| Enable Thinking | boolean | true | Enable Interleaved Thinking mode — the model reasons before every response and tool call for improved accuracy |
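Assuming these parameters map one-to-one onto the request body (with `enable_thinking` passed through as a vendor extension field; verify the exact name against the Qubrid API reference), per-request overrides can be sketched as:

```python
# Defaults mirror the inference-parameter table above.
DEFAULT_PARAMS = {
    "temperature": 0.6,
    "max_tokens": 4096,
    "top_p": 1,
    "stream": True,
    "enable_thinking": True,  # assumed vendor extension field; check the docs
}

def with_overrides(**overrides) -> dict:
    """Merge per-request overrides onto the defaults without mutating them."""
    params = dict(DEFAULT_PARAMS)
    params.update(overrides)
    return params

# e.g. disable thinking for a low-latency turn:
fast = with_overrides(enable_thinking=False, max_tokens=512)
```

This is the turn-level control described above: each request chooses its own thinking depth, trading accuracy against latency.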
Use Cases
- Agentic multilingual coding
- Terminal-based task automation
- Vibe coding and UI generation
- Complex mathematical reasoning
- Tool orchestration (Claude Code, Cline, Roo Code)
- Long-horizon multi-turn tasks
Strengths & Limitations
| Strengths | Limitations |
|---|---|
| Interleaved Thinking — reasons before every response and tool call | Very large model requires significant infrastructure for self-hosting |
| Preserved Thinking — retains reasoning state across long coding sessions | FP8 inference requires natively supporting hardware |
| Turn-level control over thinking depth per request | Thinking mode increases latency |
| 355B MoE with 32B active — frontier reasoning at low cost | Function calling not supported |
| 95.7% AIME 2025 — state-of-the-art mathematical reasoning | |
| Open-source with MIT license, full commercial use permitted | |
Why Qubrid AI?
- 🚀 No infrastructure setup — 355B MoE served serverlessly, pay only for what you use
- 🔁 OpenAI-compatible — drop-in replacement using the same SDK, just swap the base URL
- 💰 Cached input pricing — $0.30/1M for cached tokens, ideal for long agentic coding sessions
- 🧠 Interleaved Thinking on demand — toggle reasoning depth per request via the API without managing model configuration
- 🧪 Built-in Playground — prototype with system prompts and few-shot examples instantly at platform.qubrid.com
- 📊 Full observability — API logs and usage tracking built into the Qubrid dashboard
Resources
| Resource | Link |
|---|---|
| 📖 Qubrid Docs | docs.platform.qubrid.com |
| 🎮 Playground | Try GLM-4.7-FP8 live |
| 🔑 API Keys | Get your API Key |
| 🤗 Hugging Face | zai-org/GLM-4.7-FP8 |
| 💬 Discord | Join the Qubrid Community |
Built with ❤️ by Qubrid AI
Frontier models. Serverless infrastructure. Zero friction.