Roo Code 3.26 Release Notes
This document combines all releases in the v3.26 series.
Grok Code Fast
As you may have already figured out, our stealth model Sonic has officially been uncloaked! (#7426)
From xAI, this model is optimized for coding tasks and already beloved by the community in Code Mode for its:
- Sharp reasoning capabilities
- Plan execution at scale
- Code suggestions with UI taste and intuition
If you've already been enjoying Sonic in Roo Code Cloud, you'll be transitioned to Grok Code Fast automatically. The model xai/grok-code-fast-1 is also available under the xAI Provider, where standard pricing applies once the free period ends on Aug 28, 2025.
A massive thank-you to our partners at xAI and to all of you — over 100B tokens (and counting!) ran through Sonic during stealth! Your incredible adoption and helpful feedback shaped Grok Code Fast into the powerful model it is today.
Important: Grok Code Fast remains FREE when accessed through the Roo Code Cloud provider during the promotional period. Using it directly through the xAI provider will incur standard charges once pricing is established.
📚 Documentation: See Roo Code Cloud Provider for free access or xAI Provider for direct configuration.
Built-in /init Command
We've added a new /init slash command for project onboarding (#7381, #7400):
- Automatic Project Analysis: Analyzes your entire codebase and creates comprehensive AGENTS.md files
- AI Assistant Optimization: Generates documentation that enables AI assistants to be immediately productive in your codebase
- Mode-Specific Guidance: Creates tailored documentation for different Roo Code modes (code, debug, architect, etc.)
The /init command helps LLMs understand your project's unique patterns and conventions by documenting project-specific information that isn't obvious from the code structure alone.
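For illustration, an AGENTS.md file produced by /init might look something like this (the structure, paths, and script names below are a hypothetical sketch, not the command's exact output):

```markdown
# AGENTS.md (illustrative sketch)

## Project Overview
Express + TypeScript API with a React frontend in `web/`.

## Conventions
- All database access goes through `src/db/repository.ts`; no raw SQL in handlers.
- Tests live beside sources as `*.spec.ts` and run with `npm run test`.

## Mode Notes
- code: prefer the existing zod schemas in `src/schemas/` when adding endpoints.
- debug: server logs are structured JSON; filter them before pasting into chat.
```

The value of a file like this is exactly the kind of tribal knowledge listed under Conventions: rules an AI assistant cannot infer from the directory tree alone.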
📚 Documentation: See Slash Commands - The init command for details.
Qwen Code CLI API Support
We've integrated with the Qwen Code CLI tool, allowing Roo Code to leverage its free access tier for Alibaba's Qwen3 Coder models (#7380):
- Free Inference: Piggybacks off the Qwen Code CLI's generous free tier (2,000 requests/day and 60 requests/minute with no token limits) via OAuth, available during a promotional period.
- 1M Context Windows: Handle entire codebases in a single conversation.
- Seamless Setup: Works automatically if you've already authenticated the Qwen Code CLI tool.
This integration provides free access to the Qwen3 Coder models by using the local authentication from the Qwen Code CLI.
📚 Documentation: See Qwen Code CLI Provider for setup and configuration.
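The free tier's quotas (60 requests/minute, 2,000 requests/day) are enforced server-side, but an integration can also guard them client-side. A minimal sketch of such a guard, using a sliding-window counter (illustrative only, not Roo Code's implementation; the quota numbers come from this changelog):

```python
import time
from collections import deque

class FreeTierLimiter:
    """Client-side guard for per-minute and per-day request quotas."""

    def __init__(self, per_minute=60, per_day=2000):
        self.per_minute = per_minute
        self.per_day = per_day
        self.minute_window = deque()  # timestamps of requests in the last 60 s
        self.day_window = deque()     # timestamps of requests in the last 24 h

    def allow(self, now=None):
        """Return True and record the request if both quotas have headroom."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of each window.
        while self.minute_window and now - self.minute_window[0] >= 60:
            self.minute_window.popleft()
        while self.day_window and now - self.day_window[0] >= 86400:
            self.day_window.popleft()
        if len(self.minute_window) >= self.per_minute or len(self.day_window) >= self.per_day:
            return False
        self.minute_window.append(now)
        self.day_window.append(now)
        return True
```

A caller would check `limiter.allow()` before each API request and back off when it returns False.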
Vercel AI Gateway Provider
We've added Vercel AI Gateway as a complete provider integration (thanks joshualipman123!) (#7396, #7433):
- Full Provider Support: Use Vercel AI Gateway as a comprehensive AI model provider alongside existing options
- Model Access: Access Vercel's wide range of AI models through their optimized gateway infrastructure
- Embeddings Support: Includes built-in support for Vercel AI Gateway embeddings (#7445)
📚 Documentation: See Vercel AI Gateway for detailed setup instructions.
Image Generation (OpenRouter) — Free option: Gemini 2.5 Flash Image Preview
Generate images from natural‑language prompts directly inside Roo Code using OpenRouter's image generation models. Configure your OpenRouter API key, pick a supported model, and preview results in the built‑in Image Viewer. See Image Generation and OpenRouter Provider for setup and model selection.
- Free option available: Gemini 2.5 Flash Image Preview — try image generation without paid credits for faster onboarding and quick experiments
- Prompt‑to‑image workflow inside the editor with approvals flow (supports auto‑approval when write permissions are granted)
- Image Viewer with zoom, copy, and save for quick reuse in docs and prototypes
- NEW in v3.26.3: Image Editing — Transform and edit existing images in your workspace (#7525):
- Apply artistic styles like watercolor, oil painting, or sketch
- Upscale and enhance images to higher resolution
- Modify specific aspects while preserving the rest
- Supports PNG, JPG, JPEG, GIF, and WEBP input formats
PRs: #7474, #7492, #7493, #7525
📚 Documentation: See Image Generation - Editing Existing Images for transformation examples.
Provider Updates
- Qwen3 235B Thinking Model: Added support for Qwen3-235B-A22B-Thinking-2507 model with an impressive 262K context window, enabling processing of extremely long documents and large codebases in a single request through the Chutes provider (thanks mohammad154, apple-techie!) (#7578)
- Ollama Turbo Mode: Added API key support for Turbo mode, enabling faster model execution with datacenter-grade hardware (thanks LivioGama!) (#7425)
- DeepSeek V3.1 on Fireworks: Added support for DeepSeek V3.1 model in the Fireworks AI provider (thanks dmarkey!) (#7375)
- Provider Visibility: Static providers with no models are now hidden from the provider list for a cleaner interface (#7392)
QOL Improvements
- MCP Resource Auto-Approval: MCP resource access requests are now automatically approved when auto-approve is enabled, eliminating manual approval steps and enabling smoother automation workflows (thanks m-ibm!) (#7606)
- Message Queue Performance: Improved message queueing reliability and performance by moving the queue management to the extension host, making the interface more stable (#7604)
- Memory Optimization: Optimized memory usage for image handling in webview, achieving ~75% reduction in memory consumption (#7556)
- Auto-Approve Toggle UI: The auto-approve toggle now stays at the bottom when expanded, reducing mouse movements (thanks elianiva, kyle-apex!) (#7318)
- OpenRouter Cache Pricing: Cache read and write prices are now displayed for OpenRouter models (thanks chrarnoldus!) (#7176)
- Protected Workspace Files: VS Code workspace configuration files (*.code-workspace) are now protected from accidental modification (thanks thelicato!) (#7403)
- Cleaner Model Display: Removed dot separator in API configuration dropdown for cleaner appearance (#7461)
- Better Tooltips: Updated tooltip styling to match VSCode native shadows for improved visual consistency (#7457)
- Model ID Visibility: API configuration dropdown now shows model IDs alongside profile names for easier identification (#7423)
- Chat UI Cleanup: Improved consistency in chat input controls and fixed tooltip behavior (#7436)
- Clearer Task Headers: Removed duplicate cache display in task headers to eliminate confusion (#7443)
- Cloud Tab Rename: Renamed Account tab to Cloud tab for clarity (#7558)
- Improved padding and click targets in the image model picker for easier selection and fewer misclicks (#7494)
- Generic default filename for saved images (e.g., `img_<timestamp>`) instead of `mermaid_diagram_<timestamp>` (#7479)
Bug Fixes
- Configurable Embedding Batch Size: Fixed an issue where users with API providers having stricter batch limits couldn't use code indexing. You can now configure the embedding batch size (1-2048, default: 400) to match your provider's limits (thanks BenLampson!) (#7464)
- OpenAI-Native Cache Reporting: Fixed cache usage statistics and cost calculations when using the OpenAI-Native provider with cached content (#7602)
- Special Tokens Handling: Fixed issue where special tokens would break task processing (thanks pwilkin!) (#7540)
- Security - Symlink Handling: Fixed security vulnerability where symlinks could bypass rooignore patterns (#7405)
- Security - Default Commands: Removed potentially unsafe commands (`npm test`, `npm install`, `tsc`) from the default allowed list (thanks thelicato, SGudbrandsson!) (#7404)
- Command Validation: Fixed handling of substitution patterns in command validation (#7390)
- Follow-up Input Preservation: Fixed issue where user input wasn't preserved when selecting follow-up choices (#7394)
- Mistral Thinking Content: Fixed validation errors when using Mistral models that send thinking content (thanks Biotrioo!) (#7106)
- Requesty Model Listing: Fixed model listing for Requesty provider when using custom base URLs (thanks dtrugman!) (#7378)
- Todo List Setting: Fixed newTaskRequireTodos setting to properly enforce todo list requirements (#7363)
- Image Generation Settings (v3.26.3): Fixed issue where the saved API key would clear when switching modes (#7536)
- ImageGenerationSettings no longer shows a dirty state on first open; the save button only enables after an actual change (#7495)
- GPT‑5 reliability improvements:
  - Manual condense preserves conversation continuity by correctly handling `previous_response_id` on the next request
  - Image inputs work reliably with structured text+image payloads
  - Temperature control is shown only for models that support it
  - Fewer GPT‑5–specific errors with updated provider definitions and SDK (thanks nlbuescher!)
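The configurable embedding batch size above amounts to splitting the documents to be indexed into provider-sized chunks. A minimal sketch of that splitting (illustrative only, not Roo Code's implementation; the 1-2048 range and default of 400 come from this changelog):

```python
def chunk_for_embedding(texts, batch_size=400):
    """Split texts into batches no larger than the provider's embedding limit.

    batch_size mirrors the new setting: valid range 1-2048, default 400.
    """
    if not 1 <= batch_size <= 2048:
        raise ValueError("batch_size must be between 1 and 2048")
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
```

A provider with a stricter limit (say 96 inputs per call) would simply be configured with `batch_size=96`, and every batch sent to its embeddings endpoint stays within that cap.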
Misc Improvements
- Release Image: Added kangaroo-themed release image generation (#7546)
- Issue Fixer Mode: Added missing todos parameter in new_task tool usage (#7391)
- Privacy Policy Update: Updated privacy policy to clarify proxy mode data handling (thanks jdilla1277!) (#7255)
- Dependencies: Updated drizzle-kit to v0.31.4 (#5453)
- Test Debugging (v3.26.3): Console logs now visible in tests when using the --no-silent flag (thanks hassoncs!) (#7467)
- Release automation: version bumps, changelog updates, and auto-publishing on merge for a faster, more reliable release process (#7490)
- New TaskSpawned developer event so integrations can detect when a subtask is created and capture its ID for chaining or monitoring (#7465)
- Roo Code Cloud SDK bumped to 0.25.0 (#7475)
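The TaskSpawned event above is aimed at integrations that chain or monitor subtasks. A toy emitter illustrating the pattern (the class, handler signature, and payload shape here are hypothetical, not Roo Code's actual API):

```python
class TaskEvents:
    """Toy event emitter; names and payload shape are illustrative only."""

    def __init__(self):
        self._handlers = {}

    def on(self, event, handler):
        """Register a handler for an event name."""
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, payload):
        """Invoke every handler registered for the event."""
        for handler in self._handlers.get(event, []):
            handler(payload)

# An integration captures spawned subtask IDs for later chaining or monitoring.
spawned_ids = []
events = TaskEvents()
events.on("TaskSpawned", lambda payload: spawned_ids.append(payload["taskId"]))
events.emit("TaskSpawned", {"taskId": "subtask-123"})
```

In a real integration, the handler would be where you kick off follow-up work keyed by the captured subtask ID.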