Introduction: xAI introduced grok-code-fast-1, a model for agentic coding workflows.
- Architecture: Built from scratch, with a pre-training corpus rich in programming content and a post-training dataset drawn from real-world pull requests and coding tasks.
- Optimizations: Tuned for common tool calls such as `grep`, terminal operations, and file editing, so it integrates cleanly with coding environments. Specialized serving techniques and prompt caching yield high cache hit rates (a sketch of such a tool-calling setup follows this list).
- Language Support: Covers multiple programming languages, including TypeScript, Python, Java, Rust, C++, and Go.
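To make the tool-integration claim concrete, the sketch below defines grep-style search and file-edit tools in an OpenAI-compatible chat-completions request and keeps the prompt prefix stable across turns so cached tokens can be reused. It is a minimal illustration under assumptions: the base URL, tool names, and JSON schemas are hypothetical conventions, not xAI's published interface.

```python
# Minimal agentic tool-calling sketch, assuming an OpenAI-compatible endpoint.
# The base_url, tool names, and schemas are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="XAI_API_KEY")

# Hypothetical tools the model may call while working on a repository.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "grep_repo",
            "description": "Search the repository for a pattern and return matching lines.",
            "parameters": {
                "type": "object",
                "properties": {"pattern": {"type": "string"}},
                "required": ["pattern"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "edit_file",
            "description": "Replace a text span in a file with new content.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string"},
                    "old_text": {"type": "string"},
                    "new_text": {"type": "string"},
                },
                "required": ["path", "old_text", "new_text"],
            },
        },
    },
]

# Keeping the system prompt and tool definitions byte-identical across turns lets
# the server reuse the cached prompt prefix, which is what drives high cache hit rates.
SYSTEM_PROMPT = "You are a coding agent. Use the provided tools to inspect and edit the repo."

response = client.chat.completions.create(
    model="grok-code-fast-1",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Find where parse_config is defined and add a docstring."},
    ],
    tools=TOOLS,
)

# Print any tool calls the model chose to make; a real agent would execute them
# and feed the results back as tool messages.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```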
Performance: Scores 70.8% on SWE-Bench-Verified as measured with xAI's internal evaluation suite; evaluation also incorporates human ratings and automated assessments of real-world usability.
- Context Window: Offers a 256k-token context window for processing larger codebases (a rough sizing sketch follows this list).
- Architecture: Reportedly built on a mixture-of-experts architecture with an estimated 314 billion parameters, balancing speed with coding capability; throughput is roughly 92 tokens per second. A toy routing example after this list illustrates the mixture-of-experts idea.
- Comparison: Relative to other coding-focused models such as OpenAI's o1-mini and Anthropic's Claude 3.5 Sonnet, it emphasizes speed and tool integration over maximum benchmark accuracy. Its mixture-of-experts design is similar to Google DeepMind's Gemini 1.5 Pro but is adapted for software development.
- Community Responses: Early reactions highlighted its execution speed. Software developer Eric Jiang praised its impact on productivity, while others discussed use cases and accessibility, including the need for a CLI to compete with Claude Code.
- Access: Available for free for a limited time through select launch partners such as GitHub Copilot and Cursor. xAI plans to update the model frequently and is training a new variant with multimodal input and an extended context length.
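To give a rough sense of what the 256k-token window and ~92 tokens-per-second figures mean in practice, the back-of-the-envelope sketch below budgets a slice of a codebase against the context window and estimates how long a patch takes to generate. The 4-characters-per-token ratio is an assumed heuristic, not a measured property of the model's tokenizer.

```python
# Back-of-the-envelope sizing for a 256k-token window at ~92 tokens/second.
# The characters-per-token ratio is a common heuristic, assumed here.
CONTEXT_WINDOW_TOKENS = 256_000
THROUGHPUT_TOKENS_PER_S = 92   # reported generation speed
CHARS_PER_TOKEN = 4            # assumed heuristic; the actual tokenizer may differ

def estimate_tokens(num_chars: int) -> int:
    """Roughly convert a character count of source text into tokens."""
    return num_chars // CHARS_PER_TOKEN

# Example: a 600 KB slice of a repository plus a 2,000-token task prompt.
repo_tokens = estimate_tokens(600_000)            # ~150,000 tokens
prompt_tokens = repo_tokens + 2_000
fits = prompt_tokens <= CONTEXT_WINDOW_TOKENS     # True, with ~100k tokens of headroom

# Generating a ~3,000-token patch at ~92 tokens/second takes about half a minute.
patch_seconds = 3_000 / THROUGHPUT_TOKENS_PER_S   # ~32.6 seconds

print(fits, round(patch_seconds, 1))
```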
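Because the summary leans on the term mixture-of-experts, the toy top-k routing layer below shows the core idea: a router scores experts per token and only a few experts actually run, which is how such designs keep inference fast despite a large total parameter count. Every number in it (expert count, top-k, hidden size) is an illustrative assumption and says nothing about grok-code-fast-1's real internals.

```python
# Toy top-k mixture-of-experts routing in NumPy, purely to illustrate the concept.
# Expert count, top-k, and dimensions are assumptions, not the model's actual design.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # assumed for illustration
TOP_K = 2         # each token is routed to its 2 highest-scoring experts
D_MODEL = 16      # toy hidden size

# Each "expert" is a small feed-forward weight matrix; the router scores experts per token.
experts = rng.normal(size=(NUM_EXPERTS, D_MODEL, D_MODEL))
router = rng.normal(size=(D_MODEL, NUM_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs by gate weight."""
    logits = x @ router                              # (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]    # indices of the chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        gates = np.exp(logits[t, top[t]])
        gates /= gates.sum()                         # softmax over the selected experts only
        for gate, e in zip(gates, top[t]):
            out[t] += gate * (x[t] @ experts[e])     # only k experts run per token
    return out

tokens = rng.normal(size=(4, D_MODEL))               # four toy token embeddings
print(moe_layer(tokens).shape)                       # (4, 16)
```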