Mistral AI Releases Open-Source Model That Beats Google and OpenAI Rivals on Benchmarks
The new Mistral Small 3.1 features a 128K token context window, multimodal capabilities, and efficient performance on consumer hardware.
- Mistral Small 3.1, a 24-billion-parameter model, outperforms Google’s Gemma 3 and OpenAI’s GPT-4o Mini in text, vision, and multilingual benchmarks.
- The model supports multimodal inputs, handling both text and images with a 128K token context window for long-form reasoning and document analysis.
- It is optimized for local deployment, running efficiently when quantized on a single RTX 4090 GPU or a MacBook with 32 GB of RAM.
- Released under an Apache 2.0 license, the model is freely available for use and modification, promoting accessibility and open-source innovation.
- Currently available on Hugging Face and Google Cloud, with planned integrations into NVIDIA NIM and Microsoft Azure AI Foundry.
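A rough back-of-envelope calculation shows why quantization is what makes local deployment feasible for a 24-billion-parameter model. The sketch below estimates weight storage alone at a few common precisions; the function name and the 1 GB = 10⁹ bytes convention are illustrative choices, and real memory use is higher once activations and the KV cache are included.

```python
# Back-of-envelope memory estimate for storing the weights of a
# 24B-parameter model at several precisions (weights only; ignores
# activation and KV-cache overhead, which add several more GB).

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 24e9  # Mistral Small 3.1's parameter count

for label, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{label}: ~{weight_memory_gb(N_PARAMS, bits):.0f} GB")
# fp16: ~48 GB, int8: ~24 GB, 4-bit: ~12 GB
```

At full fp16 precision the weights alone need roughly 48 GB, well beyond an RTX 4090's 24 GB of VRAM; 4-bit quantization shrinks them to about 12 GB, which is why the model can run on a single consumer GPU or a 32 GB MacBook.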