EXAONE-Deep
Open-source large language models optimized for enhanced reasoning capabilities
EXAONE-Deep is a series of open-source language models that excel in mathematical reasoning, scientific understanding, and code generation. Available in 32B, 7.8B, and 2.4B parameter versions.
Key Features
Mathematical Reasoning
EXAONE-Deep demonstrates strong performance on mathematical benchmarks such as MATH-500, AIME 2024/2025, and CSAT mathematics.
Scientific Understanding
The model excels on the GPQA Diamond test, demonstrating its ability to solve doctorate-level problems in physics, chemistry, and biology.
Coding Abilities
Achieves leading scores on programming benchmarks such as LiveCodeBench, demonstrating strong code generation and comprehension.
Efficient Architecture
Model parameters range from 2.4B to 32B, suitable for different scenarios, with support for AWQ and GGUF quantization formats for easy deployment.
Performance Highlights
32B Model Performance
The EXAONE-Deep 32B model scores 94.5 on the CSAT 2025 mathematics section and 90.0 on AIME 2024 (American Invitational Mathematics Examination), demonstrating strong performance on challenging mathematical problems.
7.8B & 2.4B Models
The 7.8B and 2.4B variants retain strong reasoning capabilities at a lower compute cost: the 7.8B model posts strong results on LiveCodeBench, and the 2.4B model performs well despite its small parameter count.
Broad Compatibility
All model versions support AWQ and GGUF quantization formats, making them compatible with various deployment options including TensorRT-LLM, vLLM, and llama.cpp, enabling flexible implementation in different environments.
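As a minimal deployment sketch for the llama.cpp path described above (the Hugging Face repository and GGUF file names below are assumptions for illustration; check the official model cards for the published artifacts):

```shell
# Download a quantized GGUF file from Hugging Face
# (repo and file names are illustrative, not confirmed by this page).
huggingface-cli download LGAI-EXAONE/EXAONE-Deep-2.4B-GGUF \
    EXAONE-Deep-2.4B-Q4_K_M.gguf --local-dir ./models

# Run an interactive chat session with llama.cpp,
# offloading all layers to the GPU (-ngl 99).
./llama-cli -m ./models/EXAONE-Deep-2.4B-Q4_K_M.gguf -ngl 99 -cnv

# Alternatively, serve an AWQ-quantized variant with vLLM
# (model id is likewise an assumption).
vllm serve LGAI-EXAONE/EXAONE-Deep-2.4B-AWQ --quantization awq
```

The GGUF route keeps memory low enough for CPU or single-GPU machines, while vLLM with AWQ targets higher-throughput serving.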
Quick Start
Get started with EXAONE-Deep models quickly
Check our deployment guide for detailed instructions
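As a starting point, the snippet below shows one plausible way to load an EXAONE-Deep checkpoint with Hugging Face transformers; the model ids and the chat-template flow follow the common transformers pattern but are assumptions here, so verify them against the official model card:

```python
# Minimal sketch of running EXAONE-Deep via Hugging Face transformers.
# Model ids below are assumptions based on the LGAI-EXAONE naming scheme;
# confirm them (and any trust_remote_code requirement) on the model card.

MODEL_IDS = {
    "2.4b": "LGAI-EXAONE/EXAONE-Deep-2.4B",
    "7.8b": "LGAI-EXAONE/EXAONE-Deep-7.8B",
    "32b": "LGAI-EXAONE/EXAONE-Deep-32B",
}


def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat format consumed by
    tokenizer.apply_chat_template()."""
    return [{"role": "user", "content": question}]


def main() -> None:
    # Heavyweight deps imported lazily so the helpers above
    # stay importable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = MODEL_IDS["2.4b"]  # smallest variant for a quick test
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",
        device_map="auto",
    )
    inputs = tokenizer.apply_chat_template(
        build_messages("How many primes are there below 100?"),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()  # downloads the weights on first run; use a GPU machine
```

Swapping in the 7.8B or 32B id from MODEL_IDS is the only change needed to move up a size class, memory permitting.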