
K-EXAONE is LG AI Research's Korean-specialized frontier large language model. It employs a Mixture-of-Experts (MoE) architecture with 236B total parameters, of which only 23B are active per token during inference, delivering frontier-level performance efficiently. Its hybrid attention mechanism combines sliding-window attention with global attention, reducing memory and computational requirements by 70% compared to the previous generation. An expanded 150K-word tokenizer vocabulary and Multi-Token Prediction (MTP) boost inference speed by 150%. The model supports a 260K-token context length (approximately 400 A4 pages) and ranked 1st in 10 of 13 categories in South Korea's national AI foundation model evaluation. It placed 7th globally on the Artificial Analysis Intelligence Index, the only model from outside the US and China in the global top 10. With a KGC-SAFETY score of 96.2, it leads in Korean sociocultural safety standards, and its A100-grade GPU compatibility makes frontier AI accessible to organizations with limited infrastructure.
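The hybrid attention idea described above can be sketched as an attention mask. This is a minimal illustration, assuming a simple scheme in which a few leading tokens act as global tokens while every other token attends causally within a fixed sliding window; the card does not specify K-EXAONE's actual layer layout, window size, or global-token placement, so all parameters below are hypothetical.

```python
import numpy as np

def hybrid_attention_mask(seq_len: int, window: int, n_global: int) -> np.ndarray:
    """Boolean causal mask mixing sliding-window and global attention.

    mask[i, j] is True when query position i may attend to key position j.
    The first `n_global` positions are treated as global tokens: every
    query can attend to them, and they can attend to their full causal
    past. All other positions only see the previous `window` tokens.
    (Illustrative scheme only; not K-EXAONE's documented layout.)
    """
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    causal = j <= i                   # never attend to the future
    in_window = (i - j) < window      # sliding-window band
    global_key = j < n_global         # anyone may look at global tokens
    global_query = i < n_global       # global tokens see the whole past
    return causal & (in_window | global_key | global_query)

mask = hybrid_attention_mask(seq_len=8, window=3, n_global=1)
# A late query sees only its recent window plus the global token at 0.
```

Masks like this are why the memory cost grows roughly linearly with sequence length (window * seq_len entries) instead of quadratically, which is the usual motivation for mixing windowed and global attention.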

Author: LG AI Research
Release Date: 2026-01-12
Knowledge Cutoff: 2024-12
License: Open Model
I/O Format: —
Context Length: 262K
API I/O (per 1M tokens): $0.2 / $0.8
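Assuming the listed prices are per 1M input and output tokens respectively (the usual convention for API pricing cards), a request's cost can be estimated with a one-line helper; the function name and example token counts are illustrative.

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 in_price: float = 0.2, out_price: float = 0.8) -> float:
    """Estimate request cost in USD at per-1M-token rates.

    Defaults mirror the card above ($0.2 input / $0.8 output per 1M
    tokens) on the assumption that the two figures are input/output.
    """
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# e.g. 50K input tokens + 10K output tokens:
cost = api_cost_usd(50_000, 10_000)  # 0.05 * 0.2 + 0.01 * 0.8 = 0.018
```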
How to Use: API Access
Output Speed: —
Arena Overall: —
Intelligence Index: 32.1
Coding Index: 27.0
Math Index: 90.3
LiveBench: —
ForecastBench: —
GPQA Diamond: 78.3%
HLE: 13.1%
MMLU-Pro: 83.8%
AIME 2025: 90.3%
MATH-500: —
LB Reasoning: —
LB Math: —
LB Data Analysis: —
LiveCodeBench: 76.8%
LB Coding: —
LB Agentic: —
TAU2: 74.3%
TerminalBench: 22.7%
SciCode: 35.6%
IFBench: 64.7%
AA-LCR: 0.6
Hallucination (HHEM): —
Factual Consistency (HHEM): —
LB Language: —
LB Instruction Following: —