CVE-2025-49847
moderate-risk
Published 2025-06-17
llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy inside llama_vocab::impl::token_to_piece() in llama.cpp/src/vocab.cpp casts a very large size_t token length to int32_t, so the length check (if (length < (int32_t) size)) is bypassed. memcpy is then still called with the oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potentially code execution. The issue has been patched in version b5662.
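To make the truncation concrete, here is a minimal sketch of the flawed pattern and one way to close it. The function names, signatures, and the patched variant below are illustrative assumptions, not the actual llama.cpp source or the exact b5662 diff.

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Illustrative sketch only -- not the actual llama.cpp code. `piece` stands in
// for an attacker-controlled token string from a GGUF vocabulary; `buf`/`length`
// stand in for the caller-supplied output buffer used by token_to_piece().

// Vulnerable pattern (pre-b5662, paraphrased): the size_t length is narrowed to
// int32_t before the bounds check, so a piece of e.g. 0x100000010 bytes truncates
// to 16 and slips past the check, and memcpy then copies the full size_t count.
static int32_t try_copy_vulnerable(const std::string & piece, char * buf, int32_t length) {
    const size_t size = piece.size();          // attacker-controlled
    if (length < (int32_t) size) {             // truncation defeats this check
        return -(int32_t) size;                // "buffer too small" path is skipped
    }
    memcpy(buf, piece.data(), size);           // copies `size` bytes -> overflow
    return (int32_t) size;
}

// One way to close the hole (a sketch, not necessarily what b5662 does): keep the
// comparison in size_t so an oversized piece can never pass the bounds check.
static int32_t try_copy_checked(const std::string & piece, char * buf, size_t length) {
    const size_t size = piece.size();
    if (size > length) {
        return -1;                             // reject oversized pieces outright
    }
    memcpy(buf, piece.data(), size);
    return (int32_t) size;
}
```

The key point is that the bounds check and the memcpy must agree on the width of the size: checking a narrowed int32_t while copying the original size_t is what lets an oversized token piece through.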
Do I need to act?
- EPSS score: 0.61% chance of exploitation (low exploit probability)
- Not on the CISA KEV list: no confirmed active exploitation reported to CISA
- Patch status: fixed in version b5662 per the advisory above; check vendor advisories for fix availability and mitigation guidance
CVSS: 8.8/10 (High) · Attack vector: Network · Attack complexity: Low
Affected Products (1)
llama.cpp
Affected Vendors

Risk Score: 37/100 · moderate-risk
Severity: 30/34 · Critical
Exploitability: 2/34 · Minimal
Exposure: 5/34 · Minimal