CVE-2026-34753

Low-risk · Published 2026-04-06

vLLM is an inference and serving engine for large language models (LLMs). From version 0.16.0 up to, but not including, 0.19.0, a server-side request forgery (SSRF) vulnerability in download_bytes_from_url allows any actor who can control batch input JSON to make the vLLM batch runner issue arbitrary HTTP/HTTPS requests from the server, with no URL validation or domain restrictions applied. This can be used to target internal services reachable from the vLLM host, such as cloud metadata endpoints or internal HTTP APIs. The vulnerability is fixed in 0.19.0.
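The general mitigation pattern for this class of SSRF is to validate any attacker-influenced URL against an explicit allowlist before fetching it. The sketch below is illustrative only: the allowlist, helper name, and policy are assumptions for demonstration, not the actual fix shipped in vLLM 0.19.0.

```python
from urllib.parse import urlparse

# Hosts the operator explicitly trusts for batch-input downloads.
# This allowlist is a hypothetical example, not vLLM's configuration.
ALLOWED_HOSTS = {"files.example.com"}

def is_url_allowed(url: str) -> bool:
    """Reject URLs outside an explicit HTTPS allowlist.

    A minimal guard against the SSRF pattern described above: URLs
    taken from batch input JSON must not be able to point the server
    at internal services such as cloud metadata endpoints.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # blocks plain-HTTP and non-HTTP schemes
    return parsed.hostname in ALLOWED_HOSTS
```

For example, `is_url_allowed("https://files.example.com/batch.jsonl")` passes, while `is_url_allowed("http://169.254.169.254/latest/meta-data/")` (a cloud metadata endpoint) is rejected. Hostname allowlisting alone does not cover every SSRF vector (e.g. DNS rebinding or redirects), so it should complement, not replace, upgrading to a fixed release.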

Do I need to act?

- EPSS score: 0.04% chance of exploitation (low exploit probability)
- Not on the CISA KEV list: no confirmed active exploitation reported to CISA
- Patch status unknown: check vendor advisories for fix availability and mitigation guidance
- CVSS 5.4/10 (Medium): NETWORK attack vector, LOW attack complexity

Affected Products (1)

Affected Vendors

Risk score: 26/100 (low-risk)

- Severity: 21/34 (High)
- Exploitability: 0/34 (Minimal)
- Exposure: 5/34 (Minimal)