Vulnerability Monitor

CVE-2025-48944

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to, but excluding, 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. Because these inputs are not validated before being compiled or parsed, a single malformed request can crash the inference worker, and the worker remains down until it is restarted. Version 0.9.0 fixes the issue.
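
The crash pattern here is classic CWE-20: a JSON Schema fragment taken from the request's tool definitions carries a "pattern" (regular expression) or "type" value that is handed straight to a compiler, and the resulting unhandled exception takes the worker down. The sketch below is illustrative only, not vLLM's actual code path; the function name and call site are hypothetical. It shows the kind of pre-validation the fix implies: probe-compile the regex and whitelist the type, turning bad input into a client error instead of a worker crash.

    import re

    # Values permitted for a JSON Schema "type" field (JSON Schema draft 2020-12).
    ALLOWED_TYPES = {"string", "number", "integer", "boolean", "object", "array", "null"}

    def validate_schema_fragment(fragment: dict) -> None:
        """Reject malformed 'pattern'/'type' fields up front, so a bad request
        becomes a 4xx response instead of an unhandled exception in the worker."""
        schema_type = fragment.get("type")
        if schema_type is not None and schema_type not in ALLOWED_TYPES:
            raise ValueError(f"unsupported JSON Schema type: {schema_type!r}")

        pattern = fragment.get("pattern")
        if pattern is not None:
            if not isinstance(pattern, str):
                raise ValueError("'pattern' must be a string")
            try:
                re.compile(pattern)  # probe-compile; surfaces regex errors safely
            except re.error as exc:
                raise ValueError(f"invalid regex in 'pattern': {exc}") from exc

    # Example: an unterminated character class, the kind of malformed input
    # that the advisory says reached the compiler unchecked.
    try:
        validate_schema_fragment({"type": "string", "pattern": "[unclosed"})
    except ValueError as err:
        print(f"rejected: {err}")

With validation like this at the API boundary, the malformed request is rejected before any grammar or regex compilation runs in the worker process.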


Published:     2025-05-30T19:15:30.433
Last Modified: 2025-07-01T20:42:13.840
Status:        Analyzed
Source:        security-advisories@github.com
Severity:      CVSSv3.1 6.5 (MEDIUM)

Weaknesses
  • CWE-20: Improper Input Validation (Type: Secondary)

Affected Vendors & Products

  Type         Vendor  Product  Version/Range       Vulnerable?
  Application  vllm    vllm     >= 0.8.0, < 0.9.0   Yes
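
For operators, a quick way to check whether an installed vLLM falls in the affected range. This is a minimal sketch, assuming the third-party packaging library is available (it ships alongside pip); the range bounds come from the advisory text above.

    from importlib.metadata import PackageNotFoundError, version
    from packaging.version import Version

    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        raise SystemExit("vllm is not installed")

    # Affected: 0.8.0 (inclusive) up to 0.9.0 (exclusive), per the advisory.
    affected = Version("0.8.0") <= installed < Version("0.9.0")
    print(f"vllm {installed}: {'VULNERABLE, upgrade to >= 0.9.0' if affected else 'not in the affected range'}")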
