Browse all 33 CVE security advisories affecting vllm-project. AI-powered Chinese analysis, PoCs, and references for each vulnerability.
vllm-project is an open-source library for high-throughput, memory-efficient inference of large language models, serving developers who need optimized serving infrastructure for generative AI applications. Despite its utility in streamlining model deployment, the project has accumulated thirty-three recorded Common Vulnerabilities and Exposures (CVEs). Analysis of these flaws shows a prevalence of input validation errors and improper access controls, which can lead to remote code execution and privilege escalation. Many of the issues stem from inadequate sanitization of user-supplied data within the inference pipeline, potentially allowing attackers to manipulate model behavior or execute arbitrary commands on the host system. While no single catastrophic breach has been widely publicized, the volume of disclosed issues suggests that security auditing has not always kept pace with the project's rapid development. Users should apply patches promptly and employ network segmentation to mitigate these risks when deploying vllm-project in production environments.
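The two mitigations above, prompt patching and limiting network exposure, can be sketched as shell commands. This is a minimal illustration, not a complete hardening guide: the port is arbitrary, `<model-name>` is a placeholder, and which CVEs a given release fixes should be verified against the advisories listed on this page.

```shell
# Upgrade to the latest vLLM release, which carries the most recent
# security fixes; pin an exact version in production requirements files.
pip install --upgrade vllm

# Serve the OpenAI-compatible API on the loopback interface only, so the
# inference endpoint is unreachable from untrusted networks; place a
# reverse proxy or firewall in front if remote access is required.
python -m vllm.entrypoints.openai.api_server \
    --host 127.0.0.1 \
    --port 8000 \
    --model <model-name>
```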
Showing up to 20 recent security advisories.
This page lists every published CVE security advisory associated with vllm-project. Each entry links to a detailed page with CVSS scoring, CWE classification, affected products, and references. AI-generated Chinese analysis is provided for fast triage.