Vulnerability Information
Vulnerability Title
llama.cpp has a Heap Buffer Overflow via Integer Overflow in GGUF Tensor Parsing
Vulnerability Description
llama.cpp is a C/C++ framework for inference of several LLM models. Prior to release b7824, an integer overflow in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. The overflow causes `ggml_nbytes` to return a far smaller size than the tensor actually requires (e.g., 4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. This memory corruption can potentially be escalated to Remote Code Execution (RCE). Release b7824 contains a fix.
CVSS Information
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Vulnerability Type
Heap Buffer Overflow
Vulnerability Title
llama.cpp Input Validation Error Vulnerability
Vulnerability Description
llama.cpp is an LLM inference framework developed by Georgi Gerganov. Versions of llama.cpp prior to b7824 contain an input validation vulnerability: an integer overflow in the ggml_nbytes function can lead to a heap buffer overflow and remote code execution.
CVSS Information
N/A
Vulnerability Type
N/A