Vulnerability Monitor


CVE-2025-53630


llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.


Published

2025-07-10T20:15:27.523

Last Modified

2025-07-15T13:14:49.980

Status

Awaiting Analysis

Source

[email protected]

Severity

-

Weaknesses
  • Type: Secondary
    CWE-122 (Heap-based Buffer Overflow)
    CWE-680 (Integer Overflow to Buffer Overflow)

Affected Vendors & Products

-


References