
CWE-1426 — Improper Validation of Generative AI Output

3 vulnerabilities are classified as CWE-1426. AI-generated analysis included.

CWE-1426 describes a validation weakness in which an application fails to adequately verify the output of a generative AI component. Because these models produce unpredictable results, developers often treat the generated content as safe without rigorous checks. Attackers typically exploit this by crafting prompts that lead the model to emit harmful code, leaked sensitive data, or policy-violating text. If the application blindly executes or displays this unverified output, the result can be a serious security breach, including injection attacks and privacy violations. To mitigate the risk, developers should implement robust post-processing validation layers: dedicated safety filters, strict output schemas, and human-in-the-loop review for high-stakes decisions. By treating generative AI output as inherently untrusted, organizations can keep malicious content from reaching end users or downstream systems.
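The "strict output schema" mitigation above can be sketched in a few lines. This is a minimal, hypothetical post-processing layer (the key names, size limit, and categories are illustrative, not from any real product): the model is asked to return JSON matching a fixed schema, and anything that deviates is rejected rather than passed downstream.

```python
import json

# Hypothetical schema for an LLM that classifies support tickets.
ALLOWED_KEYS = {"summary", "category"}
ALLOWED_CATEGORIES = {"billing", "shipping", "other"}

def validate_llm_output(raw: str) -> dict:
    """Parse and validate untrusted model output; raise on any deviation."""
    data = json.loads(raw)  # raises ValueError on non-JSON output
    if set(data) != ALLOWED_KEYS:
        raise ValueError(f"unexpected keys: {set(data) ^ ALLOWED_KEYS}")
    if not isinstance(data["summary"], str) or len(data["summary"]) > 500:
        raise ValueError("summary must be a string of at most 500 chars")
    if data["category"] not in ALLOWED_CATEGORIES:
        raise ValueError(f"unknown category: {data['category']!r}")
    return data

# A well-formed response passes...
ok = validate_llm_output('{"summary": "Refund request", "category": "billing"}')

# ...while an extra, injected field is rejected instead of being executed.
rejected = False
try:
    validate_llm_output('{"summary": "x", "category": "billing", "cmd": "rm -rf /"}')
except ValueError:
    rejected = True
```

The design point is fail-closed: the validator enumerates what is allowed and rejects everything else, rather than trying to enumerate what is dangerous.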

MITRE CWE Description
The product invokes a generative AI/ML component whose behaviors and outputs cannot be directly controlled, but the product does not validate or insufficiently validates the outputs to ensure that they align with the intended security, content, or privacy policy.
Common Consequences (1)
Integrity: Execute Unauthorized Code or Commands; Varies by Context
In an agent-oriented setting, output could be used to cause unpredictable agent invocation, i.e., to control or influence agents that might be invoked from the output. The impact varies depending on the access that is granted to the tools, such as creating a database or wri…
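The agent-invocation risk described above is commonly contained by dispatching tool calls through an explicit allow-list. The sketch below is an assumption-laden illustration (the tool names and dispatcher are hypothetical): whatever tool the model's output requests, only pre-registered, unprivileged tools can actually run.

```python
# Hypothetical agent dispatcher: tool names proposed by model output are
# checked against an explicit allow-list, so a prompt-injected request
# for a privileged tool (e.g. one that writes to a database) is refused.
ALLOWED_TOOLS = {
    "search_docs": lambda q: f"results for {q}",
    "get_weather": lambda city: f"weather in {city}",
}

def dispatch(tool_name: str, arg: str) -> str:
    """Run a model-requested tool only if it is on the allow-list."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        raise PermissionError(f"model requested disallowed tool: {tool_name!r}")
    return tool(arg)
```

Granting the dispatcher only the minimum set of tools bounds the blast radius even when output validation itself fails.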
Mitigations (4)
Architecture and Design: Since the output from a generative AI component (such as an LLM) cannot be trusted, ensure that it operates in an untrusted or non-privileged space.
Operation: Use "semantic comparators," which are mechanisms that provide semantic comparison to identify objects that might appear different but are semantically similar.
Operation: Use components that operate externally to the system to monitor the output and act as a moderator. These components are called different terms, such as supervisors or guardrails.
Build and Compilation: During model training, use an appropriate variety of good and bad examples to guide preferred outputs.
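The external "moderator" or guardrail mitigation can be as simple as a screening pass between the model and the user. This is a deliberately minimal sketch, not a production filter (real guardrails use dedicated safety models, and the patterns here are illustrative assumptions): output that matches a disallowed pattern is blocked before it is displayed or executed.

```python
import re

# Minimal guardrail sketch: screen model output before it reaches the
# user; any match against a disallowed pattern blocks the response.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]"),  # credential leakage
    re.compile(r"(?i)<script\b"),           # HTML/JS injection into a page
]

def moderate(output: str) -> str:
    """Return the output unchanged, or a refusal if policy is violated."""
    for pat in BLOCKED_PATTERNS:
        if pat.search(output):
            return "[response blocked by output policy]"
    return output
```

Because the moderator runs outside the model, it keeps working even when a crafted prompt causes the model itself to ignore its instructions.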
| CVE ID | Title | CVSS | Severity | Published |
|---|---|---|---|---|
| CVE-2025-55074 | Channel member objects leak read status — Mattermost | 3.0 | Low | 2025-11-18 |
| CVE-2025-62453 | GitHub Copilot and Visual Studio Code Security Feature Bypass Vulnerability — Visual Studio Code | 5.0 | Medium | 2025-11-11 |
| CVE-2025-31363 | Data exfiltration via AI plugin Jira tool — Mattermost | 3.0 | Low | 2025-04-16 |

Three CVEs are classified as CWE-1426. The CWE taxonomy describes the weakness; review the individual CVEs for product-specific impact.