Browse all 4 CVE security advisories affecting Giskard-AI, with AI-generated Chinese analysis, PoCs, and references for each vulnerability.
Giskard-AI is an open-source testing framework for machine learning models, used by developers and security researchers to evaluate AI systems for robustness and safety. Four CVEs are currently documented against the framework itself, covering weaknesses such as remote code execution, cross-site scripting, and privilege escalation, though no major public incidents exploiting them have been reported. The framework's scanning capabilities target ML-specific threats such as data poisoning and adversarial attacks, making it useful both for defensive security research and for proactive vulnerability assessment of AI applications.
This page lists every published CVE security advisory associated with Giskard-AI. Each entry links to a detailed page with CVSS scoring, CWE classification, affected products, and references. AI-generated Chinese analysis is provided for fast triage.