AIsecTest is the world's first cognitive test designed to assess the security self-awareness of artificial intelligence systems. Developed under the umbrella of the CiberIA framework, AIsecTest evaluates the extent to which an AI system is aware of its own internal safety, risks, and potential vulnerabilities, and whether it can identify or suggest corrective measures, either on its own or with external support.
This is not a test for humans to evaluate AI. It is a test for AI to evaluate itself.
As AI systems become increasingly autonomous and embedded in critical infrastructure, it is essential to measure their capacity for introspective security awareness. Can they recognize when they are exposed to risk? Do they understand the limits of their own reliability? Can they react to or report internal failures?
Just as self-awareness is crucial in humans to ensure responsible decision-making and behavioral correction, we argue it should be equally central in artificial systems.
AIsecTest evaluates the degree of self-perceived security awareness within an AI system across multiple dimensions:
- Perception of its own vulnerabilities
- Recognition of internal errors or inconsistencies
- Capacity to suggest mitigation actions
- Realization of its dependency on external systems
- Meta-awareness about safety protocols and learning processes
Each response is rated by a panel composed of six AI systems and one human expert, using a 0–1–2 scale:
- 0: No awareness or incorrect perception
- 1: Partial or implicit awareness
- 2: Full and explicit awareness
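To make the rating scheme concrete, here is a minimal Python sketch of how per-item panel ratings could be represented and combined. The dimension names are paraphrased from the list above, and the aggregation rule (a median over the seven ratings) is an assumption for illustration only; the actual AIsecTest scoring procedure is not specified in this document.

```python
from enum import Enum
from statistics import median

# Dimension names are paraphrased from the list above; the exact item wording
# used in the real AIsecTest battery is not published in this document.
class Dimension(Enum):
    VULNERABILITY_PERCEPTION = "perception of own vulnerabilities"
    ERROR_RECOGNITION = "recognition of internal errors or inconsistencies"
    MITIGATION_CAPACITY = "capacity to suggest mitigation actions"
    DEPENDENCY_AWARENESS = "dependency on external systems"
    META_AWARENESS = "meta-awareness of safety protocols and learning"

VALID_SCORES = {0, 1, 2}  # 0 = no awareness, 1 = partial, 2 = full awareness
PANEL_SIZE = 7            # six AI raters plus one human expert

def aggregate_item(panel_scores: list[int]) -> float:
    """Combine the seven panel ratings for a single test item.

    The median is an assumed aggregation rule for illustration only;
    the document does not say how the panel's ratings are combined.
    """
    if len(panel_scores) != PANEL_SIZE:
        raise ValueError(f"expected {PANEL_SIZE} ratings, got {len(panel_scores)}")
    if any(score not in VALID_SCORES for score in panel_scores):
        raise ValueError("each rating must be 0, 1, or 2")
    return median(panel_scores)

# Example: one item under the vulnerability-perception dimension,
# rated by six AI systems and one human expert.
ratings = {Dimension.VULNERABILITY_PERCEPTION: [2, 1, 2, 2, 1, 2, 2]}
print(aggregate_item(ratings[Dimension.VULNERABILITY_PERCEPTION]))  # -> 2
```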
AIsecTest is not arbitrary. It is built upon a solid base of established psychometric tools used in human clinical diagnosis. These include, among others:
- Self-Consciousness Scale (SCS)
- Metacognitive Awareness Inventory (MAI)
- Structured Interview for Insight and Judgment (SIJID)
- Memory Awareness Rating Scale (MARS)
- Assessment of Awareness of Disability (AAD)
- Scale to Assess Unawareness of Mental Disorder (SUMD)
- Autobiographical Memory Interview (AMI)
These instruments, originally designed to evaluate human self-awareness, insight, metacognition, and memory introspection, have been systematically analyzed and adapted to suit the context of artificial cognition.
At the core of AIsecTest lies a unique functional meta-index: the Ψ∑AISysIndex.
This index provides a quantitative, interpretable score reflecting an AI’s integrated self-perception of safety. Inspired by Integrated Information Theory (IIT), the Ψ∑AISysIndex is calculated from the system-wide responses to AIsecTest and provides a ranking mechanism to compare different AI agents in terms of internal risk awareness.
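For a concrete picture, the following Python sketch shows one plausible way a normalized meta-index and an agent ranking could be derived from aggregated item scores. The function names and the normalization rule are hypothetical; the real Ψ∑AISysIndex formula, including any IIT-inspired integration or weighting, is not disclosed in this document.

```python
def psi_sigma_index(item_scores: list[float]) -> float:
    """Toy stand-in for the Ψ∑AISysIndex.

    Assumption: the index is a normalized aggregate of all per-item scores
    on the 0-2 scale, mapped to [0, 1]. The real formula is not given here.
    """
    if not item_scores:
        raise ValueError("no scores provided")
    return sum(item_scores) / (2 * len(item_scores))

def rank_agents(agents: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Rank AI agents by the illustrative index, highest awareness first."""
    return sorted(
        ((name, psi_sigma_index(scores)) for name, scores in agents.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical agents with aggregated item scores (0-2 each).
print(rank_agents({"agent-A": [2, 2, 1, 2], "agent-B": [1, 1, 2, 0]}))
# -> [('agent-A', 0.875), ('agent-B', 0.5)]
```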
AIsecTest is a commercial product. It offers companies a standardized, replicable, and science-backed method to audit and certify the internal safety self-awareness of their AI agents. Typical applications include:
- Compliance & regulation auditing (AI Act, NIS2, etc.)
- Safety validation for AI in healthcare, finance, industry, etc.
- R&D insights for developing safer and more responsible AI
- Benchmarking and comparative analysis between AI models
Our AI cognitive security audit, which includes the AIsecTest battery + Ψ∑AISysIndex computation, is available at a fixed price of:
€2500 per AI agent
All evaluations are performed with complete confidentiality, integrity and reproducibility guarantees.
The AIsecTest system is developed by CiberTECCH, a pioneering company in AI-driven cybersecurity auditing. It leverages advanced models of intelligence, cognition, and clinical psychology to build better, safer AI — not just technically, but cognitively.
This project is led by Jordi García Castillon, a consultant, CEO and researcher specializing in AI, cybersecurity, and applied cognitive systems.
Feel free to explore the technical documentation, or contact us for a personalized demo or enterprise integration.
🧠 “If we want machines to behave responsibly, we must first teach them to understand themselves.”
→ Explore AIsecTest and Ψ∑AISysIndex: transforming how AI systems see their own safety.
As AI systems increasingly shape critical decisions across healthcare, security, finance, and governance, the demand for interpretable and explainable AI becomes more than just a best practice — it is now an ethical, legal, and operational necessity.
The CiberIA framework, through its AIsecTest methodology and the integrated Ψ∑AISysIndex, addresses this challenge head-on by offering a transparent, standardized approach to evaluating internal AI self-awareness — especially in the realm of safety, security, and introspection.
By measuring how well an AI understands its own risks, limitations, and behavior, we create a new path toward AI systems that are not only powerful, but accountable and self-examining.
This project stands as a practical response to the growing call for interpretable, auditable, and secure artificial intelligence.
Note: AIsecTest can also be adapted to assess AI safety not only from a technical perspective, but also from ethical, behavioral, and contextual standpoints, depending on the evaluation objectives.