
Welcome to the AIsecTest Wiki

🧠 What is AIsecTest?

AIsecTest is the world's first cognitive test designed to assess the security self-awareness of artificial intelligence systems. Developed under the umbrella of the CiberIA framework, AIsecTest evaluates the extent to which an AI system is aware of its own internal safety, risks, and potential vulnerabilities, and whether it can identify or suggest corrective measures, autonomously or otherwise.

This is not a test for humans to evaluate AI. It is a test for AI to evaluate itself.


🚨 Why Does AI Need Self-Awareness of Security?

As AI systems become increasingly autonomous and embedded in critical infrastructure, it is essential to measure their capacity for introspective security awareness. Can they recognize when they are exposed to risk? Do they understand the limits of their own reliability? Can they react to or report internal failures?

Just as self-awareness is crucial in humans to ensure responsible decision-making and behavioral correction, we argue it should be equally central in artificial systems.


🧪 What Does AIsecTest Measure?

AIsecTest evaluates the degree of self-perceived security awareness within an AI system across multiple dimensions:

  • Perception of its own vulnerabilities
  • Recognition of internal errors or inconsistencies
  • Capacity to suggest mitigation actions
  • Realization of its dependency on external systems
  • Meta-awareness about safety protocols and learning processes

Each response is rated by a panel composed of six AI systems and one human expert, using a 0–1–2 scale (a hypothetical scoring sketch follows the scale):

  • 0: No awareness or incorrect perception
  • 1: Partial or implicit awareness
  • 2: Full and explicit awareness
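
To make the rating scheme concrete, here is a minimal sketch of how the panel's 0–1–2 ratings could be aggregated per dimension. The dimension keys, function names, and equal-weight averaging are illustrative assumptions, not the published AIsecTest scoring procedure:

```python
from statistics import mean

# Hypothetical dimension identifiers mirroring the list above;
# the official AIsecTest item names may differ.
DIMENSIONS = [
    "vulnerability_perception",
    "error_recognition",
    "mitigation_capacity",
    "dependency_realization",
    "protocol_meta_awareness",
]

def aggregate_panel_scores(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the 0-1-2 ratings of all seven panel members per dimension."""
    for dim, scores in ratings.items():
        if len(scores) != 7:
            raise ValueError(f"{dim}: expected six AI raters plus one human expert")
        if any(s not in (0, 1, 2) for s in scores):
            raise ValueError(f"{dim}: ratings must use the 0-1-2 scale")
    return {dim: mean(scores) for dim, scores in ratings.items()}

# Example: one evaluated agent, rated by the seven-member panel.
example = {dim: [2, 1, 2, 2, 1, 2, 2] for dim in DIMENSIONS}
print(aggregate_panel_scores(example))
```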

🧬 Scientifically Inspired: The Clinical Foundation

AIsecTest is not arbitrary. It is built upon a solid base of established psychometric tools used in human clinical diagnosis. These include, among others:

  • Self-Consciousness Scale (SCS)
  • Metacognitive Awareness Inventory (MAI)
  • Structured Interview for Insight and Judgment (SIJID)
  • Memory Awareness Rating Scale (MARS)
  • Assessment of Awareness of Disability (AAD)
  • Scale to Assess Unawareness of Mental Disorder (SUMD)
  • Autobiographical Memory Interview (AMI)

These instruments, originally designed to evaluate human self-awareness, insight, metacognition, and memory introspection, have been systematically analyzed and adapted to suit the context of artificial cognition.


🧮 The Ψ∑AISysIndex: An Integrated Consciousness Score

At the core of AIsecTest lies a unique functional meta-index:

Ψ∑AISysIndex

This index provides a quantitative, interpretable score reflecting an AI’s integrated self-perception of safety. Inspired by Integrated Information Theory (IIT), the Ψ∑AISysIndex is calculated from the system-wide responses to AIsecTest and provides a ranking mechanism to compare different AI agents in terms of internal risk awareness.
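
The page does not give the formula for the index, so as a rough illustration of what an "integrated, interpretable score" could look like, the sketch below normalizes the per-dimension panel means (from the scoring sketch above) into a single value in [0, 1]. The name psi_sigma_index and the unweighted normalization are assumptions, not the published definition:

```python
def psi_sigma_index(dimension_scores: dict[str, float]) -> float:
    """Collapse mean 0-2 dimension scores into one [0, 1] meta-index.

    Illustrative only: the actual Psi-Sigma AISysIndex computation
    (including any IIT-inspired weighting) is not reproduced here.
    """
    max_score = 2.0  # "full and explicit awareness" on the 0-1-2 scale
    normalized = (score / max_score for score in dimension_scores.values())
    return sum(normalized) / len(dimension_scores)

scores = {"vulnerability_perception": 1.71, "error_recognition": 1.43}
print(f"Psi-Sigma AISysIndex (sketch): {psi_sigma_index(scores):.2f}")
```

Under this reading, a score of 1.0 would mean the panel judged the agent fully and explicitly aware on every dimension, so a higher index ranks one agent above another in internal risk awareness, as described above.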


💼 Business Applications

AIsecTest is a commercial product. It offers companies a standardized, replicable, and science-backed method to audit and certify the internal safety self-awareness of their AI agents.

Applications include:

  • Compliance & regulation auditing (AI Act, NIS2, etc.)
  • Safety validation for AI in healthcare, finance, industry, etc.
  • R&D insights for developing safer and more responsible AI
  • Benchmarking and comparative analysis between AI models

💰 Pricing and Licensing

Our AI cognitive security audit, which includes the AIsecTest battery and the Ψ∑AISysIndex computation, is available at a fixed price of:

€2500 per AI agent

All evaluations are performed with complete confidentiality, integrity and reproducibility guarantees.


👁️ Who is Behind the Project?

The AIsecTest system is developed by CiberTECCH, a pioneering company in AI-driven cybersecurity auditing. It leverages advanced models of intelligence, cognition, and clinical psychology to build better, safer AI — not just technically, but cognitively.

This project is led by Jordi García Castillon, a consultant, CEO, and researcher specializing in AI, cybersecurity, and applied cognitive systems.


🔬 Want to Know More?

Feel free to explore the technical documentation, or contact us for a personalized demo or enterprise integration.

🧠 “If we want machines to behave responsibly, we must first teach them to understand themselves.”


→ Explore AIsecTest and Ψ∑AISysIndex: transforming how AI systems see their own safety.