Become a sponsor to Giskard
We are building the first holistic Testing & Evaluation platform for AI systems. We help AI practitioners (Data Scientists & AI Engineers) increase the efficiency of their AI development workflow, eliminate the risks of AI bias, and ensure robust, secure & compliant AI systems.
We are a team of engineers & researchers in AI Quality, Security & Compliance who have been working on this topic since 2021. While we are excited about the new opportunities AI brings, we acknowledge the risks involved and the need for appropriate testing.
We believe it is crucial to have independent third-party evaluations to control the risks of AI models. These evaluations, conducted by entities separate from the AI developers, provide important checks and balances to ensure responsible regulation of the AI ecosystem.
By sponsoring our open-source project, you can help bring AI into the age of Quality, Security & Compliance!
Meet the team
- Jean-Marie John-Mathews (jmsquare): Co-founder & co-CEO of Giskard | Ph.D. in AI Ethics, Ex-Thales data scientist
- Alex Combessie (alexcombessie): Co-founder & co-CEO of Giskard | Ex-Dataiku AI engineer & data scientist
- Matteo (mattbit): CTO @ Giskard
- Inoki (Inokinoki): Software Engineer @ Giskard
- Blanca Rivera Campos (BlancaRiveraCampos): Community & Growth Manager @ Giskard
- Pierre Le Jeune (pierlj): ML Research @ Giskard
- Kevin Messiaen (kevinmessiaen): Software Engineer @ Giskard
- Henrique Chaves (henchaves): Developing data products
Featured work
- Giskard-AI/giskard: Open-Source Evaluation & Testing for AI & LLM systems (Python, 4,166 stars)
- Giskard-AI/awesome-ai-safety: A curated list of papers & technical articles on AI Quality & Safety
- Giskard-AI/giskard-vision: Open-Source Evaluation & Testing for Computer Vision AI systems (Python, 24 stars)