AI Assurance Institute
The AI Assurance Institute is an organization focused on providing independent, rigorous, and internationally harmonized assessments of artificial intelligence systems. Its goal is to help organizations demonstrate that their AI systems are safe, ethical, transparent, and compliant with regulations and frameworks such as the EU AI Act, the NIST AI RMF, and ISO/IEC 42001.
prEN 18282:2025
Cybersecurity for Artificial Intelligence Systems
Status: draft
A draft European standard defining requirements for risk management systems (RMS) in organizations that develop or deploy artificial intelligence systems. The standard supports compliance with the EU AI Act by specifying processes for identifying, assessing, treating, and monitoring risks related to security, ethics, and the protection of the fundamental rights of AI users. Its goal is to ensure secure, transparent, and responsible AI systems throughout the product lifecycle.
Source: https://aiassurance.institute/pren-18282-clauses.html
prEN 18286
Quality Management System for AI Act regulatory compliance
A draft European standard defining requirements for quality management systems (QMS) in organizations developing or deploying artificial intelligence systems. The standard supports compliance with the EU AI Act by ensuring that AI development, testing, and deployment processes are controlled, repeatable, and regulatorily compliant, minimizing risks related to safety, quality, and the protection of user rights. Its goal is to enable organizations to deliver reliable, safe, and responsible AI systems.
Source: https://www.dinmedia.de/en/draft-standard/din-en-18286/396620255