Introduction

This section highlights key organizations and initiatives working to advance measurement, safety, risk assessment, and security in artificial intelligence (AI). Each plays a distinct role in shaping how AI systems are developed, evaluated, and governed, ranging from open benchmarking and digital threat research to government-led programs that assess real-world impacts and advanced AI risks.

Organizations

MLCommons

A global nonprofit AI engineering consortium that accelerates machine learning innovation by developing open benchmarks, datasets, and tools for measuring the performance, reliability, and safety of AI systems.

Source: https://mlcommons.org/

UL Digital Safety Research Institute

A research institute focused on understanding and mitigating risks across the digital and AI ecosystem, including cybersecurity threats, misinformation, and privacy harms.

Source: https://ul.org/institutes-offices/digital-safety/

NIST ARIA (Assessing Risks and Impacts of AI)

A U.S. government program at the National Institute of Standards and Technology that develops methods to evaluate the real-world risks and societal impacts of AI systems.

Source: https://ai-challenges.nist.gov/aria

AI Security Institute (AISI)

A UK government-backed research institute that conducts technical research on and evaluations of advanced AI risks and security, informing policymakers and supporting the safe development of AI.

Source: https://www.aisi.gov.uk/