Focus Area
AI Assessment & Oversight
Given recent advances in AI capabilities and the growing use of these systems, policymakers around the world are turning to governance measures to ensure AI products are developed and deployed responsibly. In the European Union, the AI Act aims to set binding standards for assessing and mitigating risks in AI systems, anchoring governance measures in fundamental rights. Yet turning these political goals into regulatory reality raises many open questions and implementation challenges. Success depends, among other things, on whether regulators themselves can build the technical infrastructure and expertise needed to establish real oversight and engage with companies on an equal footing.
To support policymakers and regulators in implementing the EU’s AI Act, our team focuses on the accountability infrastructure needed to evaluate, audit, and research AI systems. We explore frameworks, tools, and practices that enable independent, meaningful assessments, and we consider questions such as:
- What forms of technical access, information sharing, and supportive policies do external auditors need to thoroughly scrutinize AI systems?
- How can we foster a robust, independent third-party evaluation sector that helps regulators and the public gain insight into the capabilities and risks of AI?
- As the science of AI evaluation continues to evolve, what institutional arrangements, standards, and protocols will help oversight bodies remain effective and up to date?
By examining these and other critical issues, our team supports the implementation of the AI Act and similar initiatives. We aim to empower policymakers and regulators to engage with companies on an equal footing, establish credible oversight measures, and ultimately build confidence in AI's potential. In doing so, we help ensure that innovation aligns with democratic values and the broader public interest.