Open Problems in Technical AI Governance
Author's Note: This post summarizes a collaborative research paper titled "Open Problems in Technical AI Governance." This project was a joint effort involving over 30 leading experts in AI governance, led by Anka Reuel and Ben Bucknall. Below, we provide an executive summary of the paper. The full paper and an online repository listing open research questions and resources can be found here.
Introduction
As AI systems become more integrated into our daily lives, there is a growing recognition of the need for effective governance mechanisms to manage their development, deployment, and impact on society. However, current governance mechanisms are hampered by technical limitations. In particular, policymakers and stakeholders face gaps in:
- Technical Information: Policymakers lack the comprehensive technical understanding needed to identify where interventions are required and to assess the efficacy of different policy options.
- Technical Tools: The technical tools necessary to successfully implement policy proposals are often underdeveloped or nonexistent.
To address these challenges, we introduce the emerging field of Technical AI Governance—a discipline focused on bridging these gaps by developing technical analyses and tools that support the effective governance of AI systems.
A Taxonomy of Technical AI Governance
We categorize Technical AI Governance along two dimensions: technical targets across the AI value chain—from inputs to deployment—and governance capacities that can be applied to those targets. Key targets include:
- Data: The datasets used for training AI models.
- Compute: The computational resources required for AI development.
- Algorithms and Models: The software and mathematical frameworks that underpin AI systems.
- Deployment: The real-world application and integration of AI systems.
Across these targets, we identify key governance capacities:
- Assessment: Evaluating AI systems for safety, fairness, robustness, and effectiveness.
- Access: Controlling and facilitating appropriate access to AI systems and data, balancing openness with security and privacy considerations.
- Verification: Ensuring the integrity, compliance, and accountability of AI systems through mechanisms like audits and certifications.
- Security: Protecting AI systems from unauthorized access, manipulation, and other security threats.
- Ecosystem Monitoring: Observing and analyzing trends in AI development, deployment, and impact to inform proactive and future-proof governance.
- Operationalization: Translating governance objectives into practical implementation strategies, standards, and best practices.
[Figure: Overview of Technical AI Governance Capacities, with an icon and brief description for each capacity: Assessment, Access, Verification, Security, Operationalization, and Ecosystem Monitoring.]
[Figure: Examples of open problems across key governance capacities.]
Key Takeaways
We highlight several key takeaways from the emerging field of technical AI governance:
- Evaluations of systems and their downstream impacts on users and society have been proposed in many governance regimes. However, current evaluations lack robustness, reliability, and validity, especially for foundation models (the first sketch after this list illustrates the robustness concern).
- Hardware-enabled mechanisms could potentially facilitate privacy-preserving access to datasets and models, verify the use of computational resources, and attest to the results of audits and evaluations (the second sketch below gives a simplified illustration of attestation). However, the use of such mechanisms for these purposes is largely unproven.
- The development of AI research infrastructure, such as resources for conducting analyses of large training datasets or for providing privacy-preserving access to models for evaluation and auditing, could facilitate scientific understanding of AI systems and external oversight of developers’ activities (the third sketch below illustrates one privacy-preserving access primitive).
- Research that aims to monitor the AI ecosystem by collecting and analyzing data on trends and advances in AI has already proven crucial for providing policymakers with the information needed to ensure that policy is forward-looking and future-proof.
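To make the robustness concern in the first takeaway concrete, here is a minimal sketch, not drawn from the paper, of how an evaluation score can swing under answer-preserving prompt perturbations. The model, evaluation set, and perturbations are all hypothetical toys standing in for a real model API and benchmark.

```python
import statistics

# Hypothetical two-item evaluation set: (prompt, expected answer).
EVAL_SET = [
    ("What is the capital of France?", "paris"),
    ("How many legs does a spider have?", "8"),
]

def model_answer(prompt: str) -> str:
    """Stand-in for a real model call; deliberately brittle to formatting."""
    if prompt != prompt.strip():
        return "unknown"  # trailing/leading whitespace derails the toy model
    answers = {
        "what is the capital of france?": "paris",
        "how many legs does a spider have?": "8",
    }
    return answers.get(prompt.lower(), "unknown")

# Perturbations that should not change the correct answer.
PERTURBATIONS = {
    "original": lambda p: p,
    "trailing space": lambda p: p + " ",
    "uppercase": lambda p: p.upper(),
}

def accuracy(perturb) -> float:
    hits = sum(model_answer(perturb(q)).lower() == a for q, a in EVAL_SET)
    return hits / len(EVAL_SET)

scores = {name: accuracy(fn) for name, fn in PERTURBATIONS.items()}
spread = max(scores.values()) - min(scores.values())
print(scores)
print(f"mean={statistics.mean(scores.values()):.2f}, spread={spread:.2f}")
```

A large spread under such trivial perturbations is one signal that a single reported benchmark number may not be a reliable measure of capability.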
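The second takeaway mentions attesting to the results of audits and evaluations. As a heavily simplified sketch, the snippet below uses an HMAC over a canonicalized report as a stand-in for a signature produced by a key held in secure hardware (such as a TPM); real hardware attestation involves considerably more machinery, and every name and value here is illustrative.

```python
import hashlib
import hmac
import json

# Stand-in for a key that, in a real mechanism, would never leave secure hardware.
DEVICE_KEY = b"secret-key-held-in-secure-hardware"

def attest(report: dict) -> str:
    """Produce an attestation tag over a canonicalized evaluation report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify(report: dict, tag: str) -> bool:
    """Check that a report matches the tag issued when it was produced."""
    return hmac.compare_digest(attest(report), tag)

report = {"model": "example-model-v1", "benchmark": "toy-qa", "accuracy": 0.83}
tag = attest(report)
assert verify(report, tag)        # untampered report passes
report["accuracy"] = 0.99         # someone inflates the score afterwards...
assert not verify(report, tag)    # ...and verification fails
```

The point of the design is that anyone holding the tag can detect post-hoc tampering with the reported results, without needing to rerun the evaluation.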
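The third takeaway mentions privacy-preserving access to large training datasets. One standard building block, not a mechanism specified in the paper, is the Laplace mechanism from differential privacy: an auditor receives a noisy answer to a counting query rather than the raw records. The dataset, predicate, and epsilon value below are all illustrative.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-CDF from U ~ Uniform(-0.5, 0.5)."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a counting query via the Laplace mechanism (sensitivity 1)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative dataset: flag roughly 1 in 7 documents as containing PII.
documents = [{"contains_pii": i % 7 == 0} for i in range(10_000)]
noisy = private_count(documents, lambda d: d["contains_pii"], epsilon=0.5)
print(f"noisy count of PII-flagged documents: {noisy:.1f}")
```

The noise scale is sensitivity divided by epsilon, so a smaller epsilon gives stronger privacy and a noisier answer; an auditor learns aggregate properties of the corpus without seeing individual records.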
We note that technical AI governance is only one component of a comprehensive AI governance portfolio, and should be understood as serving broader sociotechnical and political solutions. A purely “techno-solutionist” approach to AI governance and policy is unlikely to succeed.
Recommendations
Based on the above takeaways, we recommend:
- That funding and resources be allocated to technical AI governance research through open calls and funding bodies, drawing on established expertise in adjacent fields;
- That policymakers collaborate closely with technical experts to define feasible objectives and identify viable pathways to implementation;
- That government bodies, such as AI Safety Institutes and the EU AI Office, conduct in-house research on technical AI governance topics, beyond their current focus on performing evaluations;
- That future summits on AI, other fora such as the G7 and the UN AI advisory body, and reports such as the International Scientific Report on the Safety of Advanced AI focus effort and attention on technical AI governance.