Position Paper at ICML: The Need for Technical Research & Expertise in Support of AI Governance
In light of recent advancements in AI capabilities and the increasingly widespread integration of AI systems into society, governments worldwide are actively seeking to mitigate the potential harms and risks associated with these technologies through regulation and other governance tools. However, there exist significant gaps between governance aspirations and the current state of the technical tooling and expertise necessary for their realisation.
Together with Anka Reuel (Stanford), Ben Bucknall (Centre for the Governance of AI), and Trond Arne Undheim (Stanford), Lisa Soder from our AI governance team surveyed policy documents published in the EU, US, and China, finding a significant divergence between the technical solutions that currently exist and those that proposed policy actions assume and require.
They identify open technical problems across the AI value chain—specifically at the data, compute, model, and deployment levels. For instance, both the EU AI Act (Article 55) and the US Executive Order 14110 (Sec. 2(a)) require developers of general-purpose AI (GPAI) systems to evaluate their systems for ‘systemic risks’ or ‘dual-use’ capabilities. However, for a wide range of risk scenarios, such evaluations simply do not exist yet. Even in areas where assessments are available, considerable uncertainty remains about how accurately they capture the concepts they target, raising concerns about how much trust to place in them.
Furthermore, the paper argues that government bodies will require greater access to relevant technical expertise for informing, operationalising, and enforcing governance and policy actions. Consider, for example, the development of technical standards for risk assessment and mitigation. Designing these standards requires expertise not only to identify potential risks but also to critically assess the effectiveness and feasibility of proposed measures. Similarly, technical expertise can support the identification of areas where policy intervention is needed by mapping the technical aspects of systems to the risks and opportunities associated with their application.
Based on these arguments, the position paper makes the following two calls to action to the AI/ML research community:
- Prioritize research topics aimed at bridging the gap between the presumed and actual technical tools available for supporting governance efforts.
- Enhance collaboration with policymakers to ensure that governance of AI is both informed and effective.