Using Market Design to Improve Red Teaming of Generative AI Models
ZEW policy brief No. 24-06 // 2024

With the final approval of the EU's Artificial Intelligence Act (AI Act), it is now clear that general-purpose AI (GPAI) models with systemic risk will need to undergo adversarial testing. This provision is a response to the emergence of "generative AI" models, currently the most notable form of GPAI model, which generate rich content such as text, images, and video. Adversarial testing involves repeatedly interacting with a model in an attempt to lead it to exhibit unwanted behaviour. However, the AI Act does not spell out how such testing of GPAI models with systemic risk is to be implemented; the legislation only refers to codes of practice and harmonised standards that are yet to be developed. In this policy brief, which is based on research funded by the Baden-Württemberg Foundation, we propose that these codes and standards should reflect that an effective adversarial testing regime requires testing by independent third parties, a well-defined goal, clear roles with proper incentive and coordination schemes for all parties involved, and standardised reporting of the results. The market design approach is helpful for developing, testing and improving the underlying rules and the institutional setup of such adversarial testing regimes. We outline the design space for red teaming, an extensive form of adversarial testing of generative AI models. This is intended to stimulate the discussion in preparation for the codes of practice, harmonised standards and potential additional provisions by governing bodies.
Rehse, Dominik, Sebastian Valet and Johannes Walter (2024), Using Market Design to Improve Red Teaming of Generative AI Models, ZEW policy brief No. 24-06, Mannheim.