CIPL publishes recommendations for a risk-based approach to regulating AI
On March 22, 2021, the Centre for Information Policy Leadership ("CIPL") at Hunton Andrews Kurth published its paper, Providing a Risk-Based Approach to the Regulation of Artificial Intelligence (the "Paper"), with the aim of informing current EU discussions on the development of rules to regulate AI.
CIPL worked with key EU experts and AI leaders to develop the Paper, translating best practices and emerging policy trends into actionable recommendations for effective AI regulation.
In the Paper, CIPL recommends a risk-based approach to regulating AI applications that includes: (1) a regulatory framework focused only on "high-risk" AI applications; (2) a risk-based organizational accountability framework that calibrates AI requirements and compliance to specific risks; and (3) intelligent, risk-based oversight.
In particular, CIPL recommends:
- Adopting a user-friendly framework for identifying high-risk AI applications, including the use of impact assessments to evaluate the likelihood, severity and magnitude of the impact of an AI use;
- Providing criteria and guardrails for determining which AI applications are high risk;
- Considering the benefits of an AI application as part of any risk assessment;
- Creating an "AI Innovation Committee" to provide additional guidance and help organizations identify high-risk AI;
- Treating any designation of an AI application as high risk in regulation or guidance as a rebuttable presumption;
- Conducting a pre-screening or triage assessment prior to a full impact assessment;
- Explicitly recognizing that no- and low-risk AI uses fall outside the scope of any AI regulation; and
- Avoiding sectoral classifications of AI as high risk.
Regarding AI systems that pose a high risk, CIPL recommends using principles- and outcomes-based rules rather than prescriptive requirements, to avoid regulations quickly becoming obsolete. CIPL proposes providing incentives and rewards for achieving the desired outcomes, including an explicit accountability obligation in any regulation, and calibrating compliance with the regulation's requirements to the results of a risk assessment (i.e., more extensive compliance measures for higher-risk systems).
CIPL believes that any regulation should enable continuous improvement and encourage organizations to identify and address risks in an agile, iterative manner throughout the life cycle of an AI application. Prior consultation with regulators, or prior compliance assessments, should be required only for high-risk AI applications. CIPL also highlights the benefits of accountability frameworks in addressing the challenges posed by the use of AI, and recommends that any legal framework encourage accountability measures and create incentives — for example, by linking accountability to external certification to enable wider use of data in AI for socially beneficial projects, and by recognizing demonstrated AI accountability as a mitigating or liability-reducing factor in the enforcement context.
CIPL also defines the essential features of an effective oversight framework. These include:
- Novel and agile regulatory oversight that builds on the existing ecosystem of sectoral and national regulators, rather than creating an additional layer of AI-specific agencies;
- Collaboration through an AI regulatory center made up of AI experts from various regulatory authorities to enable agile collaboration “on demand” and promote consistent application;
- Maintaining the competence of data protection authorities and the European Data Protection Board ("EDPB") in cases where an AI application involves the processing of personal data;
- Risk-based monitoring and enforcement, focused on high-risk AI areas and recognizing compliance as a dynamic process and journey that allows room for good-faith efforts and honest mistakes;
- Enforcement as a last resort, prioritizing engagement, collaboration, thought leadership, guidance and other proactive measures to achieve better compliance with AI rules;
- Creation of a uniform EU-level system of voluntary codes of conduct, standards and certifications to complement the risk-based approach to AI oversight; and
- Use of innovative, experimental regulatory tools such as regulatory sandboxes.