Testing the EU’s AI Act’s Effectiveness With the Open Loop Initiative

By Norberto Andrade, Director of AI Policy, and Antonella Zarra, AI Policy Program Manager

Takeaways

  • As artificial intelligence (AI) applications advance, the European Union’s proposed AI Act could serve as a key piece of legislation enabling the building and deployment of trustworthy AI.
  • More than 50 European AI companies, SMEs and startups tested key requirements of the AI Act through the Open Loop initiative to see whether the Act stays true to its objective of enabling the building and deployment of trustworthy AI.
  • Thanks to this experimental, multi-stakeholder policy prototyping approach, the Open Loop consortium came up with six recommendations for how the AI Act could be made clearer, more feasible and effective.

AI is at the core of our work, helping to improve our existing products and serving as a foundation for innovative new applications. It has recently hit an inflection point in capturing the public imagination, and people are becoming more aware of its many applications and benefits.

As AI advances, so does its regulation. The European Union is leading the way with the upcoming AI Act, which has the potential to be a scene-setting piece of legislation introducing a common regulatory and legal framework for all types of AI. Throughout its development, it is critical that innovative companies of all sizes, especially startups, have their voices heard so we can help ensure that AI regulation is clear and inclusive and fosters greater innovation and competition.

How We Tested the AI Act in Europe

In 2021, we launched Open Loop, a global initiative that brings together governments, tech businesses, academics and civil society representatives to help develop forward-looking policies through experimental approaches such as policy prototyping. Through the Open Loop program focused on the AI Act, more than 50 European AI companies, SMEs and startups tested key requirements of the upcoming rules to see if they could be made clearer, more feasible and effective. In this trial, European businesses identified a set of six recommendations that would help ensure the AI Act stays true to its objective of enabling the building and deployment of trustworthy AI. Here’s what they found:

  1. Responsibilities between actors along the AI value chain should be better defined to reduce uncertainty: Roles and responsibilities, from providers to users, should reflect the dynamic and interconnected relationships between all those involved in developing, deploying, and monitoring AI systems.
  2. More guidance on risk assessments and data quality requirements is needed: Most participants said they would perform a risk assessment even if their AI systems are not high-risk by definition, but they found it challenging to anticipate how users or third parties might use the AI systems. This is especially true for SMEs, which would benefit from more guidance.
  3. Data quality requirements should be realistic: Open Loop participants considered the requirement for “complete” and “error-free” datasets unrealistic and encouraged a “best effort” approach instead, as suggested by the European Parliament.
  4. Reporting should be made clear and simple: Participants found it unclear how to interpret and comply with the technical documentation requirements and called for clearer guidance and templates. They also warned against overly detailed rules that could create an excess of red tape.
  5. Transparency requirements should distinguish between different audiences, and human oversight of AI requires enough qualified workers: European companies want to ensure that users of their AI systems are clearly informed about how to operate them. Businesses stressed that the level of detail in instructions and explanations needed for proper human oversight varies greatly according to the target audience.
  6. Maximise the potential of regulatory sandboxes to foster and strengthen innovation: Participants considered regulatory sandboxes an important mechanism for fostering innovation and strengthening compliance, and felt they could be made more effective through greater legal certainty and a collaborative environment between regulators and companies.

These suggestions show how the AI Act can be improved to benefit society and achieve its goals. They also demonstrate how this experimental, multi-stakeholder policy prototyping approach can be applied to emerging technologies to help develop effective, evidence-based policies.

You can read the full report here.


