
CNIL Tests Tools to Audit AI Systems


With the growing use of AI systems and the increasing complexity of the legal framework governing that use, the need for appropriate methods and tools to audit AI systems is becoming more pressing, both for professionals and for regulators. The French data protection supervisory authority (the “CNIL”) recently tested tools that could help its auditors understand how an AI system functions.

Overview of the tools tested by the CNIL

The CNIL tested two different tools, IBEX and Algocate. While IBEX aims to explain how an AI system reaches its outputs, Algocate seeks to justify the decisions made by an AI system by checking each decision against specific standards. Both tools enable “black box” audits, meaning that they focus on the inputs and outputs of an AI system rather than on its internal functioning. The tools also rely on local explanatory methods, which provide an explanation for a decision relating to a particular data input into the system, rather than global explanatory methods, which attempt to explain the system’s behaviour across all possible decisions at once.
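For readers less familiar with these notions, the sketch below illustrates what a local, “black box” explanation can look like in practice: a single prediction is probed by perturbing the input and observing how the output shifts, without ever inspecting the model’s internals. This is a minimal illustration under our own assumptions (scikit-learn and NumPy available, a stand-in classifier, and a hypothetical `local_importance` helper); it is not a description of how IBEX or Algocate actually work.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in "black box" model. In a real audit, the auditor would only have
# query access to the deployed system, not its training pipeline or weights.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_importance(predict_proba, x, scale=0.1, n_samples=200, seed=0):
    """Hypothetical, naive local explanation (not the IBEX/Algocate method):
    perturb one feature of a single input at a time and measure how much the
    model's output probability shifts on average."""
    rng = np.random.default_rng(seed)
    base = predict_proba(x.reshape(1, -1))[0, 1]
    shifts = np.zeros(x.size)
    for j in range(x.size):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] += rng.normal(0.0, scale * (abs(x[j]) + 1e-9), n_samples)
        shifts[j] = np.abs(predict_proba(perturbed)[:, 1] - base).mean()
    return shifts

# Explain one particular decision (a *local* explanation), not the whole model.
x0 = X[0]
scores = local_importance(model.predict_proba, x0)
top_features = np.argsort(scores)[::-1][:5]
print("Features most influential for this single prediction:", top_features)
```

A global explanatory method would instead try to characterise the model’s behaviour over the entire input space, which is precisely the kind of generic explanation some of the CNIL’s testers said they would have preferred.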

Test and conclusions

The CNIL asked some of its agents to use these tools in a theoretical scenario and to consider the following questions:

  • Were the explanations provided by the tool helpful in understanding how the AI system functions?
  • Were those explanations understandable to the participants?
  • Would these tools facilitate the work of the CNIL’s auditors?

The CNIL’s agents noted challenges with each tool, in particular regarding real-life use and the tools’ complexity. The experiment also showed that some users would have preferred an explanation of the general functioning of the system rather than local analyses.

It therefore seems the tools will require further improvement before regulators can use them effectively. Other French public initiatives are exploring different audit models relying, for example, on global explanatory methods (e.g., the Pôle d’expertise de la régulation numérique’s study on methodologies for auditing content recommendation algorithms, available in French here).

We will continue to monitor this topic and relay any updates from the CNIL relating to auditing tools for AI systems.


