A. General Data Protection Regulation (GDPR)
Before analyzing the AI Act (AIA), it is worth tracing the roots of the RTE to the General Data Protection Regulation (GDPR). Art. 22(1) GDPR states that "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." However, the prohibition of solely automated decisions is not absolute: under Art. 22(2), the prohibition does not apply if the decision:
- is necessary for entering into, or performance of, a contract between the data subject and a data controller;
- is authorized by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
- is based on the data subject’s explicit consent.
To summarize, Art. 22 applies only when three conditions are met simultaneously: the decision is based solely on automated processing, no exceptional circumstance applies, and the decision produces legal or similarly significant effects for the data subject.
To understand the first requirement, solely automated decisions must be distinguished from decisions involving humans. Human involvement that merely rubber-stamps an algorithmic decision is not enough to escape Art. 22; the involvement must be meaningful. The human factor serves several purposes, such as limiting the power of machines, ensuring fairness, avoiding flaws, and building human-machine collaboration. However, the regulation of human intervention remains uncertain as well.
Secondly, the exceptional circumstances limit data subjects' right to reject automated processing. It is worth noting that these limitations are disputable. China, for example, imposes no such restriction: the Chinese Personal Information Protection Law allows individuals to reject automated decision-making simply because they do not understand the automated decisions.
The third element is a decision with significant impact. The Article 29 Working Party guidelines provide a list for determining which decisions have such effects: decisions that affect financial circumstances, access to health services, or access to education, and decisions that deny employment or put someone "at a serious disadvantage".
Turning to the RTE in the GDPR, the Regulation gives the data subject at least the right to obtain human intervention, to express his or her point of view, and to contest the decision. According to the Article 29 Working Party Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, data controllers should provide a means for human intervention and for contesting decisions. The European Parliamentary Research Service likewise states that these safeguards provide a link to an appeal process for opaque decisions.
While the GDPR gives data subjects broad rights against automated decision-making, some authors, such as Wachter and others, argue that the GDPR requires only an explanation of system functionality, not of specific decisions. Moreover, the right to explanation is mentioned in Recital 71 GDPR, which is not a binding part of the Regulation. The European Parliamentary Research Service interprets this as leaving individual explanations to the discretion (not the legal obligation) of controllers, to be provided where convenient. The intention is to apply the right to explanation where practically possible, so as to avoid unnecessary burdens for controllers.
B. EU Artificial Intelligence Act (AIA)
The AIA uses the terms transparency and opacity, which point toward the development of explainable AI (XAI) systems. Here, transparency is defined in terms of traceability and explainability.
The AIA sets different transparency requirements for general and high-risk AI. For general AI, there is the right to be informed about the actual use and effects of AI (Art. 52). For high-risk AI, cognitive sovereignty, human oversight, accuracy, robustness, cybersecurity, quality of datasets, technical documentation, record-keeping, and transparency are mandatory requirements (Arts. 9-15).
Art. 13 of the AIA, under the requirements section, covers technical interpretability for high-risk AI systems, i.e., systems whose decisions affect important human interests. The article states that high-risk AI systems must be designed to be transparent enough that those using them can understand and use them correctly. They must come with clear instructions for use, including information about the provider, the system's capabilities and limitations, and any potential risks. The instructions should also explain how to interpret the system's output, any predetermined changes to the system, and how to maintain it. Where relevant, they should also describe how to collect, store, and interpret data logs.
Additionally, Art. 86, under the remedies section, entitles an affected person to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and of the main elements of the decision taken.
Even though these articles seem to provide a sufficient basis for the RTE, there are also gaps and a lack of clarity. Some critics argue that the AIA focuses on technical transparency rather than meaningful explanation: "the question of how to make an AI system explainable is left to the discretion of the AI system provider". Critics also point out that the AIA is addressed to professional users: the list of information provided to deployers requires technical knowledge to assess the fairness of an AI system. This transparency is "by experts for experts".
However, the AIA has introduced innovative changes: most notably, it differentiates regulation according to the risk category of AI systems. It prescribes a certain level of transparency for each category and seeks to maintain balance in the digital environment. Unlike the GDPR, the AIA's provisions extend to non-automated decision-making and to non-high-risk applications of AI systems, and the Act is not limited to the protection of personal data.