
Understanding the right to explanation

Introduction

Today, various AI systems are used to distribute welfare benefits and state allowances, hire people, manage companies, determine disability status, and calculate administrative and even criminal penalties through automated data processing.
While AI systems handle hundreds of processes in finance, education, health care, and administration, they often operate as a 'black box', making it difficult for individuals to understand how their data is processed. Black box models, in particular, make it difficult to provide accurate and transparent information to the data subject: because these models are opaque, they can infringe on fundamental rights such as the right to a fair trial and privacy, and may lead to discrimination, a lack of accountability, and other harms. That is why many states have already started to build regulatory systems and adapt them to these changes.
This research explores the right to explanation, surveys the current regulation of automated processing in Europe and other countries, and analyses the existing situation in Azerbaijan.
Before turning to the research, the following key terms and expressions are defined to ensure a clear understanding:
Artificial Intelligence (AI) system — a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
Black box model — a system that turns inputs into useful outputs without revealing any knowledge of its internal workings;
Data subject — an individual whose personal data is collected, stored, or processed by data controllers or processors;
Source code — the set of computer instructions that have been written to create a program or piece of software;
Machine Learning (ML) — a subfield of artificial intelligence that gives computers the ability to learn without explicitly being programmed;
Input data — data provided to or directly acquired by an AI system based on which the system produces an output;
XAI (Explainable Artificial Intelligence) system — an AI system that is more transparent and interpretable, addressing the inherent "black box" nature of traditional AI models;
High-risk AI — according to Art. 6 of the AIA, an AI system is considered high-risk if it is used as a safety component of a product, or is itself a product, covered by EU legislation. In addition, Annex III of the Act lists sensitive areas where high-risk AI systems are used: biometrics, critical infrastructure, education and vocational training, employment, essential private and public services and benefits, law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes;
Deployer — a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;
Controller — the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data;
Provider — a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its name or trademark, whether for payment or free of charge.
March 17, 2025
Xoşqədəm Salmanova

WHAT IS THE RIGHT TO EXPLANATION AND WHY DOES IT MATTER?

To understand the right to explanation (hereinafter RTE) and why it matters, its meaning should be analyzed independently and in the context of similar concepts, such as interpretability and transparency.
According to some authors, interpretation is a more suitable term within a legal context, as it focuses on the logic and rationale behind an output. Interpretability refers to "the degree of human comprehensibility of a given black box model or decision." It shows how input is converted into output.
Transparency, in turn, means understanding the model at some level, whether entirely or partially. It enables the detection of bias and unfairness and supports accurate auditing.
Turning to explainability, it seeks to clarify the reasoning behind a prediction or decision. A publication of the European Data Protection Supervisor states that explainability answers the question of why an AI system made a particular decision and provides a justification. Explainability is mostly used for complex AI systems such as medical diagnosis and self-driving cars, while interpretability suits less complex systems such as credit scoring. Even though explainability and interpretability are different concepts, explainability incorporates interpretation and relates to human-computer interaction, law, and ethics. It goes one step beyond interpretability and uses more human-readable terms. Therefore, this research focuses primarily on explainability, as the law should be comprehensible to everyone, not just IT experts.
Two suggested explanation models are the subject-centric and the model-centric approach. The model-centric approach includes setup information, training metadata, performance metrics, estimated global logic, and process information. In contrast, the subject-centric model focuses on input data. The latter is more practical because it centres on the individual input or decision rather than on the system, as the sketch below illustrates.
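To make the contrast concrete, here is a minimal sketch of what the two explanation payloads might contain, assuming Python dataclasses; every field name is an illustrative assumption drawn from the elements listed above, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class ModelCentricExplanation:
    """Describes the system as a whole, independent of any one decision."""
    setup_information: str       # how the model was configured
    training_metadata: str       # e.g. data sources and training period
    performance_metrics: dict    # e.g. {"accuracy": 0.91}
    estimated_global_logic: str  # human-readable summary of overall behaviour
    process_information: str     # governance and audit details

@dataclass
class SubjectCentricExplanation:
    """Describes one decision about one data subject."""
    input_data: dict             # the subject's own inputs
    decision: str                # the outcome communicated to the subject
    main_factors: list           # the inputs that drove this decision
```

A subject receiving the second payload can check the inputs held about them and see which factors mattered, which is why the subject-centric model is the more actionable of the two.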
Both models underscore the importance of RTE, which ensures that data subjects receive sufficient information and are empowered to agree with, contest, or correct decisions or predictions. It is also crucial for building public trust in technology for the future.
Some companies are required to design this information effectively, using visualization and interactive techniques, so that data subjects notice and understand the explanations that give effect to RTE.
As we strive for understandable, actionable, and meaningful elements in automated systems, it is important to address certain technical challenges and to balance the scope of explainability against its potential shortcomings. Lawyers should be familiar with some of the current complexities:
  • It is not possible to provide proper RTE to data subjects merely by disclosing the source code of the algorithms, as the source code is unintelligible to non-experts. We need more than technical formalities.
  • The risk of automation bias: If incorrect suggestions are accepted as correct due to misleading numerical and visual explanations, it could have negative consequences.
  • The balance between the complexity of variables and clarity: "Systems with more variables will typically perform better than simpler systems."
  • Over-explaining can reduce the predictive power of signals and may lead to the strategic manipulation of the system.
  • Unsupervised systems, i.e. online machine learning (ML) in which patterns are not specified by humans but are instead discovered by the software, further complicate implementation.
Despite these challenges and uncertainties about how and when automated processes are explainable and what the moral implications are, there are evolving regulations worldwide. These will be discussed in the next section with references to the GDPR and the AI Act.

SETTING THE STAGE FOR REGULATION

A. General Data Protection Regulation (GDPR)
Before analyzing the AI Act (AIA), it should be noted that RTE's roots extend to the General Data Protection Regulation (GDPR). Art. 22 of the GDPR states that "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." However, the prohibition of fully automated decisions is not absolute, as the provision does not apply if the decision:
  • is necessary for entering into, or performance of, a contract between the data subject and a data controller;
  • is authorized by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
  • is based on the data subject’s explicit consent.
To summarize, for Art. 22 to apply, three conditions must be met simultaneously: the decision is based solely on automated processing, no exceptional circumstance applies, and the decision has serious, impactful effects on the data subject.
To understand the first requirement, solely automated decisions must be distinguished from decisions involving humans. A human merely rubber-stamping an algorithmic decision is not enough to escape Art. 22; human involvement must be meaningful. The human factor serves several purposes, such as limiting the power of machines, ensuring fairness, avoiding flaws, and building human-machine collaboration. However, the regulation of human intervention is uncertain as well.
Secondly, the exceptional circumstances limit data subjects' right to reject automated processing. It is worth pointing out that these limitations are disputable. In China, for example, there is no such restriction: the Chinese Personal Information Protection Law allows subjects to reject automated decisions simply because they do not understand them.
The third element is a decision with significant, impactful effects. The Working Party guidelines provide a list for determining which decisions have such effects: decisions that affect financial circumstances, access to health services, or access to education, or decisions that deny employment or put someone "at a serious disadvantage". The sketch below restates how the three conditions combine.
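Since the three conditions are cumulative, the applicability test can be restated schematically. This is only an illustration of the logic, not legal advice; the predicate names are invented for the example.

```python
def article_22_applies(solely_automated: bool,
                       exception_applies: bool,
                       significant_effect: bool) -> bool:
    """GDPR Art. 22 test: all three conditions must hold at once.

    exception_applies covers the contract, Union/Member State law,
    and explicit-consent carve-outs listed above.
    """
    return solely_automated and not exception_applies and significant_effect

# A credit refusal decided entirely by an algorithm, with no applicable
# exception, falls under Art. 22:
print(article_22_applies(True, False, True))   # True
# The same decision taken with the data subject's explicit consent does not:
print(article_22_applies(True, True, True))    # False
```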
Turning to RTE in the GDPR, the Regulation gives data subjects at least the right to obtain human intervention, to express their point of view, and to contest decisions. According to the Art. 29 Working Party Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, data controllers should provide a means of human intervention and of contesting decisions. The European Parliamentary Research Service likewise states that these safeguards provide a link to an appeal process for opaque decisions.
While the GDPR gives data subjects broad rights against automated decision-making, some authors, such as Wachter and others, argue that the GDPR's requirement concerns an explanation of system functionality, not of specific decisions. Moreover, the right to explanation is mentioned in Recital 71 of the GDPR, which is not a binding part of the Regulation. The European Parliamentary Research Service interprets this as a discretion (not a legal obligation) of controllers to provide individual explanations where convenient: the right to explanation is intended to apply where practically possible, to avoid unnecessary burdens on controllers.
B. EU Artificial Intelligence Act (AIA)
In the AIA, the terms transparency and opacity point us towards developing XAI (Explainable Artificial Intelligence) systems. Here, transparency is defined through traceability and explainability.
There are different transparency requirements for general and high-risk AI. For general-risk AI, there is the right to be informed about the factual use and effects of AI (Art. 52). For high-risk AI, cognitive sovereignty, human oversight, accuracy, robustness, cybersecurity, quality of datasets, technical documentation, record-keeping, and transparency are mandatory requirements (Arts. 8–15).
Art. 13 of the AIA, in its requirements section, covers technical interpretability for high-risk AI systems, i.e. systems whose decisions touch important human interests. The article states that high-risk AI systems must be designed to be transparent so that those using them can understand and use them correctly. They must come with clear instructions, including information about the provider, the system's capabilities and limitations, and any potential risks. The instructions should also explain how to interpret the system's output, any predetermined changes to the system, and how to maintain it. Where relevant, they should also describe how to collect, store, and interpret data logs.
Additionally, Art. 86 under the remedies section allows an affected individual to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decisions taken.
Even though these articles seem to provide a sufficient basis for RTE, there is also a lack of clarity and some missing points. Some critics say the AIA focuses on technical transparency instead of meaningful explanations: "the question of how to make an AI system explainable is left to the discretion of the AI system provider". Critics also point out that the AIA is addressed to professional users: judging by the list of information provided to deployers, deciding on the fairness of an AI system requires technical knowledge. This transparency is "by experts for experts".
Nevertheless, the AIA introduced innovative changes: most notably, it differentiates regulation by the risk category of the AI system, offering a corresponding level of transparency for each category and trying to maintain balance in the digital environment. Unlike the GDPR, the AIA's provisions extend to non-automated decision-making and to non-high-risk applications of AI systems, and it is not limited to the protection of personal data.

CURRENT LEGAL FRAMEWORK IN AZERBAIJAN AND SHAPING ITS AI FUTURE

The current situation in Azerbaijan shows minimal use of solely automated decision-making. However, state bodies are paving the way for fully automated processes to facilitate certain operations.
One of these bodies is the State Customs Committee, with its Automated Risk Analysis System (ARAS), which processes data in advance and, based on the subject's risk profile, determines which customs corridor is appropriate for the subject. It enables agile customs clearance, has been in use since January 2024, and is based on AI and ML algorithms.
The next example is the Automated Tax Information System (AVİS), used by the State Tax Service. Its functions include processing applications, declarations, and other requests received from taxpayers and citizens; managing the appropriate responses and notifications; registering taxpayers; automatically calculating payments; and providing access to information according to the authority of the Tax Service's main offices and local tax authorities. This system processes data automatically and carries out state duties and formal operations. Compared to ARAS, its decision-making component does not appear to be active; however, both systems should provide access to information and ensure transparency.
Further, in the banking sector, the creditworthiness of individuals is assessed in different ways. In most banks in Azerbaijan, customers' creditworthiness is calculated with a scoring system built in one of two ways: expert judgment or modelling (the automated way). The second approach uses algorithms such as logistic regression and boosting. For instance, the Azerbaijan Credit Bureau operates a "finscore" system, which uses four main features and the subject's scoring history over the latest 12 months to calculate a score. It should not be considered a fully automated system, as human experts intervene in the output. If the customer does not agree with the bank's decision, the bank discloses which elements were considered in the evaluation and which part caused the negative answer; however, the underlying mathematical calculations are mostly not disclosed.
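A logistic-regression scorecard lends itself naturally to this element-level disclosure, because each feature enters the score through a single weight. The sketch below illustrates the idea; the feature names, weights, and applicant values are all invented for the example and bear no relation to the actual finscore model, whose internals are not public.

```python
import math

# Hypothetical weights for a four-feature scorecard (illustrative only).
WEIGHTS = {
    "payment_delays_12m": -0.8,  # more late payments lower the score
    "credit_utilisation": -1.5,  # high utilisation lowers the score
    "account_age_years":   0.3,  # a longer history raises the score
    "active_loans":       -0.4,  # many open loans lower the score
}
BIAS = 2.0

def score(applicant: dict) -> float:
    """Logistic regression: estimated repayment probability in [0, 1]."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def explain(applicant: dict) -> list:
    """Subject-centric explanation: each feature's signed contribution."""
    contributions = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contributions, key=lambda fc: fc[1])  # most harmful first

applicant = {"payment_delays_12m": 2, "credit_utilisation": 0.9,
             "account_age_years": 4, "active_loans": 3}
print(f"repayment probability: {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

A model of this kind can tell the customer which elements pushed the decision negative (here, payment delays and credit utilisation) without revealing the numerical weights themselves, which matches the disclosure practice described above.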
The above examples show technology-assisted data processing that points the way towards solely automated decisions. However, there is no systematic act covering the rights of data subjects and affected third parties.
Azerbaijan ratified the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data in 2009. Under the Convention, "automatic processing" includes the following operations if carried out in whole or in part by automated means: storage of data; carrying out of logical and/or arithmetical operations on those data; and their alteration, erasure, retrieval, or dissemination. Art. 8 gives individuals safeguards such as: establishing the existence of an automated personal data file, its main purposes, and the identity and habitual residence or principal place of business of the controller of the file; obtaining, at reasonable intervals and without excessive delay or expense, confirmation of whether personal data relating to them are stored in the automated data file, as well as communication of such data in an intelligible form; obtaining rectification or erasure of data processed contrary to domestic law; and having a remedy if a request for confirmation, communication, rectification, or erasure is not complied with. However, in the modern world these safeguards are not enough to give subjects an explanation on which to decide whether the processing violates their rights and to contest the automatic processing of their data.
Further, Art. 2.4-1 of the Law on Obtaining Information states that access to information is permitted if it does not contradict the purposes of protecting the interests of the Republic of Azerbaijan in the fields of political, economic, military, financial-credit, and currency policy; safeguarding public order, health, and morality; protecting the rights and freedoms of others, as well as commercial and other economic interests; ensuring the authority and impartiality of the judiciary; and maintaining the normal course of preliminary investigations in criminal cases.
Additionally, according to the Decision of the Cabinet of Ministers of the Republic of Azerbaijan on the Approval of the "Rules for the Use of Information Systems, Information Technologies, and Their Support Tools in Customs Operations", information systems should determine the level of access to information resources used by customs bodies according to users' rights (Art. 2.1.3).
According to the Law on Personal Data (2010), data subjects have the right:

  • to demand the legal justification for the collection, processing, and transfer of their personal data in the information system, and to obtain information on the legal consequences of such collection, processing, and transfer for the data subject;
  • to know the purpose, processing period, and methods of collecting and processing their personal data in the information system, as well as the individuals authorized to access their personal data, including the scope of information systems intended for data exchange.
These rights provide basic answers to the questions of which operations are carried out on data and why. They are similar to the provisions of the GDPR, especially Art. 13, which lists the information to be provided where personal data are collected from the data subject. However, the GDPR provides wider rights; one of them is stated in paragraph 2(f): the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
Compared with the EU, Azerbaijani legislation draws no clear boundary on the extent to which individuals can obtain information from bodies that use automated systems. Further, since obtaining information does not amount to an explanation, in the best case individuals can merely be informed that their data is stored or processed in an automated way. Data subjects, and other third parties affected by automated decisions, are thus left facing uncertainty.
Some steps that could be taken on the path to reforming the legislation:

  • inserting the right to obtain meaningful explanations into the Law on Personal Data;
  • determining safeguards for data subjects according to the risk level of automated systems;
  • providing mandatory human oversight in certain critical spheres;
  • raising legal awareness regarding transparency, the right to obtain an explanation, and the rights to object and contest.

SUMMARY

As automated decision-making becomes more prevalent, legal frameworks must adapt to ensure transparency and accountability. The right to explanation plays a crucial role in safeguarding individuals' rights, as individuals need to be able to contest automated decisions with which they disagree.
While Azerbaijan increasingly integrates automated systems in state processes, its legal framework lacks explicit provisions regulating automated decisions. To align with global standards, Azerbaijan’s legislation should establish clear and unified regulations that balance innovation with fundamental rights, ensuring individuals can understand and contest technology-involved decisions when necessary.

References

  1. EDPS TechDispatch on Explainable Artificial Intelligence (2023)
  2. Interpretable AI: Building explainable machine learning systems, Ajay Thampi (2022)
  3. Lilian Edwards & Michael Veale, Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for, 16 Duke Law & Technology Review (2017)
  4. Margot E. Kaminski, The Right to Explanation, Explained, 34 Berkeley Technology Law Journal (2019)
  5. Article 29 Data Protection Working Party, Guidelines on Automated Individual Decision-making and Profiling for the Purposes of Regulation 2016/679, 17/EN. WP 251rev.01, (2018)
  6. Joshua A. Kroll, Joanna Huey et al., Accountable Algorithms, 165 University of Pennsylvania Law Review (2017)
  7. Cecilia Panigutti et al., The role of explainable AI in the context of the AI Act (2023)
  8. Hofit Wasserman Rozen, Niva Elkin-Koren, Ran Gilad-Bachrach, The Case Against Explainability, (2023)
  9. Chiara Gallese, AI Act Proposal: A new right to technical interpretability?, Information Society Law Center (2023)
  10. Andrew D Selbst & Julia Powles, Meaningful information and the right to explanation, International Data Privacy Law (2017)
  11. Eleftheria Papadimitriou, The right to explanation on the processing of personal data with the use of AI systems, International Journal of Law in Changing World (2023)
  12. Law of the Republic of Azerbaijan on Personal Data (2010)
  13. The Azerbaijan Credit Bureau, ACB | Individual | Services | Scoring
  14. The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981)
  15. Law of the Republic of Azerbaijan on Obtaining Information (2005)
  16. Decision of the Cabinet of Ministers of the Republic of Azerbaijan on the Approval of the "Rules for the Use of Information Systems, Information Technologies, and Their Support Tools in Customs Operations" (2012)