Publications

The Artificial Intelligence Act of South Korea: An Article-by-Article Commentary

Introduction

In recent weeks, international media outlets have increasingly reported that South Korea is launching a comprehensive framework for the regulation of artificial intelligence. These developments naturally attracted the attention of our Digital Law Laboratory.
We located the original text of the law and worked with the official version in Korean¹. Given the limited accessibility of the document to non-Korean readers, we prepared our own translation and decided to conduct a detailed, article-by-article legal analysis of the Act.
With due caution, it should be noted that while the law indeed addresses a wide range of issues related to artificial intelligence, including governance structures, risk management, transparency, and institutional coordination, some media narratives appear to overstate the immediacy and completeness of its regulatory impact. The Act relies heavily on framework provisions, delegated legislation, and gradual implementation mechanisms. As such, significant regulatory work still lies ahead, which is both expected and structurally embedded in the design of the law.
That said, the foundation laid by this legislation is robust and conceptually well-constructed. It reflects a deliberate attempt to balance innovation, trust, and the protection of fundamental rights within a coherent governance model.
We invite readers to explore the full analysis below, share it with colleagues, and reference our work where relevant.
December 24, 2025

Article 1
Purpose

The purpose of this Act is to establish the basic matters necessary for the sound development of artificial intelligence and the creation of a trust-based foundation for artificial intelligence, thereby protecting human dignity and the rights and interests of citizens, improving the quality of life of the people, and enhancing national competitiveness.
Commentary:
Article 1 functions as the teleological anchor of the entire statute. Although it does not impose directly enforceable obligations, it is legally significant in that it determines the interpretative direction of all subsequent provisions. By explicitly coupling the development of artificial intelligence with the protection of human dignity and citizens' rights, the legislature places AI governance within the normative framework of constitutional values rather than treating it as a purely technological or economic matter. In Korean statutory interpretation, such purpose clauses are routinely relied upon to resolve ambiguity in operative provisions and to justify regulatory intervention where the statutory text employs open-ended standards such as safety, trust, or reasonableness.
At the same time, Article 1 reveals the hybrid regulatory character of the Act. The inclusion of national competitiveness as a co-equal objective alongside rights protection indicates that the statute is not designed as a restrictive or precautionary instrument alone. Instead, it legitimizes a dual-track approach in which promotion of the AI industry and the imposition of governance constraints coexist within a single legal framework. This duality is critical for understanding later provisions, particularly those that balance industrial support measures against targeted obligations for high-impact or large-scale AI systems.

Article 2
Definitions

For the purposes of this Act, the following terms shall have the meanings set forth below:
  1. The term "artificial intelligence" means the implementation of human intellectual abilities, such as learning, inference, perception, judgment, and language understanding, by electronic methods.
  2. The term "artificial intelligence system" means a system based on artificial intelligence that, with varying degrees of autonomy and adaptability, derives outputs such as predictions, recommendations, or decisions for the achievement of given objectives, thereby influencing real or virtual environments.
  3. The term "artificial intelligence technology" means hardware or software technologies necessary for the implementation of artificial intelligence, or technologies for applying such artificial intelligence.
  4. The term "high-impact artificial intelligence" means an artificial intelligence system that may have a significant impact on, or pose a risk to, human life, physical safety, or fundamental rights, and that is used in any of the following areas:
    (a) the supply of energy;
    (b) the production and management of drinking water;
    (c) the construction and operation of systems for the provision and use of healthcare services;
    (d) the development and use of medical devices and digital medical devices;
    (e) the safe management and operation of nuclear materials and nuclear facilities;
    (f) the analysis or use of biometric information for criminal investigation, arrest, or detention, including facial images, fingerprints, iris data, palm veins, and similar identifiers;
    (g) decision-making or evaluation that has a significant impact on rights and obligations, such as employment, credit evaluation, or similar matters;
    (h) the core operation and management of transportation means, transportation infrastructure, and transportation systems;
    (i) decision-making by public authorities or other public entities that affects citizens, including eligibility verification for public services, administrative decisions, or the imposition or collection of payments;
    (j) evaluation of learners in early childhood, primary, and secondary education;
    (k) any other area designated by Presidential Decree as having a significant impact on the protection of life, physical safety, or fundamental rights.
  5. The term "generative artificial intelligence" means artificial intelligence that imitates the structure and characteristics of input data and generates outputs such as text, sound, images, video, or other content.
  6. The term "artificial intelligence industry" means an industry that develops, produces, distributes, or provides products or services related to artificial intelligence.
  7. The term "artificial intelligence business operator" means a person, corporation, organization, or State or local government that conducts business in the artificial intelligence industry, including:
    (a) a person who develops and provides artificial intelligence; or
    (b) a person who provides artificial intelligence-based products or services using artificial intelligence provided by another party.
  8. The term "user" means a person who receives artificial intelligence-based products or services.
  9. The term "person affected" means a person whose life, physical safety, or fundamental rights may be significantly affected by artificial intelligence-based products or services.
  10. The term "artificial intelligence society" means a society in which artificial intelligence creates value and drives development across various fields, including industry, economy, socio-cultural activities, and public administration.
  11. The term "artificial intelligence ethics" means ethical standards that members of society should observe across all stages of the development, provision, and use of artificial intelligence in order to build a safe and trustworthy artificial intelligence society based on respect for human dignity and the protection of citizens' rights, lives, and property.
Commentary:
Article 2 is structurally decisive for the Act because it does not merely clarify terminology but determines the scope of regulatory exposure. The legislature adopts a deliberately technology-neutral and function-oriented approach, defining artificial intelligence and AI systems in broad terms that focus on cognitive functions and real-world effects rather than on specific technical architectures. This drafting choice is legally significant: it minimizes the risk that rapid technological change will render the statute obsolete and ensures that regulatory obligations attach to systems based on their operational role and societal impact, not on formal classifications chosen by developers.
The most consequential element of Article 2 is the definition of "high-impact artificial intelligence." Unlike regimes that rely primarily on abstract capability thresholds, this Act grounds heightened regulatory attention in the context of use and the potential effect on life, safety, and fundamental rights. By enumerating specific sectors while simultaneously allowing expansion by Presidential Decree, the legislature creates a flexible but legally bounded trigger for enhanced obligations under later provisions. In practice, classification under this definition functions as a gateway to the Act's core compliance regime, including safety management, explainability, documentation, and impact assessment duties. The parallel definitions of "AI business operator," "user," and "person affected" further indicate an intention to decouple regulatory concern from contractual privity, extending legal attention beyond customers to any individual whose rights or interests may be materially affected by AI-driven outcomes.
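For operators assessing their exposure, the two-limb structure of the definition (potential significant effect on life, physical safety, or fundamental rights, plus use in an enumerated area) lends itself to an explicit internal checklist. The sketch below is purely illustrative and is not drawn from the Act or any official guidance; every identifier (UseArea, is_high_impact, and the area labels) is a hypothetical naming choice, using Python as the notation.

```python
from enum import Enum, auto

class UseArea(Enum):
    """Hypothetical labels mirroring the use areas enumerated in Article 2(4)(a)-(k)."""
    ENERGY_SUPPLY = auto()
    DRINKING_WATER = auto()
    HEALTHCARE_SERVICES = auto()
    MEDICAL_DEVICES = auto()
    NUCLEAR_SAFETY = auto()
    BIOMETRIC_CRIMINAL_JUSTICE = auto()
    RIGHTS_AFFECTING_DECISIONS = auto()    # employment, credit evaluation, etc.
    TRANSPORT_CORE_OPERATIONS = auto()
    PUBLIC_AUTHORITY_DECISIONS = auto()
    EDUCATION_LEARNER_EVALUATION = auto()
    DESIGNATED_BY_DECREE = auto()          # item (k): areas added by Presidential Decree

def is_high_impact(areas: set[UseArea], may_affect_life_safety_or_rights: bool) -> bool:
    """Both limbs of the definition must hold: potential significant impact on life,
    physical safety, or fundamental rights, AND use in at least one enumerated area."""
    return may_affect_life_safety_or_rights and bool(areas)
```

A real classification exercise will of course turn on legal interpretation of each area rather than on a boolean flag; the sketch only makes the two-limb structure of the definition explicit.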

Article 3
Basic Principles and Responsibilities

(1) Artificial intelligence technologies and the artificial intelligence industry shall be developed in a manner that enhances safety and trust and improves the quality of life of the people.
(2) A person affected by artificial intelligence shall, to the extent technically and reasonably possible, be able to receive a clear and meaningful explanation of the main criteria and principles used in deriving the final outcome of artificial intelligence.
(3) The State and local governments shall respect the creative initiative of artificial intelligence business operators and endeavor to establish a safe environment for the use of artificial intelligence.
(4) The State and local governments shall endeavor to devise measures to ensure that all citizens can sustainably adapt to the changes brought about by artificial intelligence in various areas, including social, economic, cultural, and everyday life.
Commentary:
Article 3 articulates the normative principles that govern the relationship between technological development, individual rights, and public authority under the Act. From a legal standpoint, the provision is composed almost entirely of principle-oriented obligations rather than rules of conduct. The repeated use of formulations equivalent to "shall endeavor" indicates that the legislature intentionally refrained from imposing duties of result, opting instead to guide administrative policy and regulatory discretion. As such, Article 3 does not, on its own, create enforceable claims or sanctionable duties, but it establishes a binding normative framework within which concrete obligations in later articles must be interpreted and applied.
The second paragraph of Article 3 is particularly significant, as it introduces the concept of explainability at the level of principle. While carefully limited by the qualifier "to the extent technically and reasonably possible," this provision formally recognizes that affected persons have a legitimate interest in understanding how AI-driven outcomes are produced when those outcomes materially affect them. This principle serves as the legal and conceptual foundation for subsequent, more concrete transparency obligations, most notably the notice and explanation requirements imposed in relation to high-impact artificial intelligence. In practice, Article 3(2) functions as an interpretative bridge between abstract rights protection and operational compliance mechanisms, enabling regulators and courts to assess whether specific disclosure practices are consistent with the Act’s underlying intent.

Article 4
Scope of Application

(1) This Act shall apply to acts committed outside the territory of the Republic of Korea if such acts affect the domestic market or users in the Republic of Korea.
(2) This Act shall not apply to artificial intelligence that is developed or used exclusively for national defense or national security purposes, as prescribed by Presidential Decree.
Commentary:
Article 4 defines the spatial and material scope of the Act and is critical for understanding its jurisdictional reach. Paragraph (1) establishes an explicit form of extraterritorial application based on effects within the Republic of Korea. By adopting an effects-based criterion, namely impact on the domestic market or users, the legislature aligns the Act with contemporary regulatory approaches in fields such as competition law and data protection. This provision ensures that foreign AI operators cannot evade the Act's requirements merely by locating development or operational infrastructure outside Korean territory, provided that their systems materially affect Korean users or markets.
Paragraph (2) introduces a narrowly framed exclusion for AI developed or used exclusively for national defense or national security. Importantly, this exclusion is not self-executing: it applies only to categories designated by Presidential Decree. From a legal perspective, this preserves civilian oversight and prevents overbroad invocation of national security as a blanket exemption. The structure of the provision reflects a deliberate attempt to balance sovereign security interests with the Act’s general commitment to trust and rights protection, while reserving detailed boundary-setting to subordinate legislation that can be adjusted in response to changing security and technological conditions.

Article 5
Relationship with Other Acts

(1) With respect to matters concerning artificial intelligence, artificial intelligence technologies, the artificial intelligence industry, and the artificial intelligence society, this Act shall apply unless otherwise provided for by other Acts.
(2) When enacting or amending other Acts related to the matters referred to in paragraph (1), such Acts shall be consistent with the purpose of this Act.
Commentary:
Article 5 serves as a coordination clause that situates the Act within the broader Korean legal system. Paragraph (1) establishes this statute as a general framework law, applicable by default in the absence of special provisions in sector-specific legislation. This formulation avoids automatic preemption of existing regulatory regimes, such as those governing personal data protection, medical devices, or industrial safety, while nonetheless ensuring that the Act functions as a baseline reference point for AI-related matters. In practice, this means that obligations arising under other statutes continue to apply concurrently, unless a clear legislative intent to derogate from this Act is expressed.
Paragraph (2) performs a forward-looking harmonization function. By requiring future AI-related legislation to be consistent with the purpose of this Act, the legislature seeks to prevent fragmentation and normative contradiction as AI regulation evolves. Although this provision does not create a justiciable standard in a strict sense, it carries interpretative weight in legislative drafting and administrative application. Over time, Article 5 is likely to reinforce the Act’s role as the central normative framework for AI governance in Korea, shaping both the content of subsequent statutes and the manner in which conflicts between overlapping regulatory regimes are resolved.

Article 6
Establishment of the Basic Plan for the Development of AI

(1) The Government shall establish a Basic Plan for the Development of Artificial Intelligence (hereinafter referred to as the "Basic Plan") every three years in order to systematically promote the sound development of artificial intelligence and the creation of a trust-based foundation for artificial intelligence.
(2) The Basic Plan shall include the following matters:

  • the basic direction of policies for the development of artificial intelligence technologies and the artificial intelligence industry;
  • matters concerning the establishment of a trust-based foundation for artificial intelligence, including fairness, transparency, accountability, and safety;
  • matters concerning the protection of the rights and interests of citizens in relation to artificial intelligence;
  • matters concerning the cultivation of human resources and the improvement of artificial intelligence literacy;
  • matters concerning research and development, commercialization, and industrialization of artificial intelligence;
  • matters concerning the establishment and utilization of data and computing infrastructure;
  • matters concerning international cooperation in the field of artificial intelligence;
  • and other matters necessary for the development of artificial intelligence and the creation of a trust-based foundation.
(3) The Government shall establish the Basic Plan after hearing the opinions of relevant experts and the public.
Commentary:
Article 6 introduces the central planning instrument of the Act and marks the transition from abstract principles to concrete governance architecture. Legally, this provision imposes a clear obligation on the Government to adopt a rolling, time-bound policy framework in the form of a three-year Basic Plan. Unlike the principle-based norms in Chapter 1, Article 6 creates a duty of action rather than mere policy aspiration. The Basic Plan is not optional, nor is its periodic renewal discretionary; it constitutes a mandatory planning mechanism through which the State must articulate priorities, allocate resources, and coordinate AI-related policy across ministries.
From a regulatory perspective, Article 6 is significant because it functions as the backbone of the Act's implementation. The breadth of required content, ranging from industrial promotion and infrastructure to trust, rights protection, and international cooperation, confirms that the legislature views AI governance as a cross-sectoral and long-term undertaking. While the Basic Plan itself does not directly bind private actors, it exerts substantial indirect legal and practical influence by shaping subordinate legislation, budgetary decisions, administrative guidance, and enforcement priorities. The requirement to consult experts and the public further embeds procedural legitimacy into AI policymaking, reinforcing the Act's emphasis on trust not only in AI systems, but also in the regulatory process governing them.

Article 7
National AI Committee

(1) In order to deliberate and coordinate major matters concerning national policies on artificial intelligence, a National Artificial Intelligence Committee (hereinafter referred to as the "Committee") shall be established under the President.
(2) The Committee shall deliberate on the following matters:

  • the basic direction of national policies on artificial intelligence;
  • the Basic Plan and annual implementation plans;
  • coordination of policies among central administrative agencies;
  • and other matters concerning artificial intelligence designated by Presidential Decree.
(3) The Committee shall be composed of a Chairperson and not more than a prescribed number of members, including both public officials and private-sector experts.
(4) The majority of the members of the Committee shall be private-sector experts with professional knowledge and experience in artificial intelligence and related fields.
(5) The organization, composition, and operation of the Committee, including matters concerning the appointment and term of members, shall be prescribed by Presidential Decree.
Commentary:
Article 7 establishes the institutional centerpiece of the Act’s governance framework by creating the National Artificial Intelligence Committee under the President. From a legal standpoint, this provision reflects a deliberate choice to centralize strategic AI policy coordination at the highest level of the executive branch. By situating the Committee directly under the President rather than within a single ministry, the legislature signals that artificial intelligence is to be treated as a cross-cutting national priority rather than a sectoral regulatory issue. The Committee’s mandate to deliberate and coordinate major policy matters underscores its role as a strategic, rather than purely administrative, body.
The requirement that a majority of Committee members be private-sector experts is particularly noteworthy. This composition rule embeds technical and practical expertise into the core of AI policymaking, while simultaneously limiting the dominance of bureaucratic perspectives. Legally, however, the Committee remains an advisory and deliberative organ; it does not itself exercise direct regulatory or enforcement authority. Its influence instead operates through agenda-setting, policy coordination, and the shaping of subordinate legislation and administrative action. As a result, Article 7 lays the institutional foundation through which the abstract objectives articulated in Article 1 and the planning obligations in Article 6 are translated into coherent and coordinated state action.

Article 8
Functions of the National AI Committee

The Committee shall deliberate on and coordinate the following matters:

  • the establishment and implementation of national policies on artificial intelligence;
  • the establishment, modification, and implementation of the Basic Plan and annual implementation plans;
  • the coordination of policies related to artificial intelligence among central administrative agencies;
  • matters concerning research and development, investment, and infrastructure related to artificial intelligence;
  • matters concerning the improvement of laws and regulations related to artificial intelligence;
  • matters concerning the utilization of artificial intelligence in industry and the public sector;
  • matters concerning international cooperation related to artificial intelligence;
  • and other matters necessary for the development of artificial intelligence and the creation of a trust-based foundation.
Commentary:
Article 8 specifies the substantive scope of the National Artificial Intelligence Committee’s authority and clarifies its role within the administrative structure established by the Act. Legally, this provision does not confer independent regulatory or enforcement powers on the Committee; instead, it defines a broad deliberative and coordination mandate that cuts across policy design, implementation oversight, and inter-agency alignment. The breadth of the enumerated functions reflects the legislature’s understanding that artificial intelligence policy cannot be effectively governed through fragmented ministerial action and requires a centralized forum capable of reconciling competing regulatory objectives.
From a governance perspective, Article 8 positions the Committee as the primary mechanism for horizontal integration of AI policy. Its competence extends beyond industrial promotion to encompass regulatory reform, public-sector deployment, and international cooperation, thereby reinforcing the hybrid nature of the Act. Although the Committee’s decisions are not legally binding in the manner of administrative orders, they are likely to exert significant practical influence over subordinate legislation, budgetary allocations, and enforcement priorities. In this sense, Article 8 operationalizes the strategic intent of Articles 1 and 6 by embedding policy coherence and long-term planning into the institutional fabric of AI governance.

Article 9
Subcommittees and Supporting Organizations

(1) The Committee may establish subcommittees to efficiently perform its functions.
(2) The Committee may establish or designate a supporting organization to provide administrative, technical, and professional support necessary for the operation of the Committee and its subcommittees.
(3) Matters necessary for the organization, operation, and support of the subcommittees and supporting organizations shall be prescribed by Presidential Decree.
Commentary:
Article 9 provides the operational infrastructure that enables the National Artificial Intelligence Committee to function effectively. While Article 7 establishes the Committee as a high-level deliberative body and Article 8 defines its substantive mandate, Article 9 addresses the practical reality that such a broad mandate cannot be discharged by a single plenary body alone. Legally, this provision grants discretionary authority to create subcommittees and support structures, thereby allowing the governance framework to scale and specialize in response to the technical complexity and policy breadth of artificial intelligence.
The delegation of organizational details to Presidential Decree is legally significant. It reflects a conscious legislative choice to preserve flexibility in institutional design, allowing the executive to adapt subcommittee structures, staffing, and support mechanisms as AI technologies and policy priorities evolve. At the same time, Article 9 reinforces the non-regulatory character of the Committee by situating its support functions firmly within an administrative assistance framework rather than granting autonomous decision-making powers. In practice, the effectiveness of the Committee's work, and by extension the coherence of national AI policy, will depend heavily on how robustly these subcommittees and supporting organizations are constituted under subordinate legislation.

Article 10
AI Policy Center

(1) The State may establish or designate a specialized organization as an Artificial Intelligence Policy Center in order to support the formulation and implementation of policies related to artificial intelligence.
(2) The Artificial Intelligence Policy Center shall perform the following functions:

  • research and analysis of trends in the development and use of artificial intelligence;
  • support for the establishment and implementation of the Basic Plan and annual implementation plans;
  • collection and analysis of information related to safety, trust, and ethics of artificial intelligence;
  • support for international cooperation related to artificial intelligence;
  • and other functions prescribed by Presidential Decree.
(3) Matters necessary for the establishment, designation, operation, and support of the Artificial Intelligence Policy Center shall be prescribed by Presidential Decree.
Commentary:
Article 10 introduces an institutional support mechanism designed to translate strategic policy deliberation into continuous analytical and operational capacity. Unlike the National Artificial Intelligence Committee, which functions primarily as a high-level deliberative and coordination body, the Artificial Intelligence Policy Center is conceived as a permanent expert institution responsible for research, data collection, and policy support. Legally, the provision is enabling rather than mandatory: the State "may" establish or designate such a center, thereby retaining discretion as to institutional form while clearly signaling legislative intent to professionalize and institutionalize AI policy expertise.
The allocation of detailed functions to the Policy Center underscores the legislature’s recognition that effective AI governance requires sustained technical and analytical competence beyond the capacities of traditional administrative bodies. By entrusting tasks such as trend analysis, safety and trust assessment, and international cooperation support to a specialized entity, Article 10 strengthens the evidence-based character of policymaking under the Act. At the same time, the extensive delegation to Presidential Decree reflects a preference for administrative flexibility over rigid statutory design. The legal significance of this provision thus lies less in immediate regulatory effect and more in its role in building the epistemic infrastructure upon which the Act’s substantive obligations and enforcement mechanisms will ultimately depend.

Article 11
AI Safety Institute

(1) The State may establish or designate a specialized organization as an Artificial Intelligence Safety Institute in order to assess risks related to artificial intelligence and to support the safety of artificial intelligence.
(2) The Artificial Intelligence Safety Institute shall perform the following functions:

  • research on risks and potential harms arising from artificial intelligence;
  • development of methods for risk assessment and risk management;
  • support for artificial intelligence business operators in matters related to safety;
  • cooperation with international organizations on artificial intelligence safety;
  • and other functions prescribed by Presidential Decree.
(3) Matters necessary for the establishment, designation, operation, and support of the Artificial Intelligence Safety Institute shall be prescribed by Presidential Decree.
Commentary:
Article 11 establishes the legal basis for a dedicated institutional mechanism focused specifically on artificial intelligence safety. In contrast to the Artificial Intelligence Policy Center under Article 10, which addresses policy formulation and trend analysis, the Safety Institute is oriented toward technical risk identification, assessment, and mitigation. Legally, the provision is again framed as an enabling norm rather than a mandatory directive, granting the State discretion to determine whether to create a new body or designate an existing institution. This approach reflects an awareness of the rapidly evolving nature of AI risk and the need to leverage existing technical expertise where appropriate.
From a regulatory standpoint, Article 11 plays a critical anticipatory role within the Act’s architecture. Although it does not itself impose safety obligations on private actors, it establishes the institutional capacity necessary to give substantive content to later provisions concerning high-impact artificial intelligence and large-scale AI systems. The Safety Institute is positioned to function as a technical reference point for guidelines, assessments, and best practices, both domestically and in international cooperation. As such, Article 11 lays the groundwork for a science-informed approach to AI safety governance, enabling the State to support compliance and oversight without relying exclusively on coercive regulatory instruments.

Article 12
Support for Research and Development of Artificial Intelligence

(1) The State shall promote research and development of artificial intelligence, including basic research and applied research.
(2) The State may provide financial, institutional, or other necessary support to individuals or organizations conducting research and development of artificial intelligence.
(3) In providing support under paragraphs (1) and (2), the State shall take into consideration the need to ensure the safety, trustworthiness, and ethical use of artificial intelligence.
Commentary:
Article 12 marks the point at which the Act’s industrial policy dimension becomes explicit. Unlike earlier provisions that establish governance structures and planning mechanisms, this article authorizes concrete state intervention in the form of research and development support. The combination of a mandatory policy orientation ("shall promote") with discretionary support instruments ("may provide") reflects a calibrated legislative approach: the State is obliged to prioritize AI R&D as a matter of national policy, while retaining flexibility in the choice of funding mechanisms, beneficiaries, and institutional arrangements.
At the same time, Article 12 embeds normative constraints within the promotion of innovation. By expressly requiring consideration of safety, trustworthiness, and ethical use when providing R&D support, the legislature signals that public funding is not value-neutral. This linkage is legally significant because it allows the State to condition grants, subsidies, or institutional backing on compliance with emerging safety and ethics standards, even in the absence of binding regulatory duties at the development stage. In practice, Article 12 thus functions as an early alignment mechanism, incentivizing research trajectories that are compatible with the broader trust-based framework established by the Act.

Article 13
Support for the AI Industry

(1) The State may provide support necessary for the development of the artificial intelligence industry in order to strengthen national competitiveness.
(2) Support under paragraph (1) may include support for the establishment and growth of artificial intelligence enterprises, commercialization of artificial intelligence technologies, expansion into overseas markets, formation of artificial intelligence-related industrial ecosystems, and other necessary measures.
(3) Matters necessary for the provision of support under this Article shall be prescribed by Presidential Decree.
Commentary:
Article 13 extends the Act’s promotional logic from research and development into the broader commercial and industrial sphere. Legally, this provision is framed as a discretionary empowerment of the State rather than as a mandate, reflecting the legislature’s intention to enable flexible industrial policy rather than impose rigid support schemes. By explicitly tying industry support to the objective of national competitiveness, Article 13 situates AI firmly within Korea’s strategic economic policy and provides a statutory basis for targeted subsidies, incentives, and ecosystem-building measures.
From a regulatory perspective, Article 13 is significant because it complements the Act’s trust- and safety-oriented provisions with a clear commitment to market formation and scaling. The breadth of potential support measures allows the executive to tailor interventions to different stages of industrial development, from startup formation to global market expansion. At the same time, the delegation of operational details to Presidential Decree underscores that industry support under the Act is intended to remain adaptable to economic conditions and technological change. In practice, this article reinforces the hybrid nature of the statute by embedding growth-oriented policy tools alongside governance and risk-management mechanisms.

Article 14
Development of Data and Computing Infrastructure

(1) The State shall promote the establishment and expansion of data infrastructure necessary for the development of artificial intelligence, including the collection, management, and utilization of data.
(2) The State may support the development and expansion of computing infrastructure, including high-performance computing resources, necessary for research, development, and utilization of artificial intelligence.
(3) In promoting and supporting infrastructure under paragraphs (1) and (2), the State shall take into consideration the protection of personal data, information security, and compliance with relevant laws.
Commentary:
Article 14 addresses the material preconditions for artificial intelligence development by focusing on data and computing infrastructure as objects of public policy. Legally, this provision establishes a dual obligation-discretion structure: the State is required to pursue policies for data infrastructure development, while support for computing infrastructure is framed as discretionary. This distinction reflects the legislature’s recognition that access to data is a systemic prerequisite for AI development across sectors, whereas computing capacity may be addressed through a combination of public and private investment depending on market conditions.
At the same time, Article 14 embeds regulatory safeguards directly into infrastructure promotion. By expressly requiring consideration of personal data protection, information security, and legal compliance, the provision prevents infrastructure policy from operating in isolation from existing legal regimes. This integration is legally significant because it ensures that the expansion of AI-enabling resources does not undermine established rights-based protections. In practice, Article 14 provides the statutory basis for large-scale public initiatives, such as shared datasets or national computing facilities, while simultaneously legitimizing the imposition of governance conditions on their use.

Article 15
Standardization and Certification

(1) The State shall promote the establishment and dissemination of standards related to artificial intelligence in order to ensure interoperability, safety, and reliability of artificial intelligence technologies and products.
(2) The State may introduce systems for certification, conformity assessment, or verification of artificial intelligence-related products or services, where necessary.
(3) In establishing standards and systems under paragraphs (1) and (2), the State shall take into consideration international standards and global trends.
Commentary:
Article 15 provides the statutory foundation for standard-setting and assurance mechanisms in the field of artificial intelligence. From a legal perspective, this provision bridges the gap between abstract principles of safety and trust and their operationalization through technical norms. By obligating the State to promote standardization, the legislature acknowledges that many of the risks associated with AI cannot be effectively addressed through prohibitions or ex post enforcement alone, but instead require ex ante coordination around shared technical and procedural expectations.
The discretionary authority to introduce certification or conformity assessment systems is particularly significant. Rather than mandating a comprehensive licensing regime, Article 15 allows the State to deploy assurance mechanisms selectively, where risk profiles or market conditions justify them. This approach preserves regulatory proportionality while enabling the development of trust signals that can be leveraged by both regulators and market participants. The explicit reference to international standards further underscores the Act’s outward-looking orientation, seeking to align domestic governance with global norms and thereby reduce barriers to cross-border deployment of AI technologies while maintaining domestic safeguards.

Article 16
Cultivation of Human Resources in AI

(1) The State shall establish and implement measures for the cultivation and development of human resources in the field of artificial intelligence.
(2) Measures under paragraph (1) may include support for education and training related to artificial intelligence, retraining and upskilling of the workforce, cultivation of interdisciplinary professionals, and other necessary measures.
(3) Matters necessary for the establishment and implementation of measures under this Article shall be prescribed by Presidential Decree.
Commentary:
Article 16 addresses human capital as a core component of artificial intelligence governance and development. Legally, the provision imposes a clear obligation on the State to pursue policies for cultivating AI-related human resources, reflecting a recognition that regulatory frameworks and infrastructure alone are insufficient without a skilled workforce capable of developing, deploying, and supervising AI systems. The use of "shall" in paragraph (1) indicates a duty of policy commitment rather than a mere authorization, distinguishing this provision from more discretionary industry-support clauses elsewhere in the Act.
From a regulatory and systemic perspective, Article 16 complements the Act’s technological and institutional measures by focusing on long-term capacity building. The inclusion of retraining and interdisciplinary education is particularly significant, as it signals an understanding that AI governance requires not only technical expertise but also professionals who can bridge law, ethics, policy, and engineering. By delegating implementation details to Presidential Decree, the legislature allows workforce policy to evolve in response to labor market conditions and technological change, while maintaining a firm statutory mandate to invest in human capital as an essential pillar of a trust-based AI ecosystem.

Article 17
Improvement of AI Literacy

(1) The State shall endeavor to improve the understanding and literacy of the general public with respect to artificial intelligence.
(2) Measures under paragraph (1) may include education programs, dissemination of information, and the provision of accessible explanations regarding the principles, functions, and potential risks of artificial intelligence.
Commentary:
Article 17 extends the Act’s governance framework beyond experts and institutions to the general public, recognizing artificial intelligence literacy as a prerequisite for a trust-based AI society. Legally, this provision is formulated as an obligation of effort rather than result, reflecting the practical limits of state influence over public understanding. Nonetheless, it establishes a clear policy responsibility for the State to engage in education and information dissemination, thereby grounding public-facing AI literacy initiatives in statutory authority rather than ad hoc policy choice.
From a regulatory perspective, Article 17 plays an important legitimizing role. By committing the State to improving public understanding of AI principles and risks, the legislature acknowledges that meaningful trust cannot be achieved solely through technical safeguards or institutional oversight. This provision also indirectly supports later transparency and notice obligations by helping to ensure that disclosures and explanations provided under the Act are intelligible to non-expert audiences. In this sense, Article 17 contributes to the effectiveness of the Act as a whole by addressing the societal conditions necessary for informed interaction with AI systems.

Article 18
International Cooperation

(1) The State shall promote international cooperation in the field of artificial intelligence in order to respond jointly to risks arising from artificial intelligence and to utilize its benefits.
(2) International cooperation under paragraph (1) may include participation in international organizations, conclusion of international agreements, joint research and development, and cooperation in the establishment of international norms related to artificial intelligence.
Commentary:
Article 18 situates the Act within the international dimension of artificial intelligence governance and reflects the legislature’s recognition that AI-related risks and opportunities transcend national borders. Legally, the provision imposes a duty of policy orientation on the State to actively engage in international cooperation, while leaving the specific modalities of such engagement to executive discretion. This structure is consistent with the constitutional allocation of foreign affairs powers and allows the State to adapt its international posture as global AI governance frameworks evolve.
From a regulatory standpoint, Article 18 serves both a defensive and a proactive function. On the one hand, it legitimizes participation in international efforts to address cross-border risks, such as safety, security, and ethical concerns associated with advanced AI systems. On the other hand, it provides a statutory basis for Korea to shape emerging global norms and standards in a manner that aligns with its domestic regulatory approach and industrial interests. In practice, this provision reinforces the Act’s hybrid character by linking domestic trust-based governance to international norm-setting and cooperation, thereby reducing the risk of regulatory isolation while enhancing policy coherence across jurisdictions.

Article 19
Collection and Use of Information

(1) The State may collect and analyze information related to the development, distribution, and use of artificial intelligence for the purpose of establishing and implementing policies under this Act.
(2) In collecting and using information under paragraph (1), the State shall comply with statutes concerning the protection of personal data and other relevant laws.
Commentary:
Article 19 provides the legal basis for information-gathering activities necessary to support evidence-based artificial intelligence policy. Unlike earlier provisions that impose planning or promotional obligations, this article authorizes the State to collect and analyze information across the AI lifecycle, from development to deployment. Legally, the provision is permissive rather than mandatory, reflecting the legislature’s intention to empower, but not compel, the administration to engage in data-driven policy formulation as circumstances require.
At the same time, Article 19 embeds clear legal constraints on the exercise of this authority. By explicitly requiring compliance with personal data protection statutes and other relevant laws, the provision prevents policy-driven information collection from circumventing established rights-based safeguards. This dual structure, in which authorization is coupled with legal limitation, ensures that regulatory oversight and statistical analysis of the AI ecosystem are conducted within the bounds of existing privacy and information governance frameworks. In practice, Article 19 underpins later supervisory and enforcement mechanisms by legitimizing the collection of information necessary to assess risks, monitor compliance trends, and refine regulatory responses.

Article 20
Financial Resources

(1) The State may secure and allocate the financial resources necessary to implement the policies and measures prescribed under this Act, within the limits of the national budget.
(2) The State may utilize or attract financial resources from sources other than the national budget in accordance with relevant statutes.
Commentary:
Article 20 addresses the fiscal foundations of the Act and confirms that its implementation is intended to be supported by sustained financial commitment rather than ad hoc funding. Legally, the provision is enabling rather than mandatory, reflecting the constitutional principle that budgetary authority ultimately resides in the legislature. Nonetheless, by explicitly referencing the allocation of budgetary resources for the purposes of this Act, Article 20 signals a clear legislative expectation that AI governance and promotion will be treated as a continuing public expenditure priority.
The second paragraph is particularly important in practical terms, as it authorizes the State to mobilize financial resources beyond the national budget, subject to existing legal frameworks. This allows for the use of public-private partnerships, special funds, or other financing mechanisms to support AI-related initiatives. From a regulatory perspective, Article 20 reinforces the Act’s hybrid character by pairing governance and trust-building measures with the fiscal capacity necessary to sustain industrial development, institutional infrastructure, and long-term policy implementation.

Article 21
Ethics of AI

(1) The State shall establish and disseminate ethical principles for artificial intelligence in order to ensure respect for human dignity, protection of fundamental rights, and the safe and trustworthy use of artificial intelligence.
(2) Artificial intelligence business operators shall endeavor to comply with the ethical principles for artificial intelligence in the development, provision, and use of artificial intelligence.
(3) The State may establish guidelines and recommendations necessary for the practical application of the ethical principles for artificial intelligence.
Commentary:
Article 21 introduces ethics as an explicit normative layer within the statutory framework, bridging high-level constitutional values and operational governance. From a legal standpoint, the provision adopts a soft-law architecture: while the State is obliged to establish and disseminate ethical principles, private actors are subject only to a duty of effort to comply. This design reflects a deliberate legislative judgment that ethical governance of AI should guide behavior and institutional practice without immediately crystallizing into rigid, sanction-backed obligations.
At the same time, Article 21 has tangible regulatory significance. The authorization for the State to issue guidelines and recommendations enables the gradual translation of ethical principles into concrete expectations that can influence procurement criteria, funding conditions, and supervisory assessments under later provisions. In practice, although non-compliance with ethical principles alone is unlikely to trigger enforcement action, these principles are positioned to function as interpretative benchmarks when assessing reasonableness, adequacy of safeguards, or good-faith compliance with the Act’s binding duties. As such, Article 21 embeds ethics into the legal ecosystem of AI governance in a way that is flexible yet normatively consequential.

Article 22
Formation of a Trust-Based Environment for the Use of AI

(1) The State shall implement measures necessary to form an environment in which users and persons affected can trust the use of artificial intelligence.
(2) Measures under paragraph (1) may include enhancing the transparency of artificial intelligence systems, ensuring explainability of artificial intelligence outcomes, preventing discrimination and unfair treatment, and strengthening the protection of the rights and interests of citizens.
Commentary:
Article 22 operationalizes the abstract notion of "trust" introduced earlier in the Act by assigning the State responsibility for shaping the conditions under which artificial intelligence can be relied upon by society. Legally, the provision establishes a duty of policy action rather than a set of concrete regulatory commands. The formulation "shall implement measures" imposes an obligation on the State to act, but the non-exhaustive list of possible measures preserves wide administrative discretion in selecting appropriate tools and interventions.
From a governance perspective, Article 22 serves as a connective provision linking ethical principles, transparency concepts, and rights protection into a coherent regulatory objective. Although it does not directly impose obligations on AI business operators, it functions as a normative justification for the introduction of binding duties in later articles, particularly those concerning notice, explainability, discrimination prevention, and safety management. In practice, Article 22 strengthens the legal legitimacy of state intervention aimed at mitigating social and rights-based risks of AI use, framing such intervention as necessary to sustain public trust rather than as an undue constraint on technological development.

Article 23
Duties of Artificial Intelligence Business Operators

(1) Artificial intelligence business operators shall endeavor to ensure the safety, reliability, and quality of artificial intelligence-based products and services.
(2) Artificial intelligence business operators shall endeavor to take reasonable measures to prevent harm that may be caused to users or persons affected in the course of providing artificial intelligence-based products or services.
(3) Artificial intelligence business operators shall cooperate with the State and local governments in the implementation of policies and measures under this Act.
Commentary:
Article 23 represents the first provision in the Act that directly addresses the conduct of private-sector actors in general terms. From a legal standpoint, the duties imposed here are deliberately framed as obligations of effort rather than obligations of result. The repeated use of "shall endeavor" reflects a legislative decision to avoid imposing immediate, sanction-backed compliance duties at this stage of the statute, instead establishing a baseline expectation of responsible conduct that informs the interpretation of later, more specific obligations.
Despite its non-coercive formulation, Article 23 has important normative and practical implications. It articulates a general standard of care for AI business operators, encompassing safety, reliability, quality, and harm prevention. This standard is likely to function as a contextual reference point in administrative assessments and policy guidance, particularly when determining whether an operator has acted reasonably in fulfilling more concrete duties related to high-impact artificial intelligence or large-scale systems. The cooperation obligation in paragraph (3) further integrates private actors into the governance framework, reinforcing the Act’s cooperative regulatory model in which trust and safety are pursued through a combination of state oversight and industry participation.

Article 24
Explanation and Information Disclosure

(1) Where an artificial intelligence-based product or service may have a significant impact on the rights or interests of a person, the artificial intelligence business operator shall, to the extent technically and reasonably possible, provide the following information:

  • the fact that artificial intelligence is used;
  • and a meaningful explanation of the main criteria and principles used to derive the outcome.
(2) Matters necessary for the scope, methods, and procedures of explanation and information disclosure under paragraph (1) shall be prescribed by Presidential Decree.
Commentary:
Article 24 marks the transition from principle-oriented expectations to concretized transparency obligations applicable to private actors. Although still limited by the qualifier "to the extent technically and reasonably possible," this provision imposes a legally relevant duty on AI business operators to disclose both the use of artificial intelligence and the logic underlying outcomes when those outcomes significantly affect individual rights or interests. The legal importance of this article lies in its recognition that transparency is not merely an ethical aspiration but a functional requirement tied to the protection of affected persons.
At the same time, Article 24 carefully calibrates the scope of this obligation by delegating critical details to Presidential Decree. This delegation reflects an awareness of the technical diversity of AI systems and the potential tension between explainability, trade secrets, and system performance. In practice, Article 24 is likely to operate as a bridge between the general right-to-understand principle articulated in Article 3 and the more stringent disclosure and documentation duties imposed later on high-impact artificial intelligence. Its interpretation and enforcement will therefore depend heavily on how subordinate legislation defines "significant impact," acceptable forms of explanation, and permissible limitations based on technical feasibility.
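Pending the Presidential Decree that will fix the scope, methods, and procedures of disclosure, one way to picture the two statutory elements is as a structured internal record kept by the operator. The following is a speculative sketch only; the class and field names are invented for illustration and do not correspond to any prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Article24Disclosure:
    """Hypothetical internal record of the two disclosure elements in Article 24(1):
    (i) the fact that artificial intelligence is used, and (ii) a meaningful
    explanation of the main criteria and principles used to derive the outcome."""
    service_name: str
    ai_is_used: bool                 # element (i)
    main_criteria: list[str]         # element (ii): plain-language decision criteria
    explanation: str                 # element (ii): non-expert summary of the logic
    feasibility_limits: str = ""     # the "technically and reasonably possible" qualifier
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical usage for a credit-evaluation service (credit evaluation appears
# among the rights-affecting use areas enumerated in Article 2):
disclosure = Article24Disclosure(
    service_name="credit-evaluation demo",
    ai_is_used=True,
    main_criteria=["repayment history", "income stability", "existing debt load"],
    explanation=("The score weighs repayment history most heavily; "
                 "no single factor is decisive on its own."),
    feasibility_limits=("Per-factor weights are approximations; exact model "
                        "internals are not individually interpretable."),
)
```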

Article 25
High-Impact AI

(1) Where an artificial intelligence business operator develops or uses high-impact artificial intelligence, the operator shall endeavor to take additional measures necessary to manage risks associated with such artificial intelligence.
(2) Measures under paragraph (1) may include prior risk assessment, implementation of risk mitigation measures, continuous monitoring after deployment, and response measures in the event that risks or problems are identified.
(3) The State may establish guidelines necessary for the safe management of high-impact artificial intelligence.
Commentary:
Article 25 introduces a differentiated regulatory approach by explicitly recognizing high-impact artificial intelligence as warranting heightened attention. Legally, the provision stops short of imposing strict, enforceable duties, instead formulating the operator’s responsibilities as obligations of effort. This reflects a cautious legislative strategy: the Act acknowledges the elevated risks posed by certain AI applications while deferring the imposition of hard compliance requirements to subsequent, more specific provisions and subordinate legislation.
Nonetheless, Article 25 is normatively significant because it establishes risk management as an expected baseline for high-impact AI. By enumerating concrete examples of relevant measures, such as prior risk assessment and post-deployment monitoring, the provision provides substantive guidance that informs both regulatory expectations and industry practice. The authorization for the State to issue guidelines further signals that this area is intended to evolve dynamically, allowing risk management standards to be refined as technical understanding and societal expectations develop. In practical terms, Article 25 prepares the ground for the binding safety, documentation, and oversight obligations that follow in later articles, functioning as an intermediate step between general principles and enforceable compliance regimes.
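The four example measures enumerated in paragraph (2) describe a conventional risk-management loop. Solely as an illustration of how those measures relate in time, and not as a process prescribed by the Act or by any guideline, the loop can be sketched as follows (all callables are hypothetical placeholders):

```python
from typing import Callable, Iterable

def high_impact_lifecycle(
    system: object,
    assess: Callable[[object], list],          # (1) prior risk assessment
    mitigate: Callable[[object, list], None],  # (2) risk mitigation measures
    monitor: Callable[[object], Iterable],     # (3) continuous post-deployment monitoring
    respond: Callable[[object, object], None], # (4) response when problems are identified
) -> None:
    """Illustrative ordering of the four example measures in Article 25(2).
    None of these interfaces is prescribed; they only fix the sequence."""
    risks = assess(system)             # assess before deployment
    mitigate(system, risks)            # apply mitigations to identified risks
    for incident in monitor(system):   # watch the deployed system continuously
        respond(system, incident)      # react as risks or problems surface
```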

Article 26
Risk Assessment and Risk Management

(1) The State may establish and operate systems for assessing risks associated with artificial intelligence.
(2) Artificial intelligence business operators shall cooperate with the State in the assessment of risks under paragraph (1) and provide necessary information, within the scope permitted by relevant statutes.
Commentary:
Article 26 shifts the regulatory focus from voluntary internal risk management to a more structured, state-facilitated assessment framework. Legally, the provision authorizes the State to institutionalize risk assessment mechanisms without mandating a single uniform model. This enables regulators to develop sector-specific or technology-specific assessment systems while preserving flexibility to respond to emerging risk profiles. The permissive formulation reflects a governance strategy that prioritizes adaptive oversight rather than rigid ex ante control.
The obligation imposed on AI business operators to cooperate with state-led risk assessments introduces a meaningful, albeit limited, compliance dimension. While the duty is framed in cooperative terms, it nonetheless creates a legal expectation that operators will engage with regulatory processes and share relevant information. The express reference to compliance with other statutes, particularly those governing confidentiality and data protection, indicates an attempt to balance effective oversight with the protection of legitimate commercial and personal interests. In practice, Article 26 functions as a procedural bridge between policy-oriented safety goals and the investigative and corrective powers exercised by the State under later enforcement provisions.

Article 27
Labeling and Notification

(1) The State may recommend or implement measures for labeling or notification with respect to artificial intelligence-based products or services.
(2) Labeling or notification under paragraph (1) may include information indicating the use of artificial intelligence or the characteristics of artificial intelligence applied to the relevant product or service.
(3) Matters necessary for labeling and notification under this Article shall be prescribed by Presidential Decree.
Commentary:
Article 27 introduces a transparency instrument aimed at informing users and affected persons about the involvement of artificial intelligence in products and services. Legally, the provision is formulated as an enabling norm rather than an immediate obligation, granting the State discretion to recommend or implement labeling and notification measures. This approach reflects a regulatory preference for gradual normalization of AI transparency practices before imposing binding disclosure requirements backed by sanctions.
Despite its discretionary framing, Article 27 has substantial regulatory significance. It establishes statutory legitimacy for future mandatory labeling regimes and creates a legal foundation upon which binding notice obligations can later be constructed through subordinate legislation. In practice, this provision functions as a preparatory step toward more stringent transparency requirements, particularly in relation to generative or high-impact AI systems. By anchoring labeling and notification within the statute itself, Article 27 signals that transparency is an integral component of the trust-based governance model, rather than an optional consumer-information measure.

Article 28
Protection of Personal Data and Information

(1) In the development and use of artificial intelligence, statutes concerning the protection of personal data and other relevant laws shall be complied with.
(2) Artificial intelligence business operators shall endeavor to take measures necessary to prevent leakage, misuse, or unauthorized access to personal data and other protected information in the course of developing or using artificial intelligence.
Commentary:
Article 28 clarifies the relationship between the Act and existing data protection and information security regimes. Legally, this provision does not create a new or autonomous data protection framework; instead, it reaffirms the continued applicability and primacy of specialized statutes governing personal data and protected information. This reaffirmation is significant because it forecloses any argument that compliance with the AI Act could displace or dilute obligations arising under data protection law, sectoral confidentiality rules, or cybersecurity legislation.
The second paragraph introduces an obligation of effort on AI business operators to prevent data-related harms in the AI lifecycle. While framed in non-coercive terms, this duty has concrete regulatory relevance when read in conjunction with later provisions concerning risk management, safety assurance, and enforcement. In practice, Article 28 functions as a normative baseline against which the adequacy of technical and organizational safeguards may be assessed, particularly in contexts where AI systems rely on large-scale data processing. It thus reinforces the Act’s trust-based approach by embedding data protection as a foundational element of lawful and socially acceptable AI deployment.

Article 29
Prevention of Discrimination

(1) The State shall implement measures necessary to prevent discrimination that may arise from the development or use of artificial intelligence.
(2) Artificial intelligence business operators shall endeavor to prevent unfair or discriminatory outcomes in the development and use of artificial intelligence-based products and services.
Commentary:
Article 29 explicitly incorporates the prevention of discrimination into the statutory framework for artificial intelligence governance. Legally, the provision is structured as a shared responsibility between the State and private actors, with the State bearing a duty of policy action and AI business operators subject to an obligation of effort. This reflects a legislative assessment that algorithmic discrimination is a systemic risk requiring both regulatory intervention and responsible design and deployment practices by industry participants.
From a doctrinal perspective, Article 29 does not itself define discrimination or establish standalone enforcement standards. Instead, it operates as a connective norm that links the Act to existing equality and anti-discrimination principles embedded in constitutional and statutory law. In practice, this article provides interpretative support for regulatory guidance, risk assessments, and enforcement decisions under later provisions, particularly where biased or unfair AI outcomes implicate fundamental rights. Its significance lies in embedding non-discrimination as a core element of the trust-based AI governance model, rather than treating it as a peripheral ethical concern.

Article 30
Monitoring and Improvement of AI Policies

(1) The State shall monitor the development and use of artificial intelligence and the effectiveness of policies implemented under this Act.
(2) Based on the results of monitoring under paragraph (1), the State may improve or adjust policies and measures related to artificial intelligence.
Commentary:
Article 30 establishes a feedback mechanism within the statutory framework, ensuring that artificial intelligence governance under the Act remains adaptive rather than static. Legally, the provision imposes a duty on the State to engage in continuous observation of both technological developments and policy outcomes. This monitoring obligation reflects a legislative acknowledgment that AI technologies evolve rapidly and that regulatory approaches must be capable of responding to unforeseen risks, market shifts, and social impacts.
The discretionary authority to improve or adjust policies based on monitoring results reinforces the Act’s emphasis on responsive governance. Rather than prescribing fixed regulatory solutions, Article 30 legitimizes iterative policy refinement informed by empirical evidence and practical experience. In practice, this provision supports the dynamic use of subordinate legislation, administrative guidance, and policy instruments to recalibrate the balance between innovation promotion and risk management, thereby sustaining the trust-based framework envisioned by the Act over time.

Article 31
Notice and Labeling Obligations

(1) Where an artificial intelligence business operator provides an artificial intelligence-based product or service that falls under any of the following categories, the operator shall provide notice thereof to users or persons affected, as prescribed by Presidential Decree:
  1. artificial intelligence-based products or services that may have a significant impact on human life, physical safety, or fundamental rights;
  2. generative artificial intelligence-based products or services that generate text, sound, images, video, or other content.
(2) Where generative artificial intelligence is used to generate content that is difficult for ordinary persons to distinguish from content created by humans, the artificial intelligence business operator shall clearly indicate that such content was generated by artificial intelligence, as prescribed by Presidential Decree.
(3) Matters necessary for the scope, methods, and procedures of notice and labeling under paragraphs (1) and (2) shall be prescribed by Presidential Decree.
Commentary:
Article 31 represents a decisive shift from principle-oriented governance to binding, enforceable obligations imposed directly on artificial intelligence business operators. Unlike earlier provisions framed in terms of "endeavor" or policy discretion, this article employs mandatory language and establishes clear duties of notice and labeling. Its regulatory focus is twofold: first, to ensure that individuals are informed when they are interacting with or affected by AI systems that may significantly impact their rights or safety; second, to address the specific risks associated with generative AI, particularly the potential for deception or confusion arising from AI-generated content that closely resembles human-created material.
From a legal and regulatory perspective, Article 31 is one of the Act’s most practically consequential provisions. It operationalizes transparency as a concrete compliance requirement and creates a direct interface between AI systems and end users. The delegation of detailed requirements to Presidential Decree allows the State to calibrate disclosure thresholds, formats, and exceptions in line with technological developments and social expectations. At the same time, failure to comply with Article 31 is explicitly linked, in later provisions, to administrative sanctions. As such, this article functions as a cornerstone of the trust-based framework, translating abstract concerns about transparency and informed interaction into legally enforceable duties with real compliance implications for AI operators.
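By way of illustration, the following minimal sketch shows one way an operator might implement the Article 31(2) disclosure for generated content, pairing a visible notice with machine-readable provenance metadata. The notice wording, field names, and function are our hypothetical assumptions; the Act leaves concrete formats to Presidential Decree.

    # Hypothetical sketch of an Article 31(2) labeling step. The Act leaves
    # concrete formats to Presidential Decree; the notice text and field
    # names below are illustrative assumptions, not prescribed forms.
    import json
    from datetime import datetime, timezone

    def label_generated_content(text: str, model_name: str) -> dict:
        """Wrap generated text with a human-readable notice and
        machine-readable provenance metadata."""
        notice = "[Notice] This content was generated by artificial intelligence."
        return {
            "content": f"{notice}\n\n{text}",
            "metadata": {
                "ai_generated": True,        # Article 31(2) disclosure flag
                "generator": model_name,     # hypothetical provenance field
                "generated_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    if __name__ == "__main__":
        labeled = label_generated_content("Sample output.", "example-model")
        print(json.dumps(labeled, indent=2, ensure_ascii=False))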

Article 32
Safety Assurance Obligations for Large-Scale Artificial Intelligence

(1) Where an artificial intelligence business operator develops or operates an artificial intelligence system for which the cumulative amount of computation used for training exceeds a threshold prescribed by Presidential Decree, the operator shall establish and implement measures necessary to ensure the safety of such artificial intelligence.
(2) Measures under paragraph (1) shall include the identification and assessment of risks throughout the lifecycle of the artificial intelligence system, the implementation of measures to mitigate such risks, and the establishment of a risk management system for monitoring and responding to safety-related incidents after deployment.
(3) An artificial intelligence business operator that has established and implemented measures under paragraph (1) shall submit the results thereof to the Minister of Science and ICT, as prescribed by Ministerial Ordinance.
(4) Matters necessary for the methods, procedures, and standards for safety assurance under this Article shall be prescribed by Ministerial Ordinance.
Commentary:
Article 32 introduces a capability- and scale-based safety regime into the Act by linking mandatory obligations to the computational scale used in training artificial intelligence systems. Legally, this provision is significant because it departs from the predominantly use-context-based approach applied elsewhere in the statute and instead adopts a quantitative trigger grounded in technical capacity. By tying safety obligations to cumulative training compute, the legislature implicitly targets advanced or frontier-level AI systems whose complexity and potential systemic impact justify heightened regulatory scrutiny.
From a governance perspective, Article 32 establishes one of the most concrete and enforceable compliance frameworks under the Act. Unlike earlier risk-management provisions framed as duties of effort, this article requires the establishment and implementation of specific safety assurance measures and mandates reporting to the competent ministry. The delegation of substantive standards to Ministerial Ordinance allows for technical specificity and rapid updating, but it also concentrates regulatory power within the executive. In practice, Article 32 is likely to function as a focal point for regulatory oversight of large-scale AI developers, shaping internal governance structures, documentation practices, and incident response mechanisms in ways comparable to safety regimes in other high-risk technological domains.
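To make the quantitative trigger concrete, the sketch below estimates cumulative training compute using the common engineering approximation of roughly 6 FLOPs per parameter per training token and compares it against a placeholder threshold. Both the approximation and the threshold value are our assumptions; the Act prescribes neither, and the actual figure awaits Presidential Decree.

    # Illustrative check of the Article 32 compute trigger. THRESHOLD_FLOP
    # is a placeholder pending Presidential Decree, and the 6 * N * D rule
    # of thumb is a common engineering approximation for dense-model
    # training compute, not a statutory formula.

    THRESHOLD_FLOP = 1e26  # hypothetical placeholder value

    def cumulative_training_flop(n_parameters: float, n_training_tokens: float) -> float:
        """Approximate total training compute in FLOPs (~6 * N * D)."""
        return 6.0 * n_parameters * n_training_tokens

    def article_32_applies(n_parameters: float, n_training_tokens: float) -> bool:
        """True if estimated training compute exceeds the assumed threshold,
        triggering the safety-assurance and reporting duties of Article 32."""
        return cumulative_training_flop(n_parameters, n_training_tokens) > THRESHOLD_FLOP

    # Example: a 70-billion-parameter model trained on 15 trillion tokens
    # yields roughly 6.3e24 FLOPs, below this placeholder threshold.
    print(article_32_applies(7e10, 1.5e13))  # False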

Article 33
Determination of High-Impact AI

(1) An artificial intelligence business operator shall examine whether an artificial intelligence system, or an artificial intelligence-based product or service using such system, falls under high-impact artificial intelligence.
(2) Where an artificial intelligence business operator requests a determination as to whether artificial intelligence falls under high-impact artificial intelligence, the Minister of Science and ICT shall determine whether it constitutes high-impact artificial intelligence and notify the result to the requesting operator.
(3) In determining whether artificial intelligence constitutes high-impact artificial intelligence, the Minister of Science and ICT may seek the opinion of an expert committee.
(4) The Minister of Science and ICT may prepare and distribute guidelines concerning the criteria and examples for determining high-impact artificial intelligence.
(5) Matters necessary for the procedures and methods for determination under this Article shall be prescribed by Presidential Decree.
Commentary:
Article 33 establishes a formal classification mechanism that is central to the operation of the Act’s differentiated regulatory regime. Legally, the provision imposes an initial responsibility on AI business operators to conduct self-assessment regarding high-impact status, thereby embedding regulatory awareness and due diligence into the deployment process. At the same time, it provides a procedural pathway for obtaining an authoritative determination from the competent ministry, reducing legal uncertainty and mitigating the risk of inconsistent self-classification across the market.
From a governance standpoint, Article 33 reflects a cooperative and dialogical regulatory model. The availability of ministerial determinations, supported by expert committee input and supplemented by non-binding guidelines, allows the classification of high-impact AI to evolve in line with technological and social developments. This structure balances flexibility with legal certainty: while the statutory definition in Article 2 sets the outer boundaries, Article 33 supplies the institutional process through which those boundaries are applied in practice. In effect, the provision functions as a gatekeeping mechanism, determining when the Act’s most stringent obligations are triggered and thereby shaping the compliance landscape for AI operators.
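The two-step structure of Article 33 (operator self-assessment, followed where necessary by a ministerial determination) can be pictured as a simple decision flow. The sketch below is illustrative only; the domain list and screening criteria are our assumptions, not those of the statute or any guideline.

    # Hypothetical sketch of the Article 33 classification flow: operators
    # self-assess first and may request an authoritative ministerial
    # determination where the outcome is unclear. The domain set and the
    # screening logic are illustrative assumptions.
    from enum import Enum

    class Classification(Enum):
        HIGH_IMPACT = "high-impact"
        NOT_HIGH_IMPACT = "not high-impact"
        UNCERTAIN = "uncertain - request ministerial determination"

    ILLUSTRATIVE_HIGH_IMPACT_DOMAINS = {"healthcare", "energy", "recruitment", "credit"}

    def self_assess(domain: str, affects_rights: bool) -> Classification:
        """Operator-side first-pass screening under Article 33(1)."""
        if domain in ILLUSTRATIVE_HIGH_IMPACT_DOMAINS and affects_rights:
            return Classification.HIGH_IMPACT
        if domain not in ILLUSTRATIVE_HIGH_IMPACT_DOMAINS and not affects_rights:
            return Classification.NOT_HIGH_IMPACT
        # Borderline cases: Article 33(2) allows a request for determination.
        return Classification.UNCERTAIN

    print(self_assess("recruitment", affects_rights=True).value)  # high-impact
    print(self_assess("gaming", affects_rights=True).value)       # uncertain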

Article 34
Obligations of AI Business Operators with Respect to High-Impact AI

(1) Where an artificial intelligence business operator provides a high-impact artificial intelligence system, or an artificial intelligence-based product or service using such system, the operator shall establish and implement the following measures, as prescribed by Presidential Decree:
  1. measures for the identification, assessment, and management of risks arising from high-impact artificial intelligence;
  2. measures to ensure that users and persons affected can obtain explanations regarding the operation and outcomes of high-impact artificial intelligence, including the main criteria used and an overview of data used for training;
  3. measures for the protection of users and persons affected, including human oversight and intervention mechanisms;
  4. preparation and retention of documents related to the development, provision, and operation of high-impact artificial intelligence;
  5. other measures necessary to ensure the safe and trustworthy use of high-impact artificial intelligence.
(2) Matters necessary for the detailed standards, methods, and procedures for implementing measures under paragraph (1) shall be prescribed by Presidential Decree.
Commentary:
Article 34 constitutes the core substantive compliance provision of the Act for high-impact artificial intelligence. Legally, it marks a decisive transition from obligations of effort to mandatory duties of implementation. Unlike earlier articles that emphasize principles, cooperation, or discretionary policy measures, Article 34 imposes concrete organizational, technical, and procedural requirements on AI business operators once a system is classified as high-impact. The provision establishes a multi-dimensional compliance framework encompassing risk management, transparency, human oversight, and documentation, thereby reflecting a holistic approach to AI governance.
From a regulatory perspective, Article 34 operationalizes the trust-based model articulated throughout the Act by translating abstract concerns, such as explainability, accountability, and safety, into enforceable obligations. The explicit requirement to provide explanations and an overview of training data is particularly significant, as it introduces a structured transparency expectation without mandating full disclosure of proprietary datasets or models. At the same time, the extensive delegation to Presidential Decree underscores that the practical burden imposed on operators will depend heavily on subordinate legislation. In practice, Article 34 is likely to function as the primary benchmark for regulatory scrutiny and enforcement in relation to high-impact AI, shaping internal governance systems and compliance strategies across affected sectors.
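Because Article 34(1) enumerates five discrete measure categories, compliance teams are likely to operationalize it as a checklist. The following sketch models that reading; the field names are our assumptions, since detailed standards are delegated to Presidential Decree under paragraph (2).

    # Minimal sketch of the Article 34(1) measure catalogue as a compliance
    # checklist. Field names are assumptions; detailed standards, methods,
    # and procedures are delegated to Presidential Decree.
    from dataclasses import dataclass, fields

    @dataclass
    class HighImpactComplianceRecord:
        risk_management_plan: bool     # subparagraph 1: identify/assess/manage risks
        explanation_mechanism: bool    # subparagraph 2: explanations, criteria, data overview
        human_oversight: bool          # subparagraph 3: oversight and intervention
        documentation_retained: bool   # subparagraph 4: development/operation records
        other_trust_measures: bool     # subparagraph 5: residual catch-all

        def missing_measures(self) -> list[str]:
            """Return names of unimplemented measures for remediation."""
            return [f.name for f in fields(self) if not getattr(self, f.name)]

    record = HighImpactComplianceRecord(True, True, False, True, False)
    print(record.missing_measures())  # ['human_oversight', 'other_trust_measures']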

Article 35
Impact Assessment of High-Impact AI

(1) An artificial intelligence business operator shall endeavor to conduct, in advance, an assessment of the impact of high-impact artificial intelligence on fundamental rights before providing a high-impact artificial intelligence system or an artificial intelligence-based product or service using such system.
(2) In conducting the impact assessment under paragraph (1), matters such as the purpose of use of the artificial intelligence, potential risks to fundamental rights, and measures to mitigate such risks shall be included.
(3) Where the State or a local government intends to procure or use artificial intelligence-based products or services, priority may be given to products or services for which an impact assessment under paragraph (1) has been conducted.
(4) Matters necessary for the scope, methods, and procedures of impact assessments under this Article shall be prescribed by Presidential Decree.
Commentary:
Article 35 introduces an ex ante assessment mechanism focused explicitly on the impact of high-impact artificial intelligence on fundamental rights. Legally, the obligation imposed on AI business operators is framed as a duty of effort rather than a mandatory requirement, reflecting a cautious legislative approach to rights-impact assessment in a rapidly evolving technological field. Nevertheless, by situating this assessment prior to provision or deployment, the article embeds the concept of anticipatory governance into the regulatory framework, encouraging operators to internalize rights-based considerations at the design and deployment stages rather than relying solely on ex post remedies.
The provision’s practical significance is amplified by its linkage to public procurement. By allowing State and local governments to give priority to products or services that have undergone an impact assessment, Article 35 creates a powerful indirect incentive for compliance without converting the assessment into a universally binding obligation. This procurement-based leverage aligns with the Act’s broader trust-based model, using market signals and public-sector purchasing power to diffuse governance practices. In effect, Article 35 functions as a bridge between voluntary rights due diligence and enforceable regulatory standards, positioning impact assessment as a best practice that may gradually become de facto mandatory in high-stakes contexts.
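Article 35(2) specifies minimum contents for the assessment (purpose of use, potential risks to fundamental rights, and mitigation measures), which can be captured as a simple record. The structure and completeness heuristic below are our illustrative assumptions pending the Presidential Decree contemplated by paragraph (4).

    # Sketch of a fundamental-rights impact assessment record reflecting
    # the minimum contents named in Article 35(2). Field names and the
    # completeness heuristic are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class RightsImpactAssessment:
        purpose_of_use: str
        risks_to_fundamental_rights: list[str] = field(default_factory=list)
        mitigation_measures: list[str] = field(default_factory=list)

        def is_complete(self) -> bool:
            """Crude check: a stated purpose, and at least one mitigation
            measure per identified risk."""
            return bool(self.purpose_of_use) and (
                len(self.mitigation_measures) >= len(self.risks_to_fundamental_rights)
            )

    assessment = RightsImpactAssessment(
        purpose_of_use="automated resume screening",
        risks_to_fundamental_rights=["indirect discrimination in shortlisting"],
        mitigation_measures=["periodic bias audits with human review of rejections"],
    )
    print(assessment.is_complete())  # True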

Article 36
Domestic Representative of Foreign AI Business Operators

(1) Where an artificial intelligence business operator that does not have an address or place of business in the Republic of Korea falls under criteria prescribed by Presidential Decree, such as the number of users or revenue, the operator shall designate a domestic representative in writing and report such designation to the Minister of Science and ICT.
(2) The domestic representative shall be responsible for performing the following functions on behalf of the artificial intelligence business operator:
  1. submission of materials related to compliance with obligations under this Act;
  2. requests for determination of high-impact artificial intelligence under Article 33;
  3. support for the implementation of measures related to safety and trust, including verification of the accuracy and currency of relevant documentation;
  4. other matters prescribed by Presidential Decree.
(3) The domestic representative shall have an address or place of business in the Republic of Korea.
(4) Where the domestic representative violates this Act in the course of performing duties under paragraph (2), such violation shall be deemed to have been committed by the artificial intelligence business operator.
Commentary:
Article 36 establishes an extraterritorial enforcement mechanism designed to ensure the effectiveness of the Act with respect to foreign AI business operators. Legally, the provision conditions the obligation to appoint a domestic representative on thresholds to be specified by Presidential Decree, thereby targeting operators whose activities have sufficient economic or social presence in Korea to justify regulatory oversight. This approach mirrors enforcement models used in other areas of transnational regulation, enabling domestic authorities to exercise supervisory powers without requiring foreign entities to establish a full local subsidiary.
From a regulatory perspective, Article 36 plays a critical role in closing potential enforcement gaps created by the Act’s effects-based scope of application. By attributing violations committed by the domestic representative to the foreign AI business operator itself, the provision ensures that the representative functions as a true compliance interface rather than a mere formality. In practice, this article significantly raises the compliance stakes for global AI providers serving the Korean market, as it requires them to establish internal governance structures capable of supporting ongoing interaction with regulators, documentation management, and risk-related procedures under the Act.
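The designation duty turns on decree-level thresholds such as user numbers or revenue. The sketch below renders that trigger with placeholder values; both thresholds are our assumptions and will ultimately be fixed by Presidential Decree.

    # Illustrative trigger for the Article 36 designation duty. The
    # user-count and revenue thresholds are placeholders, not enacted
    # figures.

    ASSUMED_USER_THRESHOLD = 1_000_000       # hypothetical
    ASSUMED_REVENUE_THRESHOLD_KRW = 10**10   # hypothetical (10 billion won)

    def must_designate_domestic_representative(
        has_korean_establishment: bool,
        domestic_users: int,
        domestic_revenue_krw: int,
    ) -> bool:
        """True if a foreign operator without a Korean address or place of
        business meets either assumed threshold."""
        if has_korean_establishment:
            return False  # Article 36 targets operators without domestic presence
        return (
            domestic_users >= ASSUMED_USER_THRESHOLD
            or domestic_revenue_krw >= ASSUMED_REVENUE_THRESHOLD_KRW
        )

    print(must_designate_domestic_representative(False, 2_500_000, 0))  # True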

Article 37
Expansion of Financial Resources for the Promotion of the AI Industry

The State shall endeavor to secure stable financial resources necessary for the promotion of the artificial intelligence industry and for the effective implementation of the Basic Plan and policies under this Act, and may recommend that public institutions provide support in this regard. The State shall also endeavor to encourage private investment and to ensure the efficient operation of investment resources related to artificial intelligence.
Commentary:
Article 37 reinforces the Act’s industrial policy dimension by addressing the sustainability of financial support mechanisms over the medium and long term. Legally, the provision is framed as an obligation of effort rather than a binding fiscal mandate, reflecting constitutional constraints on budgetary authority and the legislature’s recognition that funding levels must remain subject to democratic budgetary processes. Nonetheless, the article establishes a clear statutory expectation that AI policy is not to be implemented on a purely ad hoc or short-term basis, but rather supported by stable and predictable financial resources.
From a regulatory perspective, Article 37 signals continuity and institutional commitment. By authorizing the State to recommend support by public institutions and to encourage private investment, the provision embeds a mixed financing model that combines public funds with market-based capital. In practice, this article provides legal justification for coordinated funding strategies involving government agencies, public financial institutions, and private investors, thereby supporting the long-term viability of both the governance infrastructure and the industrial ecosystem envisioned by the Act.

Article 38
Surveys, Statistics, and Indicators

(1) The Minister of Science and ICT may conduct surveys and compile statistics and indicators necessary for the establishment and implementation of policies under this Act.
(2) The Minister of Science and ICT may request relevant public institutions, artificial intelligence business operators, and related organizations to submit materials necessary for surveys, statistics, and indicators under paragraph (1).
(3) Matters necessary for the scope, methods, and procedures of surveys, statistics, and indicators under this Article shall be prescribed by Presidential Decree.
Commentary:
Article 38 provides the informational backbone for evidence-based governance under the Act. Legally, it authorizes the competent ministry to move beyond ad hoc data collection and to institutionalize systematic surveys, statistical compilation, and indicator development related to artificial intelligence. This authority is framed permissively rather than mandatorily, reflecting the legislature’s intent to equip the administration with analytical tools while preserving discretion as to their deployment.
The power to request materials from public institutions and AI business operators is particularly significant. Although the provision does not itself specify sanctions for non-cooperation, it creates a legal basis for information requests that support policy design, monitoring, and future regulatory calibration. In practice, Article 38 functions as a preparatory and supporting mechanism for both promotion and oversight, enabling the State to track technological trends, market structures, and risk profiles in a structured manner while relying on subordinate legislation to define procedural safeguards and limits.

Article 39
Delegation and Entrustment of Duties

The Minister of Science and ICT may delegate part of the authority under this Act to the heads of affiliated agencies or entrust part of the duties under this Act to relevant institutions or organizations, as prescribed by Presidential Decree.
Commentary:
Article 39 performs an administrative law function by enabling the operational decentralization of authority under the Act. Legally, this provision authorizes both delegation of governmental powers and entrustment of tasks to external institutions, subject to the conditions set forth in subordinate legislation. This distinction is important: delegation involves the transfer of decision-making authority within the administrative hierarchy, whereas entrustment typically concerns the performance of technical, procedural, or supportive tasks by specialized bodies without conferring regulatory discretion.
From a governance perspective, Article 39 reflects a pragmatic recognition that the breadth and technical complexity of AI governance cannot be managed solely within a single ministry. By permitting delegation and entrustment, the Act facilitates the involvement of expert institutions, research bodies, and quasi-public organizations in implementation and oversight. At the same time, the requirement that such arrangements be prescribed by Presidential Decree preserves legal accountability and transparency, ensuring that responsibility for regulatory outcomes ultimately remains traceable to the competent public authorities.

Article 40
Fact-Finding Investigations and Corrective Orders

(1) Where the Minister of Science and ICT deems it necessary to confirm whether an artificial intelligence business operator has violated this Act or subordinate statutes, the Minister may request the submission of relevant data or conduct a fact-finding investigation.
(2) In conducting a fact-finding investigation under paragraph (1), the Minister may have public officials enter the place of business of the artificial intelligence business operator to inspect documents, facilities, equipment, or other relevant materials. In such cases, the public officials shall carry identification indicating their authority.
(3) Where the Minister of Science and ICT finds that an artificial intelligence business operator has violated this Act or subordinate statutes as a result of a fact-finding investigation, the Minister may order the operator to suspend the relevant act, correct the violation, or take other necessary measures.
(4) Fact-finding investigations under this Article shall be conducted in accordance with the Act on the Regulation of Violations of Public Order and other relevant statutes.
Commentary:
Article 40 constitutes the principal enforcement mechanism of the Act. Legally, it grants the Minister of Science and ICT investigative and corrective powers typical of administrative supervision regimes, including document requests, on-site inspections, and corrective orders. The provision is significant because it operationalizes compliance oversight without introducing a licensing or prior authorization system. Instead, the legislature adopts an ex post supervisory model in which compliance is monitored through targeted investigations triggered by suspected violations or complaints.
From an administrative law perspective, Article 40 carefully embeds procedural safeguards alongside enforcement authority. The requirement that investigations be conducted in accordance with general statutes governing administrative investigations reflects an intent to subject AI-related enforcement to established due process standards, including proportionality and legality. In practice, Article 40 provides the legal backbone for enforcing substantive obligations such as notice and labeling, safety assurance, and high-impact AI duties. Its presence underscores that, notwithstanding the Act’s emphasis on trust and cooperation, non-compliance is ultimately addressed through formal supervisory powers backed by corrective orders.

Article 41
Deemed Public Officials

With respect to the application of Articles 129 through 132 of the Criminal Act, members of the National Artificial Intelligence Committee who are not public officials and employees of institutions or organizations entrusted with duties under this Act shall be deemed public officials.
Commentary:
Article 41 performs a classic integrity-protection function within the statutory framework by extending criminal-law safeguards against corruption to individuals involved in AI governance who are not formally public officials. Legally, the provision ensures that private-sector experts serving on the National Artificial Intelligence Committee, as well as personnel of entrusted institutions, are subject to the same bribery and related offenses as government officials when performing duties under the Act. This deeming clause closes a potential accountability gap that could otherwise arise from the hybrid public-private structure of AI governance.
From a governance perspective, Article 41 is significant because it reinforces trust not only in artificial intelligence systems, but also in the institutions responsible for regulating them. By subjecting non-official participants in governance processes to criminal liability standards equivalent to those applied to civil servants, the legislature seeks to prevent regulatory capture, conflicts of interest, and undue influence. In practical terms, this provision strengthens the legitimacy of expert-driven and delegated governance mechanisms by ensuring that expanded participation does not come at the expense of legal accountability.

Article 42
Penal Provisions

A person who discloses or uses confidential information acquired in the course of performing duties as a member of the National Artificial Intelligence Committee or as an employee of an institution or organization entrusted with duties under this Act, without justifiable grounds, shall be punished by imprisonment for not more than three years or by a fine not exceeding 30 million won.
Commentary:
Article 42 introduces the sole criminal penalty provided directly by the Act and is narrowly tailored to protect the integrity of the AI governance framework. Legally, the provision does not target AI business operators or technological misuse; instead, it focuses on individuals who, by virtue of their participation in governance or delegated implementation, gain access to sensitive information. This reflects a legislative assessment that the principal criminal-law risk associated with the Act lies not in AI deployment itself, but in the potential misuse of confidential information within regulatory and advisory processes.
From a systemic perspective, Article 42 complements Article 41's deeming clause by attaching concrete criminal consequences to breaches of trust by governance actors. The provision reinforces confidence among regulated entities that commercially sensitive or security-relevant information disclosed during compliance, assessment, or consultation processes will be legally protected. In practice, this article underpins cooperative regulatory mechanisms, such as risk assessments and reporting obligations, by reducing disincentives to candid disclosure, thereby supporting the effective functioning of the Act’s supervisory regime.

Article 43
Administrative Fines

(1) A person who falls under any of the following subparagraphs shall be subject to an administrative fine not exceeding 30 million won:
  1. a person who fails to provide notice or labeling in violation of Article 31;
  2. a person who fails to designate a domestic representative in violation of Article 36;
  3. a person who fails to comply with an order issued under Article 40(3).
(2) The imposition and collection of administrative fines under paragraph (1) shall be prescribed by Presidential Decree.
Commentary:
Article 43 completes the Act’s enforcement architecture by specifying the administrative sanctions applicable to key compliance failures. Legally, this provision confirms that the Act relies primarily on an administrative enforcement model rather than criminal punishment. The selection of sanctionable violations is deliberate and limited: it targets transparency obligations, jurisdictional enforceability through domestic representation, and compliance with corrective orders. This indicates that the legislature views these obligations as essential to the operability of the regulatory system rather than as merely aspirational standards.
From a regulatory design perspective, Article 43 reflects a proportional and functional approach to enforcement. By capping administrative fines at a fixed monetary amount and delegating procedural details to Presidential Decree, the provision balances deterrence with flexibility. Importantly, the inclusion of non-compliance with corrective orders as a sanctionable offense reinforces the authority of supervisory interventions under Article 40. In practice, Article 43 ensures that the Act’s trust-based and cooperative framework is underpinned by credible enforcement mechanisms, without resorting to broad or punitive criminal sanctions against AI business operators.
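The closed list of sanctionable conduct can be rendered as a small lookup, which makes the deliberately narrow scope of Article 43(1) visible. The identifiers below are our labels; only the 30 million won cap is statutory, and per-case amounts within it will follow Presidential Decree.

    # The Article 43(1) sanction schedule as a simple lookup. The cap is
    # statutory (30 million won); per-case amounts within the cap are left
    # to Presidential Decree, so the function returns only the ceiling.

    FINE_CAP_KRW = 30_000_000

    SANCTIONABLE_VIOLATIONS = {
        "notice_or_labeling_failure": "Article 31",
        "no_domestic_representative": "Article 36",
        "noncompliance_with_corrective_order": "Article 40(3)",
    }

    def fine_ceiling(violation: str) -> int:
        """Return the statutory fine cap for a sanctionable violation, or
        raise if the conduct is outside Article 43(1)'s closed list."""
        if violation not in SANCTIONABLE_VIOLATIONS:
            raise ValueError("not a sanctionable violation under Article 43(1)")
        return FINE_CAP_KRW

    print(fine_ceiling("no_domestic_representative"))  # 30000000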