Bridging Soft and Hard Law in AI Governance


This post, authored by Marco Pasqua, is part of a series of external posts building on the 2025 BIICL-SLS Workshop, which this year focused on soft law in international law. Between December 2025 and February 2026, we will be publishing posts addressing different aspects of soft law, including its conceptualisation and role, its application in different areas of law, and its influence in particular domestic contexts. The posts in this series are by external authors, and whilst BIICL has undertaken a review, they do not necessarily reflect the views of BIICL or its team members.

The governance of artificial intelligence (AI) has emerged as one of the most pressing challenges in contemporary international law, demanding a careful balance between innovation, human rights and societal impact. The rapid development and deployment of AI technologies across sectors ranging from healthcare to finance and from education to the judiciary have outpaced the creation of binding legal frameworks capable of regulating their use comprehensively. In this context, soft law has assumed a central role, providing guidelines, principles and standards that, although non-binding, exert significant normative influence and are likely to shape future regulation. This includes international initiatives under the United Nations, OECD, UNESCO, G20, ISO/IEC standards and other global multi-stakeholder frameworks. This post explores the nature, utility and limitations of soft law in AI governance, including its interplay with emerging hard law instruments and the ongoing need for a hybrid approach that combines adaptability with enforceability.

Soft law can be defined as a set of non-binding norms, principles and guidelines intended to influence behaviour but lacking direct legal enforcement. In AI governance, soft law instruments have proven particularly useful due to their flexibility and responsiveness to technological innovation. They typically stem from international organisations, multi-stakeholder initiatives or industry-led consortia seeking to provide guidance in areas where binding law is absent or underdeveloped. Prominent examples include: the OECD AI Principles, adopted in 2019 and updated in 2024 following extensive consultations among policymakers, academics and industry leaders; UNESCO's Recommendation on the Ethics of Artificial Intelligence, adopted on 23 November 2021 within the United Nations framework, reflecting broad international cooperation to define ethical standards for AI development and deployment; and the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, developed through an international initiative to establish ethical guidelines and best practices for research and technological development, demonstrating how collaborative frameworks can guide responsible conduct even in the absence of binding legal provisions. These instruments are complemented by industry-led frameworks such as the Partnership on AI, established by major technology companies including Amazon, Microsoft, Google, Facebook and IBM, which has issued a series of best practice documents addressing responsible AI research, synthetic media and the deployment of foundation models, and by ISO/IEC 42001:2023, the first international standard for AI management systems, which sets out technical and organisational measures for AI governance. In each of these frameworks, soft law provides a mechanism for setting expectations, guiding behaviour and promoting accountability without the delays and rigidity associated with hard law. None of these instruments carries legal penalties, but all shape behaviour through consensus and shared commitments.

The normative status of soft law instruments is complex and context-dependent. Although non-binding, they carry significant weight in shaping regulatory approaches and influencing policy development at the national, regional and international levels. The OECD AI Principles, for instance, have been endorsed by more than 40 countries and cited in national AI strategies, demonstrating that voluntary frameworks can have tangible regulatory influence. As of 2025, the OECD AI Policy Observatory lists more than 900 national AI policies and initiatives across more than 80 countries aligned with the OECD AI Principles. Similarly, the G20 and the European Union have referenced OECD and UNESCO guidelines in their respective AI policies (see here and here respectively), reinforcing their authority even in the absence of formal legal obligations. Soft law's utility is particularly evident in fast-evolving technological domains where flexibility is key: unlike hard law, which requires lengthy legislative processes, soft law instruments can be updated and refined more rapidly, allowing governance frameworks to keep pace with AI innovation. Industry-led codes of conduct, reporting mechanisms and voluntary peer review processes further exemplify the adaptability of soft law while fostering accountability and transparency. Industry-led frameworks, for example, are frequently updated: the Partnership on AI regularly publishes new guidance on safety-critical AI, publication norms and synthetic media, evolving as technologies change. This mirrors the way standards bodies work: ISO/IEC 42001 was finalised in 2023 to help organisations build AI governance systems quickly. Soft law is also better able to accommodate multi-stakeholder input (governments, companies, civil society) in the drafting process, generating broader buy-in. In short, soft law functions as a "living laboratory" for AI rules, testing ideas like transparency requirements, risk assessments or algorithmic auditing before they are codified.
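To give a concrete sense of what one of these tested ideas, algorithmic auditing, can involve in practice, the short Python sketch below implements a simple disparate-impact check based on the widely cited "four-fifths rule". It is a minimal illustration using hypothetical data and function names of the author's own devising; it is not drawn from, and does not implement, any of the frameworks discussed in this post.

```python
# Minimal, hypothetical sketch of one form of algorithmic auditing:
# a disparate-impact check on a model's decisions. All names and data
# are illustrative and not taken from any framework discussed here.

def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Favourable-outcome rate (1 = favourable, 0 = not) per group."""
    return {group: sum(results) / len(results)
            for group, results in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, list[int]]) -> float:
    """Ratio of the lowest to the highest group selection rate.

    Under the "four-fifths rule" heuristic, a ratio below 0.8 is
    often treated as a signal of possible adverse impact.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative decisions (1 = favourable) for two demographic groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold: flag for human review.")
```

Even an elementary check of this kind shows why soft law frameworks so often recommend auditing: the metric is cheap to compute and easy to report, which is precisely what makes it a candidate for later codification into binding transparency or impact-assessment obligations.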

Despite these advantages, the primary limitation of soft law lies in enforcement. Compliance is typically voluntary and relies on reputational incentives, public scrutiny or internal corporate governance. For example, UNESCO's Recommendation on AI Ethics calls on member states to submit periodic reports on implementation, promoting accountability without legal sanctions and illustrating how soft law can pave the way toward enforceable rules while respecting stakeholders' autonomy. Yet, where soft law exists without any accompanying hard law, enforcement gaps can be significant, particularly in countries where AI-specific legislation is minimal or absent. In these contexts, compliance depends largely on voluntary commitment, which may be insufficient for high-risk AI applications. Firms or states may simply ignore voluntary guidelines where commercial or political pressures are strong. Consequently, soft law often serves as a "signposting" tool: it highlights best practices and reputational risks, but only hard law can mandate compliance.

Critically, soft law in AI governance is increasingly serving as a bridge toward hard law.

Many principles first floated as guidelines have been, or are being, codified into binding rules. A prime example is the European Union's AI Act (Regulation (EU) 2024/1689). The first comprehensive legal framework on AI, it builds upon and reflects many principles initially articulated in soft law, establishing a risk-based regulatory framework for high-risk AI applications. The AI Act's requirements for transparency, accountability and fundamental rights safeguards can be traced back to the Ethics Guidelines for Trustworthy AI drawn up by the EU High-Level Expert Group in 2019. In other words, ideas incubated in soft law directly informed binding regulation. Likewise, the new Council of Europe Framework Convention on Artificial Intelligence (opened for signature on 5 September 2024), together with its Explanatory Report, reflects a similar trajectory. While grounded in human rights, democracy and the rule of law, the Convention was shaped through an extensive preparatory soft law process, including the work of the Ad hoc Committee on Artificial Intelligence (CAHAI), which conducted multi-stakeholder consultations and developed preliminary guidance; the "Possible elements of a legal framework on AI" report, which offered non-binding orientations; and supporting voluntary methodologies such as HUDERIA for risk and impact assessment. In this sense, earlier soft law and pre-normative processes informed the structure and regulatory choices of the Convention, paving the way for their consolidation in a binding international treaty. The interaction between soft and hard law is therefore mutually reinforcing: soft law allows experimentation and regulatory piloting, enabling policymakers to test approaches before codifying them, while also promoting the diffusion of best practices that may later be formalised into binding rules.

The integration of national hard law instruments alongside EU regulations further illustrates this hybrid governance model. Italy's recent adoption of Law No. 132 of 23 September 2025, in force from 10 October 2025, complements the EU AI Act, demonstrating how domestic legislation can coexist with supranational frameworks. Other national initiatives, such as France's AI Regulation Strategy (launched in 2018), which responds to one of the objectives of France 2030 to make France a pioneer in artificial intelligence, and Germany's draft AI Market Surveillance and Innovation Promotion Act (2025), currently under discussion, reinforce the notion that effective AI governance benefits from multi-level legal coordination, combining soft and hard law to ensure both adaptability and enforceability. These domestic instruments also draw on and reflect principles established in international frameworks, including the OECD AI Principles, UNESCO AI Ethics Recommendation, G20/G7 AI guidelines, ISO/IEC AI standards and other UN-led frameworks.

From a more global perspective, the United States and the People's Republic of China are also developing their own approaches to AI governance, albeit through different regulatory philosophies. In the United States, there is no single, comprehensive federal law on artificial intelligence. However, a patchwork of targeted federal guidelines and executive orders is emerging, such as Executive Order 14179 (23 January 2025) promoting national AI leadership and federal guidance like the NIST AI Risk Management Framework, while states increasingly adopt their own rules, including Colorado's AI Act (2024) and California's Transparency in Frontier AI Act (SB 53) (2025). The situation has been further complicated by the 11 December 2025 Executive Order, which seeks to limit states' regulatory authority and establish a minimally burdensome national AI policy, putting the federal government and the states on a potential collision course. The coexistence of federal and state-level initiatives reflects a variety of regulatory strategies within US AI governance; it also shows that domestic AI regulation in the United States is driven by national priorities and approaches that may not always correspond to international frameworks.

In Asia, China has established a multi-layered regulatory framework encompassing data governance, algorithmic accountability, cybersecurity and ethical oversight, reflecting a more centralised and top-down model of AI control. Its measures include the Labelling of AI-Generated Synthetic Content rules and the Interim Measures for the Management of Generative AI Services (2023), regulating public-facing generative AI services; the Algorithmic Recommendation Provisions (2021) and Deep Synthesis Provisions (2022), addressing algorithmic transparency and content governance; amendments to the Cybersecurity Law that explicitly integrate AI governance and ethics (effective 1 January 2026); and comprehensive data governance under the Data Security Law and Personal Information Protection Law (2021). China's domestic AI regulations incorporate elements that are broadly consistent with certain international principles and frameworks, though they primarily reflect national priorities and a centralised governance model.

Nevertheless, many states currently rely mainly on soft law, without yet moving to binding legislation (and alignment with international standards may be partial or evolving in these contexts). For example, India still relies on the 2018 National Strategy for AI and other voluntary frameworks, and no comprehensive AI statute has been adopted. Australia has yet to enact AI-specific laws, but it endorsed 8 AI Ethics Principles (2019), designed to ensure AI is safe, secure and reliable, which are entirely voluntary; on 21 October 2025 the Guidance for AI Adoption was published, outlining 6 essential practices for safe and responsible AI governance and building upon the 10 guardrails of the Voluntary AI Safety Standard and the 8 AI Ethics Principles. Japan similarly does not have a comprehensive AI law and instead uses a combination of existing laws, non-binding guidelines and a recent Act on Promotion of Research and Development, and Utilization of AI-related Technology (2025). Where only soft law commitments exist, compliance can lag, and in high-stakes areas the lack of enforceable rules poses significant risks.

Beyond public regulation, private and industry-led self-regulation is an essential component of AI governance. Companies increasingly adopt ethical charters, technical standards and internal governance frameworks to signal responsible development and deployment (for example, Microsoft's Responsible AI Standard and internal Aether Committee for AI ethics oversight, and IBM's Responsible Technology Board / AI Ethics Board guiding the ethical design and use of its AI systems). While these measures demonstrate commitment, they also raise questions of accountability and oversight: self-regulatory instruments often lack enforcement mechanisms, leaving companies to implement principles voluntarily and potentially to prioritise reputational concerns over substantive compliance. The Partnership on AI, for instance, issues guidelines on ethical AI deployment (such as its Guidance for Safe Foundation Model Deployment, Shared Prosperity Guidelines and Publication Norms for Responsible AI) without any supervisory authority, highlighting the limits of private self-regulation when not paired with binding rules. These realities underscore the need for hybrid governance strategies that integrate private and public, soft and hard law measures to ensure comprehensive oversight of AI risks. Self-regulation can complement public rules, but ultimately governments (and international bodies) must enforce core obligations to protect, at the very least, the public interest.

Despite its clear benefits, soft law's limitations are also apparent. A few examples illustrate these limits. First, consider human rights. Soft law frameworks stress values like fairness, privacy and non-discrimination, but without binding laws these values can be ignored. Only hard law can prohibit abusive AI (e.g. unlawful surveillance, bias that violates privacy or data protection laws, mass-scale profiling) and grant remedies to victims. Second, issues of criminal liability fall squarely within the domain of hard law. Ethical charters cannot determine how to prosecute criminal acts committed by or through AI systems, including those operating autonomously; only statutes and case law can provide guidance in these cases. Third, and particularly urgent, is workplace safety. AI is increasingly used in employment (such as for task automation and monitoring), yet traditional occupational safety laws have barely kept up. For example, the International Labour Organization's (ILO) Convention No 155 (1981) on Occupational Safety and Health sets general duties for safe workplaces, but it is unclear whether these provisions can be effectively applied to AI-driven work environments. Moreover, many states have ratified the Convention only in recent years, illustrating the slow pace at which binding provisions evolve. While guidelines may recommend that AI tools should not endanger workers, without enforceable legal standards or inspections there remains a real risk of accidents or violations of workers' rights.

In conclusion, soft law has proven indispensable in shaping the governance of AI, bridging the gap between non-binding guidance and emerging hard law. It offers flexibility, fosters international cooperation and enables experimentation with regulatory models. Yet soft law remains limited in enforcement and uniform application. The hybrid approach, exemplified by the interplay between soft law instruments (e.g. the OECD AI Principles, UNESCO's Recommendation on AI Ethics), industry self-regulation and hard law instruments (such as the EU AI Act and other domestic and international frameworks), provides a pathway toward effective, principled and pragmatic AI governance. By balancing adaptability with enforceability, this approach ensures that AI innovation is ethically grounded, socially responsible and legally accountable, reflecting the broader need for comprehensive frameworks that protect fundamental rights while enabling technological advancement.

Author:

Marco Pasqua, Lawyer and co-chair of the Working Group on the Anti-SLAPP Directive Implementations, European Association of Private International Law (EAPIL).

