The EU AI Act and the Corporate Sustainability Due Diligence Directive: overlapping and conflicting requirements

The EU is regulating Artificial Intelligence (AI) and corporate sustainability with two pioneering instruments: the EU AI Act and the Corporate Sustainability Due Diligence Directive (CSDDD), both adopted on 13 June 2024. The AI Act lays down harmonised rules for the placing on the EU market and the use of AI systems, while the CSDDD creates obligations for companies regarding adverse impacts on human rights and the environment. The interaction between AI-specific and general due diligence requirements may create overlapping and potentially conflicting obligations for large companies that provide AI systems in the EU and fall within the scope of both instruments.
Both the AI Act and the CSDDD introduce risk-based obligations. The AI Act classifies AI systems into four categories (Arts 5-7 AI Act): (i) unacceptable-risk AI practices, which are prohibited (e.g. social scoring systems and manipulative AI); (ii) high-risk AI systems, which are regulated; (iii) limited-risk AI systems, which are subject to lighter transparency obligations (e.g. chatbots and deepfakes); and (iv) minimal-risk AI systems, which are unregulated. High-risk systems, classified under Article 6, are listed in Annex III, which broadly covers AI systems used in biometrics, critical infrastructure, education, employment, access to essential private and public services, law enforcement, migration, the administration of justice and democratic processes. They are considered high-risk when they 'pose a significant risk of harm to the health, safety or fundamental rights' and are subject to strict obligations (Chapter III AI Act), including risk management, data governance, technical documentation and quality management. Specifically, the risk management system shall include 'the identification and analysis of the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights' (Art 9 AI Act).
Article 5 of the CSDDD requires that companies conduct risk-based human rights and environmental due diligence (HREDD) to identify and address actual and potential adverse human rights and environmental impacts (Arts 7-16 CSDDD detail this process). Article 8 requires companies to take appropriate measures to identify and assess actual and potential adverse impacts arising from their own operations, those of their subsidiaries and those of their business partners. The European Commission's Omnibus I proposal, part of its February 2025 legislative package, would limit due diligence measures to 'direct business partners' (tier 1), unless companies have 'plausible information' suggesting adverse impacts in the operations of an indirect business partner.
Under Article 1(3), the CSDDD states that if one of its provisions conflicts with a provision of another EU legal instrument 'pursuing the same objectives and providing for more extensive or specific obligations', the latter shall prevail as lex specialis. This has practical implications: companies subject to both frameworks may argue that compliance with the AI Act - specifically its risk management and transparency provisions - satisfies their due diligence obligations under the lex generalis CSDDD. This overlap may create interpretative problems. For example, if an AI system used in employee hiring or management (designated as high-risk under Article 6 of the AI Act) negatively affects workers' rights, is compliance with the AI Act sufficient, or must companies also undertake broader HREDD under the CSDDD? And should they conduct risk assessments of indirect business partners only if they have plausible information on impacts, or should they assume that high-risk AI systems have impacts beyond tier 1?
Engagement with stakeholders is also problematic. At different stages of the due diligence process, companies are required under Article 13 of the CSDDD to carry out meaningful engagement with stakeholders (redefined in the Omnibus proposal as 'relevant' stakeholders, e.g. affected individuals). What does this mean for large AI companies? AI deployment is likely to transform the labour market, potentially affecting millions of workers in the EU. As such, large companies providing high-risk AI systems should carry out effective engagement with workers and workers' representatives as part of their due diligence process under the CSDDD. But they are not required to do so under the AI Act, which applies 'without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary' (Recital 9 AI Act). Arguably, in this case the CSDDD should prevail.
To add complexity, under Article 34 of the 2022 EU Digital Services Act (DSA), providers of very large online platforms and very large online search engines must conduct annual risk assessments covering 'any actual or foreseeable negative effects for the exercise of fundamental rights'. The language of Article 34 mirrors that of the CSDDD on human rights risk assessment, but it introduces a presumption of risk based on platform size - a simplification that remains contested.
The European Commission needs to clarify the interaction between AI-specific and general HREDD requirements, and whether the obligations in the AI Act (and in the DSA) are more extensive and specific than those in the CSDDD, or the other way around. Ultimately, HREDD must evolve to match the realities of the AI transformation.