
AI Ethics and the Rule of Law: Business and Human Rights Considerations


Earlier this week, I was at the House of Lords discussing business and human rights considerations surrounding Artificial Intelligence (AI). The event could not have been more timely, with daily news about AI developments and increasing concern, and contrasting views, on whether AI will be a force for good or a critical risk for humanity. AI has the potential to improve healthcare and disease diagnosis, but it can also cause patient harm and worsen health inequalities. New AI tools can help fight deforestation and save the Amazon rainforest, but AI may also end up wiping out humanity. What is clear is that AI will have a profound impact on our lives and on our human rights: from freedom of expression, privacy, and freedom from discrimination, to our labour rights and our absolute right to freedom of thought.

As a business and human rights expert, I am concerned with the responsibilities of both governments and corporations to protect and respect human rights. The business and human rights framework challenges the view that governments are the only entities responsible for human rights: States are the primary duty bearers for human rights protection, but businesses are also expected to respect human rights, or 'do no harm'.

AI is the most transformative technology to have developed since the endorsement of the UN Guiding Principles on Business and Human Rights in 2011. This makes it necessary to assess the suitability of the business and human rights framework to address the effects of AI on human rights. AI is different from any other business sector. Its development is rapid and unpredictable; it potentially impacts not just individual human rights but also our societies and humanity as a whole; and it challenges traditional thinking on value chains as well as the concepts of worker, product, and consumer. For example, users of a social media platform are consumers, but they are also products (their data is sold) and workers (they train the algorithm to make better predictions).

AI is not effectively regulated, even though it is an enabling component of data-driven business models that empower companies and impact human rights. How can the regulation of AI be achieved in a context where regulation takes years to negotiate (take, for example, the proposed EU Artificial Intelligence Act), whereas AI development is moving at an unprecedented pace? The existing business and human rights framework may provide the enabling tool to regulate AI applications.

As AI evolves and humans adapt, the business and human rights framework - based on the three pillars of the State duty to protect human rights, the corporate responsibility to respect human rights, and access to remedy for victims - can become the core concept for regulating AI application and development. It may provide a universally agreed set of norms for assessing and addressing the effects of AI on individuals and society, and the responsibilities of both States and businesses, covering all human rights and all business enterprises.

A key component of the business and human rights framework is the corporate responsibility to conduct Human Rights Due Diligence (HRDD): a process through which companies identify, prevent, mitigate, and account for how they address their human rights impacts. This process is now a requirement under newly adopted legislation in several European countries, and forthcoming at the EU level. A meaningful new use of the HRDD process in respect of AI systems can ensure that this technology is deployed for the benefit of all rather than the few. I regard this as a business and human rights problem.

To what degree can we trust AI to consider our rights? This question was asked during the event. We cannot trust AI systems as such, and we cannot trust the companies that develop AI tools. For example, Elon Musk's call for a pause in the development of any AI tool more powerful than OpenAI's GPT-4 may be based on genuine concern, but it may equally be driven by competitive motives. Governments cannot leave companies to self-regulate and decide what is ethical or responsible AI, because companies will make such decisions based on the opportunity for profit. We need to trust our governments to fulfil their obligation to regulate AI development and use by different entities, so that only AI systems that meet certain criteria and comply with a human rights due diligence process can be deployed.

The event at the House of Lords is part of the Bingham Centre's Public and Youth Engagement programme, which is supported by the Sybil Shine Memorial Trust.

Author:
Dr Irene Pietropaoli 

Senior Fellow in Business and Human Rights, BIICL

