Guidelines for the ethical use of AI in business

When a company or an organisation designs or uses artificial intelligence (AI), it has a duty to ask how it can develop this technology responsibly, without creating ethical problems. It falls to both AI designers and the leaders who will use this technology in their company’s activities to establish conscientious practices built on a series of fundamental guidelines. So, what are those guidelines?

Ethical issues relating to the use of AI in business

Ethics is a set of well-reasoned moral principles whose goal is to define rules for life and action, provide recommendations and set limits in order to orientate our existence and organise social life, with the aim of preserving our societies.

As a result, encouraging ethical practices in the field of AI requires consideration of what has moral value, what gives meaning to our actions and our life as a community, what the desirable or fair outcomes should be and what defines us as moral beings.

In this respect, it seems to me that the ethical creation and use of AI in business should adhere to the following imperatives:

  • Transparency
  • Explainability
  • Consideration of the different stakeholders.

Transparency, according to the European Expert Group, is the requirement that AI systems be designed and implemented in a way that enables the supervision and monitoring1 of the data, the system itself and the associated business model. Other specialists believe this should go a step further, asserting that transparency should also extend to the source code of the AI systems concerned.

This demand for transparency will undoubtedly need to be applied on a progressive basis, depending on the subject and on any legal principles that may contradict it, for example as concerns personal data, safety and security.

Further, transparency must be maintained throughout the life cycles of AI systems – from design to development and throughout the period of their use – since these systems will evolve, learn and be transformed by the addition of new data and by their interactions with human users. 

The explainability of the decisions made and the results of autonomous intelligent systems is also a crucial ethical point. According to the European Expert Group, it is the ability to translate AI systems’ operations into intelligible results that, in particular, allow them to be evaluated and provide information about the place, time and manner in which those systems are used.

This criterion expresses the same requirements that anyone is entitled to expect of a human being making decisions that affect them or taking action against them. This parallel with inter-human relationships is at the heart of the matter, since the principle of explainability is closely tied to the principle of accountability. In the “French vision” of ethical AI, humans should be accountable for their use of this technology, and individuals should be the first to be held responsible in the event of any damage caused by using AI systems.

Lastly, the consideration of the different stakeholders – the designers, engineers and users – is essential, although the issues are undoubtedly different from one group to the next. For example, a responsible engineer will think about how to build an ethical algorithm, whilst a responsible user will be concerned with not diverting an AI solution for potentially dangerous purposes.

Ethics and the law: Two prescriptive spheres with separate goals

It is important to underscore the fact that ethics and the law do not have the same end goals. They come from two prescriptive worlds with separate mechanisms.

While there are a number of regulatory safeguards – such as the General Data Protection Regulation (GDPR), the AI Act2, which aims to regulate the use of AI by establishing a European code of conduct, and various certifications – these rules of hard law are imposed upon private actors (companies) by a public actor (the legislature), supported by judges who punish any conduct that does not conform to the rule of law. These measures are coercive, unlike ethical standards, which are incentive-based.

Ethical values and principles, when shared, and especially in business settings, can help to orientate actions, like a compass guiding a ship and its crew. The power of ethics stems from the actors co-building a standard and taking real ownership of it, so that it is truly meaningful to them. It then both holds them accountable and gives them a strong sense of belonging.

The effects of ethics and the law – which are two positive resources – are therefore very different, such that the one cannot replace the other.

That being said, the European Union has a rather powerful data protection system that guarantees our sovereignty, at least to some extent. But in my opinion, we should avoid enacting more and more laws, as this could impede innovation in Europe, which in turn would reduce our competitiveness in a globalised world, particularly in the field of AI. If EU companies are not competitive, the risk in the end would be a loss of sovereignty, to the benefit of countries like the US, China and India, which are not subject to the same restrictive regulations. Lastly, it appears absolutely necessary that we build a shared ethical framework with all those States, in a field which is becoming more and more sensitive due to the major scientific advances made in recent years.

How can we encourage an ethical relationship with AI without hindering innovation?

The question now is how to determine the right decision-making level and the right degree of restriction to elicit a positive attitude from the stakeholders. There are multiple issues to address in this arena.

The issue of the behaviour of the humans who have some relationship with AI, whether during the research, development or implementation phase, is one such matter: among the most fundamental questions is how to guarantee the ethics of the human beings who copilot AI.

In a recommendation issued in November 2021, which constituted the first global normative instrument relating to the subject, UNESCO urged AI actors to “promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all”.

The issue of the behaviour of machines is a crucial subject, especially when it comes to generative AI, which targets the autonomous production of texts, images, videos, sounds and other types of data by computer systems, using advanced machine learning models to generate content that looks like the kind that is usually created by human beings.

This is also why, at this stage, it is vital not to enact vertical, top-down legislation without first carrying out a study on the ground covering the feasibility and enforceability of such laws. It is essential not to let a State organisation that is out of touch with reality dictate how AI will be developed.

It is likewise important to ensure that the stakeholders – private sector companies, academic institutions, research institutes and civil society – are involved in building and launching a normative framework proposed by public authorities, one that considers the specific situations and priorities of each Member State. To develop an intelligent regulatory process, a “smart law” with some assurance of effectiveness, actors must be encouraged to implement tools capable of assessing the impact of AI on human rights, the rule of law, democracy and ethics.

In other words, all actors should be incentivised to adopt responsible, ethical behaviour when designing, developing and using algorithms in order to help foresee major issues relating to AI. Moreover, it could be useful to create an independent supervisory body for the development and use of AI, like France already has in the field of healthcare, in the form of its National Ethics Advisory Committee.

Lastly, it is worth noting that companies which have adopted an ethical approach to AI are often applauded by their customers. As indicated in a white paper published by the Cité de l’IA, thinking and acting ethically is a new point of leverage for a company’s competitiveness and for consistency with its mission and purpose. By building trust in AI-related topics, companies can create value for their customers and partners.

  1. These experts have been tasked by the Members of the European Parliament with delivering advice to the EU on the strategy it should adopt in respect of AI. Their 2019 proposals are reiterated in the European White Paper on Artificial Intelligence, which was published on 19 February 2020.
  2. This law was unanimously adopted on 2 February 2024 and will come into effect in 2026.