When a company or organisation designs or uses artificial intelligence (AI), it has a duty to ask how it can develop this technology responsibly, without creating ethical problems. It falls to both AI designers and the leaders who deploy this technology in their company's activities to establish conscientious practices grounded in a set of fundamental guidelines. So what are those guidelines? An opinion piece by Diane de Saint-Affrique.
Diane de Saint-Affrique
Full Professor of Law, SKEMA Business School
Diane de Saint-Affrique holds a doctorate in law from the University of Paris 2 Panthéon-Assas. She is a professor at SKEMA Business School, where she created and directed the dual Master's programme in Business Law and Business Contract Law. She is also involved in SKEMA Ventures, SKEMA's incubator, where she advises start-ups on strategy in the context of their entrepreneurial and legal development, and she trains managers in governance and CSR. Her research areas are corporate law, corporate governance and CSR, as well as bioethics, AI and ethics. Diane de Saint-Affrique is a board member of AQUAVERA (a non-profit organisation) and of AFD&M (Association française droit et management).