AI with Purpose: AI for Ecology, Health, and Education

Round table “AI for Ecology, Health, and Education” organised by the SKEMA Centre for Artificial Intelligence and SKEMA Publika, with the support of ENGIE Research & Innovation and the MIT Club de France

Recommendations
  • Measure the real impact of AI: Each solution must generate more benefits than harm.
  • Put people at the heart of systems: AI must assist in decision-making and never replace the user’s judgement or responsibility.
  • Strengthen ethical governance: Transparency, human supervision, and accountability must be organisational reflexes.
  • Train critical thinking: Understanding how AI works, its biases and its limitations means training citizens to choose technology rather than to be subjected to it.
  • Encourage interdisciplinary and collaborative research: Combine expertise to find solutions that are truly useful to society.

On October 23rd, 2025, the SKEMA Centre for Artificial Intelligence and SKEMA Publika hosted a panel of international experts for a round table discussion on the purposeful applications of artificial intelligence in the fields of ecology, health, and education.

Round table “AI for Ecology, Health, and Education” at the Grand Paris campus of SKEMA Business School.

In a world disrupted by the rise of artificial intelligence, the need to design and implement an AI with purpose has never been more urgent. Following the publication of the book Artificial Intelligence for Ecology, Health, and Education, edited by Professor Margherita Pagani, the round table highlighted a human-centric approach to AI, one in which algorithms are designed to support human values, ethics, and the public good.

At the SKEMA Centre for Artificial Intelligence, the concept of AI with purpose lies at the very heart of its research strategy. It guides the Centre’s mission to ensure that technological innovation serves society responsibly, fostering meaningful sustainable progress. As Professor Pagani explains, “artificial intelligence must not only advance technology but also serve humanity by fostering sustainable progress across our most vital domains — ecology, health, and education”. The book she edited explores AI’s transformative potential across these three critical areas, while thoughtfully addressing its ethical implications. It outlines how AI can enhance the human experience in ways that transcend its technological capabilities, making it a truly purposeful tool for sustainable development.

Frédérique Vidal, Director of Development at SKEMA Publika, opened and concluded the conference by reaffirming its central ambition: to imagine an artificial intelligence truly centred on the human being. She emphasised that this vision aligns closely with SKEMA Business School’s Unveil 2030 strategy, which aims to strengthen research, responsible innovation and interdisciplinarity in order to deliver concrete solutions to the challenges of our time. She also highlighted the immense potential of AI, while calling for a measured and enlightened use of the technology. Her closing message was clear: artificial intelligence must become a tool serving the common good, one that helps shape the future of our societies without compromising their fundamental values.

Ecology: AI – Accelerator and Challenge for Sustainable Transition

Amidst a climate emergency, artificial intelligence is often presented as an essential technological lever for accelerating the energy transition. At ENGIE, optimisation algorithms help reduce network losses, better predict demand, and support renewable energies. In this field, AI helps model complex scenarios and identify efficiency gains that would otherwise remain invisible.

However, it is essential to bear in mind the environmental paradox inherent in this technology. The underlying infrastructure (data centres, model training, data storage) consumes tremendous amounts of electricity and water. In some countries, such as Ireland or Chile, the proliferation of AI data centres is already straining natural resources. Mihir Sarkar (Head of AI – ENGIE Research & Innovation) emphasised that AI must now be assessed on its net carbon footprint: it will only be truly sustainable if the emissions it avoids exceed those it generates.

Every AI solution must produce more positive effects than the energy it consumes.
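To make the criterion concrete, here is a minimal sketch of this “net positive” test. All figures are hypothetical and purely illustrative, not ENGIE data: the idea is simply that an AI deployment’s avoided emissions are weighed against the emissions generated by training, inference, and data storage.

```python
# Illustrative sketch of the "net positive" test described above.
# All figures are hypothetical placeholders, not ENGIE data.

def net_carbon_impact(emissions_avoided_t: float, emissions_generated_t: float) -> float:
    """Return net tonnes of CO2e avoided (positive means net benefit)."""
    return emissions_avoided_t - emissions_generated_t

# Hypothetical example: a grid-optimisation model.
avoided = 1200.0     # tCO2e/year avoided through reduced network losses
generated = 350.0    # tCO2e/year from training, inference, and data storage

net = net_carbon_impact(avoided, generated)
verdict = "net positive" if net > 0 else "not net positive"
print(f"Net impact: {net:+.0f} tCO2e/year ({verdict} by this criterion)")
```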

The contributions featured in the book illustrate how sustainability also relies on human and social factors. At the Catholic University of Portugal (Católica Lisbon), researchers study how ecological constraints are transforming business models. At the University of Southern California, researchers use generative AI to engage local communities in addressing climate change. Far from being a miracle solution, AI for ecology forces trade-offs between technological progress, environmental impact, and collective governance.

Health: From Medical Promises to Ethical Vigilance

In healthcare, hopes are also high. AI is already helping to accelerate research, improve diagnostics and personalise treatments. It helps doctors better interpret medical images, predict the progression of certain diseases, and develop new medications. The work carried out at the Carnegie Mellon Center for Machine Learning and Health, for instance, explores how gamification can promote prevention and therapeutic education.

An excessive use of AI at all stages of the therapeutic process risks creating a dehumanised form of medicine. Delegating vital decisions to non-transparent algorithms casts doubt on the very principles of medical responsibility. Irregularities such as those reported in the Neuralink project remind us how tenuous the line between therapeutic innovation and technological experimentation on human beings can be. Christine Balagué (Professor of Digital Consumer Behaviour Management – Institut Mines-Télécom Business School), a member of the French High Authority for Health’s prospective committee, emphasised the need for an accountable AI based on four principles: justice, autonomy, beneficence, and non-maleficence. She reminded the audience that many algorithms, particularly in the United States, still reproduce discriminatory biases based on patients’ ethnic background or socio-economic status.

In healthcare, AI is promising if and only if it is accountable. Ethical considerations are not just words on a page; they are governance and processes.

The European Union is making progress on these issues. The AI Act classifies healthcare as a high-risk sector, requiring traceability, auditability, and human oversight of AI systems. However, regulation alone is not enough. Protecting patients and preserving a humane healthcare system will also require transforming hospital governance and practitioner training, strengthening cybersecurity management, and defining safeguards against transhumanist tendencies. Medical AI should not replace human judgement but enhance it, by strengthening transparency, trust and the caregiver-patient relationship.

Education: Teaching Critical Thinking in the Age of Generative AI

The rise of generative artificial intelligence is shaking up the foundations of education. Writing an essay, solving a problem or drawing a picture can now take seconds thanks to generative AI. For Erwan Paitel, General Inspector for Education, Sport, and Research, this revolution demands a rethinking of teaching methods. Rather than banning the tool, we need to learn how to use it intelligently and understand how it works, its limitations and its biases. For him, teaching AI means teaching critical thinking, and thus the humanities.

At the same time, the benefits of AI for education are real. It can be used to individualise learning, tailor tutoring, and save teachers’ time. EdTech (educational technology) is a booming sector and an opportunity for innovation, one that France 2030 is already fostering with adaptive solutions capable of adjusting the pace and content of lessons to the needs of each pupil. In this respect, SKEMA Entrepreneurs supports several projects led by its students, such as Skesia, an educational game designed to make learning mathematics easier. Nevertheless, using AI in education without careful consideration carries educational and social risks: the speakers warned of cases already observed of loss of creativity, cognitive dependence, homogenisation of knowledge, and a digital divide between institutions.

Teaching AI means first and foremost teaching citizens how to evaluate information and machines.

The challenge is not only technical but also civic. As early as primary school, children must learn that machines do not think but imitate. In secondary school, they must develop critical thinking skills when faced with generated content, and in higher education, they must learn to use these tools to innovate without compromising intellectual integrity. Schools must therefore become the first laboratories for ethical AI, places where students learn to question technology before using it.

AI with Purpose: Serving Life

From ecology to education and health, the same conclusion applies: artificial intelligence only makes sense if it serves humans and life. Speakers reiterated that the challenge is no longer about adopting AI but about determining the values that will guide its development. AI with purpose, a sustainable AI, is first and foremost a measured AI, whose positive effects must outweigh its energy and social footprint. It rests on ethical governance, in which transparency, human supervision, and collective responsibility take precedence over the sole pursuit of technological performance. Such an ambition can only be realised through education. Learning to understand AI, recognising its biases, and guiding its use are the pillars of AI education, and this education is what will prepare the informed citizens of tomorrow, capable of choosing the technology they use rather than being subjected to it.

By bringing together researchers, businesses and institutions, SKEMA reaffirms its commitment to AI with purpose: an AI that enlightens, cares and educates as much as it innovates.

Watch the Conference

About the Speakers
  • Margherita Pagani (Professor, Director of SKEMA Centre for AI)
  • Tom Davenport (Professor, Babson College; Co-founder, International Institute for Analytics; Fellow, MIT Initiative on the Digital Economy; Senior Advisor, Deloitte)
  • Anuchika Stanislaus (Digital and Key Projects Advisor, France 2030)
  • Mihir Sarkar (Head of AI, ENGIE)
  • Christine Balagué (Professor, Institut Mines-Télécom Business School)
  • Erwan Paitel (General Inspector, Inspection générale de l’éducation, du sport et de la recherche)
  • Frédérique Vidal (Director of Strategy and Scientific Impact, SKEMA Business School)

The Book

Cover page of the book AI for Ecology, Health, and Education.
About the Book

This prescient book explores AI’s transformative potential across three critical domains: ecology, healthcare, and education, while thoughtfully addressing its ethical implications. It outlines how AI can enhance the human experience in ways that transcend its technological capabilities, making it a unique tool for sustainable progress.

Find out more in the interview with the book’s editor, Margherita Pagani.