THE DIGITAL TRANSITION: IMPACTS AND DEPENDENCIES

Why this study?

The term “digital transition” refers to the massive spread of digital devices and services; today, artificial intelligence is broadening and deepening that impact on all aspects of life.

The digital transition is a historic process of changeover to the Internet that began at the end of the 20th century. Today, digital technologies can be found at every level of our daily and professional lives. The digital transition refers to the path towards the digitisation of information systems; in other words, it is the integration of digital technologies into the way society works. This true digital revolution carries imperatives such as the transformation of practices, dematerialisation and the need to rethink the organisation and culture of our societies.

The arrival of AI poses a new challenge for the digital transition. AI stems from the digital transition insofar as this technology opens up perspectives that were unimaginable even ten years ago. It enables data governance through automated analysis processes, in particular machine-learning systems that operate with a high degree of autonomy. Consequently, this report aims to reflect on the impacts and dependencies arising from both the digital transition and the transition to AI.

What are the dependencies and societal impacts that will arise from the digital transition and the transition to AI?

These societal impacts and dependencies are the subject of this report. Some, such as those in the financial and environmental spheres, are well known and will only be recalled here; others, less obvious, in the social, ethical, political and geopolitical, democratic and sovereignty spheres, will be discussed in greater detail.

This report draws on a wide range of national and international analyses and expert contributions to answer the following questions:

  • What is the place of human, social and sovereignty aspects in the paradigm of the digital transition and the transition to artificial intelligence?
  • How should we respond to these changes?
  • What are some possible solutions?

These solutions are based on education, the development of critical thinking and of the relationship with truth, ethics by design, and the possible implementation of global regulation.

Recommendations

  • For organisations, ensure full awareness of the obsolescence of, and constant competition between, technology components and applications, and integrate the identification and management of their lifespans into continuity plans.
  • Design algorithms that are more energy- and resource-efficient from the outset. Our societies face a paradox: they encourage the digital transition while at the same time urging energy conservation. “We can live with less AI, but not necessarily with less water.”
  • Support the radical structural changes taking place in the labour market by introducing appropriate public policies to provide the keys to understanding AI from an early age and as part of professional development. This should be done in close cooperation with the education system, so that learners and teachers are trained both in technical aspects and in best practices. Given the rapid evolution of the tools available, this training will need to be updated regularly – this is not a “one and done” exercise.
  • One reality of the transition to digital technology and AI is the erosion and casualisation of jobs. Experts have warned about the situation of “click-workers”, particularly those based in countries with cheap labour and low levels of regulation. New ways of working have emerged with the advent of digital platforms that connect supply and demand for goods or services. Nevertheless, with the rise of click-work and digital platforms, there is a clear risk of casualisation of the labour market, and it is imperative to address this issue in countries committed to labour standards.
  • Make changes to the legal framework governing intellectual property rights, perhaps by taking the rules governing the use of copies of books or articles as a model. The innovation of AI lies in its ability to create new computer-generated content from existing data. Who owns AI-generated content? In the age of AI, how can we ensure that intellectual property rights are protected? Could an AI creation be considered plagiarism? We note that there is a legal vacuum on this subject.
  • Avoid a dogmatic approach to digital technology by using it only where it adds value to human work. Now that all aspects of human life are being digitalised, it is imperative to ensure complementarity, avoid succumbing to a single paradigm of digital technology and AI, and know how to identify situations in which the human mind is irreplaceable. To begin with, simple measures can be taken to avoid a 100% digital world: for example, limiting smartphone use by installing signal jammers in classrooms, which would make the devices unusable on school premises without formally banning them.
  • Work on teaching individuals to distinguish between the true, the plausible and the relative. The aim is for citizens to learn how to assess information by developing their critical thinking skills; this is not about questioning everything without reason. Methodological, scientific or Cartesian doubt should be instilled in citizens from an early age, rather than giving in to the irrepressible urge to “protect” them from it. There are methods that can be adapted to all ages. Furthermore, given the influence of AI on digital communication and the risk of intellectual standardisation, it would be worthwhile to introduce algorithms that present a certain number of random results in order to maintain intellectual curiosity and innovation (a minimal sketch of this idea follows this list). Finally, to avoid altering the relationship with truth, one solution to consider would be for any content generated by AI to systematically include a statement to that effect. This practice already exists, but should be extended to all platforms, especially – but not only – during election periods, in order to combat misinformation.
  • Implement ethics by design upstream, when algorithms are being developed by their creators. In other words, build best practices in from the start. Algorithm creators need to ask themselves the following questions: Who writes the algorithms? Where are the engineers who write the algorithms located? What are the general standards that should be implemented? What moral, human or ethical considerations should be adopted?
  • Minimise our dependence on digital services governed by foreign law. In terms of sovereignty, Europe missed the internet boat, leaving the leadership of this technological revolution to American and Chinese multinationals. In order to guarantee European digital sovereignty, a national and European public procurement strategy must be put in place, whether for services or for the creation of a French and/or European cloud.
  • Encourage French digital professionals to pursue their careers in France. Increasing domestic attractiveness and limiting brain drain are key elements in the battle for AI sovereignty in France. This means, among other things, increasing the remuneration of researchers, but also making it easier for them to carry out their projects, funding them, helping companies to start up and grow, and cutting red tape at all levels of business and government.
  • Lay the groundwork for a non-coercive international convention establishing principles for the governance of digital technology, and AI in particular. Given the different political models that exist, there are obvious risks associated with global digital and AI governance. It could therefore be envisaged as a cooperation based on principles (the lowest common denominator), drawing on texts already produced by various international organisations, and gradually supplemented by recommendations and non-binding technical standards.
  • Develop a principle of subsidiarity to avoid dependence, by systematically providing an alternative to ensure continuity of operations and maintenance of skills. There are two key aspects to consider: preserving the memory of documents and anticipating crises. Digital and AI systems are not immune to unforeseen events that could alter their operation, hence the need for non-digital ‘Plan Bs’. This mode of resilience must be designed at the same time as the technology it is intended to replace in the event of a failure.
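
One of the recommendations above suggests injecting a share of random results into recommendation algorithms to counter intellectual standardisation. The minimal sketch below illustrates that idea only; the function, parameter names and item names are hypothetical, and it assumes a platform that already produces a personalised ranking over a wider catalogue of candidate items.

    import random

    def diversify_ranking(ranked_items, candidate_pool, n_random=3, seed=None):
        """Keep the head of a personalised ranking, but reserve a few slots
        for items drawn at random from the wider candidate pool."""
        rng = random.Random(seed)
        # Items not already surfaced by the ranking are eligible for the random slots.
        unseen = [item for item in candidate_pool if item not in ranked_items]
        n_random = min(n_random, len(unseen))
        serendipity = rng.sample(unseen, n_random) if n_random else []
        # Truncate the personalised ranking to make room for the random picks.
        kept = ranked_items[: max(len(ranked_items) - n_random, 0)]
        return kept + serendipity

    # Hypothetical usage: a ten-slot feed in which three slots are random.
    personalised = [f"article_{i}" for i in range(10)]
    catalogue = [f"article_{i}" for i in range(200)]
    print(diversify_ranking(personalised, catalogue, n_random=3, seed=42))

The share of random slots (n_random here) would be a policy choice; the report does not prescribe a value, only the principle of preserving serendipity.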
