
What are the next challenges for AI?

5 episodes
  • 1
    Machine Learning: can we correct biases?
  • 2
    “Decisions made by algorithms must be justified”
  • 3
    Artificial intelligence: a tool for domination or emancipation?
  • 4
    When algorithms replace humans, what is at stake? 
  • 5
    Are artificial intelligence and human intelligence comparable?
Episode 1/5
Sophy Caulier, Independent journalist
On December 1st, 2021
4 min reading time
Stephan Clémençon
Professor of Applied Mathematics at Télécom Paris (IP Paris)

Key takeaways

  • AI is a set of tools, methods and technologies that allow a system to perform tasks in an (almost) autonomous way.
  • The question of trust in Machine Learning (ML) tools is a recurring one, because deep learning requires very large volumes of data, which often come from the web.
  • There are different types of bias related to the data source used. These include “selection bias”, due to a lack of representativeness, and “omission bias”, where data are missing (see the sketch after this list).
  • When the available data are too sparse for ML to be applied straightforwardly, we speak of “weak signals”. Hybridising ML with symbolic AI could provide solutions.
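
To make “selection bias” concrete, here is a minimal, hypothetical sketch (not taken from the interview): a model trained on a sample in which one sub-population is under-represented performs noticeably worse on that group. The synthetic data, the group definitions and the use of scikit-learn are illustrative assumptions, not the method discussed in the episode.

```python
# Illustrative sketch of selection bias (hypothetical data, not from the article).
# A classifier trained on a non-representative sample performs well on the
# over-represented group and poorly on the under-represented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    """Two sub-populations whose labels follow different rules."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int) if rule == "A" else (X[:, 1] > 0).astype(int)
    return X, y

X_a, y_a = make_group(5000, "A")   # group A: label depends on feature 0
X_b, y_b = make_group(5000, "B")   # group B: label depends on feature 1

# Selection bias: the training sample is 95% group A, only 5% group B.
X_train = np.vstack([X_a[:4750], X_b[:250]])
y_train = np.concatenate([y_a[:4750], y_b[:250]])
model = LogisticRegression().fit(X_train, y_train)

# Held-out evaluation per group: accuracy is high for A, close to chance for B.
print("accuracy, group A:", round(model.score(X_a[4750:], y_a[4750:]), 2))
print("accuracy, group B:", round(model.score(X_b[250:], y_b[250:]), 2))
```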
Episode 2/5
Sophy Caulier, Independent journalist
On December 1st, 2021
4 min reading time
Isabelle Bloch
Professor at Sorbonne University (Chair in Artificial Intelligence)

Key takeaways

  • Symbolic AI is based on explicit rules that reproduce human reasoning. This approach is said to be “inherently explainable”, with a few exceptions.
  • Statistical approaches to AI rely on statistical learning methods; it is difficult to extract and express the rules underlying what their neural networks do (the sketch after this list illustrates the contrast).
  • The need for explainability comes from different issues around trust, ethics, responsibility, and also possibly economic issues.
  • Hybrid AI can address this problem by combining several approaches: knowledge and data, symbolic AI and neural networks, logic and learning.
  • But, whatever the approach, the role of the human being remains essential, and it will always be necessary to justify the decisions made by an algorithm.
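
As an illustration of the contrast above, here is a minimal, hypothetical sketch (not drawn from the article): an explicit symbolic rule whose every decision can be justified directly, next to a small neural network that learns the same behaviour from examples but only exposes weight matrices. The loan-approval rule, the synthetic data and the scikit-learn classifier are assumptions made for the example.

```python
# Hypothetical contrast between a symbolic rule and a statistical model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Symbolic approach: the rule *is* the model, so every decision can be
# justified directly ("refused because income < 2 x amount").
def symbolic_loan_rule(income, amount):
    return 1 if income >= 2 * amount else 0

# Statistical approach: the same behaviour learned from labelled examples.
rng = np.random.default_rng(0)
income = rng.uniform(1_000, 10_000, size=2_000)
amount = rng.uniform(500, 5_000, size=2_000)
X = np.column_stack([income, amount])
y = np.array([symbolic_loan_rule(i, a) for i, a in zip(income, amount)])

net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2_000, random_state=0),
).fit(X, y)

print("network decision:", net.predict([[3_000, 2_000]])[0])   # learned
print("symbolic decision:", symbolic_loan_rule(3_000, 2_000))  # explainable
# The network's only built-in "explanation" is its weight matrices:
print([w.shape for w in net[-1].coefs_])
```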
Episode 3/5
On January 17th, 2023
5 min reading time
Lê Nguyên Hoang
Co-founder and President of Tournesol.app
Victor Berger
Post-doctoral researcher at CEA Saclay
Giada Pistilli
PhD student at Sorbonne University affiliated with the CNRS Science, Norms, Democracy laboratory

Key takeaways

  • There are three ways to teach artificial intelligence (AI): supervised learning, unsupervised learning, and reinforcement learning (each is sketched in miniature after this list).
  • Machine learning algorithms can spot patterns, so the slightest hidden bias in a dataset can be exploited and amplified.
  • Generalising from past experience in AI can be problematic because algorithms use historical data to answer present problems.
  • AI is also a field where a great deal of power is at stake: ethical issues, such as the use of data, can emerge.
  • Communities could take ownership of AI, using it as a true participatory tool for emancipation.
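
To give a feel for the three families of learning mentioned in the first takeaway, here is a minimal, hypothetical sketch: a supervised classifier, an unsupervised clustering, and a tiny reinforcement-learning loop (an epsilon-greedy bandit). The toy data, the reward values and the use of scikit-learn and NumPy are assumptions made purely for illustration.

```python
# Miniature, hypothetical examples of the three learning paradigms.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))

# 1. Supervised learning: labelled examples -> a predictor.
y = (X[:, 0] > 0).astype(int)                  # labels supplied by a "teacher"
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", round(clf.score(X, y), 2))

# 2. Unsupervised learning: structure discovered without any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# 3. Reinforcement learning: actions chosen by trial and error to maximise
#    a reward signal (epsilon-greedy multi-armed bandit).
true_reward = np.array([0.2, 0.5, 0.8])        # unknown to the agent
estimates, counts = np.zeros(3), np.zeros(3)
for _ in range(1_000):
    arm = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = float(rng.random() < true_reward[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
print("estimated arm values:", estimates.round(2))   # estimates roughly track true_reward
```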
Episode 4/5
On March 22nd, 2023
4 min reading time
Véronique Steyer
Associate Professor in Innovation Management at École Polytechnique (IP Paris)
Milie Taing
Founder and CEO of Lili.ai

Key takeaways

  • Artificial intelligence (AI) is increasingly involved in our daily decisions but raises practical and ethical issues.
  • A distinction must be made between the notion of interpretability of AI (how it works) and the notion of accountability (the degree of responsibility of the creator or user).
  • A draft European regulation should lead in 2023 to a classification of AIs according to different levels of risk.
  • AI can free humans from time-consuming and repetitive tasks and allow them to focus on more important tasks.
  • It is in France's interest to invest in this type of AI for very large projects, because the country has access to colossal amounts of data to process.
Episode 5/5
On January 17th, 2024
5 min reading time
Daniel Andler
Professor emeritus in Philosophy of Science at Sorbonne Université
Maxime Amblard
Professor of Computer Science at Université de Lorraine
Annabelle Blangero
PhD in Neuroscience and Data Science Manager at Ekimetrics

Key takeaways

  • Artificial intelligence and human intelligence are inevitably compared.
  • This confrontation is intrinsic to the history of AI: some approaches are inspired by human cognition, while others are completely independent from it.
  • The imprecise and controversial definition of intelligence makes this comparison vague.
  • Consciousness remains one of the main elements that AI seems to lack in order to imitate human intelligence.
  • The question of comparison in fact raises ethical issues about the use, purpose and regulation of AI.

Contributors

Sophy Caulier

Independent journalist

Sophy Caulier has a degree in Literature (University Paris Diderot) and in Computer Science (University Sorbonne Paris Nord). She began her career as an editorial journalist at Industrie & Technologies, then at 01 Informatique. She is now a freelance journalist for daily newspapers (Les Echos, La Tribune), specialised and general-interest magazines, and websites. She writes about digital technology, economics, management, industry and space. Today, she writes mainly for Le Monde and The Good Life.