
What are the next challenges for AI?

3 episodes
  • 1
    Machine Learning: can we correct biases?
  • 2
    AI Act: how Europe wants to regulate machines
  • 3
    “Decisions made by algorithms must be justified”
Episode 1/3
Sophy Caulier, Independent journalist
On December 1st, 2021
4 mins reading time
Stephan Clémençon
Professor of Applied Mathematics at Télécom Paris (IP Paris)

Key takeaways

  • AI is a set of tools, methods and technologies that allow a system to perform tasks in an (almost) autonomous way.
  • The question of trust in Machine Learning (ML) tools is recurrent, because deep learning requires very large volumes of data, which often come from the web.
  • There are different types of bias that can be related to the data source used. These include “selection bias”, due to a lack of representativeness, and “omission bias”, where data are missing (see the sketch after this list).
  • When the available data are too sparse to implement ML in a simple way, we talk about “weak signals”. Hybridisation of ML with symbolic AI could provide solutions.
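To make selection bias concrete, here is a minimal sketch in Python: it compares the proportion of each group in a training sample against known population shares and reports the gap, which is one simple way such a bias can be spotted. The group names, shares and simulated sample are invented for illustration.

```python
import numpy as np

# Hypothetical example: spot selection bias by comparing the share of
# each group in a training sample with its share in the reference
# population. Group names and proportions are invented.
rng = np.random.default_rng(0)

population_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

# Simulated training sample in which group_c is under-collected.
sample = rng.choice(list(population_shares), size=10_000, p=[0.55, 0.35, 0.10])

for group, expected in population_shares.items():
    observed = float(np.mean(sample == group))
    print(f"{group}: expected {expected:.2f}, observed {observed:.2f}, "
          f"gap {observed - expected:+.2f}")
```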
Episode 2/3
Sophy Caulier, Independent journalist
On December 1st, 2021
3 mins reading time
Winston Maxwell
Director of Rights and Digital Studies at Télécom Paris (IP Paris)

Key takeaways

  • AI is not outside the law. Whether it is the GDPR for personal data, or sector-specific regulations in the health, finance, or automotive sectors, existing regulations already apply.
  • In Machine Learning (ML), algorithms are built from data rather than written by hand, and they operate in a probabilistic manner. Their results are accurate most of the time, but a risk of error is an unavoidable characteristic of this type of model.
  • A challenge for the future will be to surround these very powerful probabilistic systems with safeguards for tasks like image recognition (see the sketch after this list).
  • Upcoming EU AI regulations in the form of the “AI Act” will require compliance testing and ‘CE’ marking for any high-risk AI systems put on the market in Europe.
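As one illustration of such a safeguard, here is a minimal sketch assuming a classifier that outputs class probabilities: predictions whose top probability falls below a threshold are not acted on automatically but deferred to a human reviewer. The function name and the threshold value are illustrative assumptions, not anything prescribed by the AI Act.

```python
import numpy as np

def predict_with_safeguard(probabilities: np.ndarray, threshold: float = 0.9):
    """Return the predicted class index, or None to defer the case to a
    human reviewer when the model's top probability is below `threshold`.

    The 0.9 threshold is an illustrative choice, not a legal requirement.
    """
    top_class = int(np.argmax(probabilities))
    if float(probabilities[top_class]) < threshold:
        return None  # abstain: route the input to human review
    return top_class

# A confident prediction is returned; an uncertain one is deferred.
print(predict_with_safeguard(np.array([0.02, 0.95, 0.03])))  # -> 1
print(predict_with_safeguard(np.array([0.40, 0.35, 0.25])))  # -> None
```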
Episode 3/3
Sophy Caulier, Independent journalist
On December 1st, 2021
4 mins reading time
Isabelle Bloch
Professor at Sorbonne University (Chair in Artificial Intelligence)

Key takeaways

  • Symbolic AI is based on explicit rules that reproduce human reasoning. This approach is said to be “inherently explainable”, with a few exceptions.
  • Statistical approaches to AI rely on statistical learning methods; it is difficult to extract and express the rules underlying what their neural networks do.
  • The need for explainability stems from issues of trust, ethics and responsibility, and possibly from economic concerns as well.
  • Hybrid AI can address this problem by combining several AI approaches: it combines knowledge and data, symbolic AI and neural networks, logic and learning (see the sketch after this list).
  • But, whatever the approach, the role of the human being remains essential, and it will always be necessary to justify the decisions made by an algorithm.
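To give a flavour of this hybridisation, here is a minimal, hypothetical sketch: a learned score (stubbed out here) proposes a decision, explicit symbolic rules check it, and every outcome is returned together with human-readable reasons that can justify it. All rule names, thresholds and data are invented.

```python
# Hypothetical sketch of a hybrid pipeline: a learned score proposes,
# explicit symbolic rules check, and every decision carries reasons.

def learned_score(application: dict) -> float:
    """Stand-in for a trained model's probability of approval."""
    return 0.87  # placeholder; a real system would query a trained model

SYMBOLIC_RULES = [
    ("applicant must be an adult", lambda a: a["age"] >= 18),
    ("requested amount within limit", lambda a: a["amount"] <= 50_000),
]

def decide(application: dict) -> tuple[bool, list[str]]:
    """Combine rule checks with the learned score and return a decision
    together with the human-readable reasons that justify it."""
    violations = [name for name, rule in SYMBOLIC_RULES if not rule(application)]
    if violations:
        return False, violations  # rule violations are directly explainable
    if learned_score(application) < 0.5:
        return False, ["model score below decision threshold"]
    return True, ["all rules satisfied; model score above threshold"]

print(decide({"age": 30, "amount": 10_000}))  # (True, [...])
print(decide({"age": 16, "amount": 10_000}))  # (False, ['applicant must be an adult'])
```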

Contributors

Sophy Caulier

Independent journalist

Sophy Caulier has a degree in Literature (University Paris Diderot) and in Computer Science (University Sorbonne Paris Nord). She began her career as an editorial journalist at Industrie & Technologies and then at 01 Informatique. She is now a freelance journalist for daily newspapers (Les Echos, La Tribune), as well as for specialised and general-interest magazines and websites. She writes about digital technology, economics, management, industry and space. Today, she writes mainly for Le Monde and The Good Life.