Professor of Applied Mathematics at Télécom Paris (IP Paris)
Key takeaways
AI is a set of tools, methods and technologies that allow a system to perform tasks (almost) autonomously.
The question of trust in Machine Learning (ML) tools is a recurring one, because deep learning requires very large volumes of data, which often come from the web.
Different types of bias can be linked to the data source used. These include “selection bias”, due to a lack of representativeness, and “omission bias”, where relevant data are missing (see the sketch after this list).
When the available data are too sparse for ML to be applied straightforwardly, we speak of “weak signals”. Hybridising ML with symbolic AI could provide solutions.
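By way of illustration, the following minimal Python sketch shows how selection bias can arise when data are collected from a non-representative source such as the web; the population, sampling weights and figures are entirely hypothetical, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: ages 18 to 90, uniformly represented.
population = rng.integers(18, 91, size=100_000)

# Selection bias: web-sourced data over-represent young users, so the
# probability of being sampled falls sharply with age
# (the weighting formula is purely illustrative).
weights = 1.0 / (population - 10.0)
weights /= weights.sum()
web_sample = rng.choice(population, size=1_000, p=weights)

print(f"true mean age:    {population.mean():.1f}")   # about 54
print(f"sampled mean age: {web_sample.mean():.1f}")   # noticeably lower
```

Any model fitted on such a sample inherits the distortion: groups that are under-represented at the source end up poorly served by the resulting predictions.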
Director of Rights and Digital Studies at Télécom Paris (IP Paris)
Key takeaways
AI is not outside the law. Whether it is the GDPR for personal data or sector-specific regulations in the health, finance or automotive sectors, existing rules already apply.
In Machine Learning (ML), the decision rules are learned from data rather than explicitly programmed, and the resulting models operate in a probabilistic manner. Their results are accurate most of the time, but a risk of error is an unavoidable characteristic of this type of model.
A challenge for the future will be to surround these very powerful probabilistic systems with safeguards for tasks like image recognition.
Upcoming EU AI regulation in the form of the “AI Act” will require compliance testing and “CE” marking for any high-risk AI system placed on the market in Europe.
Professor at Sorbonne University (Chair in Artificial Intelligence)
Key takeaways
Symbolic AI is based on explicit rules that reproduce human reasoning. This approach is said to be “inherently explainable”, with a few exceptions.
Statistical approaches to AI rely on statistical learning methods; it is difficult to extract and express the rules underlying what their neural networks do.
The need for explainability stems from issues of trust, ethics and responsibility, and possibly economic considerations as well.
Hybrid AI can address this problem by combining several AI approaches: knowledge and data, symbolic AI and neural networks, logic and learning (a minimal sketch follows this list).
But, whatever the approach, the role of the human being remains essential, and it will always be necessary to justify the decisions made by an algorithm.
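To make hybridisation concrete, here is a minimal Python sketch in which all names, rules and the scoring formula are hypothetical: explicit symbolic rules form an explainable safeguard around a stand-in for a learned probabilistic model, so every veto comes with a human-readable justification.

```python
from dataclasses import dataclass

# Hypothetical hybrid (symbolic + statistical) decision system: hand-written
# rules provide explainable safeguards around a probabilistic score.

@dataclass
class LoanApplication:
    age: int
    income: float
    requested_amount: float

def statistical_score(app: LoanApplication) -> float:
    """Stand-in for a trained model: returns an approval probability."""
    # Purely illustrative formula; a real system would use a learned model.
    return min(1.0, app.income / (app.requested_amount + 1.0))

# Each symbolic rule pairs a predicate with a human-readable justification.
SYMBOLIC_RULES = [
    (lambda a: a.age >= 18, "applicant must be an adult"),
    (lambda a: a.requested_amount <= 10 * a.income,
     "requested amount must not exceed ten times income"),
]

def decide(app: LoanApplication) -> tuple[bool, str]:
    # Symbolic layer first: any violated rule yields an explicit reason.
    for rule, reason in SYMBOLIC_RULES:
        if not rule(app):
            return False, f"rejected by rule: {reason}"
    # Statistical layer second: probabilistic, hence a residual error risk.
    score = statistical_score(app)
    return score > 0.5, f"model score {score:.2f} (probabilistic)"

print(decide(LoanApplication(age=17, income=30_000, requested_amount=5_000)))
print(decide(LoanApplication(age=35, income=30_000, requested_amount=20_000)))
```

Placing the rule layer ahead of the statistical layer keeps every rejection justifiable in symbolic terms, while the residual probabilistic error risk is confined to the learned component, with a human remaining responsible for the final decision.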
Associate Professor in Innovation Management at École Polytechnique (IP Paris)
Milie Taing
Founder and CEO of Lili.ai
Key takeaways
Artificial intelligence (AI) is increasingly involved in our daily decisions but raises practical and ethical issues.
A distinction must be made between the interpretability of an AI system (understanding how it works) and accountability (the degree of responsibility borne by its creator or user).
A draft European regulation should lead in 2023 to a classification of AI systems according to different levels of risk.
AI can free humans from time-consuming and repetitive tasks and allow them to focus on more important tasks.
It is in France's interest to invest in this type of AI for very large projects, because the country has access to colossal amounts of data to process.
Contributors
Sophy Caulier
Independent journalist
Sophy Caulier has a degree in Literature (University Paris Diderot) and in Computer Science (University Sorbonne Paris Nord). She began her career as an editorial journalist at Industrie & Technologies, then at 01 Informatique. She is now a freelance journalist for daily newspapers (Les Echos, La Tribune), for specialised and general-interest magazines, and for websites. She writes about digital technology, economics, management, industry and space. Today, she writes mainly for Le Monde and The Good Life.