Director of Rights and Digital Studies at Télécom Paris (IP Paris)
AI is not outside the law. Whether it is the GDPR (RGPD) for personal data or sector-specific regulations in the health, finance, or automotive sectors, existing rules already apply.
In machine learning (ML), algorithms build their own models from data and operate in a probabilistic manner. Their results are accurate most of the time, but a risk of error is an unavoidable characteristic of this type of model.
A challenge for the future will be to surround these very powerful probabilistic systems with safeguards for tasks like image recognition.
The upcoming EU regulation known as the “AI Act” will require compliance testing and CE marking for any high-risk AI system placed on the market in Europe.
Associate Professor in Innovation Management at École Polytechnique (IP Paris)
Founder and CEO of Lili.ai
Artificial intelligence (AI) is increasingly involved in our daily decisions but raises practical and ethical issues.
A distinction must be made between the notion of interpretability of AI (its functioning) and the notion of accountability (the degree of responsibility of the creator/user).
A draft European regulation should lead, in 2023, to a classification of AI systems according to different levels of risk.
AI can free humans from time-consuming and repetitive tasks and allow them to focus on more important tasks.
It is in France's interest to invest in this type of AI for very large projects, since the country has access to colossal amounts of data to process.
Sophy Caulier has a degree in Literature (University Paris Diderot) and in Computer Science (University Sorbonne Paris Nord). She began her career as an editorial journalist at 'Industrie & Technologies' and then at '01 Informatique'. She is now a freelance journalist for daily newspapers (Les Echos, La Tribune), as well as specialised and general-interest magazines and websites. She writes about digital technology, economics, management, industry and space. Today, she writes mainly for Le Monde and The Good Life.