Digital Society
How digital giants are transforming our societies

How profiling influences our behaviour

with Philippe Huneman, CNRS Research Director at Université Paris 1 Panthéon-Sorbonne and Oana Goga, Research Director at Inria and a member of the Inria CEDAR team and the Laboratoire d’Informatique d’Ecole Polytechnique (IP Paris)
On May 28th, 2025 | 5 min reading time
Key takeaways
  • The data we leave on websites is shared and sold, in particular to influence our online purchasing behaviour through targeted advertising.
  • To target our needs more effectively, online platforms use high-performance algorithms to understand and predict our behaviour.
  • The MOMENTOUS program aims to understand whether algorithms can exploit individuals’ psychological and cognitive traits to influence their behaviour.
  • We lack data on advertising that targets YouTube channels aimed at children, which increases their exposure to danger.
  • More transparent access to data from online platforms is essential for effective regulatory action.

When discussing upcoming vacations or purchases with friends, have you ever wondered why highly targeted ads suddenly appear on your Facebook wall or Instagram feed? We sometimes feel like our electronic devices are watching us. This concern is not unfounded: the traces we leave online reveal valuable information about our lives, often without us being fully aware of it.

On 14th February 2025, for instance, the Human Rights League filed a complaint in France against Apple for violation of privacy through the unauthorised collection of user data via the Siri voice assistant. The case raises the issue of the protection of our personal data, which has become a valuable resource coveted by companies.

Take cookies, for example: those elements we are asked to accept before accessing websites. Behind their appetising name lie opportunities for companies to access our personal data. As Philippe Huneman (CNRS research director at the Institute for the History and Philosophy of Science and Technology) shows in his book Les sociétés du profilage (Profiling Societies), it is important to distinguish between “necessary” cookies, which ensure the proper functioning of a website, and “optional” cookies, which are intended to improve the user’s browsing experience or personalise “more relevant” advertisements for the user1. By accepting these cookies on a particular website, we consent to some of our online behaviour being observed. This behavioural data often ends up with data brokers: companies that collect and aggregate data from multiple websites and ultimately resell it. Among the best known is Acxiom in the United States.
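The necessary/optional distinction can be illustrated with a minimal sketch of a consent filter. All cookie names and category labels here are hypothetical; real consent-management systems are far more involved:

```python
# Minimal sketch of a cookie consent filter (illustrative only).
# Cookie names and categories are hypothetical examples.

NECESSARY = "necessary"  # session handling, security: the site breaks without these
OPTIONAL = "optional"    # analytics, ad personalisation: require user consent

COOKIES = [
    {"name": "session_id", "category": NECESSARY},
    {"name": "csrf_token", "category": NECESSARY},
    {"name": "analytics_id", "category": OPTIONAL},
    {"name": "ad_profile_id", "category": OPTIONAL},
]

def cookies_to_set(consents_to_optional: bool) -> list[str]:
    """Return the cookies a site may set given the user's consent choice."""
    return [
        c["name"]
        for c in COOKIES
        if c["category"] == NECESSARY or consents_to_optional
    ]
```

Rejecting optional cookies still leaves the necessary ones in place: only the tracking-oriented cookies depend on consent.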

Predicting and influencing our behaviour

But why share and sell our personal data? One of the main objectives is to influence our online purchasing behaviour through targeted advertising. As Oana Goga (Inria research director at Ecole Polytechnique, IP Paris) points out: “In the field of online advertising, tracking [Editor’s note: monitoring users’ online behaviour on the web] is the basis of two targeting methods. The first is retargeting, which involves targeting Internet users who have already visited a website by displaying advertisements on other sites they visit. The other technique is profiling-based targeting, which involves creating a user profile.”
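The retargeting mechanism described above can be sketched in a few lines, assuming a shared tracking identifier that follows the user across sites (all identifiers and site names below are hypothetical):

```python
# Illustrative sketch of retargeting: a user recognised across sites via a
# tracking ID is shown ads for shops they previously visited.
# Identifiers and site names are hypothetical.

visits: dict[str, set[str]] = {}  # tracking_id -> shops the user browsed

def record_visit(tracking_id: str, shop: str) -> None:
    """Called when the tracker observes the user on an advertiser's site."""
    visits.setdefault(tracking_id, set()).add(shop)

def ads_to_display(tracking_id: str) -> set[str]:
    """On any other publisher site, show ads for shops this user visited."""
    return visits.get(tracking_id, set())

record_visit("user-42", "shoe-shop.example")
# Later, on an unrelated news site, the same tracking ID surfaces
# and the shoe shop's ad follows the user there.
```

The key point is that the match happens on the tracking identifier, not on the content of the page being visited, which is what distinguishes retargeting from contextual advertising.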

The traces we leave on the web can therefore be collected to build a profile. This practice, known as “profiling”, is defined by the GDPR as “the automated processing of personal data to evaluate certain personal aspects relating to a natural person2”. It is used to analyse, predict or influence individuals’ behaviour, including through the use of algorithms. To illustrate this concept, let’s take the example given by the researcher of Facebook’s profiling-based targeting and how it has evolved: “In 2018, algorithms classified Facebook users into 250,000 categories, based on their preferences on the platform. Today, this classification is no longer explicit. Algorithms no longer place users in categories so that advertisers can choose whom they want to target; instead, they decide for advertisers who will receive their advertisements.”

The algorithms used today to predict and influence our actions are extremely effective. They are said to be better than humans at understanding our behaviour and could even influence it. For example, research shows that computer models are much more effective and accurate than humans at performing an essential socio-cognitive task: personality assessment3.

But this effectiveness raises several questions: to what extent can these algorithms know and predict our behaviour? And how do they work? To date, the answers remain unclear. Oana Goga says: “One of the big problems with recommendation algorithms is that they are difficult to audit, because the data is private and belongs to companies.” Philippe Huneman adds: “Right now, we don’t know how algorithms use our data to predict our behaviour, but their models are getting better and better. Just as with generative AI, we don’t know how the data is put together. We have to choose: do we want a world where this software is effective, or ethical?”

The ethical issues surrounding profiling and algorithms

The ethical implications of these algorithms are fundamental. In 2016, the Cambridge Analytica scandal4 highlighted the possibility of exploiting user data without consent for political purposes, in particular by developing software capable of targeting specific user profiles and influencing their votes, as in the case of Brexit and the election of Donald Trump. Proving that these manoeuvres actually influenced the outcomes of these events, however, remains difficult. A more recent case is the annulment of the Romanian presidential election by the country’s Constitutional Court in December 2024, following suspicions of an illegal support campaign on TikTok5. The algorithms used by platforms such as Facebook and X may also be more likely than others to reinforce echo chambers, i.e. to limit exposure to diverse perspectives and encourage the formation of groups of like-minded users, thereby reinforcing certain common narratives6.

In this context, Oana Goga and her team have been running the MOMENTOUS programme since 2022, funded by a European Research Council (ERC) grant. Its aim is to understand how algorithms can exploit psychological and cognitive traits to influence people’s preferences and behaviour. The programme proposes a new measurement methodology based on randomised controlled trials on social media. As Oana Goga points out: “It is important to distinguish between algorithmic biases, such as algorithms that discriminate against certain populations, and cognitive biases, which are biases held by humans. With MOMENTOUS, we are looking at whether algorithms can exploit cognitive biases.” On a similar note, Philippe Huneman mentions the concept of the nudge in profiling: “Profiling conveys the idea of soft paternalism, or libertarian paternalism, which aims to influence an individual’s behaviour by acting on the biases that govern them. Advertisements and website interfaces exploit these biases to influence users’ decisions; this is nudging.”
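The logic of a randomised controlled trial of this kind can be sketched simply: users are split at random into a group exposed to a treatment (for example, a particular ad or recommendation) and a control group, and the difference in an outcome measure between the groups estimates the treatment's effect. This is a generic illustration of the methodology, not MOMENTOUS's actual protocol; the function names and data are hypothetical:

```python
# Sketch of a randomised controlled trial of ad influence (hypothetical data).
import random

def assign_groups(user_ids: list, seed: int = 0) -> tuple[list, list]:
    """Randomly split users into a treatment group (sees the ad) and a control group."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = user_ids[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def estimated_effect(outcomes: dict, treatment: list, control: list) -> float:
    """Difference in mean outcome (e.g. click or purchase rate) between groups.

    Because assignment was random, this difference estimates the causal
    effect of the treatment rather than a mere correlation.
    """
    mean = lambda group: sum(outcomes[u] for u in group) / len(group)
    return mean(treatment) - mean(control)
```

Randomisation is what makes the comparison causal: with profiling-based observational data alone, exposed and unexposed users differ in many other ways.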

In addition, among the ethical issues raised by targeting, the first to mention concerns children: “From a legal standpoint, we don’t have the right to target children based on profiling. However, our studies on YouTube have revealed that it is possible to target them contextually, for example by displaying advertisements on Peppa Pig videos or on influencer channels,” explains Oana Goga. “Although platforms prohibit targeting children under the age of 18, advertisers can target the content children watch. The problem is that digital regulators are focusing on banning profiling-based targeting of children, but not contextual targeting, which relies on content specifically aimed at them, even though these strategies are well known to advertisers. There is a lack of data on advertising that targets children’s channels, which raises the issue of risks such as the radicalisation of young people,” adds the researcher.

For more transparent access to online platform data

How can we make things change? From a regulatory perspective, Oana Goga believes that one of the most pressing issues is ensuring more transparent access to online platform data: “Concrete measures must be taken to enable better access to data so that effective action can be taken. This could be done in two ways: i) through legislation; and ii) through citizen participation. It is essential to be able to collect data in an ethical manner that complies with the GDPR.”

With this in mind, Oana Goga has for several years been developing tools such as AdAnalyst and CheckMyNews, for Meta and YouTube respectively. Their goal is to collect user data in order to study the content and sources of information users receive on these networks, while respecting their privacy as much as possible: not collecting people’s emails, going through ethics committees, and complying with the GDPR. “It would also be interesting to have a panel of users at the European level. A platform observatory, with 1,000 to 2,000 users in France, Germany, etc., could provide access to data independently of the platforms,” she adds.

These questions are at the heart of our society, and they should be at the centre of discussions on democracy in the coming years.

Lucille Caliman
1. Philippe Huneman, Les Sociétés du profilage. Évaluer, optimiser, prédire, Payot-Rivages, 2023, pp. 49–50.
2. https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre1#Article4
3. W. Youyou, M. Kosinski & D. Stillwell, “Computer-based personality judgments are more accurate than those made by humans”, Proc. Natl. Acad. Sci. U.S.A. 112 (4), 1036–1040 (2015), https://doi.org/10.1073/pnas.1418680112
4. https://www.lemonde.fr/pixels/article/2018/03/22/ce-qu-il-faut-savoir-sur-cambridge-analytica-la-societe-au-c-ur-du-scandale-facebook_5274804_4408996.html
5. https://www.touteleurope.eu/vie-politique-des-etats-membres/roumanie-deux-mois-apres-l-annulation-d-un-scrutin-tres-controverse-le-president-klaus-iohannis-annonce-son-depart/
6. M. Cinelli, G. De Francisci Morales, A. Galeazzi, W. Quattrociocchi & M. Starnini, “The echo chamber effect on social media”, Proc. Natl. Acad. Sci. U.S.A. 118 (9), e2023301118 (2021), https://doi.org/10.1073/pnas.2023301118
