π Digital Society
How digital giants are transforming our societies

How profiling influences our behaviour

with Philippe Huneman, CNRS Research Director at Université Paris 1 Panthéon-Sorbonne, and Oana Goga, Research Director at Inria and a member of the Inria CEDAR team and the Laboratoire d’Informatique d’Ecole Polytechnique (IP Paris)
On May 28th, 2025 | 5 min reading time
Philippe Huneman
CNRS Research Director at Université Paris 1 Panthéon-Sorbonne
Oana Goga
Research Director at Inria and a member of the Inria CEDAR team and the Laboratoire d’Informatique d’Ecole Polytechnique (IP Paris)
Key takeaways
  • The data we leave on websites is shared and sold, in particular to influence our online purchasing behaviour through targeted advertising.
  • To target us more effectively, online platforms use high-performance algorithms to understand and predict our behaviour.
  • The MOMENTOUS programme aims to understand whether algorithms can exploit individuals’ psychological and cognitive traits to influence their behaviour.
  • Data is lacking on advertising that targets YouTube channels aimed at children, which leaves them more exposed to harm.
  • More transparent access to data from online platforms is essential for effective regulatory action.

When discussing upcoming vacations or purchases with friends, have you ever wondered why highly targeted ads suddenly appear on your Facebook wall or Instagram feed? We sometimes feel like our electronic devices are watching us. This concern is not unfounded: the traces we leave online reveal valuable information about our lives, often without us being fully aware of it.

Indeed, on 14th February 2025, the Human Rights League filed a complaint in France against Apple for violation of privacy through the unauthorised collection of user data via the Siri voice assistant. This case raises the issue of the protection of our personal data, which has become a valuable resource coveted by companies.

Take cookies, for example: those elements that we are asked to accept before accessing websites. Behind their appetising name lie opportunities for companies to access our personal data. As Philippe Huneman (CNRS research director at the Institute for the History and Philosophy of Science and Technology) shows in his book Les sociétés du profilage (Profiling Societies), it is important to distinguish between “necessary” cookies, which ensure the proper functioning of a website, and “optional” cookies, which are intended to improve the user’s browsing experience or personalise “more relevant” advertisements for the user¹. By accepting these cookies on a particular website, we consent to some of our online behaviour being observed. This behavioural data often ends up with data brokers, companies that buy, collect, and aggregate data from multiple websites and ultimately resell it; among the best known is Acxiom in the United States.
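
To make the distinction concrete, here is a minimal Python sketch of how one might tell a functional first-party session cookie apart from a persistent third-party tracking cookie. The cookie names, domains, and lifetimes are purely illustrative, not any real website’s configuration.

# Minimal sketch (hypothetical data): classifying cookies observed while
# visiting a page into functional-looking first-party session cookies and
# tracking-style third-party persistent cookies.
from dataclasses import dataclass

@dataclass
class Cookie:
    name: str
    domain: str      # domain that set the cookie
    max_age: int     # lifetime in seconds; 0 means session-only

def classify(cookie: Cookie, site_domain: str) -> str:
    first_party = cookie.domain.endswith(site_domain)
    persistent = cookie.max_age > 0
    if first_party and not persistent:
        return "likely functional (first-party, session)"
    if not first_party and persistent:
        return "likely tracking (third-party, persistent)"
    return "ambiguous"

cookies = [
    Cookie("session_id", "shop.example", 0),
    Cookie("ad_uid", "tracker.example", 60 * 60 * 24 * 390),  # ~13 months
]
for c in cookies:
    print(c.name, "->", classify(c, "shop.example"))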

Predicting and influencing our behaviour

But why share and sell our personal data? One of the main objectives is to influence our online purchasing behaviour through targeted advertising. As Oana Goga (Inria research director at École Polytechnique, IP Paris) points out: “In the field of online advertising, tracking [Editor’s note: monitoring users’ online behaviour on the web] is the basis of two targeting methods: the first is retargeting, a method that involves targeting Internet users who have already visited a website by displaying advertisements on other sites they visit. The other technique is profiling-based targeting, which involves creating a user profile.”
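
As a rough illustration of the two methods the researcher describes, the following Python sketch contrasts retargeting (matching ads to sites already visited) with profiling-based targeting (inferring interest categories first). All sites, categories, and advertisers are invented; no real ad platform works from data this simple.

# Toy sketch contrasting the two targeting methods described above.
browsing_history = ["flights.example", "hotels.example", "news.example"]

# 1) Retargeting: advertisers whose sites the user has already visited
# buy ads that follow the user onto other sites.
advertisers = {"flights.example": "FlightCo", "shoes.example": "ShoeCo"}
retargeted = [advertisers[s] for s in browsing_history if s in advertisers]

# 2) Profiling-based targeting: infer interest categories from the history,
# then match advertisers that bought those categories.
site_to_interest = {"flights.example": "travel", "hotels.example": "travel",
                    "news.example": "current affairs"}
profile = {site_to_interest[s] for s in browsing_history}
campaigns = {"travel": "TourOperator", "cooking": "KitchenBrand"}
profiled = [adv for interest, adv in campaigns.items() if interest in profile]

print("retargeting ads:", retargeted)   # ['FlightCo']
print("profile-based ads:", profiled)   # ['TourOperator']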

The traces we leave on the web in the digital age can therefore be collected to build a profile. This practice, known as “profiling”, is defined by the GDPR as “the automated processing of personal data to evaluate certain personal aspects relating to a natural person”². It is used to analyse, predict or influence individuals’ behaviour, including through the use of algorithms. To illustrate this concept, let’s take the example given by the researcher on Facebook’s profiling targeting and how it has evolved: “In 2018, users were classified on Facebook into 250,000 categories by algorithms, based on their preferences on the platform. Today, this classification is no longer explicit. Algorithms no longer place users in categories so that advertisers can choose who they want to target but instead decide for advertisers who to send advertisements to.”
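
The shift Oana Goga describes, from explicit categories to delivery decided by the platform, can be sketched in a few lines: instead of an advertiser picking, say, “travel enthusiasts”, a model scores every user and the platform itself selects the audience. The scoring function below is a hypothetical stand-in, not any platform’s actual model.

# Illustrative sketch: the platform, not the advertiser, decides who sees
# the ad, by ranking users on a predicted-engagement score.
import math

def predicted_engagement(user_features: dict, ad_topic: str) -> float:
    # Hypothetical logistic score from a single feature: past engagement
    # with content on the ad's topic.
    x = user_features.get(ad_topic, 0.0)
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

users = {
    "alice": {"travel": 0.9, "sport": 0.1},
    "bob":   {"travel": 0.2, "sport": 0.8},
    "carol": {"travel": 0.6, "sport": 0.5},
}

scores = {u: predicted_engagement(f, "travel") for u, f in users.items()}
audience = sorted(scores, key=scores.get, reverse=True)[:2]
print("ad delivered to:", audience)  # ['alice', 'carol']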

The algorithms used today to predict and influence our actions are extremely effective. Research suggests they are better than humans at understanding our behaviour, and may even be able to influence it. For example, one study shows that computer models are more effective and accurate than humans at performing an essential socio-cognitive task: personality assessment³.
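
To give a sense of the kind of model behind that finding (the cited study used regularised linear regression over Facebook Likes), here is a sketch on simulated data: a binary user-by-Like matrix predicts a self-reported trait score, and accuracy is read off, as in the paper, as the correlation between predicted and self-reported scores. The data is random and the numbers mean nothing beyond illustration.

# Sketch of personality prediction from Likes, on simulated data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 1000, 500

# Binary user x Like matrix: 1 if the user liked the page.
X = rng.integers(0, 2, size=(n_users, n_likes)).astype(float)
# Simulated self-reported trait (e.g. openness) driven by a few Likes.
true_weights = np.zeros(n_likes)
true_weights[:20] = rng.normal(0, 1, 20)
y = X @ true_weights + rng.normal(0, 1, n_users)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Lasso(alpha=0.05, max_iter=5000).fit(X_tr, y_tr)

r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"predicted-vs-reported correlation: {r:.2f}")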

But this effectiveness raises several questions: to what extent can these algorithms know and predict our behaviour? And how do they work? To date, the answers remain unclear. Oana Goga says: “One of the big problems with recommendation algorithms is that they are difficult to audit, because the data is private and belongs to companies.” Philippe Huneman adds: “Right now, we don’t know how algorithms use our data to predict our behaviour, but their models are getting better and better. Just as with generative AI, we don’t know how the data is put together. We have to choose: do we want a world where this software is effective, or ethical?”

The ethical issues surrounding profiling and algorithms

The ethical implications of these algorithms are fundamental. The Cambridge Analytica scandal, which broke in 2018⁴, highlighted the possibility of exploiting user data without users’ consent for political purposes, in particular by developing software capable of targeting specific user profiles and influencing their votes, as in the case of Brexit and the 2016 election of Donald Trump. However, proving that these manoeuvres actually influenced the outcomes of these events remains difficult. More recent cases include the cancellation of the presidential election in Romania by the Constitutional Court in December 2024, following suspicions of an illegal support campaign on TikTok⁵. The algorithms used by platforms such as Facebook and X could also be more likely than others to reinforce echo chambers, i.e. to limit exposure to diverse perspectives and encourage the formation of groups of like-minded users, thereby reinforcing certain common narratives⁶.
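
One common way of quantifying an echo chamber, in the spirit of the study cited above, is to compare each user’s opinion leaning with the average leaning of their network neighbours: in an echo chamber the two correlate strongly. The sketch below does this on a tiny invented network; the leanings and edges are made up.

# Sketch of an echo-chamber measurement on a toy interaction network.
from statistics import mean

leaning = {"a": -0.8, "b": -0.6, "c": -0.7, "d": 0.7, "e": 0.8, "f": 0.6}
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e"), ("e", "f"), ("d", "f")]

neighbours = {u: [] for u in leaning}
for u, v in edges:
    neighbours[u].append(v)
    neighbours[v].append(u)

# Each user's own leaning versus the mean leaning of their neighbourhood.
for user in leaning:
    nb_mean = mean(leaning[n] for n in neighbours[user])
    print(f"{user}: own={leaning[user]:+.1f}  neighbourhood={nb_mean:+.1f}")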

In this context, Oana Goga and her team have been running the MOMENTOUS programme since 2022, funded by a European Research Council (ERC) grant. Its aim is to understand how algorithms can exploit psychological and cognitive traits to influence people’s preferences and behaviour. The programme proposes a new measurement methodology based on randomised controlled trials on social media. As Oana Goga points out: “It is important to distinguish between algorithmic biases, such as algorithms that discriminate against certain populations, and cognitive biases, which are biases held by humans. With MOMENTOUS, we are looking at whether algorithms can exploit cognitive biases.” On a similar note, Philippe Huneman also mentions the concept of the nudge in profiling: “Profiling conveys the idea of soft paternalism, a libertarian paternalism that aims to influence an individual’s behaviour by acting on the biases that govern them. Advertisements and website interfaces exploit these biases to influence users’ decisions; this is nudging.”
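
The randomised-controlled-trial logic underlying such a methodology can be sketched in a few lines: randomly split participants, expose only the treatment group to a given ad strategy, and test whether an outcome such as clicking differs between groups. The simulation below is purely illustrative and is not the MOMENTOUS pipeline itself.

# Sketch of a randomised controlled trial on simulated ad exposure.
import random
from statistics import NormalDist

random.seed(42)
participants = list(range(2000))
random.shuffle(participants)
treatment, control = participants[:1000], participants[1000:]

# Simulated outcomes: the treatment nudges click probability from 5% to 8%.
clicks_t = sum(random.random() < 0.08 for _ in treatment)
clicks_c = sum(random.random() < 0.05 for _ in control)

# Two-proportion z-test for the difference in click rates.
p_t, p_c = clicks_t / 1000, clicks_c / 1000
p_pool = (clicks_t + clicks_c) / 2000
se = (p_pool * (1 - p_pool) * (1 / 1000 + 1 / 1000)) ** 0.5
z = (p_t - p_c) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"treatment {p_t:.1%} vs control {p_c:.1%}, p = {p_value:.4f}")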

In addition, among the ethical issues raised by targeting, the first to mention concerns children: “From a legal standpoint, we don’t have the right to target children based on profiling. However, our studies on YouTube have revealed that it is possible to target them contextually, for example by displaying advertisements on Peppa Pig videos or on influencer channels,” explains Oana Goga. “Although platforms prohibit profile-based targeting of children under the age of 18, advertisers can still target the content they watch. The problem is that digital regulators are focusing on banning profiling-based targeting of children, but not contextual targeting, which takes into account content specifically aimed at them, even though these strategies are well known to advertisers. There is a lack of data on advertising that targets children’s channels, which raises the issue of the risks of radicalisation among young people,” adds the researcher.
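
The measurement gap the researcher points to could, in principle, be narrowed with even simple instrumentation: given a log of which ads ran on which channels and under which targeting mode, one can flag ads that reached child-directed content contextually. The log format and channel names below are hypothetical.

# Sketch: flagging ads that reached child-directed content contextually.
child_directed = {"KidsCartoonsChannel", "NurseryRhymesChannel"}

ad_log = [
    {"ad": "toy_promo", "channel": "KidsCartoonsChannel", "targeting": "contextual"},
    {"ad": "car_promo", "channel": "AutoReviewChannel", "targeting": "profile"},
    {"ad": "snack_promo", "channel": "NurseryRhymesChannel", "targeting": "contextual"},
]

flagged = [e for e in ad_log
           if e["channel"] in child_directed and e["targeting"] == "contextual"]
for e in flagged:
    print(f"{e['ad']} reached child-directed content via {e['targeting']} targeting")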

For more transparent access to online platform data

How can we make things happen? From a regulatory perspective, Oana Goga believes that one of the most pressing issues is ensuring more transparent access to online platform data: “Concrete measures must be taken to enable better access to data so that effective action can be taken. This could be done in two ways: (i) through legislation; and (ii) through citizen participation. It is essential to be able to collect data in an ethical manner that complies with the GDPR.”

With this in mind, Oana Goga has spent several years developing tools for Meta and YouTube, such as AdAnalyst and CheckMyNews. Their goal is to collect user data in order to study the content and sources of information people receive on these networks, while respecting their privacy as much as possible: in particular by not collecting people’s emails, by going through ethics committees, and by complying with the GDPR. “It would also be interesting to have a panel of users at the European level. A platform observatory, with 1,000 to 2,000 users in France, Germany, etc., could provide access to data independently of the platforms,” she adds.
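
Without presuming the actual design of these tools, the privacy principle they follow can be sketched: keep only what the research needs (the ad, its source) and strip or pseudonymise direct identifiers before anything leaves the user’s machine. The field names below are hypothetical, not the tools’ real schema.

# Sketch of privacy-preserving data donation: drop direct identifiers and
# replace them with a salted pseudonym before upload.
import hashlib

def sanitise(record: dict, salt: str) -> dict:
    return {
        # Pseudonymous, salted ID instead of an account name or email.
        "user": hashlib.sha256((salt + record["user_email"]).encode()).hexdigest()[:16],
        "ad_text": record["ad_text"],
        "advertiser": record["advertiser"],
        # Deliberately dropped: user_email, profile name, friend list, ...
    }

raw = {"user_email": "jane@example.org", "ad_text": "Visit sunny places!",
       "advertiser": "TourOperator"}
print(sanitise(raw, salt="per-study-random-salt"))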

These are questions at the heart of our society, which should be at the centre of discussions on democracy in the coming years.

Lucille Caliman
1. Huneman, Philippe, Les Sociétés du profilage. Évaluer, optimiser, prédire, Payot-Rivages, 2023, p. 49–50.
2. https://www.cnil.fr/fr/reglement-europeen-protection-donnees/chapitre1#Article4
3. Youyou, W., Kosinski, M., & Stillwell, D., Computer-based personality judgments are more accurate than those made by humans, Proc. Natl. Acad. Sci. U.S.A. 112 (4), 1036–1040, https://doi.org/10.1073/pnas.1418680112 (2015).
4. https://www.lemonde.fr/pixels/article/2018/03/22/ce-qu-il-faut-savoir-sur-cambridge-analytica-la-societe-au-c-ur-du-scandale-facebook_5274804_4408996.html
5. https://www.touteleurope.eu/vie-politique-des-etats-membres/roumanie-deux-mois-apres-l-annulation-d-un-scrutin-tres-controverse-le-president-klaus-iohannis-annonce-son-depart/
6. Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M., The echo chamber effect on social media, Proc. Natl. Acad. Sci. U.S.A. 118 (9), e2023301118, https://doi.org/10.1073/pnas.2023301118 (2021).
