CSR: why overly simplistic indicators can be misleading
- Measuring the real impacts of new technologies is complicated because, among other things, user behavior evolves rapidly and creates new demands.
- Currently, measurements are primarily focused on direct effects: device lifecycle, extraction, manufacturing, transportation, etc.
- However, socio-economic considerations are just as important, such as technology adoption, rebound effects, infrastructure development, etc.
- "Systemic effects" is the term we use to group these socio-economic considerations so that they can be included in assessment tools.
- Going forward, measurement methods must be simplified and made more dynamic and open, so that assessments remain both accessible and comprehensive.
For years, we’ve heard the enticing story that digital technologies will save the planet. Swap flights for videoconferencing, replace CDs with streaming, optimise traffic with AI—the logic seems clear: fewer physical resources, more digital activity, lower emissions. But this is only part of the story. Digital systems may seem intangible, but they depend heavily on material reality: chips require rare-earth mining, and data centres consume vast amounts of water for cooling. As these technologies become more popular, users also change their behaviours, creating new demands that are hard to measure. Capturing the full, systemic impact of these technologies is a complex challenge.
Nevertheless, tools capturing systemic impacts are more important than ever. In France, tech companies have shifted from resisting environmental regulations to starting to voluntarily integrate assessment tools, sometimes repurposed beyond their original scope. These are becoming key management instruments that are supposed to influence strategy, investment decisions, and competitiveness. In our paper, we tackle the dual challenge of accurately assessing the environmental impacts of digital technologies—especially their complex systemic effects—and ensuring that these evaluation tools are deeply integrated within organisations to drive actual transformation.
In this paper, we discuss the existing literature and international standards (such as ISO 14040 and ITU L.1410), and provide examples of how the environmental impacts of digital technologies are currently measured.
We find that we’re often assessing digital impacts too narrowly. Evaluation tools tend to focus on first-order (direct) effects—the life-cycle impacts of devices across extraction, manufacture, transport, use, and disposal, and the environmental cost of running data centres—rather than on socio-economic considerations. This is the case, for example, with the widely used greenhouse gas emissions balance sheet (Bilan Carbone©) and with direct life-cycle assessment tools.
This is relevant for carbon reporting, but it ignores the technology’s knock-on effects: feedback loops such as rebound effects, infrastructure build-out, and behavioural shifts. Take the roll-out of 5G as an example. At first glance, 5G looks like a clear environmental win: it transmits more gigabits per unit of energy, so using 5G should reduce the energy needed to move information. In practice, however, 5G puts ultra-high-definition streaming in people’s pockets, giving access to data-heavy content all day, anywhere, at low prices. This ease of access drives up demand, increasing the technology’s overall greenhouse gas footprint. And with more demand comes more infrastructure and more hardware—and, in turn, more precious metals, energy, carbon emissions, and human health damage throughout the manufacturing process.
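The rebound logic above can be sketched with a back-of-the-envelope calculation. All figures below are hypothetical, chosen only to illustrate the mechanism: even a tenfold efficiency gain per gigabyte is outweighed when demand grows faster than efficiency improves.

```python
# Illustrative rebound-effect sketch. All numbers are hypothetical,
# not measurements: they only show how per-unit efficiency gains can
# be swamped by demand growth.

energy_per_gb_4g = 0.20   # kWh/GB, hypothetical baseline network
energy_per_gb_5g = 0.02   # kWh/GB, hypothetical 10x efficiency gain

traffic_4g = 10           # GB/user/month, hypothetical baseline demand
traffic_5g = 150          # GB/user/month, hypothetical post-upgrade demand

total_4g = energy_per_gb_4g * traffic_4g  # 2.0 kWh/user/month
total_5g = energy_per_gb_5g * traffic_5g  # 3.0 kWh/user/month

print(f"Before upgrade: {total_4g:.1f} kWh/user/month")
print(f"After upgrade:  {total_5g:.1f} kWh/user/month")
# Per-gigabyte efficiency improved 10x, yet total energy use rose 50%.
```

A first-order assessment would report only the improved kWh/GB figure; a systemic one would also capture the demand growth that reverses the conclusion.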

Historical parallels exist. While delivering huge efficiency gains, mechanisation, electrification, and automation all increased total energy consumption and resource use in the long run. The digital sector appears to follow the same trend. What we propose to call “systemic effects”—more often referred to in the literature as second-order and third-order effects, grouping indirect changes in behaviour, demand growth, and macro-economic transformations—are, per our analysis, usually not captured by assessment tools.
Some tools exist to capture second-order and third-order effects, but they can be imprecise and biased. This leads to wildly optimistic assessments, such as the Global e-Sustainability Initiative (GeSI)’s projection that ICT could cut global GHG emissions by 20% by 2030, or GSMA’s claim that mobile networks “avoided” ten times their direct emissions. Other tools are data-hungry and difficult to apply at an organisational level, such as consequential life-cycle assessment (CLCA), which can produce scenario ranges rather than precise figures. These tools, more relevant for systemic analysis but also more complex, see little uptake among organisations.
In short, making evaluation methods more rigorous and accurate can also make them so complex that they no longer help people learn or drive change.
Cultural change, not just calculation, is needed
Even the best method is powerless if it stays locked away with specialists. Many organisations outsource digital footprinting and receive only a report in return. Yet learning—and the potential shift in mindset—happens during the process itself, not merely upon delivery of the final figures. Research shows that tools spark change only when they are embedded in routines, discussed across teams, and revisited over time. Faced with missed environmental goals, some firms engage in re-operationalisation: lowering their targets rather than changing strategy, sustaining “business as usual” under a greener label.
Our analysis suggests that the environmental dynamics of digitalisation require a shift from a static, attributional approach to a more dynamic, systemic, and consequential one—from isolated reporting to collaborative learning, and from comforting narratives to evidence-based realism. We propose openness as a lever to achieve this. Open approaches make results more transparent and easier to relate to their underlying assumptions, and they let organisations take ownership of these methods even without large budgets. Finally, we need real-world research on how such methods are applied in practice, so that we can understand how they actually help organisations learn and transform.

