26 January 2026
We’re officially celebrating ChatGPT’s 3rd anniversary. With 800 million active users per week, it’s the fastest-adopted tool in human history. For some of us, it’s part of our daily lives.
The aim of this article is to offer you a brief recap of the conference on the impacts of AI held on January 13, organised by the General Secretariat for Defence and National Security (SGDSN), with the support of the Interministerial Directorate for Digital Affairs (DINUM) and the Interministerial Mission for Eco-responsible Digital Technology (MiNumEco).
Tristan Nitot, founder of Mozilla Europe and now a speaker at OCTO Technology, spoke on the topic: “Can humanity afford AI?”
AI has an ecological impact, but not only that!
OpenAI and its partners would rather have us forget that running their AI requires enormous data centers being built all over the world. Beyond the water consumed to cool them, we far too often overlook the soil sealing required to construct these new facilities, and therefore their impact on biodiversity.
The electronic equipment these organizations consume in huge quantities is built from ores extracted in mines in developing countries, where child labor, ecological disasters (sulfuric acid lakes, among others) and political conflicts are intertwined.
Training artificial intelligence also rests on the shoulders of “click workers”: precarious workers who spend their days viewing traumatic content (rape, murder, pedophilia, etc.) in order to label it, without any psychological support.
From the user’s perspective, AI demands a strong critical mind. Our social networks are now flooded with fabricated content, even outright fake news. AI also reproduces the cognitive biases of our society: the same system has classified a white hand holding a garden hose as a tool, but as a weapon when the hand is black.
Because ChatGPT relies on human input, it inherits these limitations. ChatGPT is also trained to satisfy you at all costs: it analyzes your question to guess the answer that will suit you best, even if that answer is wrong. Its answers are therefore systematically biased. Worse still, ChatGPT would rather give a wrong answer than admit that it does not know or cannot provide one. The content it generates therefore requires careful verification: ask it to cite its sources, count the legs of the people in the image it produced…
The effectiveness of these AIs is therefore questionable. AIs are also a weapon for telemarketing and scams: they can now reproduce the voices of your loved ones, based on videos posted on social media, to leave you an urgent phone message. AI is thus a very powerful, and potentially dangerous, tool. ChatGPT’s power stems from its nature: it is a large language model (LLM), the most powerful type of model currently available. But asking an LLM for a brownie recipe is like taking a space shuttle to fetch your bread. Other tools exist besides LLMs, just as effective but less resource-intensive and less problematic, that can answer your everyday needs with the same precision. And don’t forget the older tools we all used to rely on: word processors, search engines… or our own brains!
The purpose of this article is not to condemn the use of AI, which is a fantastic tool for highly complex tasks; some of our teams already make remarkable use of it in their research projects. Nevertheless, we must remain mindful of our everyday use of it. Ask yourself: “Do I really need this space shuttle to go get my bread? Is my request important or complex enough to justify mobilizing this entire system?”
Other events are planned in the coming months to reflect on a more sustainable digital world, both in our professional practices and in our daily lives, so that everyone can act more responsibly.
PS: This post was not generated with AI 🙂
