
AI and automation: what are the real risks for our society?

  • Writer: Stéphane Guy
  • Nov 26
  • 10 min read

As generative artificial intelligence weaves itself into our daily lives through tools such as ChatGPT, Gemini, and Claude, and as AI-driven automation accelerates across every sector, a crucial question arises: what are the real risks of these technologies for our way of life, particularly for employment? Between promises of efficiency and legitimate fears, it is time to take stock of the challenges that actually lie ahead. Employment, ethics, security, democracy: discover the major challenges posed by this unprecedented technological revolution.


Photo by Igor Omilaev on Unsplash

In short


  • In France, automation directly threatens around 16.4% of current jobs, and a further 32.8% will be profoundly transformed by AI, according to the OECD.

  • Algorithmic biases reproduce and amplify discrimination in recruitment, justice, and access to services, creating invisible systemic discrimination.

  • Deepfakes and AI-assisted disinformation are disrupting elections around the world, threatening democratic trust.

  • The concentration of power in the hands of GAFAM creates a technological dependency that erodes the sovereignty of states.

  • Cognitive atrophy and the loss of human skills are underestimated long-term risks, with implications for our collective autonomy.


The transformation of work: when AI accelerates a revolution already underway


Figures that help to understand the challenges of automation


Automation is not a new phenomenon, but artificial intelligence is radically changing the game. According to an OECD study, 16.4% of jobs in France are directly threatened, and another 32.8% will undergo major transformations. In other words, nearly half of all jobs will be affected in one way or another.*


*Le Monde, La robotisation devrait faire disparaître 14 % des emplois d’ici à vingt ans, selon l’OCDE


Unlike previous industrial revolutions, which mainly affected manual and repetitive tasks, AI is now tackling digital tasks and encroaching ever further on cognitive work. Customer service, accounting, law, journalism, and translation are particularly vulnerable, as are transportation (autonomous vehicles), logistics (automated warehouses), and administration (intelligent virtual assistants).


Why this transition is different


While every technological revolution has historically created more jobs than it has destroyed, several factors make this transition particularly worrying. First, the speed: AI is advancing exponentially, leaving little time for retraining. Fifty-six percent of the adult population in OECD countries has only "basic" or no skills in information and communication technologies.*


*Cnews, 16,4 % des emplois en France sont menacés par la robotisation


Next, accessibility: unlike expensive industrial robots, AI software can be deployed on a massive scale at low cost in thousands of companies simultaneously. Finally, the nature of the jobs created: many require very high qualifications, dangerously widening the gap between graduates and non-graduates, and between young people and seniors.


The risk? A painful transition period with mass technological unemployment, particularly for low-skilled workers or those in mid-career who are less mobile professionally. If left unchecked, this disruption could create deep and lasting social divisions.


Invisible discrimination: when algorithms reproduce our prejudices


The illusion of objectivity


One of the most insidious dangers of automation lies in algorithmic bias. Artificial intelligence learns from historical data that reflects our imperfect societies. The result? It reproduces and amplifies existing biases, but under the guise of apparent mathematical neutrality.


The case of Amazon is emblematic: "In 2015, the e-commerce giant had [...] to admit that its algorithmic recruitment process evaluated candidates for developer positions in a sexist manner."*


*EDHEC, Peut-on éviter les biais du recrutement par algorithme ?


The algorithm, trained on ten years of CVs submitted mostly by men, simply learned to perpetuate that trend. This situation is not isolated: "According to a study by PwC (2023), recruitment algorithms increase the chances of discriminatory bias in 45% of cases where they are used without regular human supervision."*


*Mercato de l'Emploi, Les Algorithmes de Recrutement : Où Finit l’Objectivité, Où Commence le Biais ?
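

To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python, on synthetic data) of how a model trained on biased historical decisions ends up reproducing them. It is an illustration of the principle, not a reconstruction of Amazon's system.

```python
# Hypothetical sketch: synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Candidate features: a skill score (what we would want the model to
# use) and gender (0 = male, 1 = female), which should be irrelevant.
skill = rng.normal(0, 1, n)
gender = rng.integers(0, 2, n)

# Historical decisions: past recruiters rewarded skill but also
# systematically penalised female candidates (-1.0 on the logit).
logit = 2.0 * skill - 1.0 * gender
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased labels, including gender as a feature.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model faithfully learns the historical penalty: identical
# skill, different gender -> different predicted hiring probability.
print("P(hire | male, skill=1):  ", model.predict_proba([[1.0, 0]])[0, 1])
print("P(hire | female, skill=1):", model.predict_proba([[1.0, 1]])[0, 1])
```

Note that simply deleting the gender column would not fix the problem: other features often act as proxies for it. This is reportedly what happened at Amazon, whose system ended up penalising CVs that contained the word "women's".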


A study conducted by Northeastern University and USC found that "posts about supermarket cashier recruitment on Facebook were shown to an audience that was 85% female," reinforcing gender stereotypes.*


*EDHEC, Peut-on éviter les biais du recrutement par algorithme ?


Systemic and invisible discrimination


According to the National Bureau of Economic Research, "applications from people with foreign-sounding names are rejected 30% more frequently when screening processes are automated."*


*Mercato de l'Emploi, Les Algorithmes de Recrutement : Où Finit l’Objectivité, Où Commence le Biais ?


The problem is all the more serious because these algorithmic decisions are often perceived as objective and neutral. They benefit from a "mathematical authority" that masks their underlying biases. When an AI automatically rejects your job application or loan request, it is difficult to understand why such a decision was made and even more difficult to challenge it. This opacity poses a major problem: how can we accept that a machine makes important decisions about our lives without being able to understand the exact logic behind them?
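

This opacity is precisely why independent audits matter. As a purely illustrative sketch, here is how a regulator or employer might measure adverse impact in automated screening decisions using the "four-fifths rule", a common heuristic borrowed from US employment law; the figures and group labels below are invented.

```python
# Hypothetical audit sketch: the decision log below is synthetic.
from collections import Counter

# (group, accepted) pairs from an automated screening system.
decisions = (
    [("local_name", True)] * 300 + [("local_name", False)] * 700 +
    [("foreign_name", True)] * 210 + [("foreign_name", False)] * 790
)

accepted = Counter(g for g, ok in decisions if ok)
total = Counter(g for g, _ in decisions)
rates = {g: accepted[g] / total[g] for g in total}

# Disparate impact ratio: selection rate of the least-selected group
# divided by that of the most-selected group. The US EEOC's
# "four-fifths rule" flags ratios below 0.8 as potential adverse impact.
ratio = min(rates.values()) / max(rates.values())
print("Selection rates:", rates)
print(f"Disparate impact ratio: {ratio:.2f}",
      "-> potential adverse impact" if ratio < 0.8 else "-> OK")
```

Such an audit does not reveal the model's internal logic, but it at least makes systemic disparities measurable, and therefore contestable.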


Disinformation and deepfakes: when AI undermines democratic trust


The industrialisation of propaganda


AI enables the production of disinformation content on an industrial scale. Networks of bots can flood social media with thousands of coordinated messages, creating the illusion of a massive shift in opinion. Generative AI can write propaganda articles in dozens of languages, tailored to each target audience.


The real danger? The collapse of trust. Rijul Gupta, CEO of DeepMedia, a company specialising in deepfake detection, says: "Most of the time, you don't even need a credit card: you can create deepfakes of anyone in just five seconds." That is what is new, and "that's why 2024 will be the election of deepfakes."*


*Epoch Times, L’élection du « deepfake » : Comment l’IA et les faux contenus risquent d’influencer l’élection présidentielle américaine de 2024


When any video can be fake, how can we distinguish between what is true and what is false? This erosion of trust in information threatens the very foundations of our democracies and our relationship with information.


Photo by Tom Kotov on Unsplash

The concentration of power: when automation reinforces the technological oligopoly


A direct link between AI, automation, and the dominance of GAFAM


Developing the most powerful AI and the most advanced automation systems requires enormous resources: thousands of high-performance GPUs, massive data infrastructures, teams of specialised engineers, and colossal amounts of water. This barrier to entry explains why only a few companies—mainly American (OpenAI, Google, Meta, Microsoft) and Chinese (Baidu, Alibaba, Tencent)—have the means to develop these cutting-edge technologies.


Automation, far from being neutral, reinforces this concentration. Every company that adopts AI solutions to automate its processes becomes dependent on GAFAM cloud platforms (Amazon's AWS, Microsoft's Azure, Google Cloud). Every automation system that uses learning algorithms requires the infrastructure and models developed by these giants. It's a vicious circle: the more widespread automation becomes, the more GAFAM strengthens its grip on the global economy.


Sovereignty in jeopardy


According to law professor Frank Pasquale, GAFAM is replacing "the logic of territorial sovereignty with functional sovereignty," where society is less and less guided by democracy and increasingly guided by private companies.*


*Wikipédia, GAFAM


This concentration poses several major problems. First, there is the risk of monopoly: these companies control the digital infrastructure on which our increasingly automated societies depend. Second, there is the issue of sovereignty: European, African, and South American countries find themselves dependent on automation technologies that they do not control. GAFAM not only owns the digital infrastructure (cloud, servers, networks), but also has the power to influence our behaviour and decisions.*


*Ileri, IA et souveraineté : le défi du XXIe siècle pour les États-nations


Finally, a democratic issue: who controls these private companies that are shaping the future of our automated societies? 


Dependence on critical infrastructure


AI automation is gradually being integrated into critical infrastructure: power grids, healthcare systems, transportation, defense, finance, etc. This growing dependence creates systemic vulnerability. A failure, cyberattack, or malicious manipulation of a critical AI system could have catastrophic consequences.


There is also the risk of a "kill switch": what would happen if an American company decided, under government pressure, to cut off access to its AI and automation services for certain countries? This issue of technological sovereignty has become a major geopolitical challenge, directly linked to the race for automation.


Cognitive atrophy: when machines think (and do) for us


The gradual loss of our skills


A more insidious and often overlooked risk of automation is the atrophy of our cognitive and practical skills. By delegating more and more intellectual and manual tasks to AI, such as writing, calculating, problem-solving, decision-making, driving, navigation, and manufacturing, we risk gradually losing these abilities.


GPS has already affected our spatial navigation abilities.* What will happen when we systematically delegate our writing tasks to ChatGPT, our analyses to specialised AI, our driving decisions to autonomous vehicles, or our manufacturing processes to fully automated factories? This cognitive and practical dependence could make us vulnerable: what if the systems fail or are compromised?


*Le Figaro, « Je conduis bêtement et je ne retiens jamais rien de mes trajets » : quand l’addiction au GPS réduit le sens de l’orientation


The problem of diluted responsibility


The automation of decisions also raises a fundamental issue of responsibility. When an AI makes an automated decision, who is responsible if it makes a mistake? The developer? The company that deploys the system? The user who activates it? The manager who trusts the automated system? This dilution of responsibility can lead to a general lack of accountability.


In the medical field, for example, if a doctor systematically follows the recommendations of an automated diagnostic AI, do they still develop their own clinical judgment? In the event of an error, who bears responsibility? These ethical and legal questions remain largely unresolved and are exacerbated with each newly automated field.


Autonomous AI: the worst-case scenario in the long term


When automation escapes human control


In the longer term, we can imagine the emergence of "artificial general intelligence" (AGI) – AI that would equal or exceed human intelligence in all areas. If such AI were to emerge with full automation capabilities and pursue goals that were misaligned with human interests, the consequences could be catastrophic.


This scenario may seem like science fiction, but it is a serious concern for the scientific community. The "alignment problem," or how to ensure that super-intelligent AI capable of automating all of our systems remains aligned with our values, is considered one of the most important challenges for the future development of artificial intelligence. Because once we are surpassed in intelligence, we may no longer have the means to control these automated systems.


Autonomous weapons, an immediate threat


More immediate is the development of lethal autonomous weapons: systems capable of selecting and eliminating targets without human intervention. Several countries are already developing such military automation systems. The risk? Life-and-death decisions made by machines, without moral judgment or human context, with the risk of uncontrolled escalation in conflicts.


AI and automation: how to limit the risks? 

The urgency of appropriate international regulation


In light of these multiple risks, regulation appears essential. The European Union has paved the way with its AI Act. Other jurisdictions are working on similar frameworks.


But regulation must be agile: if it is too strict, it could stifle innovation; if it is too lax, it could leave the door open to abuses. It is difficult to strike the right balance, especially since technology is evolving faster than legislative processes. Furthermore, purely national or European regulation is not enough: international coordination is necessary to prevent risky activities from relocating to less strict jurisdictions.


The crucial role of education and continuing training


To cope with the transformation of work, education and continuing training are essential. Skills that complement AI must be developed: creativity, empathy, critical thinking, and adaptability. The emphasis must be placed on lifelong learning rather than on a single initial training program.


Education systems must also teach digital literacy and critical thinking skills when it comes to AI-generated content. Understanding how these automated systems work, their limitations, and their biases should be part of every citizen's knowledge base.


Developing ethical, transparent, and auditable AI


AI developers have a major responsibility. Ethical AI principles are emerging: transparency, explainability, fairness, privacy, and environmental sustainability. Initiatives such as "AI Safety" and "Alignment Research" are working to make AI safer and more aligned with human interests.


Diversity in development teams is also crucial for limiting bias. AI developed by diverse teams will be less likely to reproduce the prejudices of a homogeneous group. "A 2018 study by the AI Now Institute highlights that only 15% of artificial intelligence research staff at Facebook and 10% at Google are women,"* an underrepresentation that can lead to overlooking certain potential biases and discrimination.


*Wikipédia, Biais algorithmique


Rethinking our social and economic model


Finally, some argue for more profound social change. A universal basic income is often cited as a solution to technological unemployment. Reducing working hours could make it possible to share available jobs and free up time for other activities.


Beyond that, perhaps it is our relationship to work and our definition of progress that need to evolve. Automation can be an opportunity to free humanity from tedious tasks so that we can devote ourselves to meaningful pursuits: creativity, human relationships, civic engagement. But this optimistic vision requires strong political choices and a fair distribution of the productivity gains generated by automation.


Between lucidity and collective action


The risks associated with AI and automation are real and manifold. They should neither be minimised by naive technological optimism nor exaggerated by a catastrophic vision that would lead to inaction. The path is narrow: we must harness the immense potential of these technologies to improve our lives, while putting in place the necessary safeguards to prevent abuses.


This requires collective awareness, informed democratic debate, appropriate regulations, and above all, constant vigilance. Because contrary to what some people would have us believe, the future is not set in stone: it depends on the choices we make today. AI and automation are powerful tools that can serve the common good or widen inequalities, strengthen our democracies or weaken them, liberate humanity or enslave it. It is up to us to decide.



FAQ


1. Will AI really destroy millions of jobs?

Yes and no. AI will profoundly transform the job market: some jobs will indeed disappear while others emerge. The important thing is to prepare for these changes and anticipate them as best we can.


2. How can you tell if a video is a deepfake?

Several clues can give away a deepfake: inconsistencies in lighting, abnormal eye movements, lip-syncing issues, or visual artefacts during rapid movements. Detection tools are being developed.


3. Are regulations such as the European AI Act sufficient?

The European AI Act is an important first step that creates a new legal framework. However, its effectiveness will depend on its practical implementation and its ability to adapt quickly to technological developments.


4. Can we trust AI to make important decisions?

It depends on the context and the type of decision. AI can be an excellent decision-making tool, but it should not be the sole decision-maker for choices that have a significant impact on people's lives. Algorithmic bias is very real.


5. How can individuals prepare for this technological revolution?

Several strategies are recommended: developing skills that complement AI (creativity, emotional intelligence, critical thinking), continuously learning about new technologies without becoming dependent on them, cultivating your ability to adapt and learn, etc.


