Deep Insight

AI's impact in the post-truth era

Bertrand Hassani

Group CEO of QUANT AI Lab

09/07/2025

The post-truth era refers to a contemporary historical period in which objective facts have less influence on public opinion than emotions or personal beliefs. By "less influence," we mean that facts are sometimes not even considered: the story matters more than the truth, or, to be a bit cynical, "truth" now covers a broader spectrum.

The term gained popularity around 2016, especially after the Brexit vote, the election of Donald Trump in the U.S., and the widespread dissemination of false information on social media. Unfortunately, fake news works as a positive-reinforcement loop: people who believe an alternative to a fact see that they are not the only ones thinking it, and conclude that it must be at least partially true. The concept of a fact is itself now questionable, since facts increasingly arrive with an orientation; that is a real problem, because a fact is supposed to be objective, not subjective. Indeed, "post-truth" was chosen as the 2016 Word of the Year by Oxford Dictionaries, which defined it as: "Relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief."

The current era faces several reinforcing dynamics: the spread of fake news, i.e. false or misleading information shared at scale; relativism, i.e. truth becomes subjective and "everyone has their own truth"; and emotional influence, i.e. feelings matter more than evidence. Social media amplifies all of these by acting as echo chambers that feed confirmation bias, which leads to mass disinformation and confusion about what is true, a crisis of trust in institutions, media, and scientists, and easier political manipulation through viral emotional content. "Manipulation" may not even be the accurate term; "use" or "reuse" would be more appropriate, since people are willing rather than forced. Examples include conspiracy theories replacing scientific explanations, political debates driven by emotional slogans rather than data, and the rejection of science on issues such as the COVID-19 pandemic or climate change.

One may ask: what about AI in all this? The answer is easy: today, AI is a core element. AI can generate realistic fake videos, images, and audio (deepfakes) that are difficult to distinguish from real ones. These can be used to spread false narratives, impersonate public figures, or manipulate public opinion. It is even more insidious than that: we observe that even when we know content is false, it still taints our relationship with reality. Besides, AI-driven algorithms on platforms like Facebook, YouTube, or TikTok prioritize content that triggers strong emotions, even if it is misleading. Bots and AI tools can create and spread fake articles or comments at scale, making disinformation appear popular or credible. To be honest, the plausible hallucinations of LLMs (ChatGPT, for example) work in a similar way.

Furthermore, AI-powered recommendation engines show users content aligned with their existing beliefs, reinforcing biases and reducing exposure to opposing views. This deepens polarization and makes it harder to agree on basic facts. AI language models can be used to mass-produce misleading or biased content, including propaganda, conspiracy theories, or fake social media posts.
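The filter-bubble mechanism described above can be sketched in a few lines. The toy recommender below is purely illustrative, assuming a made-up user history and candidate items: it scores content solely by overlap with topics the user has already engaged with, so opposing viewpoints are structurally never surfaced. Real recommendation engines are far more sophisticated, but the reinforcing dynamic is the same.

```python
# Hypothetical toy recommender illustrating the filter-bubble effect.
# It ranks candidates purely by overlap with topics already consumed,
# so it only ever amplifies the user's existing preferences.
from collections import Counter

def recommend(history: list[list[str]],
              candidates: dict[str, list[str]],
              k: int = 1) -> list[str]:
    """Rank candidate items by how often the user has seen their topics."""
    seen = Counter(topic for item in history for topic in item)
    ranked = sorted(candidates,
                    key=lambda name: sum(seen[t] for t in candidates[name]),
                    reverse=True)
    return ranked[:k]

# Invented example data: the user has mostly consumed partyA politics.
history = [["politics", "partyA"], ["politics", "partyA"], ["sports"]]
candidates = {
    "more-partyA-news": ["politics", "partyA"],
    "partyB-perspective": ["politics", "partyB"],
    "cooking-show": ["food"],
}
```

With this history, the top recommendation is always more of the same ("more-partyA-news"), never the opposing perspective: exposure narrows precisely because engagement is the only signal being optimized.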

Fortunately, AI can also help us fight post-truth dynamics, for example by automatically detecting and flagging false information, comparing claims against verified sources, or summarizing trustworthy content; projects such as ClaimReview, Google's Fact Check Tools, and AI-enhanced journalism are examples. Algorithms are also being developed to analyse digital content for signs of manipulation, such as video tampering or synthetic speech, and combinations of blockchain and AI can help verify media provenance (where content comes from). AI can likewise detect bots and fake accounts: some platforms use it to identify coordinated inauthentic behaviour, bots, and troll farms that spread false information. Finally, AI can power educational tools that teach users how to spot misinformation or analyse biases in the content they consume. We started this paragraph by saying that AI can help us fight post-truth dynamics, but I am starting to wonder whether we really want that. Today it seems we are in love with our own truth, we love confirmation bias, and the media are now treated more as entertainment than as information…
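The claim-comparison step mentioned above can be illustrated very roughly. The snippet below is a minimal, hypothetical sketch using fuzzy string matching from Python's standard library; the tiny verified corpus, the threshold, and the function names are all my own assumptions, and production fact-checkers rely on semantic models and curated databases rather than surface similarity.

```python
# Hypothetical sketch: label a claim "supported" when it closely matches
# a statement in a (tiny, invented) corpus of verified sources, and
# "needs review" otherwise. Threshold and corpus are illustrative only.
from difflib import SequenceMatcher

VERIFIED_STATEMENTS = [
    "Vaccines underwent large-scale clinical trials before approval.",
    "Global average temperatures have risen over the past century.",
]

def support_score(claim: str, sources: list[str]) -> float:
    """Best fuzzy-match ratio between the claim and any verified statement."""
    return max(SequenceMatcher(None, claim.lower(), s.lower()).ratio()
               for s in sources)

def flag_claim(claim: str, sources: list[str],
               threshold: float = 0.6) -> str:
    """Crude triage label based on similarity to verified statements."""
    return "supported" if support_score(claim, sources) >= threshold \
        else "needs review"
```

Even this crude triage shows the shape of the pipeline: retrieve candidate evidence, score the claim against it, and route low-scoring claims to human review rather than auto-rejecting them.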

In conclusion, I would say that AI in the post-truth era works as a double-edged sword: on the one hand, it accelerates the spread of false information and makes deception more convincing; on the other, it offers powerful tools to detect and counteract misinformation, provided it is developed and used responsibly. The ultimate impact of AI depends on who controls it, how it is regulated, and whether society invests in media literacy and critical thinking alongside technological defences. But once again, one may wonder what we want for our society…

Read the full article from Bertrand Hassani on QUANT AI Lab's Linkedin

Let’s stay in touch!