By: Isha Kumari 

Are you one of those who go by “I’ll believe it when I see it”? Well, I’ve got news for you, mate: that idiom is about to go out of business, like nobody’s business. We’ve arrived at a time when you can no longer trust what you see or what you hear.

Ladies & Gentlemen, I welcome you into the world of AI and Deepfakes.

Deepfakes are doctored images, videos or audio recordings made with artificial intelligence, showing people saying or doing things that never happened. The alarming thing, however, is that they have become all the more prominent and pervasive these days. Built with advanced machine learning techniques such as Generative Adversarial Networks (GANs) and autoencoders, deepfakes are highly realistic and often indistinguishable from genuine content, making it possible to manufacture a version of reality that feels completely convincing. While the same technology helps in film production and interactive education, it is now being misused to mislead people and spread lies on the internet.

In many ways, deepfakes have changed the way people think about truth online. A single video shared on social media takes seconds to spread and hours or even days to verify. By then most people have already made judgments, and once that happens the damage is done. Even if experts later prove it false, many people still hold on to doubt, or worse, question the authenticity and bias of the experts themselves. It’s this lasting confusion that makes deepfakes such a powerful weapon for disinformation. Not to mention, the credibility of videos as evidence in court now comes under scrutiny and question. Unfortunately for us, justice just moved a whole step further from our reach.

Politics and Manipulation

Political deepfakes cause some of the worst problems. During the 2020 U.S. elections, fabricated clips appeared showing famous leaders making shocking comments. One fake video even showed Barack Obama being arrested while President Trump watched on with a smirk, something meant only to provoke outrage. Funnily enough, this video was shared by Trump himself. The line between ‘just for fun’ and defamation often blurs in such cases and is rarely defined by law. India saw similar manipulation around its 2024 elections, with fake footage of politicians making aggressive statements and performing controversial acts. Most of those clips were later found to be altered with AI. This shows how easy it is now to shape political agendas and push new propaganda with just a few minutes of digital editing.

Military Uses and Psychological Warfare

Artificially generated videos have begun to appear in military conflicts and wars as well, as a strategy to weaken the opposing side. During India’s Operation Sindoor in 2025, fake videos spread all over Indian social media, showing officials admitting defeat. The clips were soon disproven, but the dubiety lasted long enough to spur confusion and anger. The Russia‑Ukraine war has seen similar tactics, with videos of soldiers surrendering and leaders making false announcements. These examples prove that disinformation isn’t just political – it can be used as a psychological weapon in warfare too.

Revenge Porn and Personal Harm

One of the ugliest and most immoral uses of deepfakes is revenge porn. People’s faces, often women’s, are put onto explicit videos without their consent and used to harass them. In 2024, a fake video featuring Taylor Swift attracted millions of views before being confirmed false. In India, such cases have been used for extortion, where victims are threatened with the release of their deepfakes on the internet unless they pay or comply with sick demands. The emotional damage and humiliation from such acts are painfully real even though the content is not.

Corporate Deception

Businesses also suffer immensely from deepfakes. In 2019, scammers tricked a UK company into sending €220,000 by using fake audio that sounded like the CEO’s voice. More recently, Bengaluru residents were deceived by fake videos pretending to feature N. R. Narayana Murthy and Mukesh Ambani promoting fraudulent investment platforms. Apart from financial loss, these fake clips hurt public trust and can move stock prices or affect investment decisions.

Social and Communal Unrest

Deepfakes have been responsible for social chaos in several countries. In Nigeria, forged videos of religious figures making offensive remarks resulted in riots. During the COVID‑19 pandemic, fake government videos circulated, pushing false health advice and spawning panic and hysteria that spread like wildfire in the absence of a free flow of authentic information. In India, misleading protest clips often travel through messaging apps like WhatsApp, showing violence or unrest that never existed, giving rise to counter-protests or riots. Each incident chips away at social trust and leaves communities more divided and anxious than before.

Why Deepfakes Feel So Real

It’s difficult to tell a deepfake from reality because of how the underlying system works. In a GAN, one network (the generator) creates the fake, while another (the discriminator) judges how believable it is; the two are trained against each other until there’s almost no difference from real footage. Most people trust what they see and hear, which is why deepfakes are more effective than written misinformation. On social media, videos that spark anger or fear are shared faster than calm or factual ones. Even after debunking, the picture or voice lingers in memory. It makes people question everything – even the truth.
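The generator-versus-discriminator tug-of-war described above can be sketched with a toy one-dimensional GAN. Everything here is illustrative, not any production deepfake system: “real” samples are just draws from a Gaussian, both networks are reduced to two parameters each, and the gradients are written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GAN: "real" data comes from N(4, 1).
# Generator G(z) = a*z + b turns noise into fake samples.
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    s_r = sigmoid(w * real + c)
    s_f = sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator step: ascend log D(fake) (non-saturating loss),
    # i.e. nudge fakes toward whatever the discriminator calls real.
    s_f = sigmoid(w * fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

# After training, the generator's output mean drifts toward the real mean (4).
gen_mean = float(np.mean(a * rng.normal(size=2000) + b))
print(round(gen_mean, 2))
```

The key point is the feedback loop: every time the discriminator gets better at spotting fakes, its gradient tells the generator exactly how to become harder to spot – which is why mature deepfakes end up nearly indistinguishable from real footage.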

The Damage That Follows

The consequences ripple out to almost every level. People lose trust in news, in politicians, and even in their own judgment. Democracies become polarized as false content spreads before elections and people become hostages to their online echo chambers, vilifying the opposite side. Conflicts intensify when fake military recordings go viral. Corporations lose money and credibility. On a personal scale, victims of deepfake harassment often face public shame and mental distress. The overall result is a slow but dangerous erosion of trust in the digital environment.

Why They Are Hard to Detect

Spotting a deepfake used to be easy when the lighting or lip movement looked wrong. Modern ones, however, are nearly flawless. Tools such as Deepware Scanner or Microsoft’s Video Authenticator can identify some manipulation, but their accuracy remains limited. In multilingual countries like India, another issue emerges – fakes appear in many languages and spread quickly across different platforms, making them harder to debunk. Technology alone isn’t enough; some level of human review and public awareness is always needed.

How to Combat Deepfakes

Responses to deepfake disinformation include:

  1. Fact-Checking: Organisations like BOOM Live and Factly verify manipulated content.
  2. Legal Measures: India’s IT Act 2000, along with proposals in the U.S. and EU, criminalises malicious dissemination of synthetic media.
  3. Platform Accountability: Social media companies implement automated detection, content labeling, and takedown mechanisms.
  4. Media Literacy: Campaigns educate the public on recognising deepfakes and misinformation.
  5. Technological Tools: AI-driven detection systems analyse inconsistencies in audio-visual data.

Mitigation requires a multi-faceted approach, combining detection, regulation, public awareness, and international cooperation. Transparent labeling of AI-generated content with watermarks or warning labels, rapid debunking, and investment in detection technologies are critical.
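The idea behind transparent labeling can be illustrated with a toy watermark. The sketch below is purely illustrative – the tag name and functions are invented for this example, and real provenance schemes (such as C2PA-style manifests) are far more robust and tamper-resistant. It hides a short machine-readable tag in the least-significant bits of an image’s pixels, changing each pixel by at most one brightness level, and then reads the tag back.

```python
import numpy as np

TAG = "AI-GEN"  # hypothetical provenance tag, not a real standard

def embed(img: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Hide the tag's bits in the least-significant bits of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = img.flatten()                      # flatten() returns a copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(img.shape)

def extract(img: np.ndarray, n_chars: int = len(TAG)) -> str:
    """Read the hidden tag back out of the least-significant bits."""
    bits = img.flatten()[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

image = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed(image)
print(extract(marked))  # → AI-GEN
# Each pixel changed by at most 1 brightness level – invisible to the eye:
print(int(np.max(np.abs(image.astype(np.int16) - marked.astype(np.int16)))))
```

A scheme this naive is trivially destroyed by re-compression or cropping, which is exactly why the list above pairs labeling with detection tools, platform accountability, and media literacy rather than relying on any single measure.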

Conclusion

Deepfakes have changed the meaning of truth on the internet. They blur the line between what is real and what is fabricated, and once that line fades it becomes very hard to rebuild trust or even establish justice. From politics and war to personal lives and businesses, they touch every part of society. The solution isn’t simple – it requires laws, technology, and public awareness working hand-in-hand and keeping pace with the development of these technologies. Only through collective effort can authenticity survive in a time when lies can be manufactured with a few clicks and reality distorted with even less.

By Quriosity Queen

I like to read, write, daydream and debate.
