
Mr. Shekhar Gupta, AI is not propaganda and fake news; your understanding is flawed!


Shekhar Gupta recently wrote an article, ‘Deepfake on duty: when I asked AI to read Op Sindoor citations’, in The Print on Oct. 25. The deck says, “On 21 Oct, a buzz went up that the govt had released full list of gallantry award recipients along with Op Sindoor citations. I put an AI caddy on the job. It took me into a never-ending rabbit hole.”

Anyone interested can read the article at the link; for my purposes, however, its contents are largely irrelevant. What interests me here are the utterly wrong, misleading and unwarranted conclusions he draws in it. I reproduce them verbatim:

“Here is what we can safely conclude.

One, that AI is smarter, but too smart. It promises to give you facts, but can make total fiction and embellish it up to suit your needs. It wants its attention. Attention, time-spent, is revenue. Yet, however smart it gets, it cannot substitute journalists.


Two, AI is taking propaganda, fake news and information subversion to a new level. We fret over Twitter, what our commando-comic channels did during Op Sindoor, and what we deride as “WhatsApp University”. This encounter with Grok is saying to us, you ain’t seen’ nothin’ yet.

And three, AI can be easily gamed. All of the sensational, ‘too good to be true’ elements dressed up as facts here, are what the Pakistanis would want you to believe. That S-400s were hit, as were many IAF jets, airfields lost power and it also gives details on artillery and specialised ammunition that our government hasn’t mentioned in citations.

None of this is to be found in the Indian media. Where has Grok creamed it from? The nutgraf, the sum of this article, is that Grok has been gamed into giving you a story peppered with key Pakistani talking points while at the same time praising Indian soldiers to high heavens to make it all convincing. AI has taken warfare into a new dimension. Since fifth-generation warfare (5GW) is a thing already, let’s call it the sixth generation.”



Gupta has no idea how Large Language Models (LLMs) function

Gupta’s article exposes a fundamental and pathetic misunderstanding of how Large Language Models (LLMs) like Grok function. Because a journalist is peddling dangerous myths about one of the greatest scientific inventions of mankind, it is absolutely essential that these myths are brutally demolished, to save the gullible and vulnerable public from being misled. My analysis of why his conclusions are utterly wrong, biased and based on childish preconceived notions draws on my own book on AI and on well-established AI research (e.g., papers from OpenAI, Anthropic, Google DeepMind, Stanford HAI, MIT CSAIL, etc.)—not on social-media lore.

How he fell into a Layman’s Trap

When Gupta tasked Grok with “reading” the gallantry citations of Operation Sindoor, he expected, quite like a layman with no idea of what LLMs do or can do, quick retrieval of verified data. Instead, he received an apparently rich narrative—detailed but false. From that, he jumped to the conclusion that AI “lies,” fabricates, and spreads propaganda. That conclusion, however dramatic, rests on what is known as a category mistake. What Gupta experienced was not artificial intelligence in deceit, but statistical (or probabilistic) text generation without verification—something every AI researcher has described, measured, and warned about for years.

- Advertisement -

He did not encounter a Deepfake, propaganda warfare, or a self-aware faker; he simply queried a probabilistic language model outside its retrieval and citation guardrails. In what follows, I will systematically lay bare where he went wrong.

What Large Language Models Actually Do

LLMs like ChatGPT, Claude, Gemini, and Grok are trained on vast textual datasets. Their job is not to look up facts but to predict the most probable next word sequence given a prompt. They form statistical associations between terms, patterns, and styles of expression.

They do not:

  • Access classified documents or real-time government gazettes (unless integrated with verified retrieval tools).
  • Maintain an internal notion of truth or falsity.
  • Seek “attention revenue” (they are not ad-funded social networks).

Thus, when Gupta asked Grok to “read all the citations,” the model interpreted this as “generate text that resembles Indian gallantry citations for Op Sindoor.” Having no access to the gazette and plenty of training text in bureaucratic Indian English (“displayed exemplary courage under enemy fire…”), it produced plausible prose in that genre. That isn’t deception; it’s pattern completion.
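To make “pattern completion” concrete, here is a minimal, purely illustrative Python sketch — a toy vocabulary with invented probabilities, not any real model — of what “predicting the most probable next token” means. Notice that nowhere in this loop is any fact looked up or checked:

```python
import random

# Toy illustration only: a real LLM has a vocabulary of tens of thousands of
# tokens and billions of learned parameters; the numbers below are invented.
# Given the context so far, the model assigns a probability to each candidate
# next token and samples one.
context = "displayed exemplary courage under enemy"
candidate_next_tokens = {
    "fire": 0.62,        # most likely, because the phrase is common in citations
    "shelling": 0.21,
    "aircraft": 0.09,
    "conditions": 0.08,
}

def sample_next_token(distribution: dict[str, float]) -> str:
    """Pick the next token in proportion to its probability.
    No document, gazette or database is consulted at this step."""
    tokens = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(context, sample_next_token(candidate_next_tokens))
```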

The Core Fallacy: Equating “Hallucination” with “Intentional Lying”

Gupta writes: “AI is smarter, but too smart. It promises facts but can make total fiction… it wants attention.” This anthropomorphizes the model—imputing human motives (smartness, deceit, and a craving for attention) to an algorithm. The poor guy seems to be under the impression that AI is conscious, self-aware or sentient, with a mind of its own. How childish; how pathetic; what down-market, layman-type notions!

In cognitive-science terms, this is called the Intentional Stance Fallacy: attributing agency and purpose to systems that operate by statistical inference. Modern AI safety literature defines hallucination by LLMs not as falsehood with intent, but as the model’s generation of fluent text unsupported by verifiable data. It’s a design limitation, not a psychological one.

Hence, Gupta’s “AI bullshitted me” is akin to blaming a calculator for giving a wrong answer when fed corrupt input. An LLM cannot decide to “sex up” a story—it simply follows the trajectory of tokens most consistent with the user’s phrasing. In an article like this, I do not have the space to educate him on this, but he would be well advised to read my book or any other good book on AI.

The Verification Protocol He Skipped

Professional AI-assisted researchers follow a strict three-layer workflow:

  1. Retrieval layer—use a search or database connector (like Perplexity’s “search mode” or ChatGPT’s “web-browsing” tools) that fetches primary sources.
  2. Generation layer—let the LLM draft or summarize.
  3. Verification layer—check output against human-readable documents or structured data.

Gupta skipped steps 1 and 3 entirely. He never confirmed whether Grok had access to the gazette (it didn’t). Instead, he treated synthetic text as a document and then blamed the generator. That was like asking a typewriter to read a newspaper—it will only produce letters, not facts.
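For illustration, here is a minimal Python sketch of that three-layer workflow. The function names and their behaviour are hypothetical placeholders, not any real API, but they show why steps 1 and 3 matter: with no gazette retrieved, a disciplined pipeline refuses to generate at all.

```python
# Hypothetical sketch of the retrieve -> generate -> verify workflow.
# None of these functions is a real library call.

def fetch_official_gazette(query: str) -> list[str]:
    """Retrieval layer: in real use, a search or database connector returning
    primary-source passages. Returning an empty list models Gupta's situation:
    no access to the gazette."""
    return []

def draft_summary(passages: list[str]) -> str:
    """Generation layer: in real use, an LLM call constrained to draft only
    from the retrieved passages. Here, a crude placeholder."""
    return " ".join(passages)

def verify(draft: str, passages: list[str]) -> bool:
    """Verification layer: crude check that every sentence of the draft is
    actually present in some retrieved passage."""
    sentences = [s.strip() for s in draft.split(".") if s.strip()]
    return all(any(s in p for p in passages) for s in sentences)

def answer(query: str) -> str:
    passages = fetch_official_gazette(query)          # step 1: retrieval
    if not passages:
        return "No primary source found; refusing to generate."
    draft = draft_summary(passages)                   # step 2: generation
    if not verify(draft, passages):                   # step 3: verification
        return "Draft not supported by sources; flagged for human review."
    return draft

print(answer("Operation Sindoor gallantry citations"))
```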

The “Too Good to Be True” Syndrome—Explained by Prompt Framing

Gupta remarks that the AI “sexed up” the story with thrilling dogfights and heroic pilots. The fact is that, from a linguistic standpoint, this is precisely what his prompt primed it to do. By asking the model to “bring me all the citations,” he implicitly cued a reportage genre: military heroism, bureaucratic prose, and sensory detail.

What Gupta does not know is that LLMs mirror the emotional and stylistic bias of the prompt. Ask neutrally (“summarize”), get dry text. Ask journalistically (“bring me the story”), get narrative flourish. He primed the model for embellishment—then accused it of lying.
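By way of illustration only — the prompts below are paraphrases I have invented, not Gupta’s actual wording — here is how the same request can be framed to suppress or invite embellishment. The register of the output follows the register of the prompt:

```python
# Illustrative prompt framing. These strings are hypothetical examples,
# not Gupta's actual prompts and not any model's required format.

def neutral_citation_prompt(official_text: str) -> str:
    """A framing that suppresses embellishment: constrain the model to the
    supplied text and forbid narrative colour."""
    return (
        "Using ONLY the official text below, list each awardee and the act "
        "cited, in plain bullet points. If a detail is not present in the "
        "text, write 'not stated'. Do not add narrative colour.\n\n"
        "OFFICIAL TEXT:\n" + official_text
    )

# A framing that invites narrative flourish -- heroism, dogfights, drama:
journalistic_prompt = (
    "Bring me the full story of the heroes of Op Sindoor -- the dogfights, "
    "the citations, the courage under fire."
)
```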

“AI Wants Attention”: The Monetization Hoax

Gupta asserts that AI “wants attention, time-spent, revenue.” That is not just factually incorrect; it is appalling and shows how utterly ignorant he is about AI. LLMs do not harvest engagement; they simply generate a response, token by token. There is no algorithmic incentive to “keep the user entertained” as there is on YouTube or X.

Confusing a predictive text generator with a social-media recommender system conflates content creation with content amplification—two completely different architectures.

The “AI as Propaganda Tool” Trope

Gupta claims that AI “has been gamed into giving Pakistani talking points.” That is a dangerous canard. If a user-level model hallucinates references that resemble another country’s narratives, it does not mean hostile actors have compromised it. It means its training corpus contains publicly available geopolitical discourse from all sides—news, think-tank reports, defense blogs.

Gupta does not seem to grasp that statistical text blending is not subversion. That is precisely why most LLM developers employ alignment layers and reinforcement learning from human feedback (RLHF) to steer models away from controversial claims. If Grok generated “Pakistan would want you to believe…”-type elements, that is a trace of linguistic co-occurrence, not foreign interference.

Oh my gosh! This actually reminded me of the mid-March 2025 tsunami on Platform X when Indians, in their exemplary ignorance, almost universally believed that because Grok was owned by Elon Musk, it should have faithfully followed Musk’s political line, and were aghast that Grok, in its responses, was critical of Trump. Most Indian newspapers referred to Grok as “Musk’s Grok”, as if Musk dictated its functioning. One headline read, “Musk’s Grok AI goes rogue once again, calls US President Donald Trump a Russian asset”. Not one person realized that once trained, the inner functioning of an AI model is no longer controlled by its creators, let alone its owners.

“AI Can Be Easily Gamed”

Here Gupta again betrayed a singular ignorance of how an LLM’s output depends on its training. LLMs can be manipulated through adversarial prompting or through data poisoning at the training or fine-tuning stage, not simply by being put to use. Real “gaming” would require injecting malicious data during training or crafting prompts that bypass safety filters — feats of AI security research, not casual usage. In any case, such attacks are highly specialized and detectable.
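A toy PyTorch sketch of the narrower technical point (assuming torch is installed; the tiny linear layer merely stands in for “a trained model”): querying a model at inference time does not touch its trained parameters, which is why casual use cannot “game” the training.

```python
import torch

# Stand-in for a trained model; a real LLM is vastly larger, but the
# principle is the same: inference does not update parameters.
model = torch.nn.Linear(4, 2)
weights_before = model.weight.detach().clone()

with torch.no_grad():                 # inference mode: no gradients, no learning
    _ = model(torch.randn(8, 4))      # "answering queries"

weights_after = model.weight.detach().clone()
print(torch.equal(weights_before, weights_after))   # True: querying changed nothing
```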

In Gupta’s case, there is no evidence of tampering; he simply encountered the base model’s probabilistic overreach. So yes, models can theoretically be ‘gamed’—but not by accident through a journalist’s midnight query.

What an Actual Deepfake Is — and Why This Wasn’t One

Gupta’s loose and rather irresponsible use of the word “Deepfake” here is disappointing and an excellent illustration of the poor intellectual calibre of even seasoned journalists.

A Deepfake involves synthetic audio-visual media, not text generation. It uses generative adversarial networks (GANs) or diffusion models to replace or fabricate likenesses. Gupta’s experience was purely textual — no image, no video, no voice. Calling it a “Deepfake” confuses two separate AI branches:

  • Natural-language generation (LLMs), and
  • Multimodal synthesis (GANs, diffusion, transformers for vision).

Precision matters. Misusing “Deepfake” dilutes legitimate concern over actual visual disinformation.

How Researchers Evaluate LLM Accuracy

Studies by the Federal Reserve Bank of Boston (2024) and the MIT-IBM Watson AI Lab show that the average factual accuracy of current frontier models (GPT-4-turbo, Claude 3, Gemini 1.5) exceeds 80–88% on open-domain retrieval tasks when retrieval is enabled. When disconnected from the web (as Grok apparently might have been in Gupta’s test), accuracy falls to 50–60%, similar to unassisted human recall. Hundreds of research papers emphasize this conditional clause.

Gupta’s test used an offline generative mode and then universalized his result to AI as a class. That’s methodologically invalid.

The Journalist’s Responsibility vs. the Scientist’s

Journalism demands skepticism, but skepticism requires understanding the tool one is testing. Scientists routinely run control experiments. Gupta ran none.

  • He did not query other models (Claude, ChatGPT, Gemini) to compare outputs.
  • He did not toggle retrieval mode.
  • He did not inspect timestamps or citation tags.

Recent quantitative studies show that the more retrieval and constraint you apply, the more accurate the output becomes. Gupta’s test used zero retrieval and maximum creative freedom. That is like testing a car’s navigation by turning off the GPS and then blaming it for missing the destination.
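For the record, the control experiment he skipped is trivial to set up. In the sketch below, `ask_model` and the model names are hypothetical placeholders — no real API is wired in — but the structure is the point: the same prompt, several models, retrieval toggled, and every answer checked against the primary source before any conclusion about “AI” is drawn.

```python
# Hypothetical control-experiment harness; ask_model() is a placeholder,
# not a real API call, and MODELS are illustrative labels.

MODELS = ["model_a", "model_b", "model_c"]

def ask_model(model: str, prompt: str, retrieval: bool) -> str:
    """Stand-in for querying one model in one configuration.
    Returns an empty string because no real client is wired in."""
    return ""

def supported_by_source(answer: str, source_text: str) -> bool:
    """Crude check: is the claimed detail actually present in the source?"""
    return bool(answer.strip()) and answer.strip() in source_text

def run_controls(prompt: str, source_text: str) -> dict:
    results = {}
    for model in MODELS:
        for retrieval in (True, False):
            reply = ask_model(model, prompt, retrieval)
            results[(model, retrieval)] = supported_by_source(reply, source_text)
    return results   # only after this comparison is a general claim defensible

print(run_controls("List the Op Sindoor citations", "official gazette text here"))
```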

In science, drawing a sweeping conclusion from a single uncontrolled observation is called anecdotal reasoning. Yet his article generalizes that AI “takes propaganda to a new level.” That’s rhetoric, not empiricism.

How Scientists Handle AI Outputs

In research environments (medicine, physics, and defense analysis), generative AI is used under strict governance:

  • Source verification — outputs must cite DOI-linked papers or official bulletins.
  • Model transparency — log which model, version, and mode (retrieval on/off) produced the text.
  • Audit trails — every output archived for reproducibility.

Had Gupta adopted such protocol, the hallucinated citations would have been flagged instantly. The problem wasn’t AI’s integrity — it was the absence of scientific method.
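A minimal sketch of the provenance record such governance implies — the field names and the file name are illustrative, not any standard schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class GenerationRecord:
    model: str               # which model answered
    version: str             # exact model version
    retrieval_enabled: bool  # was a retrieval/browsing mode on?
    prompt: str
    output_sha256: str       # hash of the output, so the archived text can be audited
    produced_at: str         # UTC timestamp

def log_generation(model: str, version: str, retrieval_enabled: bool,
                   prompt: str, output: str) -> GenerationRecord:
    record = GenerationRecord(
        model=model,
        version=version,
        retrieval_enabled=retrieval_enabled,
        prompt=prompt,
        output_sha256=hashlib.sha256(output.encode("utf-8")).hexdigest(),
        produced_at=datetime.now(timezone.utc).isoformat(),
    )
    # Append to an audit file so every output is reproducible and reviewable.
    with open("generation_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```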

Why the “AI Replacing Journalists” Fear Is Misplaced

Gupta ends triumphantly: “However smart it gets, it cannot substitute journalists.” Ironically, Gupta does not know that no credible AI scientist has ever claimed it could. The literature frames generative AI as an augmentation tool: speeding document search, transcription, and drafting—not replacing investigative verification. The epistemic model of journalism—sourcing, corroboration, human judgment—remains uniquely human.

If anything, Gupta’s episode illustrates why journalists must learn to use AI correctly, not why they should dismiss it.

Understanding “Sixth-Generation Warfare”: Buzzword without Basis

Gupta concludes that “AI has taken warfare into a new dimension… let’s call it sixth-generation.” That is next-level rhetoric. In the military-studies literature, there are only five recognized generations—from massed manpower to networked information operations. AI-enabled information campaigns fall under fifth-generation hybrid warfare, not a new “sixth.” Inventing terms for rhetorical flourish dilutes strategic analysis and misleads readers about doctrinal reality. Journalism cannot be allowed to take such liberties.

Philosophical Perspective: Truth without Consciousness

Freud once remarked that “the ego is not master in its own house.” For the edification of Gupta and his ilk: in AI terms, the model likewise has no ego; it cannot intend truth or falsehood. Jung would have called it an autonomous complex—patterns of language manifesting without conscious volition.

Gupta should know that modern cognitive-AI research has repeatedly shown that LLMs possess syntactic intelligence, not semantic understanding. They model how humans speak about truth, not what truth is. Therefore, accusing them of moral failure is as meaningless as blaming grammar for gossip.

The Real Question: User Competence

The episode reveals a gap not in AI ethics but in AI literacy. Ethics presupposes understanding; literacy precedes ethics. Before debating “AI’s moral compass,” one must learn its operating boundaries. UNESCO’s Guidelines for AI Literacy (2024) classify this under “epistemic responsibility”: users must know what knowledge the system can and cannot generate.

The Broader Damage of Sensationalism

Beyond the shortcomings discussed above, by publishing his isolated anecdote as proof of “AI deception,” this journalist has inadvertently amplified distrust in computational tools used by scientists, doctors, and analysts. This could fuel public techno-panic and hand populists a convenient narrative: “Don’t trust AI — it lies.” Highly irresponsible.

Gupta’s dramatization is an excellent illustration of what happens when generative AI is misapplied without guardrails. His error provides a case study for journalism schools—an illustration of why prompt design, retrieval checks, and domain verification matter. In scientific pedagogy, we call this a negative experiment: failure that clarifies method.

A Scientist’s Closing Word

In essence, Gupta’s article committed the ‘sins’ of anthropomorphism, confirmation bias, and methodological negligence — three classic cognitive errors scientists are trained to avoid.

Artificial intelligence doesn’t erode intellectual integrity; human misuse does. When a user fails to specify scope, verify data, or understand architecture, the fault lies not in the silicon but in us—in the ignorance of our tools. If anybody ever wishes to critique AI, let them first learn the physics of its language. Otherwise, every hallucination will be mistaken for a conspiracy, every misfire for malice, and every algorithm for a demon. And that would be the real Deepfake — not one of pixels or words, but of understanding.

Very disappointing, Mr. Gupta; in one article, you have unwittingly exposed the ignorance of the entire clan of journalists.

Dr N C Asthana IPS (Retd)
Dr. N. C. Asthana, IPS (Retd) is a former DGP of Kerala and ADG BSF/CRPF. Twenty of the 68 books he has authored are on terrorism, counter-terrorism, defense, strategic studies, military science, internal security, and related fields. They have been reviewed at very high levels around the world and are regularly cited as authorities in research at some of the most prestigious professional institutions, such as the US Army Command & General Staff College and the Frunze Military Academy, Russia. The views expressed are his own.
