How the Internet broke everyone’s bullshit detectors


From Lego-style propaganda videos echoing war-crimes allegations flooding online feeds, to the White House itself turning to enigmatic teasers and synthetic visuals, this isn't just content drift. It is a new front in the information war, where speed, ambiguity, and algorithmic reach matter as much as accuracy.

One Iran-linked outlet, Explosive News, can reportedly produce a two-minute synthetic Lego clip in about 24 hours. Speed is the point. Synthetic media doesn't need to last forever; it just needs to travel before it can be verified.

Last month, the White House added to that confusion when it posted two mysterious videos titled "Releasing Soon," then removed them after online sleuths and open-source researchers began analyzing them.

The reveal turned out to be anticlimactic: a promotional push for the official White House app. But the episode showed how fully official communications have absorbed the platform aesthetics of leaks, virality, and manufactured intrigue. When even official accounts embrace the aesthetics of the leak, questioning whether a record is real or manufactured becomes the only defensive step left.

Real versus artificial: the new friction

A clean digital fingerprint used to indicate authenticity. Now it can point to the opposite: the absence of a trace no longer means something is original; it may mean it was never captured by a lens at all. The signal has flipped. Truth lags; engagement wins.

Automated traffic now accounts for an estimated 51% of internet activity and is expanding eight times faster than human traffic, according to the State of AI Traffic and Cyber Threats 2026 Benchmark Report. These systems don't just distribute content; they prioritize virality over quality, ensuring that the synthetic record spreads while verification is still under way.

Open-source investigators are still holding their ground, but they are fighting an uphill war. The rise of hyperactive "super-participants," often backed by paid verification badges, adds a layer of false authority that traditional open-source intelligence (OSINT) now has to navigate.

"We are always chasing someone who hits repost without a second thought," says Maryam Eshani, an OSINT journalist covering the conflict. "The algorithm rewards that reflex, and our information will always be one step behind."

At the same time, a wave of war-watcher aggregator accounts has begun to interfere with the reporting itself. Manisha Ganguly, head of visual forensics at The Guardian and a specialist in OSINT investigations of war crimes, points to the false certainty created by the influx of aggregated content on Telegram and X.

“Open source verification begins to create false certainty when it ceases to be a means of investigation — through confirmation bias, or when OSINT is used to cosmetically validate official accounts or is deliberately misapplied to conform to ideological narratives rather than to interrogate them,” says Ganguly.

While this was happening, access to the verification toolkit itself narrowed. On April 4, Planet Labs, one of the commercial satellite providers conflict journalism relies on most, announced that it would indefinitely withhold imagery of Iran and the broader Middle East conflict zone, retroactive to March 9, at the request of the US government.

US Secretary of Defense Pete Hegseth's response to concerns about the blackout was unequivocal: "Open source is not the place to determine what did or did not happen."

This shift matters. When access to primary visual evidence is restricted, the ability to independently verify events shrinks. And into that narrow gap, something else is expanding: generative AI is not only filling the silence, it is competing to determine what is seen in the first place.

Generative AI is becoming harder to detect

Generative AI platforms have learned from their mistakes. Henk van Ess, an investigations coach and verification specialist, says that many of the classic tells – wrong finger counts, garbled protest signs, mangled text – have been largely fixed in the latest generation of models. Tools such as Imagen 3, Midjourney, and DALL·E have been optimized for prompt comprehension, realism, and rendering text within an image.

But the trickier problem is what van Ess calls hybridization.
