Opinion

How the Gaza coverage hard-wired anti-Israel into AI

Years of flawed, emotive reporting – especially from “high-reliability” outlets – have been baked into training data for large language models and the damage may be impossible to undo

November 20, 2025 14:43
Image: Getty

It’s not every day I argue with a machine – but a midnight conversation with an AI model about biases in the coverage of the Gaza war left me with a troubling insight: the media’s narrative is burned into large language models and will haunt us for years.

When I asked, “Over the past two years, has Western media been biased toward or against Israel?” the model repeated, time and again: “There is an institutional, clear, and consistent bias in favour of Israel.”

This matters because the BBC and other reputable outlets – ranked by most AI systems as high-reliability sources – anchor the statistical baseline these models learn from. Two years of skewed or prematurely framed coverage, arriving in overwhelming volume, becomes statistical “truth” in the training data.

Only when I forced the model to sample headlines and articles and score them against known bias criteria – emotive language, reliance on unverified sources, one-sided scrutiny – did the picture shift. For the ordinary user, though, none of this is visible: the model simply delivers what looks like a confident, evidence-based answer, but one built on a profoundly distorted information diet.

The sheer volume of false reports, often couched in highly emotive language, skews the models: dramatic claims of “starvation campaigns”, misreported “mass graves near hospitals” and “hospital bombings”, and libellous accusations of “ethnic cleansing” and “genocide” – frequently sourced from Hamas and repeated without verification by outlets like the BBC – overwhelm the system.
