At this year’s NASUWT teachers’ conference, delegates voiced their concerns over the rise in pupils encountering far-right material online whilst conducting research for their homework. One teacher suggested that schoolchildren using the internet to search for information about the Holocaust are as likely to find articles written by Holocaust deniers as they are to find genuine historical accounts. Another warned that children’s access to social media and smartphones means they are “more at risk of being exposed to extremist material than ever before”.
Holocaust denial has existed since the 1940s, and some antisemites dedicate years of their lives to producing pseudo-academic books and articles that purport to disprove that the Holocaust happened. In the digital age, the tone of contemporary denial has changed: younger Holocaust deniers dismiss the Holocaust, mock it, or even ironically celebrate it. Holocaust denial, and all manner of extremist content, sadly remains accessible at the click of a button, as Karen Pollock CBE, the chief executive of the Holocaust Educational Trust, has warned.
Antisemitism in the UK is at a record high. According to the CST, there were 2,255 antisemitic incidents in the UK in 2021, the highest number ever recorded in a single year. This carries through to the digital realm, where Holocaust denial is just one of the many ways online antisemitism presents itself.
Following the rise in Israel-Palestine tensions in May 2021, the Anti-Defamation League documented a disturbing rise in antisemitic content on multiple social media platforms. In October 2021, a Hope Not Hate report found that a new generation of ‘younger’ social media platforms, such as TikTok, are introducing people to antisemitic ideas they would be unlikely to encounter elsewhere.
Lack of action from technology platforms is not only introducing people to hate speech but is also creating online spaces where antisemitism is allowed to flourish. Indeed, the less moderation a platform applies, the more extreme the antisemitism found there, and the greater its quantity.
Tech companies are able to control the amount and extremity of antisemitism on their platforms but are failing to do so. Danny Stone, chief executive of the Antisemitism Policy Trust, has raised concerns over Google’s SafeSearch facility, which was found to be ineffective in filtering online antisemitism: “If one of the biggest companies in the world isn’t getting it right, it underlines the scale of the problem.”
And the JC has repeatedly highlighted the refusal of YouTube (owned by Google) to remove antisemitic videos.
It is in this context that teachers have called for more support to educate students on how to identify extremist messages, how to avoid them, and how to understand the ways such messages affect them. Teachers also voted to lobby the government to invest in new international educational programmes to promote diversity and to train teachers to challenge far-right views in the classroom. But more must be done at the source, by the tech companies themselves, which allow extremist material to spread through their platforms.
In the UK, MPs have been debating the Online Safety Bill (previously known as the Online Harms Bill), whose guiding principle is to do more to protect children online. For the first time, platforms will be required, under law, to protect children and young people from all sorts of harm.
Tech companies will be expected to use every possible tool to protect children online. Failure to do so will result in severe penalties, including fines of up to ten per cent of a company’s annual global turnover, which could amount to billions for the biggest firms, and the possibility of criminal proceedings against senior executives who do not cooperate with Ofcom.
For many years, tech firms have failed to include basic child safety measures in the design of their platforms. Enforcing platform policies is undeniably challenging: extremist content circumvents most censorship measures, as blocked accounts, material and videos can always reappear as copies on other networks and end up back on platforms such as Twitter and Facebook. But users – particularly children – deserve greater protection than tech companies have been willing to provide.

The Online Safety Bill is an attempt to bring some accountability and regulation to the online world and the tech giants. In a nation where eighty per cent of six- to 12-year-olds have encountered some kind of harmful content online, it is clear that greater strides must be taken to protect children from all sorts of online harm. Antisemitism, found to be present on every platform explored, must be a focus for online protection going forward. It is irresponsible for tech companies to do so little to tackle extremist content on their platforms, where dangerous rhetoric and ideas will have harmful effects on many people’s lives.