Digital advertising industry “helped incentivise” false and harmful content after Southport attack

The Online Safety Act cannot keep the public safe because it wasn’t designed to tackle misinformation, according to the Science, Innovation and Technology Committee (SITC).

The group of MPs has published a wide-ranging report that urges the government to go further to regulate social media companies and disincentivise the viral spread of false content.

It focuses on the misleading and hateful messaging which spread online after the 2024 Southport murders and led to violent protests, often targeting Muslim and migrant communities.

The report says that this provided a snapshot of how online activity can contribute to real-world violence and hate. At the time, many parts of the Online Safety Act were not fully in force, but the committee said “we found little evidence that they would have made a difference if they were. Moreover, the Act is already out of date, failing to adequately address generative AI—a technology evolving faster than governments can legislate—which could make the next misinformation crisis even more dangerous. Regulating technology alone is not sufficient—our online safety regime should be based on principles that remain sound in the face of technological development.”

As a result, it urges the government to base a new and improved online safety regime on five fundamental principles: public safety, free and safe expression, responsibility for content, control over content and data, and technological transparency.

SITC stated that the unrest and riots in 2024 were driven in part by misinformation and hateful content that was “amplified on social media platforms by recommender algorithms.” 

It added that social media companies, through their advertising and engagement-based business models, “often enabled or even encouraged” this viral spread – and may have profited from it.

The report recommends imposing a set of duties on these companies to deprioritise content found to be misleading by fact-checkers.

It says the government’s policy “is hamstrung by a lack of accurate, up-to-date information about how recommendation algorithms operate, caused by a lack of transparency on the part of social media companies.”

“Social media can undoubtedly be a force for good, but it has a dark side. The viral amplification of false and harmful content can cause very real harm – helping to drive the riots we saw last summer. These technologies must be regulated in a way that empowers and protects users, whilst also respecting free speech,” said Dame Chi Onwurah MP, chair of the Science, Innovation and Technology Committee.

“It’s clear that the Online Safety Act just isn’t up to scratch. The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn’t cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable. To create a stronger online safety regime, we urge the government to adopt five principles as the foundation of future regulation, ranging from protecting free expression to holding platforms accountable for content they put online.  

“Today’s report sets out a way forward for the government to ensure that people in the UK can stay safe online and control what they see, by disincentivising the viral spread of misinformation, regulating generative AI, and placing much-needed new standards onto social media companies.  

“A national conversation is already underway on this vital issue – we look forward to the government’s response to our report and will continue to examine the consequences of unchecked online harms, particularly for young people, in the months to come.” 

Social media companies often argue that they are not publishers, but platforms, meaning that they are not responsible for content that users put online.

The SITC report concludes:

“We believe that these services, with sophisticated recommendation algorithms that directly amplify and push content to users, are not merely platforms but curators of content. 

“As we have seen, the amplification and spread of this content can have serious, large-scale impacts. We recognise that this is a complex area of law and that defining social media companies as publishers would have major consequences, but the current situation is deeply unsatisfactory. We call on the government to set out its position on this question in its response to this report.”

Referencing the advertising-driven nature of social media companies, and how they may have profited from increased engagement at the time of the Southport attack, the report concluded:

“The global digital advertising market is overcomplicated, opaque and under-regulated, operating through an enormous, automated and inaccessible supply chain. 

“This directly leads to the production, viral spread and monetisation of harmful and deceptive content, often without advertisers’ knowledge. 

“Platforms and advertisers appear to be either unable or unwilling to address this problem.”

It added:

“In particular, we were concerned by evidence that Google may have helped to monetise misinformation relating to the attacks, contributing to the violence. This is unacceptable, and is just one example of a much wider problem with the digital advertising industry. We are concerned that Google was seemingly unaware of the chain of events when we asked them about it; failed to tell us how much revenue was earned from this; and failed to reassure us that the company would prevent this from happening again.”

The committee found a “regulatory gap” around digital advertising, because much of the existing regulation and intervention has been industry-led and focused on tackling harmful advertising content, as opposed to the monetisation of harmful content through advertising.

“We are not convinced that the digital advertising industry is able, or willing, to effectively self-regulate,” it continued.

“The government’s reliance on industry-led, content-focused solutions is insufficient to meet the current scale of harm.”

It recommended that the government should extend Ofcom’s powers to explicitly cover this form of harm, and regulate “based on the principle of preventing the spread of harmful or misleading content through any digital means, rather than limiting itself to specific technologies or sectors.”

It also recommended that the Advertising Standards Authority establish comprehensive guidelines for all actors within the digital advertising ecosystem and supply chain.

“These should be informed by the UN’s 2024 Guiding Principles for Information Integrity and developed in consultation with civil society, academics, experts, industry and policymakers. It should be designed to remove incentives for algorithmic acceleration of harmful or misleading content whilst upholding freedom of expression; ensure advertisers can avoid harmful content; and ensure transparency in technologies with public safety implications, such as digital advertising.”

Finally, it added:

“There are insufficient disincentives for bad practice in the digital advertising market. Bad actors can exploit the ecosystem, monetising harmful content through major platforms. 

“[..] Ofcom should be empowered to give penalty notices to platforms when they allow harmful content to be monetised through their services. These penalties should be based on a formula that considers: the severity of harm, the amount of revenue the publisher received, the amount of revenue the platform received, and the number of individuals that encountered the harmful content. The revenue generated from these penalties should be used to support victims of online harms.”
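The report stops short of specifying how such a formula would combine those four factors. Purely as an illustration of the kind of calculation the committee describes – not its actual proposal – a sketch might look like the following, where the weights, the multiplicative structure and the function itself are all assumptions:

```python
# Purely illustrative sketch of the kind of penalty formula the committee
# describes. The report names four inputs but gives no weights or structure;
# everything below (the weights, the multiplicative form, the per-million
# reach multiplier) is an assumption made for illustration only.

def penalty(
    severity: float,           # harm severity score, assumed normalised to 0-1
    publisher_revenue: float,  # revenue the publisher earned from the content (GBP)
    platform_revenue: float,   # revenue the platform earned from the content (GBP)
    reach: int,                # number of individuals who encountered the content
) -> float:
    # Recover all revenue earned from the harmful content as a baseline...
    base = publisher_revenue + platform_revenue
    # ...then scale by severity and by audience size (assumed multipliers).
    reach_multiplier = 1 + reach / 1_000_000  # assumed: +1x per million people reached
    return base * (1 + severity) * reach_multiplier


if __name__ == "__main__":
    # Hypothetical case: severe content earning £10,000 (publisher) and
    # £40,000 (platform) that reached 2.5 million people.
    print(f"£{penalty(0.9, 10_000, 40_000, 2_500_000):,.2f}")  # £332,500.00
```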
