Britain’s Ofcom enforces tough obligations under the Online Safety Act

Sebastien Bozon | AFP | Getty Images

LONDON – Britain officially brought its online safety law into force on Monday, paving the way for stricter controls on harmful online content and potentially hefty fines for tech giants such as Meta, Google and TikTok.

Britain’s media and telecommunications watchdog, Ofcom, has published its first edition of codes and guidance for tech firms, outlining what they must do to combat illegal harms, such as terrorism, hate speech, fraud and child sexual abuse, on their platforms.

The measures are the first set of duties imposed by the regulator under the Online Safety Act, a sweeping law requiring tech platforms to do more to combat illegal content online.

The Online Safety Act imposes a so-called “duty of care” on these tech firms, holding them accountable for harmful content uploaded and distributed on their platforms.

Although the act became law in October 2023, it had not yet fully come into force; Monday’s development effectively marks the formal entry into force of its illegal-harms safety duties.

Tech platforms have until March 16, 2025, to complete risk assessments of illegal harms, Ofcom said, giving them three months to bring their platforms into compliance.

After that deadline, Ofcom said, platforms must begin taking measures to prevent the risks of illegal harm, including better moderation, easier reporting and internal safety tests.

Melanie Dawes, chief executive of Ofcom, said in a statement on Monday: “We will be closely monitoring the industry to ensure that firms meet the strict safety standards set for them under our first codes and guidance, with additional requirements coming in rapidly in the first half of next year.”

Big fines, risk of service suspension

Under the first-edition codes, reporting and complaint functions must be easier to find and use. For high-risk platforms, firms will be required to use a technology called hash-matching to detect and remove child sexual abuse material (CSAM).

Hash-matching tools convert known CSAM images from police databases into digital fingerprints known as “hashes,” which social media sites’ automated filtering systems can compare against uploaded content to recognize and remove matches.
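To illustrate the idea at a high level, here is a minimal sketch of hash-based filtering in Python. It is not Ofcom’s or any platform’s actual implementation: production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that still match after resizing or re-encoding, whereas the plain SHA-256 used below only matches byte-identical files, and the hash list shown is a made-up placeholder.

```python
import hashlib

# Hypothetical set of known-bad fingerprints. In practice these come from
# curated law-enforcement databases, not a hard-coded literal.
KNOWN_CSAM_HASHES: set[str] = {
    "0" * 64,  # placeholder entry, not a real hash
}

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint ("hash") of uploaded content.

    Real CSAM scanners use perceptual hashes (e.g. PhotoDNA) so that
    re-encoded copies still match; SHA-256 keeps this sketch self-contained.
    """
    return hashlib.sha256(content).hexdigest()

def should_block(content: bytes) -> bool:
    """Return True if the upload matches a known-bad fingerprint."""
    return fingerprint(content) in KNOWN_CSAM_HASHES

# Usage: run at upload time, before the content is published.
upload = b"...bytes of an uploaded image..."
if should_block(upload):
    print("Match found: block the upload and escalate for review.")
else:
    print("No match: continue with normal moderation checks.")
```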

Ofcom stressed that the codes published on Monday were just a first set, and that the regulator would consult on further codes in spring 2025, including blocking the accounts of those found to have shared CSAM and allowing the use of AI to tackle illegal harms.

“Ofcom’s illegal content codes are a significant change in online safety, meaning that from March platforms will have to actively remove terrorist material, child abuse and intimate image abuse, and a range of other illegal content, bridging the gap between the laws that protect us in the offline and the online world,” UK Technology Minister Peter Kyle said in a statement on Monday.

“If the platforms fail to step up, the regulator has my support to use its full powers, including issuing fines and asking the courts to block access to the sites,” Kyle said.

 