Authored by Daniel Lü via The Daily Sceptic,
Ofcom has confirmed it is referring 4chan to a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers or assets in Britain.
The demand: install age-verification systems and content filters so that British children cannot access the site, or face daily fines levied from London on an American platform.
This case is not an outlier.
It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?
In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids’ Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025. The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible.

Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.
The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.
The ACLU identified three specific constitutional failures.
First, the speech targeted is protected. The Supreme Court has never permitted government to suppress legal speech simply because a legislature finds it unsuitable for children. In Brown v. Entertainment Merchants Association, the Court was unambiguous: “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Creating a “wholly new category of content-based regulation” permissible only for speech directed at children would be “unprecedented and mistaken”.
Second, these regimes fail strict scrutiny because they are not premised on demonstrated causation. As the ACLU wrote, KOSA “is not premised on a direct causal link, but instead is based on correlation, not evidence of causation”. This is a decisive legal and moral point. In Brown, the Court struck down California’s video game restriction on exactly the same grounds: the state had produced only correlative data. A law that restricts the speech of millions of people must show that the restriction will actually prevent the harm it identifies. Neither KOSA nor the OSA can clear that bar.
Third, these regimes are both under- and over-inclusive. They leave news media, books, music and magazines entirely unregulated while targeting social media platforms. And they will, inevitably, sweep up beneficial speech alongside harmful speech: 92% of parental control apps have been found to incorrectly block LGBTQ+ content and suicide-prevention resources alongside material that is genuinely harmful. Congress, the ACLU concluded, may not rely on unproven future technology to save the statute.
The empirical premise of both regimes is that social media causes mental illness in adolescents. This claim is contested by a substantial body of peer-reviewed research. In a widely noted book review in Nature, Candice L. Odgers, a psychologist specialising in adolescent mental health at UC Irvine, wrote that the graphs produced by Jonathan Haidt in his work The Anxious Generation, which align the rise in teen mental illness with smartphone adoption, “will be useful in teaching my students the fundamentals of causal inference, and how to avoid making up stories by simply looking at trend lines”. Hundreds of researchers, Odgers wrote, “have searched for the kind of large effects suggested by Haidt. Our efforts have produced a mix of no, small and mixed associations. Most data are correlative.” The direction of causality may run the other way: distressed and isolated adolescents gravitate toward online community; social media does not necessarily create the distress.
The practical implication is stark. Existing criminal law already covers the most serious harms comprehensively: child sexual abuse material (CSAM), terrorist content, incitement to violence and harassment are all criminal in both jurisdictions and all designated “priority illegal content” under the OSA’s Schedules 5-7. The genuinely novel element of both regimes is the duty to suppress legal speech about mental health, gender identity and emotional distress. That element is what fails both the First Amendment and basic proportionality analysis.
The most immediate and documented casualty of the OSA’s implementation has been LGBTQ+ communities. This is not an implementation error. It is structural: the content filters platforms deploy to comply with age-assurance obligations cannot distinguish between content that causes harm to LGBTQ+ youth and content that protects them. Following the July 2025 enforcement rollout, Reddit moved significant LGBTQ+ community content behind age-verification barriers on the logic that queer content is “adult content” and therefore, under the Act, presumptively harmful to children. As OpenDemocracy documented, content creators who are “queer, trans or racialised”, or whose content focuses on these communities, have been “disproportionately targeted, with anything ‘queer’ indiscriminately labelled as ‘adult’”.

For trans people, the harm is compounded by the identity documentation problem. Age verification requires users to produce government-issued identity matching their legal name and sex. In 2018, fewer than 5,000 trans people in the UK held a Gender Recognition Certificate, out of an estimated 200,000-500,000. For those without legal gender recognition, age verification is not a minor inconvenience: it forces them to out themselves to a commercial third party as a condition of internet access, creating a permanent record linking their legal identity to spaces they may be using precisely to explore their identity in safety.

The moral stakes here are not abstract. For LGBTQ+ young people who cannot be open at home or school, online community is not a convenience but a lifeline. Stonewall has warned that anonymity-reduction measures create a “chilling effect” that puts LGBTQ+ people in genuine danger, particularly in the 12 countries where being LGBTQ+ carries the death penalty.
As Stonewall’s Director of External Affairs wrote: “The UK’s Online Safety Bill could become the playbook for countries looking to use digital surveillance to identify and persecute their LGBTQ+ citizens.” The US State Department’s 2024 Human Rights Practices Report criticised the OSA for pressuring US social media platforms to “censor speech deemed misinformation or hate speech”.
The regulatory pressure on US platforms is not confined to Ofcom. On February 24th 2026, the Information Commissioner’s Office (ICO), the UK’s independent data protection regulator, issued Reddit, Inc. a £14.47 million fine for unlawfully processing children’s personal information: the largest penalty the ICO has ever imposed for breaches of children’s privacy. The ICO found that Reddit, despite prohibiting users under 13 by its terms of service, applied no robust age assurance mechanism from May 2018 until July 2025, and therefore had no lawful basis for processing the personal data of under-13s under the UK General Data Protection Regulation. Reddit’s failure to carry out a data protection impact assessment (DPIA) focused on the risks to children before January 2025 separately breached Articles 5, 6, 8 and 35 of the UK GDPR. Reddit has announced its intention to appeal, calling the ICO’s requirement to collect identity information from users “counterintuitive and at odds with our strong belief in our users’ online privacy and safety”. The ICO acted under its Age Appropriate Design Code (the ‘Children’s Code’) rather than the OSA, but the two regimes are coordinated: the ICO has said openly, in its December 2025 children’s privacy progress update, that it works in partnership with Ofcom “to ensure efforts are coordinated”. The fine is legally distinct from OSA enforcement but functionally complementary to it: where Ofcom targets platforms’ content-governance duties, the ICO targets their data-governance failures, and the same underlying conduct of allowing age-unverified users to access content triggers liability under both regimes simultaneously. The ICO is now conducting a broader review of at least 17 platforms popular with children in the UK, including Discord, Pinterest and X.
Reddit’s objection also surfaces another contradiction the ICO has not resolved: the age verification it effectively mandates creates a permanent record linking users’ legal identities to their platform activity, held by third-party age verification processors entirely outside the platforms’ own systems, and the data practices of those processors are, as the ICO’s own enforcement demonstrates, largely beyond the regulator’s concern.
The contrast between the ICO’s vigour against American social media platforms and its passivity toward British police forces is, on its face, a study in selective enforcement.
The same week that John Edwards announced the £14.47 million Reddit fine and spoke at the IAPP UK Intensive, the story of Alvi Choudhury was making national television. Choudhury, a 26-year-old British Bangladeshi software engineer, had been arrested at his home in Southampton in January 2026 by Thames Valley Police, who suspected him of committing a £3,000 burglary in Milton Keynes: a city he has never visited, 100 miles away. The arrest was triggered by a retrospective facial recognition match against Cognitec software that runs 25,000 searches per month against approximately 19 million custody photographs held on the Police National Database. Choudhury was held in custody for nearly 10 hours before officers examined the alibi evidence he had been offering since his arrest. When he eventually saw the CCTV footage that had identified him, he told the Guardian the suspect looked approximately 10 years younger, with lighter skin, a bigger nose, no facial hair and different eyes and lips. His own mugshot had been on the police system in the first place only because he was wrongly arrested in 2021 after being the victim of an assault; his DNA was subsequently deleted, but his custody photograph was not.

Thames Valley Police’s response was, on its own account, revealing. The force acknowledged the arrest “may have been the result of bias within facial recognition technology”, but an officer told Choudhury that “as the use of facial recognition is already subject to review at a strategic level”, he did not feel the need to raise the matter for wider organisational learning. The force’s public statement went further, reframing the failure entirely: the arrest, it said, was based on the investigating officer’s own visual assessment after the algorithmic match, and therefore “was not influenced by racial profiling”.
The position that a human officer confirming a racially biased algorithmic result absolves the institution of responsibility for racial bias merits no extended comment. This is not an isolated incident. In January 2026, another force paid damages to a black man wrongly arrested using the same technology. Home Office research, suppressed until December 2025 when it was published deep within a consultation document by Liberty Investigates, found that the algorithm generates false positive matches at a rate of 5.5% for Black faces and 4.0% for Asian faces, compared with 0.04% for white faces: a disparity of more than 100 to one.
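The more-than-100-to-one figure follows directly from the published rates. A minimal sketch, using only the false positive rates reported in the Home Office research above (everything else is illustrative):

```python
# False positive rates as reported: 5.5% Black, 4.0% Asian, 0.04% white.
rates = {"Black": 0.055, "Asian": 0.040, "White": 0.0004}

baseline = rates["White"]
for group, rate in rates.items():
    ratio = rate / baseline
    print(f"{group}: {rate:.2%} false positives, {ratio:.0f}x the white rate")
```

The Black-to-white ratio works out at 137.5, and the Asian-to-white ratio at exactly 100, which is the disparity the suppressed research describes.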
When Edwards took the stage, he explained the ICO’s enforcement philosophy: the regulator must “very deliberately choose our focus”, concentrating on “AI and biometrics, children’s privacy and online tracking”. Police facial recognition involves all three. Yet the ICO has conducted audits, expressed concern through its Deputy Commissioner and asked the Home Office for “urgent clarity”; there it has stopped. The Equality and Human Rights Commission has been more forthright: it was granted permission in August 2025 to intervene in a judicial review of the Metropolitan Police’s live facial recognition programme, arguing the deployments are unlawful for want of a clear legal basis. A comment made at the time about the ICO’s posture proved apt: the regulator had “stressed the need for FRT deployment with appropriate safeguards” while sitting “on the fence” as others sought judicial determination of whether current use is “strictly necessary”. The juxtaposition is instructive. The regulator charged with protecting personal data finds £14 million worth of urgency in Reddit’s failure to age-verify its users, and no comparable urgency in a biometric surveillance system that its own deputy has called “disappointing”, that the government’s own research shows discriminates against minorities by a factor exceeding 100, and that has produced wrongful arrests of racial minorities on the basis of a technology the operating force itself concedes may be racially biased. The filter, as always, catches what the filter is not intentionally designed to catch.
All of this would be a domestic British problem if the OSA’s reach were confined to British soil. It is not. Section 3 of the OSA applies to any service with “links with the United Kingdom”, which Ofcom has interpreted to include any platform with a significant UK user base regardless of where it is domiciled, incorporated or operated. In March 2025, Ofcom wrote to 4chan Community Support LLC, a Delaware LLC with no offices, staff or assets outside the United States, to inform it that it was a regulated service because approximately 7% of its traffic came from UK IP addresses, and that it must therefore provide information regarding its illegal content risk assessment and its qualifying worldwide revenue. 4chan refused to respond to either request. In August 2025, 4chan and Kiwi Farms (Lolcow LLC) filed a federal lawsuit against Ofcom in the District of Columbia, alleging violations of the First, Fourth and Fifth Amendments, pre-emption by Section 230 of the Communications Decency Act and conflict with the SPEECH Act. Ofcom responded by asserting sovereign immunity under the Foreign Sovereign Immunities Act, claiming both the right to issue binding censorship orders to Americans on American soil and immunity from any American legal response. Undeterred, in October 2025 Ofcom issued escalating demands, opened investigations and imposed a £20,000 fine plus a penalty of £100 per day for up to 60 days for non-compliance with its information requests, all served by email to US addresses. 4chan refused to pay.
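The total exposure from those information-request penalties is a small sum, and worth stating, because the point of the fine is assertion of jurisdiction rather than revenue. Illustrative arithmetic only, from the figures reported above:

```python
# Ofcom's October 2025 penalties against 4chan, as reported:
base_fine = 20_000   # single penalty, GBP
daily_rate = 100     # daily-rate penalty, GBP per day
max_days = 60        # cap on the daily-rate period

max_exposure = base_fine + daily_rate * max_days
print(f"£{max_exposure:,}")  # → £26,000
```

£26,000 against a regulator empowered to fine platforms hundreds of millions: the number signals that this round of enforcement was about establishing the principle, not the penalty.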
Ofcom’s enforcement action against 4chan did not end with the October 2025 information-gathering fine. On February 12th 2026, Ofcom issued a second Provisional Decision against 4chan, proposing both a single penalty and a daily rate penalty for contraventions of sections 9, 10 and 12 of the OSA: its substantive duties to conduct a suitable illegal content risk assessment, to set out adequate user protections in its terms of service, and to implement age verification to prevent children from encountering explicit content. Counsel for 4chan, Preston Byrne, replied the same day: “Increasing the size of a censorship fine does not cure its legal invalidity in the United States.” The deadline for representations having passed without compliance, Ofcom confirmed on February 27th that it was referring the matter to a final decision maker under its Online Safety Enforcement Guidelines. The progression is systematic: from information requests under section 100, to a confirmation decision imposing penalties, to a second provisional decision targeting the Act’s substantive content-safety and age-verification duties. Each escalatory step expands the scope of demanded compliance and raises the potential penalty exposure. For an anonymous imageboard operating exclusively in the United States, age verification is not a technical requirement: it is an existential one.
The domestic British appeals framework for these decisions is itself still being constructed. On February 26th 2026, the Tribunal Procedure Committee (TPC) opened a consultation on amending the Upper Tribunal Procedure Rules to accommodate the new rights of appeal created by the OSA. Under section 168 of the Act, any person with a sufficient interest may challenge Ofcom’s confirmation decisions, penalty notices and technology notices before the Upper Tribunal. The TPC provisionally proposes a three-month window for permission-to-appeal applications by interested persons who are not the direct recipients of an Ofcom notice, departing from Ofcom’s own preference for one month. On costs, the TPC agrees with Ofcom’s proposal to displace the usual no-costs rule, recognising that the tribunal should have broader discretion to award costs in OSA cases given the likely complexity and evidence-heavy nature of such appeals, and that the existing rule would leave Ofcom unable to recover costs even where it successfully defends a decision. Ofcom is a regulator with the power to fine companies hundreds of millions of pounds, funded by fees levied on the very industry it regulates, and it is now asking for the right to make anyone who challenges it in court pay Ofcom’s legal bills if they lose. The consultation closes May 21st 2026.
This structural asymmetry is what the GRANITE Act directly addresses. Conceptualised by Byrne and introduced in the Wyoming Legislature as HB 70, the ‘Guaranteeing Rights Against Novel International Tyranny and Extortion Act’ passed the Wyoming House of Representatives 46-12 on February 23rd 2026. It strips foreign sovereigns of immunity in US state courts when they attempt to enforce censorship orders against US persons and creates a private right of action with minimum statutory damages of $1 million per violation, or 10% of the defendant’s annual US-related revenue, whichever is greater. It also prevents Wyoming courts from recognising any foreign judgment that infringes constitutionally protected speech, extending the model of the SPEECH Act (28 U.S.C. §§ 4101-4105) from defamation to the full range of First Amendment-protected expression. If censoring an American exposes a foreign regulator to a sufficiently significant civil judgment, the cost-benefit calculation changes dramatically.
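The GRANITE Act’s “whichever is greater” damages formula is simple to express. A sketch under one assumption I should flag: it reads the revenue-based figure as applying per violation, which the bill text may resolve differently; the revenue number below is hypothetical.

```python
def granite_floor(us_revenue: float, violations: int = 1) -> float:
    """Minimum statutory damages as described: the greater of $1m per
    violation or 10% of the defendant's annual US-related revenue
    (assumed here to apply per violation)."""
    return max(1_000_000, 0.10 * us_revenue) * violations

# Hypothetical foreign regulator with $50m of US-related annual revenue:
print(granite_floor(50_000_000))  # 10% of revenue ($5m) exceeds the $1m floor
```

Either branch of the formula produces a seven-figure judgment per violation, which is the mechanism by which the cost-benefit calculation changes.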
A separate American legal theory operates through the Sherman Act and does not depend on overcoming FSIA immunity at all. Ofcom’s sovereign immunity defence may insulate the regulator itself from direct suit, but it extends no protection to the private actors who shaped the OSA’s regulatory design. The OSA imposes identical nominal obligations on all regulated services, but its fixed compliance costs fall proportionally far harder on smaller platforms than on large incumbents with existing legal, technical and compliance teams that can simply be redirected to satisfy new requirements: a pattern antitrust economists describe as raising rivals’ costs. If well-resourced incumbents privately coordinated with regulators to embed compliance standards they could more easily satisfy than their rivals, the resulting framework may reflect competitive preferences rather than independent regulatory judgement. Under Continental Ore Co. v. Union Carbide & Carbon Corp., routing an anticompetitive scheme through a foreign governmental apparatus does not immunise the private actors who designed it. The Noerr-Pennington doctrine, which ordinarily protects petitioning activity, rests on First Amendment foundations that protect the right to petition American government; the stronger legal argument is that it does not extend to petitioning of foreign regulators. Where the factual record supports coordination beyond ordinary advocacy, Sections 1 and 2 of the Sherman Act remain available tools even where the regulatory mechanism is British.
If you care about children’s mental health and safety online, there are three new bills in Congress that are worth knowing about: the SAFE Act, the ECCHO Act and the Stop Sextortion Act (collectively known as the James T. Woods Act). Together they address real, documented harm in ways that KOSA and the UK’s Online Safety Act simply do not.
The package addresses three documented gaps in federal law.
The SAFE Act repeals outdated CSAM sentencing provisions and directs the US Sentencing Commission to develop updated guidelines reflecting modern patterns of dangerous conduct. The existing federal guidelines are not only outdated but largely ignored in practice: fewer than one in three cases are sentenced within them. The bill would clear the way for the Commission to write new rules that reflect how online abuse actually works today.
The ECCHO Act creates a new federal crime targeting networks, most notoriously Network 764, that use online group chats to coerce emotionally vulnerable children into self-harm, suicide and violence, with penalties up to life imprisonment where a victim dies or attempts suicide.
The Stop Sextortion Act explicitly criminalises sextortion for the first time under federal law, responding to a 33% rise in financially motivated cases in 2024 and more than 40 child deaths linked to these schemes. Unlike KOSA or the OSA, the James T. Woods Act does not try to police what people say online. It targets what predators do: coercion, blackmail and the deliberate manipulation of children into harm. That is a meaningful distinction, and it is why this package has earned support from more than two dozen organisations across the political spectrum, including the FBI Agents Association, RAINN, the National District Attorneys Association, the National Center for Missing and Exploited Children and Thorn.
The moral case against both the OSA and KOSA is not that children’s wellbeing is unimportant. It is that suppressing protected speech is both the wrong instrument and a dangerous one. The wrong instrument because the science does not establish that social media causes the harms these laws address, and because the content filters that implement these regimes cannot distinguish beneficial from harmful speech. A dangerous one because the same mechanism that blocks, for example, pro-anorexia posts will also block access to eating disorder recovery communities; the same filter that catches self-harm instructions will catch trans youth support forums; and the same regulator empowered to define ‘harmful’ content today may be led by someone with very different ideas about what speech is harmful tomorrow. Above all, it is dangerous because the machinery of protection, once built, does not confine itself to its original target: Japanese Americans were interned after Pearl Harbor; Muslims were surveilled, infiltrated and placed on no-fly lists after September 11th, some rendered to CIA black sites abroad and others tortured at Guantanamo Bay without charge or trial; McCarthyite loyalty boards destroyed careers on the basis that association predicted subversion; and the FBI’s COINTELPRO program turned the apparatus of domestic security against the civil rights movement, monitoring Martin Luther King Jr. as a threat to national security on the pretext of alleged communist infiltration. In each case, the instrument was constructed in good faith to address a genuine fear; in each case the stated rationale was correlation dressed as causation; and in each case the same institutional machinery, once normalised, was available for use against the next group a future administration found threatening.
Ofcom’s attempt to extend this regime to American soil raises the stakes further. It asserts, in effect, that British regulators may determine what Americans are permitted to say on the American internet and that American law has no recourse. That is not a tenable position under the First Amendment, under any established principles of international jurisdiction or under any defensible conception of democratic self-governance. The GRANITE Act is the beginning of the American legal system’s answer.
A brief postscript. I recently sent a prior version of this article to a member of the House of Lords who had asked to read it. Parliament’s email filter blocked it. Repeatedly. The peer could not open the attachment because the system flagged it as suspicious. The article, with working title ‘What the Filter Catches’, was itself caught by a filter. I could not have asked for a better illustration of the argument. Sometimes the world just does the work for you.
Note: The author has submitted Freedom of Information requests to the US Department of State, the Department of Justice, the National Security Council, the Federal Bureau of Investigation, the Federal Trade Commission, the UK ICO as well as Ofcom itself seeking documents relating to Ofcom’s extraterritorial enforcement strategy. Those requests remain pending.