(Image courtesy of Pixabay on Pexels)
Because none of this was ever about protecting children.
Preface: What the U.K. Is Doing
As we grow up in the age of the internet, websites and apps have adopted various regulations meant to protect their underage users.
But this is never without drawbacks. Predation is a common danger for children who use the internet. Grooming does happen, and it’s a website’s responsibility to make sure predators are reported and banned, so that the children on the platform stay safe and don’t risk repeated (or continuing) incidents.
This line of thinking has become a popular talking point in legislative offices around the world. Infamously, on July 25, 2025, the U.K. brought its own law into force: the “Online Safety Act.”
It sounds good in theory: hold the corporations behind some of the most popular apps and websites accountable for the harms children face on their sites.
Yet all it has done is stir up controversy, and for good reason.
In practice, the act requires many websites to block children (defined as anyone under 18) from access, including Reddit, Spotify, and even Wikipedia, which campaigned against being blocked for children in the U.K.
So how does a user prove they’re over 18?
Personal doxxing.
Adults are obligated to hand over personal information about themselves to get past these walls, whether a photo of their I.D., a video selfie, or their credit card information. All of this just to access websites like Wikipedia and Spotify.
This would all be one thing if it targeted places with an established adult audience, such as porn websites. Instead, the act explicitly targets any platform that could host “adult” content, without ever directly defining what “adult content” even is. The definition is kept deliberately vague (which we’ll get to in depth later).
In the U.K. government’s own materials, the term “pornography” is thrown around a lot, but it, too, is never properly defined.
This creates all kinds of scenarios around what people can search for online, especially the very children the act wants to protect. If a child is doing research for their school’s sex-education program, are they simply barred from researching anything because the very idea of sex is blocked from search?
Unfortunately, this isn’t a problem affecting the U.K. alone. Australia, the Nordic countries, and even certain websites in the U.S.A. have been hit by, or are in the middle of, very similar legislation.
Collective Shout
Australia is a bit of a unique case. It hasn’t been hit directly with legislation banning kids from looking up “adult material” (whatever that means in this context). Instead, platforms there have folded to the whims of an organization contributing to this greater wave of online censorship.
And that is the organization known as “Collective Shout.”
Collective Shout is an Australian organization formed back in 2010 that, in its own words, “is a grassroots campaigns movement against the objectification of women and the sexualisation of girls.”
Essentially, they are against the objectification of women in media. This goes for television, advertisements, and especially video games.
In late July of 2025, Collective Shout reached out to payment processors like Visa and Mastercard, reporting that people were buying “questionable” and “offensive” games with their accounts. This set off a chain reaction, leading gaming platforms such as Steam and Itch (or itch.io) to remove the “offensive” games from their storefronts.
These included titles such as “No Mercy” and “Detroit: Become Human,” ranging in severity from outright sexual-abuse simulators (in the case of “No Mercy”) to games that merely dared to depict child abuse, even in a negative light.
On Itch, many creators were targeted, specifically those whose work was tagged “LGBTQ” or “NSFW” (short for “not safe for work”), or whose accounts mentioned being “kink-friendly.”
As WIRED put it: “This is an example of financial censorship,” meaning censorship that uses finances and payment processing to control what people can and can’t consume.
The payment processors either wouldn’t respond to requests for comment or doubled down on the rare occasions they did. Mastercard specifically told the CBC that it “has not evaluated any game or required restrictions of any activity on game creator sites and platforms.”
To put this in perspective, all of these sudden attempts at censorship follow a similar blueprint: target adult content first.
You go after the objectively “bad” things on the internet (such as porn, or games depicting rape and incest) and win the public’s favor, because chances are they don’t like that content either. Once the public is focused on those games being removed, you move on to personal biases, such as getting rid of LGBTQ+ content.
Collective Shout isn’t the “feminist” group many online claim it to be. Interestingly, the organization leans firmly right-wing; its founder, Melinda Tankard Reist, is a “pro-life Christian.”
Let that sink in for a moment: the frontwoman behind the takedowns of games that “promote rape, sexual abuse, and incest” is not only pro-life, but a Christian. In the internet zeitgeist, those positions are treated as nearly mutually exclusive. Against rape depicted in media, yet against abortion? Strange.
This has since snowballed into many other similar actions against adult content, and not just in the U.K. or Australia; it’s happening in the U.S. as well.
The U.S. And KOSA
Taking a page out of the U.K.’s book, the U.S. has started implementing similar policies on apps in its own app stores and on websites. So far, Wikipedia hasn’t needed to implement age verification, but Spotify has, for select users.
The Kids Online Safety Act (KOSA) hasn’t actually been passed yet; it was only reintroduced to Congress, back on May 14, 2025.
Yet that hasn’t stopped certain websites from implementing age-verification policies anyway.
In the U.S., people can, and do, argue over how constitutional any of this is. And it’s precisely that potential unconstitutionality that has people worried about KOSA being passed into law.
According to the bill itself, here are the safeguards it says will help children:
“ (A) limit the ability of other users or visitors to communicate with the minor;
(B) prevent other users or visitors, whether registered or not, from viewing the minor’s personal data collected by or shared on the covered platform, in particular restricting public access to personal data;
(C) limit by default design features that encourage or increase the frequency, time spent, or activity of minors on the covered platform, such as infinite scrolling, auto playing, rewards for time spent on the platform, notifications, and other design features that result in compulsive usage of the covered platform by the minor;
(D) control personalized recommendation systems, including the ability for a minor to have—
(i) a prominently displayed option to opt out of such personalized recommendation systems, while still allowing the display of content based on a chronological format; and
(ii) a prominently displayed option to limit types or categories of recommendations from such systems; and
(E) restrict the sharing of the geolocation of the minor and provide notice regarding the tracking of the minor’s geolocation. “
None of these policies differ drastically from the U.K.’s Online Safety Act, and both remain incredibly vague.
It’s also redundant. Many apps, including social media apps, already have parental controls or built-in restrictions for minors. Take Instagram: when you register your birthday and the system detects that you’re a minor, it automatically makes your account private. Parental account linking is a built-in feature as well.
The safeguards also include something about “preventing users from seeing the minor’s data.” This, too, is pointless, not because companies lack policies against users viewing each other’s data (they already have them), but because the companies themselves are the ones actively collecting that data.
Instagram, Facebook, even the Google search engine all harvest data; that’s how they target advertisements at their user base. Search for something as simple as “dog” on Google, and it will feed you ads about dogs until you search for another keyword.
Yes, the safeguards also mention preventing “targeted advertisements,” but Google is already doing exactly that, and has been for years. Is Google going to be sued for targeting its advertisements at children in violation of KOSA? Doubt it.
Everything Going Wrong With YouTube
And now, the kicker.
YouTube put out a press release about new age-verification policies that would come to the platform on August 13, 2025.
It works like this:
Using artificial intelligence, YouTube (and by extension, Google) will scan through users’ search and watch history. If the system determines that what a user watches skews “childish” or “adult,” it will automatically flag the account as such.
For example, say a user searches for a lot of cartoons typically geared toward kids, or just animation as a whole. YouTube will most likely label that user’s account a “child’s account” and apply restrictions accordingly.
It’s worth mentioning that YouTube doesn’t need to be doing any of this. It already has a dedicated kids app (albeit one riddled with controversy and other issues), along with built-in age restrictions for sensitive content on its main platform.
On top of that, YouTube has historically always been a 13+ platform. Since its early years, a minimum age of 13 has been required to sign up for a YouTube account, alongside a Google (or Google+) account.
There are plenty of concerns here. For one, the heavy reliance on AI to scan users’ watch history is worrying, because YouTube doesn’t have a good track record of cracking down on its own long-standing problems, such as spam bots in comment sections or content farms abusing the algorithm.
Another concern is that AI can very easily mislabel accounts.
Say the person described before, the one searching for all those kids’ cartoons, is actually an adult: an animator, perhaps, or just a nostalgic grown-up. Their account would still be flagged as a “minor” account.
Now, what about a kid looking up material for school, like a history lesson? Depending on the context of the video, that child could easily be labeled an adult, and because YouTube is owned by Google, chances are their data will be collected and sold to advertisers under the pretense that the account belongs to an “adult.”
And guess what: obtaining that data violates COPPA.
COPPA is short for the “Children’s Online Privacy Protection Act.” Its purpose is to prevent children’s data from being collected or fed to advertisers without parental consent, keeping a child’s experience on the internet as safe as possible and less susceptible to data breaches.
And that’s another major concern– data breaches.
YouTube has stated that users’ identifiable information will only be kept in its databases for somewhere between a couple of weeks and two months. Because, of course, if YouTube’s AI guesses wrong, users will have to submit a photo of their I.D., their credit card information, or a video selfie.
Even setting the YouTube specifics aside, this is concerning, because we live in an age where data breaches aren’t a matter of “if,” but “when.” Someone is bound to get hold of whatever information is stored in a service’s cloud, release it, and let bad actors take advantage of it.
Children will find ways around age verification, as many have in the past. They may steal their parents’ I.D. or credit card information, or use software to bypass the selfie step, as many have already done in the U.K.
So not only could YouTube be held liable for effectively opening a loophole that collects minors’ data, it also risks that data being stolen and released to brokers and bad actors without users’ knowledge or clear consent.
Counterpoints: What Can Be Done
While it may seem otherwise, corporations are not invincible entities. As many have discovered, there are loopholes and ways around these rules.
For one, VPN services skyrocketed in popularity at the start of the U.K.’s Online Safety Act enforcement. It’s not an entirely fool-proof plan, as there are already concerns that VPNs could be banned in the U.K., but so far, they seem to be working.
A petition was also launched to counteract the act, but the U.K. government has doubled down on the Online Safety Act, completely ignoring the petition and the broader backlash online.
As for the U.S., there was talk of a YouTube boycott over the age-verification policies, to be held on the day the policy took effect (August 13), but to mixed results; some argued it wouldn’t accomplish anything because YouTube is too ingrained in people’s lives.
Unfortunately, they’re not wrong. But it’s not just a YouTube problem; it’s an internet problem.
Concluding With The Problem At Hand
This isn’t just about giving up social media, or not being able to search for a specific video on YouTube. If this legislation keeps spreading, it could grow into internet-wide censorship.
These new mandates aren’t going to protect kids any better than the safeguards that already existed. What we’re seeing is the culmination of two things: parents who use the internet as a babysitter instead of actually parenting, and the frightening rise of censorship.
This isn’t a matter of protecting kids, because it never is. If it were about the kids, the acts would deliberately target the websites that host the “bad” content, not Spotify, Wikipedia, Xbox, or Reddit. They would target the porn websites, like these acts promised.
Sure, the personal doxxing of users is as sketchy in practice as it is in theory, but it would at least make sense for porn websites to require it, not a search engine or a music-streaming platform.
No. It’s a matter of controlling what others consume.
If it weren’t, why is only one big-name porn website being hit with these age-verification policies?
Why is everything else being hit either an informational or media site (such as Wikipedia, or Spotify with its podcast and audio services) or a social media site (in the case of Reddit)?
It’s because of the potentially “harmful” content: discussions of eating disorders, mental health conditions, LGBTQ+ topics, bullying, anything a child would most likely go to the internet to find. If it isn’t for school, it’s for support with whatever they’re going through.
Even if these topics can be harmful, they shouldn’t be blocked for every single child, because children do develop eating disorders, and they do seek community online after being bullied. That’s one of the main reasons children go on social media, or the internet at all, in the first place.
They seek community. They seek answers. Putting everything behind an age-verification wall isn’t going to get them the help they need, especially if some don’t come from good homes.
It affects adults as well. Having every site ask for your personal information, even though websites like Google already take it from you, gets exhausting after a while. If every other website needs your data just to look up the definition of a word or access a PDF of a book, what’s the point of using the internet?
But truth be told, every time someone attempts to censor the internet, even just a single website, a blog, or something smaller in scale, it never works.
The most popular example is Tumblr, the microblogging website. Around 2018, it attempted to remove all adult content. It lost roughly a third of its user base as a result, and adult content seeped back onto the platform regardless, as is the nature of the internet.
People will do anything to access the content they want, no matter the cost, even with policies and legal legislation over their heads.
Even if it doesn’t last long, even if policies are rolled back, even if people advocate and fight back against all of this, every one of these events will still leave its mark on the internet at large.
It will remain a scary footnote in internet history: the time governments and organizations tried to censor the internet and control what people consumed, all in such a short span of time.