This is a preview. The full article is published at fastcompany.com.

The world is getting tougher on kids’ online safety in 2026

By Chris Stokel-Walker, Fast Company

The new year is a time for resolutions. This year, governments, platforms, and campaigners all seem to have hit on the same ones: Children should spend less time online, and companies should know exactly how old their users are.

From TikTok’s infinite scroll to chatbots like xAI’s Grok that can spin up uncensored answers to almost any question in seconds, addictive and inappropriate online options leave legislators and regulators worried. The result is a new kind of arms race: Lawmakers, often spooked by headlines about mental health, extremism, or sexual exploitation, are turning to age gates, usage caps, and outright bans as solutions to social media’s problems. Just in the past week, we’ve seen Grok become Exhibit A in the debate about harmful content as it helps undress users, while states consider or enact bans, blocks, and time limits on using tech.

“Right now, the regulatory debate seems to exclusively focus on how certain internet services are net negatives, and banning access to minors to such services,” says Catalina Goanta, associate professor in private law and technology at Utrecht University in the Netherlands. That black-and-white approach is easy for politicians to parse, but doesn’t necessarily communicate the nuance involved in tech and its potential for good. “The scientific debate shows us a much more nuanced landscape of what can be harmful to minors, and that will depend on so many more aspects than just a child having a phone in their hands,” says Goanta.

Legislators are moving quickly to throw a protective shield around younger users. A December 2025 proposed law in Texas would have required Apple and Google to verify user ages and get parental consent for minors’ app downloads, but was blocked just before Christmas. Meanwhile, as outright bans are being blocked, states are pushing forward with rules that cap social media access.
Virginia’s default one-hour daily cap for under-16s was launched with a requirement for “commercially reasonable” age checks. However, it has already been challenged in court by a lawsuit filed by NetChoice, an association that seeks to “make the Internet safe for free enterprise and free expression.” The group, which includes Amazon, Google, Meta, and OpenAI as members, says imposing a time block on social media is like limiting the ability to read books or watch documentaries.

“All of the laws have been challenged, and the court’s ruling on the Texas law doesn’t bode well for the other state laws,” says Adam Kovacevich, founder and CEO of the Chamber of Progress, which he describes as “a center-left tech industry policy coalition.” But, he says, some of this tough talk has also been helped along by big tech firms themselves: “It’s important to keep in mind that the app store age verification bills have been written and advanced by Meta, largely as a way of getting themselves from defense onto offense.”

The Texas law is just one of many that are cropping up around the United States—and around the world. Across the Atlantic, France is pursuing...
