

How far should the government go to control what your kids see online?

The Biden administration is fighting back against social media platforms that it says are harming children.

Children spend a lot of time online. Who should be protecting them?
Peter Cade/Getty Images
Sara Morrison is a senior Vox reporter who has covered data privacy, antitrust, and Big Tech’s power over us all for the site since 2019.

As more evidence emerges that internet platforms can harm children — and that tech companies either can’t or won’t do anything to protect their users — the government has understandably felt the need to step in. President Biden is doing just that with an executive action issued on May 23 that declares an “unprecedented youth mental health crisis” in the country, which he blames at least partially on the internet. The action was accompanied by an advisory from Surgeon General Vivek Murthy about the risks that social media may pose to children.

They join attempts by lawmakers to regulate the internet for kids. States have proposed and even passed laws that restrict what children can access online, some going as far as banning certain services entirely. On the federal level, several recently introduced bipartisan bills run the gamut from giving children more privacy protections to forbidding them from using social media at all. Some efforts also try to control the content that children can be exposed to.

Critics of such legislation point to privacy issues with age verification mechanisms and fears that forced content moderation will inevitably lead to censorship, preventing kids from seeing material that’s helpful along with what’s considered harmful.

We’re already getting a glimpse of what various factions in this country think the internet should look like. We might be getting a much better look soon.

Biden’s latest salvo in his fight against the internet

The president’s feelings about children online are well known at this point. He’s mentioned in two State of the Union addresses that he believes social media harms kids and violates their privacy. His May 23 executive action to “protect youth mental health, safety, and privacy online” attempts to marshal various parts of the administration to address it.

These include a new Task Force on Kids Online Health and Safety, to be headed up by the Department of Health and Human Services and the Department of Commerce. The Department of Education is being asked to improve children’s privacy in educational tools and issue policies on best practices for using internet devices in schools. The Department of Commerce will promote support services for child victims of online bullying and abuse. And the Department of Homeland Security and the Department of Justice will work with the National Center for Missing and Exploited Children on a database of child sexual abuse material, which can help online services detect when those images are uploaded onto their platforms.

At the same time, Surgeon General Murthy issued an advisory that outlined social media’s perceived risks and benefits to children, saying that “we do not have enough evidence to conclude that [social media] is sufficiently safe for them.” Lawmakers, the advisory said, can mitigate possible harm with policies such as age minimums, increased data privacy for children, and age-appropriate health and safety standards for platforms to implement.

“Our children have become unknowing participants in a decades-long experiment,” the advisory says, echoing Biden’s statements in the last two SOTU addresses.

A new federal push to protect kids online — that states would help enforce

Protecting children from online evils, real or imagined, is a tale almost as old as the modern internet. Some of those fears, we’re increasingly learning, are not unfounded. Recent studies say that kids’ mental health is at crisis levels, and social media is often pointed to as a major contributor. Facebook whistleblower Frances Haugen’s 2021 revelations that the company hid research that said its services hurt teens’ mental health — claims that the social media giant says are inaccurate — are also cited as a major motivating factor for the legislative action we’re seeing now. Some researchers say that a causal link between social media usage and harm to children’s mental health has not yet been established. The surgeon general’s advisory says there’s an “urgent need” for more research that would fill knowledge gaps, and calls on tech companies to provide their data to researchers to facilitate that.

That action most recently took the form of the Kids Online Safety Act, or KOSA, which was reintroduced on May 2. Cosponsored by Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT), this legislation would require platforms to implement several safeguards for users under 18. The bill is controversial for a few reasons, one of which is a so-called “duty of care” provision. This would mean that covered platforms have to prevent kids from being exposed to content that promotes or could contribute to mental health disorders, physical violence, bullying, harassment, sexual exploitation, abuse, and drugs, among other things.

On their face, these seem like good things for children to avoid. KOSA’s proponents have a point: social media platforms don’t merely host potentially harmful content — because their business model depends on keeping users’ attention however possible, that content ends up being pushed to kids. Free speech and civil rights advocates, however, are wary of any legislation that tries to control content, no matter how well-meaning. So several such groups, including Fight for the Future, the Electronic Frontier Foundation (EFF), and the American Civil Liberties Union, have come out against KOSA. At the same time, the bill already has the support of at least 30 senators from both sides of the aisle.

“We generally don’t like it when the government is trying to tell parents the correct way to parent their children,” said India McKinney, director of federal affairs at the EFF. “Yes, there is harmful stuff that happens online. That is absolutely true. But how do you define that in legislation, to make it clear what you mean and what you don’t mean, and in a way that platforms can [moderate]?”

The bill’s authors believe they’ve made it plenty clear in this latest version of KOSA, where definitions of harmful content are narrower and less open to interpretation than the previous congressional session’s version. For instance, “grooming” — which some on the right wing have adopted as their preferred term for pretty much any LGBTQ+ content — is no longer listed as an example of sexual exploitation. Along with about a third of the Senate, KOSA has the support of many children’s health and safety advocacy groups. Also, Lizzo.

Sen. Richard Blumenthal has made children’s online safety rules one of his major causes.
Bill Clark/CQ-Roll Call, Inc via Getty Images

KOSA’s opponents aren’t just wary of its provisions about content. They also don’t like the power it gives to state attorneys general to enforce it. Some see this as an opening for state leaders fighting a culture war to go after online platforms that host speech about transgender rights, abortion care, or mention that gay couples exist. Or, really, any other content that’s become politically advantageous to censor and can be interpreted to fall under KOSA’s definitions, narrow as they are.

While it may have seemed like a stretch just a few years ago, this highly politicized version of kids’ online safety has become a reality to reckon with in the midst of the latest moral panic that some Republicans have made the center of their campaign strategies. Some of these attorneys general and the states they represent have pushed laws that ban books or public school curricula if they contain sexual, LGBTQ+, or race-related content. The laws are vaguely worded enough that libraries and schools are banning books preemptively just in case someone finds something objectionable in them. Some states are trying to or already have passed anti-trans laws that ban or restrict gender-affirming care for kids and even adults. They’ve even tried to ban drag shows.

Those states could conceivably do something similar to the digital world if given the chance. It’s not lost on some of KOSA’s opponents that Sen. Blackburn represents Tennessee, the state that tried to ban drag shows from being performed for or near children, or that she’s made several anti-gay and anti-trans comments and votes. We also know that platforms tend to over-moderate to ensure they can’t get in trouble, as we’ve seen some of them do to sex and sex-work-related content in the wake of FOSTA-SESTA. The end result is censorship, be it forced or voluntary.

A cautionary tale from state laws

Some recently enacted state legislation shows us what state leaders want the internet to look like. These laws nominally pertain to children, but they impact adults, too.

A Utah law requires social media platforms to verify the ages of their users, which means people of all ages will likely have to submit some kind of verification to log into their social media accounts. The state passed another law that requires porn sites to verify visitors’ ages, which has prompted several porn sites to block Utah IP addresses entirely, saying it wasn’t possible for them to verify ages as the new law required.

Louisiana also banned children from visiting porn sites and requires those sites to verify visitors’ ages by proving their identities. While Pornhub implemented an age verification system to comply with the law, it noted that Louisiana-based traffic decreased by 80 percent after it went into effect. And sure, it’s possible that 80 percent was all children who could no longer access the site. It’s more likely that it was adults who could view that content legally but didn’t want to upload their IDs to be able to do so.

Meanwhile, Arkansas passed a law that requires users under 18 to get parental consent to use certain social media platforms (it’s so far unclear how ages will be verified). California has the Age-Appropriate Design Code, which requires online services to implement certain design features for younger users and limit the data that can be collected on them. Montana passed a law that would ban TikTok entirely, which isn’t exactly a child safety law but does very much affect children, with whom the platform is very popular. The list of other states considering children’s online safety bills goes on and on.

Federal legislation for kids’ online safety is much less likely to be passed than the state versions, as Congress is more divided and moves more slowly than many state legislatures. But there are bipartisan bills that have some potential — and, critics say, problems.

Along with KOSA, there’s EARN IT, which passed out of committee on May 4, setting it up for a vote in the Senate (the last two incarnations of EARN IT similarly passed out of committee, but never got a floor vote). Supporters say it will help law enforcement better fight child sexual abuse material. Opponents fear that EARN IT will be used to weaken or ban encryption for everyone. The Protecting Kids on Social Media Act, introduced last month, bans children under 13 from using social media and requires parental consent for children 13 and over. That would prevent children from seeing social media’s harms, but it would also keep them away from online resources that do some good.

And then there’s the sequel to the Children’s Online Privacy Protection Act, a 1998 law that gave children under 13 certain privacy rights and remains the only federal consumer online privacy law we have, even decades later. The Children and Teens’ Online Privacy Protection Act, or COPPA 2.0, was introduced on May 3 by Sens. Bill Cassidy (R-LA) and Ed Markey (D-MA). Markey was also behind the original COPPA. As a privacy bill, COPPA 2.0 doesn’t have the same content moderation issues that other bills do, but Markey has had a hard time getting it passed in previous sessions. And it stops short of giving privacy protections to adults, which privacy advocates, understandably, very much support.

Any law that covers people regardless of age, critics of these kinds of bills often point out, would take away the need to verify users’ ages — which can be a privacy violation in and of itself. Many of Louisiana’s porn-enjoying adults can probably attest to that. It could also solve or ameliorate some of the children’s safety issues without the need for problematic child-specific safety laws. But Congress so far hasn’t come close to passing that kind of privacy law after years of trying, so it seems unlikely that it will anytime soon.

Children’s online safety measures have been proposed and debated for decades, but they rarely went much further than that. Now, the threat that these ideas become law is very real, in part because the dangers online platforms present to kids are very real. But so is the possibility that kids’ online safety laws could be weaponized to censor content according to subjective and politicized views of what’s harmful. We’ve already seen what those views can do to school libraries. We may soon see what they’ll do to the internet.

Update, May 23, 3:45 pm ET: This article, originally published May 5, has been updated to add the surgeon general’s advisory and Biden’s executive action.

A version of this story was first published in the Vox technology newsletter.
