
“Let parents decide” what kids can do online, a new lawsuit argues


The Computer and Communications Industry Association (CCIA) and NetChoice, two prominent tech industry trade groups, have filed a lawsuit against a Florida law that prohibits teenagers from using social networks. Their suit, filed Monday in the U.S. District Court for the Northern District of Florida, raises First Amendment concerns about Florida’s House Bill 3, which the groups also portray as infringing on parental rights.

“Florida House Bill 3 is the latest in a long line of government efforts to limit new forms of constitutionally protected expression based on concerns about their potential impact on minors,” the complaint opens. “In the past, books, movies, television, rock music, video games, and the Internet have all been accused of being dangerous to minors. Today, a similar debate rages over ‘social networking’ websites.”

“This debate is important and the government can certainly take part in it,” the tech groups continue. “But the First Amendment does not take kindly to government efforts to address those concerns by fiat. The Constitution instead leaves the authority to decide what speech is appropriate for minors where it belongs: with their parents.”

“Like adults, minors use these websites to engage in a variety of First Amendment activities”

The law, signed by Florida Gov. Ron DeSantis (R) last March, requires social media platforms to flatly reject accounts from people under the age of 14 (or anyone the company believes to be under 14) and to ban 14- and 15-year-old users unless they get parental permission. The law also requires websites and apps to verify the age of all visitors if the platform publishes material that is “harmful to minors,” a category broadly defined to include all sorts of content that depicts or describes sexual conduct, “appeals to the prurient interest,” and is deemed by the state to have no “serious literary, artistic, political or scientific value” for people under the age of 18. “While the bill does not specify exactly how social media platforms should verify a user’s age, with such serious consequences for violations, companies will likely require users to provide their government-issued ID, scan their face, or otherwise hand over sensitive information under the law,” Reason’s Emma Camp noted earlier this year.

The new lawsuit by NetChoice and the CCIA challenges Section 1 of HB3, the section dealing with teenagers and social media.

Opponents of such measures often argue that they are a privacy nightmare, creating a repository of personal information vulnerable to hackers and infringing on the rights of adults in the name of protecting children. That’s all true, but it’s also true that children have First Amendment rights, too.

We must also oppose laws like HB3 because they violate children’s free speech rights.

This is the argument CCIA and NetChoice use in their complaint:

Like adults, minors use these websites to engage in a range of First Amendment activities on a wide range of topics. Minors use online services to read the news, connect with friends, explore new interests, follow their favorite sports teams, and research their dream colleges. Some use online services to hone new skills or showcase their creative talents, including photography, writing, or other forms of self-expression. Others use them to raise awareness of social causes and to engage in public debate on current topics. Still others use them to create communities and connect with others who share similar interests or experiences, which is especially helpful for minors who feel isolated or marginalized at home, or who seek support from others who understand their experiences.

Reasonable people can disagree about what kinds of platforms are appropriate for minors and at what ages, the complaint suggests, and that’s why such decisions are best left up to parents. There are many tools available that allow parents to monitor and limit their children’s online activities, and these can serve the purpose of protecting children in a less “draconian” way than age verification rules and blanket bans.

“In short, in a country that values the First Amendment, the best government response is to let parents decide what speech is appropriate for their minor children, including with tools that make it easier for them to limit access if they choose to do so,” CCIA and NetChoice argue.

“Burdening protected speech that citizens find particularly engaging is especially inconsistent with the First Amendment.”

CCIA and NetChoice allege more than violations of minors’ First Amendment rights and parents’ decision-making rights in their complaint. They also criticize HB3’s strange parameters, which cover only social media platforms where 1) 10 percent or more of daily active users under the age of 16 spend an average of two hours per day or more on the platform on the days they use it, and 2) the platform employs personalized recommendation algorithms and features that the law defines as “addictive features” (such as endless scrolling, push notifications, autoplay features, live streaming features, and “personal interactive metrics”). The law specifically excludes services intended solely for email or direct messaging.

The groups point out that HB3’s first section “does not focus on any specific content that may pose a particular risk to minors,” nor on “identifying specific means or forums for communication that those seeking to exploit minors are most likely to use.” Instead, it focuses on how much minors seem to like a particular platform and “whether it uses tools designed to attract them to content they might like.”

“By this metric, the state could restrict access to the most popular segments of almost any medium of constitutionally protected speech, whether engaging video games, page-turning novels, or binge-worthy television shows,” the groups suggest. “Burdening protected speech that citizens find particularly engaging is especially inconsistent with the First Amendment.”

The groups also point to practical challenges in complying with HB3. The law lists some broad parameters for how platforms should determine who is a child’s parent or guardian, but they seem like a mix of the toothless and the downright invasive.

There are many similar laws and challenges

Lawsuits against social media age verification rules in other states, including Utah and Tennessee, are currently ongoing. And a number of such laws have already been rejected by federal judges.

So are a number of lawsuits challenging age verification rules for adult websites.

Several states have introduced requirements similar to Florida’s rule on sexually oriented content online. So far, the federal courts have been pretty good at treating these unconstitutional messes for what they are. That includes cases out of California, Arkansas, Texas, Indiana, and Mississippi, although in the Texas case the appeals court overturned the lower court’s block on mandatory age verification, and in April the U.S. Supreme Court declined to block the appeals court’s decision. However, in July, SCOTUS said it would take up the case in full sometime during the term that began this month.

Laws requiring age verification on adult sites have become popular among some Republicans, including those behind Project 2025, who see them as a back door to banning porn. In many states with age verification laws for adult content, porn platforms have blocked viewers from those states rather than complying with the requirement to verify IDs.


More sex and technology news

• Using tracking technology to collect data about website visitors does not count as illegal wiretapping, Massachusetts’ highest court has held. Regarding the state’s wiretapping act, which prohibits the interception of “wire and oral communications,” the justices wrote that they could not “conclude with certainty that the Legislature intended ‘communication’ to extend so broadly as to criminalize the interception of web browsing and other such interactions.”

• A federal judge extended the temporary restraining order against Florida officials who threatened television stations that aired ads promoting a reproductive freedom initiative that will be on the ballot this year. (More on the case from Reason here.)

• The family of a teenager who died by suicide is suing over an AI chatbot that they claim drove him to it.

• “Part of being internet literate is realizing that the algorithm is only giving you a suggestion, not fully turning your brain over to the algorithm,” writes Mike Masnick in a Techdirt piece ridiculing yet another New York Times article that mischaracterizes Section 230. “If the problem is people giving their brains over to an algorithm, it can’t be solved by banning algorithms or adding accountability to them.” Masnick goes on to explain why algorithmic recommendations are protected by the First Amendment — and, of course, should be.

Today’s image

Indianapolis | 2012 (ENB/Reason)