Why the internet as we know it could be over by the end of the summer
Could the U.S. be headed for an over-filtered, balkanized splinternet? Nine men and women in robes could decide.
The future of the internet is in the U.S. Supreme Court’s hands.
As social networks have grown in power and influence, content moderation has become a political issue. Some argue for more, others for less, and this year the court could determine which direction companies must take.
Oral arguments begin Feb. 21 in Gonzalez v. Google, and at stake is the current interpretation of Section 230 of the Communications Decency Act, or CDA. It’s a foundation of the modern internet, granting legal immunity to sites for content posted by third parties, like users’ message board posts, tweets, and YouTube uploads.
In the case, the family of Nohemi Gonzalez, a U.S. citizen killed in a 2015 terrorist attack in Paris, argues that Google’s algorithms promoted ISIS content on YouTube, violating federal antiterrorism law and voiding the company’s Section 230 immunity. In a similar case the court is also set to hear in February, Twitter v. Taamneh, Twitter is appealing a ruling related to content moderation and a 2017 ISIS attack in Istanbul.
The Gonzalez case has attracted interest from companies including Craigslist, Microsoft, Meta, and Reddit, which have filed amicus curiae briefs asking the court to protect Section 230. Two co-authors of the 1996 CDA, Sen. Ron Wyden (D-Ore.) and former Rep. Chris Cox (R-Calif.), argue that the same algorithms that recommend content are also essential for removing harmful content.
“The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable internet users and platforms alike,” Wyden and Cox wrote in a statement. “Section 230’s protection remains as essential today as it was when the provision was enacted.”
The case has the potential to set new precedent on the liability of social media companies. Google argues that undermining Section 230 could make internet recommendations less useful: companies might over-filter for fear of litigation, or under-moderate to avoid knowledge of harmful content on their sites. “The stakes could not be higher,” Google general counsel Halimah DeLaine Prado wrote in a blog post.
“A decision undermining Section 230 would make websites either remove potentially controversial material or shut their eyes to objectionable content to avoid knowledge of it,” Prado wrote. “You would be left with a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content.”
Meanwhile, in Florida and Texas, lawmakers have called for less moderation, passing laws that bar large social media companies from censoring users based on political viewpoint. “If Big Tech censors enforce rules inconsistently, to discriminate in favor of the dominant Silicon Valley ideology, they will now be held accountable,” Florida Gov. Ron DeSantis said of his state’s law.
The laws evoke a future balkanized splinternet where Americans have different online experiences depending on which party controls their state government. The U.S. Supreme Court is considering hearing cases challenging both laws, and last week it asked the Biden administration for its views.
NetChoice, a tech industry trade group suing Texas over its law, argues that it violates social media companies’ First Amendment rights by imposing government mandates on what they can allow on their platforms. The laws are “crazy and also very sloppy” and have “all kinds of details that nobody thought through,” Stanford Center for Internet and Society lecturer in law Daphne Keller told The New Yorker.
These cases speak to the widespread concern Americans have over social media companies and algorithms — the new gatekeepers — and show there’s dissatisfaction with the status quo.
While most can agree that algorithms that radicalize users by serving them extremist content are bad, and that allowing a few big companies to control the state of online discourse isn’t great, these cases could have implications in 2024 and beyond that reach far past the narrow partisan lens through which they’re viewed today.
Have you seen this?
Surgeon General says 13 “too early” to join social media. “I, personally, based on the data I've seen, believe that 13 is too early,” U.S. Surgeon General Vivek Murthy said Saturday. “It's a time where it's really important for us to be thoughtful about what's going into how they think about their own self-worth and their relationships and the skewed and often distorted environment of social media often does a disservice to many of those children.” TikTok, Instagram, and Twitter require users to be at least 13 to use their apps, but Murthy suggested parents wait until their kids are 16 or 17. [CBS News]
Twitter has a new font. You’ll never be punked by E10nmusk ever again. Since it began offering paid blue verification badges, Twitter has been plagued by impersonators. In an apparent attempt to fight back, the site has rolled out a new font for Twitter handles that better differentiates the numeral 0 from the letter O, and the numeral 1 from a lowercase L.
11 of the sleaziest snake oil ads from Trump’s Truth Social. Former President Donald Trump’s social network has failed to attract big-name advertisers, but there are ads (called “Sponsored Truths”) for “free gold,” Trump-themed insurance, and “vaccination exemption” cards. [Gizmodo]
It’s true, campaign fundraising is out of control. Here’s how to fix it. Digital fundraising has grown too broad, and supporters find the tactics “annoying” and “rude.” [YELLO]