
FCC Will Clarify Section 230 Rules on Content Moderation


The Federal Communications Commission said it will clarify the rules on the legal liability protections that exist for social media companies, as pressure grows over how they manage what users post on their platforms.

The agency will clarify the meaning of Section 230 of the Communications Act as it relates to moderating user-generated content on websites, FCC Chairman Ajit Pai said. The move follows an executive order targeting technology companies such as Google and Facebook, signed by the president in May. The executive order came after Twitter labeled two posts by the president about mail-in voting as containing “potentially misleading information.”

"As elected officials consider whether to change the law, the question remains: What does Section 230 currently mean?" Pai asked in his statement. "Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230."

A provision of the 1996 Communications Decency Act, Section 230 protects companies that host user-created content on their websites from lawsuits over what a user posted. Under the law, providers and users of interactive computer services shall not be held liable for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." The law also draws a distinction between a platform, which merely hosts content that others create, and a publisher (or speaker), which curates or creates content itself.

For example, if a user says something about another person on an online platform, the second person cannot hold the company responsible for allowing the first person to say it. The law protects speech, not intellectual property claims or illegal content: if a company knowingly allows users to post illegal content, it is liable and can be sued.

While the current discussion has centered on social media platforms such as Twitter and Facebook, this law protecting free expression online applies to virtually any online platform that allows users to post. That list includes sites such as YouTube and Wikipedia, as well as internet service providers such as AT&T, Comcast, and Verizon. The protections mean these platforms and sites can’t be sued either for taking down content or for leaving it up. Technology companies say Section 230 shields them from liability for what users post while giving them the space to moderate harmful content.

Lawmakers have been talking about reforming Section 230, but they are split on what they think is wrong with the law. Some would like to see companies take a more active role in moderating content, especially hate speech and disinformation. Others claim the platforms are abusing the protections to censor certain types of speech, a claim the companies have repeatedly and strongly denied.

“Members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230 of the Communications Act,” Pai said. “Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Depending on the form the “clarification” winds up taking, some platforms may decide to take a more hands-off approach to moderation, which would allow more disinformation and fringe conspiracy theories (such as QAnon) to proliferate online. Other companies may decide to curate and screen content the way a news publisher would, which would change the user experience (and perhaps the company’s business model), since the decision of what gets posted would shift from the user to the platform.

The FCC’s efforts to make platforms liable for editorializing and content takedowns “make no sense,” Free Press Senior Policy Counsel Gaurav Laroia said in a statement. Websites have a First Amendment right to disassociate themselves from speech they disapprove of, and Pai’s moves against Section 230 could expose them to liability if they remove lies told by political figures, or add context to clarify misleading statements and disinformation.

“[Pai] declares himself a champion of the First Amendment and claims he doesn’t want heavy-handed internet regulation — then pushes policies that stifle free expression online,” Laroia said.

Despite the announcement, any decision on how Section 230 should be interpreted would take months: the full commission would need to meet to discuss it, and there would likely be a public comment period to collect input from outside the commission. Depending on the outcome of the presidential election, there’s no guarantee that Pai would even remain chairman long enough to complete the process.

"The timing of this effort is absurd. The FCC has no business being the President's speech police," Jessica Rosenworcel, a Democratic FCC commissioner, posted on Twitter.

With the presidential election less than 20 days away, the timing of the FCC’s decision to clarify the rules is highly political, especially since Facebook and Twitter decided to restrict how users could share a New York Post story containing unverified claims about the son of presidential candidate Joe Biden. Facebook said the story was eligible for third-party fact-checking, while Twitter blocked links to the story, citing a 2018 rule against posting hacked or stolen information.

“It’s no coincidence that this charade is happening during the final weeks of the 2020 presidential election,” Laroia said. “The Trump administration and its FCC allies are trying to bully and intimidate social-media companies into rolling back their content-moderation efforts for election-related disinformation.”