
Big Tech's uneasy balance of capitalism, censorship

Big tech companies' enforcement of acceptable use policies following the Capitol riots is being questioned, as some say First Amendment rights to free speech should extend to social networks.

Major tech companies cut access to their platforms after the U.S. Capitol riots by removing applications and suspending accounts. Some see these moves as censorship, but the companies point to their acceptable use policies, which are selectively enforced by algorithms, moderators or both.

Facebook, Twitter and Instagram removed President Donald Trump from their platforms following the Jan. 6 Capitol riots, citing "risk of further incitement of violence." Twitter suspended more than 70,000 accounts that the company identified as promoting conspiracy theories. Other social media platforms followed suit in suspending accounts, with Reddit banning the subreddit r/donaldtrump and YouTube suspending Trump's channel.

A lesser-known social platform called Parler, which billed itself as a platform for "free, unmoderated speech," quickly grew to fill the void, until AWS removed the site from its cloud service on Sunday. AWS cited Parler's inability to moderate posts that called for violence as the reason for its removal.

Speech restrictions on social platforms

Social media is so pervasive in American culture that citizens may assume they have a right to use it and that their First Amendment rights extend to those platforms. However, most First Amendment experts say the constitutional right to free speech does not apply to private institutions.

That means the moves by Twitter, Facebook and Instagram to remove the accounts of Trump and others after a violent mob of his supporters broke into the Capitol building on Jan. 6 were entirely within the legal rights of the tech companies.


"If it's not the government, then it's not the First Amendment," said Gregory Sullivan, a First Amendment lawyer in Hingham, Mass., and a professor at Suffolk University Law School in Boston. "Today, there's nothing that prevents any social media site from banning any individual user whose post they dislike."

Meanwhile, AWS' removal of the social media platform Parler has drawn criticism from some quarters, including both conservative fans of the site and left-wing critics who say it illustrates the unchecked power of big tech and who fear the vendors could similarly exile liberal sites and voices they don't like.

Chris Carter, CEO of Approyo, an SAP partner in Brookfield, Wis., and a self-described technologist and "Reagan Republican," said AWS' ejection of Parler -- which Parler has challenged in a breach of contract lawsuit -- was ill-advised. Carter suggested that the move indicates that AWS or other big tech platforms could use that same power against tech vendors they dislike.

In a similar vein, the government's antitrust lawsuit against Google is based in large part on allegations that Google purposely lowers the search result rankings of customers it considers competitors.

"This should be eye-opening for all Americans," Carter said, emphasizing that he found the Jan. 6 invasion of the Capitol building abhorrent. "It's a slippery slope."


Carter added that in his opinion, Parler should never have opted for AWS for cloud hosting, given that the vendor's parent company, Amazon, is led by Jeff Bezos, who has been a target of Trump. Rather, Parler could have gone with Chinese cloud giant Alibaba or even Oracle, which is trying to compete with AWS and Google in the public cloud market and is run by noted conservative tech figure Larry Ellison.

While right-wing activists and Trump supporters reeled from what many of them saw as coordinated actions by Amazon, Facebook, Google and Apple, some left-wing free speech advocates also decried the moves.

In a long Substack post titled "How Silicon Valley, in a Show of Monopolistic Force, Destroyed Parler," journalist Glenn Greenwald blasted big tech for allegedly stifling speech on the internet.

"If one were looking for evidence to demonstrate that these tech behemoths are, in fact, monopolies that engage in anti-competitive behavior in violation of antitrust laws and will obliterate any attempt to compete with them in the marketplace," Greenwald wrote, "it would be difficult to imagine anything more compelling than how they just used their unconstrained power to utterly destroy a rising competitor."

Social media companies have been pressed to figure out how to manage and monitor users on their platforms in recent years, given the increase in misinformation and violent rhetoric.

Since the 2016 U.S. presidential election, Facebook and Twitter have increasingly taken on policing content shared on their global platforms, implementing policies to flag, fact-check and tamp down the spread of incendiary comments.
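At its simplest, that kind of algorithmic enforcement amounts to pattern matching that routes posts to human reviewers. The sketch below is a toy illustration, not any platform's actual system; real platforms rely on machine-learning classifiers and large review teams, and the patterns and function names here are invented for the example.

```python
# Toy sketch of rule-based content flagging. Real platforms use far more
# sophisticated ML classifiers plus human review; the patterns and names
# below are invented for illustration only.
import re

# Illustrative patterns a policy team might route to human review.
FLAG_PATTERNS = [
    re.compile(r"\bincite\w*\b", re.IGNORECASE),
    re.compile(r"\bstorm the\b", re.IGNORECASE),
]

def needs_review(post: str) -> bool:
    """Return True if the post matches any flagged pattern.

    A match does not delete the post; it flags it for a human
    moderator, mirroring the flag-then-review flow described above.
    """
    return any(pattern.search(post) for pattern in FLAG_PATTERNS)

if __name__ == "__main__":
    print(needs_review("Let's talk about the election results."))  # False
    print(needs_review("We should storm the Capitol."))            # True
```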

Andy Sellars, a law professor at the Boston University School of Law and director of the Technology Law Clinic, said the recent suspension of accounts puts social media companies in new territory.

"The tech platforms have been at an interesting cross section when it comes to speech rights for a very long time," Sellars said. "They have consistently held that they have an editorial right to decide who gets to use their platform, and I think it's a fairly accurate way of looking at the First Amendment doctrine … It's only fairly recently that they are appreciating the greater moral responsibility that comes with that power, to decide who actually should be speaking on their platform."

Use policies and agreements

Facebook and Twitter have justified suspending accounts that violate their user policies and agreements. Both companies have outlined acceptable use policies for their platforms, including prohibitions on violent content, and reserve the right to "immediately and permanently suspend" any account determined to be in violation of those policies.


Twitter's rules and policies say a user "may not threaten violence against an individual or a group of people" and prohibit the glorification of violence. Facebook also has a Community Standards document outlining appropriate and inappropriate behavior. In it, the company affirms it can remove "language that incites or facilitates serious violence," as well as "remove content, disable accounts and work with law enforcement when we believe there is genuine risk of physical harm or direct threats to public safety."

User policies such as these leave little recourse for individuals whose accounts are suspended or terminated, Sellars said.

"The terms of service of all of these platforms tend to be very protective of the companies' right to delete accounts, suspend accounts and delete content, largely at their discretion," Sellars said.

Last week's riots served as a "wake-up call" for social media companies that have largely stayed away from policing content except when they felt it was legally required, Sellars said.

"Slowly, over the Trump administration, they've started to become more active on these questions, and last week was a key moment in how they approach these things going forward," he said.

Calls for social media regulation

The U.S. government has not passed major laws pertaining to social media companies since Section 230 of the Communications Decency Act in 1996. That law ensures that companies like Facebook and Twitter are not legally responsible for the content their users post and are not to be treated as publishers of it.

Policy experts said allowing the president to use social media platforms to spread misinformation and hate speech was fully within the scope of the law. Likewise, when Facebook and Twitter deplatformed the sitting U.S. president -- and Amazon, Apple and Google cut off Parler -- that was the companies' prerogative as well. The fact that both actions were legal is the problem.

"You do need some sort of regulation," said Mark MacCarthy, nonresident senior fellow in governance studies at the Center for Technology Innovation at the Brookings Institution. "You can't just leave these guys alone."

Derek Bambauer, professor of law at the University of Arizona, said that because social platforms can be used as megaphones to incite violence and provide material support to terrorists -- speech that is not protected by the First Amendment -- the government has reason to regulate speech on social media to some extent.

So far, Facebook and Twitter have been subject only to market forces -- sometimes removing illegal or harmful content and other times ignoring it. There was tremendous pressure on these platforms to mute President Trump's inciting comments.

"They have a moral obligation and probably an obligation to their customers to have some sort of regulation," said James Waldo, professor of policy at the Harvard Kennedy School. "If the government decides … that these platforms are incapable of regulating themselves, then it has certainly been the role of government to go in and create regulations."

Regulating social media platforms is a tricky task, Waldo said. However, it can be done. There are many suggestions floating around Congress and policy circles for how to create an environment in which social media platforms can better police themselves.

Politicians from both sides of the aisle have lobbed complaints at these platforms for years, with the left criticizing social media companies for their lack of action in quelling the rhetoric of the president and his supporters, and the right speaking out against the alleged silencing of right-wing speech. This mutual frustration led to some bipartisan efforts at reforming Section 230.

Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.) introduced the Platform Accountability and Consumer Transparency Act, or PACT Act, in late 2020. It stipulates that providers of "interactive computer services," such as social media platforms, must publish their acceptable use policies and provide a complaint system users can turn to if those policies are violated. The bill would create an enforcement commission and subject noncompliant companies to penalties.

Another bill is the Online Consumer Protection Act, which Rep. Jan Schakowsky (D-Ill.) has announced plans to introduce this month. The bill would require platforms to disclose their policies regarding, among other things, the incitement of violence in "easily understood terms," and would hold them liable for violating those requirements, despite the liability shield otherwise afforded them by Section 230.

According to MacCarthy, the PACT Act and the Online Consumer Protection Act are the kind of Section 230 reforms that could lead to healthier social media ecosystems, in which users have transparency into platforms' content rules and the enforcement actions taken under them.

"What everyone wants from the platforms is the Goldilocks solution," Bambauer said. "It must regulate just enough. Not too little, not too much."

MacCarthy pointed to the kinds of stringent regulations Europe imposed on Google as a good example for the U.S. to follow. European courts assigned Google the responsibility for figuring out when a search request should not return certain results. The courts told Google to balance the public's interests in privacy and free speech. They set standards by which Google could make its decisions, and set up a system by which the courts could review the content if Google made a mistake.

"We haven't done that in this circumstance," MacCarthy said of the U.S. "We just said, 'You're a private company. Do whatever you want.'"

Not everyone agrees that Section 230 should be modified to prevent the kind of speech that led to the events of Jan. 6.

"If I post something that's illegal, like libel or child pornography, the company is not responsible, I am," said Milton Mueller, program director for Cybersecurity Policy at Georgia Institute of Technology.

Common communications carriers like telephone companies are not expected to police speech, and social media platforms shouldn't be either, Mueller said.

Hans Klein, an associate professor in the School of Public Policy at the Georgia Institute of Technology, said allowing the government to regulate speech online will narrow the content on social media to what the government finds acceptable. He added that he fears this will, in turn, stifle dissent.

Suffolk University Law School's Sullivan noted that a key Supreme Court case involving free speech and a private company offers a glimpse of a potential change should social media platforms be considered public forums subject to First Amendment speech rights.

The case he cited is PruneYard Shopping Center v. Robins from 1980, in which the court ruled that California could interpret its state constitution to protect political protesters from being evicted from a shopping center in Campbell, Calif. High school students opposing a U.N. resolution against Zionism had set up a table at the shopping center to distribute literature and solicit signatures for a petition. They sued the company that owned the center, alleging their free speech rights had been violated.

Sullivan said the legal principle at play in the case was that the court in effect equated the private mall with the classic public town common, on which people espousing any views, short of incitement to riot, historically have had a constitutional right to speak freely.

"Someday, will social media platforms -- Twitter, Facebook etc. -- get footing the way the PruneYard Shopping Center got?" he said. "Whether or not PruneYard will ever apply to social media platforms is anybody's guess at this point."

Service providers pull the plug

While the violent Capitol riots may have caused a pivotal change in content on social media moving forward, the waters can become murkier the further down the technology stack you go, Sellars said.

Case in point: Parler. The Twitter alternative has attracted right-wing extremists who sought an unmoderated communication platform. That lack of moderation was cited as the reason Google and Apple pulled it from their app stores. More damaging, though, is that its cloud infrastructure provider, AWS, has kicked Parler off its service. This week, Parler filed a lawsuit for breach of contract and anti-competitive behavior against AWS.

AWS' Acceptable Use Policy outlines a series of forbidden uses of its cloud services, including "any activities that are illegal, that violate the rights of others, or that may be harmful to others, our operations or reputation."

The policy also includes provisions covering network abuse and security violations, and states that AWS may disable or remove offending content. AWS competitors such as Microsoft and Google have similar policies.

In response to the lawsuit, AWS said in a court filing that it had made its concerns about violent content known to Parler and had requested that the content be removed before it took steps to suspend the site.

"AWS suspended Parler's account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition," according to the filing.

Parler could find a new home at a smaller web hosting provider, but that would likely constrain the app's scalability and reliability compared with the redundant, globally distributed infrastructure a hyperscale cloud provider offers.

In a message posted to Parler before it was shut down, CEO John Matze said the service could be down for up to a week as the company "rebuild[s] from scratch."

He added that Parler had prepared for events like this by never relying on Amazon's proprietary infrastructure and by building on bare metal. This implies that Parler's mobile app and website didn't make heavy use of native AWS services, which can provide performance and other advantages but lead to vendor lock-in. However, the lawsuit Parler filed against AWS uses different language.

"[B]oth the apps and the website are written to work with AWS's technology," the complaint states. "To have to switch to a different service provider would require rewriting that code, meaning Parler will be offline for a financially devastating period."

Parler couldn't immediately be reached for comment about the particulars of its architecture. How quickly the company can get its services up and running again, if at all, remains to be seen.

Given the circumstances of Parler's ejection from AWS and its resulting persona non grata status with other major providers, finding a new home that offers equivalent services could be difficult. Parler could rent dedicated servers from a provider or move to a colocation model; the latter would require Parler to purchase, install and test its own hardware inside a colocation vendor's data center, at a much higher cost than AWS.

Shaun Sutner is news director for TechTarget's Information Management sites. Find him on Twitter at @ssutner.

Chris Kanaracus is news director for TechTarget's Cloud Computing and DevOps sites. Find him on Twitter at @chriskanaracus.

Makenzie Holland is a news writer covering big tech and federal regulation. Find her on Twitter at @m_holland6.

Maxim Tamarov is a news writer covering unified communications for TechTarget. Find him on Twitter at @MaximTamarov.

Bridget Botelho, Editorial Director, News, contributed to this report.
