This piece by Professor Ellen P. Goodman was published by Tech Policy Press.

Twitter adopted a policy last week that forbids the posting of private photos and videos of someone without their permission. The reason, it said, was to protect the vulnerable, especially “women, activists, dissidents, and members of minority communities” who are harassed and silenced through unwanted exposure. It would make an exception if the posted “media and the accompanying tweet text add value to the public discourse or are shared in public interest”. 

This kind of public interest exception was new neither for Twitter nor for platforms generally:

  • Twitter’s moderation policies already stated that “sometimes it may be in the public interest to allow people to view Tweets that would otherwise be taken down [such as when] it directly contributes to understanding or discussion of a matter of public concern.”
  • TikTok says content “in the public interest that is newsworthy or otherwise enables individual expression on topics of social importance” stays up even if it violates the platform’s standards. 
  • Meta allows violating content to remain on Facebook and Instagram “if it’s newsworthy and if keeping it visible is in the public interest” – a topic about which Thomas Kadri and Kate Klonick wrote at length.

Immediately after Twitter announced its private information policy along with the exception, commentators worried that it would hurt the powerless. Would the explosive 2020 video of Amy Cooper calling the cops on birdwatcher Christian Cooper come down? Would video of the Kenosha shootings be taken down? What about video of police misconduct or militants in action? Twitter tried to anticipate these criticisms in its policy, saying that private media can stay up if it “contains eyewitness accounts or on the ground reports from developing events,” and also if it concerns a public figure.

But the truth is, there is no way Twitter can assure us it won’t take down important, newsworthy content of the sort that inspires movements or exposes injustice. That is the nature of editorial choices: as Twitter moves towards a more mature understanding of its responsibilities, it takes on more and more of the character and fallibility of an editor. However, because Section 230 shields Twitter’s judgments from legal scrutiny, we have to rely entirely on its voluntary disclosures.

We have been here before. In 2014, the European Court of Justice ordered Google to give Europeans the Right to be Forgotten (RTBF) – since codified in the GDPR – in search results. That meant that Google would have to, upon request, remove links to personal information that is “inaccurate, inadequate, irrelevant, or excessive” and holds no public interest. After a year of watching Google provide transparency-by-anecdote and gross statistics showing that it removed about 40% of the links it was asked to delist, I worked with my colleague Julia Powles to push for more transparency. Our point was that Google was creating a common law of content removal decisions – protected from almost any legal process – and the public should know its reasoning in order to evaluate the balance being struck between informational privacy and access.

We got more than 80 scholars to sign a letter asking for data, including “what sort of information typically gets delisted (e.g., personal health) and what sort typically does not (e.g., about a public figure), in what proportions and in what countries?” We said the public deserved this information because Google was making “decisions about the proper balance between personal privacy and access to information. The vast majority of these decisions face no public scrutiny, though they shape public discourse. What’s more, the values at work in this process will/should inform information policy around the world. A fact-free debate about the RTBF is in no one’s interest.” Six years later, Google has gotten better about revealing its RTBF decision-making, including providing annotations from its human reviewers and much more data about the requests it receives, but it still does not disclose enough.

Casey Newton says that Twitter, with its private information policy, is in essence voluntarily adopting something like the Right to be Forgotten. So let’s have at least as much transparency. Indeed, we can have much more. Because the delisting requests Google receives are private, it has to be careful not to “out” the requests by providing overly fine-grained information. Twitter, of course, has no similar constraint. What we need from Twitter is not perfect judgment, but an explanation of its judgment. This is how newsworthiness law is made. Twitter says it will consider context in making this judgment. That’s good, because context is exactly what social media too often strips from media content. But we should hear what context Twitter thinks is important. Ultimately, newsworthiness is an editorial choice having to do with motivation, subject matter, context, and power.

What do we want to know about Twitter’s policy implementation? It’s not as if there is a well-developed jurisprudence of “newsworthiness” for Twitter to conform to. As Amy Gajda writes, there has been an “absence of clear norms regarding newsworthiness” when people aggrieved by the publication of private information have sued publishers: “As long as journalists stuck to the standards of their field, courts treated them with deference.”

The Restatement of Torts definition of “newsworthiness” covers pretty much anything the news media thinks is interesting, unless it crosses the line into “a morbid and sensational prying into private lives for its own sake.” It has been the news media that has set the boundaries of newsworthiness. Journalistic codes of conduct and editorial standards have been made public. For most of the time that newsworthiness has existed as a concept, what editors thought counted was transparent to everyone – even if decisions about what not to publish were hidden. Now, though, as Gajda, Kadri, and Klonick point out, the newsworthiness choices of platforms are effectively hidden.

Bottom line? Twitter should make its news judgment public, especially given the inability of aggrieved individuals to depose Twitter decision makers. It has been noted that Twitter’s policy could result in the removal of photos taken in public places, which are usually fair game for journalism. Twitter may well decide that the “context” of being in a public place triggers the exception to its private information policy. We should know that. Twitter says its private information policy is designed to protect the vulnerable. How it evaluates power in the dynamics of posting private information is therefore a key issue to interrogate. Twitter owes this information to its users.