Professor Ellen P. Goodman

It’s no surprise that Mark Zuckerberg has called for “updating the rules for the Internet” – he signaled as much last April in his testimony before the U.S. Senate, inclining his head towards the yoke of regulation.  But things have changed significantly since then and Zuckerberg has missed his moment to lead on the design of regulation addressing toxic content, online accelerants of hate, security breaches, abusive data practices, and all the rest.

If it’s curdled public opinion Zuckerberg is seeking to refresh, his ideas are not bold enough. If it’s clueless politicians he is seeking to enlighten, they have advanced beyond these ideas with concrete policy proposals. Senator Mark Warner has laid out a roadmap for big tech regulation. Senator Elizabeth Warren wants to break up big tech. The Australian Competition and Consumer Commission has laid out its big ideas to rein in big tech in its Digital Platforms Inquiry. The UK Parliament issued similar calls for action. And so it goes.

If Zuckerberg believes that self-regulation is insufficient, he’s right. But he’s a poor messenger when Facebook’s own attempts at self-regulation have been so paltry. On its own initiative, it could by all accounts have unilaterally improved the information ecosystem. It could have further reduced the amplification of toxic content, increased the salience of outlets that follow journalistic standards, and increased transparency. It has not done enough to stop the damage – most evident in New Zealand – and is now asking the government to tie its hands to the mast, a little, so long as it doesn’t hurt too much.

There has been so much good reporting on how Facebook amplifies content and benefits from diseased conversations: Zucked, Anti-Social Media, Custodians of the Internet, the work of Renee DiResta and Carole Cadwalladr on disinformation, Surveillance Capitalism. One understands by now that “it’s the business model, stupid.” Policies that preserve and defend the model will fail the public, while benefiting the business.

For each of the four areas that Zuckerberg has flagged for global regulation, there is more that Facebook could have done and should still do; and there are stronger solutions less accommodating to Facebook’s business model.   

Take harmful content. Zuckerberg proposes that an independent organization “set standards governing the distribution of harmful content” and that regulators “set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.” In terms of self-regulation, he says, internet services should be transparent about content removals, following Facebook’s lead. Of course content moderation can be done better, but it’s a rearguard action. Facebook’s own white nationalism ban, while welcome, came only after the Christchurch catastrophe.

What content standards does Zuckerberg envision? Something like Facebook’s long-secret 27-page content moderation guide? Or the old Television Code for broadcasters, adopted in 1951 by the National Association of Broadcasters? That self-regulatory set of standards was designed to stave off government content regulation, and it is what originally gave us TV shorn of profanity, nudity, and frisky bedroom scenes. The code was dissolved in antitrust and First Amendment litigation in 1983, and a uniform internet code of conduct – even if it could be constructed – might have similar problems.

Transparency is a piece of the solution, but not the way Facebook does it. Zuckerberg touts his company’s twice-yearly transparency reports on its enforcement of community standards. This is hardly the gold standard of transparency and is in fact considerably worse than Google’s reports on Right to Be Forgotten compliance (which are also inadequate). Transparency should be much closer to real time and should provide information on the source of the speech that is taken down, the kinds of speech, and the users affected.

The more ambitious solutions to harmful content are structural, including revision of Section 230 to subject platforms (or at least the most dominant ones) to ordinary liability for content harms, and antitrust action to break up companies with data dominance. Zuckerberg doesn’t touch such structural reforms, preferring instead uniform “Internet rules.” Such a flat response ignores Facebook’s Himalayan influence. Not every internet player needs to abide by uniform content rules. Some players need more Section 230 immunity than others. We should be worrying about platforms with outsized market share.

On election integrity, Zuckerberg wants regulation to create common standards for “verifying political actors,” legislation to address political advertising throughout the year (not just at election time), and an exploration of “how political campaigns use data and targeting.”  He notes that Facebook has tried to increase political ad transparency with its new searchable ad library. This is a good step, and the company has said it supports the Honest Ads Act proposed in the U.S. Congress. But let’s not forget what Facebook has done to retard transparency, for example when it blocked ProPublica’s transparency tool that made it easier to track the reach of political ads.  

Facebook acknowledges that monitoring must be year-round and include controversial issue ads. Good. Really, though, an ad database should include all ads. Zuckerberg should take the next step and embrace Professor Philip Howard’s proposal of a comprehensive ad archive: “A fully searchable public archive, maintained by an independent ad council financed by a fraction of ad revenues, will give democracy a healthy shot of algorithmic transparency.” Facebook could go further and insist that dark money groups list their donors, ensuring the kind of transparency that even the Supreme Court majority in Citizens United expected the Internet to provide. In the U.S., at least, there will not be a mandate any time soon for Facebook or any other company to contribute to such an archive, but nothing stops Facebook from doing it on its own.

On privacy, Zuckerberg says he wants GDPR to be adopted globally. Is it churlish to observe that Facebook has aggressively lobbied against the very privacy regulation Zuckerberg now urges global leaders to adopt? Putting that aside, the GDPR approach is on its merits insufficient. There almost certainly will be more widespread adoption of GDPR-like privacy laws, as California has already shown. But they will not fundamentally disturb an ecosystem built on intrusive data harvesting. The notice and consent these laws require can too easily be engineered around. The informed consent model imported from medicine is probably not the right one; the negative externalities of Facebook’s data practices are more like those of a toxic chemical spill, harming people who never agreed to anything. Certain data practices need to be banned outright. It’s no surprise that that’s not on Zuckerberg’s menu.

Finally, we come to data portability. Zuckerberg says, “If you share data with one service, you should be able to move it to another. This gives people choice and enables developers to innovate and compete.” Data portability is a great idea, and here too, Facebook could already be doing much more to make it happen. Its efforts so far have been lame. You can’t, for example, effectively take your friends with you to a rival platform. Zuckerberg flags the Data Transfer Project as Facebook’s contribution to portability, but as the Electronic Frontier Foundation reports, this won’t meaningfully help consumers “to move their data beyond a small cabal of incumbent services.” Real freedom of movement among platforms, as among rival wireless providers, requires both portability and interoperability. This is something Facebook can advance right now, without regulation, by letting users export their connections and tracking data, and by letting developers use Facebook’s API (or open standards) to build products that compete with or modify the platform experience.

At a moment when Facebook is under scrutiny for discriminatory housing ads, for its handling of consumer privacy, for facilitating the spread of the Christchurch video and, indeed, the underlying racist ideology, for treating its content moderators badly, and for its voracious market expansion, it is natural for the company to try to “get out ahead of regulators.” The problem is that it’s too late. Facebook is too far behind. It has recognized as much with its announced “pivot to privacy” – a focus on encrypted private messaging instead of the public social network. If communication migrates more fully to messaging, Facebook can slough off the regulation it may now be getting behind. It will have thrown regulators off the scent.