One year ago, the European Court of Justice, in Google Spain v AEPD and Mario Costeja González, ordered search engines to respond to users’ requests to delist results on searches of their names that allegedly violate user privacy rights. This has become known as the Right to Be Forgotten (RTBF) ruling.

Today, 80 technology scholars from five continents and 57 universities released an open letter to Google, urging it to provide more transparency without compromising privacy. Julia Powles and I organized the effort, finding that scholars are frustrated with the state of the data.

Around the world, some hate the decision and some love it. Partisans debate whether we should have an RTBF at all. What are the privacy benefits? What are the information costs? In the meantime, we know next to nothing about what’s happening on the ground.

Having fielded more than 250,000 requests, Google is in possession of the vast bulk of information about who wants information delisted and why. So far, it has revealed its reasoning in only some 40 decisions. Otherwise, there is very little public scrutiny of how the search engine strikes the balance between individual privacy and access to information.

The argument for transparency is twofold: (1) the public should be able to find out how digital platforms exercise their tremendous power over readily accessible information; and (2) implementation of the ruling will affect the future of the RTBF in Europe and elsewhere, and will more generally inform global efforts to reconcile privacy rights with other interests in data flows.

Without transparency, arguments risk descending into intractable tussles about ideology: pro-speech against pro-privacy. It’s not either-or. So far, the anecdotal RTBF decisions that Google has released give reason to hope that we can have both. But we have no idea whether these decisions are representative or what the more liminal cases might look like.

Forget.me, a service that submits delisting requests, has released some useful data about RTBF decisions based on the responses it receives from search engines. But beyond the limitations of its sample size, its categories are not optimally illuminating. It tells us, for example, that most requests are for “invasion of privacy.” This covers a lot of ground. We don’t know if the alleged invasion concerns, for example, health information or a political opinion. Forget.me tells us that Google’s most common reason for denying delisting (26% of denials) is that the information “concerns your professional activity,” but this says little unless we know how many such requests there are and what percentage are denied; only then could we conclude that these requests are presumptively weak.

Nor can we glean much from the Data Protection Authorities. The way the process is structured, only delisting denials can be appealed. Those that reach a published decision are likely to be edge cases, not broadly representative. We understand that the DPAs are considering releasing more information, but we’re not there yet.

This call for more transparency is now quite ripe. Google’s own Advisory Council on the RTBF recommended more transparency in February 2015, as did the Article 29 Working Party in November 2014. Only by looking at the balance that Google is striking, through aggregate statistics and anonymised cases, can we know whether the RTBF and other privacy protection policies being considered can deliver both adequate privacy and speech protection.
The Open Letter’s hope is also that more transparency will enable Google and other search engines to develop processes that engender public confidence in the “black box” operation that is search. Though addressed to Google, the letter is obviously directed at all search engines subject to the ruling. It summarizes its request and rationale as follows:
What We Seek

Aggregate data about how Google is responding to the >250,000 requests to delist links thought to contravene data protection from name search results. We should know if the anecdotal evidence of Google’s process is representative: What sort of information typically gets delisted (e.g., personal health) and what sort typically does not (e.g., about a public figure), in what proportions and in what countries?

Why It’s Important

Google and other search engines have been enlisted to make decisions about the proper balance between personal privacy and access to information. The vast majority of these decisions face no public scrutiny, though they shape public discourse. What’s more, the values at work in this process will/should inform information policy around the world. A fact-free debate about the RTBF is in no one’s interest.

Why Google

Google is not the only search engine, but no other private entity or Data Protection Authority has processed anywhere near the same number of requests (most have dealt with several hundred at most). Google has by far the best data on the kinds of requests being made, the most developed guidelines for handling them, and the most say in balancing informational privacy with access in search.