Tuesday, February 22, 2011

Deconstructing the CALEA hearing

Last Thursday, the House Judiciary Committee held a hearing focused on law enforcement surveillance of modern Internet services.

Although both the New York Times and CNET have stories on the hearing, I don't think either publication covered the important details (nor did they take the time to extract and post video clips).

The FBI is no longer calling for encryption backdoors

When Charlie Savage at the New York Times first broke the news last year that law enforcement officials were seeking more surveillance capabilities, it seemed quite clear that the FBI wanted to be able to access encrypted communications. Consider, for example, this statement by the General Counsel of the FBI:
"No one should be promising their customers that they will thumb their nose at a U.S. court order," Ms. Caproni said. "They can promise strong encryption. They just need to figure out how they can provide us plain text."
That threat spooked the hell out of a lot of people in the privacy community and at technology companies. However, in the months that followed, rumors started to circulate that, as a result of negotiations within the administration, encryption was now "off the table."

Thus, many of us in Washington were not entirely surprised to see Ms. Caproni walk back her previous statements on encryption when she testified last Thursday:
Law enforcement (or at least, the FBI) has not suggested that CALEA should be expanded to cover all of the Internet...

But let's turn directly to encryption. Encryption is a problem. It is a problem we see for certain providers. It's not the only problem.

If I don't communicate anything else today, I want to make sure that everyone understands. This is a multifaceted problem. And encryption is one element of it, but it is not the entire element. There are services that are not encrypted, that do not have an intercept solution. So it's not a problem of them being encrypted. It's a problem of the provider being able to isolate the communications and deliver them to us in a reasonable way so that they are usable in response to a court order...

There are individual encryption problems that have to be dealt with on an individual basis. The solution to encryption is part of CALEA, which says that if the provider is encrypting the communications, and they have the ability to decrypt and give them in the clear, then they're obligated to do that. That basic premise. That provider-imposed encryption, where the provider can give us communications in the clear, they should do that. We think that is the right model. No one's suggesting that Congress should re-enter the encryption battles that were fought in the late 90's, and talk about sequestered keys or escrowed keys and the like. That is not what this is about.

Why the FBI doesn't really need encryption backdoors

The bit of CALEA that she is talking about is 47 USC 1002(b)(3), which states that:
A telecommunications carrier shall not be responsible for decrypting, or ensuring the government’s ability to decrypt, any communication encrypted by a subscriber or customer, unless the encryption was provided by the carrier and the carrier possesses the information necessary to decrypt the communication.
US law is surprisingly clear on the topic of encryption -- companies are free to build it into their products, and if they don't have the decryption key, they can't be forced to deliver their customers' unencrypted communications or data to law enforcement agencies.
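To put the statute's two-pronged test in programmer's terms, here is a minimal sketch (my own illustration; the function and parameter names are made up, and nothing here comes from the statute itself):

    def carrier_must_decrypt(carrier_provided_encryption: bool,
                             carrier_possesses_key: bool) -> bool:
        # 47 USC 1002(b)(3): a carrier is only responsible for decryption
        # if it both provided the encryption and possesses the key.
        return carrier_provided_encryption and carrier_possesses_key

    # A service whose subscribers bring their own keys fails both prongs:
    assert carrier_must_decrypt(False, False) is False
    # Carrier-supplied encryption with a carrier-held key passes the test:
    assert carrier_must_decrypt(True, True) is True

The interesting case is the first one: a provider that deliberately designs itself out of key possession also designs itself out of the decryption obligation.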

While Skype uses some form of proprietary end-to-end encryption (although it should be noted that the security experts I've spoken to don't trust it), and RIM uses encryption for its Enterprise Blackberry messaging suite, the vast majority of services that consumers use today are not encrypted. Those few services that do use encryption, such as Google's Gmail, only use it to protect the data in transit from the user's browser to Google's servers. Once Google receives it, the data is stored in the clear.

There is one simple reason for this, which I described in a law journal article last year:
It is exceedingly difficult to monetize a data set that you cannot look at. Google’s popular Gmail service scans the text of individual emails, and algorithmically displays relevant advertisements next to the email. When a user receives an email from a friend relating to vacation plans, Google can display an advertisement for hotels near to the destination, rental cars or travel insurance. If those emails are encrypted with a key not known to Google, the company is unable to scan the contents and display related advertising. Sure, the company can display generic advertisements unrelated to the user’s communications contents, but these will be far less profitable.

Google’s Docs service, Microsoft’s Hotmail, Adobe’s Photoshop Express, Facebook, and MySpace are all made available for free. Google provides its users with gigabytes of storage space, yet doesn’t charge a penny for the service. These companies are not charities, and the data centers filled with millions of servers required to provide these services cost real money. The companies must be able to pay for their development and operating costs, and then return a profit to their shareholders. Rather than charge their users a fee, the firms have opted to monetize their users’ private data. As a result, any move to protect this data will directly impact the companies’ ability to monetize it and thus turn a profit. Barring some revolutionary developments from the cryptographic research community, advertising-based business models are fundamentally incompatible with private key encrypted online data storage services.
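The technical point underneath this is easy to demonstrate. In the minimal sketch below (using the Python cryptography library purely for illustration), a message encrypted with a key held only by the user gives the provider's ad-matching systems nothing to scan:

    from cryptography.fernet import Fernet

    user_key = Fernet.generate_key()   # generated and held on the user's machine
    token = Fernet(user_key).encrypt(b"booking flights to Rome next week")

    # The provider stores only the opaque token. A server-side keyword
    # scanner has nothing to match advertisements against:
    print(b"Rome" in token)            # False -- no keywords, no targeted ads
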
Robert Scoble also addressed this very same issue last year, writing about the reasons why major location-based services have not adopted privacy-preserving technologies:
Well, there’s huge commercial value in knowing where you’re located and [service providers] just aren't willing to build really private systems that they won’t be able to get at the location info. Think about a Foursquare where only your friends would be able to see where you were, but that Foursquare couldn’t aggregate your location together with other people, or where it wouldn’t be able to know where you are itself. They wouldn't be able to offer you deals near you when you check in, the way it does today.
The FBI knows that most services are not going to be using full end-to-end encryption, and as such, there is not much to be gained by fighting a public battle over encryption backdoors. In her testimony on Thursday, Ms. Caproni drove this point home:
We're suggesting that if the provider has the communications in the clear and we have a wiretap order, that the provider should give us those communications in the clear.

For example, Google for the last 9 months has been encrypting all Gmail. As it travels over the internet, it's encrypted. We think that's great. We also know that Google has those communications, and in response to a wiretap order, they should give them to us, in the clear.
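The distinction she is drawing is the one between encryption in transit and encryption at rest. A quick sketch with Python's standard library (the hostname is just an example) shows how little the former covers:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("mail.google.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="mail.google.com") as tls:
            print(tls.version())   # the negotiated TLS version: the pipe is encrypted

    # The TLS session ends at the provider's servers. Everything sent through
    # it is decrypted there, which is why the provider can hand the messages
    # over in the clear in response to a wiretap order.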

Privacy by design vs. insecurity by design

In the report it issued in December, the Federal Trade Commission called on companies to embrace "privacy by design":
[C]ompanies should adopt a "privacy by design" approach by building privacy protections into their everyday business practices. Such protections include providing reasonable security for consumer data, collecting only the data needed for a specific business purpose, retaining data only as long as necessary to fulfill that purpose, safely disposing of data no longer being used, and implementing reasonable procedures to promote data accuracy.
Building encryption into products, turning it on by default, and using it to protect all data is the ultimate form of privacy by design. While the FTC is encouraging firms to embrace this philosophy, the FBI is betting that poor security will remain the default. Sure, a few individuals will know how to encrypt their data, but the vast majority will not. It is because of this that the FBI can avoid a fight over encryption. Why bother, when so little data is encrypted?

Consider Ms. Caproni's argument:
There will always be criminals, terrorists and spies who use very sophisticated means of communications that create very specific problems for law enforcement. We understand that there are times when you need to design an individual solution for an individual target. That's what those targets present. We're looking for a better solution for most of our targets, and the reality is I think sometimes we want to think that criminals are a lot smarter than they really are. Criminals tend to be somewhat lazy, and a lot of times, they will resort to what is easy.

So long as we have a solution that will get us the bulk of our targets. The bulk of criminals, the bulk of terrorists, the bulk of spies, we will be ahead of the game. We can't have to design individualized solutions, as though they were sophisticated targets who were self-encrypting, putting a very difficult encryption algorithm on, for every target we find. Because not every target is using such sophisticated communications.
While I understand her perspective, the problem I have is that her description of criminals as "lazy" people who use technology that is "easy" similarly describes the vast majority of the general public. As such, for the FBI's plan to work, encryption technology needs to be kept out of the hands of the general public in order to similarly keep it out of the hands of lazy criminals.

If encryption is off the table, what is the FBI after?

During the hearing, Ms. Caproni noted that both RIM and Skype were foreign companies, and not subject to CALEA. She had ample opportunities to call out these companies, and instead opted not to do so. As such, at least right now, it looks like the two firms may be safe.

With Skype, RIM, and the general encryption issue off the table, you must be wondering: what exactly does the FBI want? From what I can gather, quite a few things, many of which impact privacy in a big way, but which will attract far less press than those other high-profile issues.

Ms. Caproni didn't name names at the hearing, but it is pretty easy to identify the companies and services that she and her colleagues are interested in.

  • Real-time interception of cloud services. Google, Microsoft, Facebook and Twitter are all legally required to provide after-the-fact access to their customers' stored data, in response to a valid legal process. The law does not require them to provide real-time interception capabilities. What this means is that while the government can go to Google and ask for all searches conducted by a particular user, they can't ask for all future searches or Google Chat instant message communications. These companies are under intense pressure to provide such real-time, prospective access to user data.

  • Voice services that do not connect to the public telephone network. Google and Facebook both offer in-network audio chat to their users (Google also offers video). Microsoft's Xbox 360 service, Blizzard and several other online video game platforms allow users to chat (and insult each other) while they play against other users online. At least from published information, I'm not aware of any of these companies offering interception capabilities -- and so law enforcement agencies almost certainly want access to these communications.

  • Virtual Private Network (VPN) services. These services, many of them paid, are increasing in popularity among users who want a bit of privacy when they surf. They enable users to browse the web when using unsecured public WiFi networks without having to worry about hackers stealing their data; browse the web at home without having to worry about their broadband Internet Service Provider using Deep Packet Inspection technology to spy on them; access streaming content that is restricted by country (for example, allowing foreigners to watch Hulu, or US residents to watch the BBC); and download files from P2P networks without having to worry about Hollywood studios, record labels and porn companies suing them.

    Many users turn to these commercial VPN services in order to obtain privacy online, and it is because of this that many services have strict no-logging policies. They do not know what their users are doing online, and don't want to know. However, many of these services are based in the US (or at least, have many servers in US datacenters), and could very easily keep logs if they were forced to do so.

What happens next?

Last week's hearing was just the first step in what will likely be a long battle. There will be more hearings, and eventually, the FBI will return with draft legislation. In the meantime, all the major tech companies in Silicon Valley will no doubt continue to engage in private, high-pressure negotiations with senior FBI officials who will tell them they can avoid new legislation by voluntarily building new surveillance capabilities into their products.

Friday, February 18, 2011

No New Surveillance Powers For The War On Drugs

At two hearings over the past month, including one yesterday, senior officials from the Department of Justice asked Congress to significantly expand the government's ability to monitor and investigate the online communications of Americans.

Law enforcement officials claim that it is too difficult to snoop on users of modern services like Skype, Blackberry, Facebook and Google, as the companies have not built wiretap capabilities into their services. The Department of Justice would also like wireless and residential Internet Service Providers to keep records that would make it easier to determine after-the-fact which particular customer visited specific websites.

These officials argue that technology companies should be required to build new surveillance capabilities in order to more effectively investigate child pornographers and terrorists. This is a politically savvy argument, as no member of Congress will want to risk appearing weak on terrorism or child pornography.

The reality is that most law enforcement surveillance powers are used in support of the war on drugs, not to investigate terrorists or pedophiles. As such, Congress should first demand reliable statistics on law enforcement’s existing Internet surveillance activities before even considering the FBI’s request for more powers.

The American public may be willing to give up their privacy and civil liberties in order to actually prevent terrorism and the sexual exploitation of children. This deal is far less attractive if the new surveillance powers will instead be used to continue a failed prohibition opposed by millions of Americans.

Statistics are useful

Each year, federal and state law enforcement agencies obtain thousands of court orders that allow them to secretly wiretap the telephones of American citizens. We know this because Congress requires annual reports regarding the use of these surveillance powers.

The first documented law enforcement wiretaps were used to investigate bootleggers during Prohibition. Decades later, as the wiretap reports confirm, the vast majority of intercepts are used to enforce our modern-day prohibition: the war on drugs. For example, of the 2,376 wiretap orders issued in 2009, 86% (2,046) were obtained as part of narcotics investigations.

Similarly, of the 763 “sneak and peek” search warrants obtained in 2009, 474 were obtained in investigations of drugs, and only 3 were used in investigations of terrorism. These surveillance orders allow government agents to search a home without telling the owner or resident until weeks or months later. Law enforcement agencies were given this authority as part of the Patriot Act, after the Department of Justice claimed that the powers were necessary to allow “law enforcement to conduct investigations without tipping off terrorists.” However, a report published by the Administrative Office of the Courts in 2009 revealed that the powers are primarily used to investigate drugs, not terrorism.

Unfortunately, while accurate statistics exist for wiretaps, and for the sneak and peek authority granted as part of the Patriot Act, we are largely in the dark regarding most of the tens of thousands of requests made each year to phone companies and Internet service providers. There are no statistics that document law enforcement requests for email, instant messaging, social network profiles, search engine history, or geographic location information from mobile phones.

Not only do we have no way of knowing the total number of requests made by law enforcement officers each year, but we also do not know what kinds of crimes they are investigating. Instead, all we have are unverifiable anecdotes from law enforcement officials, who selectively reveal them in order to justify their push for increased surveillance powers.

If the statements of law enforcement officials are to be believed, most of their online investigations involve child pornography. However, the published statistics for other forms of surveillance suggest that they are likely in support of the war on drugs. The only way to be sure would be for Congress to require the collection and publication of statistics covering law enforcement agencies’ surveillance of Internet applications and communications. As Senator Leahy noted more than 10 years ago, surveillance statistics serve as a “more reliable basis than anecdotal evidence on which to assess law enforcement needs and make sensible policy in this area.”

Rather than granting the Department of Justice the sweeping new surveillance powers it seeks, Congress should first seek and obtain detailed reports on the use of modern surveillance techniques. There is no need to rush the passage of new authority, especially since, as the debate over the renewal of the Patriot Act has clearly demonstrated, rolling back powers is much tougher than granting new ones.

Wednesday, February 16, 2011

CALEA: It is about the money

Cash Rules Everything Around Me
C.R.E.A.M.
Get the money
Dollar, dollar bill y'all
-- Wu Tang Clan
Tomorrow, the House Judiciary Committee will hold a hearing on the topic of CALEA, and the FBI's desire to get backdoors in modern services like Skype, Google, Facebook and RIM's Blackberry. The mass adoption of these services, the FBI claims, is leading to a situation where law enforcement agencies have "gone dark," and lost the ability to intercept the communications of suspects in real time.

This is not the first time that the FBI has come to Congress to ask for increased surveillance powers -- it spent a good part of the 90s sending people to Capitol Hill, asking for backdoors in encryption.

What does surprise me is that the tech companies are nowhere to be seen, and have not deployed anyone publicly to fight this proposal. Compare this, for a moment, to the cloud computing privacy hearing held by the same House Committee last September, where Google, Microsoft, Amazon, Rackspace and Salesforce all sent executives to argue for stronger privacy laws.

Last year, those companies were vocally asking for stronger privacy laws that would make it more difficult for law enforcement agencies to access their customers' data. Now, these same firms are being asked to put backdoors in their services, and make it easier for the government to snoop on their customers. Are they fighting this? No.

Instead, they are hiding behind industry-funded advocacy groups, like the Center for Democracy and Technology, which has written a softly-worded statement of concern.

Google, Microsoft and Facebook have excellent, well-funded teams of lobbyists. The fact that they are not appearing at the hearing tomorrow and have not issued any public statements about the topic is a clear sign that these companies are doing everything possible to keep a low profile on this issue.

If I had to guess why, I suspect that they don't want to do anything to upset Congress, particularly now that the topic of commercial privacy is very much on the legislative agenda. If they put their foot down on CALEA, they may find themselves with few friends when members start considering bills to limit behavioral advertising.

Priority #1: Gotta get paid

When Congress passed CALEA in 1994, it set aside $500 million to help with the cost of designing and deploying wiretap-capable networking equipment. Unfortunately, as a 2008 DOJ Inspector General report (pdf) revealed, it was not possible to tell if the money was well spent, since neither the telecoms nor the switch makers were willing to share the necessary information.

With that in mind, this bullet point from CDT's statement of concern caught my eye:
"Avoid unfunded mandates: The costs of implementing any new proposals should be borne by the government."
While tech companies aren't particularly crazy about adding new snooping capabilities into their services, they are even less excited about having to eat the financial cost of developing and deploying those backdoors.

Even though CDT seems to think otherwise, there are strong policy advantages to sticking companies with these costs. The most important is that Google and Facebook are far more likely to take a strong position against CALEA II if they are going to get stuck with the check. If these firms know they are going to get millions of dollars for upfront surveillance development, they are far less likely to fight, and will instead spend their time haggling over the details, and in particular, lobbying for a larger payout with less oversight.

Charging the government for individual requests is good
"When I can follow the money, I know how much of something is being consumed - how many wiretaps, how many pen registers, how many customer records. Couple that with reporting, and at least you have the opportunity to look at and know about what is going on.
-- Albert Gidari Jr., Keynote Address: Companies Caught in the Middle, 41 U.S.F. L. Rev. 535, Spring 2007.
This is not to say that I am opposed to companies making the government pay for the assistance they are legally required to provide. I just think that the payment should be associated with specific investigations and requests, rather than a huge cash payment for developing and deploying surveillance capabilities.

The reason for this is that invoices for surveillance serve as a fantastic paper trail documenting the scope and scale of government snooping. Through Freedom of Information Act requests, I have obtained invoices from both Google and Yahoo, which detailed the kinds of requests they were getting, and helped me to discover that the US Marshals have essentially granted themselves a new surveillance power that is not in the law.

Charging for law enforcement assistance also tends to limit requests to only those records that are necessary. As Al Gidari told the House Judiciary Committee in testimony last year:
When records are "free," such as with phone records, law enforcement over consumes with abandon. Pen register print outs, for example, are served daily on carriers without regard to whether the prior day's output sought the same records. Phone record subpoenas often cover years rather than shorter, more relevant time periods. But when service providers charge for extracting data, such as log file searches, law enforcement requests are more tailored.

It is for these reasons that I have pleaded with attorneys at Microsoft and Facebook to start charging the government. Even though the law permits them to do so, both firms currently deliver user data to law enforcement agencies for free.

Recoup the high costs of surveillance technology through high per-request fees

A 2006 report from the DOJ Inspector General revealed that:
One carrier informed us that most of the costs it billed to law enforcement are for overtime and recovery of capitalized hardware and software costs. These representatives stated that capital costs are the major costs incurred by a carrier, and that these costs are entirely proper for carriers to recover.
For once, I actually agree with the carriers. If they had to spend millions of dollars deploying CALEA-compliant intercept equipment, then it is only reasonable that they recoup it by charging $3500 for a 30-day wiretap (as Cox Communications does).

The problem with charging $3500 for a wiretap is that the police will complain, as this money comes out of their budget. The same 2006 Inspector General report confirmed this:
Law enforcement's biggest complaint regarding CALEA is the relatively high fees charged by carriers to conduct electronic surveillance. A traditional wiretap costs law enforcement approximately $250. However, a wiretap with CALEA features costs law enforcement approximately $2,200 according to law enforcement officials and carrier representatives we interviewed. A law enforcement official noted that, "[w]ith CALEA, the carriers do less work but it costs approximately 10 times as much to do a CALEA-compliant tap versus a traditional tap."

If Congress is considering spending another $500 million on CALEA II (and I hope it doesn't), it should give it out in grants to state and local law enforcement agencies. Give them each a pool of money, and let them decide how they want to spend it. If they want to use it to hire more officers, or buy body armor, that is their choice. If they want to pay for CALEA II wiretaps provided by Google, Facebook and Skype, well, that is their choice too. In the real world, there are opportunity costs associated with every purchase, and the police should have to experience these too. Surveillance should be expensive -- that is the best way to make sure these powers are not overused, or abused. Unfortunately, at just $25 for an individual user's account, Google and Yahoo are not charging nearly enough.

Wednesday, February 09, 2011

Web 2.0 FBI backdoors are bad for national security

Charlie Savage broke the news yesterday that the House will be holding a hearing in two weeks on the subject of CALEA – the 1994 law that forced telecommunications companies to purchase and deploy intercept capable network hardware. As Savage described in a series of articles last fall, the FBI is no longer happy with these intercept capabilities – it now wants modern technology firms like Skype, RIM, Google and Facebook to provide similar backdoors in their own services.

Public interest groups like EFF, the ACLU and CDT will of course do their best to argue that such backdoors would totally violate the privacy of millions of Americans. Unfortunately, this criticism will largely fall on deaf ears.

Those members of Congress who are strong believers in privacy will not need convincing. However, those members who are willing to grant any and all additional powers requested by those investigating pedophiles and terrorists have already made up their mind – in their eyes, individual privacy is a small price to pay.

As such, I am not going to waste my time explaining why CALEA II is a horrible idea on privacy grounds. Instead, I will now explain why, for non-privacy reasons, it is a bad idea to give the FBI what it wants -- and why doing so threatens national security.

Surveillance backdoors, like all other software, will have security flaws

Abusable flaws are routinely found in commercial software products (which is not too surprising, since software engineers are rarely trained in software security or the appropriate use of cryptography). To make a product 100% secure, engineers have to get everything right – protecting against all known attacks, as well as attack techniques not yet invented.

Now, consider this question – if the government is going to force Google, Facebook, Skype and RIM to create surveillance backdoors in their own products, how are these companies going to protect the backdoors to make sure they are not accessed by evildoers? If Google knew how to develop software that is 100% secure, surely it would already be applying these software engineering techniques to its products.

Of course, this is an impossible task, which is why the software products we all regularly use seem to constantly bug us to install security updates.

As such, we need to accept the fact that any surveillance backdoors that these firms are required to build will have security flaws, and that they will be abused by people who care even less about privacy than the FBI.

Governments around the world acquire and exploit security flaws in commercial software

More often than not, commercial software vendors do not discover their own flaws. They learn about them because “white hat” security researchers discover them and notify the company, or because “black hat” hackers discover the flaws, and either exploit them directly, or sell them to others who use them to steal users’ data, or use infected computers to deliver spam.

There is now a thriving economy for those wishing to sell “zero day” security flaws and exploits (that is, those not known to the community). While criminal gangs certainly seem to be interested in buying these exploits, they are not the only customers – governments around the world are in the market for this information.

Charlie Miller, a security researcher famous for discovering exploitable software flaws in Apple’s iPhone, also has a bit of experience selling security flaws. In an academic research paper a few years ago, he described how after discovering a flaw in the Linux operating system, he sold the information to a US government agency (presumably, the NSA, his former employer) for a cool $50,000.

Mr Miller’s experience is not unique – security researchers that have spoken with me confirm that US defense contractors (such as SAIC and Booz Allen Hamilton) purchase exploitable security flaws on behalf of their government clients. One researcher told me that bonuses are built into the contracts – that is, the longer the flaw remains useful and unpatched by the operating system or application provider, the higher the payment.

It would be foolish to assume that the US government is the only one doing this – foreign governments are probably buying any exploits they can get their hands on, as well as spending significant resources to discover these through in-house R&D.

Consider the following example: the recent Stuxnet worm that was used to penetrate Iran’s nuclear facilities used a number of zero day exploits in Microsoft Windows.

While no government has claimed credit for the worm, what is clear is that whoever did it has quite a bit of security expertise. What this also means, at least as long as the US government claims that it had no role in Stuxnet, is that there are other governments out there with the ability to discover (or purchase) and exploit flaws in US-made software.

By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies

In 2004, a still unknown entity somehow gained access to the CALEA-compliant intercept system of Vodafone Greece, the country's largest cellular service provider.

Those customers whose calls were intercepted included the prime minister of Greece, the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy.

The story of the "Athens Affair", as it is commonly known in security circles, is perhaps the best example of the privacy risks associated with lawful interception capabilities in communications infrastructure. I'm not going to go into all the details here, but there is an absolutely fantastic, multi-page writeup of the incident in IEEE's Spectrum magazine.

While Greek investigators still have not been able to conclusively determine who penetrated their network, all signs (including the mysterious "suicide" of a Vodafone employee in 2005) indicate that it was the work of a foreign intelligence service.

Similarly, in 2010, soon after Google disclosed that Chinese hackers had broken into the company's network, news reports surfaced indicating that the hackers had gained access to Google's lawful surveillance systems.
[Google's Chief Legal Officer David] Drummond said that the hackers never got into Gmail accounts via the Google hack, but they did manage to get some "account information (such as the date the account was created) and subject line."

That's because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.

"Right before Christmas, it was, 'Holy s***, this malware is accessing the internal intercept [systems],'" he said.

In April 2010, at a public event at Google's Washington DC office, I asked Pablo Chavez, the company's director of public policy, if the reports were true. His response, while evasive, still seemed to suggest that there was more to this story:
I'm not familiar with the details. But I do know that there is this continuing, ongoing investigation of the matter. Hopefully, over the course of time, we can talk a little bit more about precisely what happened. I am familiar with the report, I am just not in a position to answer any details.
It has now been a year since the company first disclosed that the hack occurred, yet it still has not revealed if its intercept systems were in fact breached.

It is time for Google to fess up – its customers have a right to know, and members of Congress similarly need to be aware of the risks of adding further backdoors to our communications networks.

Tuesday, February 01, 2011

An open letter to Adobe

MeMe Rasmussen
Chief Privacy Officer
Adobe Systems Inc.

Dear MeMe,

Yesterday, as you know, two researchers from Carnegie Mellon University released a study on the extent to which Flash Local Stored Objects ("Flash cookies") are used on popular websites, and in particular, how often sites engage in cookie "respawning".

Before discussing the report, I want to begin by stating that I have great respect for the two researchers, Dr Aleecia McDonald and Professor Lorrie Cranor. They both have truly stellar track records in their area of academic expertise: the study of usable security and privacy.

However, I have serious misgivings about the motivation of this study, the role that several non-academic entities played in shaping it, its methodology, and the way that it may be used by your company and others in industry to whitewash a significant privacy issue.

The motivation of the study, and the role played by Adobe, CDT and Reed Freeman

It is not entirely clear, at least from publicly available sources, who first came up with the idea for the study. That is, did the researchers decide to conduct the study and seek funding from Adobe and CDT to help pay their costs, or did Adobe, seeking to repair its own reputation, write a large check to the Center for Democracy and Technology (CDT), which then passed some of the money on to these researchers to produce the report?

Update Feb 2: A post by MeMe on Adobe's official blog confirms that:
Adobe commissioned the Carnegie Mellon University research study ... with assistance provided by the Center for Democracy and Technology (CDT)
What is clear, from the acknowledgements at the end of the report, is that the researchers received financial support from Adobe. Looking at CDT's funding charts for 2009 and 2010, it looks like 2010 is the first year that Adobe has given any money to CDT. Was this funding tied to the creation and publication of this report?

Both Adobe and CDT are thanked by the researchers for assistance in developing the experimental protocol, and several CDT staff members are thanked for providing the researchers with assistance and feedback on their report. One other person who is thanked for his assistance is Reed Freeman, a partner at the law firm Morrison & Foerster.

Given the trigger-happy nature with which some firms fire off DMCA cease and desist letters, or call in the Department of Justice, it is unfortunately quite common for privacy and security researchers to have to solicit the advice and assistance of attorneys before publishing research. I myself have several attorneys on speed-dial, and have turned to the absolutely amazing attorneys at the Electronic Frontier Foundation (EFF) on several occasions.

What puzzles me though, is why Professor Cranor did not go to the EFF for her legal questions, particularly given that she serves on EFF's board of directors. Instead, she sought and received feedback from Reed Freeman.

As far as I know, Reed has no experience or special expertise in helping academic researchers avoid lawsuits from pissed off companies. However, he does have quite a bit of experience in helping companies engulfed in privacy scandals escape the wrath of the Federal Trade Commission. For example, he represented Netflix a year ago, after the FTC took an interest (pdf) in the company's plan to share a second dataset of its customers' movie reviews.

I would love to find out the role that he played in shaping this study and the final report. Did he provide advice to these researchers on a pro-bono basis, or did Adobe pick up the likely very expensive tab for his assistance?

Research methodology

This study was a response to a 2009 study by Soltani et al, which coined the term "respawning Flash cookies" and exposed several major web properties and advertising networks engaging in the practice.
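For readers who have not followed this issue: a respawning site keeps a backup copy of its tracking identifier in Flash's Local Stored Objects, and silently recreates the HTTP cookie from that backup after the user deletes it. Real implementations pair JavaScript with ActionScript; the Python sketch below (with names of my own invention) just models the logic:

    def cookies_on_page_load(http_cookies: dict, flash_lso: dict) -> dict:
        # If the user cleared their HTTP cookies but the Flash Local Stored
        # Object survived, quietly restore the tracking identifier.
        if "tracking_id" not in http_cookies and "tracking_id" in flash_lso:
            http_cookies["tracking_id"] = flash_lso["tracking_id"]   # respawn
        return http_cookies

    # The user deletes their browser cookies, but not their Flash storage...
    restored = cookies_on_page_load({}, {"tracking_id": "user-12345"})
    print(restored)   # {'tracking_id': 'user-12345'} -- the deletion is undone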

Leaving aside the potential issues that Joe Hall has raised about how the researchers chose the 500 random sites, I want to focus on one key area that suggests serious limits (and perhaps even flaws) in this study.

Consider the data collection method followed by Soltani:
Each session consisted of starting on a Firefox about:blank page with clean data directories. We then navigated directly to the site in question (by entering the domain name into the browser’s navigation bar) and mimicked a ‘typical’ user’s session on that site for approximately 10 pages. For example, on a video site, we would search for content and browse videos. On a shopping site, we would add items to our shopping cart. We did not create accounts or login for any of the sites tested. As a result, we had to ‘deep link’ directly into specific user pages for sites such as Facebook.com or Myspace.com since typically these sites do not easily allow unauthenticated browsing.

In the CMU study, the researchers visited only the front page of the top 100 sites, plus an additional 500 randomly selected sites. The researchers did not navigate beyond paywalls, conduct searches, click on items to add them to shopping carts, or otherwise interact with the sites. As such, any Flash cookies present on those deeper pages would have gone undiscovered.
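The depth of the crawl matters because trackers are often set only once a user starts searching, playing videos or filling a shopping cart. Here is a rough sketch of the difference between the two protocols (using HTTP cookies as a stand-in, since observing Flash LSOs requires an instrumented browser with the Flash plugin; the URL and page counts are illustrative):

    import re
    import requests

    def cookies_seen(start_url: str, max_pages: int = 1) -> int:
        # Count the cookies accumulated while crawling up to max_pages pages.
        # max_pages=1 approximates the CMU front-page-only protocol; ten pages
        # approximates the 'typical user' sessions of Soltani et al.
        session = requests.Session()
        queue, visited = [start_url], set()
        while queue and len(visited) < max_pages:
            url = queue.pop(0)
            if url in visited:
                continue
            visited.add(url)
            try:
                html = session.get(url, timeout=10).text
            except requests.RequestException:
                continue
            # Naive same-site link extraction -- good enough for a sketch:
            site = start_url.split("/")[2]
            queue += [u for u in re.findall(r'href="(http[^"]+)"', html)
                      if site in u]
        return len(session.cookies)

    print(cookies_seen("http://www.example.com", max_pages=1))
    print(cookies_seen("http://www.example.com", max_pages=10))

Any cookie set only on an interior page is simply invisible to the depth-1 crawl.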

Naming names

One important norm in the academic privacy community is that when researchers discover companies engaged in privacy-invasive (or even just problematic) practices, they are named. Soltani et al named the companies they discovered respawning Flash cookies, Krishnamurthy and Wills (pdf) named Facebook, MySpace and a few other social networks that were leaking user identifiers via referrer headers, and Jang et al (pdf) named YouPorn, Morningstar, Charter and the dozens of other firms they discovered abusing CSS flaws to determine users' browsing history.

Similarly, when Professor Cranor, Dr McDonald and several other CMU researchers published a paper last year examining the extent to which major websites misrepresent their privacy policies via machine-readable P3P headers, the researchers identified the offending websites.
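(Checking what a site claims is straightforward, since P3P compact policies ride in an ordinary HTTP response header. The sketch below is my own illustration, not the researchers' tooling:

    import requests

    # A site's machine-readable privacy claims are advertised in the P3P
    # response header; comparing those claims against the site's written
    # policy is how misrepresentations are found.
    response = requests.get("http://www.example.com")
    print(response.headers.get("P3P", "no P3P compact policy advertised"))

The hard part of that study was the comparison against the written policies, not the collection.)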

It seems curious then that this time around, these same researchers would decide to not identify the two companies that they discovered were engaged in Flash cookie respawning.

It is just a wild guess, but I suspect that the decision not to identify the offending firms was not a decision left up to the researchers. What I do not know though, is if this was a decision made by CDT, or Adobe.

Adobe's commitment to privacy

One year ago, you submitted written comments (pdf) to the FTC as part of its series of privacy roundtables. In your submission, you wrote that:
Adobe condemns the practice of using Local Storage to back up browser cookies for the purpose of restoring them later without user knowledge and express consent.

...

Adobe is committed to supporting research to determine the various types and extent of the misuse of Local Storage. We are eager to participate in the discussion of which uses are and are not privacy friendly. We will support appropriate action, in consultation with the development, advocacy, regulatory, and legislative communities, to eradicate bad, unintended uses of Local Storage.

...

Adobe Supports the Commissions’ Use of its Authority to Police Unfair and Deceptive Acts and Practices in Commerce.

Adobe believes that existing legislation and regulation provide the Commission with robust enforcement authority against deceptive or unfair trade practices, including the use of Local Storage to re-spawn cookies users have deleted.

Adobe should identify the offending websites, or at least rat them out to the FTC

The studies published by Soltani et al, Krishnamurthy and Wills, and Jang et al have all led to class action lawsuits against the companies engaged in the various privacy-violating activities exposed by these researchers. As such, it is quite reasonable to assume that had the CMU Flash cookie study identified the two firms that were caught engaging in Flash cookie respawning, class action lawsuits would have soon followed.

Given the strong tone you took in your FTC comments, and the fact that Adobe "condemns" the misuse of your technology to violate consumers' privacy, it is surprising that you have not pushed for the identification of these two companies. Surely the millions of users of Flash who have had their privacy violated by these firms should have an opportunity to seek their day in court?

Even if you do not wish to expose these firms to the threat of class action litigation, at the very least, you should turn them in to the FTC, which would then be able to investigate the firms, and prohibit them from engaging in similar privacy violations in the future.

As such, I hope you will confirm if you know the identity of the two firms discovered by the CMU researchers, and further confirm what plans you have, if any, to provide FTC staff with the evidence that was uncovered.

It is time for Adobe to be a leader on privacy. Turning these two firms in to the FTC would be a good first step.

With regards,

Christopher

A lesson on saying no to governments from Google, Twitter and Vodafone

I've been thinking a lot recently about the role that technology companies play in facilitating or frustrating the efforts of governments to spy on or censor their citizens.

As such, I think it is interesting to compare the actions by a few large firms in response to the recent events in Egypt.

First, from Google and Twitter yesterday:
Like many people we’ve been glued to the news unfolding in Egypt and thinking of what we could do to help people on the ground. Over the weekend we came up with the idea of a speak-to-tweet service—the ability for anyone to tweet using just a voice connection.

We worked with a small team of engineers from Twitter, Google and SayNow, a company we acquired last week, to make this idea a reality. It’s already live and anyone can tweet by simply leaving a voicemail on one of these international phone numbers ...

We hope that this will go some way to helping people in Egypt stay connected at this very difficult time. Our thoughts are with everyone there.
And Vodafone, on Friday:
All mobile operators in Egypt were instructed on Friday to suspend services in some areas amid widespread protests against President Hosni Mubarak's rule, Vodafone Group PLC (VOD) said in a statement.

"All mobile operators in Egypt have been instructed to suspend services in selected areas," the U.K. company said, adding that under Egyptian law it was "obliged" to comply with the order.
The following day, Vodafone issued an updated statement:
Vodafone restored voice services to our customers in Egypt this morning, as soon as we were able.

We would like to make it clear that the authorities in Egypt have the technical capability to close our network, and if they had done so it would have taken much longer to restore services to our customers.

It has been clear to us that there were no legal or practical options open to Vodafone, or any of the mobile operators in Egypt, but to comply with the demands of the authorities.

Moreover, our other priority is the safety of our employees and any actions we take in Egypt will be judged in light of their continuing wellbeing.
These statements reveal significantly different positions by large, multi-national corporations. Google and Twitter opted to thumb their noses at the Egyptian government's attempt to silence its citizens, while Vodafone meekly complied, shutting down one of the largest wireless phone networks in the country.

Does this mean that Twitter and Google value human rights more than Vodafone? Does it mean that Vodafone hates freedom? Not really.

The government has guns, and we don't

For a bit of insight on this, let's turn to Google's CEO, Eric Schmidt, in what is perhaps his most truthful interview ever on the topic of privacy, and the reasons why consumers should not trust their data to Google:
There is a problem with the government which is that they have guns and we don't. And so the term "resistance", you want to be careful ... We are required to follow US law, and we do so, even if we don't like it. As the CEO of a public company (or a private company) there can be no other answer.

The key difference between these firms is that neither Google nor Twitter has any infrastructure located in Egypt, while Vodafone likely has hundreds of millions of dollars worth of equipment located in the country. While the Egyptian government could raid Google's Cairo office and arrest its local marketing staff, the government cannot take Google's servers (which are located in other countries) offline. Twitter is in an even safer position, as it doesn't even have a local office in Egypt -- there is nothing that the government can do to hurt the company.

As such, while Google and Twitter certainly deserve praise for going out of their way to frustrate the censorship efforts of the Egyptian government, we should remember that these firms are sacrificing very little in order to do so.

If Vodafone dared to ignore the government's order and kept its network running, it is likely that the authorities would seize or destroy the firm's hugely valuable equipment.

In order to accurately gauge a company's willingness to tell a particular government to go and fuck itself, you have to examine the actions of that company in countries where it actually has significant assets, and where the government can actually shut down its services.

Rather than comparing Vodafone's actions to Google's Tweet-via-voicemail effort, it might be more useful to compare them to Google's recent, voluntary move to scrub the auto-suggest results in its search engine, censoring a few high-profile keywords associated with filesharing and piracy. Google didn't even wait for the government to pass laws requiring it to censor the results -- merely the threat of such legislation on the horizon was enough to get the company to act.

This is not to say that Google is evil, merely that it is a rational actor, and is going out of its way to avoid upsetting governments that can actually harm the company. Keep this in mind the next time that Google (or any other firm) thumbs its nose at the censorship activities of some government in a faraway country -- such actions are easy abroad, but much tougher at home.