Monday, January 31, 2011

Which US cable providers have privacy preserving data retention policies?

The decision not to retain logs of its customers' IP addresses is one of the best ways that a company can proactively take a stand in defense of user privacy. Two of Sweden's largest residential ISPs have adopted such policies, with one even taking the additional step of routing all of its customers' data through an encrypted VPN service in order to anonymize the source of the traffic.

Here in the US, it is unfortunate that Internet Service Providers seem unwilling to embrace such aggressive, pro-user data deletion policies. Instead, firms like Google make full use of Doublespeak in order to justify their retention of user data (Those IP addresses we keep? Yeah, they're not really private information. Quick, look over there! We've created self-driving cars).

However, even if US service providers have not embraced privacy, it appears that one or two firms have enacted policies that, as a perhaps unintentional side effect, do significantly enhance their users' privacy.

At a data retention hearing in the House of Representatives last week, Jason Weinstein, a deputy assistant attorney general in the Department of Justice, testified. Halfway through his written remarks is this interesting fact:
"One mid-size cell phone company does not retain any records, and others are moving in that direction. A cable Internet provider does not keep track of the Internet protocol addresses it assigns to customers, at all. Another keeps them for only seven days."
As I described in a blog post last week, the mid-size cell phone company he mentioned is most likely T-Mobile.

Unfortunately, I have no idea about the identity of the two cable companies he described.

Since one of these firms keeps no logs at all, none of its customers have been targeted in filesharing lawsuits, and it will not have passed on a single DMCA complaint. Depending on how fast the media companies send out their complaints and lawsuit shakedown letters, it is quite possible that the customers of the second cable company may have escaped such harassment too.

My question to those of you who follow the copyright infringement space is this: Do you know of any cable company whose customers have escaped filesharing lawsuits? If so, it might be because the firm has embraced a zero IP data retention policy.

I'd love to know who this is -- and if I live in their service area, I'd love to give them my business.

Saturday, January 29, 2011

Data retention push confirms DOJ hypocrisy

As I described in a lengthy blog post a couple days ago, the US law enforcement community is yet again pushing for mandatory data retention laws, which would require internet service providers to keep records detailing the IP addresses issued to their customers.

At the hearing last Tuesday, Jason Weinstein of the Department of Justice argued that the government needed this data to be able to effectively investigate serious crimes, such as terrorism and child exploitation.

In what truly is a bit of Orwellian doublespeak, Mr. Weinstein told the Congressional committee that retaining this data would actually protect privacy:
Unlike the Department of Justice – which must comply with the Constitution and laws of the United States and is accountable to Congress and other oversight bodies – malicious cyber actors do not respect our laws or our privacy. The government has an obligation to prevent, disrupt, deter, and defeat such intrusions. The protection of privacy requires that we keep information from those who do not respect it — from criminals and others who would abuse that information and cause harm.

Investigating and stopping this type of criminal activity is a high priority for the Department, and investigations of this type require that law enforcement be able to utilize lawful process to obtain data about the activities of identity thieves and other online criminals. Privacy interests can be undercut when data is not retained for a reasonable period of time, thereby preventing law enforcement officers from obtaining the information they need to catch and prosecute those criminals. Short or non-existent data retention periods harm those efforts.
My absolute favorite bit of Mr. Weinstein's testimony is the first sentence above:
Unlike the Department of Justice – which must comply with the Constitution and laws of the United States and is accountable to Congress and other oversight bodies
What I love is the fact that Mr. Weinstein was able to repeat this complete and total lie, under oath, without ever once cracking a sheepish smile, or showing any sign of embarrassment.

From The Washington Post, January 19, 2010:
The FBI illegally collected more than 2,000 U.S. telephone call records between 2002 and 2006 by invoking terrorism emergencies that did not exist or simply persuading phone companies to provide records, according to internal bureau memos and interviews... A Justice Department inspector general's report due out this month is expected to conclude that the FBI frequently violated the law with its emergency requests, bureau officials confirmed.... FBI general counsel Valerie Caproni said in an interview Monday that the FBI technically violated the Electronic Communications Privacy Act when agents invoked nonexistent emergencies to collect records.

The Washington Post, January 21, 2010:
FBI agents for years sought sensitive records from telephone companies through e-mails, sticky notes, sneak peeks and other "startling" methods that violated electronic privacy law and federal policy, according to a Justice Department inspector general report released Wednesday.

The study details how the FBI between 2002 and 2006 sent more than 700 demands for telephone toll information by citing often nonexistent emergencies and using sometimes misleading language. The practice of sending faulty "exigent" letters to three telecommunications providers became so commonplace that one FBI agent described it to investigators as "like having an ATM in your living room."

The New York Times, March 10, 2007:
Bipartisan outrage erupted on Friday on Capitol Hill as Robert S. Mueller III, the F.B.I. director, conceded that the bureau had improperly used the USA Patriot Act to obtain information about people and businesses...

The report found many instances when national security letters, which allow the bureau to obtain records from telephone companies, Internet service providers, banks, credit companies and other businesses without a judge’s approval, were improperly, and sometimes illegally, used.

Moreover, record keeping was so slipshod, the report found, that the actual number of national security letters exercised was often understated when the bureau reported on them to Congress, as required.

The Washington Post, October 24, 2005:
The FBI has conducted clandestine surveillance on some U.S. residents for as long as 18 months at a time without proper paperwork or oversight, according to previously classified documents to be released today.
These reports only detail violations of the law during the last few years. Such abuses are not a new phenomenon, though - the Department of Justice has abused its powers to illegally spy on Americans for as long as the agency has existed.

Furthermore, in spite of the numerous instances in which it was confirmed that FBI agents and DOJ officials violated the law and engaged in illegal surveillance, I can't think of a single instance where they (or the telecommunications carriers that collude in their crimes) have been arrested or prosecuted for doing so. Instead, they get a slap on the wrist, and then it is back to business as usual.

One rule for us, one rule for them

The push for data retention seems to be currently limited to IP address allocation records, but, if successful, it will almost certainly extend to non-content information associated with email, chat and instant messaging communications.

The hypocrisy of the government's push for such data retention is clear when compared to the extreme efforts that government agencies go to in order to shield their own communications, documents and other records from the American people.

Consider for a moment, that this president, like Bush and Clinton before him, does not send any emails. The reason for this? Because such emails would have to be retained under the Presidential Records Act. Rather than let the American people later see a record of his official communications, he simply avoids email, and instead does everything by phone or in-person.

Of course, in this day and age, most people do not have the luxury of going without email. Private citizens, corporations and government employees alike rely on email to go about their daily business. However, while the email accounts that consumers rely on increasingly keep their communications forever (due to essentially unlimited storage), companies and government agencies are increasingly embracing data deletion policies in order to limit the risk that their emails will later see the light of day, due to lawsuits or FOIA requests.

For example, starting in the spring of 2010, the Federal Trade Commission (where I worked until August of 2010) adopted a 90-day email deletion policy. Any email messages that employees did not specifically mark to be saved would be automatically deleted after 90 days. This policy creates a significant barrier for public interest groups wishing to learn about the activities of the agency.

At the FTC, all records about particular investigations are shielded from disclosure as long as the investigation is active. However, since most investigations take 6 months or more, by the time the investigation is eventually made public, many email messages will have already been deleted.

Quite simply, government email deletion policies are specifically designed to circumvent and neutralize open government laws, such as the Freedom of Information Act.

I am sure that the FTC is not the only government agency to embrace an aggressive data deletion policy, and at least right now, there is nothing that legally prohibits agencies from adopting such policies.

This would be a great issue for pro-transparency, pro-oversight House Republicans to tackle. Perhaps once the administration is forced to reveal its own official communications to the whole world, it will be a bit more sympathetic to the efforts of privacy groups and corporations that wish to protect the privacy of regular users.

US Treasury fudges truth on financial privacy

From the New York Times today:
In May, the government will no longer pay someone eligible for benefits with a mailed check. Instead, the money will be electronically deposited directly into a bank account or made accessible by a debit card. And by March 2013, the 10 million people who receive checks, out of 70 million people in all, must switch over to direct deposit or use a card.


Some see the decision as government meddling and say they fear their spending habits may be traced. But [David A. Lebryk, commissioner of the Treasury department’s Financial Management Service] said that information could be obtained only with a court order in a "rare exception."
That quote caught my eye, because I don't think it is correct.

In 1976, the Supreme Court ruled in United States v. Miller that bank customers have no legal right to privacy in financial information held by financial institutions. Responding to this ruling, Congress passed the Right to Financial Privacy Act (RFPA).

The RFPA requires that "no Government authority may have access to or obtain copies of, or the information contained in the financial records of any customer from a financial institution unless the financial records are reasonably described" and

1. the customer authorizes access;
2. there is an appropriate administrative subpoena or summons;
3. there is a qualified search warrant;
4. there is an appropriate judicial subpoena; or
5. there is an appropriate written request from an authorized government authority.

Administrative subpoenas are not court orders, and are not reviewed by a judge.

As for the government's claim that such requests will be infrequent, occurring in "a rare exception", as I described at length in a blog post just a couple months ago, the Department of Justice has argued in court that its prospective real-time surveillance of financial transactions is "routine". How exactly can something be both routine and a rare exception?

The truth is that warrantless financial surveillance likely occurs on a massive scale. The American people (and Congress) have no idea that this happens, because the courts are largely not in the loop, and the government is not required to compile or publish any aggregate statistics on the use of such surveillance methods. That is, although there are detailed annual reports on the use of wiretaps and other electronic intercepts by law enforcement agencies, we have no similar reporting on the surveillance of our financial transactions.

Thursday, January 27, 2011

What the US government can do to encourage Do Not Track

Over the past few months, there has been a lot of discussion about Do Not Track. Although both the FTC and Commerce Department have recently issued privacy reports that mentioned Do Not Track, neither agency has the authority under existing law to make Do Not Track a reality. Either the industry can voluntarily agree to respect such a mechanism, or Congress is going to have to give the FTC the authority to make it happen.

But wait, you might ask, Microsoft has introduced a tracker blocking feature in the upcoming release of IE9 (similar to the massively popular AdBlock Plus add-ons for Firefox and Chrome), and this mechanism doesn't require that the online advertising industry embrace or respect it.

That is certainly true. However, as the industry has demonstrated time and time again with its use of Flash cookies, CSS history sniffing, cache cookies, and browser fingerprinting, unless prohibited from doing so by law, companies will simply "innovate" and engineer around privacy enhancing features in the browser.

What this means is that unless the FTC is given the authority to prevent it, ad networks will either switch domains frequently (so that the blacklists get stale), or host compelling content from the same servers and domains that they use for their ads (for example, if a single domain is used both to deliver videos and to track users, consumers won't be able to effectively block it).

The do not track header

As I described in a 2009 blog post, opt out mechanisms that enable a user to affirmatively express her desire to not be tracked finally free us from this cycle of arms races, in which advertising networks innovate around the latest browser privacy control. At the time that I wrote that blog post, opt out cookies were the only way to express such a preference, which was unfortunate, because opt out cookies have a number of other problems that prevent them from scaling effectively.

However, since then, the Do Not Track header has emerged as a vehicle for users to express their desire to be left alone, via a single preference in the browser, which will then be delivered to all websites that they interact with.
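The mechanics are simple enough to sketch. The snippet below is a minimal illustration, assuming the "DNT: 1" header form that Mozilla's proposal converged on; the helper function and its name are my own invention, not any browser's or ad network's actual code.

```python
# Illustrative sketch: how a website could honor the Do Not Track
# header. The header name ("DNT") and value ("1") follow Mozilla's
# proposal; the function here is a hypothetical example, not real
# ad-network code.

def tracking_allowed(request_headers):
    """Return False if the visitor has expressed a Do Not Track preference."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    headers = {k.lower(): v for k, v in request_headers.items()}
    return headers.get("dnt") != "1"

# A browser with the preference enabled sends "DNT: 1" with every request.
print(tracking_allowed({"DNT": "1"}))         # False: user opted out
print(tracking_allowed({"User-Agent": "x"}))  # True: no preference expressed
```

The appeal of the header over opt out cookies is visible even in this toy version: the preference rides along automatically with every request, to every site, and cannot be wiped out by clearing cookies.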

On Monday of this week, Mozilla announced that it will be including support for the header in a future release of the Firefox browser, which should provide a fix for the current chicken/egg problem, in which no browser sends the header, and so no advertising network looks for and respects the header.

Even though 300 million users will soon be able to send the Do Not Track header, the advertising industry doesn't seem too keen to support it. The Interactive Advertising Bureau's general counsel Mike Zaneis told MediaPost that:
"It's very simplistic to think that you just put something in a header and people will honor it." He adds that it isn't clear whether Mozilla's definition of online tracking for ad purposes aligns with that of self-regulatory groups. "It's an interesting idea that they can offer this header, but if nobody's reading it, and nobody knows what it means, why should we care as an industry?"

Zaneis adds that the IAB is focusing on building out a self-regulatory system that requires companies to honor do-not-track cookies, but not other mechanisms like browser headers.

Why is the IAB focusing on opt-out cookies? Because they are difficult to discover, obtain, use, and easy to delete. Advertisers want to be able to tell Congress that they are doing something to let consumers opt out, but don't actually want that mechanism to be easy to use. The Do Not Track header is so easy to enable that the ad industry is deeply worried that large numbers of consumers just might enable it. As such, the industry will likely do anything it can to derail the header, which almost certainly means that it won't support it until it is absolutely forced to.

How can the Federal government help, without waiting for Congress to pass new laws?

The FTC seems to like the idea of the Do Not Track header -- certainly, the tweet that it issued on Monday praising Mozilla suggests as much.

We’re pleased entities like Mozilla recognize that consumers want a choice in online tracking & are taking steps 2 give it 2 them. #dntrack

Unfortunately, as I described above, neither the FTC nor the Commerce Department can currently force the advertising networks to support the header. What they can do, though, is publicly embrace the header as the best way for users to achieve Do Not Track. The best way to do this, even more so than tweeting about it, would be for government sites to support the do not track header.

Federal cookie rules and opt outs

For more than a decade, Federal agencies were prohibited from using long term tracking cookies on their websites. In 2010, these rules were changed (after a lengthy public comment period, in which the government mostly ignored the suggestions of privacy advocates).

The new rules (pdf) permit tracking technologies, but require opt outs:
Clear Notice and Personal Choice. Agencies must not use web measurement and customization technologies from which it is not easy for the public to opt-out. Agencies should explain in their Privacy Policy the decision to enable web measurement and customization technologies by default or not, thus requiring users to make an opt-out or opt-in decision. Agencies must provide users who decline to opt in or decide to opt-out with access to information that is comparable to the information available to users who opt-in or decline to opt-out.

a. Agency side opt-out. Agencies are encouraged and authorized, where appropriate, to use web tracking and measurement technologies in order to remember that a user has opted out of all other uses of such technologies on the relevant domain or application. Such uses are considered Tier 2.

b. Client side opt-out. If agency side opt-out mechanisms are not appropriate or available, instructions on how to enable client side opt-out mechanisms may be used. Client side opt-out mechanisms allow the user to opt out of web measurement and customization technologies by changing the settings of a specific application or program on the user’s local computer. For example, users may be able to disable persistent cookies by changing the settings on commonly used web browsers. Agencies should refer to, which contains general instructions on how the public can opt out of some of the most commonly used web measurement and customization technologies.
Unfortunately, the "recommended" opt out procedures on the website merely tell consumers how they can disable cookies on various popular browsers. Those consumers who neglect to disable cookies in their browsers will be tracked whether they like it or not.

This form of "opt out" (take our long term tracking cookies, or disable them in your browser) was exactly the method of choice that the online behavioral advertising industry long offered, until, bowing to pressure from privacy advocates and regulators, they started to offer the cookie based opt outs now featured on the Network Advertising Initiative website.

Thankfully, not all government agencies have followed those sample opt out instructions. The Office of Scientific & Technical Information (OSTI), for example, has its own opt out cookie, which disables the collection of web measurement and tracking data on the OSTI website.

This is far better than that approach, and actually gives visitors to the site a usable mechanism with which to protect their privacy. Unfortunately, if each federal agency develops and deploys its own opt out cookie, we will find ourselves in the same problematic situation that currently exists in the behavioral advertising industry (where there are more than 100 different opt out cookies available from various firms).
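To make the agency-side approach concrete, here is a hypothetical sketch of how an opt out cookie like OSTI's might work server-side. The cookie name and values are invented for illustration; they are not OSTI's actual implementation.

```python
# Hypothetical sketch of an agency-side opt-out cookie, as described
# in the federal rules above: before setting any web measurement
# cookie, the server checks for a persistent opt-out cookie. The
# cookie names and values here are invented for illustration.

OPT_OUT_COOKIE = "measurement_opt_out"  # hypothetical name

def cookies_to_set(request_cookies):
    """Return the measurement cookies the server may set, if any."""
    if request_cookies.get(OPT_OUT_COOKIE) == "1":
        return {}  # user has opted out: set no measurement cookies
    return {"visit_id": "abc123"}  # hypothetical measurement cookie

def opt_out_response():
    """Cookies set when the user clicks the site's opt-out link."""
    # Note the irony of the cookie-based model: the opt-out preference
    # is itself stored in a long-lived cookie, and vanishes if the
    # user ever clears their cookies.
    return {OPT_OUT_COOKIE: "1"}
```

The comment in `opt_out_response` captures exactly why 100+ per-agency versions of this scheme would be unworkable for users, and why a single browser-level header scales better.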

In my written comments to the White House back in 2009, I highlighted this problem:
The federal government should learn from the mistakes of the behavioral advertising industry. In your blog post, you also propose that federal government web sites be required to "[p]rovide a clear and understandable means for a user to opt-out of being tracked." As you consider a policy that will require federal websites to offer opt-outs to consumers, it would be useful to look to the situation in the behavioral advertising industry (where opt-out capabilities are widespread, yet difficult to use and discover by consumers), in order to avoid some of the many mistakes and pitfalls that have been made there.
In order to avoid these problems, I suggested that the White House:
Require that Federal web sites support a single, browser based universal opt-out header in addition to the opt-out cookie. This header approach has been repeatedly proposed in the behavioral advertising arena, and would solve many of the problems that plague the current cookie-based opt-out model.

Now that Mozilla has actually embraced the Do Not Track header (a proposal that was implemented in a prototype add-on when I submitted my comments in 2009), the Federal Government could realistically embrace the header as an improved mechanism for tracking opt outs on government sites. This would solve two problems at once: (1) avoiding the chaos of 100+ different federal agency opt out cookies, and (2) providing early support for the Do Not Track header at a time when the technology proposal could very much use a boost.

Wednesday, January 26, 2011

DOJ's push for data retention & competing on privacy

On Tuesday, January 25, 2011, the Republican controlled House Subcommittee on Crime, Terrorism and Homeland Security held a hearing on the topic of data retention. Chairing the hearing was Jim Sensenbrenner, the author of the much-loved USA Patriot Act.

The video of the hearing is online, as is the written testimony of Jason Weinstein of the Department of Justice.

Data retention is (for most people) an obscure and boring topic, even if it has a significant impact on end user privacy. As such, I want to try to analyze DOJ's latest attempt to kickstart the debate about this issue, in order to enable those watching at home to understand the politics at play.

A gentle introduction to the DOJ increased powers playbook

The Department of Justice is actually fairly predictable, and each time it calls for increased powers, it follows the same formula.

First, it will repeatedly mention one or two horrific crimes that everyone in society agrees are awful (usually terrorism and child pornography), and claim that those committing these crimes are not getting caught because of the issue at hand.

Second, the government will put out a couple examples, which have never before been disclosed to the public (even if they are several years old), in which horrible things happened because the government didn't have the information or power it now wants.

Third, the government will highlight companies that currently have particularly bad practices (but without naming those firms), and may also specifically identify one or two companies whose practices are excellent, and that should be models for the entire industry.

Fourth, the government will completely dismiss the concerns of the privacy community.

This formula has been used, just in the last couple years, to try and require emergency, warrantless disclosure of cell-tower data, mandatory registration of prepaid mobile phones, and back doors in encryption technology.

Why doesn't DOJ name names?

One of the most interesting things for me is the practice of not naming names. That is, while the specific problematic practices may be discussed in some detail, the companies that are currently not doing what the government wants are rarely named by the government, either in testimony before Congress, or through the intentional leaks to government-friendly journalists that are used to seed the debate.

Consider the following quote from yesterday's testimony:
"One mid-size cell phone company does not retain any records, and others are moving in that direction. A cable Internet provider does not keep track of the Internet protocol addresses it assigns to customers, at all. Another keeps them for only seven days—often, citizens don’t even bring an Internet crime to law enforcement’s attention that quickly."
Or, from a New York Times article last year:
Starting in late 2008 and lasting into 2009, another law enforcement official said, a "major" communications carrier was unable to carry out more than 100 court wiretap orders. The initial interruptions lasted eight months, the official said, and a second lapse lasted nine days.

This year, another major carrier experienced interruptions ranging from nine days to six weeks and was unable to comply with 14 wiretap orders. Its interception system "works sporadically and typically fails when the carrier makes any upgrade to its network," the official said.

The official declined to name the companies, saying it would be unwise to advertise which networks have problems or to risk damaging the cooperative relationships the government has with them. For similar reasons, the government has not sought to penalize carriers over wiretapping problems.

Even though the government could significantly increase the pressure on particular firms by naming them, it (wisely) doesn't do so. The reason is that the law gives companies a significant amount of flexibility in the way that they design their networks, the data that they voluntarily retain, and over the warrantless disclosures made to government investigators when they claim an emergency. The government knows that if it plays hardball with these firms, they are perfectly within their rights to stop voluntarily retaining data, and insist on a valid court order or other legal process whenever the government wants to investigate one of their customers.

Naming names

Even though the government won't identify the companies with "good" and "bad" data retention practices, there is nothing stopping me from doing so.

In his testimony, Mr. Weinstein stated that "One mid-size cell phone company does not retain any records". If I had to guess, I would bet that Mr. Weinstein is speaking about T-Mobile, which is the largest carrier I know of that does not keep IP allocation logs.

At the ISS World surveillance conference in 2009, I made an audio recording of a panel which featured executives from several telecommunications companies speaking about their relationship with law enforcement agencies, and their own data retention practices (the audio recording of the panel is available here). At that event, a representative from Cricket Communications (a relatively small pre-paid carrier aimed at low income users) told the audience that:
"One of the challenges for Cricket, and a challenge for the law enforcement community, is that we now have broadband and internet access from the handset. And in both instances, the signal goes to our switch, and then is relayed to Level 3 Communications, which then is the conduit to the Internet. From the outside, from the point of capture of the IP address, it is the generic or regional IP address that is picked up. There is no way to come back through our firewall to see which subscriber had a per-session identification on that, and that is something that even if you go to Level 3, they’re not going to have any information either."

T-Mobile's director of law enforcement relations spoke next, and revealed that his company was largely in the same position:
"[T-mobile is] in the same boat that Cricket is, in terms of determining the IP address --- determining the subscriber attached to that IP address.”
Contrast this to the approach taken by Sprint:
Nextel’s system, they statically assign IP addresses to all handsets ... We do have logs, we can go back to see the IP address … On the Sprint 3G network, we have IP data records back 24 months, and we have, depending on the device, we can actually tell you what URL they went to ... If [the handset uses] the [WAP] Media Access Gateway, we have the URL history for 24 months ... We don’t store it because law enforcement asks us to store it, we store it because when we launched 3G in 2001 or so, we thought we were going to bill by the megabyte ... but ultimately, that’s why we store the data ... It’s because marketing wants to rifle through the data.

Unfortunately, representatives from Verizon and AT&T didn't appear at that conference, and so I don't have an on-the-record statement from those firms describing their IP allocation policies. Luckily, a slide presentation for the law enforcement community detailing Verizon's data retention policies leaked onto the Internet.

From this, it is clear that Verizon keeps logs of the individual IP addresses given to users for one year, and, even more troubling, it appears that the company retains the "destination" addresses of all sites that its users visit from their mobile handsets for 30 days.

Finally, while we do not know AT&T's data retention policy, this 2009 study by a team at Microsoft Research confirms that AT&T wireless users are at least given individual IP addresses (as compared to the NAT-based scheme that T-Mobile and Cricket use). As such, the only question is whether AT&T chooses to retain these IP address allocation logs (and given the company's repeated collusion with law enforcement and intelligence agencies, I think it is fair to assume that it does).
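The privacy consequence of the NAT-based design that the Cricket and T-Mobile executives described can be sketched in a few lines. The data below is entirely invented; the point is structural: when many subscribers share one public address and the carrier keeps no per-session mappings, a public IP plus a timestamp identifies a pool, not a person.

```python
# Illustrative sketch (all data invented) of why carrier-grade NAT
# frustrates IP-based identification. Many subscribers share a single
# public address; without per-session port/mapping logs, which these
# carriers say they do not keep, a public IP identifies only a pool.

nat_sessions = [
    # (subscriber, private_ip, public_ip)
    ("alice", "10.0.0.5",  "203.0.113.7"),
    ("bob",   "10.0.0.9",  "203.0.113.7"),
    ("carol", "10.0.0.12", "203.0.113.7"),
]

def subscribers_behind(public_ip):
    """Everyone a given public IP could correspond to."""
    return sorted(s for s, _, pub in nat_sessions if pub == public_ip)

# Without session logs, this ambiguous pool is the best an
# investigator (or a filesharing plaintiff) can recover.
print(subscribers_behind("203.0.113.7"))  # ['alice', 'bob', 'carol']
```

Contrast this with the Sprint and Verizon approach above, where each handset gets its own logged IP address and the mapping back to a single subscriber is trivial.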

Competing on privacy

Over the last few years, firms in a few specific markets have begun to compete on privacy. For example, just in the last month, three of the four main web browsers have each announced privacy enhancing features designed to protect their users from online tracking.

Unfortunately, even though telecommunications firms' data retention policies differ in ways that significantly impact end user privacy, these companies do not compete on these differences, and often go out of their way to keep this information secret. Were it not for the work of activists and whistleblowers inside the firms who have leaked key documents, we would never know some of these details.

This widespread lack of public information about data retention policies poses a significant problem for consumers wishing to evaluate potential service providers on their respective privacy merits. Furthermore, practices among providers operating in the same market vary considerably, which means that the decision to pick a particular service provider can have a significant impact on a user's privacy.

As a result of these policies, for example, a Sprint Nextel customer can be later tracked down based on an anonymous comment left on a blog, or a P2P file downloaded over the company’s cellular network, while customers of T-Mobile and Cricket can freely engage in a variety of online activities without any risk of later discovery.

This lack of public information about key privacy differences would be bad enough if the firms simply kept quiet about the topic of privacy. However, these companies actually proudly boast about their commitment to protecting user privacy, while simultaneously going out of their way to keep the substantive details of their practices (and often, their collusion with government surveillance) secret.

Consider, for example, the following statements by Verizon:

"Verizon has a longstanding and vigorous commitment to protecting its customers’ privacy and takes comprehensive steps to protect that privacy."

"At Verizon, privacy is a key priority. We know that consumers will use the full capabilities of our communications networks only if they trust that their information will remain private."

Strangely enough, Verizon has also argued in court that it has a First Amendment right to voluntarily provide information about its customers’ private communications to the National Security Agency. This may be a valid legal argument, but it is not the kind of position that a company that has pledged to protect users’ privacy should take. Certainly, it is not an official position that the company advertises to its customers on its website or in its privacy policy. Likewise, nowhere on Verizon's website does the company disclose the $1.8 million it has received per year to provide the FBI with "near real-time access to [two years of stored] United States communications records (including telephone and Internet records)."

Why the silence on data retention matters

The fact that most companies do not compete on, or even publicly disclose, their data retention policies means that the government has the upper hand in any effort to get firms to retain more data, or keep it for longer periods.

Over the last year or two, multiple wireless carriers have extended the retention period for historical cell site location information. Retention periods of six months to one year for cell site data are now common across the industry, a significant increase over the 30 days or less that the data was retained two years ago.

These companies faced no push-back from consumers or privacy groups when they extended these retention periods, because consumers were never told that it happened.

Likewise, MySpace and Facebook have both increased their retention periods for the IP addresses associated with account logins. In 2006, MySpace logged this data for 90 days; in 2007, the company expanded its logging to one year. Facebook logged login IP addresses for 30 days in 2007, but by 2008, the company had opted to keep the logs for 90 days.

Bringing this back to the current debate -- because T-Mobile doesn't compete on privacy, and because its customers are often unaware of the benefit they receive from the firm's current IP network design, the firm has no real incentive to resist pressure from the government to retain data. The only real sticking point for the company, I suspect, will be the cost of modifying its network to permit it to uniquely identify and track its users. As such, I fully expect T-Mobile (and any other companies that DOJ leans on) to quietly fold, and establish voluntary data retention policies that are long enough to keep the government happy.

Friday, January 21, 2011

The History of the Do Not Track Header

Last month, both the FTC and Commerce Department published privacy reports that mentioned the possibility of a Do Not Track mechanism. Most people, even those who follow privacy issues, didn't really understand how such a mechanism would work, or where the idea came from. The goal of this lengthy blog post is to try to shed a bit of light on that.

The History of Do Not Track

In 2007, several public interest groups, including the World Privacy Forum, CDT and EFF, asked the FTC to create a Do Not Track list for online advertising. In a very savvy move, these groups named their scheme such that it instantly evoked the massively popular Do Not Call list. That is, even if the average person did not know how the Do Not Track list worked, it would sound like a good idea.

The public interest proposal would have required that online advertisers, not consumers, submit their information to the FTC, which would compile a machine readable list of the domain names used by those companies to place cookies or otherwise track consumers. Browser vendors and 3rd party software makers could then subscribe to this list, and effectively block many forms of tracking. It sounded like a great idea, but it went nowhere, and as the Google Trends chart below shows, it was largely forgotten by the media until 2010.

What happened to bring Do Not Track back to life? FTC Chairman Jon Leibowitz.

On July 27, 2010, the Senate Commerce Committee held a hearing on the topic of online privacy. In his oral testimony at the hearing, Leibowitz stated that the commission was exploring the idea of proposing a "do-not-track" list (he appears to have gone off the official script, as the phrase "do not track" does not appear in his formal written remarks).

Once the concept of Do Not Track (even in the abstract) had been brought back to life, journalists covering the story assumed that it was the public interest groups' proposal that was now actively being considered by policymakers. However, over the space of a few months, a completely different mechanism, one which relies on web browsers sending a header, seemed to gain momentum.

This seems to have caught many in industry and the press off guard. No one knows where the idea came from, or how it managed to displace the public interest groups' earlier effort. The purpose of this blog post is to try to clear that up.

Opt Out Cookies

For more than a decade, the major online advertisers have offered "opt out" mechanisms, through which consumers could signal to the companies that they did not want to receive advertisements targeted based on their online browsing habits. These opt outs worked via cookies (one specific to each ad network), which a consumer could either obtain by visiting each advertising network's website or, if the company was a member of the Network Advertising Initiative (NAI), from the NAI website.

While certainly a step in the right direction when they were first offered, opt out cookies have numerous flaws, the most important of which is that, as cookies, they are deleted whenever a consumer attempts to protect their privacy by erasing other tracking cookies. Quite simply, using the built-in browser controls, consumers cannot instruct their browser to "keep the opt out cookies, but delete everything else." Consumers thus had to re-obtain these opt out cookies each time they deleted their cookies, or, more likely, privacy conscious consumers simply gave up on the formal opt outs, and instead relied on frequent cookie deletion as a more reliable means to avoid tracking.
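The flaw can be sketched in a few lines of Python (the domains, cookie contents, and helper names here are all hypothetical; no real browser exposes this API):

```python
# A minimal sketch of why the built-in "clear cookies" control wipes
# out opt-out cookies along with the tracking cookies.

cookie_jar = {
    "adnetwork-a.example": {"uid": "8311"},      # tracking cookie
    "adnetwork-b.example": {"OPT_OUT": "yes"},   # opt-out cookie
    "social.example": {"session": "abc123"},     # login cookie
}

def clear_all_cookies(jar):
    """All the browser offered: delete everything, opt outs included."""
    jar.clear()

def clear_cookies_with_keep_list(jar, keep_list):
    """What a TACO-style add-on effectively provides: deletion that
    spares a maintained list of known opt-out cookies."""
    for domain in list(jar):
        if domain not in keep_list:
            del jar[domain]

# Clear tracking cookies while keeping the known opt-out cookie.
clear_cookies_with_keep_list(cookie_jar, keep_list={"adnetwork-b.example"})
```

The catch, as described below, is that someone has to maintain that keep-list by hand, one entry per ad network.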

In March 2009, Google released a browser add-on that made Google's own behavioral advertising opt out cookie permanent. Thus, with the add-on installed, users could freely delete their cookies whenever they wanted without accidentally removing Google's opt out cookie. While this was a great move on Google's part, there were more than 100 other advertising networks, and so even if Google's opt out cookie persisted, these other opt out cookies would be erased whenever a consumer took steps to protect their privacy.

My Targeted Advertising Cookie Opt-Out (TACO) add-on

A few days after Google released their opt out tool I bumped into security researcher Dan Kaminsky at a conference. I'm afraid I don't remember the specifics of our conversation anymore, but generally, we spoke about flaws in the opt out system, Google's new tool, and possible technical alternatives to cookie based opt outs, including a browser header.

Soon after (and likely inspired by) my conversation with Dan, I downloaded Google's tool (which the company had released under an open source license) and modified it to include the opt out cookies for several other behavioral advertising networks. I published my TACO add-on and within days hundreds of people downloaded and installed it.

A few days later, Dan emailed me, and urged me to include a browser header in TACO -- not because it would have any immediate impact (since no ad network would look for it), but because it would be a clear expression of user intent:
The reality is you can be tracked no matter what you do or don't set. However -- humor me: Just add an "X-No-Track: user-opt-out=explicit" header to all HTTP requests, and add window.tracking-opt-out=explicit to every DOM.

Oh, and put a comment in the source above it, calling it the Holy Hand Grenade :)

Trust me :)
At the time, I dismissed Dan's suggestion. I wanted to build a tool that would actually improve user privacy, and since cookies were the only way for consumers to opt out, I thought my time was best spent improving that experience. However, on the TACO home page, I noted that a header mechanism would be a far superior replacement for opt out cookies:
The use of individual opt-out cookies for each advertising company is sub-optimal (in fact, the current situation totally sucks). We shouldn't have to identify and seek out each company that might track us in order to opt out. This tool currently supports 90 different advertising networks, some of which require multiple cookies (for different domain names). As a result, this tool installs 90+ opt-out cookies into the browser (they're all generic, and contain no unique, or identifiable information). Since there are still quite a few networks that the tool does not support, it is quite easy to see that the tool could eventually install 100 or more cookies in a user's browser. This solution simply does not scale.

In an ideal universe, we would be able to set a single cookie in the browser stating our preference to be not tracked, without needing to first identify individual advertising networks. Consider, after all, the approach taken with the hugely successful do not call list. You add yourself to a single list, which all telemarketers are then required to honor.

However, for privacy reasons, cookies cannot be accessed by websites hosted in domains different than those that set the original cookie. That is, if one domain sets a cookie in your browser, a site hosted at a different domain won't be able to read it. For 99% of cookies (such as the session cookies used to authenticate your login to Facebook), this is a really good idea. However, for a universal opt-out cookie, this presents significant problems.

As a result, cookies are the wrong technology for a universal opt-out mechanism.

One alternative approach would be to permit the browser to send an opt-out HTTP header, which it could then transmit to every web server to which the user connected. Such a scenario would require that Microsoft, Mozilla, Apple and Google sit down to design such a technical spec. It would also require that the big advertising networks agree to honor such an HTTP header based method for opt-out.
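The client side of such a scheme really is that simple. Here is a sketch using Python's urllib as a stand-in for a browser, attaching an opt-out header (the header name is the one Mike Shaver later suggested; any agreed-upon name would do) to an outgoing request:

```python
# Sketch: a "browser" that stamps an opt-out header onto a request.
# The header name and URL are illustrative; no network traffic is sent.
from urllib.request import Request

req = Request(
    "http://adnetwork.example/ad.js",
    headers={"X-Tracking-Opt-Out": "1"},
)
```

A real browser would do the equivalent for every outgoing request, which is exactly what made the approach attractive: one switch, no per-network cookie list.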
I spent much of the summer of 2009 immersed in the world of online advertising. This included numerous conference calls with attorneys at advertising networks, and evenings spent on the web, locating new advertising networks with opt out cookies I could clone and add to TACO. This led to several updates of my increasingly popular tool, which eventually grew to include more than 100 different opt out cookies.

However, it was never my intention to maintain a browser plugin (even a successful one) -- I am a researcher and an activist, and so my goal in creating TACO was primarily to poke the advertising industry in the eye. As such, within weeks of creating TACO, I reached out to the folks at Mozilla, and begged them to take TACO off my hands by building similar functionality into the Firefox browser.

While several individuals at Mozilla were receptive to the idea of TACO (and had installed it onto their own computers), they weren't so in love with the idea of shipping 100 different opt out cookies with their browser, or having to maintain and update the list for new ad networks. Quite simply, TACO was an inelegant kludge, and didn't scale. In March of 2009, Mozilla's VP of Engineering Mike Shaver emailed me to state his own preference for a header:
Could we not just standardize/promote a header like X-Tracking-Opt-Out, and ask the tracking groups to honour it? Simpler to specify, simpler to update (the null case, in fact), forward-effective as new ad networks add support, and separated from the implications and implementation of cookies.

The Do Not Track Header

The header approach suffered from a serious chicken and egg problem. No ad network was willing to look for, or respect the header (primarily because no one was sending the header). Likewise, because no one was looking for the header, the browser vendors weren't ready to add support for it to their products.

In July of 2009, I decided to try and solve this problem. My friend and research collaborator Sid Stamm helped me to put together a prototype Firefox add-on that added two headers to outgoing HTTP requests:
X-Behavioral-Ad-Opt-Out: 1
X-Do-Not-Track: 1

The reason I opted for two headers was that many advertising firms' opt outs only stop their use of behavioral data to customize advertising. That is, even after you opt out, they continue to track you. There are a handful of firms though that do promise to no longer track you when you opt out. One big problem is that it is very difficult for consumers to figure out which company is doing what -- since they all use the term opt out.

I assumed that any header-based system would be voluntary, and so by using two different headers, I would be able to play nicely with whatever a firm was willing to do. That is, if a firm currently agreed to opt consumers out of all tracking, then the firm could look for the Do Not Track header, but if the firm refused to provide a tracking opt out, they could at least agree to respect a behavioral advertising opt out header.
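A sketch of the server-side logic this tiered scheme implies (the function and flag names here are hypothetical; no ad network ever implemented this):

```python
# How an ad network might honor whichever of the two voluntary headers
# matches the opt out it is actually willing to provide.

def tracking_decision(headers, offers_full_tracking_opt_out):
    """Return the treatment a request should receive, given the opt-out
    headers it carries and what this firm has agreed to honor."""
    if headers.get("X-Do-Not-Track") == "1" and offers_full_tracking_opt_out:
        return "no tracking"
    if headers.get("X-Behavioral-Ad-Opt-Out") == "1":
        return "tracked, but no behavioral ads"
    return "tracked and targeted"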

In mid July 2009, the Future of Privacy Forum organized a meeting and conference call in which I pitched the header concept to a bunch of industry players, public interest groups, and other interested parties. I was perhaps slightly over-dramatic when I told them that the "day of reckoning" was coming for opt out cookies, and that it was time to embrace a header based mechanism. I told them that I planned to add the headers (enabled by default) to my TACO add-on in a future release, after which I would be able to argue that hundreds of thousands of consumers were sending a signal that the advertising firms were ignoring.

In the end, none of the advertising firms showed any interest in the header. A couple months later, I started working at the Federal Trade Commission, and ultimately decided against including the header in TACO, as I thought it might rock the boat at my new job.

In mid 2010, when the FTC Chairman breathed life back into the discussion of Do Not Track, the header I had implemented and lobbied for somehow managed to catch the attention of privacy advocates, public interest groups, regulators and even browser vendors. Ultimately, the Behavioral Advertising Opt Out header seems to have been discarded, and instead, focus has shifted to a single header to communicate a user's preference to not be tracked.

The policy of Do Not Track

The technology behind implementing the Do Not Track header is trivially easy: it took Sid Stamm just a few minutes to whip up the first prototype. The far more complex problem relates to the policy questions of what advertising networks do when they receive the header. This is something that is very much still up in the air (particularly since no ad network has agreed to look for or respect the header).

Over the last few months, a number of privacy experts, including Arvind Narayanan and Jonathan Mayer at Stanford, Lee Tien and Peter Eckersley at the Electronic Frontier Foundation, and Harlan Yu at Princeton University have worked to come up with a solid proposal that will help to shape this more complex part of the debate.

If industry (or the FTC, Commerce and Congress) ultimately settle on the header based approach, there will likely be an intense lobbying effort on industry's part to define what firms must do when they receive the header. Specifically, they will seek to retain as much data as possible, even when they receive the header. As such, the devil will be in the details, and unfortunately, these details will likely be lost on many members of Congress and the press.

Wednesday, January 19, 2011

Google: Iranian Internet users deserve communications security -- Americans, not so much

From The Guardian today:
Google Earth, Picasa and Chrome will be available for download in Iran for the first time from today after the technology firm was granted a communications trade licence by the US government.


[Scott Rubin, Google's head of public policy and communications for Europe, Middle East and Africa] said Google had decided not to make downloads of Google Talk available in Iran because it may have security implications if dissidents used it to communicate. "We're not confident with the security we could provide to keep those conversations private," he said. "Any government that wants to might be able to get into those conversations, and we wouldn't want to provide a tool with the illusion of privacy if it wasn't completely secure."

I am actually quite pleased to see Google acknowledging: (1) that it is often very dangerous to offer insecure tools that users might mistakenly believe are secure, and (2) that government agencies can easily monitor the communications of users of insecure tools.

The problem, of course, is that Google Talk is widely used by Google's millions of customers in the United States, Europe, Asia and the Middle East, all of whom are at risk of government surveillance.

Here in the United States, the Federal government for years abused its surveillance powers to spy on the phone calls and Internet communications of US citizens without ever seeking a court order. The FBI has abused its National Security Letter powers, which were expanded under the USA PATRIOT Act, and for years the agency even embedded phone company employees at its offices, who repeatedly disclosed user data in response to requests submitted on Post-it notes.

All this raises the question: Why is Google more concerned about the privacy of Iranian users than those millions of Google users in the United States?

Google is a US company, is subject to US law, and must disclose communications to the government when law enforcement and intelligence agencies follow the appropriate legal process. As such, no one expects Google to refuse to comply with the law (especially since, as Eric Schmidt has acknowledged, the government has guns, and Google doesn't).

It would be nice, though, if Google were equally committed to not giving its US customers the illusion of security and privacy when, as the firm has acknowledged here, its Google Talk product is simply not capable of delivering anything approaching reasonable security.

Friday, January 14, 2011

The costly anti-piracy lesson Sony failed to learn from Microsoft

Sony is in the news right now: it has taken several security researchers to court after they released code circumventing the company's digital rights management (DRM) technology. Unfortunately for Sony, this problem could have largely been avoided had it learned from Microsoft's lessons.

The Sony Playstation 3 caught the eye of the technical community when it first came out. The IBM Cell microprocessor is absolutely fantastic at floating point math, which, in addition to rendering pretty video game graphics, makes it a great platform for bioinformatics, brute force cryptography, and astrophysics. Sony long encouraged these alternate uses of the PS3 platform by including the OtherOS feature in its software, which made it easy to install Linux.

Unfortunately for those users who quite liked being able to use their living room game console as a mini-supercomputer, in April 2010, Sony took away the feature with the release of the 3.21 firmware update. Users were thus given a choice. If they kept the old firmware, they got to keep using Linux, but lost access to Sony's Playstation Network, and the ability to play games online. Alternatively, users could upgrade the firmware, keep playing new titles, but lose access to the Linux functionality.

Many users were unsurprisingly angry, and some even sued the company. Other users took matters into their own hands.

Fast forward to December 2010, when a team of European security researchers revealed that they had broken the Playstation 3's cryptographic code signing technology (which stopped software running on the platform unless Sony had blessed it). The video of the talk is worth watching (part 1, 2, 3), but the most interesting thing for me came later, in a comment one of the researchers posted to a geek news site:
However, as a whole, the entire PS3 architecture is terrible. Especially after breaking it open and properly analyzing it and finding a ton of screwups (many critical), there is absolutely no doubt in our mind that the sole reason why the PS3 lasted this far is because OtherOS kept all the competent people happy enough not to try to break into the system (that, and maybe hype around their hypervisor and isolated SPE security, both of which turned out to be terribly bad). If you watch the talk you'll actually see that we make this point clear and address the time-to-hack of the PS3. Given our experience and what we've learned from people who work on console hacks, almost nobody tried until OtherOS was removed, so the only valid measurement for "time to hack", as a strength-of-security measure, is the time since OtherOS was removed (9-12 months or so).

OtherOS was Sony's single best security feature.

In hindsight, taking away OtherOS does not look like a smart decision on Sony's part. Had the company paid attention to the experience of Microsoft's Xbox team in dealing with the open source community, perhaps the Playstation 3 DRM flaws would never have been found.

From a law journal article I published in 2007:
Microsoft opted to protect its platform against [those wishing to evade region controls on legitimately purchased games, hobbyist gamers, open source hackers, and software pirates] with one technical solution: any software that ran on the XBox needed to be "digitally signed" by Microsoft. Without a valid digital signature, the software would be rejected by the Xbox. To protect its revenue, Microsoft would only issue a digital signature to those software firms that obtained a license from Microsoft and thus agreed to pay royalties.

The problem of this approach, of course, is that the four different groups, which would normally have very little in common, were now motivated to share information and target the one security system holding them back. While those users who wished to play illegal copies of games were motivated by their desire to avoid paying for software, the other three groups had more personal motivations: creativity, and the desire to do what they felt was their right. Furthermore, both the Linux community and the hobbyist game developer community include skilled and motivated programmers — who by definition — spend their time working on projects for free. In creating a single DRM system, Microsoft inadvertently aligned the "software pirates" with a team of skilled open-source programmers with significant experience in reverse engineering proprietary systems. This is the very same design mistake that was made by the creators of the DVD DRM system.

The first breach of Microsoft’s DRM came from the mod-chip community, but did not pose a significant threat to Microsoft due to the difficult process that installing such a chip required. In July of 2003, the Free-X project announced that its members had figured out a way to get Linux running on the XBox without any hardware modifications. The developers were able to exploit a flaw in one of the system’s games using a "buffer overflow," a technique commonly used in the computer security community. Once they had successfully created a software-based hack, the Linux developers gave Microsoft an ultimatum: release a digital signature for the Linux operating system, which would enable users to legitimately run Linux on the Xbox without having to evade the DRM system or else the developers would release a working implementation of the evasion system to the Internet.

Microsoft refused and so the developers made good on their threat. Other developers took advantage of this information, and thus a number of development communities sprung up around the Xbox. This included the Xbox Media Center, an open-source media player capable of playing videos, multi-region DVDs, streaming video and radio from the Internet, and podcasts. Those wishing to play copied games, both fair use backups and illegal copies, also benefited. In many ways, the software pirates were able to free-ride on the efforts of the Linux hobbyists, although Microsoft attempted to portray them in the media as one and the same.
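The single chokepoint described in the excerpt can be sketched in a few lines. Note this is illustrative only: the Xbox actually used public-key signatures, not the shared-secret MAC below, and all key material and payloads here are made up; but the gatekeeping logic, and why every group ends up attacking the same check, is the same.

```python
import hashlib
import hmac

PLATFORM_KEY = b"console-vendor-secret"  # hypothetical key material

def vendor_sign(binary: bytes) -> str:
    """Signature the vendor issues only to licensed, royalty-paying titles."""
    return hmac.new(PLATFORM_KEY, binary, hashlib.sha256).hexdigest()

def boot(binary: bytes, signature: str) -> str:
    """The single gate: no valid vendor signature, no execution."""
    if not hmac.compare_digest(vendor_sign(binary), signature):
        return "rejected"
    return "running"

licensed = b"licensed game code"   # signed by the vendor
homebrew = b"linux loader"         # the vendor refuses to sign this
```

Because pirates, hobbyists, and Linux developers are all refused by the same `boot()` check, one exploit that bypasses it serves all of them at once.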

Wednesday, January 12, 2011

Microsoft: Competing on privacy?

Last week, Dean Hachamovitch, the Corporate VP at Microsoft in charge of Internet Explorer, was interviewed on stage at the Consumer Electronics Show (CES) in Vegas. He was there to discuss the next version of the company's browser, and spent most of his time talking about his firm's commitment to privacy. Clearly not a fan of subtlety, Hachamovitch wore a t-shirt with the word "private" printed on it in large letters (the IE logo took the place of the letter e).

A few years ago, advertising executives within Microsoft pulled rank and forced the IE team to sabotage an otherwise pretty cool anti-tracking feature in IE8. After the Wall Street Journal exposed the tale last summer and the company was rightfully savaged, Microsoft has now decided to offer a far more effective anti-tracking tool in IE9.

As I explained at length in a blog post last month, Microsoft has decided to try to compete on privacy, likely because it is an area in which one of its main competitors (Google) is rather weak. During his interview at CES, Hachamovitch himself was quite happy to take potshots at Google, and at the fact that the firm's advertising business depends on facilitating, not stopping, the tracking of users.
Q: A cynical journalist might suggest that you’re embracing privacy and wearing a shirt because Firefox et al are eating your lunch.

A: Paying Windows customers want a great experience that includes privacy, including through their browser. But another way to view people who use browsers is that they’re objects to be boxed and sold. We don’t believe that. We believe Windows customers should have a great experience with their browser.

Q: As opposed to?

A: Well, Chrome, for instance, is funded by advertising.
While I of course believe that Microsoft's newfound religion on privacy is motivated by a desire to compete against Google, I see no reason to think that its commitment to "privacy" is anything but genuine. The problem lies with Microsoft's definition of privacy.

When Microsoft talks about the ways that it is innovating and shipping technologies designed to protect its users' privacy, it is talking about online tracking, not the law enforcement and intelligence agencies that regularly request and obtain private user data. However, as proven by the NSA warrantless wiretapping scandal, and the FBI's repeated abuse of its own surveillance powers, the threat to user privacy from the government is very real. Likewise, as Twitter demonstrated through its bold actions in fighting to have a court order for WikiLeaks-related data unsealed last week, companies can play a vital role, if they choose to do so, in protecting users.

The problem is that Microsoft, like so many firms, has a very narrow definition of privacy. To quote from my latest law journal article:
With few exceptions, the companies to whom millions of consumers entrust their private communications are committed to assist in the collection and disclosure of that data to law enforcement and intelligence agencies – all while simultaneously promising to protect their customers’ privacy.

When these firms speak about their commitment to protecting their customers’ privacy, what they really mean is that they will protect their customers’ data from improper access or use by commercial entities. The fact that these firms have a limited definition of privacy is not made clear to consumers, who may mistakenly believe that the companies to whom they entrust their data are committed to protecting their privacy from all threats, and not just those from the private sector.

It would be bad enough if Microsoft were just ignoring privacy threats from the government, but as I will now explain, the company has repeatedly gone out of its way to assist law enforcement and intelligence agencies in their effort to investigate users. It has put the interests of the government over the privacy of its regular customers.

How Microsoft sacrifices user privacy in order to assist the government

When asked in 2007 by the New York Times if the company was considering a policy to log no search data at all, Peter Cullen, Microsoft’s chief privacy strategist, argued that too much privacy was actually dangerous. "Anonymized search," he said, "can become a haven for child predators. We want to make sure users have control and choices, but at the same time, we want to provide a security balance."

Similarly, the company proactively appends the IP address of each Hotmail user's personal computer to the headers of every outbound email. This is not required by any technical standard, and is a purely voluntary act on Microsoft's part. As far as I am aware, Microsoft and Yahoo are the only two major email providers that do this, and the end result is that law enforcement agencies can determine the IP address of the user who sent any Hotmail-originated email, and thus go directly to the user's ISP to determine their identity, without having to go to the trouble of contacting Microsoft first.
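To see why this matters, consider that the sender's IP can be read straight out of a message an investigator already possesses, with no request to Microsoft needed. A sketch using Python's standard email parser (the message and address below are made up; Hotmail historically placed the address in an X-Originating-IP header):

```python
# Parse a raw message and pull out the sender's IP that the provider
# voluntarily stamped into the headers.
from email import message_from_string

raw = (
    "From: someone@hotmail.example\r\n"
    "X-Originating-IP: [203.0.113.7]\r\n"
    "Subject: hi\r\n"
    "\r\n"
    "message body\r\n"
)

msg = message_from_string(raw)
# The header value is bracketed; strip whitespace and the brackets.
sender_ip = msg["X-Originating-IP"].strip().strip("[]")
```

With the IP in hand, a subpoena to the user's ISP is all that stands between an anonymous email and a name.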

Microsoft has also developed computer forensics software which it freely distributes to government agencies, allowing them to easily extract private data from seized Windows computers. As the company states on the webpage for the COFEE forensics tool, "If it's vital to government, it's mission critical to Microsoft."

Finally, the most frustrating thing for me personally is Microsoft's position on disk encryption. Microsoft considers BitLocker disk encryption a "premium" feature, and restricts it to only those consumers who buy the Ultimate version of Windows 7. For consumers using the copy of Windows 7 Home Premium that came with the new PC they bought at Staples, the cost of the Ultimate upgrade is $139.95.

In contrast, Google has opted to ship disk encryption enabled by default on its new Chrome OS platform, and Apple and Ubuntu Linux both include encryption with their systems (to be enabled with a single checkbox during or after installation).

The end result of Microsoft's decision is that few regular consumers use BitLocker, and instead, those who do wish to use some form of disk encryption generally seek out third party software, like TrueCrypt.

I would be extremely surprised if Microsoft has extracted much additional profit from this decision; so much so that I suspect money is not the reason for it. Instead, I suspect (and have heard rumors from insiders at Microsoft suggesting so) that it is an intentional move designed to limit the widespread adoption of encryption by regular users.

The man who either made this product decision or played a significant role in influencing it is Scott Charney, Microsoft's Corporate VP in charge of Trustworthy Computing. Before coming to Microsoft, Charney was a prosecutor at the Department of Justice, where he served as Chief of the Computer Crime and Intellectual Property Section (CCIPS).

Easy-to-enable (or worse, deployed-by-default) disk encryption would seriously frustrate the investigative abilities of the law enforcement community, including many of his former colleagues.

What this means

Based on its current actions, it is clear that Microsoft is not interested in protecting its users from government intrusions into their privacy. Yes, the company has played a significant role in the Digital Due Process coalition, and executives have testified multiple times before Congress in the last year supporting the reform of the Electronic Communications Privacy Act (these actions on Microsoft's part are not entirely altruistic. Updating electronic privacy law would give consumers and businesses more of a reason to entrust their private data to Microsoft's cloud services). However, such reforms (while an improvement) will only require that a judge approve the disclosure of data held in the cloud. If a judge says OK, the data will still be handed over.

As a software and technology company, Microsoft is in a fantastic position to actually offer solid protection to end users and embrace privacy by design. It can adopt limited (or zero) data retention periods; it can enable encryption by default, wherever possible, so that seized data is useless to anyone but its owner; and instead of building forensics software to extract data from Windows computers, it should be hardening Windows so that forensics tools are unable to extract anything of value.

The problem for Microsoft (and so many other large companies) is that pissing off national and state governments isn't good for business, particularly when they are some of your largest customers. Furthermore, for a firm that is so actively engaged in Washington DC, any moves that seriously frustrate law enforcement interests would likely consume political capital that could otherwise be spent lobbying for things that will actually improve the company's profits.

As such, I don't seriously expect Microsoft to fully embrace privacy, or to deploy any technology that will seriously frustrate law enforcement agencies. I'm not going to waste my time trying to argue that the company should do this. What I will argue though, is that the company should not be permitted to loudly advertise its commitment to privacy, when it is clearly not the case. The company's claims, quite simply, are false and deceptive. At the very least, the company should have to clarify its definition of privacy, and acknowledge, prominently, that it has opted to not protect users from government threats.

This is where the FTC (or other countries' consumer protection agencies) can and should play a role, if they wish. While companies have no obligation to protect their customers from government surveillance, they are at least obligated to make truthful statements when describing their products, particularly when they proudly advertise privacy as a major feature.

Sunday, January 09, 2011

Thoughts on the DOJ wikileaks/twitter court order

The world's media has jumped on the news that the US Department of Justice has sought, and obtained, a court order compelling Twitter to reveal account information associated with several of its users who are connected to Wikileaks.

Communications privacy law is exceedingly complex, and unfortunately, none of the legal experts who actually specialize in this area (people like Orin Kerr, Paul Ohm, Jennifer Granick and Kevin Bankston) have yet chimed in with their thoughts. As such, many commentators and journalists are completely botching their analysis of this interesting event. While I'm not a lawyer, the topic of government requests to Internet companies is the focus of my dissertation, so I'm going to try to provide a bit of useful analysis. As always, take it with a grain of salt.

A quick introduction to the law

On December 14, an attorney in the US Department of Justice obtained a court order compelling Twitter to reveal records associated with several of its users. The order, issued under 18 USC 2703(d), is not a subpoena (even though the AP, New York Times, Salon and many other outlets have reported that it is). Subpoenas are essentially letters written by law enforcement officers, on official agency letterhead, that have not been reviewed or signed by a judge. The 2703(d) order in question was issued by a magistrate judge.

Per the statute, a judge isn't supposed to issue a 2703(d) order unless the government "offers specific and articulable facts showing that there are reasonable grounds to believe that the contents of a wire or electronic communication, or the records or other information sought, are relevant and material to an ongoing criminal investigation". We don't know what these facts are though -- as it doesn't look as though the government's original request to the court has been made public. (It isn't clear if those records themselves remain sealed. I tried to find the case in PACER, and couldn't locate it, so this will have to wait until Monday, when someone can call up the Clerk's office to ask for the documents).

"d" orders can be used to obtain customer records (name, address, credit card info, IP addresses used to connect to the service) and non-content data associated with individual communications (to/from and timestamps from emails, etc.). They can also be used to obtain any saved, outbound communications (such as the "sent" mail folder), all communications that are more than 180 days old, as well as those that have been opened and viewed at least once (except in the 9th circuit). If the government wants access to unread messages that are 180 days old or newer, it must seek a rule 41 court order, which requires a showing of probable cause.
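Because these distinctions trip up so many journalists, it may help to lay out the content rules as explicit logic. The following is my own simplified sketch of the rules just described (it ignores the separate saved-outbound-mail category, and it is obviously not legal advice):

```python
def process_required_for_content(age_days, opened, ninth_circuit=False):
    """Simplified sketch of which legal process reaches stored
    message *content* under the rules described above."""
    if age_days > 180:
        return "2703(d) order"   # older than 180 days
    if opened and not ninth_circuit:
        return "2703(d) order"   # opened at least once, outside the 9th circuit
    return "Rule 41 warrant"     # unread (or opened, in the 9th circuit) and recent

print(process_required_for_content(200, opened=False))  # 2703(d) order
print(process_required_for_content(30, opened=True))    # 2703(d) order
print(process_required_for_content(30, opened=False))   # Rule 41 warrant
```

The key asymmetry to notice: the moment a message ages past 180 days, the probable-cause requirement evaporates and a mere "d" order suffices.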

The order to Twitter

The government's wikileaks "d" order, as the statute permits, requests the subscriber information associated with the accounts (essentially copying this language in full from the statute).

It is the second part of the order that is more interesting. Again, as the statute allows, the government is requesting non-content information associated with individual communications. What the government appears to be seeking in part 2 is the metadata associated with every Twitter communication to and from the users named in the order. What this means is open to debate. It could mean the name and timestamp of every user who has sent a private message to, or received one from, one of the named individuals. It might also mean the list of individuals who have publicly communicated with or mentioned the named individual, or who have been named in a tweet by those persons. It might even include a list of followers, although this information is already public, so it is unclear why the government would seek it through a court order.

The statute (and caselaw) permits the government to use a "d" order to get access to communications older than 180 days, those that have been read at least once (outside the 9th circuit), and saved outgoing messages. What isn't so clear to me though, is if the government has requested this information from Twitter or not when it asks for "correspondence and notes of records related to the accounts".

My initial impression is that this is not a request for communications content, but rather for communications between the user and Twitter itself (for example, customer service inquiries). I'm not really sure about this, though... so I'll wait for the real experts to weigh in on this bit.

Reading between the lines

With that discussion of the law out of the way, let's get to the fun part: speculation. Based on this order, and the events that followed, there are some interesting observations to be made.

1. Amateur Hour. The 2703(d) order misspelled the name of one of the targets, Rop Gonggrijp. It also requested credit card and bank account numbers of several Twitter users, even though Twitter is a free service and so doesn't have such information (presumably someone at DOJ knows a little about Twitter, since the agency has 350,000 followers of its official Twitter account).

The Department of Justice prosecutor named in the order, Tracy Doherty-McCormick, was prosecuting online child exploitation cases just five months before the Twitter order was issued. Given that the wikileaks investigation is the most high-profile national security investigation of the decade, and that the court order seeks records associated with an Icelandic member of parliament, you would think that DOJ would assign this case to someone more senior.

From my own experience, outside of the Computer Crime & Intellectual Property Section (CCIPS) and the National Security Division, most DOJ attorneys know very little about technology. As such, it may simply be that Doherty-McCormick, through her experience in prosecuting pedophiles caught in online stings, may be the most tech savvy prosecutor in her office, and thus could have been brought in to help with the investigation on that basis alone.

However, the technical knowledge involved in tricking a pedophile into meeting what he believes is a 13 year old girl isn't quite the same as that required to investigate a sophisticated organization run by skilled computer security researchers. Presumably, Doherty-McCormick is in regular communication with tech-savvy attorneys from CCIPS, who are likely assisting in this matter.

2. Three of the individuals named in the order, Jacob Appelbaum, Rop Gonggrijp, and Julian Assange, are computer security experts - Appelbaum has worked with the Tor project, and has co-authored some pretty awesome encryption research, Assange co-authored a deniable encrypted filesystem, and Rop has worked for several years to create mobile phone encryption software. All three likely use strong encryption to store and transmit sensitive communications, and use Tor to mask their IP addresses. As such, I'm not really sure what DOJ hopes to gain by asking Twitter for this data -- it is doubtful that these individuals have entrusted Twitter with anything private.

3. Why the "d" order? For a case this high profile, it is quite shocking that the government is using a "d" order to try and gather information. At least for Assange and Manning, surely there is sufficient evidence already to demonstrate probable cause, and get a rule 41 warrant, which could be used to get full communications content and prospective location information? What is even more surprising though, is that criminal statutes are being used, and not foreign intelligence laws. To be perfectly frank, I would have bet money that DOJ had already obtained a FISA order to monitor Assange and any of his associates. I really don't know what to make of this.

4. Twitter. The bigger story here, IMHO, and one far more interesting than the government request for wikileaks-related info, is the fact that Twitter has gone out of its way to fight for its users' privacy. The company went to court, and was successful in asking the judge to unseal the order (something it was not required to do), and then promptly notified its users, so that they could seek to quash the order. Twitter could quite easily have complied with the order, and would have had zero legal liability for doing so. In fact, many other Internet companies routinely hand over their users' data in response to government requests, and never take steps either to have the orders unsealed, or to give their users notice and thus an opportunity to fight the order.

Alex Macgillivray, Twitter's general counsel, is clearly behind this strong, pro-privacy move. Macgillivray was one of the first law students at Harvard's Berkman Center. Until he moved to Twitter, he worked on copyright and privacy issues at Google, where he played a major role in getting the company to contribute takedown requests to Not surprisingly, Twitter recently started sending copies of takedowns to Chilling Effects too.

It is wonderful to see companies taking a strong stance and fighting for their users' privacy. I am sure that this will pay long-term PR dividends to Twitter, and it is a refreshing change compared with the actions of some other major telecommunications and Internet application providers, who often bend over backwards to help law enforcement agencies. Simply put, the contrast between Amazon, Paypal (owned by eBay) and Twitter couldn't be clearer.

As one further example of this difference, consider Twitter's actions here in contrast with comments from eBay's director of compliance a few years back:
We [eBay] try to make rules to make it difficult for people to commit fraud and easy for you [law enforcement agencies] to investigate. One is our Privacy policy. I know from investigating eBay fraud cases that eBay has probably the most generous policy of any internet company when it comes to sharing information. [emphasis added]

We do not require a subpoena except for very limited circumstances. We require a subpoena when we need the financial information from the site, credit card info or sometimes IP information.

5. Did the government seek the contents of private messages? As I wrote above, it's not clear whether the government sought the content of private messages. Had it sought such information, I would have expected the order to be clearer in describing it. In any case, based on Twitter's actions in getting the court to unseal the 2703(d) order, had the government sought communications content, I would fully expect the company to fight that order on 4th amendment grounds.

My guess is that the government opted not to ask for such information purely as a strategic matter, as it probably feared that Twitter would lawyer up, refuse to disclose any communications content, and seek to have that part of 18 USC 2703 ruled unconstitutional. Over the past year or so, several courts have taken a dim view of the government's practice of obtaining various forms of private data without probable-cause warrants. A 2703(d) request for content from Twitter would be an ideal opportunity for the courts to examine this issue, and would likely have been very risky for the government.

What comes next

This case is extremely high profile -- it involves data privacy; Twitter, arguably the hottest communications service around; wikileaks; and a member of the Icelandic parliament. I fully expect this to go to court, and for absolutely everyone to try to get involved -- privacy groups, communications providers, and perhaps even the Icelandic government will all likely file amicus briefs.

As a privacy advocate and researcher, I can't wait to see this situation develop.