Monday, July 30, 2012

Tech journalists: Stop hyping unproven security tools

Preface: Although this essay compares the media's similar hyping of Haystack and Cryptocat, the tools are, at a technical level, in no way similar. Haystack was, at best, snake oil peddled by a charlatan. Cryptocat is an interesting, open-source tool created by a guy who means well, and usually listens to feedback.

In 2009, media outlets around the world discovered, and soon began to shower praise upon, Haystack, a software tool designed to allow Iranians to evade their government's Internet filtering. Haystack was the brainchild of Austin Heap, a San Francisco software developer whom the Guardian described as a "tech wunderkind" with the "know-how to topple governments."

The New York Times wrote that Haystack "makes it near impossible for censors to detect what Internet users are doing." The newspaper also quoted one of the members of the Haystack team saying that "It's encrypted at such a level it would take thousands of years to figure out what you’re saying."

Newsweek stated that Heap had "found the perfect disguise for dissidents in their cyberwar against the world’s dictators." The magazine revealed that the tool, which Heap and a friend had built in "less than a month and many all-nighters" of coding, was equipped with "a sophisticated mathematical formula that conceals someone’s real online destinations inside a stream of innocuous traffic."

Heap was not content to merely help millions of oppressed Iranians. Newsweek quoted the 20-something developer revealing his long term goal: "We will systematically take on each repressive country that censors its people. We have a list. Don’t piss off hackers who will have their way with you."

The Guardian even selected Heap as its Innovator of the Year. The chair of the award panel praised Heap's "vision and unique approach to tackling a huge problem" as well as "his inventiveness and bravery."

This was a feel-good tech story that no news editor could ignore. A software developer from San Francisco taking on a despotic regime in Tehran.

There was just one problem: The tool hadn't been evaluated by actual security experts. Eventually, Jacob Appelbaum obtained a copy of the software and analyzed it. The results were not pretty -- he described it as "the worst piece of software I have ever had the displeasure of ripping apart."

Soon after, Daniel Colascione, the lead developer of Haystack, resigned from the project, saying the program was an example of "hype trumping security." Heap ultimately shuttered Haystack.

After the proverbial shit hit the fan, the Berkman Center's Jillian York wrote:

I certainly blame Heap and his partners–for making outlandish claims about their product without it ever being subjected to an independent security review, and for all of the media whoring they’ve done over the past year.

But I also firmly place blame on the media, which elevated the status of a person who, at best was just trying to help, and a tool which very well could have been a great thing, to the level of a kid genius and his silver bullet, without so much as a call to circumvention experts.

Cryptocat: The press is still hypin'

In 2011, Nadim Kobeissi, then a 20-year-old college student in Canada, began developing Cryptocat, a web-based secure chat service. The tool was criticized by security experts after its initial debut, but stayed largely below the radar until April 2012, when it won an award at the Wall Street Journal's Data Transparency Codeathon. Days later, the New York Times published a profile of Kobeissi, whom the newspaper described as a "master hacker."

Cryptocat originally launched as a web-based application, which required no installation of software by the user. As Kobeissi told the New York Times:

"The whole point of Cryptocat is that you click a link and you’re chatting with someone over an encrypted chat room... That’s it. You’re done. It’s just as easy to use as Facebook chat, Google chat, anything.”

There are, unfortunately, many problems with the entire concept of web-based crypto apps, the biggest of which is the difficulty of securely delivering JavaScript code to the browser. In an effort to address these legitimate security concerns, Kobeissi released a second version of Cryptocat in 2011, delivered as a Chrome browser plugin. The default version of Cryptocat on the public website remained the less secure, web-based version, although users visiting the page were informed of the existence of the more secure Chrome plugin.

Forbes, Cryptocat and Hushmail

Two weeks ago, Jon Matonis, a blogger at Forbes, included Cryptocat in his list of 5 Essential Privacy Tools For The Next Crypto War. He wrote that the tool "establishes a secure, encrypted chat session that is not subject to commercial or government surveillance."

If there is anyone who should be reluctant to offer such bold, largely unqualified praise to a web-based secure communications tool like Cryptocat, it should be Matonis. Several years ago, before he blogged for Forbes, Matonis was the CEO of Hushmail, a web-based encrypted email service. Like Cryptocat, Hushmail offered a 100% web-based client, as well as a downloadable Java-based client which was more resistant to certain interception attacks, but less easy to use.

Hushmail had claimed in public marketing materials that "not even a Hushmail employee with access to our servers can read your encrypted e-mail, since each message is uniquely encoded before it leaves your computer." It was therefore quite a surprise when Wired reported in 2007 that Hushmail had been forced by a Canadian court to insert a backdoor into its web-based service, enabling the company to obtain decrypted emails sent and received by a few of its users.

The moral of the Hushmail story is that web-based crypto tools often cannot protect users from surveillance backed by a court order.

Wired's ode to Cryptocat

This past Friday, Wired published a glowing, 2,000-word profile of Kobeissi and Cryptocat by Quinn Norton. It begins with a bold headline: "This Cute Chat Site Could Save Your Life and Help Overthrow Your Government," after which, Norton describes the Cryptocat web app as something that can "save lives, subvert governments and frustrate marketers."

In her story, Norton emphasizes the usability benefits of Cryptocat over existing secure communications tools, and the impact this will have on the average user for whom installing Pidgin and OTR is too difficult. Cryptocat, she writes, will allow "anyone to use end-to-end encryption to communicate without ... mucking about with downloading and installing other software." As Norton puts it, Cryptocat's no-download-required distribution model "means non-technical people anywhere in the world can talk without fear of online snooping from corporations, criminals or governments."

In short, Norton paints a picture in which Cryptocat fills a critical need: secure communications tools for the 99%, for the tl;dr crowd, for those who can't, don't know how to, don't have time to, or simply don't want to download and install software. For such users, Cryptocat sounds like a gift from the gods.

Journalists love human interest stories

Kobeissi presents the kind of human interest story that journalists dream about: A Lebanese hacker who has lived through 4 wars in his 21 years, whose father was killed, whose house was bombed, who was interrogated by the "cyber-intelligence authorities" in Lebanon and by the Department of Homeland Security in the US, and who is now building a tool to help others in the Arab world overthrow their oppressive governments.

As such, it isn't surprising that journalists and their editors aren't keen to prominently highlight the unproven nature of Cryptocat, even though I'm sure Kobeissi stresses it in every interview. After all, which journalist in their right mind would want to spoil this story by mentioning that the web-based Cryptocat system is vulnerable to trivial man in the middle, HTTPS stripping attacks when accessed using Internet Explorer or Safari? What idiot would sabotage the fairytale by highlighting that Cryptocat is unproven, an experimental project by a student interested in cryptography?

And so, such facts are buried. The New York Times waited until paragraph 10 in a 16 paragraph story to reveal that Kobeissi told the journalist that his tool "is not ready for use by people in life-and-death situations." Likewise, Norton waits until paragraph 27 of her Wired profile before she reveals that "Kobeissi has said repeatedly that Cryptocat is an experiment" or that "structural flaws in browser security and Javascript still dog the project." The preceding 26 paragraphs are filled with feel good fluff, including a description of his troubles at the US border and a three-paragraph no-comment from US Customs.

At best, this is bad journalism, and at worst, it is reckless. If Cryptocat is the secure chat tool for the tl;dr crowd, burying its known flaws 27 paragraphs down in a story almost guarantees that many users won't learn about the risks they are taking.

Cryptocat has faced extensive criticism from experts

Norton acknowledges in paragraph 23 of her story that "Kobeissi faced criticism from the security community." However, she never actually quotes any critics. She quotes Kobeissi saying that "Cryptocat has significantly advanced the field of browser crypto" but doesn't give anyone the opportunity to challenge the statement.

Other than Kobeissi, Norton's only other identified sources in the story are Meredith Patterson, a security researcher previously critical of Cryptocat, who is quoted saying "although [Cryptocat] got off to a bumpy start, he’s risen to the occasion admirably," and an unnamed active member of Anonymous, who is quoted saying "if it's a hurry and someone needs something quickly, [use] Cryptocat."

It isn't clear why Norton felt it wasn't necessary to publish any dissenting voices. From her public Tweets, it is, however, quite clear that Norton has no love for the crypto community, which she believes is filled with "privileged", "mostly rich 1st world white boys w/ no real problems who don't realize they only build tools [for] themselves."

Even though their voices were not heard in the Wired profile, several prominent experts in the security community have criticized the web-based version of Cryptocat. These critics include Thomas Ptacek, Zooko Wilcox-O'Hearn, Moxie Marlinspike and Jake Appelbaum. The latter two, coincidentally, have faced pretty extreme "real world [surveillance] problems" documented at length by Wired.

Security problems with Cryptocat and Kobeissi's response

Since Cryptocat was first released, security experts have criticized the web-based app, which is vulnerable to several attacks, some of them possible using automated tools. The response by Kobeissi to these concerns has long been to point to the existence of the Cryptocat browser plugin.

The problem is that Cryptocat is described by journalists, and by Kobeissi in interviews with journalists, as a tool for those who can't or don't want to install software. When Cryptocat is criticized, Kobeissi then points to a downloadable browser plugin that users can install. In short, the only technology that can protect users from network attacks against the web-only Cryptocat also neutralizes its primary, and certainly most publicized feature.

Over the past few weeks, criticism of the web-based Cryptocat and its vulnerability to attacks has increased, primarily on Twitter. Responding to the criticism, on Saturday, Kobeissi announced that the upcoming version 2 of Cryptocat will be browser-plugin only. At the time of writing this essay, the Cryptocat web-based interface also appears to be offline.

Kobeissi's decision to ditch the no-download-required version of Cryptocat came just one day after the publication of Norton's glowing Wired story, in which she emphasized that Cryptocat enables "anyone to use end-to-end encryption to communicate without ... mucking about with downloading and installing other software."

This was no doubt a difficult decision for Kobeissi. Rather than leading the development of a secure communications tool that Just Works without any download required, he must now rebrand Cryptocat as a communications tool that doesn't require operating system install privileges, or one that is merely easier to download and install. This is far less sexy, but, importantly, far more secure. He made the right choice.

Conclusion

The technology and mainstream media play a key role in helping consumers to discover new technologies. Although there is a certain amount of hype with the release of every new app or service (if there isn't, the PR people aren't doing their jobs), hype is dangerous for security tools.

It is by now well documented that humans engage in risk compensation. When we wear seatbelts, we drive faster. When cyclists wear helmets, motorists drive closer to them. These safety technologies at least work.

We also engage in risk compensation with security software. When we think our communications are secure, we are probably more likely to say things that we wouldn't if our calls were going over a telephone line or via Facebook. However, if the security software people are using is in fact insecure, then the users of the software are put in danger.

Secure communications tools are difficult to create, even by teams of skilled cryptographers. The Tor Project is nearly ten years old, yet bugs and design flaws are still found and fixed every year by other researchers. Using Tor for your private communications is by no means 100% safe (although, compared to many of the alternatives, it is often better). However, Tor has had years to mature. Tools like Haystack and Cryptocat have not. No matter how good you may think they are, they're simply not ready for prime time.

Although human interest stories sell papers and lead to page clicks, the media needs to take some responsibility for its ignorant hyping of new security tools and services. When a PR person retained by a new hot security startup pitches you, consider approaching an independent security researcher or two for their thoughts. Even if it sounds great, please refrain from showering the tool with unqualified praise.

By all means, feel free to continue hyping the latest social-photo-geo-camera-dating app, but before you tell your readers that a new security tool will lead to the next Arab Spring or prevent the NSA from reading peoples' emails, step back, take a deep breath, and pull the power cord from your computer.

Thursday, July 26, 2012

The known unknowns of Skype interception

Over the past few weeks, the technical blogosphere, and most recently, the mainstream media, have tried to answer the question: What kind of assistance can Skype provide to law enforcement agencies?

Most of the stories have been filled with speculation, sometimes informed, but mostly not. In an attempt to paint as clear a picture as possible, I want to explain what we do and don't know about Skype and surveillance.

Skype has long provided assistance to governments

The Washington Post reported yesterday that:
Skype, the online phone service long favored by political dissidents, criminals and others eager to communicate beyond the reach of governments, has expanded its cooperation with law enforcement authorities to make online chats and other user information available to police

The changes, which give the authorities access to addresses and credit card numbers, have drawn quiet applause in law enforcement circles but hostility from many activists and analysts.

To back up its claim, the Post cites interviews with "industry and government officials familiar with the changes" who "spoke on the condition of anonymity because they weren’t authorized to discuss the issue publicly." Ugh.

However, a quick Google search for "Skype law enforcement handbook" quickly turns up an official-looking document on the whistleblower website cryptome.org, dated October 2007, which makes it clear that Skype has long been providing the assistance that the Post claims is new.

From Skype's 2007 law enforcement handbook:

In response to a subpoena or other court order, Skype will provide:
• Registration information provided at time of account registration
• E-mail address
• IP address at the time of registration
• Financial transactions conducted with Skype in the past year, although details of the credit cards used are stored only by the billing provider used (for instance, Bibit, RBS or PayPal)
• Destination telephone numbers for any calls placed to the public switched telephone network (PSTN)
• All service and account information, including any billing address(es) provided, IP address (at each transaction), and complete transactional information
While Skype's law enforcement handbook suggests that the company does not have access to IP address session logs, a high-profile criminal case from 2006 suggests that it does:

Kobi Alexander, the founder of Comverse, was nabbed in Negombo, Sri Lanka yesterday by a private investigator. He is wanted by the US government in connection with financial fraud charges. He is accused of profiting from some very shady stock-option deals, to the detriment of Comverse shareholders. Once the deals became public and he was indicted, he resigned as CEO and fled the US.

Alexander was traced to the Sri Lankan capital of Colombo after he placed a one-minute call using Skype. That was enough to alert authorities to his presence and hunt him down.

This makes sense. Skype clients connect to Skype's central servers (so that users can make calls to non-Skype users, and learn which of their friends are online and offline), and so the servers naturally learn the IP address that the user is connecting from. This is not surprising.

Skype voice call encryption

So while it is clear that Skype can provide government agencies with basic subscriber information and IP login info, what remains unclear is the extent to which governments can intercept the contents of Skype voice calls.

Skype has always been rather evasive when it comes to discussing this issue. Whenever questions come up, the company makes it a point to mention that it provides end-to-end encryption, but then dodges all questions about how it handles encryption keys.

Skype's strategy is genius - most journalists, even those that cover tech, know very little about the more granular aspects of cryptography. When Skype says it provides end to end call encryption, journalists then tell their readers that Skype is wiretapping proof, even though Skype never made that specific claim. Conveniently enough, Skype never bothers to correct the many people who have read a tad bit too much into the company's statements about security.

As Seth Schoen from EFF told Forbes recently, "my view is that Skype has gotten a reputation for impregnable security that it has never deserved." Exactly. Consumers think the service is secure, and Skype has absolutely no incentive to correct this false, yet positive impression.

The mud puddle test

Last year, I directed a bit of a media firestorm at Dropbox, after I filed an FTC complaint alleging that the company had been misleading its customers about the "military grade" security it used to protect the files uploaded by users. Earlier this year, the tech press started to ask similar questions about the cryptography and key management used by Apple's iCloud service.

Soon after, cryptographer Matt Green proposed the 'mud puddle test' for easily determining whether a cloud-based storage solution has unencrypted access to your data.

1. First, drop your device(s) in a mud puddle.
2. Next, slip in said puddle and crack yourself on the head. When you regain consciousness you'll be perfectly fine, but won't for the life of you be able to recall your device passwords or keys.
3. Now try to get your cloud data back.

Did you succeed? If so, you're screwed. Or to be a bit less dramatic, I should say: your cloud provider has access to your 'encrypted' data, as does the government if they want it, as does any rogue employee who knows their way around your provider's internal policy checks.

Both Dropbox and iCloud fail the mud puddle test. If a user's laptop is destroyed and they forget their password, both services permit a user to reset the password and then download all of their data that was stored with the service. Both of these companies have access to your data, and can be forced to hand it over to the government. In contrast, SpiderOak, a competing online backup service (which I use) passes the test. If a SpiderOak user forgets their password, they lose their data.
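To make concrete what passing the mud puddle test implies, here is a minimal Python sketch of the SpiderOak-style design, in which the encryption key exists only as a function of the user's password. Everything here is my own illustration, not SpiderOak's actual code, and the keystream is a deliberately simplified stand-in for a real authenticated cipher such as AES-GCM:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # The key is derived from the password on the client. The provider
    # stores only the salt and the ciphertext -- never the key itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy hash-based keystream, for illustration only. A real client
    # would use an authenticated cipher like AES-GCM instead.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(password: str, plaintext: bytes):
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(password, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return salt, nonce, ct

def decrypt(password: str, salt: bytes, nonce: bytes, ct: bytes) -> bytes:
    key = derive_key(password, salt)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

A service built this way fails password resets by design: without the password there is no key, and without the key the ciphertext on the provider's servers is useless, to the provider and to the government alike. That is exactly the property the mud puddle test is probing for.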

What about Skype? After all, the company isn't an online backup service, but rather a communications service, right?

Well, as an initial matter, if you forget your password, Skype sends you a reset link by email, which lets you into your account, maintaining the same account balance and restoring your full contact list. Likewise, if you install Skype on a new computer, your contact list is downloaded, and you can conduct conversations that, to the other caller, will not in any way reveal that you recently installed Skype on a new device, or reset your password. It just works.

Encrypted communications require encryption keys.

With some protocols, like Off The Record (built into several Instant Messaging clients, but not to be confused with Google's fake, unencrypted Off The Record), random keys are created by the IM client, and users are then expected to exchange and verify them out of band (usually by phone, or in person).

The OTR developers realized that users don't like manually verifying random alpha-numeric crypto fingerprints, and so the developers introduced a slightly easier method of verifying OTR keys in recent versions that uses secret questions or shared secrets selected by users (obviously, this is less secure, but more likely to be actually followed by users).

Another scheme, the ZRTP encrypted VoIP protocol, created by Phil Zimmermann of PGP fame, avoids the static fingerprint method, and instead requires users to verify a random phrase at the beginning of each conversation. ZRTP (which is also used by Whisper Systems' RedPhone and the open source Jitsi chat tool) can rely on these pass-phrase exchanges because users presumably know each others' voices. Text-based IM schemes don't have this voice-recognition property, and so slightly heavier-weight verification schemes are required there.

While these key/identity verification methods are a pain for users, they are important. Encryption is great, but without some method of authentication, it is not very helpful. That is, without authentication, you can be sure you have an encrypted session, but you have no idea who is at the other end (someone pretending to be your friend, a government device engaging in a man in the middle interception attack, etc). The key verification/exchange methods used by OTR and ZRTP provide a strong degree of authentication, so that users can be sure that no one else is snooping on their communications.
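For the curious, the fingerprint half of this can be sketched in a few lines of Python. This illustrates the general technique of comparing key digests out of band, not OTR's actual implementation (function names and the digest format are my own):

```python
import hashlib
import hmac

def fingerprint(public_key: bytes) -> str:
    # Hash the public key down to a short, human-comparable digest,
    # grouped for easier reading aloud (OTR uses 40 hex characters).
    digest = hashlib.sha256(public_key).hexdigest()[:40]
    return " ".join(digest[i:i + 8] for i in range(0, 40, 8)).upper()

def fingerprints_match(mine: str, theirs: str) -> bool:
    # The two users compare fingerprints over a channel the attacker
    # can't rewrite (a phone call, or in person). If the readings
    # differ, someone is sitting in the middle of the session.
    a = mine.replace(" ", "").lower()
    b = theirs.replace(" ", "").lower()
    return hmac.compare_digest(a, b)
```

The security argument is simple: a man in the middle must substitute his own key, his key hashes to a different fingerprint, and the out-of-band comparison exposes the substitution.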

Thanks for the crypto lesson

In contrast to the complex, user-visible fingerprint exchange and verification methods employed by OTR and ZRTP, Skype does nothing at all. Skype handles all the crypto and key exchange behind the scenes. When a Skype user installs the software on a brand new device and initiates a conversation with a friend already in their contact list, that friend is not told that the caller's device/software has a new crypto key and that it should be verified. Instead, the call just connects.

While we don't know the full details of how Skype handles its key exchange, what is clear is that Skype is in a position to impersonate its customers, or, should it be forced, to give a government agency the ability to impersonate its customers. As Skype acts as the gatekeeper of conversations, and the only entity providing any authentication of callers, users have no way of knowing if they're directly communicating with a friend they frequently chat with, or if their connection is being intercepted using a man in the middle attack, made possible due to the disclosure of cryptographic keys by Skype to the government.

I suspect that Skype does not create a new private encryption key for each device running Skype. Instead, my guess is that it creates a key once, when the user sets up their account, and then stores this online, along with the user's contact list. When the user installs Skype on a new device, the key is downloaded, along with all of their other account data. The user's public/private key pair would then be used to authenticate a session key exchange. If this is the design that Skype uses, the company can be compelled to disclose the private crypto keys it holds, allowing the government to impersonate users, and perform active man in the middle interception attacks against their communications.

One alternate, but equally insecure approach would be for the Skype clients to create a new public/private keypair each time a user installs Skype on their computer, and for Skype to digitally sign the user's public key using a certificate pre-installed in all Skype clients. In that scenario, while Skype the company won't have access to your private key, it will be able to sign public keys in your name for other people (including the government) that other Skype clients will accept without complaint. Such impersonation methods can then be used to perform man in the middle attacks.
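To illustrate why that second design is no better, here is a toy Python sketch. Since we don't know Skype's actual protocol, every name here is my own invention, and I've used an HMAC under a provider-held secret as a stand-in for a real asymmetric certificate signature:

```python
import hashlib
import hmac

# Hypothetical provider-held signing secret. In a real design this
# would be the private half of the certificate key baked into clients.
PROVIDER_SIGNING_KEY = b"provider-master-secret"

def certify(username: str, public_key: bytes) -> bytes:
    # Whoever holds the signing key can bind ANY public key to ANY
    # username -- including a government-supplied interception key.
    msg = username.encode() + b"|" + public_key
    return hmac.new(PROVIDER_SIGNING_KEY, msg, hashlib.sha256).digest()

def client_accepts(username: str, public_key: bytes, cert: bytes) -> bool:
    # The client trusts whatever the provider has certified. It never
    # shows the user a fingerprint to verify out of band, so a cert
    # for an impostor's key is indistinguishable from the real thing.
    return hmac.compare_digest(cert, certify(username, public_key))
```

The point of the sketch: a certificate for the user's genuine key and a certificate for an eavesdropper's key are equally valid from the client's perspective, which is all a man in the middle needs.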

Whatever the key exchange method that Skype uses, as long as users rely on Skype for all caller authentication, and as long as the company provides account access after a forgotten password, and seamless communications after the installation of Skype on a new computer, the company will fail the mud puddle test. Under such circumstances, Skype is in a position to give the government sufficient data to perform a man in the middle attack against Skype users.

Government agencies and encryption keys

OK, so Skype has access to users' communications encryption keys (or can enable others to impersonate Skype users). What does this mean for the confidentiality of Skype calls? Skype may in fact be telling the truth when it tells journalists that it does not provide CALEA-style wiretap capabilities to governments. It may not need to. If governments can impersonate Skype users and perform man in the middle attacks on their conversations (with the assistance of broadband ISPs or wireless carriers), then they can decrypt the voice communications without any further assistance from Skype.

Do we know if this is happening? No. But that is largely because Skype really won't comment on the specifics of its interactions with governments, or the assistance it can provide. However, privacy researchers (pdf) have for many years speculated about governments compelling companies to hand over their own encryption keys or provide false certificates (pdf) for use in MiTM attacks. In such cases, when the requests come, there isn't really anything that companies can do to resist.

We need transparency

I suspect that 99% of Skype's customers have never given a moment's thought to the ease or difficulty with which government agencies can listen to their calls. Most likely use the service because it is free/cheap, easy, and enables them to talk to their loved ones with a minimum of hassle. There are, however, journalists, human rights activists and other at-risk groups who use Skype because they think it is more secure. In terms of Skype's hundreds of millions of users, these thousands of privacy-sensitive users are a tiny rounding error, a drop in the bucket.

Skype is not transparent about its surveillance capabilities. It will not tell us how it handles keys, what kind of assistance it provides governments, under what circumstances, or which governments it will and won't assist. Until it is more transparent, Skype should be assumed to be insecure, and not safe for those whose physical safety depends upon confidentiality of their calls.

Skype of course can't talk about the requests for assistance it has received from intelligence agencies, since such requests are almost certainly classified. However, Skype could, if it wished to, tell users about its surveillance capabilities. It doesn't.

I personally don't really care if Skype is resistant to government surveillance or not. There are other schemes, such as ZRTP, which are peer reviewed, open, documented protocols which activists can and should use. What I would like though, is for Skype to be honest. If it is providing encryption keys to governments, it should tell its customers. They deserve the truth.