Public interest groups like EFF, the ACLU and CDT will of course do their best to argue that such backdoors would totally violate the privacy of millions of Americans. Unfortunately, this criticism will largely fall on deaf ears.
Those members of Congress who are strong believers in privacy will not need convincing. However, those members who are willing to grant any and all additional powers requested by those investigating pedophiles and terrorists have already made up their mind – in their eyes, individual privacy is a small price to pay.
As such, I am not going to waste my time explaining why CALEA II is a horrible idea on privacy grounds. Instead, I will now explain why, for non-privacy reasons, it is a bad idea to give the FBI what it wants -- and why doing so threatens national security.
Surveillance backdoors, like all other software, will have security flaws
Abusable flaws are routinely found in commercial software products (which is not too surprising, since software engineers are rarely trained in software security or the appropriate use of cryptography). To make a product 100% secure, engineers have to get everything right – protecting against all known attacks, as well as attack techniques not yet invented.
Now, consider this question – if the government is going to force Google, Facebook, Skype and RIM to create surveillance backdoors in their own products, how are these companies going to protect the backdoors to make sure they are not accessed by evildoers? If Google knew how to develop software that is 100% secure, surely it would already be applying these software engineering techniques to its products.
Of course, this is an impossible task, which is why the software products we all regularly use seem to constantly bug us to install security updates.
As such, we need to accept the fact that any surveillance backdoors that these firms are required to build will have security flaws, and that they will be abused by people who care even less about privacy than the FBI.
Governments around the world acquire and exploit security flaws in commercial software
More often than not, commercial software vendors do not discover their own flaws. They learn about them because “white hat” security researchers discover them and notify the company, or because “black hat” hackers discover the flaws and either exploit them directly or sell them to others, who use them to steal users’ data or to deliver spam from infected computers.
There is now a thriving economy for those wishing to sell “zero day” security flaws and exploits (that is, those not yet known to the vendor or the wider security community). While criminal gangs certainly seem to be interested in buying these exploits, they are not the only customers – governments around the world are in the market for this information.
Charlie Miller, a security researcher famous for discovering exploitable software flaws in Apple’s iPhone, also has a bit of experience selling security flaws. In an academic research paper a few years ago, he described how after discovering a flaw in the Linux operating system, he sold the information to a US government agency (presumably, the NSA, his former employer) for a cool $50,000.
Mr Miller’s experience is not unique – security researchers who have spoken with me confirm that US defense contractors (such as SAIC and Booz Allen Hamilton) purchase exploitable security flaws on behalf of their government clients. One researcher told me that bonuses are built into the contracts – that is, the longer the flaw remains useful and unpatched by the operating system or application provider, the higher the payment.
It would be foolish to assume that the US government is the only one doing this – foreign governments are probably buying any exploits they can get their hands on, as well as spending significant resources to discover these through in-house R&D.
Consider the following example - the recent Stuxnet worm that was used to penetrate Iran’s nuclear facilities used a number of zero day exploits in Microsoft Windows.
While no government has claimed credit for the worm, it is clear that whoever created it has quite a bit of security expertise. It also means, at least as long as the US government maintains that it had no role in Stuxnet, that there are other governments out there with the ability to discover (or purchase) and exploit flaws in US-made software.
By requiring law enforcement backdoors, we open ourselves to surveillance by hackers and foreign intelligence agencies
In 2004, a still unknown entity somehow gained access to the CALEA-compliant intercept system of Vodafone Greece, the country's largest cellular service provider.
Those customers whose calls were intercepted included the prime minister of Greece, the mayor of Athens and at least 100 other high-ranking dignitaries, including an employee of the U.S. embassy.
The story of the "Athens Affair", as it is commonly known in security circles, is perhaps the best example of the privacy risks associated with lawful interception capabilities in communications infrastructure. I'm not going to go into all the details here, but there is an absolutely fantastic, multi-page writeup of the incident in IEEE's Spectrum magazine.
While Greek investigators still have not been able to conclusively determine who penetrated their network, all signs (including the mysterious "suicide" of a Vodafone employee in 2005) indicate that it was the work of a foreign intelligence service.
Similarly, in 2010, soon after Google disclosed that Chinese hackers had broken into the company's network, news reports surfaced indicating that the hackers had gained access to Google's lawful surveillance systems.
[Google's Chief Legal Officer David] Drummond said that the hackers never got into Gmail accounts via the Google hack, but they did manage to get some "account information (such as the date the account was created) and subject line."
That's because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.
"Right before Christmas, it was, 'Holy s***, this malware is accessing the internal intercept [systems],'" he said.
In April 2010, at a public event at Google's Washington DC office, I asked Pablo Chavez, the company's director of public policy, if the reports were true. His response, while evasive, still seemed to suggest that there was more to this story:
I'm not familiar with the details. But I do know that there is this continuing, ongoing investigation of the matter. Hopefully, over the course of time, we can talk a little bit more about precisely what happened. I am familiar with the report, I am just not in a position to answer any details.

It has now been a year since the company first disclosed that the hack occurred, yet it still has not revealed whether its intercept systems were in fact breached.
It is time for Google to fess up - its customers have a right to know, and members of Congress similarly need to be aware of the risks of adding further backdoors to our communications networks.