This week will be my last at Harvard's Berkman Center for Internet & Society. It has been a fantastic place to work, and for the first time in my academic life, I found a supportive environment where it is OK to be interested in both technology and law/policy. I will sorely miss Berkman and the friends I made there (though not the horrible Boston weather).
In two weeks, I will move to Washington DC, where I will begin working half time as a technical consultant to the Division of Privacy and Identity Protection in the Bureau of Consumer Protection at the US Federal Trade Commission. As I understand it, the FTC has a lot of really smart lawyers, but they (currently) lack geek skills.
David Vladeck, the new head of the Bureau of Consumer Protection, recently told the New York Times that "he would hire technologists to help analyze online marketers’ tracking." I guess that means people like me.
Those regular blog readers who are used to my usual acerbic writing style may be disappointed. I expect that my writing on this blog will dry up, save for the occasional post announcing new research papers or updates to TACO. While I haven't been told to do so, I am assuming that it is simply no longer appropriate to use this blog to shame the corporations that continue to harm users' online privacy -- at least as long as I am also on the government's payroll.
Hopefully, there will be other ways that I can help to achieve this positive change from within the DC beltway.
I also recognize that many people might find it surprising that I am going to work for the US government. After all, I have spent much of my public blogging railing against the oppressive surveillance state and the numerous privacy invasions committed by the law enforcement and intelligence agencies.
My position at the FTC will involve no classified work; I do not have, and will not seek, a security clearance; and I intend to focus solely on things that improve consumer privacy, not hurt it. The FTC is not in the business of violating the rights of Americans. There are other agencies that seem to be taking care of that.
I will be at the FTC half time. The other (unpaid) half of my time will be spent wrapping up my dissertation, writing research papers, and continuing to work on TACO.
There are likely to be some users of TACO who are not terribly keen on the idea of running code on their computers designed and maintained by someone who is paid by the US government. TACO is open source, which means anyone can look through the source code online to see if there are any hidden backdoors (there aren't). Furthermore, Mozilla won't roll out an update to the 100,000 TACO users until a Mozilla volunteer has looked through the code and verified that it is safe.
As an additional layer of safety for paranoid TACO users, I have added two new people to the TACO development team: Sid Stamm and Dan Witte, both employees of Mozilla. Sid is also a paranoid security geek, and Dan is in charge of the cookie-related code within the Firefox browser. Dan also rewrote the most recent version of TACO to make it several times faster.
Both have agreed to lend a hand if and when I encounter technical problems with future TACO versions (since my coding skills are not so great). However, they will also be able to act as a layer of protection, should someone try to force me to make changes to the TACO codebase. Defense in depth, I suppose.
Monday, August 17, 2009
My Dissertation Proposal Colloquium
Update: This is my dissertation proposal, which means it has not been written yet. In a year, once the dissertation is done, it will of course be posted online.
Christopher Soghoian
Wednesday, September 2
1:00pm
Informatics East, Room 130
Indiana University Bloomington
PROFITS VS. PRIVACY?
STUDYING THE FAILURE OF THE WEB 2.0 INDUSTRY TO DEPLOY PRIVACY ENHANCING TECHNOLOGIES
It is now more than 30 years since the invention of public key cryptography. Yet in 2009, the vast majority of Internet users still transmit their personal information over networks without any form of encryption. When consumers check their Google Mail, Facebook or MySpace accounts using the increasingly ubiquitous free wireless networks in public places, they face a very real risk of theft and hijacking of their online accounts. While skilled technical experts and corporations have easy access to effective security technologies, most consumers still lack basic privacy online. The question we must ask is why?
Effective cryptography is no longer restricted by US export laws, protected by patents, or so computationally demanding that it is impractical for all but state secrets. Yet the market has still failed to deliver products that provide strong authentication and confidentiality by default. The problem is not restricted to cryptography and data security – the market has failed to deliver in other areas, such as the increasing amounts of personally identifiable information that is quietly collected by online advertisers, search engines and government agencies.
This thesis will argue that the failure of the market to provide services that are safe and secure by default is not a failure of the computer science research community, but the result of complex and skewed incentives that play out in the policy, legal and business spheres. As a result, those wishing to improve the state of basic security and privacy for end-users must look beyond the search for new algorithms and cryptographic techniques. They must instead work to solve the policy problems which have thus far frustrated the deployment of basic privacy enhancing technologies. This thesis will weave together technical, legal and policy perspectives, allowing us to reach a level of depth and analysis that would otherwise be impossible if we approached this problem from a single angle.
This thesis will consist of a taxonomy detailing numerous market failures, followed by several in-depth case studies, and proposed solutions.
I will first survey several ways in which privacy enhancing technologies can fail to reach consumers, such as skewed incentives by dominant service providers, patent thickets, usability problems, and outright government prohibitions on the use and export of particular technologies.
I will then present several case studies: an analysis of key privacy risks associated with log retention by search engines, and the failure of the market to protect consumers from this threat; a look at the industry-wide failure to provide effective cryptographic data confidentiality and authentication to users of “cloud” and other Web 2.0 services; the legal and policy issues surrounding the government’s ability to compel service providers into inserting privacy-invading back doors into their own products; and an analysis of the behavioral advertising industry, and its decade-long failure to provide easy-to-use and effective opt-out mechanisms for end-users.
Finally, I will propose specific legal and policy solutions to the privacy issues highlighted in the case studies, as well as several policy solutions for the general failures highlighted in the initial survey.
Wednesday, August 12, 2009
Google's commitment to transparency
From Google's Privacy Page:
"At Google, we’re committed to transparency and choice."

From a February 2009 post to the Official Google Blog by Jonathan Rosenberg, Senior Vice President of Product Management:
"Everyone should be able to defend arguments with data ... Information transparency helps people decide who is right and who is wrong and to determine who is telling the truth ... This is why President Obama's promise to "do our business in the light of day" is important, because transparency empowers the populace and demands accountability as its immediate offspring."

From the February 2009 contract signed between Google and the US General Services Administration, enabling government agencies to use YouTube videos on their web sites:
Confidentiality
The parties shall not disclose to any third parties Confidential Information disclosed by one party to the other under this Agreement. Each party shall protect Confidential Information by applying the same degree of care used by the parties to protect their own confidential information. If any Confidential Information is required to be produced by law, the noticed party will promptly notify the other party, and to the extent allowed by law, cooperate to obtain an appropriate protective order prior to disclosing any confidential information. Both parties agree that, notwithstanding any other provision of this Agreement, Provider may be bound by the Freedom of Information Act, as well as other federal laws and regulations that may require disclosure of information, including disclosure of the fact that an agreement is in place between the parties. Provider agrees that any disclosure of information pursuant to the Freedom of Information Act or other law, regulation or compulsory process requiring disclosure will not, to the extent lawfully permitted, include any Confidential Information. Any required disclosure by Provider of documents that may contain Google Confidential Information will be preceded by notice to Google in accordance with applicable law, regulation and policy including 5 USC 552 and applicable agency rules.
....
Provider acknowledges that, except as expressly set forth in this Agreement, Google uses persistent cookies in connection with the YouTube Video Player. To the extent any rules or guidelines exist prohibiting the use of persistent cookies in connection with Provider Content applies to Google, Provider expressly waives those rules or guidelines as they may apply to Google.
Saturday, August 01, 2009
My new paper and Defcon talk
In three hours, I will present my latest research paper at the Defcon computer hacker conference:
The paper was published in First Monday on Friday evening. With that, the secrecy surrounding this work vanished, and so Wired News was free to write about it.
This work has been under fairly tight wraps for the past few months, primarily due to my fear that the credit agencies might lawyer up and try to halt the publication if they were given prior warning. As a precautionary measure, I asked the Defcon organizers to list me as an "anonymous speaker" in the program schedule.
Now that the work is public, my hope is that the three credit agencies will carefully read my analysis of these exploits, and deploy the fixes that I suggest.
Manipulation and abuse of the consumer credit reporting agencies
This paper will present a number of loopholes and exploits against the system of consumer credit in the United States that can enable a careful attacker to hugely leverage her (or someone else's) credit report for hundreds of thousands of dollars. While the techniques outlined in this paper have been used for personal (and legal) profit by a small community of credit hackers, these same techniques could equally be used by more nefarious persons - that is, criminals willing to break the law, engage in fraud, and make off with significant sums of money. The purpose of this paper is to shed light on these exploits, to analyze them through the lens of the computer security community, and to propose a number of fixes that will significantly reduce the effectiveness of the exploits, whether employed with good or ill intentions.