Note: This has nothing to do with my research at school. I've submitted my research paper on airport security, and thus am not working on the issue anymore. My attempts to assert my rights and to fly without ID are those of a private citizen, and are conducted on my own time. My actions or writings do not reflect the official positions of my current, past or future employers.
I got back from spring break/Shmoocon today and found a letter from the Metropolitan Washington Airport Authority waiting for me in my mailbox.
Back in February, I was compelled to show ID by a police officer at DCA/Reagan airport. I believed I was in the right, but I didn't want to risk getting arrested, so I complied and wrote a letter afterwards. That letter can be seen here.
I've scanned the letter here, but the short summary of it is as follows:
Arlington County (Virginia) has a law that requires a person to identify himself if a police officer requests it and a reasonable man would believe that public safety requires such identification.... This is not so different from the Nevada law at issue in the Hiibel case.
Interestingly enough, the lawyer who wrote the letter states that "your name alone is insufficient for the quick background check that Sgt. Westbrook had run."
Essentially, they claim that Hiibel, or at least their reading of the law, allows an officer to compel you to produce ID. I had clearly identified myself to the officer, both by introducing myself to her by name and because my boarding pass - which TSA had given her - had my name printed on it in big letters.
The other fishy part of the encounter came after she had run the background check (which came back clear), at which point the officer handed my ID over to TSA. The letter states: "The Airport Authority has no specific procedure for handling a situation in which a passenger refuses to provide identification. Your letter has prompted the airports authority to review its policies and to refine the guidance it gives its police officers".
This is a long way of not telling me much.
I'll probably end up filing a FOIA request with the airport authority, to see if I can find out who said what as they decided how to reply to this letter.
Monday, March 26, 2007
Physical Security at Microsoft
I signed and sent off my summer internship contract today, so I can now happily announce that I'll be working at DoCoMo's Euro Research Lab in Munich, Germany this summer.
Since that is now safely sorted out, I think it's OK to describe the interview process for another potential internship opportunity: Microsoft.
A little while back, I flew out to Seattle for 3 days. MS pulled out all the stops - a last minute $800 airplane ticket, a rental car, a decent hotel, and $75 per day in food. It's probably chump change for a consultant, but for a grad student, it was really nice to be able to take a taxi to the airport instead of slumming it on a bus.
I had three interviews, each an hour or more, with three different people on the team that I'd potentially be working with.
I have to say that two of the interviews were absolutely fantastic. Really enjoyable, thought-provoking questions - I can honestly say that they were probably the best interviews I've ever had.
Some of the questions included:
* You are standing in front of a vending machine. Tell me everything that you'd do to hack/reverse engineer it.
* What do you think of the DMCA? When is it ethical to violate it, and likewise, when is it ethical to use it to go after someone?
* Given a web based security application X, describe all the potential attack vectors, and describe how you would protect them.
* What do you think of Mike Lynn, and what he did with Cisco/ISS? Do you agree with his actions, or not?
This last question was particularly enjoyable. I strongly believe that he did the right thing - but the fun little trivia tidbit that I was able to throw out there is that Mike and I were/are represented by the same fantastic lawyer: Jennifer Granick.
The most ironic part of the interview process was the last, and least fun, of the three.
I had to go through a less than enjoyable code review (find the bug in these 3 pages of C++). The person interviewing me spent quite some time telling me how a good chunk of his workload was due to the general laziness and poor coding skills of a large number of programmers at Microsoft. Essentially, he said, the programmers are too lazy to write their code properly, or to do the little bit of extra work to actually check the values and inputs that their programs take in.
Bear in mind that after each of the previous interviews, the person conducting it would escort me back to the sealed interview area, where I would wait for the next person to appear and escort me past the locked doors to my next session.
However, after my last interview, Mr. "programmers are lazy" took me to the main hallway near his office, pointed me to the reception desk down the stairs, and asked me to see myself out...
The very same engineer who had complained that his colleagues created most of the company's security woes through laziness then let a complete stranger - no, worse: someone he had just quizzed on their ability to think deviously - walk around a restricted-access office building...
And so I took the opportunity to walk down a few hallways, smile at the random engineers I passed, and help myself to some of Microsoft's pretty rough coffee in one of their break rooms. I didn't linger more than 4-5 minutes...
Oh yes - Microsoft doesn't have open wireless access on their campus. WTF? Google provides it to the entire city of Mountain View, and MSFT can't even have it in their reception areas for guests?
Sunday, March 25, 2007
Querying Bank of America's SiteKey service
I've been meaning to pick up ruby again. It's such a cool language.
This evening, after returning from today's Shmoocon talks, I was inspired to write a bit of code. My Ruby skills are still very rough, so I came up with a little project to help me learn a few concepts.
The tool that I whipped up: a Ruby script that connects to Bank of America, and pulls down your SiteKey image and title.
Why would something like this be useful? Were such code to be taken, improved, and hacked up a bit, one could imagine a man-in-the-middle proxying/phishing attack: a site that presents itself to the user as her legitimate financial institution, connects to Bank of America behind the scenes, and passes information back and forth between the user and the legitimate financial website she thinks she is visiting.
The user would see her Sitekey, and thus would feel safe knowing that she was indeed connecting to BoA - while in fact, she was sending her information to someone in Nigeria.
SiteKey, of course, is the technology sold by Passmark/RSA to a number of financial institutions. It's the cheaper way to do two-factor authentication. SiteKey has been mentioned in the press recently, after researchers found that users rarely noticed when the image was absent from their banking sessions.
I haven't developed anything like that though. This is just a simple Ruby script.
Sample output:
./boa-mechanize
Please enter your Bank Of America username: joeuser
Please enter your state 2 letter code (i.e. CA): NY
Please answer the following SiteKey Question.
What is the name of your best friend: jack valenti
Got it...
Sitekey phrase: She Sells Sea Shells On the Sea Shore
Sitekey image saved to: /tmp/sitekey.28317.0
---
And just to cover my freedom of speech bases, I'm including the text of the script here, instead of linking to a downloadable file.
----
#!/opt/local/bin/ruby
# Christopher Soghoian
# A fun little ruby script which'll login to your BoA account,
# and download your sitekey image and title, after prompting you
# for one of your sitekey questions
# Apologies for my horrible regex and ruby skills.
# I'm sure this can be done far cleaner.
# Copyleft GPL.
require 'rubygems'
require 'mechanize'
require 'tempfile'
agent = WWW::Mechanize.new
agent.user_agent_alias = 'Windows IE 6'
print "Please enter your Bank Of America username: "
username = gets.chomp
print "Please enter your state 2 letter code (i.e. CA): "
state = gets.chomp
# Make initial connection to BoA webserver. Send over our state of residence
page = agent.get("https://sitekey.bankofamerica.com/sas/signon.do?state=" + state)
boa_form = page.form('signonForm')
boa_form.onlineID = username
page = agent.submit(boa_form, boa_form.buttons.first)
# We've got a cookie that refers to our username and state. Keep going
page = agent.get('https://sitekey.bankofamerica.com/sas/signon.do?&detect=5')
# Extract the sitekey question (since our machine is unknown to BoA)
sitekey_question_html = page.search("//label[@for='sitekeyChallengeAnswer']").inner_html
sitekey_question = sitekey_question_html.scan(/\((.*)\)/)
# Prompt the user for it.
puts "Please answer the following SiteKey Question."
print sitekey_question, ": "
sitekey_answer = gets.chomp
# Submit the answer back to BoA
unknown_computer_form = page.form('challengeQandAForm')
unknown_computer_form.sitekeyChallengeAnswer = sitekey_answer
page = agent.submit(unknown_computer_form, unknown_computer_form.buttons.first)
# Neato. It worked. Grab the sitekey image, and the sitekey image title
sitekey_image_url = page.search("//html").inner_html.scan(/img src=\"(getMySiteKey.*)\" border/)
full_sitekey_image_url = "https://sitekey.bankofamerica.com/sas/#{sitekey_image_url}"
sitekey_image = agent.get(full_sitekey_image_url)
sitekey_image_title = page.search("//html").inner_html.scan(/Your SiteKey Image Title:.*nbsp;
(.*?)<\/td>/m)
tf = Tempfile.new("sitekey")
sitekey_image.save_as(tf.path)
print "\nGot it..\n"
print "Sitekey phrase: ", sitekey_image_title, "\n"
print "Sitekey image saved to: ", tf.path, "\n"
sleep 10
Tuesday, March 13, 2007
The Economics of Phishing Emails, and Corporate Logos
Disclaimer: This is all idle speculation. I have no inside info to support my claims.
This evening, I spent some time browsing through PhishTank - a fantastic live reference for phishing websites.
A shockingly large number of the websites include images served from Paypal's, Ebay's and other .coms' own web servers. That is, instead of making a local copy of the image and hosting it on the server that runs the phishing site, they include the image directly off Ebay's web server. Not only does Ebay get phished, but it has to pay the bandwidth costs for the graphics displayed to the victim.
It's almost like the tale of a twisted dictator shooting someone, and then sending the victim's family a bill for the bullet.
This got me thinking.
Paypal, Bank of America, and others know exactly where their graphics should be shown on the web. A general and reasonable rule would be: any time a page at Paypal.com loads our logo, let it happen; if someone at evilphisher.com tries to load our image, serve up a big warning image instead. This could easily be done by checking the referrer passed by the browser.
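To make the idea concrete, here's a rough sketch of what such a referrer check might look like - a toy WEBrick handler with made-up file names and a made-up paypal.com rule. This is purely my own illustration, not anything Paypal actually runs:
----
#!/opt/local/bin/ruby
# Toy sketch of referrer-based image serving. The file names and the
# paypal.com rule below are illustrative assumptions, not a real deployment.
require 'webrick'

ALLOWED_REFERRER = /\Ahttps?:\/\/([\w-]+\.)*paypal\.com(\/|\z)/i

server = WEBrick::HTTPServer.new(:Port => 8080)

server.mount_proc('/images/logo.gif') do |req, res|
  referrer = req['Referer'].to_s
  res['Content-Type'] = 'image/gif'
  if referrer.empty? || referrer =~ ALLOWED_REFERRER
    # No referrer, or one of our own pages: serve the real logo
    res.body = File.open('logo.gif', 'rb') { |f| f.read }
  else
    # Hot-linked from somewhere else: serve the warning image instead
    res.body = File.open('warning.gif', 'rb') { |f| f.read }
  end
end

trap('INT') { server.shutdown }
server.start
----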
This would be trivial to implement. The question then, is why isn't Paypal doing this already?
As crazy as it may be, the answer is probably something like this:
1. Bandwidth is cheap, at least in the huge quantities that Paypal is purchasing.
2. Phishers are often hosting their sites on zombie/hacked machines, so they don't pay for the bandwidth themselves.
3. If Paypal starts checking the referrer string sent by a browser, phishing website designers will simply save a local copy of the image, and host it on their own websites.
Simply put, Paypal doesn't really gain much by disallowing the phishers from using Paypal.com to host their images, and in fact, loses quite a bit.
As things stand right now, Paypal can analyze its logs and see exactly which websites are causing people to load its images. Paypal probably has a team of people, or several scripts, hitting each one of these websites to see if they are indeed phishing sites. If Paypal cuts off the flow of images and forces phishers to host their own image files, it will immediately lose this valuable source of intelligence.
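For what it's worth, here's a rough sketch of the kind of log mining I have in mind - the log path and the Apache combined-log format are assumptions on my part; the output is just a count of outside hosts hot-linking the images:
----
#!/opt/local/bin/ruby
# Rough sketch: count the outside referrer hosts that are hot-linking our
# images. The log location and combined-log format are assumptions.
require 'uri'

counts = Hash.new(0)

File.foreach('/var/log/apache2/access.log') do |line|
  # Combined log format ends with: "referrer" "user-agent"
  next unless line =~ /"([^"]*)" "[^"]*"\s*$/
  referrer = $1
  next if referrer == '-' || referrer.empty?
  host = URI.parse(referrer).host rescue nil
  next if host.nil? || host =~ /(\A|\.)paypal\.com\z/i
  counts[host] += 1
end

# Busiest hot-linkers first - each one is a candidate phishing page
counts.sort_by { |_, n| -n }.each { |host, n| puts "#{n}\t#{host}" }
----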
In this case - it seems that the enemy you know is far better than the enemy you've forced underground.
Wednesday, March 07, 2007
Is the Terrorist Surveillance Program exempt from FISA?
Bit by bit, I'm slowly learning to appreciate the law, and I'm learning how to read it. At times, I actually browse SSRN for pleasure...
For those of you who don't know what the Terrorist Surveillance Program is, go read about it elsewhere. It's old news now.
I read a few parts of the FISA statute this evening, and a couple of things jumped out at me. Let's look at 50 U.S.C. § 1801(f).
“Electronic surveillance” means—
(1) the acquisition by an electronic, mechanical, or other surveillance device of the contents of any wire or radio communication sent by or intended to be received by a particular, known United States person who is in the United States, if the contents are acquired by intentionally targeting that United States person, under circumstances in which a person has a reasonable expectation of privacy and a warrant would be required for law enforcement purposes;
(2) the acquisition by an electronic, mechanical, or other surveillance device of the contents of any wire communication to or from a person in the United States, without the consent of any party thereto, if such acquisition occurs in the United States, but does not include the acquisition of those communications of computer trespassers that would be permissible under section 2511 (2)(i) of title 18;
(3) the intentional acquisition by an electronic, mechanical, or other surveillance device of the contents of any radio communication, under circumstances in which a person has a reasonable expectation of privacy and a warrant would be required for law enforcement purposes, and if both the sender and all intended recipients are located within the United States;
From my reading of these three parts of section (f), it would seem that:
If the US government/NSA performs a wiretap in international waters (i.e. splices the undersea fiber-optic cable or copies the satellite signal in space), does so on a wholesale basis (i.e. captures every single communication on that wire, rather than intentionally targeting a particular citizen), and does it only for communications where one party is outside the USA, then that surveillance would be exempt from FISA.
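Just to make my reading concrete, here's a toy Ruby model of the three definitions quoted above - my own loose paraphrase of the statutory language, certainly not legal advice:
----
# A toy paraphrase of 50 U.S.C. 1801(f)(1)-(3). The hash keys and the
# scenario below are my own simplifications of the statute.
def electronic_surveillance?(tap)
  f1 = tap[:targets_known_us_person] && tap[:target_in_us]       # 1801(f)(1)
  f2 = tap[:wire] && tap[:acquired_in_us] && tap[:party_in_us]   # 1801(f)(2)
  f3 = tap[:radio] && tap[:all_parties_in_us]                    # 1801(f)(3)
  f1 || f2 || f3
end

# The scenario from this post: a wholesale splice on an undersea cable in
# international waters, with one end of every communication outside the US.
tsp_style_tap = {
  :wire => true,
  :radio => false,
  :targets_known_us_person => false,  # wholesale capture, no particular target
  :target_in_us => false,
  :acquired_in_us => false,           # the splice happens outside the US
  :party_in_us => true,               # one party may well be in the US
  :all_parties_in_us => false
}

puts electronic_surveillance?(tsp_style_tap)  # => false, i.e. outside the definition
----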
I'm still rather new to the law here, but this seems like a fairly obvious loophole.
Am I missing something here?
Sunday, March 04, 2007
How The RIAA and MPAA Unknowingly Assist Child Pornographers
Or: How the Media Companies did more to spread cryptography, anonymity preserving technology and general knowledge about good online privacy hygiene than an army of activist cypherpunks ever could have
[Ed: I have to admit, I'm pretty proud of the fact that I've managed to tar two of the great Satans in the world, the RIAA and MPAA, with the kiddie porn brush. It's about time, since they've been doing the same to anonymity researchers for years]
A few years back, after waiting all night outside the US Supreme Court, I saw a semi-familiar face walking towards the front of the court-house. Without thinking, I ran up to him, and asked if I could have my photo taken with him. True, he is an extremely evil and corrupt man. Not quite as bad as Pol Pot, or even Cheney, but still evil enough. His name is Jack Valenti, and this blog-post describes how, strangely enough, he and his cohorts make the lives of child pornographers far better, and far safer.
-------
Music and software piracy existed long before Napster. It took place on Internet newsgroups (Usenet), bulletin board systems (BBS), FTP, and good old fashioned person-to-person exchange via floppy disks. The real threat that Napster posed was that it was really easy to use. So simple that a non-technical user could quickly figure it out. What Napster did, essentially, was make an entire generation of non-technical users into 'pirates'.
We all know the story: Napster was shut down by the record labels, and shortly afterwards, improved systems like Gnutella and Kazaa took its place. While Napster had been a centralized system (with verbose logging, should law enforcement ever need it), the new systems were extremely difficult to take down, and presented a significant problem for anyone who wished to do forensic analysis after the fact - since there were no centralized records of who downloaded and uploaded what files.
Whereas before, the FBI could have sent Napster a subpoena stating "Tell us every user sharing these 5000 kiddie porn files", the new networks were purpose-built to lack that ability. Not because the designers wanted to help those sharing kiddie porn, but because the record labels were using the very same discovery techniques that the FBI used to combat child porn.
Fast forward a few years.
The record companies have their agents (like BayTSP) regularly trawling P2P networks looking for copyrighted content. The FBI and other parts of the government are either already using similar technologies, or surely have to be developing them....
In response, users have deployed technologies like PeerGuardian, which block network addresses known to be used by the record companies and their agents. And since the DOJ has decided to begin, albeit slowly, prosecuting major P2P offenders, its addresses will soon find themselves added to these blacklists too - if they haven't been already.
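For the curious, here's a toy illustration of how a PeerGuardian-style blocklist check works - the IP ranges below are made-up documentation addresses, not anyone's real netblocks:
----
#!/opt/local/bin/ruby
# Toy PeerGuardian-style check: refuse to talk to peers whose addresses
# fall inside a blocklist. The ranges here are made-up examples.
require 'ipaddr'

BLOCKED_RANGES = [
  IPAddr.new('203.0.113.0/24'),   # pretend "content enforcement" netblock
  IPAddr.new('198.51.100.0/25')   # another made-up monitored range
]

def blocked?(peer_ip)
  ip = IPAddr.new(peer_ip)
  BLOCKED_RANGES.any? { |range| range.include?(ip) }
end

puts blocked?('203.0.113.42')   # => true: drop the connection
puts blocked?('192.0.2.7')      # => false: let the peer connect
----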
Let us now consider the case of encryption:
Shortly after the crypto-wars, the only people using encryption on their machines were paranoid crypto-geeks, or cypherpunks, as they called themselves. Systems were far too difficult to use to be deployed by the common man.
Fast forward a few years. The makers of Kazaa learned many lessons from their interactions with the record labels. When they developed their next program, Skype, they made sure to design cryptographic protocols into the very core of the program. Every single Skype call is encrypted - and if the call never leaves the Skype network, then no one but the two callers can listen in. To make things even more difficult, just as with Kazaa, Skype was developed in Eastern Europe and is owned by a company in yet another country. This multi-jurisdictional separation makes subpoenas quite tricky.
Skype is now the most widely deployed cryptographic application, ever. It's easy to use, it is used by millions of Internet users around the world, and the government has no real way to tap voice data as it crosses the network - CALEA, or not.
The point that I am trying to make is the following:
By going after people for sharing movies and music online, the major media companies have essentially created a huge market for anonymous (or close to anonymous) technologies. Technologies such as Tor, Freenet, Gnutella, and Skype arguably wouldn't exist as they do today if the media companies hadn't gone after 'pirates' with such vigor. And with the influx of millions of new users, these programs have become better - either through more financial support/advertising, or through new developers/open source coders who are finding bugs and adding features.
P2P enforcement forced anonymity and evasion technologies to evolve far faster than they ever would have if the FBI had been the only 'threat' to privacy online.
However, these technologies do not just make the task of detecting copyrighted works more difficult - they make the FBI's job of finding child pornographers more difficult. Far more people use encryption now. Far more people erase data, and turn off logging.
The mass publicity of the NSA lawsuits has only cemented the idea in the public consciousness that email can be read, and so, I would argue, less and less sensitive information is sent by email. More people - not all, but more - know that their email is not secure.
And now, with all the press relating to data loss and breaches at companies, we are finding that many Fortune 500 companies are demanding full disk encryption from their operating system suppliers. This will roll downhill. Someone who gets comfortable with the idea of an encrypted filesystem at work will be far more likely to turn that option on when they install Windows Vista at home. This will, of course, hugely frustrate the FBI. That isn't to say that they can't break it, but it makes their lives far, far more difficult.
What is the moral of this story? The record companies have made an entire generation of college students into criminals, and those college kids have resorted to technical means of avoiding detection - which create a gigantic crowd of encrypted and obfuscated data in which 'real' criminals can hide. These evasion methods are the very same techniques that frustrate legitimate and useful law enforcement, which suffers as an unintended side effect. The ability to catch genuine terrorists and child pornographers is significantly limited by the short-sighted actions of the major media companies.
And the thing is - it's too late to fix it. The genie is out of the bottle.
Just as the drug war has made an entire generation fear and mistrust the police, the P2P wars have given the Internet generation a reason to protect their privacy, or at least frustrate forensic analysis of their online activity.
So the next time you see an article describing a new tactic that the record labels are taking to stamp out piracy - Stop for a moment, and please, think of the children.
Note: I started coming up with the idea for this blog post a week ago over lunch with a colleague. However, I decided to hurry up and finish it after reading a recent law review article by Eric Stieglitz ("Anonymity on the Internet: How Does It Work, Who Needs It, and What Are Its Policy Implications?"). You can find it on Westlaw or Lexis if you're lucky enough to have an account.