
Monday, July 09, 2007

Astroglide Data Loss Could Result In $18 Million Fine


[scroll way down for a spreadsheet containing numbers of Astroglide requests per state]



Executive Summary

In April 2007, Biofilm Inc. accidentally published on the Internet the names and addresses of over 200,000 customers who had requested a free sample of their popular sex lubricant Astroglide. This blog post highlights the fact that the leaked data could serve as highly effective bait for targeted phishing attacks and other kinds of scams. A full breakdown of the number of requests from each state is released. These numbers are then used to estimate potential fines against Biofilm should state Attorneys General wish to get involved.



Introduction

Privacy is a strange beast. It is one of the "rights" least well defined and protected by the law. The U.S. Constitution contains no express right to privacy. Likewise, data protection is something that has yet to be properly addressed by US law.

Consumers regularly surrender their personal information to random strangers in return for t-shirts and teddy bears as credit card sign-up bonuses. Similarly, many consumers permit the tracking of individual items in their supermarket purchases by companies in return for modest discounts or "points" through loyalty schemes.

Data protection and privacy become far more important when they relate to personal and sexual information. Most consumers would probably be more concerned about someone else gaining access to the order info for their Good Vibrations (an online seller of marital aids) account than for their past book purchases from Amazon.

Likewise, when Congress rushed to pass extremely pro-privacy restrictions on the release of video-rental records in 1988, it was not because they were concerned about tabloid journalists learning how many times a particular Senator had rented Citizen Kane.



A Slippery Problem

The main subject of this blog post relates to a data loss/accidental release by a California company named Biofilm, Inc. They are the makers of Astroglide, a popular sexual lubricant.

For most of April 2007, a database of names and addresses of individuals who had requested free samples of Astroglide was inadvertently left unprotected on the company's website. In addition to random visitors being able to access the database, Google's search engine spider software made copies of the database - cached copies of which continued to be available online from Google's site for more than a week after Astroglide removed the data from their own website.

Within hours of Wired News picking up the Astroglide story, fellow Indiana University PhD student Sid Stamm and I began frantically downloading all the data from Google's cache. The leaked Astroglide database contains the names and addresses of individuals who requested a sample between 2003 and 2007. With a bit of effort to clean out duplicate entries, we soon had a database of just over two-hundred thousand unique names and addresses.
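The de-duplication step itself was mostly mechanical. As a rough illustration of the kind of cleanup involved (the field names and sample records below are invented for this sketch, not actual leaked data), it might look something like this in Python:

```python
# Hypothetical sketch of de-duplicating scraped name/address records.
# The records below are made-up examples, not data from the Astroglide leak.
records = [
    {"name": "Jane Doe", "address": "123 Main St, Bloomington, IN"},
    {"name": "Jane  Doe", "address": "123 Main St, Bloomington, IN"},  # trivial variant
    {"name": "John Roe", "address": "45 Oak Ave, Albany, NY"},
]

def normalize(record):
    # Collapse whitespace and case so trivial variants compare equal.
    return tuple(" ".join(record[k].split()).lower() for k in ("name", "address"))

seen = set()
unique = []
for r in records:
    key = normalize(r)
    if key not in seen:
        seen.add(key)
        unique.append(r)

print(len(unique))  # 2
```

Real-world cleanup is messier (nicknames, abbreviations, typos), but even this naive normalization removes the bulk of exact and near-exact duplicates.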

I've been struggling to come up with an interesting, useful and ethical way to use this data. While the obvious Yahoo Maps mashup is amusing (and scarily mind blowing), it's just not fair to the people who gave Astroglide their data in good faith. They do not deserve to have their privacy violated and abused more than they have already suffered. The screenshot posted at the top of this blog post is real - but out of respect to the people in the database, I will not be putting the mashup online.




More Than Just Embarrassment

There is almost no chance that the Astroglide data could be used to steal someone's identity. Unfortunately, the data loss laws passed by the various states really only have identity theft in mind, and so they did not kick in for this incident. This is primarily because the exposed data does not match the strict definition of PII (personally identifiable information): no Social Security, credit card or other account numbers were revealed.

Adam Shostack is quite vocal about his belief that data breaches/data loss incidents are not just about identity theft. He writes that "[Data Breaches] are about honesty about a commitment that an organization has made while collecting data, and a failure to meet that commitment."

My immediate reaction and concern when reading about the Astroglide incident was, "how embarrassing." Yes, it would be quite unpleasant for the people in the database if their colleagues, friends and attendees of their church learned that they had requested a sexual lubricant. Having this information come up in a Google search for the person's name could even pose a problem during some job interviews.

The Astroglide incident is bigger than just the issue of embarrassment. The smallest bit of information about an individual can serve as a vehicle for targeted phishing and other kinds of fraud. I discussed this with Prof. Markus Jakobsson and he came up with two fantastic examples of scams that could use this data.
  • A version of the Spanish lottery scam with a spear phishing touch: A would-be phisher could send a postcard to each name on the list, advising them that since they are fans of the product, they were enrolled in an online lottery - and that they have won. All that they need to do is to go online to claim their winnings.

  • A class action version of the Nigerian 419 scam: A swindler could send a postcard to victims, notifying them of the data loss, and stating that they have been invited to join a class action lawsuit against Biofilm/Astroglide. The victim would be told that they will receive several hundred dollars as part of the settlement, and all that they need to do to claim their share is to fill out the postcard with their banking details and send it off.


These and other similar attacks would be much easier (and cheaper for the attacker) if they could be conducted by email. Thankfully, turning each of the 200,000 names and addresses into a valid email address is not an easy task. This at least raises the cost of any attempted scam to the cost of a stamp for each potential victim.

A few months ago, I highlighted an incident at Indiana University where phishers were able to obtain a list of valid email addresses for IU students. They were then able to use this list, which consisted solely of users' names and email addresses, to launch a highly successful spear phishing attack against the IU Credit Union.

Likewise, my colleagues in the Stop Phishing Research Group at Indiana University have conducted several targeted phishing studies that have clearly demonstrated the impact that even the smallest bit of accurate information about a user can have on the effectiveness of a phishing attack. Simply put, anything that is known about people can be used to win their trust. Such insights are used to improve consumer education in the recent effort www.SecurityCartoon.com.

I suspect that most phishing attacks against credit unions and small regional banks already involve some form of data breach/loss. The economics of phishing simply do not add up otherwise - a phisher would be far better off claiming to be Citibank/Chase if they are sending out an email to 3 million randomly collected email addresses. I predict that we'll see a lot more of these kinds of phishing attacks. However, because notification is not required in data loss incidents where Social Security or credit card numbers are not lost, the public will not be told how the phishers got their target lists.

Phishers are constantly evolving their techniques. As in-browser anti-phishing technology becomes the norm, and spam filters mature, we will likely see a shift towards more targeted phishing. These attacks involve far fewer email messages, and are thus likely to stay below the radar of the anti-phishing blacklist teams at Google, Microsoft and PhishTank. While data loss/breach incidents involving Social Security numbers of course pose an identity theft risk, the risk of this information being used for phishing and other scam attacks is currently being completely overlooked.

The solution to this, of course, is to amend the data breach/loss notification laws to apply when any customer information is lost or released to unauthorized parties. Companies will fight this, citing the high cost of notification and a desire to avoid needlessly worrying their customers. The laws will stay the same, and phishers will laugh all the way to the bank.




Could Biofilm/Astroglide be fined?

Contrast the Astroglide data loss to a completely separate yet similar incident:

Between August and November of 2002, the order information (name, address, items purchased) for over 560 customers was available to any curious visitor on the website of American underwear retailer Victoria's Secret. This was due to a web security snafu, which was fixed soon after it was reported. The following year, New York Attorney General Eliot Spitzer negotiated a settlement with Victoria's Secret, in which the company agreed to pay the state of New York $50,000 as well as notify each customer whose data was inadvertently made available online. The New York Times had a full write-up of the story online.

I think it's really useful to compare the two different cases. In both, data was accidentally put on the Internet. Neither dataset contained credit card numbers, social security numbers, or what we would usually think of as PII. As such, the various state data breach/loss laws didn't kick in.

However, while the data lost (name and address) wasn't particularly sensitive - after all, in many cases, it can be looked up in the phone book - it is the combination of that data with a highly sensitive and sexual product which would give the average consumer a legitimate cause for concern.

Victoria's Secret agreed to notify every customer whose data was accidentally put online. Astroglide has not told a single customer. Victoria's Secret agreed to pay $50,000 to the state of NY for about 560 customers, although only 26 of them were actually NY residents. Astroglide has not paid a single penny to any state as a result of this incident.

I think Biofilm should be held accountable for the accidental publication of the names and addresses of 200,000 customers. To remedy this, I have spent quite a bit of time over the past couple weeks filing complaints with numerous state Attorneys General, including the notoriously pro-privacy AGs in California and New York. I have filed a complaint with the Federal Trade Commission. A few hundred overseas consumers tried to get Biofilm to send them a sample by airmail. Thus, I'm working with The Canadian Internet Policy and Public Interest Clinic to file a complaint with the appropriate Canadian authorities. I've also already filed complaints with the data protection agencies in the UK, Ireland, Belgium, The Netherlands and Finland.

A wise lawyer has informed me that the ultimate way to kickstart things is to find a California resident victim, and have that person file an action under California Business & Professions Code § 17200. My name is not in the database and I do not live in California. Furthermore, I do not feel that it would be ethical to go through the list of 17 thousand California residents, looking them up in Google, hopefully finding an email address, and then contacting those individuals to ask them to file a complaint. Thus, as much as I'd like to get a Business & Professions Code complaint filed against Biofilm, my hands are currently tied.




There are two ways to judge the cost of data loss per customer for Victoria's Secret. $50,000 divided by 26 New York residents equals approximately $1,923 per customer. However, given that no other state fined Victoria's Secret, it is probably safer to divide the $50,000 fine by all 560 customers, which gives us a fine of approximately $90 per customer.

Using that $90 per customer figure, I decided to figure out how large a fine Astroglide could potentially face, assuming of course, that one or more state Attorneys General began investigating.

I pulled per-state stats from the database - which are broad enough that I feel confident that I can release them without putting any individual user's privacy at risk. Using state population estimates from the US Census Bureau, I was also able to calculate a ratio for the number of people in each state per Astroglide request. As much as I was hoping that KY (Kentucky) would win - I could already visualize the Fark headline - North Dakota won, with one Astroglide sample request per 908 state residents. New Mexico came in "last" with one request per 2656 state residents. Analysis of what these numbers actually mean is an exercise best left to the reader.
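For readers who want to reproduce the arithmetic, here is a small Python sketch of both calculations. The state population and request counts at the bottom are placeholders, not the real figures from the spreadsheet:

```python
# Victoria's Secret paid $50,000 for roughly 560 exposed customers,
# giving a per-customer figure of just under $90.
PER_CUSTOMER_FINE = 50_000 / 560

# Scale that figure to the ~200,000 unique records in the Astroglide database.
total_requests = 200_000
estimated_fine = total_requests * PER_CUSTOMER_FINE
print(round(estimated_fine))  # 17857143 - i.e. roughly $18 million

# Residents-per-request ratio, as computed for each state.
# Placeholder numbers below; the real counts are in the spreadsheet.
state_population = 635_000
state_requests = 700
print(round(state_population / state_requests))  # residents per sample request
```

The headline $18 million figure is simply this per-customer rate applied uniformly across every exposed record.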

While it may not be realistic to expect Biofilm to pay $18 million in fines, it's quite surprising that they've been able to get away without even having to notify all of their customers. My hope is that by putting this limited bit of information online, I can start a debate on this issue.





Conclusion


This blog post will hopefully raise the profile of the Astroglide data loss incident, which unfortunately disappeared from the headlines after a day or two without Biofilm being held accountable for the massive breach of customer trust. It should also highlight the fact that once data has been cached by Google, putting the proverbial genie back in the bottle is next to impossible. If two PhD students can pull a copy of the database from Google's servers, so can malicious parties, including would-be phishers. It is perfectly reasonable to expect that multiple copies of the database were downloaded before Google heeded Biofilm's request a few days later and removed the data from its cache servers. Likewise, it is quite reasonable to expect that at least one of the downloaders has criminal intentions - or at least a willingness to sell the data on to others.

Consumers in the database face more than just embarrassment. To minimize the risk associated with phishing and other scam attacks, Biofilm should be forced to notify each of the 200,000+ exposed individuals. The take-home lesson from all of this is that these kinds of data loss incidents will continue to occur in the future and it's highly unlikely that consumers will be told. Existing data breach/data loss laws have been narrowly focused to target the threat of identity theft, a noble goal, but by no means the only threat that consumers face. These laws should be amended to correct this problem. Consumers have a right to be told whenever their information is inadvertently released to unauthorized parties.

Monday, April 23, 2007

Update to IU Phishing story

I had a really interesting lunch meeting with a senior member of the IU administration on Friday. They wanted to chat about the IU phishing incident, which I blogged about last week.

While we didn't see completely eye to eye on all issues, there was a fairly significant amount of common ground. I felt that I had been pretty clear in my initial blog post, but there seemed to be some confusion, especially once it had been parsed by those in the media.

Thus, I'd like to clarify a few things:

1. The university 'steel' cluster was not 'hacked' through the use of a security hole. The phishers, whoever they were, logged in using compromised usernames/passwords. That is, they had somehow gotten access to valid user accounts.

This could have been done through another phishing attack, by hacking into student personal computers, or by someone leaving their gmail account open at a public computer in the library. There are plenty of other ways this could happen.

There is very little that IT staff can do to protect computers from people who have their usernames/passwords stolen. Much can be done to fix system security flaws, but if a user chooses to keep their password on a post-it note attached to their monitor, it is very difficult to keep the bad guys out of their account.

2. There is no forensic evidence that indicates that the phishers obtained their target list (those people they'd send the phishing emails to) by looking at the /etc/passwd file on 'steel'.

Yes, the /etc/passwd file exists, contains usernames (which directly relate to an IU email account for many of the users), and is readable by all users logged in to the machine. If the phishers had wished to, they could have downloaded this file, and gotten a large number of email addresses by doing so. However, there is no forensic information which proves that they did do this.

We know that they had a large list of targets - but how they got it, to this day, remains a mystery. We know several ways that they might have gotten it, but we do not have evidence to firmly support any of those theories.

3. Since every user who has an account on steel has access to the /etc/passwd file, it would have been quite possible for a student to login to the machine, download the file, and give it to the phishers, or share it online. The phishers wouldn't necessarily need to login to steel to get this, if they could somehow convince a student to get it for them.

4. No one knows where the phishers came from. All that is known is that they bounced their connection through a machine in China. The evidence trail stops there. They very well could have been in Bloomington, and we'd never know.

-----

I guess the main point that we both agreed on was that this problem did not stem from IU technical staff not doing a good job. As soon as they noticed that the compromised accounts were doing bad things, the accounts were locked and the remote machine in China was blocked from connecting to the university network.

Phishing, as always, is a people problem. It is the age-old art of deception moved to a new, and scarily flexible, medium. The real take-home from this, I suppose, is that with even a minimal bit of information (someone's name and email address), a phisher can do some pretty significant damage. This makes the job of protecting our data even more critical, and it means that we must train the student body, our friends, and family to have their wits about them.

Monday, April 16, 2007

FOIA Fun. Or. How Phishers hacked into IU



This post should probably be called Indiana Public Records Act Fun - but that doesn't quite roll off the tongue.

I signed up for an Indiana University email account in March or so of 2006. Between signing up and the start of school in September, I'd never used the email address for anything, and a Google query at the time for the address came back negative.

In mid June of 2006, I received a phishing email claiming to be from the IU credit union. The Indiana Daily Student later covered this incident. The article merely mentioned that phishing emails targeting the credit union had been sent out, and that a bunch of students had typed in their info. The article didn't explain how the phishers had learned the email addresses of the students, nor who had launched the attack.

My IU email address is 'csoghoia'. Given that my email address was new, and wasn't published anywhere on the Internet, there was no way for a phisher to learn my address short of an exhaustive address space search (i.e. trying every possible combination of letters) - unless, of course, the university was hacked and information was stolen, or the university accidentally released my info. Either of these potential scenarios was alarming, and so I began to look into the matter.

An email requesting information from IU's Incident Response Manager about how phishers had learned my email address resulted in this: "Unfortunately, I cannot comment on this activity as it relates to an active investigation. Be assured we are working aggressively to put a stop to it."

Alarm bells went off... Something phishy (ha ha) was going on.

Thus, on June 23 2006, I filed an Indiana Public Records Act Request with the University. I asked for: any and all information regarding theft or accidental loss of student data including but not limited to names, social security numbers, and email addresses. I am additionally requesting any and all information regarding any ongoing or completed investigations including those by the Office of the VP for Information Technology, of "phishing" emails sent to IU users pretending to be from the IU Credit Union. The scope for these two requests are for documents created within the last 6 months.

On January 11 2007, I received a fat envelope full of papers from the Office of the University Counsel. The response can be seen here. Most of the information was fairly boring, but there were some gems. I've scanned the interesting documents and put them online here.

Typically, when phishers send emails out - they will collect an email list of millions of email addresses, and then send the same email out to them. Thus, in an effort to get the most bang for their phishing buck, the fraudsters target major banks. The idea being, of course, that out of 5 million email addresses, perhaps 200,000 actually belong to Citibank customers. Thus, it doesn't make too much sense for a phisher to send out 5 million emails claiming to be from a small credit union in Bloomington, Indiana. It simply isn't worth it.

If the phisher can get his hands on an email list of every person in Bloomington, then sending an email to every one of those people on the off-chance that they have a bank account with one of the major credit unions in town starts to make sense. This kind of targeted phishing attack has a name: spear phishing.

And what happened in June 2006, was a case of spear phishing.

From reading the documents that I've placed online, I've been able to figure out the following:

Chinese hackers - or at least, someone connecting from a machine in China - broke into one of the accounts on the 'steel' research cluster at IU. This cluster serves the research needs of the student population, and the "about steel" page says that it has over 24,000 active accounts. It seems that most students have accounts - I have one, and I don't recall ever requesting one. Presumably, it happens as part of the general account setup process.

Ok, so the hackers were able to gain access to Steel. What next?

Every unix machine has an "/etc/passwd" file that lists information on every active user account on the system. Steel has one of these. I just logged into steel a few moments ago to test this, and as of April 15 2007, Steel has over 30,000 user accounts listed in /etc/passwd.

With access to steel, the hackers were able to download the /etc/passwd file, and thus get a list of many many active user accounts. Your steel account name is the same as your IU email address. Thus, the hackers were able to get 30,000 email addresses.
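To make the mechanics concrete: the username is simply the first colon-separated field of each /etc/passwd entry, so turning the file into a mailing list takes only a couple of lines. The entries below are fabricated examples, not real IU accounts:

```python
# Illustrative sketch: extracting usernames from passwd-format lines and
# appending the campus mail domain. These entries are made up, not IU data.
passwd_lines = [
    "jsmith:x:1001:100:John Smith:/home/jsmith:/bin/bash",
    "adoe:x:1002:100:Alice Doe:/home/adoe:/bin/bash",
]

# The username is the first colon-separated field of each entry.
emails = [line.split(":", 1)[0] + "@indiana.edu" for line in passwd_lines]
print(emails)  # ['jsmith@indiana.edu', 'adoe@indiana.edu']
```

That a world-readable system file maps so directly to deliverable email addresses is exactly what made this list so valuable to the phishers.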

The phishers then sent a large number of fake emails, claiming to be from the IU credit union, to IU users, directly from the steel cluster - that is, the fake emails were sent from within the IU network. A recent report indicates that between 70-80 users were duped by this attack. A subsequent attack happened in February of 2007. It is more than likely that either the same phishers, or another gang using the same stolen email address list, caused this attack. We will probably continue to see attacks, every few months, using the same email list. Only in 2-3 years, when most of the students on the list have graduated, will the list finally be useless. Thus, for the phishers, the capture of the email list is a gift that keeps on giving.

It's also worth noting that the very same phishers launched a similar attack against a credit union in Florida. They left a bunch of forensic evidence behind on steel which proves the link between the two credit union attacks. Clearly, these guys have found a niche (breaking into machines, gathering info, and then targeting small credit unions and banks). My guess is that it's quite profitable.

Which brings me to the most important point of this blog-post: Notification.

Indiana has a Breach Notification Law. However, it is very narrowly written to only kick in when the following information is lost:

  • Sec. 3. (a) As used in this chapter, "personal information" means:

    • (1) an individual's

      • (A) first name and last name; or

      • (B) first initial and last name; and

    • (2) at least one (1) of the following data elements:

      • (A) Social Security number.

      • (B) Driver's license number or identification card number.

      • (C) Account number, credit card number, debit card number, security code, access code, or password of an individual's financial account.

  • (b) The term does not include the following:

    • (1) The last four (4) digits of an individual's Social Security number.

    • (2) Publicly available information that is lawfully made available to the public from records of a federal agency or local agency.



The act only went into force on June 30 2006 - which is, sadly, a few weeks too late.

For the purposes of discussion, let's pretend that the law kicked in on Jan 1 2006. In such a scenario, the university would still not be required to tell any of their students that Chinese phishers had access to their email addresses. Why? Because an email address is not covered by the law. If the phishers had stolen SSNs, then the university would be required to notify the student body...

I want to make it perfectly clear that I am not criticizing the university. They followed the law, and acted in a perfectly legal manner. My criticism is of the law, which is weak and ineffective.

As a side note: if the university decided to track students by their full last name and all but the first letter of their first name (i.e. "hristopher Soghoian"), as the law is currently written, the university wouldn't be required to notify students if the school were hacked into and the entire database of student records and SSNs were stolen. Obviously, tracking students in such a way would not be very practical, but it does demonstrate that the law is fairly narrow, and doesn't cover everything.

My goal in posting this isn't to heap criticism on the university staff. The staff here are overworked, underpaid, and do their jobs as best as they can. The problem here is the law. It is broken, and needs to be fixed. We should not depend on inquisitive graduate students filing Public Records Act requests to learn about these kinds of incidents. The law should be amended so that we're told when they happen.

If you surveyed the student body, and asked them: "If the university were hacked into, and criminals were able to learn your email address, which they could later use for phishing attacks, or even to sell to spammers - would you want to be told" - I'm guessing that a large number would answer yes. Admittedly, this is a fairly loaded question - but in any case...

(As a technical aside: as things currently stand, every one of the 30,000 users who has an account on the steel cluster can get a full list of students' email addresses. This should be fixed. Any evil student could quite easily download the list and then sell it to spammers.)

Tuesday, April 10, 2007

A Deceit-Augmented Man In The Middle Attack Against Bank of America's SiteKey ® Service

See a video of the phishing attack in action (quicktime .mov, 700k): mirror 1, mirror 2, mirror 3, mirror 4




Executive Summary

We present this demonstration of a "deceit-augmented man in the middle attack" against the SiteKey ® service used by Bank of America (the underlying technology is also used by other companies). This, or a similar attack, could be used by a phisher to deceive users into entering their login details to a fraudulent website. BoA's own website tells users: "[W]hen you see your SiteKey, you can be certain you're at the valid Online Banking website at Bank of America, and not a fraudulent look-alike site. Only enter your Passcode when you see the SiteKey image and image title you selected."

We believe that this statement is not completely true, as our deceit-augmented man-in-the-middle attack shows. Whereas a normal man-in-the-middle attack identically replicates the attacked site, a deceit-augmented man-in-the-middle attack may present the user with a slightly different user interface than the regular interface. Man in the middle (MiTM) attacks are not a new threat - they have been known about for a number of years, and phishers have already used them to target Citibank and other online banks.
How a man in the middle attack works against an online bank. The user communicates with the phisher, while believing that she is speaking to the bank. The phisher can use deceit to query the user for more information than the bank would normally ask for. Using this additional information, the phisher can communicate with the bank, while pretending to be the user.


We are putting this demonstration online to help warn the public of this risk. Just because you see your Sitekey/Passmark image, or Yahoo personalized sign-in seal, you should still be careful. Those security schemes, alone, are not enough to protect your security online. We suggest that users protect themselves from phishing threats online by adopting the following online security best practices:

  • Look for the SSL (lock) icon on the right hand side of the address bar in the browser.

  • Do not click on links in emails from anyone claiming to be your bank, financial firm, or anyone to whom you give money or personal information. Instead, type the Internet address into the address bar by hand.

  • For sites that you regularly go to (your own online bank), type the address into the Internet address bar, and then make a bookmark. Use this bookmark in the future.

  • Use anti-phishing software. Firefox 2.0 has this kind of technology built in. Users of other browsers should download one of the anti-phishing toolbars distributed by reputable groups. These include: Netcraft, Google and Microsoft.

  • If your bank has a "remember me" feature, use it. When you re-visit your bank's website, you should see your login name filled in already. While this is by no means a foolproof security feature, it is at least a signal that can help you to determine if you are visiting the bank's legitimate website.
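For the more technically inclined, the "look for the lock" advice can even be automated. The sketch below (the hostname is just an example) uses Python's standard ssl module to fetch and verify a site's certificate chain before trusting it:

```python
# Rough illustration of verifying a site's TLS certificate programmatically.
import socket
import ssl

def flatten_subject(cert):
    # getpeercert() returns the subject as nested tuples of (key, value)
    # pairs; flatten them into a simple dict, e.g. {"commonName": ...}.
    return {k: v for field in cert.get("subject", ()) for (k, v) in field}

def cert_subject(hostname, port=443):
    # create_default_context() verifies the certificate chain and hostname;
    # a connection to a site with a bad certificate will raise an error.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return flatten_subject(tls.getpeercert())

# Example (requires network access):
# print(cert_subject("www.bankofamerica.com").get("commonName"))
```

Of course, a phishing site can obtain a valid certificate for its own look-alike domain, so a verified lock only proves you are talking to the name in the address bar - which is why the "type the address yourself" advice above still matters.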




In late 2005, federal regulators released guidance documents which strongly suggested that banks begin to roll out two-factor authentication schemes for their online banking systems. Bank of America responded by introducing its SiteKey service, which used technology licensed from PassMark Security ® (now RSA ®, the security division of EMC ®). Shortly after the launch, some in the security community questioned the service, and voiced their concerns that a "man in the middle" (MiTM) attack could be performed to convince a user to reveal her username and password(s) to a phishing website. At the time, Mark Goines, chief marketer for Passmark, told a journalist that "a man-in-the-middle attack is not possible" as SiteKey uses a "secure cookie" to link a user's PC to the Bank of America Web site. The cookie can only be read by a server with a specific security certificate and not by a malicious Web site set up by an attacker in such an attack, Goines said.


A phishing page displaying a user's SiteKey


Man in the middle attacks are possible in spite of the "secure cookies" that Passmark includes with their services. This is due to the fact that the human factor is often the most important, yet weakest, link in a security system. The necessary feature that enables customers to log in from computers that they have not used in the past (a new computer at home, an Internet cafe on vacation, etc) - after being prompted for the answer to one of their security questions - enables a phisher to prompt the user with her security question, display her SiteKey image, and then steal her login information.

A number of large financial institutions besides Bank of America have licensed the technology underlying the SiteKey system, including Vanguard and Pentagon Federal Credit Union. Yahoo!® uses a similar technology, which is vulnerable to the same deceit-augmented MiTM attack. Other second-factor schemes based on cookies and security questions are also vulnerable to attacks of the kind we demonstrate. Deceit is a powerful tool in the hands of an attacker - it can be used to convince an innocent user to give away whatever authentication information is supposed to remain secret.



The legitimate BoA login page


Man in the middle attacks are an established form of deceit, and are well understood by the computer security community. There are existing MiTM tools that work against 802.11 wireless networks and secure shell & secure web (https/SSL) connections. Russian phishers used MiTM attacks against Citibank's two-factor authentication scheme last year. A similar attack against ABN Amro (a Dutch bank) was recently revealed by the press. Researchers in the security community have previously voiced concern that the SiteKey system is vulnerable to a similar attack. We believe we are the first to provide a demo that allows people to see just how convincing a deceit-augmented MiTM attack can be.




"Phishy" BoA login page 1

A recent study by researchers from MIT & Harvard found that when they removed the SiteKey image from users' login sessions, only 3% of the users did not continue with their login session (a group which included both subjects role-playing with fictitious accounts and passwords, and subjects using their own accounts and passwords). It was evident from their study that not everybody noticed the absence of the SiteKey image. Another recently published study showed that eBay users did not notice if their personalized greeting was removed, and that they would log in even if it was absent. The subjects in the latter study were not aware of being studied, and did use their own accounts and passwords. These findings suggest that the outcome of the MIT/Harvard study would not have been different had all subjects been unaware that they were being studied and used their own accounts and passwords. Another important finding of the MIT & Harvard study was that 100% of users typed in their password even when the site was served over a non-secure session (i.e., there was no lock icon in the toolbar).

While a more prominent display of the SiteKey image may improve the security of the scheme (as users will not so easily ignore the absence of a correct image), the scheme would still be vulnerable to man in the middle attacks, as these would truly cause the correct image to be displayed, no matter how the site is redesigned. The Stop-Phishing Research Group at Indiana University has developed a demonstration of a deceit-augmented man-in-the-middle attack against Bank of America so that members of the general public can better understand and appreciate what such an attack would look like, the vulnerability of SiteKey to this type of attack, and the fact that phishers would be able to display users' SiteKeys in the context of a man in the middle attack. Ultimately, we wish to highlight the need for users to pay careful attention to various characteristics of the sites they are visiting, and to take steps, including the best practices noted above, to help avoid falling prey to phishers.


"Phishy" BoA login page 2 (asking security question)


We believe that a man in the middle attack against Bank of America or another institution using the technology underlying SiteKey would look as follows. Our demonstration is based on a concise 130-line Ruby script that carries out this attack and that could be written by a phisher of average skill in a relatively short time.

  • The user is prompted for her name and state of residence.

  • That information is then sent by the phisher's server, not the user's computer, to BoA.

  • The phisher passes on the security question BoA asks to the user, and then sends the user's response back to BoA.

  • The bank replies by giving the phisher the SiteKey image and the SiteKey caption.

  • With that in hand, the phisher can present the valid SiteKey image to the user, convincing her that she is dealing with the legitimate BoA website.

  • The phisher can then prompt the user for her login password, login to BoA's site, and from there, the phisher would have complete control over the user's online banking session.
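The relay flow in the steps above can be sketched as a simple simulation. This is an illustrative Python sketch, not the authors' actual Ruby demo; the `FakeBank` class, the `phish` function, and all account data are hypothetical:

```python
# Illustrative simulation of the deceit-augmented MiTM flow described above.
# All names and data here are hypothetical stand-ins.

class FakeBank:
    """Stands in for the real bank's SiteKey challenge flow."""
    def __init__(self):
        self.accounts = {
            ("alice", "IN"): {
                "question": "What was your first pet's name?",
                "answer": "rex",
                "sitekey": ("beach.jpg", "my vacation"),
            }
        }

    def challenge(self, user, state):
        # Unrecognized computer: the bank asks a security question first.
        return self.accounts[(user, state)]["question"]

    def verify_answer(self, user, state, answer):
        acct = self.accounts[(user, state)]
        if answer == acct["answer"]:
            return acct["sitekey"]   # bank now reveals image + caption
        return None

def phish(bank, victim_responses):
    """Relay each step between the victim and the bank."""
    user, state = victim_responses["identity"]        # step 1: name + state
    question = bank.challenge(user, state)            # steps 2-3: relay the question
    sitekey = bank.verify_answer(user, state,
                                 victim_responses["answers"][question])
    # steps 4-5: with the genuine SiteKey in hand, the phisher gains trust
    password = victim_responses["password"] if sitekey else None
    return sitekey, password                          # step 6: credentials stolen

victim = {
    "identity": ("alice", "IN"),
    "answers": {"What was your first pet's name?": "rex"},
    "password": "hunter2",
}
sitekey, password = phish(FakeBank(), victim)
print(sitekey, password)
```

The point of the simulation is that the phisher never needs to know the SiteKey in advance: every secret is obtained by relaying the victim's own answers to the bank in real time.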



Why does BoA allow a user to get access to her SiteKey image after answering her security questions? The reason is simple. Normally, BoA knows to present the right SiteKey image to a user because it recognizes the computer she logs in from as belonging to her. This is done using secure cookies. But what happens if there are no cookies? Say that the user wants to log in to her BoA account from a computer that she has not successfully used to connect to BoA's website before. Before sending the SiteKey image, BoA will require the user to provide some evidence of her identity: the answers to her security questions. Once BoA receives these and has verified that they are correct, it will send the user her SiteKey image. That allows the user to verify that she is really communicating with BoA, and not an impostor, which in turn gives her the confidence to enter her password.
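The bank-side decision just described amounts to a two-branch check. The sketch below is purely hypothetical (it is not BoA's actual code; `sitekey_for` and the record layout are invented for illustration), but it captures the logic: show the image if the cookie is recognized, or if the security question is answered correctly:

```python
# Hypothetical sketch of the bank-side logic described above: reveal the
# SiteKey image only to a recognized computer (cookie) or after a correct
# security-question answer. Not BoA's actual implementation.

def sitekey_for(user, cookie, answer, db):
    record = db[user]
    if cookie in record["known_cookies"]:
        return record["sitekey"]             # recognized computer: show image
    if answer == record["security_answer"]:
        record["known_cookies"].add(cookie)  # remember this new computer
        return record["sitekey"]
    return None                              # withhold the image

db = {"alice": {"known_cookies": {"c1"},
                "security_answer": "rex",
                "sitekey": "beach.jpg"}}

print(sitekey_for("alice", "c1", None, db))     # recognized cookie
print(sitekey_for("alice", "c2", "rex", db))    # new computer, correct answer
print(sitekey_for("alice", "c3", "wrong", db))  # neither: image withheld
```

The second branch is exactly the loophole a MiTM phisher exploits: anyone who can relay a correct security-question answer looks, to the bank, like the user on a new computer.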

This is the loophole that we use in our demonstration. Through deceit, we convince the user to answer her security question, and thus obtain the SiteKey image.

RSA's PassMark technology is part of a complete package that includes a sophisticated risk/threat engine, the RSA Adaptive Authentication product. Just as credit card companies use data mining to pick out fraudulent transactions based on signals and fuzzy data, RSA gives banks the ability to assign a good/bad score to an IP address, estimating the risk that it belongs to an attacker and not the real customer.

If a naive attacker deployed a phishing site similar to the one we have demonstrated on this page, it is quite likely that RSA would very quickly suspect that something bad was happening, simply because hundreds of different users' SiteKeys would all be requested from the same IP address.
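The kind of signal such a risk engine could use is easy to illustrate: count how many distinct users' SiteKeys each IP address requests. This is a toy sketch under our own assumptions (RSA's actual Adaptive Authentication engine is proprietary, and `flag_suspicious_ips` and its threshold are invented here):

```python
# Toy sketch of one anomaly signal a risk engine might use: an IP address
# that requests SiteKeys for many distinct users is likely a MiTM relay.
# Illustrative only; not RSA's actual (proprietary) logic.

from collections import defaultdict

def flag_suspicious_ips(request_log, threshold=5):
    users_per_ip = defaultdict(set)
    for ip, user in request_log:
        users_per_ip[ip].add(user)
    return [ip for ip, users in users_per_ip.items()
            if len(users) >= threshold]

log = [("10.0.0.1", f"user{i}") for i in range(50)]  # one IP, many users
log += [("192.168.1.7", "bob")]                      # a normal customer
print(flag_suspicious_ips(log))
```

A single customer's computer rarely fetches more than one or two users' SiteKeys, so the naive MiTM server's IP stands out immediately, which is why the next paragraph matters: real attackers would spread these requests across many IPs.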

However, just as criminals use obfuscation and hiding tricks to perpetrate other kinds of online fraud, such as the use of a dual-personality page to perform click fraud, so could phishers use them to camouflage a MiTM server. This can be achieved by proxying users' SiteKey requests through the hundreds of thousands of infected/vulnerable home computers on the Internet (known as "bot networks"), through the perfectly legitimate Tor anonymizing network, or by more complex techniques, such as exploiting compromised home routers using vulnerabilities recently described by Alex Tsow et al. and Sid Stamm et al. They demonstrated the ease with which home routers can be taken over by an attacker and have their software modified without the user's knowledge. These routers can then be used to perform click fraud, or to hijack home users' web browsing sessions.



A Few Notes:

  • Source Code: To provide factual support for our discussion above regarding the threat of a man in the middle attack against a site using the technology underlying SiteKey, and to demonstrate how relatively easy such an attack would be to perform, we have posted an excerpt of the Ruby script here. This is the portion that would connect to BoA and download the SiteKey image. A real attacker would incorporate other elements, such as HTML/images to mimic the bank's website, that are not present in this code.

  • "sitekey.evil-phisher.com" does not exist. The demo was created on a university computer with a copy of the apache webserver running on the 'localhost'.

  • Our thanks to Sid Stamm for lending his javascript expertise during the early stages of this project.

  • SiteKey® is a registered trademark of the Bank of America Corporation. Bank of America has not sponsored, participated in, or approved this demonstration material.

  • PassMark® is a registered trademark of PassMark Software Pty. Ltd. Co.; RSA® is a registered trademark of RSA Security; EMC2® is a registered trademark of EMC. Neither RSA nor EMC has sponsored, participated in, or approved of this demonstration material.

  • Yahoo!® is a registered trademark of Yahoo! Inc.





About the Authors:

This is a project by Christopher Soghoian and Prof. Markus Jakobsson, both with the Stop-Phishing Research Group at Indiana University.


Christopher Soghoian is a graduate student in the School of Informatics at Indiana University. His research is focused on the areas of phishing, click-fraud, search privacy and airport security. He has worked as an intern with Google, Apple, IBM and Cybertrust. He is the co-inventor of several pending patents in the areas of mobile authentication, anti-phishing, and virtual machine defense against viruses. His website is http://www.dubfire.net/chris/ and he blogs regularly at http://paranoia.dubfire.net


Markus Jakobsson is a computer security researcher and entrepreneur, best known for his research on phishing and anti-phishing. He is an associate professor in the School of Informatics and associate director of the Center for Applied Cybersecurity at Indiana University, and specializes in understanding and preventing phishing. He is a visiting research fellow of the APWG, a founder of RavenWhite, a founding member of the RSA eFraud Forum, and a consultant to the financial sector. He is the inventor or co-inventor of over 80 security-related patents and patents pending, and a co-editor of Phishing and Countermeasures: Understanding the Increasing Problem of Electronic Identity Theft, published in 2006 by John Wiley & Sons. His website is: http://www.informatics.indiana.edu/markus/


The Stop-Phishing Research Group at Indiana University, Stop-Phishing.com, is striving to understand, detect and prevent online fraud, and in particular, to reduce the economic viability of phishing attacks. We achieve this goal through a cross-disciplinary research agenda in which we consider all facets of the problem, ranging from psychological aspects and technology matters to legal issues and interface design considerations. We are attuned to needs and concerns within the financial sector, among privacy advocates, and of common users, and are dedicated to turning the tide.

Tuesday, March 13, 2007

The Economics of Phishing Emails, and Corporate Logos



Disclaimer: This is all idle speculation. I have no inside info to support my claims.



This evening, I spent some time browsing through Phish Tank - A fantastic live reference for phishing websites.

A shockingly large number of these websites load images directly from the web servers of Paypal, Ebay and other .com's. That is, instead of making a local copy of the image and hosting it on the server that runs the phishing site, they include the image directly from Ebay's webserver. Not only does Ebay get phished, but it also has to pay the bandwidth costs for the graphics displayed to the victim.

It's almost like the tale of a twisted dictator shooting someone, and then sending the victim's family a bill for the bullet.

This got me thinking.

Paypal, Bank of America, and others know exactly where their graphics should be shown on the web. A general and reasonable rule would be: any time a website at Paypal.com loads our logo, let it happen. If someone at evilphisher.com tries to load our image, serve a big warning image instead. This could easily be done by checking the Referer header passed by the browser.
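The rule just described can be sketched in a few lines. This is a hypothetical illustration (Paypal's image servers do not necessarily work this way, and `image_to_serve` and the host list are invented here), but it shows the Referer check in action:

```python
# Sketch of the Referer-based hotlink check described above. Hypothetical;
# not how any real image server is known to be configured.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.paypal.com", "paypal.com"}

def image_to_serve(referer):
    if not referer:
        # Many browsers and proxies omit the Referer header entirely;
        # blocking empty referrers would break real users, so allow them.
        return "logo.png"
    host = urlparse(referer).hostname or ""
    if host in ALLOWED_HOSTS:
        return "logo.png"
    return "warning.png"   # hot-linked from elsewhere, e.g. a phishing page

print(image_to_serve("https://www.paypal.com/signin"))  # legitimate page
print(image_to_serve("http://evilphisher.com/fake"))    # hotlinking phisher
```

In practice this check is usually done in the web server's own configuration (e.g. rewrite rules keyed on the Referer header) rather than in application code, but the logic is the same.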

This would be trivial to implement. The question, then, is why isn't Paypal already doing this?

As crazy as it may be, the answer is probably something like this:

1. Bandwidth is cheap, at least in the huge quantities that Paypal is purchasing.
2. Phishers are often hosting their sites on zombie/hacked machines, so they don't pay for the bandwidth themselves.
3. If Paypal starts checking the Referer string sent by a browser, phishing website designers will simply save a local copy of the image and host it on their own websites.

Simply put, Paypal doesn't really gain much by disallowing the phishers from using Paypal.com to host their images, and in fact, loses quite a bit.

As things stand right now, Paypal can analyze their logs and see exactly which websites are causing people to load their images. Paypal probably has a team of people, or several scripts, hitting each one of these websites to see if they are indeed phishing sites. If Paypal cuts off the flow of images and forces phishers to host their own image files, they will immediately lose this valuable source of intelligence.
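That log-mining idea is simple to sketch: collect the Referer hosts in the image server's access log, discard your own domains, and rank the rest as candidate phishing sites. The log format and every name below are hypothetical illustrations, not anything Paypal is known to run:

```python
# Sketch of the intelligence-gathering idea above: mine an image server's
# access log for foreign Referer hosts. Log format and names are invented.

from collections import Counter
from urllib.parse import urlparse

def candidate_phish_hosts(log_lines, own_hosts):
    hits = Counter()
    for line in log_lines:
        # Naive parse: the Referer is the last quoted field on the line.
        referer = line.rsplit('"', 2)[-2]
        host = urlparse(referer).hostname
        if host and host not in own_hosts:
            hits[host] += 1
    return hits.most_common()

log = [
    '1.2.3.4 - - "GET /logo.gif" "http://evilphisher.com/fake/"',
    '1.2.3.5 - - "GET /logo.gif" "http://evilphisher.com/fake/"',
    '5.6.7.8 - - "GET /logo.gif" "https://www.paypal.com/signin"',
]
print(candidate_phish_hosts(log, {"www.paypal.com"}))
```

Every hotlinked image request is, in effect, a free tip-off: the phishing site announces itself in the victim's Referer header before the scam has even claimed many victims.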

In this case - it seems that the enemy you know is far better than the enemy you've forced underground.

Tuesday, February 20, 2007

New TSA Website back online - Now Less Phishy

Both Ryan Singel of Wired News and Brian Krebs of the Washington Post picked up the story of TSA's extremely amateurish-looking website last week.

The website was hosted by a private company, did not use SSL, did not have an OMB form number, and was riddled with typos - sure signs that you shouldn't trust it, and enough reason for some to claim (albeit humorously) that it was a phishing site. After a few phone calls from members of the press, TSA pulled the website.


The TSA Traveler Identity Verification Program website still tells passengers to download and fill out a .pdf form.

However, just like a shady retailer running a perpetual going-out-of-business sale, TSA's website has resurfaced, only this time with a new name. It isn't linked yet from the main TSA.gov site, but can be found via links from dhs.gov.

The new website is: https://trip.dhs.gov/.

New improvements:


  1. http is redirected to https. Thus, even if their webmasters make future mistakes and forget to link to the secure website, their webserver will redirect all non-secure requests to the secure server. Good move! Try it: go to http://trip.dhs.gov and watch as your browser gets redirected to https://trip.dhs.gov/.

  2. OMB Control Number. Any collection of personal information by the government is required to include an OMB Control Number. This was absent from their previous website, and, reportedly, from the Microsoft Word file previously available for download. You can view their Paperwork Reduction Act Statement (which includes their OMB #1652-0044) here: https://trip.dhs.gov/pra.htm

  3. No more Word documents! They previously had an MS Word file available for download, if you didn't wish to send your information to their outsourced webserver. Predictably, this Word file included metadata showing who at TSA had edited it. They have now shifted to a PDF file.
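The redirect behavior in item 1 amounts to a one-line rule on the server. The sketch below is hypothetical (a real deployment would do this in Apache or IIS configuration, not application code, and `handle_request` is invented here), but it shows why the fix is robust against future linking mistakes:

```python
# Hypothetical sketch of the http -> https redirect in item 1. Real servers
# implement this in configuration; the function below just shows the logic.

def handle_request(scheme, host, path):
    if scheme == "http":
        # A 301 sends every non-secure request to the secure server,
        # regardless of how the visitor found the URL.
        return 301, {"Location": f"https://{host}{path}"}, b""
    return 200, {}, b"<secure content>"

status, headers, _ = handle_request("http", "trip.dhs.gov", "/")
print(status, headers["Location"])
```

Because the redirect happens server-side, even a stale `http://` link pasted into an email or an old bookmark still lands the visitor on the encrypted site.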


Problem: It is still outsourced.

Both http://www.tsa.gov and http://www.dhs.gov are served by Akamai's distributed proxies, so it's impossible to figure out where they're actually being hosted.

However, someone from TSA visited my website last month, so I do know that TSA's outbound web proxies are:

pnxuser1.tsa.dhs.gov A 129.33.119.12
pnxuser2.tsa.dhs.gov A 129.33.119.13
pnxuser3.tsa.dhs.gov A 129.33.119.14
pnxuser4.tsa.dhs.gov A 129.33.119.25
pnxuser5.tsa.dhs.gov A 129.33.119.26

(Note, this is why Tor is useful)

Additionally, http://tsa.dhs.gov (which runs a webserver, albeit not one configured for public viewing) resolves to:
tsa.dhs.gov A 129.33.119.130

TSA's new website, http://trip.dhs.gov, resolves to
trip.dhs.gov A 64.124.212.23

Now, it's quite possible that TSA/DHS owns a number of chunks of IP address space. All I'm stating here is that the IP addresses known to be owned by TSA/DHS are nowhere near the IP used by the trip.dhs.gov website.

I don't know the IP address of the old website, rms.desyne.com, since it is no longer listed in DNS records. However, www.desyne.com resolves to 64.124.142.34.

Furthermore, traceroutes to trip.dhs.gov and www.desyne.com lead me to believe that they're both hosted in the same data center. I'd be willing to bet a couple of Fin du Monde beers that, even with the change of DNS, Desyne is still running and hosting TSA's Traveler Redress Inquiry Program (TRIP) website.
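The address comparison behind this hunch is easy to make concrete: check how many leading octets two IPv4 addresses share. A shared prefix is only suggestive, not proof of a shared data center; the helper below (`shared_octets`) is my own illustration, applied to the DNS results listed above:

```python
# Compare IPv4 addresses by counting shared leading octets. A shared /16
# is a hint (not proof) that two hosts sit in nearby address space.

def shared_octets(ip_a, ip_b):
    count = 0
    for octet_a, octet_b in zip(ip_a.split("."), ip_b.split(".")):
        if octet_a != octet_b:
            break
        count += 1
    return count

# trip.dhs.gov vs www.desyne.com, per the DNS lookups above: same 64.124/16
print(shared_octets("64.124.212.23", "64.124.142.34"))
# trip.dhs.gov vs tsa.dhs.gov: no shared prefix at all
print(shared_octets("64.124.212.23", "129.33.119.130"))
```

Combined with the traceroute output below, where both targets disappear after the same above.net router, the shared 64.124/16 prefix is what makes the "same data center" bet plausible.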



traceroute to trip.dhs.gov (64.124.212.23), 30 hops max, 38 byte packets

.....

12 so-5-0-0.mpr2.iad1.us.above.net (64.125.27.209) 81.561 ms 118.933 ms 84.338 ms
13 so-3-0-0.mpr1.iad2.us.above.net (64.125.29.134) 82.985 ms 81.489 ms 83.893 ms
14 * * *

traceroute to www.desyne.com (64.124.142.34), 30 hops max, 38 byte packets

.....

12 so-5-0-0.mpr2.iad1.us.above.net (64.125.27.209) 84.352 ms 83.722 ms 84.142 ms
13 so-3-0-0.mpr1.iad2.us.above.net (64.125.29.134) 82.005 ms 82.326 ms 83.552 ms
14 * * *




Problem: It still uses cookies!

As Ryan Singel expertly notes, 2003 White House OMB rules state that government websites should not use cookies: "Particular privacy concerns may be raised when uses of web technology can track the activities of users over time and across different web sites. [...] Because of the unique laws and traditions about government access to citizens' personal information, the presumption should be that "cookies" will not be used at Federal web sites."

Ryan additionally states: If cookies are going to be used, the rules require that the site include "clear and conspicuous notice" of the cookies, that there exist "a compelling need to gather the data on the site," that there are "appropriate and publicly disclosed privacy safeguards" for cookie information, and that the head of the agency personally approves the cookies.

When I browse to both http://www.tsa.gov and this new unannounced TSA website, I am given a web cookie: "ForeseeLoyalty_MID_8El4YcUdgN".

Admittedly, this is not nearly as big a problem as their previous un-SSL-encrypted webserver. However, I want TSA to have to follow the rules - especially since they make us follow theirs, even in cases where they won't actually tell us what the rules are.

The big question is this: if TSA is following official US government policy, then Kip Hawley, Director of TSA, will have signed off on the use of cookies for TSA's website. Did he indeed sign off? Inquiring minds wish to know.