Saturday, May 30, 2009

Pentesters and Beer

Over the years I've come to the inevitable conclusion that pentesters and beer are inseparable. It's as fundamental a pairing as salt & pepper, Internet & porn, kebabs & chilli-sauce...

Most days of pentesting culminate in an evening down the pub (typically with the customer's onsite technical authority), and yet the following day all concerned are as sprightly as they were on the first day (yes - there are reasons for that!).

On the other hand, gathering an onslaught of pentesters together is always a cause for concern. Taking the onslaught to a far-away land for a company kickoff tends to result in high medical expenses and some interesting legal fees - but, as the saying goes, you're not a real pentester if you get caught.

Nowadays pentesters go onsite armed with quad-core laptops, MP3 players and bottles of maximum strength paracetamol. But there's always been something missing - until now!

Pentesters of the world, I give you the preformatted Formal Apology...


Note: There have been many discussions about the naming convention for a group of pentesters. "Hustle", "swarm", "gaggle" and "pilgrimage" have all been proposed at some stage - I prefer "onslaught".

Sunday, May 24, 2009

Orange.fr SQL Injection - 245,000 clear text passwords...

OK, so it's getting a little tedious, but the folks over at HackersBlog have uncovered yet another high-profile site vulnerable to SQL Injection. This time it's Orange in France.

Through their Mystery Photo portal (http://laphotomystere.orange.fr/), it appears that user login credentials (including first name, surname, email and password) can be retrieved through some vulnerable parameters - something like 245,000 of them by last count.

Most importantly though, it looks like Orange forgot two of the fundamental security laws in managing online authentication credentials:
  1. NEVER STORE PASSWORDS IN THE CLEAR
  2. NEVER STORE PASSWORDS IN THE CLEAR
That's right, I'm saying it twice because it's that important!

If you're going to store authentication credentials, store hashes of the passwords instead. Better yet, salt your hashes too - thereby making it even tougher for the bad guys to break them.
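For the developers out there, the mechanics are simple. Here's a minimal sketch in modern Python using PBKDF2 via the standard library (the 16-byte salt and 100,000 iterations are illustrative choices, not a prescription):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a per-user salt."""
    if salt is None:
        salt = os.urandom(16)          # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the digest with the stored salt; compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

Store only the salt and digest. Even if the database is dumped via SQL Injection, the attacker gets hashes to brute-force rather than 245,000 ready-to-use passwords, and the per-user salt means each one has to be cracked individually.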

Over the years I've dealt with numerous folks working within the security teams of Orange around the world and they're generally a smart bunch, so this lapse in security is rather disappointing. I can only presume that (as is so typical nowadays) this particular Web portal element was designed and developed by a third party and didn't undergo the usual security scrutiny.

Regardless, Orange need to up their game here and get the vulnerability fixed. Apparently the folks over at HackersBlog informed Orange but haven't received a response from them.

As for those customers/patrons of the Orange.fr site - I'd recommend that you change your password immediately and, if you're also recycling the same password amongst multiple Web sites, you'd better change all those as well (but don't use the same "new" password you create for the Orange.fr site - since the site probably hasn't been fixed yet).

For more on passwords and their recovery, check out an earlier blog on the topic - Passwords Revisited.

Saturday, May 23, 2009

If you can't protect it, you'd better be able to detect it!

The security trend over the last half-decade has been towards "protection", and we've seen technologies such as IDS morph into IPS and network sniffing evolve into DLP.

What I find amusing/worrying is that this laser focus on protection means that organizations have increasingly dropped the ball where it comes to threats that currently have no protection solution on the market. Basically, an attitude of "if I can't protect against it, then I don't want to know about it" has become prevalent within the security industry.

So, on that note, I found it refreshing to read the brief story over at Dark Reading, How To Protect Your Organization From Malicious Insiders, by Michael Davis. It's been a long-standing mantra of mine that "If you can't protect it, you'd better be able to detect it!"

The 'Insider Threat' is one of the more insidious threats facing corporates today (especially in economic turmoil) and there really are so many ways for a knowledgeable employee to screw things up if they wanted to. I've had to do a mix of forensics and internal pentests within these areas in the past and it's always a potential playground of carnage.

But it's a little distressing to me that with the global sales push on DLP solutions many organizations have essentially thrown away their common sense. What I've observed is that enterprises that were initially deeply concerned about the potential of insider threat jumped heavily on to the DLP bandwagon, seeing this class of security technology as a way of overcoming the threat. Then, once they've deployed the DLP solution, it's as if a mental box is ticked - "insider threat = solved" - and they move on to their next priority.

The problem is that DLP sucks as a protection system against the real insider threat, and its rollout within an enterprise can be a substantial distraction to the security & audit teams responsible for tracking the threat. Add to that the fact that executive support for further insider threat protection strategies quickly wanes after DLP has been rolled out -- "DLP = job done".

DLP will help identify (and block) many clear-text data leakage routes from an enterprise; however, it'll do nothing against an insider who backdoors a server or Easter-eggs a database to self-destruct in a couple of weeks' time. Yet the mindset is that an investment has been made in DLP, and since these kinds of insider threats can't be "protected" against, the problem is deemed too tough to solve - even though it may have been "solved" prior to the DLP deployment. But that budget has now been used up, and DLP is supposed to reduce costs.

Whatever happened to "detection"? As far as the insider threat goes, if you can't protect against it, you'd damn-well better ensure you can detect it. Failing that, I hope you're budgeting enough for post-attack disaster recovery and forensics.

Think of it this way. Say you're running a public library. You can bag check everyone that leaves the library to make sure they aren't stealing your books - and that's a wise precaution. But that doesn't mean you should skimp on the smoke detectors. The threat is "book loss" but there are clear differences between protection and detection strategies.
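To make the protect-versus-detect distinction concrete, detection can be as unglamorous as keeping a baseline and noticing when it changes. A rough sketch in Python (real-world integrity monitors like Tripwire do this far more robustly - this just illustrates the principle):

```python
import hashlib
from pathlib import Path

def snapshot(root):
    """Record a SHA-256 fingerprint of every file under root."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def detect_changes(baseline, current):
    """Diff two snapshots: report files added, removed and modified."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(
            p for p in set(baseline) & set(current)
            if baseline[p] != current[p]
        ),
    }
```

Nothing here stops an insider from backdooring a server - but run periodically against critical paths, it means you find out about the change before the Easter egg goes off, not after.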

Tuesday, May 19, 2009

Not-so-secret Recovery

Over the years I've discussed the topic of breaking Web application/portal passwords many times and, as I've constantly said, the easiest way to hack a user's account is typically through the "password recovery" facilities.

On that topic, there's a new research paper that puts some figures to how successful the technique is. The paper goes by the title "It's no secret" and quantifies the reliability of 'secret' questions as a back-door to authentication systems.

I'd recommend a read of the new paper when you get a chance - and probably combine it with some of the following reading as well...
Passwords Revisited
Challenging Challenge Questions
Choosing Better Challenge Questions

Gamespot.com Vulnerable to SQL Injection - 8,000,000 records exposed

It seems that "Unu" over at HackersBlog has exploited a new SQL Injection flaw in Gamespot.com to unveil some 8,000,000 member accounts.

The credentials extracted by Unu included home addresses, dates of birth, email addresses, obfuscated passwords (hashed/encrypted?) and a few other details - all of which are valuable to enterprising criminals and have a monetary value "on the street".

I'm glad that Gamespot at least did something right by not storing user account passwords in the clear - which is so often the case with many Web application portals. I'm not so pleased that Gamespot hadn't found this particular SQL Injection point within their application during a regular pentest. The flaw appears to have been in http://www.gamespot.com/pages/unions/emblems.php with the "union_id" variable open to tampering. This particular flaw would have been easily discovered by simply running a commercial Web application vulnerability scanning tool.
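The fix for this class of flaw is old news: never build SQL by gluing user input into the query string - parameterize it instead. A hedged sketch using Python and SQLite (the table and column names are invented for illustration; only the "union_id" parameter name comes from the report):

```python
import sqlite3

# Hypothetical schema for illustration; Gamespot's real internals are unknown.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emblems (union_id INTEGER, name TEXT)")
conn.execute("INSERT INTO emblems VALUES (1, 'alpha'), (2, 'beta')")

def get_emblem_unsafe(union_id):
    # VULNERABLE: user input is concatenated straight into the SQL, so a
    # value like "1 UNION SELECT ..." rewrites the query itself.
    return conn.execute(
        f"SELECT name FROM emblems WHERE union_id = {union_id}"
    ).fetchall()

def get_emblem_safe(union_id):
    # Parameterized: the driver binds the value as data, never as SQL.
    return conn.execute(
        "SELECT name FROM emblems WHERE union_id = ?", (union_id,)
    ).fetchall()
```

Feed both functions the same injected string and the unsafe one happily appends the attacker's UNION SELECT to the result set, while the parameterized version treats it as a harmless (non-matching) value. Any commercial scanner would have flagged the first pattern in minutes.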


While it appears that Gamespot have now fixed the problem, it does raise the question of responsibility for leaking personal information in such a manner. We hear of all sorts of corporate requirements around the world that require large registered corporations to publicly disclose any data leakages, and to update their customers of any break-ins. But how does that apply to Web-only portals - especially to large portals such as Gamespot? I haven't seen any acknowledgment by Gamespot to their "customers" about the flaw - and no confirmation that the personal information of their 8,000,000 "customers" is safe from future attacks - nor a rebuttal of how many credentials were actually leaked.

Granted, Unu appears to have (at least partially) done the right thing in informing Gamespot of the flaw and withholding his public notification until it was fixed - but Unu isn't the only hacker out there armed with SQL Injection tools/knowledge, and I'm reasonably sure this wasn't the only flaw within Gamespot's Web portal (given how easy this one would have been to spot using standard off-the-shelf tools). Which raises the question of just how safe anyone's personal data is when entrusted to Web-only providers, and how accountable those providers are for that information.

I don't have any answers to that question - but plenty of opinions as to what needs to be done. Should the security industry help develop an online code of ethics for entities such as Gamespot and help them become better Internet denizens, or does naming and shaming work best?

Saturday, May 16, 2009

Organized Cybercrime Response or Vigilante mobs?

I was flying back from the OWASP 2009 Europe conference in Krakow yesterday, and with 17 hours of travel I had plenty of time to mull over different topics.

One of the topics I found myself thinking about in depth followed on from several questions that were raised after my talk, "Factoring malware and organized crime in to Web application security".

Over the last couple of years we've seen some fairly serious responses by industry and interested others in building support mechanisms focused on tackling organized cybercrime. Some of these movements have been focused on a very specific threat - such as the Conficker Working Group - while others have been more generic grass-roots responses such as McAfee's Cybercrime Response Unit.

There are a couple of problems though:
  1. Judging illegal behaviors/activities based upon your own country's legal system.
  2. When does a movement of concerned entities become a vigilante mob?
International Law & Values
Now, I'm certainly no international lawyer and would never claim to be one. But as a person who has lived, worked and emigrated to multiple countries around the world, spending multiple years in each getting familiar with their cultures, legal systems, taxes and social ethics, what I can say is that no two countries are particularly alike - not even the ones you'd think would be.

Sure, there are a lot of overlaps at various levels, but the combination of subtle differences results in quite a marked difference in world outlook.

Granted, as far as it comes to things such as hacking tools, most people have a basic understanding that a tool in one country may be classified differently in another - e.g. writing and distributing a hacking tool is legal, while actually using it against an unauthorised host is illegal.

I think most people understand the concept and probably think of it a bit like "there are countries that allow citizens to carry automatic handguns, then there are countries that only allow semi-automatic handguns, then there are countries that limit the number of bullets allowed in a handgun, and then there are countries that don't allow handguns at all". Which country has which particular law is probably unknown to the vast majority of people - but they understand the concept.

Unfortunately, what most people fail to grasp are the ethics and social norms that surround or dictate that particular law and, in my opinion, that's the element you really need to understand when looking at responses to the anti-cybercrime movement.

When I see and hear about these anti-cybercrime organizations and their "call to arms" in combating the threat, it worries me that they are basing their response (and anger) upon their own legal framework and their own country's ethics (as much as the phrase "country ethics" makes sense). The laws most western countries would like to see - laws that could aid the fight against cybercrime within or against their own country - need to be driven in a different manner if they are to be supported and enacted within other countries. To do that you really need to understand the local country's culture and ethics, because failure to do so merely results in misdirected hot air.

Vigilante Mobs
The discussion of country-specific ethics and culture leads me to also consider the question "when does a coordinated response become a vigilante mob?"

I have several concerns with the way some anti-cybercrime groups have appeared over recent years and approach their topic with single-minded intensity. It's the kind of drive I'd classify as being in the realm of religious fervor - with all the negative connotations that entails.

By all means, work with and support your local (i.e. country) law enforcement teams in combating the threat against your organization or customers. But if you're thinking of taking the law into your own hands and targeting (what you'd label as) cybercrime being operated in other countries - then you'd best think long and hard about the fact that the laws and ethics you're operating under are most likely not the same as those you're targeting - as you become part of a vigilante response to a threat.

Which, in my mind, draws further parallels to the religious troubles around the world. Only now perhaps we're looking at fanatic factions of an online anti-cybercrime religion.

Saturday, May 9, 2009

OWASP AppSec Europe 2009 - Krakow

Next week it's time for the 2009 OWASP AppSec Europe conference. It'll be held in lovely Krakow, Poland.

The conference runs from May 13th-14th, and I'll be there for this year's festivities. I'm speaking on the Thursday afternoon at 15:45 on the topic "Factoring Malware and Organized Crime in to Web Application Security".

If you're responsible for corporate security or secure Web application development, you should already be planning on being at OWASP next week. Don't forget to drop in on my talk.

The abstract for my talk:

The “good old days” of Web security being a battle between the application development team and a sole attacker operating from his bedroom have long since disappeared. Today’s Web application security is a battle with professional criminal hacking teams, organized at a global level, whose primary motivation is financial gain.

Despite knowledge of who the combatants are and their capabilities, both Web application developers and security consultants alike have persisted in largely ignoring this threat. Their doggedness in designing Web applications in the traditional way – with layers of authentication, authorization and complexity – has, to an extent, helped facilitate much of the success organized cyber-criminal teams have had over recent years.
Today’s security professionals need to factor in this organized criminal threat. With malware being near ubiquitous at the client, application developers need to address the fact that upwards of one-third of their customers are likely to be infected at any point in time. If so, how do you trust the data coming from your own customers and continue to do business with them?
The threat is most prevalent within the online banking industry, but the success of the tactics used by cyber-criminals to exploit these Web application vulnerabilities has seen them increasingly adopted in other profitable online spheres. How should Web developers factor in the use of malware (running on a host they have no control over) in to their application design? How should security consultants test and evaluate the countermeasures deployed by application designers to combat an organized cyber-crime threat?
With even the most advanced client authentication technologies being defeated, this session will cover how cyber-criminals are really defeating Web applications (by example) along with the multi-disciplinary skills and tactics developers and consultants need to adopt in order to help combat the evolving threat.

Wednesday, May 6, 2009

Patching the Web browser silently...

There's a new security paper out on the relative strengths of the patching methods used within Google's Chrome, Mozilla's Firefox, Apple's Safari and good old Opera. The paper, titled "Why Silent Updates Boost Security", was written by Thomas Duebendorfer and Stefan Frei, and progresses the studies Thomas, Stefan, Martin May and I did last year on Web browser insecurity with "Understanding the Web Browser Threat".

I'd recommend you take the time to read the paper. But for those that find themselves pressed for time, and need the highlights...
  • The paper analyzes the relative effectiveness of Web browser updating mechanisms in use by Chrome, Firefox, Safari and Opera.
  • Analysis is based upon access to anonymized logs from Google's Web servers. (Which I'm sure you'll agree are damned extensive!)
  • Back in June 2008, the previous study found that Firefox had the most successful update mechanism. Since then, Google's Chrome browser has appeared, and its updating mechanism has been found to be even more successful (with certain caveats).
  • Chrome's silent update mechanism allowed users of the Web browser to update faster - without requiring the user to know that updates have been applied - although the user still needs to restart the browser for the update to take effect.
I think this is great work that Thomas and Stefan have done. Their findings not only back up the importance of improving security updating features in software, but also provide the verification of which systems tend to work better.

There's still work to be done though. Patching the Web browser in a prompt and reliable fashion is a critical element in improving desktop security - but it's not the only one. I'd place plug-in patching at the same level (if not a notch or two higher) on the criticality scale.

I'd like to see Google or Mozilla take the lead in enforcing a similar method of patching for all plug-ins accessible via their Web browser technologies - either silently patching those plug-ins or prompting users to patch immediately and, if the plug-in isn't patched, disabling its use within the Web browser until it is updated.