Wednesday, December 11, 2013

Consumer Antivirus Blogs

OK, I give up, what's up with all the blog sites run by the antivirus vendors - in particular the consumer-level antivirus products? Every day they post essentially the same damned blog entries. What is the purpose of those blogs?

You know the blogs I mean. Day-in, day-out, 20+ antivirus companies post the same mind-numbing blog entries covering their dissection of their latest "interesting" piece of malware or phishing campaign. The names of the malware change, but it's the same blow-by-blow step-through of another boring piece of malware, with the same dire warnings that you need new detection signatures and offering up the same recommendations readers should follow to protect themselves.

I guess my question is "after a decade producing essentially the same blogs each day, who the hell do they think they're writing these blogs for?" I'm pretty damned sure that the end user isn't reading them the day they're churned out. I'm guessing that a couple hundred die-hard information security folks around the world have configured their RSS readers to download each day's worth of postings - but I'm pretty sure that only a tiny fraction of those guys actually read more than a handful of the entries on any day.

I suspect that most of the blog entries are merely mandated responses to some marketing initiative to generate new content for the websites and help maintain a certain SEO presence. The content doesn't really matter. It's not like real people are using the data in a meaningful way... it's for the machines.

I sometimes wonder if half the malware or phishing blog entries are completely made up. They may as well be generated by an automated routine for all the value they contribute to the community - let alone the actions they initiate for the end users of the vendors' products.

I'm sure some marketing weenie or sub-junior malware analyst is prepared to justify the two-hour investment in writing and posting the blog... and how someday someone will search for that particular piece of malware and the details will be online surrounded with their branded material and links back to the download page for the product... but come on, is the effort worth it? The antivirus vendor blogs would be a thousand times more interesting if they posted a photo of Grumpy Cat every day... it could even be the same photo every day...

Can we stop this farce, this joke, this waste of time that is the daily postings of yet another totally uninteresting piece of malware or phishing email and stop pretending it's news?

Saturday, December 7, 2013

Divvy Up the Data Breach Fines

There are now a bunch of laws that require companies to publicly disclose a data breach and provide guidance to the victims associated with the lost data. In a growing number of cases there are even fines to be paid for very large, or very public, or very egregious data breaches and losses of personal information.

I often wonder what happens to the money once the fines have been paid. I'm sure there's some formula or stipulation as to how the monies are meant to be divided up and to which coffers they're destined to fill. But, apart from paying for the bodies that brought forth the case for a fine, is there any consistency to where the money goes and, more to the point, does that money get applied to correcting the problem?

In some cases I guess the fine(s) are being used to further educate the victims on how to better protect themselves, or to go towards third-party credit monitoring services. But come on, apart from a stinging slap on the wrist for the organization that was breached, do these fines actually make us (or anyone) more secure? In many cases the organization that got breached is treated like the villain - it was their fault that some hackers broke in and stole the data (it reminds me a little of the "she dressed provocatively, so deserved to be raped" arguments). I fail to see how the present "make 'em pay a big fine" culture helps to prevent the next one.

A couple of years ago during some MAAWG conference or other, I remember hearing a tale of how Canada was about to bring out a new law affecting the way fines were actioned against organizations that had suffered a data breach. I have no idea whether these proposals were happening, about to happen, or were merely wishful thinking... but the more I've thought on the topic, the more I find myself advocating their application.

The change I envisage in the way organizations are fined for data breaches is very simple. Fine them more heavily than we do today - however, half of the fine goes back to the breached company and must be used within 12 months to increase the information security of the company. There... it's as simple as that. Force the breached organizations to spend their money making their systems (and therefore your and my personal data) more secure!

Yes, the devil is in the detail. Someone needs to define precisely what that money can be spent on in terms of bolstering security - but I'm leaning towards investments in technology and the third-party elbow-grease to setup, tune, and make it hum.

I can see some folks saying "this is just a ploy to put more money in the security vendors' pockets!". If it's a ploy, it's hardly very transparent of me, is it? No, these organizations are victims of data breaches because their attackers are better prepared, more knowledgeable, and more sophisticated than their victims. The organizations paying the fine would need to be smart about how they (forcibly) spend their money - or they'll suffer again at the hands of their attackers and just have to pay more, and make wiser investments the second time round.

I've dealt with way too many of these breached organizations in my career. The story is the same each time. The IT departments know (mostly) what needs to be done to make their business more secure, but an adequate budget has never been forthcoming. A big data breach occurs, the company spends triple what they would have spent securing it in the first place doing forensics to determine the nature and scope of the data breach, they spend another big chunk of change on legal proceedings trying to protect themselves from lawsuits and limit liabilities and future fines, and then get lumbered with a marginal fine. The IT department gets a dollop of lucre to do the minimum to prevent the same attack from happening again, and they're starved again until the next data breach.

No, I'd much sooner see the companies being fined more heavily, but with half of that wrist-slapping money being forcibly applied to securing the organization from future attacks and limiting the scope for subsequent data breaches. I defy anyone to come up with a better way of making these organizations focus on their security problems and reduce the likelihood of future data breaches.

-- Gunter Ollmann

Friday, December 6, 2013

The CISSP Badge of Security Competency

It can be a security conference anywhere around the world and, after a few beers with the attendees, you can guarantee the topic of CISSP will come up. Very rarely will it be positive. You see, CISSP has become the cockroach of the security community and it just won't die. They say that cockroaches could survive a nuclear winter... I'm pretty sure CISSP is just as resilient.

Personally, I think CISSP gets an unfair hearing. I don't see CISSP as a security competency certification (regardless of those folks who sell it or perceive it as such), rather I interpret it like a badge on a Girl Scout's sash that signifies completion of a rote task... like learning how to deliver CPR. It's a certification that reflects an understanding of the raw concepts and vocabulary, not a measure of competency. Just like the Girl Scout with the CPR badge has the potential to be a competent medic in the future, for now it's a "well done, you understand the concepts" kind of deal.

If that's the case, then why, as security professionals, aren't practitioners lining up to gain their own CISSP accreditation? In a large way, it's a bit like requiring that aforementioned (and accomplished) professional medic to sit the Girl Scout CPR exam and to proudly show off her new badge afterwards. To many folks, both scenarios are likely to be interpreted as an insult. I think this is one of the reasons why the professional security practitioner community is so against CISSP (and other security accreditations) - and causes the resultant backlash. The fact that many businesses are now asking for CISSP qualification as part of their recruitment vetting processes just rubs salt into the wounds.

I see the CISSP certification as a great program for IT professionals (web developers, system administrators, backup operators, etc.) in order to gain the minimum level of understanding of what security means for them to do their jobs.

Drawing once again from the CPR badge analogy, I think that everyone who works in an office should do a first aid course and be competent in CPR. It just makes sense to have that basic understanding available in a time of need. However, the purpose of gaining those skills is to keep the patient alive until a professional can arrive and take over. This is exactly how I see CISSP operating in modern IT departments.

I think that if CISSP were positioned more appropriately as an "IT health" badge of minimum competency, then much of the backlash from the security community would die down.

-- Gunter Ollmann

Thursday, June 20, 2013

FDA Safety Communication for Medical Devices

The US Food and Drug Administration (FDA) released an important safety communication targeted at medical device manufacturers, hospitals, medical device user facilities, health care IT and procurement staff, and biomedical engineers, in which it warns of the risk of failure due to cyberattack – such as through malware or unauthorized access to configuration settings in medical devices and hospital networks.

Have you ever been to view a much anticipated movie based upon an exciting book you happened to have read when you were younger, only to be sorely disappointed by what the director finally pulled together on the big screen? Well that’s how I feel when I read this newest alert from the FDA. Actually it’s not even called an alert… it’s a “Safety Communication”… it’s analogous to Peter Jackson deciding that his own interpretation of JRR Tolkien’s ‘The Hobbit’ wasn’t really worthy of the title so to forestall criticism he named the movie ‘Some Dwarves and a Hobbit do Stuff’.

This particular alert (and I’m calling it an alert because I can’t lower myself to call it a safety communication any longer) is a long time coming. Almost a decade ago my teams and I raised the red flag over the woeful security of hospital networks. Back in 2005 my then research teams raised new red flags related to the encroachment of unsecured WiFi into medical equipment. For the last couple of years IOActive’s research team has been raising new red flags over the absence of security within implantable medical devices. And then on June 13th 2013 the FDA released a much watered-down alert whose primary recommendations and actions section simply states “[m]any medical devices contain configurable embedded computer systems that can be vulnerable to cybersecurity breaches”. It’s as if the hobbit has been interpreted as a midget with hairy feet.

Yes I joke a little, but I am very disappointed with the status of this alert covering an important topic.

The vulnerabilities being uncovered on a daily basis within hospital networks, medical equipment and implantable devices by professional security teams and researchers are generally more serious than outsiders give them credit for. Much of the public cybersecurity discussion as it relates to the medical field to date has been about people hacking hospital data systems for patient records and, most recently, the threat of targeted slayings of people who happen to have vulnerable implanted insulin pumps and heart defibrillators. While both are certainly possible, they’re what I would associate with fringe events.

I believe that the biggest and most likely threats lie in non-malicious actors – the tinkerers, the cyber-crooks, and the “in the wrong place at the wrong time” events. These medical systems are so brittle that even the slightest knock or tire-kicking can cause them to fail. I’ll give you some examples:

  • Wireless heart and drug monitoring stations within emergency wards that have open WiFi connections; where anyone with an iPhone searching for an Internet connection can make an unauthenticated connection and have their web browser bring up the admin portal of the station.
  • Remote surgeon support and web camera interfaces used for emergency operations brought down by everyday botnet malware because someone happened to surf the web one day and hit the wrong site.
  • Internet auditing and scanning services run internationally and encountering medical devices connected directly to the Internet through routable IP addresses – being used as drop-boxes for file sharing groups (oblivious to the fact that it’s a medical device under their control).
  • Common WiFi and Bluetooth auditing tools (available for android smartphones and tablets) identifying medical devices during simple “war driving” exercises and leaving the discovered devices in a hung state.
  • Medical staff’s iPads without authentication or GeoIP-locking of hospital applications that “go missing” or are borrowed by kids and have applications (and games) installed from vendor markets that conflict with the use of the authorized applications.
  • NFC from smartphones and payment systems that can record, play back and interfere with the communications of implanted medical devices.

These are really just the day-to-day noise of an Internet connected life – but one that much of the medical industry is currently ill prepared to defend against. Against an experienced attacker or someone determined to cause harm – well, it’s as one sided as a lone hobbit versus the combined armies of Middle Earth.

I will give the alert some credit though; it did clarify a rather important point that may have been a stumbling block for many device vendors in the past:

“The FDA typically does not need to review or approve medical device software changes made solely to strengthen cybersecurity.”

IOActive’s experience when dealing with a multitude of vulnerable medical device manufacturers had often been disheartening in the past. A handful of manufacturers have made great strides in securing their devices and controlling software recently – and there has been a change in the hearts and minds over the last 6 months (pun intended) as more publicity has been drawn to the topic. The medical clients we’ve been working most closely with over recent months have made huge leaps in making their latest devices more secure, and their next generation of devices will be setting the standard for the industry for years to come.

In the meantime though, there’s a tremendous amount of work to be done. The FDA’s alert is significant. It is a formal recognition of the poor state of security within the industry – providing some preliminary guidance. It’s just not quite the call to arms I’d have liked to see after so many years – but I guess they don’t want to raise too much fear, nor the ire of vendors that could face long and costly FDA re‑evaluations of their technologies. Gandalf would be disappointed.

(BTW I actually liked Peter Jackson’s rendition of The Hobbit).

-- Gunter Ollmann

First Published: IOActive Blog - June 20, 2013

Wednesday, May 29, 2013

Security 101: Machine Learning and Big Data

The other week I was invited to keynote at the ISSA CISO Forum on Incident Response in Dallas, and in the weeks prior to it I was struggling to decide upon what angle I should take. Should I be funny, irreverent, diplomatic, or analytical? Should I plaster slides with the last quarter’s worth of threat statistics, breach metrics, and headline news? Should I quip some anecdote and hope the attending CISOs would have an epiphany that would fundamentally change the way they secure their organizations?

In the end I did none of that… instead I decided to pull apart the latest batch of security buzzwords – “Big Data” and “Machine Learning”.

If you attended RSA USA (or any major security vendor/booth conference) this year you can’t have missed the fact that everything from antivirus through to USB memory sticks now comes with a dab of big data, a sprinkling of machine learning, and a dollop of cloud for good measure. Thankfully I’m a cynic; or else I’d have been thrashing around on the ground in an epileptic fit from all the flashy marketing claims and trademarked nonsense phrases.

I guess I’m lucky to be in the position of having had several years of exposure to some of the greatest minds at Georgia Tech as they drummed into me on a daily basis the “what and how” of machine learning in the context of solving many of today’s toughest security problems.

So, it was with that in mind that I thought “If I’m a CISO and everything I know about machine learning and big data came from carefully rehearsed vendor sound bites and glossy pamphlets, would I be able to tell the difference between Chanel #5 and cow manure?” The obvious answer would result in some very happy farmers.

What was the net result of this self-reflection and impending deadline? I crafted a short presentation for CISOs… a 101 course on machine learning and big data… and it included ducks.

If you’re in the upper tiers of your organization and you’ve had sales folks pimping you their latest cloud-infused, big data-munching, machine learning, and world-hunger-solving security solution, please carry on reading as I attempt to explain the basics of the latest and greatest in buzzwords…

First of all – some context! In the world of breach detection and incident response there’s a common idiom: “If it walks like a duck, flies like a duck, and quacks like a duck… it must be a duck.”


Now I could easily spend another 5,000 words explaining why such an idiom doesn’t apply to modern security threats, targeted attacks and advanced persistent threats, but you’ll have to wait for a different blog post. Rather, for this 101 lesson, it’s important to understand the concept of “Feature Selection” – which in the case of this idiom includes: walking, flying and quacking.

If you’ve been tasked with dealing with a duck problem, ideally you’d be an aficionado on the feet, wings and sounds of ducks. You’d be able to apply this knowledge to each bird you have the time to focus your attention on and make a determination: Duck, or Not a Duck. As a security professional, you’d be versed in the various attributes of certain threats – and able to derive a conclusion as to the nature of the security problem.

The problem though is that at scale things break down.


What do you do when there are too many to analyze, when time is too short, and when you can’t make out all the duck features from afar? This is typical of the big data problem (and your everyday network traffic). Storing the data is the easy part. Retrieving the data is mechanically complicated, but solvable.

Meanwhile, making decisions and taking actions upon the data is typically the most difficult part. With every doubling of data, your problem grows exponentially.


The traditional method of dealing with the situation has been to employ signature matching systems. In essence, we build rules based upon the features we’ve previously identified as significant and capable of bounding the problem (or duck selection). We then compare these rules against the sample animal and receive a binary answer – Duck, or Not a Duck.
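To make that a little more concrete, here’s a minimal sketch of the signature approach in Python – the feature names and rules are invented purely for illustration and aren’t drawn from any real product:

```python
# Minimal signature-matching sketch (illustrative only - not any vendor's rule format):
# hard-coded feature rules, binary verdict.
def matches_duck_signature(sample: dict) -> bool:
    """Return True only if every hard-coded feature rule is satisfied."""
    rules = {
        "walks":  lambda v: v == "waddle",
        "flies":  lambda v: v is True,
        "quacks": lambda v: v is True,
    }
    return all(check(sample.get(feature)) for feature, check in rules.items())

print(matches_duck_signature({"walks": "waddle", "flies": True, "quacks": True}))   # Duck
print(matches_duck_signature({"walks": "waddle", "flies": True, "quacks": False}))  # Not a duck
print(matches_duck_signature({"walks": "waddle", "flies": True}))                   # missing feature -> Not a duck
```

Note how brittle it is: drop or disguise a single expected feature and the verdict flips – which is exactly the evasion problem described next.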

Signature systems can be very efficient at classification. Just look at your average Intrusion Prevention System (IPS). A problem though lies in the scope of the features that had been defined.

If those features (or parameters) used for classification are too narrow (or too broad) then evasion is not only probable, but guaranteed. In essence, for a threat (or duck) to be classified, it must have been observed in the past or carefully predicted (although rare).

From an attacker’s perspective, knowledge of those features and triggering parameters makes it a trivial task to evade or to conduct false flag operations. Just think – hunters do this all the time with their floating duck decoys. Even fellow duck hunters have been known to mistakenly take pot-shots at them too.

Switching pace a little, let’s take a look at the network itself.

The big green blob is all the network traffic for an organization for a week. The red blob right-of-center is traffic associated with an active breach, and the lighter red blob with the dotted lines is just general malicious traffic observed within the network. In this two-dimensional view (if I hadn’t color-coded it previously) you’d have a near impossible task differentiating between them. As it is, the malicious traffic is mixed with both the “safe” and “breach” traffic.

The trick in differentiating between the network traffic types lies in increasing the dimensionality of the problem. What was a two-dimensional blob suddenly becomes much clearer when an appropriate view or perspective to the data is added. In the context of the above diagram, the addition of a z-axis and an extension in to the third-dimension allows the observer (i.e. analyst) to easily differentiate between the traffic types – for example, the axis could represent “country code of destination IP address”. In this context, the appropriate feature selection can greatly simplify the detection problem. Choosing appropriate features is important – nay, it’s critical!
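As a toy illustration of that point (synthetic data only – the features and numbers below are made up), two traffic classes that are indistinguishable on two features can separate perfectly once a third, well-chosen feature is added:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two traffic classes that overlap completely in the first two features
# (say, bytes transferred and connection duration)...
safe   = rng.normal(0, 1, (n, 2))
breach = rng.normal(0, 1, (n, 2))

# ...but separate cleanly once a third feature is added - a stand-in for
# something like "destination country code" (0 = expected, 1 = unexpected).
safe   = np.column_stack([safe,   np.zeros(n)])
breach = np.column_stack([breach, np.ones(n)])

# In two dimensions the class means are indistinguishable; the third axis splits them.
print(np.abs(safe[:, :2].mean(axis=0) - breach[:, :2].mean(axis=0)))  # both ~0
print(abs(safe[:, 2].mean() - breach[:, 2].mean()))                   # exactly 1.0
```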

This is where advances in machine learning over the last half-decade have really come to the fore in computer science and more recently in information security.

Without getting into any of the math behind the various machine learning algorithms or techniques, the key concept you need to understand is “training”. It can mean many things to many a mathematician, but since we’re likely not one of those, what training means in our context is that we already have samples of what we’re going to be looking for, and samples of things we know we’re definitely not interested in. The better we define and populate these training sets, the more precise the machine learning system we’re employing will be in differentiating between them – and potentially classifying other contenders.

So, in this example we’ve taken a bunch of ducks and grouped them together. They become our “+ve class” – which basically means these are the things we’re interested in. But, equally important, is our “-ve class” – our collection of things we know not to be ducks. In many cases our -ve class may be more important than our +ve class because it contains all those false positives and “nearly” ducks – the things that may have caught us out once before.

One function of a good machine learning system is to automatically determine which attributes make the most sense in differentiating between your +ve and -ve classes.

While our poor old hunter (or analyst) was working with three features – walks, flies, and quacks – the computer-based system may have reviewed all the attributes that were available and determined which ones are the most useful in differentiating between “ducks” and “not ducks”. In many cases the system will have weighted the various features (or attributes) to indicate which features are more deterministic of the classes.

For example, texture may be a good indicator of “not a duck” – since none of the +ve class were made from plastic or wood. Meanwhile a feature such as “wing length” may not be such a good criterion and will be weighted in a way to not have an influence on determining whether a duck is a duck or not – or may be dropped by the system entirely.
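For the curious, here’s roughly what that looks like in practice – a hedged sketch using scikit-learn on made-up duck data (the feature names, distributions, and class sizes are all invented for illustration). The classifier is trained on the +ve and -ve sets and then reports how heavily it weighted each feature:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
features = ["walks", "flies", "quacks", "texture", "wing_length"]

def make_samples(n: int, p_behaviour: float, texture: int) -> np.ndarray:
    return np.column_stack([
        rng.random((n, 3)) < p_behaviour,   # walks / flies / quacks (noisy booleans)
        np.full(n, texture),                # texture: 1 = organic, 0 = plastic or wood
        rng.normal(25, 5, n),               # wing_length: deliberately uninformative
    ]).astype(float)

ducks     = make_samples(200, 0.9, texture=1)   # +ve class
not_ducks = make_samples(200, 0.5, texture=0)   # -ve class: decoys, rubber duckies, geese

X = np.vstack([ducks, not_ducks])
y = np.array([1] * 200 + [0] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Which features did the model find most deterministic of the classes?
for name, weight in sorted(zip(features, clf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:12s} {weight:.2f}")   # expect texture near the top, wing_length near zero
```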

The number of features reviewed, assessed and weighted by the machine learning system will eventually be determined by the type of data being used and how “rich” it is. For example, in the network security realm we may be feeding the system with collated samples of firewall logs, DNS traffic samples, IP blacklists/whitelists, IPS alerts, etc. It’s important to note though that the “richer” the data set (i.e. the more possible features there could be), the more complex the problem is for the computer to solve and the longer it’ll take to train the system.

Now let’s switch back to that “big data” we originally talked about. In the duck realm we’re talking about all the birds within a national park (say). Meanwhile, in the network security realm, we may be talking about all the traffic observed in real-time across a corporate network and all the alerting instrumentation (e.g. firewalls, IPS, etc.)

I’m going to do some hand-waving here because it can get rather complex and there’s a lot of proprietary tweaks that can be undertaken here… but in one representation we can get our trained system to automatically group and cluster events on our network.

Using our original training data, or some other previously labeled datasets, it’s possible to automatically label the clusters output by a machine learning system.

For example, in the graphic above we see a number of unique clusters (or blobs if you insist). Through automatic labeling we know that the green blobs are types of ducks, the red blobs are various groupings of not ducks, and the gray blobs are previously unknown or unlabeled clusters – each one mathematically distinct from the others – based upon the features the system chose.

What the system can also do is assign a probability that the unknown clusters are associated with our +ve or -ve training sets. For example, in this particular graphical representation the proximity of the unlabeled clusters to labeled (and classified) clusters allows the system to assign a probability of whether the cluster is a duck or not a duck – even though the system had never seen these things before (i.e. “birds” the system hasn’t encountered before).
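A rough sketch of those last two steps – cluster everything, label the clusters that contain known samples, then score the unknown clusters by their proximity to labeled ones. The two-feature data and the crude distance-ratio “probability” are purely illustrative; a real system would work in far more dimensions with proper probability estimates:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

known_ducks     = rng.normal([2.0, 2.0],   0.3, (50, 2))   # labeled +ve samples
known_not_ducks = rng.normal([-2.0, -2.0], 0.3, (50, 2))   # labeled -ve samples
unknown_birds   = rng.normal([1.5, 1.0],   0.3, (50, 2))   # never seen before

X = np.vstack([known_ducks, known_not_ducks, unknown_birds])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

duck_centre     = known_ducks.mean(axis=0)
not_duck_centre = known_not_ducks.mean(axis=0)

for cluster_id, centre in enumerate(km.cluster_centers_):
    d_duck     = np.linalg.norm(centre - duck_centre)
    d_not_duck = np.linalg.norm(centre - not_duck_centre)
    p_duck = d_not_duck / (d_duck + d_not_duck)   # crude proximity score, not a true probability
    print(f"cluster {cluster_id}: P(duck) ~ {p_duck:.2f}")
```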

The next (and near final) stage is to manually label these new clusters. For example, we ask an ornithologist to look at each cluster of “ducks” and “not ducks” in turn and to label them… “rubber duckies”, “robot duckies”, and “Madagascar mallard ducks”.

Then, to improve our machine learning system further, we add these newly labeled clusters to our +ve and -ve training sets… and the system continues to learn and become more precise over time.

In addition, since we’ve now labeled these clusters, in the future we’re able to automatically flag new additions to these clusters and correctly label the duck (or threat).

And, if we’re a really smart CISO, we can use this clustering system (and labeled clusters) to automatically alert us to new threats or to initiate automatic network security actions – e.g. enable blocking of a new malicious URL, stop blocking a new cloud service delivering official updates to applications, etc.
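A final sketch of how that last step might hang together – the cluster centres, labels, and the “block rule” action are all placeholders for whatever your clustering output and enforcement points actually look like:

```python
import numpy as np

# Cluster centres and labels as they might come out of the earlier clustering step.
cluster_centres = np.array([[2.0, 2.0], [-2.0, -2.0], [1.5, 1.0]])
cluster_labels  = ["benign CDN traffic", "malicious C2 beaconing", "unknown"]

def handle_event(event: np.ndarray) -> None:
    nearest = int(np.argmin(np.linalg.norm(cluster_centres - event, axis=1)))
    label = cluster_labels[nearest]
    if "malicious" in label:
        print(f"ALERT: event matches '{label}' - push a block rule (placeholder action)")
    elif label == "unknown":
        print("Queue for analyst review and future labeling")
    else:
        print(f"No action: event matches '{label}'")

handle_event(np.array([-1.8, -2.1]))   # lands in the malicious cluster -> alert
```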

The application of machine learning techniques to the toughest security problems facing business today has come along in leaps and bounds recently. However, as with any buzzword that falls into the hands of marketers and gets manipulated until it squeaks and glitters, or oozes onto every product in this year’s price list, senior technical staff need to take added care not to be bamboozled by well-practiced but meaningless word salad.

A little understanding of the concepts behind big data and machine learning can not only cut through the latest batch of sales promises, but can also form the basis of constructing a new generation of meaningful breach detection and incident response solutions.

-- Gunter Ollmann

Original Publication: IOActive Blog - May 29, 2013

Tuesday, May 7, 2013

Bypassing Geo-locked BYOD Applications

In the wake of increasingly lenient BYOD policies within large corporations, there’s been a growing emphasis upon restricting access to business applications (and data) to specific geographic locations. Over the last 18 months more than a dozen start-ups in North America alone have sprung up seeking to offer novel security solutions in this space – essentially looking to provide mechanisms for locking application usage to a specific location or distance from an office, and ensuring that key data or functionality becomes inaccessible outside these prescribed zones.

These “Geo-locking” technologies are in hot demand as organizations try desperately to regain control of their networks, applications and data.

Over the past 9 months I’ve been asked by clients and potential investors alike for advice on the various technologies and the companies behind them. There’s quite a spectrum of available options in the geo-locking space; each start-up has a different take on the situation and has proposed (or developed) a unique way in tackling the problem. Unfortunately, in the race to secure a position in this evolving security market, much of the literature being thrust at potential customers is heavy in FUD and light in technical detail.

It may be because marketing departments are riding roughshod over the technical folks in order to establish these new companies, but in several of the solutions being proposed I’ve had concerns over the scope of the security element being offered. It’s not because the approaches being marketed aren’t useful or won’t work, it’s more because they’ve defined the problem they’re aiming to solve so narrowly that they’ve developed what I could only describe as tunnel-vision to the spectrum of threats organizations are likely to face in the BYOD realm.

In the meantime I wanted to offer this quick primer on the evolving security space that has become BYOD geo-locking.

Geo-locking BYOD

The general premise behind the current generation of geo-locking technologies is that each BYOD gadget will connect wirelessly to the corporate network and interface with critical applications. When the device is moved away from the location, those applications and data should no longer be accessible.

There are a number of approaches, but the most popular strategies can be categorized as follows:

  • Thick-client – A full-featured application is downloaded to the BYOD gadget and typically monitors physical location elements using telemetry from GPS or the wireless carrier directly. If the location isn’t “approved” the application prevents access to any data stored locally on the device.
  • Thin-client – a small application or driver is installed on the BYOD gadget to interface with the operating system and retrieve location information (e.g. GPS position, wireless carrier information, IP address, etc.). This application then incorporates this location information into requests to access applications or data stored on remote systems – either through another on-device application or over a Web interface.
  • Share-my-location – Many mobile operating systems include opt-in functionality to “share my location” via their built-in web browser. Embedded within the page request is a short geo-location description.
  • Signal proximity – The downloaded application or driver will only interface with remote systems and data if the wireless channel being connected to by the device is approved. This is typically tied to WiFi and nanocell routers with unique identifiers and has a maximum range limited to the power of the transmitter (e.g. 50-100 meters).

The critical problem with the first three geo-locking techniques can be summed up simply as “any device can be made to lie about its location”.

The majority of start-ups have simply assumed that the geo-location information coming from the device is correct – and have not included any means of securing the integrity of that device’s location information. A few have even tried to tell customers (and investors) that it’s impossible for a device to lie about its GPS location or a location calculated off cell-tower triangulation. I suppose it should not be a surprise though – we’ve spent two decades trying to educate Web application developers to not trust client-side input validation and yet they still fall for web browser manipulations.
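To picture what is actually being trusted here, this is a rough sketch of a thin-client style agent attaching its own reported coordinates to an application request – the endpoint URL, payload fields, and get_gps_fix() helper are all hypothetical, and it assumes the Python requests library. Everything in that location object is asserted by the device itself:

```python
# Sketch of a thin-client agent reporting its own location with each request.
# Endpoint and payload shape are invented; get_gps_fix() stands in for whatever
# OS location API (GPS, cell towers, WiFi survey) the agent would actually call.
import requests

def get_gps_fix() -> dict:
    return {"lat": 47.6062, "lon": -122.3321, "source": "gps"}

def fetch_records(session_token: str) -> requests.Response:
    payload = {"token": session_token, "location": get_gps_fix()}
    # The server is expected to allow or deny access based on payload["location"].
    return requests.post("https://byod.example.org/api/records", json=payload, timeout=10)
```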

A quick search for “fake location” on the Apple and Android stores will reveal the prevalence and accessibility of GPS fakery. Any other data being reported from the gadget – IP address, network MAC address, cell-tower connectivity, etc. – can similarly be manipulated. In addition to manipulation of the BYOD gadget directly, alternative vectors that make use of private VPNs and local network jump points may be sufficient to bypass thin-client and “share-my-location” geo-locking application approaches.
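If the server has to consume a client-reported location anyway, the least it can do is sanity-check the claim against something it observes itself. The sketch below – with a stubbed-out GeoIP lookup and a made-up distance threshold – compares the claimed coordinates against the geolocation of the connecting IP address. Note that a VPN or jump point terminating inside the approved region defeats even this check, which is exactly the bypass described above:

```python
# Server-side sanity check: compare the client's claimed coordinates against a
# GeoIP estimate of the connecting address. lookup_geoip() is a stub standing in
# for whatever GeoIP database/service is available; the 200 km threshold is illustrative.
from math import radians, sin, cos, asin, sqrt

def lookup_geoip(ip: str) -> tuple[float, float]:
    return (47.61, -122.33)   # placeholder: in practice, query a GeoIP database

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def location_claim_plausible(claimed: tuple[float, float], source_ip: str) -> bool:
    # Reject claims sitting hundreds of km from where the traffic appears to originate.
    # A VPN exit node inside the approved region still defeats this.
    return haversine_km(claimed, lookup_geoip(source_ip)) < 200

print(location_claim_plausible((47.6062, -122.3321), "203.0.113.7"))  # True
print(location_claim_plausible((40.7128, -74.0060), "203.0.113.7"))   # False - claim is ~3,900 km away
```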

That doesn’t mean that these geo-locking technologies should be considered unicorn pelts, but it does mean that organizations seeking to deploy these technologies need to invest some time in determining the category of threat (and opponent) they’re prepared to combat.

If the worst case scenario is a nurse losing a hospital iPad and an inept thief trying to access patient records from another part of the city, then many of the geo-locking approaches will work quite well. However, if the scenario is that of a tech-savvy reporter paying the nurse for access to the hospital iPad and prepared to install a few small applications that manipulate the geo-location information in order to remotely access celebrity patient records… well, then you’ll need a different class of defense.

Given the rapid evolution of BYOD geo-locking applications and the number of new businesses offering security solutions in this space, my advice is two-fold – determine the worst case scenarios you’re trying to protect against, and thoroughly assess the technology prior to investment. Don’t be surprised if the marketing claims being made by many of these start-ups are a generation or two ahead of what the product is capable of performing today.

Having already assessed or reviewed the approaches of several start-ups in this particular BYOD security realm, I believe some degree of skepticism and caution is warranted.

-- Gunter Ollmann

First Published: IOActive Blog - May 7, 2013

Tuesday, April 30, 2013

Fact or Fiction: Is Huawei a Risk to Critical Infrastructure?

How much of a risk does a company like Huawei or ZTE pose to U.S. national security? It’s a question that’s been on many people’s lips for a good year now. Last year the U.S. House of Representatives Permanent Select Committee on Intelligence warned American companies to “use another vendor”, and earlier in that year the French senator and former defense secretary Jean-Marie Bockel recommended a “total prohibition in Europe of core routers and other sensitive IT equipment coming from China.” In parallel discussions, the United Kingdom, Australia and New Zealand (to name a few) have restricted how Huawei operates within their borders.

Last week Eric Xu – executive vice-president and one of the triumvirate jointly running Huawei – stunned analysts when he told them that Huawei was “not interested in the U.S. market any more.”

Much of the analysis has previously focused upon Huawei’s sizable and influential position as the world’s second largest manufacturer of network routers and switching technology – a critical ingredient for making the Internet and modern telecommunications work – and the fact that it is unclear as to how much influence (or penetration) the Chinese government has in the company and its products. The fear is that at any point now or in the future, Chinese military leaders could intercept or disrupt critical telecommunications infrastructure – either as a means of aggressive statecraft or as a component of cyber warfare.

As someone who’s spent many years working with the majority of the world’s largest telecommunication companies, ISPs, and cable providers, I’ve been able to observe firsthand the pressure being placed upon these critical infrastructure organizations to seek alternative vendors and/or replace any existing Huawei equipment they may already have deployed. In response, many of the senior technical management and engineers at these organizations have reached out to me and to IOActive to find out how true these rumors are. I use the term “rumor” because, while reports have been published by various government agencies, they’re woefully lacking in technical details. You may have also seen François Quentin (chairman of the board of Huawei France) claiming that the company is a victim of “rumors”.

In the public technical arena, there are relatively few bugs and vulnerabilities being disclosed in Huawei equipment. For example, if you search for CVE indexed vulnerabilities you’ll uncover very few. Compared to the likes of Cisco, Juniper, Nokia, and most of the other major players in routing and switching technology, the number of public disclosures is minuscule. But this is likely due to a few of the following reasons:

  • The important Huawei equipment isn’t generally the kind of stuff that security researchers can purchase off eBay and poke around with at home for a few hours in a quest to uncover new bugs. They’re generally big-ticket items. (This is the same reason why you’ll see very few bugs publicly disclosed in Cisco’s or Nokia’s big ISP-level routers and switches).
  • Up until recently, despite Huawei being such a big player internationally, they haven’t been perceived as such by the English-speaking security research community – so have traditionally garnered little interest from bug hunters.
  • Most of the time when bugs are found and exploitable vulnerabilities are discovered, they occur during a paid-for penetration test or security assessment, and therefore those findings belong to the organization that commissioned the consulting work – and are unlikely to be publicly disclosed.
  • Remotely exploitable vulnerabilities that are found in Huawei equipment by independent security researchers are extremely valuable to various (international) government agencies. Any vulnerability that could allow someone to penetrate or eavesdrop at an international telecommunications carrier-level is worth big bucks and will be quickly gobbled up. And of course any vulnerability sold to such a government agency most certainly isn’t going to be disclosed to the vulnerable vendor – whether that be Huawei, Cisco, Juniper, Nokia, or whatever.

What does IOActive know of bugs and exploitable vulnerabilities within Huawei’s range of equipment? Quite a bit obviously – since we’ve been working to secure many of the telecommunications companies around the world that have Huawei’s top-end equipment deployed. It’s obviously not for me to disclose vulnerabilities that were uncovered on the dime of an IOActive client, however many of the vulnerabilities we’ve uncovered during tests have given great pause to our clients as remedies are sought.

Interestingly enough, the majority of those vulnerabilities were encountered using standard network discovery techniques – which to my mind is just scratching the surface of things. However, based upon what’s been disclosed in the aforementioned government reports over the last year, that was probably their level of scrutiny too. Digging deeper into the systems reveals more interesting security woes.

Given IOActive’s expertise, history, and proven capability in hardware hacking, I’m certain that we’d be able to uncover a whole host of different and more significant security weaknesses in these critical infrastructure components for clients that needed that level of work done. To date, IOActive’s focus has been on in-situ analysis – typically assessing the security and integrity of core infrastructure components within live telco environments.

I’ve heard several senior folks talk of their fears that even with full access to the source code, that wouldn’t be enough to verify the integrity of Chinese network infrastructure devices. For a skillful opponent, that is probably so, because they could simply hide the backdoors and secret keys in the microcode of the devices’ semiconductor chips.

Unfortunately for organizations that think they can hide such critical flaws or backdoors at the silicon layer, I’ve got a surprise for you. IOActive already has the capability to strip away the layers of logic within even the most advanced and secure microprocessor technologies out there and recover the code and secrets that have been embedded within the silicon itself.

So, I’d offer a challenge out there to the various critical infrastructure providers, government agencies, and to manufacturers such as Huawei themselves – let IOActive sort out the facts from the multitude of rumors. Everything you’ve probably been reading is hearsay.

Who else but IOActive can assess the security and integrity of a technology down through the layers – from the application, to the drivers, to the OS, to the firmware, to the hardware and finally down to the silicon of the microprocessors themselves? Exciting times!

-- Gunter Ollmann

First Published: IOActive Blog - April 30, 2013

Monday, March 25, 2013

Tales of SQLi

As attack vectors go, very few are as significant as obtaining the ability to insert bespoke code into an application and have it automatically execute upon “inaccessible” backend systems. In the Web application arena, SQL Injection vulnerabilities are often the scariest threat that developers and system administrators come face to face with (albeit way too regularly). In fact the OWASP Top-10 list of Web threats lists SQL Injection in first place.

More often than not, when security professionals discuss SQL Injection threats and attack vectors, they focus upon the Web application context. So it was with a bit of fun last week that I came across a photo of a slightly unorthodox SQL Injection attempt – that of someone attempting to subvert a traffic monitoring system by crafting a rather novel vehicle license plate.

My original tweet got retweeted a couple of thousand times – which just goes to show how many security nerds there are out there in the twitterverse.

The “in the wild” SQL Injection attempt was based upon the premise that video cameras are actively monitoring traffic on a road, reading license plates, and issuing driver warnings, tickets or fines as deemed appropriate by local law enforcement.

At some point the video captures of the passing vehicle’s license plate must be converted to text and stored – almost certainly in some kind of backend database. The hope of the hacker that devised this attack was that the process would be vulnerable to SQL Injection – so they crafted a simple SQL statement that could potentially cause the backend database to drop (i.e. “delete”) the table containing all of the license plate information.
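For anyone wondering what the defensive side looks like, here’s a hedged sketch using SQLite – the table name, plate strings, and validation rule are all invented for illustration. It contrasts the string-building the attacker is hoping for with a whitelist check plus a parameterized insert that treats the plate text as data rather than SQL:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate TEXT, seen_at TEXT)")

# An illustrative OCR result (not the actual plate from the photo).
captured = "AB 1234'); DROP TABLE plates;--"

# Vulnerable pattern: the OCR output is pasted straight into the SQL string.
unsafe_sql = f"INSERT INTO plates (plate, seen_at) VALUES ('{captured}', datetime('now'))"
print(unsafe_sql)   # the attacker's payload is now part of the statement

# Safer pattern: enforce the physical constraint, then bind the value as a parameter.
def store_plate(db: sqlite3.Connection, plate: str) -> None:
    if not re.fullmatch(r"[A-Z0-9 -]{1,10}", plate):
        raise ValueError("not a plausible license plate - route to manual review")
    db.execute("INSERT INTO plates (plate, seen_at) VALUES (?, datetime('now'))", (plate,))

store_plate(conn, "AB 1234")            # accepted
try:
    store_plate(conn, captured)         # rejected before it ever reaches the database
except ValueError as err:
    print(err)
```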
Whether or not this particular attempt worked, I have no idea (probably not, if I had to guess an outcome); but it does help nicely to raise attention to this category of vulnerability.

As surveillance systems become more capable – digitally storing information, distilling meta-data from image captures, and sharing observation data between systems – they open many new doors for mischievous and malicious attack.

The physical nature of these systems, coupled with the complexities of integration with legacy monitoring and reporting systems, often makes them open to attacks that would be classed as fairly simple in the world of Web application security.

A common failure of system developers is to assume that the physical constraints of the data acquisition process are less flexible than they really are. For example, if you’re developing a traffic monitoring system it’s easy to assume that license plates are a fixed size and shape, and can only contain 10 alphanumeric characters. Meanwhile, the developers of the third-party image processing code had no such assumptions and will digitize any image. It reminds me a little of the story in which reuse of some object-oriented code a decade ago resulted in kangaroos firing Stinger missiles during a military training simulation.

While the image above is amusing, I’ve encountered similar problems before when physical tracking systems integrate with digital backend processes – opening the door to embarrassing and fraudulent events. For example, in the past I’ve encountered similar SQL Injection vulnerabilities within systems such as:
  • Toll booths reading RFID tags mounted on vehicle windshields – where the tag readers would accept up to 2k of data from each tag (even though the system was only expecting a 16 digit number).
  • Credit card readers that would accept pre-paid cards with negative balances – which resulted in the backend database crediting the wrong accounts.
  • RFID inventory tracking systems – where a specially crafted RFID token could automatically remove all record of the previous hours’ worth of inventory logging information from the database allowing criminals to “disappear” with entire truckloads of goods.
  • Luggage barcode scanners within an airport – where specially crafted barcodes placed upon the baggage would be automatically conferred the status of “manually checked by security personnel” within the backend tracking database.
  • Shipping container RFID inventory trackers – where SQL statements could be embedded to adjust fields within the backend database to alter Customs and Excise tracking information.

Unlike the process of hunting for SQL Injection vulnerabilities within Internet-accessible Web applications, you can’t just point an automated vulnerability scanner at the application and have at it. Assessing the security of complex physical monitoring systems is generally not a trivial task and requires some innovative approaches. Experience goes a long way.

-- Gunter Ollmann

Thursday, March 14, 2013

Credit Bureau Data Breaches

This week saw some considerable surprise over how easy it is to acquire personal credit report information.  On Tuesday Bloomberg News led with a story of how “Top Credit Agencies Say Hackers Stole Celebrity Reports”, and yesterday there were many follow-up stories examining the hack. In one story I spoke with Rob Westervelt over at CRN regarding the problems credit reporting agencies face when authenticating the person for which the credit information applies and the additional problems they face securing the data in general (you can read the article “Equifax, Other Credit Bureaus Acknowledge Data Breach”).

Many stories have focused on one of two areas – the celebrities, or the ease of acquiring credit reports – but I wanted to touch upon some of the problems credit monitoring agencies face in verifying who has access to the data and how that fits into the bigger problem of Internet-based authentication and the prevalence of personal-enough information.

The repeated failure of Internet portals tasked with providing access to personal credit report information stems from the data they have available that can be used for authentication, and the legislated requirement to make the data available in the first place.

Credit monitoring agencies are required to make the data accessible to all the individuals they hold reports on, however access to the credit report information is achieved through a wide variety of free and subscription portals – most of which are not associated with the credit monitoring bureaus in the first place.

In order to provide access to a particular individual’s credit report, the user must answer a few questions about themselves via one such portal. These questions, by necessity, are restricted to the kinds of data held (and tracked) by the credit reporting agencies – based on information garnered from other financial institutions. This information includes name, date of birth (or age), social security number, account numbers, account balances, account addresses, the financial institutions that manage the accounts, and past requests for access to credit report information. While it sounds like a lot of information, it’s actually not a very rich source for authentication purposes – especially when some of the most important information that can uniquely identify the individual is relatively easy to acquire through other external and Internet-based sources.

Time Magazine’s article “Hackers Now Aiming For Your Credit Reports” of a year ago describes many of these limitations and where some of this information can be acquired. In essence though, the data is easy to mine from social media sites and household tax records; and a little brute force guessing can overcome the hurdle of it not already being in the public domain.

The question then becomes “what can the credit monitoring agencies do to protect the privacy of credit reports?” Some commentators have recommended that individuals should provide a copy of state-issued identification documents – such as a driver’s license or passport.

The submission of such a scanned document poses new problems for the credit monitoring agencies. First of all, this probably isn’t automatable on a large scale and they’ll need trained staff to review each of these documents. Secondly, there are plenty of tools and websites that allow you to generate a fake ID within seconds – and spotting the fakes will be extremely difficult without tying the authentication process to an external government authentication system (e.g. checking to see if the driver’s license or passport number is legitimate). Thirdly, do you want the credit reporting agencies holding even more personal information about you?

This entire problem is getting worse – not just for the credit monitoring agencies, but for all online services. Authentication – especially “first time” authentication – is difficult at the best of times, but if you’re trying to do this using only data an organization has collected and holds themselves, it’s nigh on impossible given current hacking techniques.

I hate to say it, but there’s a very strong (and growing) requirement for governments to play a larger role in identity management. Someone somewhere needs to act as a trusted Internet passport authority – with “trusted” being the critical piece. I’ve seen the arguments that have been made for Facebook, Google, etc. being that identity management platform, but I respectfully disagree. These commercial services aren’t identity management platforms, they’re authentication gateways. What is needed is the cyber-equivalent of a government-issued passport, with all the checks and balances that entails.

Even that is not perfect, but it would certainly be better than the crummy vendor-specific authentication systems and password recovery processes that currently plague the Internet.

In the meantime, don’t be surprised if you find your credit report and other personal information splattered over the Internet as part of some juvenile doxing attack.

-- Gunter Ollmann

Monday, February 4, 2013

Vulnerability Disclosures in 2012

A new blog post by me is up on the IOActive site - 2012 Vulnerability Disclosure Retrospective. It follows from a review of NSS Labs' new analyst briefing document covering vulnerability disclosure statistics for last year and their increase.

Monday, January 7, 2013

The Demise of Desktop Antivirus

Are you old enough to remember the demise of the ubiquitous CompuServe and AOL CDs that used to be attached to every computer magazine you ever bought between the mid-’80s and mid-’90s? If you missed that annoying period of Internet history, maybe you’ll be able to watch the death of desktop antivirus instead.

Just as dial-up subscription portals and proprietary “web browsers” represent a yesteryear view of the Internet, desktop antivirus is similarly being confined to the annals of Internet history. It may still be flapping vigorously like a freshly landed fish, but we all know how those last gasps end.

To be perfectly honest, it’s amazing that desktop antivirus has lasted this long. To be fair though, the product you may have installed on your computer (desktop or laptop) bears little resemblance to the antivirus products of just 3 years ago. Most vendors have even moved away from using the “antivirus” term – instead they’ve tried renaming their products as “protection suites” and “prevention technology” and throwing in a bunch of additional threat detection engines for good measure.

I have a vision of a hunchbacked Igor working behind the scenes stitching on some new appendage or bolting on an iron plate for reinforcement to the Frankenstein corpse of each antivirus product as he tries to keep it alive for just a little bit longer…

That’s not to say that a lot of effort doesn’t go into maintaining an antivirus product. However, with the millions upon millions of new threats each month it’s hardly surprising that the technology (and approach) falls further and further behind. Despite that, the researchers and engineers that maintain these products try their best to keep the technology as relevant as possible… and certainly don’t like it when anyone points out the gap between the threat and the capability of desktop antivirus to deal with it.

For example, the New York Times ran a piece on the last day of 2012 titled “Outmaneuvered at Their Own Game, Antivirus Makers Struggle to Adapt” that managed to get many of the antivirus vendors riled up – interestingly enough not because of the claims of the antivirus industry falling behind, but because some of the statistics came from unfair and unscientific tests. In particular there was great annoyance that a security vendor (representing an alternative technology) used VirusTotal coverage as their basis for whether or not new malware could be detected – claiming that initial detection was only 5%.

I’ve discussed the topic of declining desktop antivirus detection rates (and evasion) many, many times in the past. From my own experience, within corporate/enterprise networks, desktop antivirus detection typically hovers at 1-2% for the threats that make it through the various network defenses. For newly minted malware that is designed to target corporate victims, the rate is pretty much 0% and can remain that way for hundreds of days after the malware has been released into the wild.

You’ll note that I typically differentiate between desktop and network antivirus. The reason for this is because I’m a firm advocate that the battle is already over if the malware makes it down to the host. If you’re going to do anything on the malware prevention side of things, then you need to do it before it gets to the desktop – ideally filtering the threat at the network level, but gateway prevention (e.g. at the mail gateway or proxy server) will be good enough for the bulk of non-targeted Internet threats. Antivirus operations at the desktop are best confined to cleanup, and even then I wouldn’t trust any of the products to be particularly good at that… all too often reimaging of the computer isn’t even enough in the face of malware threats such as TDL.

So, does an antivirus product still have what it takes to earn the real estate it takes up on your computer? As a standalone security technology – no, I don’t believe so. If it’s free, never ever bothers me with popups, and I never need to know it’s there, then it’s not worth the effort uninstalling it and I guess it can stay… other than that, I’m inclined to look at other technologies that operate at the network layer or within the cloud; stop what you can before it gets to the desktop. Many of the bloated “improvements” to desktop antivirus products over recent years seem to be analogous to improving the hearing of a soldier so he can more clearly hear the ‘click’ of the mine he’s just stood on as it arms itself.

I’m all in favor of retraining any hunchbacked Igor we may come across. Perhaps he can make artwork out of discarded antivirus DVDs – just as kids did in the 1990s with AOL CDs?

-- Gunter Ollmann