Friday, November 20, 2015

Battling Cyber Threats Using Lessons Learned 165 Years Ago

When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.

Independent of these protection technologies (or possibly because of them), we’ve also tried to educate the user in how best (i.e. most safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it’s almost like a government handing out maps to its citizens and labeling streets, homes, and businesses that are known to be dangerous and shouldn’t be visited – because not even the police or military have been effective there.

Today we instruct our users (and at home, our children) to be careful what they click on, what pages or sites they visit, what information they share, and what files they download. These instructions are not just onerous and confusing; more often than not they’re irrelevant – as, even after following them to the letter, the user can still fall victim.

The fact that a user can’t click on whatever they want, browse wherever they need to, and open what they’ve received, should be interpreted as a mile-high flashing neon sign saying “infosec has failed and continues to fail” (maybe reworded with a bunch of four-letter expletives for good measure too).

For decades now, thousands of security vendors have brought to market technologies that, in effect, are predominantly tools designed to fill vulnerable and exploited gaps in the operating systems lying at the core of the devices end users rely upon. If we’re ever to make progress against the threat and reach the utopia of users being able to “carelessly” use the Internet, those operating systems must get substantially better.

In recent years, great progress has been made on the OS front – primarily in smartphone operating systems. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PCs, notebooks, or servers at home or work. There are a bunch of reasons why, of course – and I’ll not get into that here – but there’s still so much more that can be done.

I do believe that there are many lessons that can be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual – way before the Internet, and way before computers – there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.

Back in 1850 a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna, where he noted that many women in the maternity wards were dying from puerperal fever - commonly known as childbed fever. He studied two medical wards in the hospital – one staffed by all male doctors and medical students, and the other by female midwives – and counted the number of deaths in each ward. What he found was that the death rate from childbirth was five times higher in the ward with the male doctors.

Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference – ranging from mothers giving birth on their sides versus their backs, through to the route priests took through the ward and the bells they rang. It appears that his Eureka moment came after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, succumbed to the same fate (apparently being a pathologist in the mid-19th century was not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that “cadaverous particles” (this was a period before germs were known) were being spread to the birthing mothers.

Dr. Semmelweis’ medical innovation? “Wash your hands!” The net result, after doctors and midwives started washing their hands (in a chlorinated lime solution), was that the rate of childbed fever dropped considerably.

Now, if you’re in the medical trade and washing your hands multiple times per day in chlorinated lime or (by the late 1800s) carbolic acid, you’ll soon note that it isn’t so good for your skin or hands.

In 1890, William Stewart Halsted of Johns Hopkins University asked the Goodyear Rubber Company if they could make a rubber glove that could be dipped in carbolic acid in order to protect the hands of his nurses – and so were born the first sterilized medical gloves. The first disposable latex medical gloves were manufactured by Ansell and didn’t appear until 1964.

What does this foray into 19th century medical history mean for Internet security, I hear you ask? Simple, really: every time the end user needs to use a computer to access the Internet and do work, it needs to be clean/pristine. Whether that means a clean new virtual image (i.e. “wash your hands”) or a disposable environment that sits on top of the core OS and authorized application base (i.e. “disposable gloves”), the assumption needs to be that nothing the user encounters over the Internet can persist on the device they’re using after they’ve finished their particular actions.
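
To make the “disposable gloves” idea concrete, here’s a minimal sketch of what an ephemeral browsing session could look like using off-the-shelf containers. This is an illustration of the principle rather than a product: the image name is invented, it assumes Docker is installed, and a real deployment would still need display forwarding and a far more considered sandbox.

    # disposable_session.py - a sketch of the "disposable gloves" principle:
    # run each browsing session in a throwaway container that is destroyed
    # (--rm) the moment the user finishes, so nothing encountered online
    # persists on the device. The image name below is purely illustrative.
    import subprocess
    import uuid

    def run_disposable_session(image="example/hardened-browser:latest"):
        name = f"session-{uuid.uuid4().hex[:8]}"
        subprocess.run(
            [
                "docker", "run",
                "--rm",              # delete the container and its filesystem on exit
                "--read-only",       # keep the base image pristine during the session
                "--tmpfs", "/tmp",   # scratch space that evaporates with the container
                "--name", name,
                image,
            ],
            check=True,
        )

    if __name__ == "__main__":
        run_disposable_session()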

This obviously isn’t a solution for every class of cyber threat out there, but it’s an 80% solution – just as washing your hands and wearing disposable gloves as a triage nurse isn’t going to protect you (or your patient) from every post-surgery ailment.

Operating system providers or security vendors that can seamlessly and automatically provision a clean and pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game – altering the battlefield for attackers and the tools of their trade.

Exciting times ahead.


-- Gunter

Wednesday, November 18, 2015

Exploiting Video Console Chat for Cybercrime or Terrorism

A couple of days ago there was a lot of interest in how terrorists may have been using the chat features of popular video console platforms (e.g. PS4, Xbox One) to secretly communicate and plan their attacks. Several journalists on tight deadlines reached out to me for insight into the threat. Here are some technical snippets on the topic that may be useful for future reference:

  • In-game chat systems have been used by cyber-criminals for over a decade to conduct business and organize transfers of stolen data. Because the chat systems within games tend to use proprietary protocols and exist solely within a secure connection to the game vendor's server, it is not ordinarily possible to eavesdrop upon or collectively intercept these communications without some level of legal access to the central server farm. While the game vendors have the ability to inspect the chat traffic, this level of inspection (when conducted - which is rare) tends to focus on inappropriate language and bullying, and that inspection or evidence gathering is almost exclusively limited to text-based communications.
  • As games (particularly multi-player first-person shoot-'em-up games) have embraced real-time voice chat protocols, it has become considerably more difficult to inspect traffic and identify inappropriate communications. Most responses to abuse are driven by multiple individuals complaining about another in-game player - rather than dynamic detection of abuse.
  • This difficulty in monitoring communications is well known in the criminal community and is conveniently abused. Criminals tend not to use their own personal account details, instead using aliases or, more frequently, stolen user credentials - and may electronically proxy their communications via TOR and other anonymizing proxy services to prevent others from working out their physical location. There is a sizable underground market for stolen online gaming user credentials. When using stolen credentials, the criminals will often join specific game servers and use pre-arranged times for games (and sub-types of games) to ensure that they will be online with the right group(s) of associates. These game times/details are often discussed on private message boards.
  • While US law enforcement has expended effort to intercept communications and ascertain geographical location information from TOR and proxy services in the past, it is difficult - since the communications themselves are typically encrypted. Intercepting in-game communications is very difficult because of the complex legal and physical server relationships between (let's say, for example) Sony (running the PlayStation network), Electronic Arts (running the account management system and some of the gaming server farm), and the game development team (who implemented the communication protocol and run the in-game service). For law enforcement, getting the appropriate legal interception rights to target an individual (criminal) is complex in this situation and may be thwarted anyway if the criminals choose to use their own encryption tools on top of the game - i.e. the in-game communications are encrypted by the criminals using a third-party non-game tool.
  • Console chat typically takes the form of either text or voice-based chat. Text-based chat is much easier to analyze and consequently easier for console operators and law enforcement to identify threats and abuse within. In addition, text-based communications are much easier to store or archive - which means that, after an event, it is often possible for law enforcement to obtain the historical communication logs and perform analysis (a toy sketch of this kind of keyword scan follows this list). Voice-based chat is much more difficult to handle and will typically only be inspected in a streaming fashion because the data volumes are so large - making it impractical to store for any extended period of time. There are also more difficulties in searching voice traffic for key words and threats. Video-based chat is more difficult still to dynamically inspect, monitor, and store.
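
To illustrate just how asymmetric the text and voice cases are, here's a toy keyword scan over archived text chat - the kind of trivial, after-the-fact analysis that has no cheap equivalent for voice or video. The log format (tab-separated timestamp, player, and message fields) and the watch-list are invented for the example.

    # chat_scan.py - toy illustration of why archived text chat is the "easy"
    # monitoring case: a simple keyword scan over stored logs. The log format
    # (timestamp<TAB>player<TAB>message) and the watch-list are invented here.
    import re

    WATCH_LIST = re.compile(r"\b(dump|fullz|dropzone|c2)\b", re.IGNORECASE)

    def flag_messages(lines):
        for line in lines:
            try:
                timestamp, player, message = line.rstrip("\n").split("\t", 2)
            except ValueError:
                continue  # skip malformed records
            if WATCH_LIST.search(message):
                yield timestamp, player, message

    if __name__ == "__main__":
        with open("chat.log", encoding="utf-8") as f:
            for ts, who, msg in flag_messages(f):
                print(f"{ts} {who}: {msg}")
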
-- Gunter

Tuesday, November 17, 2015

Panel Selection of Penetration Testing Vendors

Most large companies have settled into a repeatable model in the way they undertake penetration testing and engage with their penetration testing suppliers. A popular model for companies that need to have several dozen pentests performed per year is to have a “board” or “panel” of three or four vetted companies and to rotate one provider in and out of the scheme per year – meaning that there is potentially a total refresh of providers every few years.

As vendor performance models go, there is a certain undeniable logic to the process. However, it is worth questioning whether these “board” models actually deliver better results – in particular, is the full spectrum of vulnerabilities being examined, and are the individual consultants capable of delivering the work? In general, I’d argue that such a model often fails to meet these core requirements.


Advanced companies (e.g. brand-name software manufacturers) that require access to the most skilled talent pool of penetration testers and reverse engineers tend to select vendors based upon the skills and experience of the consultants they employ – often specifically calling out individual consultants by name within the terms of the contract. They also pay premium rates for access to that exclusive talent pool. In turn, the vendors that employ those consultants market and position their own companies as advanced service providers. For these companies, talent is the critical buying decision, and it is not uncommon for the client organization to engage with new vendors when highly skilled or specialized consultants move between service providers.

Most large companies are not as sophisticated in discerning the talent pool needed to review and secure their products – yet still have many of the same demands and needs from a penetration testing skills perspective. For them, vendor selection is often about the responsiveness of the service provider (e.g. can they have 6 penetration testers onsite within two weeks in Germany or Boston) and the negotiated hourly rate for their services. The churn of vendors through the “board” model is typically a compromise effort as they try to balance the need to negotiate more favorable contractual terms, overcome a perception of skill gaps within their providers’ consulting pools, and maintain a mechanism for tapping a larger pool of vetted consultants.

From past observations, there are several flaws to this model (although several elements are not unique to the model).
  1. Today's automated vulnerability scanners (for infrastructure, web application, and code review) are capable of detecting up to 90% of the vulnerabilities an “average” penetration tester can uncover manually using their own scripts and tools. Managed vulnerability scanning services (e.g. those delivered by managed security service providers (MSSPs)) typically reach the same 90% level, but tend to provide the additional value of removing false positives and confirming true positives. If these automated tools and services already cover 90% of the vulnerability spectrum, organizations need to determine whether closing the gap on the remaining 10% is worth the consulting effort and price. Most often, the answer is “yes, but…” where the “but…” piece is assigned a discrete window of time and effort to uncover or solve – and hence value. Organizations that adopt the “board” approach often fail to get the balance right between tools, MSSPs, and consultant-led vulnerability discovery programs. There are significant cost savings to be had when the right balances have been struck.
  2. Very few consultants share the same depth of skills and experience. If an organization is seeking to uncover vulnerabilities that lie out of reach of automated discovery tools, it is absolutely critical that the consultant(s) undertaking the work have the necessary skills and experience. There is little point throwing a 15-year veteran of Windows OS security at an Android mobile application served from the AWS cloud – and vice versa. To this end, clients must evaluate the skill sets of the consultants being offered up by the vendor and expected to do the work. The reality of the situation is that clients that don’t pay the necessary attention can almost guarantee that they’ll get the second-rung consultants (pending availability) performing this important work. The exception is when a new vendor is being evaluated: they’ll often throw their best people at the engagement for a period of time in order to show their corporate value – but clients should not anticipate the same level of results in subsequent engagements unless they are specific about the consultants they need on the job.
  3. Rotating a vendor in or out of a program based upon an annual schedule, independent of evaluating the consultants employed by the company, makes little sense. Many penetration testing companies have a high churn of technical staff to begin with, and their overall technical delivery capabilities and depth of skills specialization will fluctuate throughout the year. By understanding in advance what skill sets the client organization needs and the amount of experience required in each skill area, those organizations can better rationalize their service providers’ consulting capabilities – and negotiate better terms.
  4. Because consultant skills and experience play such an important role in being able to uncover new vulnerabilities, client organizations should consider cross-vendor teams when seeking to penetration test and secure higher-priority infrastructure, applications, and products. Cherry-picking named consultants from multiple vendors to work on an important security requirement tends to yield the best and most comprehensive findings. Often there is the added advantage of those vendors choosing to compete to ensure that their consultants do their best work on the joint project – hoping that more follow-on business will fall in their direction.


While “board” or “panel” approaches to penetration testing vendor management may have an appeal from a convenience perspective, the key to getting the best results (both economical and vulnerability discovery) lies with the consultants themselves. 

Treating the vendor companies as convenient payment shells for the consultants you want or need working on your security assignments is OK as long as you evaluate the consultants they employ and are specific on which consultants you want working to secure your infrastructure, applications, and products. To do otherwise is a disservice to your organization.

-- Gunter

Monday, November 9, 2015

The Incredible Value of Passive DNS Data

If a scholar were to look back upon the history of the Internet in 50 years’ time, they’d likely be able to construct an evolutionary timeline based upon threats and countermeasures relatively easily. Having transitioned through the ages of malware, phishing, and APTs, and the countermeasures of firewalls, anti-spam, and intrusion detection, I’m guessing those future historians would refer to the current evolutionary period as that of “mega breaches” (from a threat perspective) and “data feeds”.

Today, anyone can go out and select from a near infinite number of data feeds that run the gamut from malware hashes and phishing URLs, through to botnet C&C channels and fast-flux IPs.

Whether you want live feeds, historical data, bulk data, or just APIs you can hook into and query ad hoc, more than one person or organization appears to be offering it somewhere on the Internet; for free or as a premium service.

In many ways security feeds are like water. They’re available almost everywhere if you take the time to look; however, their usefulness, cleanliness, volume, and ease of acquisition may vary considerably. Hence their value is dependent upon the source and the acquirer’s needs. Even then, pure spring water may be free from the local stream, or come bottled and be more expensive than a coffee at Starbucks.

At this juncture in history the security industry is still trying to figure out how to really take advantage of the growing array of data feeds. Vendors and enterprises like to throw around terms like “intelligence feeds” and “threat analytics” as a means of differentiating their data feeds from competitors’ after they have processed multiple lists and data sources to (essentially) remove stuff – just like filtering water and reducing the mineral count – increasing the price and “value”.

Although we’re likely still a half-decade away from living in a world where “actionable intelligence” is the norm (where data feeds have evolved beyond disparate lists and amalgamations of data points into real-time sentry systems that proactively drive security decision making), there exist some important data feeds that add new and valuable dimensions to other bulk data feeds; providing the stepping stones to those lofty actionable security goals.

From my perspective, the most important additive feed in progressing towards actionable intelligence is Passive DNS data (pDNS).

For those readers unfamiliar with pDNS, it is traditionally a database containing data related to successful DNS resolutions – typically harvested from just below the recursive or caching DNS server.

Whenever your laptop or computer wants to find the IP address for a domain name, your local DNS agent will delegate that resolution to a nominated recursive DNS server (listed in your TCP/IP configuration settings), which will either supply an answer it already knows (i.e. a cached answer) or in turn will attempt to locate a nameserver that does know the domain name and can return an authoritative answer from that source.

By retaining all the domain name resolution data and collecting from a wide variety of sources for a prolonged period of time, you end up with a pDNS database capable of answering questions such as “where did this domain name point to in the past?”, “what domain names point to a given IP address?”, “what domain names are known by a nameserver?”, “what subdomains exist below a given domain name?”, and “what IP addresses will a domain or subdomain resolve to around the world?”.
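
To make that data model concrete, here’s a toy pDNS sketch in Python. Real pDNS is harvested passively below recursive resolvers at enormous scale; this sketch actively resolves names (using the dnspython package, an assumption of the example) purely to populate the same shape of database and answer one of the questions above.

    # pdns_toy.py - a toy model of a pDNS database: record every
    # (name, type, answer) tuple seen, with first-seen/last-seen timestamps,
    # then ask historical questions of the accumulated data.
    import sqlite3
    import time

    import dns.resolver  # pip install dnspython

    db = sqlite3.connect("pdns.db")
    db.execute(
        """CREATE TABLE IF NOT EXISTS resolutions (
               qname TEXT, rrtype TEXT, answer TEXT,
               first_seen REAL, last_seen REAL,
               PRIMARY KEY (qname, rrtype, answer))"""
    )

    def record(qname, rrtype="A"):
        """Resolve a name and fold the answers into the pDNS table."""
        now = time.time()
        for rr in dns.resolver.resolve(qname, rrtype):
            db.execute(
                """INSERT INTO resolutions VALUES (?, ?, ?, ?, ?)
                   ON CONFLICT(qname, rrtype, answer)
                   DO UPDATE SET last_seen = excluded.last_seen""",
                (qname, rrtype, rr.to_text(), now, now),
            )
        db.commit()

    def domains_for_ip(ip):
        """Answer: 'what domain names have pointed at this IP address?'"""
        return db.execute(
            "SELECT qname, first_seen, last_seen FROM resolutions WHERE answer = ?",
            (ip,),
        ).fetchall()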

pDNS, by itself, is very useful, but when used in conjunction with other data feeds its contribution towards actionable intelligence may be akin to turning water into wine.

For example, a streaming data feed of suspicious or confirmed malicious URLs (extracted from captured spam and phishing email sources) can provide insight as to whether the customers of a company or its brands have been targeted by attackers. However, because email delivery is asynchronous, a real-time feed does not necessarily translate to a current window of visibility on the threat. By including pDNS in the processing of this class of threat feed it is possible to identify both the current and past states of the malicious URLs and to cluster together previous campaigns by the attackers – thereby allowing an organization to prioritize efforts on current threats and optimize responses.
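
As a sketch of that enrichment step (building on the toy pDNS table above, so the table and column names are assumptions carried over from that sketch), clustering suspicious URLs by the historical IP addresses their domains have resolved to might look like this:

    # cluster_by_infrastructure.py - group suspicious URLs by shared hosting
    # history, so that past campaigns cluster together and current threats
    # stand out. Reuses the toy "resolutions" table from the pDNS sketch.
    from collections import defaultdict
    from urllib.parse import urlparse
    import sqlite3

    db = sqlite3.connect("pdns.db")

    def cluster_urls(urls):
        clusters = defaultdict(set)  # IP -> URLs whose domain resolved there
        for url in urls:
            domain = urlparse(url).hostname
            if not domain:
                continue
            for (ip,) in db.execute(
                "SELECT DISTINCT answer FROM resolutions WHERE qname = ?",
                (domain,),
            ):
                clusters[ip].add(url)
        return clusters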

While pDNS is an incredibly useful tool and intelligence aid, it is critical that users understand that acquiring and building a useful pDNS DB isn’t easy and, as with all data feeds, the results are heavily dependent upon the quality of the sources. In addition, because historical and geographical observations are key, the further back the pDNS data goes (ideally 3+ years) and the more global ISP coverage the sources provide (ideally a few dozen tier-1 operators), the more reliable and useful the data will be. So select your provider carefully – this isn’t something you ordinarily build yourself (although you can contribute to a bigger collector if you wish).

If you’re looking for more ideas on how to use DNS data as a source and aid to intelligence services and even threat attribution, you can find a walk-through of techniques I’ve presented or discussed in the past here and here.

-- Gunter

Wednesday, October 28, 2015

Breaking out of the consulting wave

There are certain thresholds in the life of a company that must be crossed and, in so doing, fundamentally alter the business. In the world of boutique security consulting companies, one such period of change (and resultant growth) is when the task of managing client relationships and securing the next project or client shifts from being part of a senior consultant’s role and transitions into the waiting hands of a dedicated sales organization.

Over the years I’ve observed first-hand just how difficult this transition can be for both the senior consultants and the executive management.

A critical driver for this transition is the way consultants are forced to divide their time and attention. When the consultant isn’t on a paid engagement, they spend time responding to clients and prospects – writing proposals, responding to RFIs, scoping engagements, etc. When they’re working on a client project, it’s heads-down on delivery – meaning that there’s far less time to engage with other customers or prospects, and limited attention can be applied to lining up the next consulting job. Visually, the cyclical nature of this business model resembles a graph of out-of-phase waveforms transposed upon one another.


If the red line represents the effort the consultant applies to “project delivery” over time, and the blue line in turn represents “business development”, it should be clear that periods of low delivery are countered by periods of intense hunting for new work, and vice versa.

The problem with this cyclical work pattern is that a company typically only makes money when the consultants are delivering on paid engagements – and ideally you’d want the red line to be horizontal and as close to 100% delivery utilization as possible.

If that wasn’t already an obvious problem, its effect on the business is then multiplied – as the task of securing business and constructing new proposals typically falls upon the most senior consultants. This in turn means that the most expensive people in the consulting organization, who typically command the highest rates from clients, are the most absorbed in this perpetual sales-delivery cycle.

I’ve heard time and again that “it’s just the way it is” and arguments such as:
  • As a technical consultancy, the client demands that they deal directly with the technical manager doing the delivery.
  • Scoping a job and preparing a technical proposal requires an expert consultant.
  • The onsite consultants know the customer the best. They’re always doing jobs for the client.
  • Our consultants are managing consultants, and that’s what they do.


The list of “why things can’t change” could go on ad infinitum, but the reality is that a consulting company cannot grow and scale beyond its senior consultants until it breaks out of the cyclical pattern – which is why this particular threshold is both so important and difficult for a company to transition.

Some things I’ve learnt over the years in navigating this business transition (which hopefully serve as useful advice to other businesses seeking to cross the threshold) include:
  • The best security consultants, no matter how highly they rate their skills at procuring and securing new business, are at best average farmers of an account (compared to a dedicated salesperson). Yes, they typically understand the clients they do regular work for and are proficient at recognizing other opportunities within that client organization – however, that pursuit and business development is limited to the client personnel they actually interact with during an engagement. The net result is that the client’s technical on-site folks love and adore the consultant and company, but most engagements are limited to a silo within the overall organization. For this reason the consulting company needs “hunters” – folks with the business development experience to identify new people and opportunities in other parts of the same business.
  • Dropping a “sales guy” into the organization and letting them figure it out because they have a track record of selling things is unlikely to succeed. Security consulting (in particular) is a very technical sell, and those tasked with hunting and closing new clients and projects need not only to be technical themselves, but to be backed by deeper technical expertise. Consider the physical differences between an Olympic high-jumper and an Olympic shot-putter. Both sports require unique attributes, and each athlete is unlikely to triumph in the other’s field of expertise. While an Olympic decathlon medalist may be able to do both, they’re also unlikely to win against someone who specializes in just one of those sports.
  • Consulting managers are not sales people; they’re delivery coordinators and quality evangelists. Their role is often inglorious – as, in between chasing consultants for expenses and report deliverables, they spend much of their time apologizing to the client for things that didn’t go quite to plan and making the client happy again. Yes, they’re often the front line with existing customers and are core to delivering proposals to new clients, but their business development focus is (and should remain) blinkered to delivery.
  • In many cases the role of a consulting manager can morph into that of a sales engineer (just never call them that!). When a consulting manager has no direct reports, they can serve effectively as the technical backup to the sales team – scoping engagements, constructing technical proposals, and being the technical evangelist in new client and prospect meetings. This “sales engineer” (SE) role is often a critical component of building and supporting a successful consulting sales team. The stronger these technical experts are, and the more years of consulting they have under their belts, the more respect they tend to garner from prospective clients, and the easier it is to close deals. In many ways they add technical credibility to the sales organization in the eyes of technical clients.
  • Plan on building out a central team of technical authors. The technical author team provides the grease for easing a company through the transition period. By (slowly) taking over some of the more tedious consulting work – i.e. proposal generation, report proofing, and quality assurance on deliverables – the technical author team ensures a consistent quality of client-facing materials, eases the burden on the consulting and sales teams, and further frees up the time of valuable consultants. For global consulting companies or businesses that have consultants scattered around the world, the technical authorship team also helps overcome second-language frailties. Some caution needs to be maintained, as these teams can quickly become overwhelmed with high workloads – which is why they should ideally report to a senior consulting manager.
  • Senior and managing consultants who have been “managing accounts” often have compensation plans linked to closing client deals. The incorporation of a dedicated sales team means that compensation plans need to be reevaluated for those consultants. Ideally this type of conversation happens prior to the hiring and buildout of a sales team, and the consultants concerned are party to how the transition will occur and how compensation will change. Since the monies associated with managing an account are often not insignificant, it is vital that those consultants are offered alternative means of “making their number”. Luckily the company has several tools at its disposal. First of all, since the purpose of employing a dedicated sales team is to grow revenue and increase the billable hours of senior consultants, there is typically scope to increase the base salaries of those consultants and to create a bonus payment structure based upon utilization and customer satisfaction levels. Alternatively, that important role conversion into a consulting manager (i.e. SE) can be useful in a hybrid compensation model, where factors such as new clients versus lateral growth in an existing client are bonused differently.

The business transformation from 100% consultants to a mix of consultants and dedicated sales personnel can be perilous if not managed carefully. The senior consultants need to be well informed and actively participate in the transition, and the sales team should be built gradually from a nucleus of experienced sales professionals who have come from consulting businesses that have already successfully made the transition.


Any transition will take time. The senior consultants in particular must be gradually weaned off their account management responsibilities, with those responsibilities replaced by ones that drive a higher utilization rate for them and any other consultants they may lead. The worst thing a leadership team can do is to expect the transition to happen overnight. Instead, they should anticipate the process being a 3-9 month transition; the end result is worth it though.

Friday, October 23, 2015

Hacker Hat-trick at TalkTalk

For the third time this year the UK broadband provider TalkTalk have seen their online defenses fall to cyber attackers.

While the company has been quick to notify its customers of the breach (it was observed on Wednesday this week and reported the following day) and is currently working with law enforcement, details are still relatively sparse. Given the very short period between detection of the attack and public notification, it is unlikely any significant cyber forensics exercise has been conducted… so it’ll likely take those tasked with the investigation a couple of weeks to get a solid understanding of the scope of the breach and what was likely touched or stolen by the attackers.

Regardless, the stories currently being published as to the nature of the breach and what has actually been stolen are confusing and the details often contradictory (see Business Insider, The Telegraph, BBC, and AOL). It would appear that the names, addresses, dates of birth, email addresses, telephone numbers, TalkTalk account information, and credit card and/or bank details of some 4,000,000 subscribers may have been stolen and that the data may not have been (completely?) encrypted… or maybe the encryption keys were similarly stolen.

Responsibility for the latest hack is also being attributed by some to a Russian Islamist group (referred to as “Th3 W3b 0f H4r4m”), which has posted a claim online along with samples of data purporting to have come from the TalkTalk site (see Pastebin - http://pastebin.com/HHT4BxJA).



Some stories refer to there being a DDoS attack or component. A DDoS attack isn’t going to breach an internet service and result in data theft, but it’s not unheard of for attackers to use such a mechanism to divert security teams and investigative resources while a more focused and targeted attack is conducted. It’ll be interesting to see if this actually happened, or whether the DDoS (if there was one) was unrelated… although it would be difficult to tell unless the attackers really messed up and left a trail of breadcrumbs – since DDoS services can be procured easily over the Internet for as little as $50 per hour from dozens of illicit (but professional) providers.

If there are lessons to be learned so far from this hat-trick breach, they include:
  • Hackers are constantly looking for easy prey. If you’re easy pickings and you get a reputation for being a soft target, you should anticipate being targeted and breached multiple times and likely by different attackers.
  • There should be no excuse for failing to carefully encrypt customer data, using cryptographic techniques that make it impractical for attackers who do breach an organization’s defenses to profit from the encrypted data they steal.
  • Calling an attacker or the tools they use “sophisticated”, and expecting the victims of the breach to console themselves with the knowledge that the organization charged with protecting their data was defeated by a supposedly more advanced adversary, is wrong. It simply underlines a failure to understand your adversaries and invest in the appropriate security strategies.
-- Gunter Ollmann


Tuesday, October 20, 2015

Ambulance Chasing vs Neighborly Support

The evolving world of Internet security has a tendency to be a complex and bemusing arena for the professionals who make their living from it. The rapid development and deployment of immature technologies, the growing size and sophistication of systems, the unwanted attention and migration of organized crime, and the near religious fervor some devote to the ethical quandaries of the Internet, mean that few security topics are either simple or devoid of opinion.

One topic guaranteed to crop up in any weekly discussion of Internet security is “ambulance chasing”. It’s a topic capable of dividing a room; initiating a prompt and well-rehearsed ethics debate, and causing more than a few veins on people’s foreheads to swell and pulsate.

Now that breach disclosures are a daily occurrence and the frequency of “mega breaches” seems to have hit its stride of monthly broadcasts, much of the security industry really does need to put on its big-boy pants and overcome the philosophical debate of whether reaching out to a breach victim and offering to work with them to understand, overcome, mitigate, or remediate is in fact “ambulance chasing”, or more akin to being neighborly and professional.

For many folks, the prospect of contacting a victim and explaining what you could do to help them evokes a vision of seedy lawyers prowling the halls of hospitals looking for the latest motor accident patients.

The vast majority of security professionals I know (ranging from consultants to analysts, and sales to engineers) genuinely see their occupation as a calling and passionately want to help make the Internet a better place. However, for one reason or another, the prospect of reaching out to someone that hasn’t already reached out to them and explicitly asked for help is too often interpreted as a breach of some unwritten rule… a kind of “invasion of personal space”.

For sure, as a professional you’re offering your skills and expertise for a price. However, to interpret the act of proactively reaching out to a victim as some slimy, underhanded means of gaining business is naive and outdated. Amusingly enough, the majority of security consultants I’ve known or worked with over the years are only too capable of identifying new victims that they or their company could help, but will grudgingly pass the lead on to a “sales guy” – thereby keeping their hands clean and distancing themselves from what they perceive as ambulance-chasing sleaziness.

I don’t see it that way. My advice to consultants who want to grow their careers and move on to become business leaders (with the reputation and salary to go with it): get over your inhibitions and reach out to those organizations and contacts yourself. Forget the term “ambulance chasing” and instead think in terms of supporting a neighbor down the road.

Look at it this way. You’re an expert locksmith. Every day you walk your dog down the street and you notice how poor many of the locks are (and how many are missing). Then one day a house down the street is burgled. You see the flashing lights outside, police dusting for fingerprints, and a substandard lock that was clearly dismantled and exploited by the criminal to gain entry. Do you ignore the incident and hope the victim will Google locksmiths later and contact you? Do you rush home to call your sales guy, give them your neighbor’s address, and leave it in their hands? Or do you, as a professional confident in your skills and expertise, approach the victim, introduce yourself and what you do, and offer to help if and when they’re ready?

Think about it from the perspective of the victim too. Would you rather hunt and peck looking for someone to help? Would you prefer a sales guy cold calling you and pimping all their products? Or would you respond most favorably to a local expert from down the street who approaches you directly and offers to help there and then?


In a world of daily breaches and vulnerability disclosures, more people need help than ever before. As a security professional, if you’re waiting for them to reach out to you and ask for your help, you’re doing a disservice to both them and yourself. 

Saturday, October 3, 2015

Experian Breached; T-Mobile Customers' Loss

The last couple of days have seen yet another breach disclosure - this time it's Experian, and the primary victims are 15m T-Mobile customers in the US. It was interesting to note T-Mobile's CEO, John Legere, publicly responding to the breach and its effect on his customers. He's angry, and rightfully so. I'm sure there are a bunch of other credit bureaus now lining up to secure new business.


Some personal thoughts on the breach and its effects:

  • As is so often the trend now, professional hackers and cybercriminals are investing in the long game – stealthily taking control of a network and the data it contains over weeks, months, and even years. Instead of opportunistic zero-day exploitation against lists of potentially vulnerable targets, hackers carefully probe, infiltrate, and remove evidence of compromise against specific targets. Their end game is perpetual access to the target. The difference is as stark as killing the cow for today’s BBQ, or silently milking it for years.
  • While many organizations now employ encryption and cryptographic techniques to protect personal customer data, many of the techniques employed are dated and focus predominantly on a mix of data-at-rest protection (to combat theft of hard drives or backup cassettes) and SQL DB data dumps – threats that, while severe, are not common targets of prolonged infiltration and stealthy attackers. A critical failure of many of these legacy approaches to data encryption lies in key management. Access to the keys used to encrypt and decrypt the data is a primary target of today’s hackers. Unfortunately, organizations have great trouble finding secure methods of protecting those keys and still often operate at a level of obfuscation equivalent to leaving the keys under the doormat (a minimal sketch of the envelope-encryption alternative follows this list).
  • The data stolen in this attack on Experian’s T-Mobile customers – which includes address details, date of birth, social security numbers, driver license numbers, and maybe passport numbers – is very valuable to cybercriminals. These aggregated personal details can reach as much as $200 per record on various underground forums and locations in the darknet. Stolen identities that include address, SSN, and drivers license details are commonly used in the creation of new online financial accounts – as the professional cybercriminals seek to launder other stolen monies from around the world.
  • Constant vigilance is mandatory when it comes to combating professional cybercriminals who are in it for the long game. It is critically important that organizations continually probe, assess, and monitor all Internet-accessible services and assets. Annual penetration testing and quarterly scans didn’t work against this class of threat a decade ago, and they most certainly provide even less protection and assurance today. Organizations need to be vulnerability scanning their web applications and infrastructure continuously on a 24x7 timetable, must deploy breach detection systems that monitor network and egress traffic, and should practice incident response on a monthly basis.
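
On the key management point above, the usual answer to the “keys under the doormat” problem is envelope encryption: wrap a fresh data key per record with a master key that never leaves an HSM or KMS. Here’s a minimal sketch using the Python cryptography package; the locally generated master key below is a stand-in for what should be an HSM- or KMS-resident key.

    # envelope_sketch.py - minimal envelope encryption: each record gets its
    # own data key, and only the *wrapped* data key is stored alongside the
    # ciphertext. The master key here is a local stand-in for an HSM/KMS key.
    from cryptography.fernet import Fernet  # pip install cryptography

    master_key = Fernet(Fernet.generate_key())  # pretend this lives in a KMS

    def encrypt_record(plaintext):
        data_key = Fernet.generate_key()            # fresh key per record
        ciphertext = Fernet(data_key).encrypt(plaintext)
        wrapped_key = master_key.encrypt(data_key)  # store only the wrapped key
        return wrapped_key, ciphertext

    def decrypt_record(wrapped_key, ciphertext):
        data_key = master_key.decrypt(wrapped_key)  # requires KMS/HSM access
        return Fernet(data_key).decrypt(ciphertext)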

I'm sure that new details will filter out over coming weeks and, if history is anything to go by, the odds are that the victim count will continue to grow.

-- Gunter

Thursday, April 16, 2015

Is Upping the Minimum Wage Good for the Information Security Industry?

The movement for upping the minimum wage in the US is gathering momentum. Protests and placard waving are on the increase, and the quest for $15 per hour is well underway. There are plenty of arguments as to why such a hike in minimum wage is necessary, and what the consequences could be to those businesses dependent upon the cheapest hourly labor. But, for the information security industry, upping the minimum wage will likely yield only good news.

It's hard not to be cynical, but we're already hearing how simple automation will be used to replace most basic unskilled jobs.

For technologists, hiking up the minimum wage will almost certainly be fantastic news. Why stop at $15 per hour... perhaps $25 would yield a more dramatic societal change?

In some ways it's hard to fathom how significant this minimum wage movement could be in driving the next generation of technology and information security, but I'm pretty sure we're on the cusp of a new generation of technological automation and innovation.

The combination of a dramatic increase in mandatory minimum wages, the steady cost-reduction of embedded systems, and the recent advancements in robotic control logic are working together to lower the threshold with which the next generation of robotic systems become economically viable.

If you thought those self-serve payment kiosks at your local supermarket or fast-food joint were an indicator of things to come, you were right. The coming generation of self-serve and automated construction or delivery systems has been in many innovators' minds for a long time - but had been shelved for economic reasons. This year - assuming minimum wages advance to $15 per hour or greater - we'll see a fundamental societal change.

The stakes have changed - and it likely won't bode well for the very occupations the minimum wage increase was meant to help.

Those store clerk, hostess, or "order taker" jobs will largely cease to exist. With a few key presses I'll be able to type my own order for a medium Big Mac combo meal with no mayo... all by myself... and get my name spelled correctly too. In many ways I'd sooner have a mechanical marvel flipping burgers and frying my fries, with a little conveyor bringing me my meal (TM) ... than the current solution of having 5 different dissatisfied "minimum wage" people assemble my meal with all the gusto and enthusiasm of a beard-net.

With the threshold for economic viability likely to fall so sharply, it doesn't take a soothsayer to predict a tsunami of automated solutions capable of not only replacing costly unskilled-labor jobs, but also increasing quality and consistency of the products delivered. Perhaps those photos of plump and enticing burgers above every fast-food counter you've ever seen will finally be representative of what your robotic (quality controlled) chef produces? Or maybe that'll remain a fantasy.

Regardless, what's good for technologists in the impending minimum-wage revolution is undoubtedly doubly good for the information security industry.

New products, new technology, new software, new flaws, and new pressures to secure them, will require a new generation of testing methodologies, automated vulnerability scanner tools, and a growing body of specialist consultancy skills.

While not yet a scholar of history (more precisely a student of history), I can see parallels with the 19th Century Luddite movement against newly developed labor-economizing technologies. Most people associate the Luddites with the mindless smashing of technology they didn't understand, but in reality it was about unemployment and retaining a way of life. This time round I think we can expect folks will know and understand the technology they and their friends will be replaced with... and that means that electronic attacks and hacking will usurp sledgehammers in the pending automated revolution.

There are certainly pros and cons to the societal change we stand at the cusp of.

As an information security professional, things are looking quite rosy. For those whose only skills lie in delivering platters of fast food, processing an order from a menu, or performing any repetitive sales task, things are about to get pretty rough.

If there's a silver lining for everyone else, perhaps it lies in the pending demise of the US tipping culture? The arguments over tipping waitstaff so they can make a "living wage" may quickly become moot once the tablet on the table is taking your food and drink order.

-- Gunter

Tuesday, January 20, 2015

A cynic’s view of 2015 security predictions (first part)

Better late than never, but the first of a series of blogs from me covering my ever cynical view of security predictions has now been posted to the NCC Group website.

Check out https://www.nccgroup.com/en/blog/2015/01/a-cynics-view-of-2015-security-predictions-part-one/ today. And more to come later this week.

I think you'll enjoy it ;-)

Thursday, January 15, 2015

A Cancerous Computer Fraud and Abuse Act

As I read through multiple postings covering the proposed Computer Fraud and Abuse Act changes, such as the ever-insightful writing of Rob Graham in his Obama's War on Hackers or the EFF's analysis, and the deluge of Facebook discussion threads where dozens of my security-minded friends shriek at the damage passing such an act would bring to our industry, I can't help but think that surely it's an early April Fools' joke.

The current draft/proposal for the Computer Fraud and Abuse Act reads terribly and, in Orin Kerr's analysis, is "awkward".

The sentiment behind the act appears to be a lashing-out response to the evils recently perpetrated by hackers - such as the mega breaches, DDoSes, password dumps, etc. - without any understanding of how the "good guys" do their work and operate at the forefront of stopping these evil-doers.

For those non-security folks, the best analogy I can think of is that a bunch of politicians have been reading how attackers are using knives to cut and stab people in their criminal endeavors, and that without knives those crimes would not have happened. Therefore, to prevent knife-based crime, they legislate that carrying a knife, manufacturing a knife, or using a knife to cut flesh is punishable by 20 years in prison.

Unfortunately, the legislation is written so poorly and generically that the definition of "knife" includes butter knives and scalpels - and overnight the profession of surgery becomes illegal. Even those poor souls who have been stabbed by a criminal can no longer be saved by a scalpel-wielding doctor.

That, in a nutshell, is what many feel the impact of this act will be on the Internet security industry. Penetration testing, bug hunting, and vulnerability research will all be caught by it and, as Rob Graham postulates, there is reason to speculate that even posting a link to a vulnerability could land both the poster and the clicker on the wrong side of the law.

One of the budding industries that will feel this the most will be threat analysis and companies/services that focus on early alerting and attribution of cybercrime. And that in my mind is particularly ominous.

Now, with all that said, is the act salvageable? Maybe - but it'll need a lot of work. I've heard a few folks argue that this US act is very similar to the UK's Computer Misuse Act of 1990. I mostly agree that a parallel act in the US would be helpful in dealing with the current plague of cybercrime, but what's been proposed thus far has the polish and refinement of a rusty piece of barbed wire.

The only organization that'll benefit from the act as proposed right now is the US' privatized incarceration services.

-- Gunter