Friday, January 29, 2016

Watching the Watchers Watching Your Network

It seems that this last holiday season didn’t bring much cheer or goodwill to corporate security teams. With the public disclosure of remotely exploitable vulnerabilities and backdoors in the products of several well-known security vendors, many corporate security teams spent a great deal of time yanking cables, adding new firewall rules, and monitoring their networks with extra vigilance.

It’s not the first time that products from major security vendors have been found wanting.

It feels as though some vendors’ host-based security defenses fail on a monthly basis, while network defense appliances fail less frequently – maybe twice per year. At least that’s what a general perusal of press coverage may lead you to believe. However, the reality is quite different. Most security vendors fix and patch security weaknesses on a monthly basis. Generally, the issues are ones that they themselves have identified (through internal SDL processes or the use of third-party code reviews and assessments) or they are issues identified by customers. And, every so often, critical security flaws that need to be fixed quickly will be “dropped” on the vendor by an independent researcher or security company.

Two decades ago, the terms “bastion host”, DMZ, and “firewall” pretty much summed up the core concepts of network security, and it was a simpler time for most organizations – both for vendors and their customers. The threat spectrum was relatively narrow, the attacks largely manual, and an organization’s online presence consisted of mostly static material. Yet, even then, if you picked up a book on network security you were instructed in short order that you needed to keep your networks separate: one for the Internet, one for your backend applications, one for your backups, and a separate one for managing your security technology.

Since that time, many organizations have either forgotten these basic principles or have intentionally opted for riskier (yet cheaper) architectures, hoping that their protection technologies are up to the task. Alas, as the events of December 2015 have shown us, every device added to a network introduces a new set of security challenges and weaknesses.

From a network security perspective, when looking at the architecture of critical defenses, there are four core principles:

  1. Devices capable of monitoring or manipulating network traffic should never have their management interfaces directly connected to the Internet. If these security devices need to be managed over the Internet it is critical that only encrypted protocols be used, multi-factor authentication be employed, and that approved in-bound management IP addresses be whitelisted at a minimum. 
  2. The management and alerting interfaces of security appliances must be on a “management” network – separated from other corporate and public networks. It should not be possible for an attacker who may have compromised a security device to leverage the management network to move laterally onto other guest systems or provide a route to the Internet. 
  3. Span ports and network taps that observe Internet and internal corporate traffic should by default only operate in “read-only” mode. A compromised security monitoring appliance should never be capable of modifying network traffic or communicating with the Internet from such an observation port. 
  4. Monitor your security products and their management networks. Security products (especially networking appliances such as core routers, firewalls, and malware defenses) will always be a high-value target to both external and internal attackers. These core devices and their management networks must be continuously monitored for anomalies and audited. 
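The whitelisting requirement in the first principle can be audited mechanically. Below is a minimal sketch, assuming a hypothetical approved-source list and rule format (not any particular vendor's ACL syntax), that flags permitted management sources falling outside the whitelist:

```python
# Sketch of an audit check for principle 1: verify that the inbound
# management ACL only admits approved (whitelisted) source networks.
# The networks and rule format here are illustrative assumptions.
from ipaddress import ip_network

APPROVED_MGMT_SOURCES = [
    ip_network("203.0.113.0/28"),   # e.g. corporate VPN egress range (assumed)
]

def audit_mgmt_acl(permitted_sources):
    """Return the permitted source networks NOT covered by the whitelist."""
    violations = []
    for src in permitted_sources:
        net = ip_network(src)
        if not any(net.subnet_of(approved) for approved in APPROVED_MGMT_SOURCES):
            violations.append(str(net))
    return violations

# A rule permitting 0.0.0.0/0 to reach the management interface gets flagged.
print(audit_mgmt_acl(["203.0.113.0/28", "0.0.0.0/0"]))
```

The same check can run continuously as part of the monitoring called for in the fourth principle, rather than as a one-off review.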

In an age where state-sponsored reverse engineers, security research teams, and online protagonists are actively hunting for flaws and backdoors in the widely deployed products of major security vendors as a means of gaining privileged and secret access to their target’s networks, it is beyond prudent to revisit the core tenets of secure network architecture.

Corporate security teams and network architects should assume not only that new vulnerabilities and backdoors will be disclosed throughout the year, but that those holes may have been accessible and exploited for several months beforehand. As such, they should adopt a robust defense-in-depth strategy including “watchers watching watchers.”

Shodan's Shining Light

The Internet is chock full of really helpful people and autonomous systems that silently probe, test, and evaluate your corporate defenses every second of every minute of every hour of every day. If those helpful souls and systems aren’t probing your network, then they’re diligently recording and cataloguing everything they’ve found so others can quickly enumerate your online business or list systems like yours that are similarly vulnerable to some kind of attack or other.

Back in the dark ages of the Internet (circa the 20th century) everyone had to run their own scans to map the Internet in order to spot vulnerable systems on the network. Today, if you don’t want to risk falling foul of some antiquated hacking law in some country by probing IP addresses and shaking electronic hands with the services you encounter, you can easily find a helpful soul that’s figured it all out on your behalf and turn on the faucet of knowledge for a paltry sum.

One of the most popular services to shine light on and enumerate the darkest corners of the Internet is Shodan. It’s a portal-driven service through which subscribers can query its vast database of IP addresses, online applications and service banners that populate the Internet. Behind the scenes, Shodan’s multiple servers continually scan the Internet, enumerating and probing every device they encounter and recording the latest findings.
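Programmatic consumers of Shodan's database typically work with its per-host records. The sketch below summarizes one such record; the field names ("ip_str", "ports", "data") follow Shodan's documented host JSON, but the sample record is invented and the exact schema should be checked against the current API documentation:

```python
# A minimal sketch of condensing a Shodan-style host record into something
# reportable. The record structure is an assumption based on Shodan's
# documented host JSON; verify field names against the live API docs.

def summarize_host(record):
    """Collapse a host record into (ip, sorted open ports, banner product by port)."""
    banners = {item["port"]: item.get("product", "unknown")
               for item in record.get("data", [])}
    return record["ip_str"], sorted(record.get("ports", [])), banners

sample = {
    "ip_str": "198.51.100.7",   # invented example address
    "ports": [443, 22],
    "data": [
        {"port": 22, "product": "OpenSSH"},
        {"port": 443, "product": "nginx"},
    ],
}
print(summarize_host(sample))
```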

As an online service that diligently catalogues the Internet, Shodan behaves rather nicely. The servers that do the scanning aren’t overly aggressive, and their DNS information doesn’t obfuscate who and what they are. Additionally, they are no more troublesome than Google is in its efforts to map out Web content on the Internet.

In general, most people don’t identify what Google (or Microsoft, Yahoo or any other commercial search engine) does as bad, let alone illegal. But if you are familiar with the advanced search options these sites offer or read any number of books or blogs on “Google Dorks,” you’ll likely be more fearful of them than something with limited scope like Shodan.

Unfortunately, Shodan is increasingly perceived as a threat by many organizations. This might be due to its overwhelming popularity or its frequent citation amongst the infosec community and journalists as a source of embarrassing statistics. Consequently, security companies like Check Point have included alerts and blocking signatures in a vain attempt to thwart Shodan and its ilk.

On one hand, you might empathize with many organizations on the receiving end of a Shodan scan. Their Internet-accessible systems are constantly probed, their services are enumerated, and every embarrassing misconfiguration or unpatched service is catalogued and could be used against them by evil hackers, researchers and journalists.

In some realms, you’ll also hear that the bad guy competitors to Shodan (e.g. cyber criminals mapping the Internet for their own financial gain) are copying the scanning characteristics of Shodan so the target’s security and incident response teams assume it’s actually the good guys and ignore the threat.

On the other hand, with it being so easy to modify the scanning process – changing scan types, modifying handshake processes, using different domain names, and launching scans from a broader range of IP addresses – you’d be forgiven for thinking that it’s all a bit of wasted effort… about as useful as a “keep-off-the-grass” sign in Hyde Park.

Although “robots.txt” in its own way serves as a similarly polite request for commercial Web search scanners to not navigate and cache pages on a site, it is most often ignored by scanning providers. It also serves as a flashing neon arrow that directs hackers and security researchers to the more sensitive content.
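Python's standard library makes the dual nature of robots.txt easy to demonstrate; the rules below are invented for illustration:

```python
# robots.txt is only a polite request: a well-behaved crawler honors it,
# while its Disallow entries enumerate exactly the paths the site
# considers sensitive. The rules here are made up for this sketch.
import urllib.robotparser

rules = [
    "User-agent: *",
    "Disallow: /admin/",
    "Disallow: /backups/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# A polite crawler checks before fetching...
print(rp.can_fetch("*", "/index.html"))
print(rp.can_fetch("*", "/admin/"))

# ...while anyone else can simply read the Disallow lines as a target list.
sensitive = [line.split(": ", 1)[1] for line in rules if line.startswith("Disallow")]
print(sensitive)
```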

It’s a sad indictment of current network security practices that a reputable security vendor felt the need and justification to add detection rules for Shodan scans and that their customer organizations may feel more protected for implementing them.

While the virtual “keep-off-the-grass” warning isn’t going to stop anyone, it does empower the groundskeeper to shout, “Get off my land!” (in the best Cornish accent they can muster) and feel justified in doing so. In the meantime, the plague of ever-helpful souls and automated systems will continue to probe away to their hearts’ content.

Friday, November 20, 2015

Battling Cyber Threats Using Lessons Learned 165 Years Ago

When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.

Independent of these protection technologies (or possibly because of them), we’ve also tried to educate the user in how best (i.e. more safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it’s almost like a government handing out maps to their citizens and labeling streets, homes, and businesses that are known to be dangerous and shouldn’t be visited – because not even the police or military have been effective there.

Today we instruct our users (and at home, our children) to be careful what they click-on, what pages or sites they visit, what information they can share, and what files they should download. These instructions are not just onerous and confusing, more often than not they’re irrelevant – as, even after following them to the letter, the user can still fall victim.

The fact that a user can’t click on whatever they want, browse wherever they need to, and open what they’ve received, should be interpreted as a mile-high flashing neon sign saying “infosec has failed and continues to fail” (maybe reworded with a bunch of four-letter expletives for good measure too).
For decades now, thousands of security vendors have brought to market technologies that, in effect, are predominantly tools designed to fill vulnerable and exploited gaps in the operating systems lying at the core of the devices end users rely upon. If we’re ever to make progress against the threat and reach the utopia of users being able to “carelessly” use the Internet, those operating systems must get substantially better.

In recent years, great progress has been made on the OS front – primarily in smartphone OS’s. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PC’s, notebooks, or servers at home or work. There’s a bunch of reasons why of course – and I’ll not get into that here – but there’s still so much more that can be done.
I do believe that there are many lessons that can be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual – way before the Internet, and way before computers – there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.

Back in 1850 a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna where he noted that many women in maternity wards were dying from puerperal fever - commonly known as childbed fever. He studied two medical wards in the hospital – one staffed by all male doctors and medical students, and the other by female midwives – and counted the number of deaths in each ward. What he found was that death from childbirth was five times higher in the ward with the male doctors.

Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference – ranging from mothers giving birth on their sides versus their backs, through to the route priests traversed the ward and the bells they rang. It appears that his Eureka moment came after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, had succumbed to the same fate (apparently being a pathologist in the mid-19th century was not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that “cadaverous particles” (this being a period before germs were known) were being spread to the birthing mothers.

Dr. Semmelweis’ medical innovation? “Wash your hands!” The net result, after doctors and midwives started washing their hands (in lime water, then later in chlorine), was that the rate of childbed fever dropped considerably.

Now, if you’re in the medical trade and washing your hands multiple times per day in chlorine or (by the late 1800’s) carbolic acid, you’ll find that it isn’t so good for your skin.

In 1890 William Stewart Halsted of Johns Hopkins University asked the Goodyear Tire and Rubber Company if they could make a glove of rubber that could be dipped in carbolic acid in order to protect the hands of his nurses – and so were born the first sterilized medical gloves. The first disposable latex medical gloves were manufactured by Ansell and didn’t appear until 1964.

What does this foray into 19th century medical history mean for Internet security, I hear you say? Simple really: every time the end user needs to use a computer to access the Internet and do work, it needs to be clean/pristine. Whether that means a clean new virtual image (e.g. “wash your hands”) or a disposable environment that sits on top of the core OS and authorized application base (e.g. “disposable gloves”), the assumption needs to be that nothing the user encounters over the Internet can persist on the device they’re using after they’ve finished their particular actions.
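The “disposable gloves” model can be sketched in miniature: run the risky task in a throwaway working area that is destroyed afterwards. Real implementations would rely on VM or container snapshots rather than a scratch directory, and the function names here are illustrative:

```python
# A toy illustration of the disposable-environment idea: execute an
# Internet-facing task inside a scratch area that is destroyed afterwards,
# so nothing the task writes can persist. tempfile stands in for the
# disposable layer a real VM/container snapshot would provide.
import os
import shutil
import tempfile

def run_disposably(task):
    """Execute task(workdir) in a scratch directory, then destroy the directory."""
    workdir = tempfile.mkdtemp(prefix="disposable-")
    try:
        return task(workdir)
    finally:
        shutil.rmtree(workdir, ignore_errors=True)  # "throw the gloves away"

def risky_browse(workdir):
    # Whatever the session drops on disk lands only in the scratch area.
    with open(os.path.join(workdir, "downloaded.bin"), "wb") as f:
        f.write(b"untrusted content")
    return workdir

used_dir = run_disposably(risky_browse)
print(os.path.exists(used_dir))   # the environment is gone
```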

This obviously isn’t a solution for every class of cyber threat out there, but it’s an 80% solution – just as washing your hands and wearing disposable gloves as a triage nurse isn’t going to protect you (or your patient) from every post-surgery ailment.

Operating system providers or security vendors that can seamlessly adopt and automatically procure a clean and pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game – altering the battle field for attackers and the tools of their trade.

Exciting times ahead.

-- Gunter

Wednesday, November 18, 2015

Exploiting Video Console Chat for Cybercrime or Terrorism

A couple of days ago there was a lot of interest in how terrorists may have been using chat features of popular video console platforms (e.g. PS4, XBox One) to secretly communicate and plan their attacks. Several journalists on tight deadlines reached out to me for insight into the threat. Here are some technical snippets on the topic that may be useful for future reference:

  • In-game chat systems have been used by cyber-criminals for over a decade to conduct business and organize transfers of stolen data. Because the chat systems within games tend to use proprietary protocols and exist solely within a secure connection to the game vendor’s server, it is not ordinarily possible to eavesdrop or collectively intercept these communications without some level of legal access to the central server farm. While the game vendors have the ability to inspect the chat traffic, this level of inspection (when conducted - which is rare) tends to focus on inappropriate language and bullying, and that inspection or evidence gathering is almost exclusively limited to text-based communications.
  • As games (particularly multi-player first-person shoot-em-up games) have embraced real-time voice chat protocols, it has become considerably more difficult to inspect traffic and identify inappropriate communications. Most responses to abuse are driven by multiple individuals complaining about another in-game player, rather than by dynamic detection of abuse.
  • This difficulty in monitoring communications is well known in the criminal community and is conveniently abused. Criminals tend not to use their own personal account details, instead using aliases or, more frequently, stolen user credentials - and may electronically proxy their communications via TOR and other anonymizing proxy services to prevent people from working out their physical location. There is a sizable underground market for stolen on-line gaming user credentials. When using stolen credentials, the criminals will often join specific game servers and use pre-arranged times for games (and sub-types of games) to ensure that they will be online with the right group(s) of associates. These game times/details are often discussed in private message boards.
  • While US law enforcement has expended efforts to intercept communications and ascertain geographical location information from TOR and proxy services in the past, it is difficult - since the communications themselves are typically encrypted. Intercepting in-game communications is very difficult because of the complex legal and physical server relationships between (let’s say, for example) Sony (running the PlayStation network), Electronic Arts (running the account management system and some of the gaming server farm), and the game development team (who implemented the communication protocol and runs the in-game service). For law enforcement, getting the appropriate legal interception rights to target an individual (criminal) is complex in this situation and may be thwarted anyway if the criminals choose to use their own encryption tools on top of the game - i.e. the in-game communications are encrypted by the criminals using a third-party non-game tool.
  • Console chat typically takes the form of either text or voice-based chat. Text-based chat is much easier to analyze and consequently easier for console operators and law enforcement to identify threats and abuse. In addition, text-based communications are much easier to store or archive - which means that, after an event, it is often possible for law enforcement to obtain the historical communication logs and perform analysis. Voice-based chat is much more difficult to handle and typically will only be inspected in a streaming fashion because the data volumes are so large - making it impractical to store for any extended period of time. There are also more difficulties in searching voice traffic for key words and threats. Video-based chat is even more difficult again to dynamically inspect, monitor, and store.
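The asymmetry described above is easy to see for text chat: archived logs can be scanned after the fact with trivial tooling. The log lines and watch list below are invented for illustration, and real monitoring needs far more than substring matching:

```python
# Minimal sketch of post-hoc keyword flagging over archived text chat.
# The log format and watch list are invented; real systems need context,
# slang, and multi-language handling well beyond this.
import re

WATCHLIST = ["transfer", "dump", "meet at"]   # illustrative terms only

def flag_chat_log(lines):
    """Return (line_number, line) pairs whose text matches the watch list."""
    pattern = re.compile("|".join(map(re.escape, WATCHLIST)), re.IGNORECASE)
    return [(n, line) for n, line in enumerate(lines, 1) if pattern.search(line)]

log = [
    "player1: nice shot",
    "player2: Transfer the files at 9",
    "player1: gg",
]
print(flag_chat_log(log))
```

Nothing comparable exists off-the-shelf for voice or video streams, which is precisely why those channels are so much harder to police.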
-- Gunter

Tuesday, November 17, 2015

Panel Selection of Penetration Testing Vendors

Most large companies have settled into a repeatable model in the way they undertake penetration testing and engage with their penetration testing suppliers. A popular model for companies that need to have several dozen pentests performed per year is to have a “board” or “panel” of three or four vetted companies and to rotate one provider in and out of the scheme per year – meaning that there is potentially a total refresh of providers every few years.

As vendor performance models go there is a certain undeniable logic to the process. However, it is worth questioning if these “board” models actually deliver better results – in particular, are the full spectrum of vulnerabilities being examined and are the individual consultants capable of delivering the work? In general, I’d argue that such a model often fails to meet these core requirements.

Advanced companies (e.g. brand-name software manufacturers) that require access to the most skilled talent-pool of penetration testers and reverse engineers tend to select vendors based upon the skills and experience of the consultants they employ – often specifically calling out individual consultants by names within the terms of the contract. They also pay premium rates to have access to that exclusive talent pool. In turn, the vendors that employ those consultants market and position their own companies as advanced service providers. For these companies, talent is the critical buying decision and it is not uncommon for the client organization to engage with new vendors when highly skilled or specialized consultants move between service providers.

Most large companies are not as sophisticated in discerning the talent pool needed to review and secure their products – yet still have many of the same demands and needs from a penetration testing skills perspective. For them, vendor selection is often about the responsiveness of the service provider (e.g. can they have 6 penetration testers onsite within two weeks in Germany or Boston) and the negotiated hourly rate for their services. The churn of vendors through the “board” model is typically a compromise effort as they try to balance negotiating more favorable contractual terms, overcoming a perception of skill gaps within their providers’ consulting pools, and serving as a mechanism for tapping a larger pool of vetted consultants.

From past observations, there are several flaws to this model (although several elements are not unique to the model).
  1. Today's automated vulnerability scanners (for infrastructure, web application, and code review) are capable of detecting up to 90% of the vulnerabilities an “average” penetration tester can uncover manually if they use their own scripts and tools. Managed vulnerability scanning services (e.g. delivered by managed security service providers (MSSP)) typically reach the same 90% level, but tend to provide the additional value of removing false positives and confirming true positives. If these automated tools and services already cover 90% of the vulnerability spectrum, organizations need to determine whether closing the gap on the remaining 10% is worth the consulting effort and price. Most often, the answer is “yes, but…” where the “but…” piece is assigned a discrete window of time and effort to uncover or solve – and hence value. Organizations who adopt the “board” approach often fail to get the balance between tools, MSSP, and consultant-led vulnerability discovery programs. There are significant cost savings to be had when the right balances have been struck.
  2. Very few consultants share the same depth of skills and experience. If an organization is seeking to uncover vulnerabilities that lie out of reach of automated discovery tools, it is absolutely critical that the consultant(s) undertaking the work have the necessary skills and experience. There is little point throwing a 15-year veteran of Windows OS security at an Android mobile application served from the AWS cloud – and vice versa. To this end, clients must evaluate the skill sets of the consultants that are being offered up by the vendor and who are expected to do the work. The reality of the situation is that clients that don’t pay the necessary attention can almost guarantee that they’ll get the second-rung consultants (pending availability) to perform this important work. The exception is when a new vendor is being evaluated: they’ll often throw their best at the engagement for a period of time in order to show their corporate value – but clients should not anticipate the same level of results in subsequent engagements unless they are specific about the consultants they need on the job.
  3. Rotating a vendor in or out of a program based upon an annual schedule, independent of evaluating the consultants employed by the company, makes little sense. Many penetration testing companies will have a high churn of technical staff to begin with, and their overall technical delivery capabilities and depth of skills specialization will fluctuate throughout the year. By understanding in advance what skill sets the client organization needs and the amount of experience required in each skill area, those organizations can better rationalize their service providers’ consulting capabilities – and negotiate better terms.
  4. Because consultant skills and experience play such an important role in being able to uncover new vulnerabilities, client organizations should consider cross-vendor teams when seeking to penetration test and secure higher-priority infrastructure, applications, and products. Cherry-picking named consultants from multiple vendors to work on an important security requirement tends to yield the best and most comprehensive findings. Often there is the added advantage of those vendors choosing to compete to ensure that their consultants do the best team work on the joint project – hoping that more follow-on business will fall in their direction.

While “board” or “panel” approaches to penetration testing vendor management may have an appeal from a convenience perspective, the key to getting the best results (both economical and vulnerability discovery) lies with the consultants themselves. 

Treating the vendor companies as convenient payment shells for the consultants you want or need working on your security assignments is OK as long as you evaluate the consultants they employ and are specific on which consultants you want working to secure your infrastructure, applications, and products. To do otherwise is a disservice to your organization.

-- Gunter

Monday, November 9, 2015

The Incredible Value of Passive DNS Data

If a scholar was to look back upon the history of the Internet in 50 years’ time, they’d likely be able to construct an evolutionary timeline based upon threats and countermeasures relatively easily. Having transitioned through the ages of malware, phishing, and APT’s, and the countermeasures of firewalls, anti-spam, and intrusion detection, I’m guessing those future historians would refer to the current evolutionary period as that of “mega breaches” (from a threat perspective) and “data feeds” (from a countermeasure perspective).
Today, anyone can go out and select from a near infinite number of data feeds that run the gamut from malware hashes and phishing URL’s, through to botnet C&C channels and fast-flux IPs. 

Whether you want live feeds, historical data, bulk data, or just APIs you can hook into and query ad hoc, more than one person or organization appears to be offering it somewhere on the Internet; for free or as a premium service.

In many ways security feeds are like water. They’re available almost everywhere if you take the time to look; however, their usefulness, cleanliness, volume, and ease of acquisition may vary considerably. Hence their value is dependent upon the source and the acquirer’s needs. Even then, pure spring water may be free from the local stream, or come bottled and be more expensive than a coffee at Starbucks.

At this juncture in history the security industry is still trying to figure out how to really take advantage of the growing array of data feeds. Vendors and enterprises like to throw around the term “intelligence feeds” and “threat analytics” as a means of differentiating their data feeds from competitors after they have processed multiple lists and data sources to (essentially) remove stuff – just like filtering water and reducing the mineral count – increasing the price and “value”.
Although we’re likely still a half-decade away from living in a world where “actionable intelligence” is the norm (where data feeds have evolved beyond disparate lists and amalgamations of data points into real-time sentry systems that proactively drive security decision making), there exist some important data feeds that add new and valuable dimensions to other bulk data feeds; providing the stepping stones to lofty actionable security goals.

From my perspective, the most important additive feed in progressing towards actionable intelligence is Passive DNS data (pDNS).

For those readers unfamiliar with pDNS, it is traditionally a database containing data related to successful DNS resolutions – typically harvested from just below the recursive or caching DNS server.

Whenever your laptop or computer wants to find out the IP address of a domain name, your local DNS agent will delegate that resolution to a nominated recursive DNS server (listed in your TCP/IP configuration settings), which will either supply an answer it already knows (e.g. a cached answer) or will in turn attempt to locate a nameserver that does know the domain name and can return an authoritative answer from that source.

By retaining all the domain name resolution data and collecting from a wide variety of sources for a prolonged period of time, you end up with a pDNS database capable of answering questions such as “where did this domain name point to in the past?”, “what domain names point to a given IP address?”, “what domain names are known by a nameserver?”, “what subdomains exist below a given domain name?”, and “what IP addresses will a domain or subdomain resolve to around the world?”.
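Those questions map naturally onto a pair of indexes, one keyed by name and one by resolved value. The toy in-memory store below (an illustrative sketch, nothing like a production pDNS design) answers the first two:

```python
# A toy passive DNS store. Each observation is (name, rrtype, value, seen).
# Indexing both directions answers "where did this name point in the past?"
# and "what names have pointed at this IP?". Production stores add volume,
# sensor provenance, and years of history; the records below are invented.
from collections import defaultdict

class PassiveDNS:
    def __init__(self):
        self.by_name = defaultdict(list)   # name  -> [(rrtype, value, seen)]
        self.by_value = defaultdict(set)   # value -> {names}

    def observe(self, name, rrtype, value, seen):
        self.by_name[name].append((rrtype, value, seen))
        self.by_value[value].add(name)

    def history(self, name):
        """Where did this domain name point to in the past?"""
        return sorted(self.by_name[name], key=lambda rec: rec[2])

    def names_for(self, value):
        """What domain names have pointed at this IP?"""
        return sorted(self.by_value[value])

pdns = PassiveDNS()
pdns.observe("bad.example.com", "A", "203.0.113.9", "2014-03-01")
pdns.observe("bad.example.com", "A", "198.51.100.4", "2015-06-12")
pdns.observe("other.example.net", "A", "203.0.113.9", "2014-04-20")

print(pdns.history("bad.example.com"))
print(pdns.names_for("203.0.113.9"))
```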

pDNS, by itself, is very useful, but when used in conjunction with other data feeds its contributions towards actionable intelligence may be akin to turning water into wine.

For example, a streaming data feed of suspicious or confirmed malicious URL’s (extracted from captured spam and phishing email sources) can provide insight as to whether the customers of a company or its brands have been targeted by attackers. However, because email delivery is asynchronous, a real-time feed does not necessarily translate to current window of visibility on the threat. By including pDNS in to the processing of this class of threat feed it is possible to identify both the current and past states of the malicious URL’s and to cluster together previous campaigns by the attackers – thereby allowing an organization to prioritize efforts on current threats and optimize responses.
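One way to sketch that clustering step: treat two domains as belonging to the same campaign infrastructure if pDNS shows they ever resolved to a common IP. The pDNS history below is hand-made stand-in data, and real attribution would weigh far more signals than shared IPs:

```python
# Sketch of clustering malicious domains by shared historical resolution
# infrastructure, using union-find. The domain/IP data is invented.

pdns_history = {   # domain -> set of IPs it has historically resolved to
    "phish-a.example": {"203.0.113.9", "198.51.100.4"},
    "phish-b.example": {"198.51.100.4"},
    "phish-c.example": {"192.0.2.77"},
}

def cluster_by_shared_ip(history):
    """Group domains that share at least one historical resolution IP."""
    parent = {d: d for d in history}

    def find(d):
        while parent[d] != d:
            parent[d] = parent[parent[d]]   # path halving
            d = parent[d]
        return d

    first_domain_for_ip = {}
    for domain, ips in history.items():
        for ip in ips:
            if ip in first_domain_for_ip:
                parent[find(domain)] = find(first_domain_for_ip[ip])
            else:
                first_domain_for_ip[ip] = domain

    clusters = {}
    for d in history:
        clusters.setdefault(find(d), set()).add(d)
    return list(clusters.values())

print(cluster_by_shared_ip(pdns_history))
```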

While pDNS is an incredibly useful tool and intelligence aid, it is critical that users understand that acquiring and building a useful pDNS DB isn’t easy and, as with all data feeds, results are heavily dependent upon the quality of the sources. In addition, because historical and geographical observations are key, the further back the pDNS data goes (ideally 3+ years) and the broader the global ISP coverage of its sources (ideally a few dozen tier-1 operators), the more reliable and useful the data will be. So select your provider carefully – this isn’t something you ordinarily build yourself (although you can contribute to a bigger collector if you wish).

If you’re looking for more ideas on how to use DNS data as a source and aid to intelligence services and even threat attribution, you can find a walk-through of techniques I’ve presented or discussed in the past here and here.

-- Gunter

Wednesday, October 28, 2015

Breaking out of the consulting wave

There are certain thresholds in the life of a company that must be crossed and, in so doing, fundamentally alter the business. In the world of boutique security consulting companies, one such period of change (and resultant growth) is when the task of managing client relationships and securing the next project or client shifts from being part of a senior consultant’s role and transitions in to the waiting hands of a dedicated sales organization.

Over the years I’ve observed first-hand just how difficult this transition can be for both the senior consultants and the executive management.

A critical driver for this transition is the way consultants are forced to divide their time and attention. When the consultant isn’t on a paid engagement they spend time responding to clients and prospects – writing proposals, responding to RFI’s, and scoping engagements etc. When they’re working on a client project, it’s heads-down on delivery – meaning that there’s far less time to engage with other customers or prospects, and limited attention can be applied to lining up the next consulting job. Visually, the cyclical nature of this business mode resembles a graph of out-of-phase waveforms transposed upon one-another.

If the red line represents the effort the consultant applies to “project delivery” over time, and the blue line in turn represents “business development”, it should be clear that troughs in delivery are countered by peaks of hunting for new work, and vice versa.

The problem with this cyclical work pattern is that the company typically only makes money while its consultants are delivering on paid engagements – ideally you’d want the red line to be horizontal and as close to 100% delivery utilization as possible.
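
The cost of the cycle can be illustrated with a toy model (purely illustrative numbers, not real utilization data): if delivery effort oscillates as a sine wave between 0% and 100%, and business development absorbs the remainder, then average billable utilization over a full cycle sits at only about 50%:

```python
import math

# Toy model: delivery and business-development effort as out-of-phase waves.
def delivery(t):
    """Fraction of time spent on paid engagements at time t."""
    return 0.5 + 0.5 * math.sin(t)

def bizdev(t):
    """Fraction of time spent hunting for the next project."""
    return 1.0 - delivery(t)

# Average billable utilization over one full cycle -- only ~50%,
# versus the near-100% a dedicated sales team would free consultants to reach.
samples = [delivery(2 * math.pi * i / 1000) for i in range(1000)]
print(sum(samples) / len(samples))  # ≈ 0.5
```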

If that weren’t already an obvious problem, its effect on the business is then multiplied – as the task of securing business and constructing new proposals typically falls upon the most senior consultants. This in turn means that the most expensive people in the consulting organization, who typically command the highest rates from clients, are the most absorbed in this perpetual sales-delivery cycle.
I’ve heard time and again that “it’s just the way it is”, along with arguments such as:
  • As a technical consultancy, the client demands that they deal directly with the technical manager doing the delivery.
  • Scoping a job and preparing a technical proposal requires an expert consultant.
  • The onsite consultants know the customer best. They’re always doing jobs for the client.
  • Our consultants are managing consultants, and that’s what they do.

The list of “why things can’t change” could go on ad infinitum, but the reality is that a consulting company cannot grow and scale beyond its senior consultants until it breaks out of this cyclical pattern – which is why this particular threshold is both so important and so difficult for a company to cross.

Some things I’ve learnt over the years in navigating this business transition (which will hopefully serve as useful advice to other businesses seeking to cross the threshold) include:
  • The best security consultants, no matter how highly they rate their skills at procuring and securing new business, are at best average farmers of an account (compared to a dedicated salesperson). Yes, they typically understand the clients they do regular work for and are proficient at recognizing other opportunities within that client organization – but that pursuit and business development is limited to the client personnel they actually interact with during an engagement. The net result is that the client’s technical on-site folks love and adore the consultant and company, but most engagements remain limited to a silo within the overall organization. For this reason the consulting company needs “hunters” – folks with the business development experience to identify new people and opportunities in other parts of the same business.
  • Dropping a “sales guy” into the organization and letting them figure it out because they have a track record of selling things is unlikely to succeed. Security consulting (in particular) is a very technical sell, and those tasked with hunting and closing new clients and projects need not only to be technical themselves, but also to be backed by deeper technical expertise. Consider the physical differences between an Olympic high-jumper and an Olympic shot-putter: each sport requires unique attributes, and neither athlete is likely to triumph in the other’s field of expertise. While an Olympic decathlon medalist may be able to do both, they’re also unlikely to win against someone who specialized in just one of those sports.
  • Consulting managers are not salespeople; they’re delivery coordinators and quality evangelists. Their role is often inglorious – in between chasing consultants for expenses and report deliverables, they spend much of their time apologizing to the client for things that didn’t go quite to plan and making the client happy again. Yes, they’re often the front line with existing customers and are core to delivering proposals to new clients, but their business development focus is (and should remain) blinkered to delivery.
  • In many cases the role of a consulting manager can morph into that of a sales engineer (just never call them that!). When a consulting manager has no direct reports, they can serve effectively as the technical backup to the sales team – scoping engagements, constructing technical proposals, and acting as the technical evangelist in new client and prospect meetings. This “sales engineer” (SE) role is often a critical component in building and supporting a successful consulting sales team. The stronger these technical experts are, and the more years of consulting they have under their belt, the more respect they tend to garner from prospective clients and the easier it is to close deals. In many ways they add technical credibility to the sales organization in the eyes of technical clients.
  • Plan on building out a central team of technical authors. The technical author team provides the grease for easing a company through the transition period. By (slowly) taking over some of the tedious consulting work – i.e. proposal generation, report proofing, and quality assurance on deliverables – the technical author team ensures a consistent quality of client-facing materials, eases the burden on the consulting and sales teams, and frees up the time of valuable consultants. For global consulting companies, or businesses that have consultants scattered around the world, the technical authorship team also helps overcome second-language frailties. Some caution needs to be maintained, as these teams can quickly be overwhelmed by high workloads – which is why they should ideally report to a senior consulting manager.
  • Senior and managing consultants who have been “managing accounts” often have compensation plans linked to closing client deals. The incorporation of a dedicated sales team means that compensation plans need to be reevaluated for those consultants. Ideally this conversation happens prior to the hiring and buildout of a sales team, and the consultants concerned are party to how the transition will occur and how compensation will change. Since the monies associated with managing an account are often significant, it is vital that those consultants are offered alternative means of “making their number”. Luckily, the company has several tools at its disposal. First, since the purpose of employing a dedicated sales team is to grow revenue and increase the billable hours of senior consultants, there is typically scope to increase the base salaries of those consultants and to create a bonus structure based upon utilization and customer-satisfaction levels. Alternatively, the role conversion into a consulting manager (i.e. SE) can be paired with a hybrid compensation model, where factors such as new clients versus lateral growth in an existing client are bonused differently.

The business transformation from 100% consultants to a mix of consultants and dedicated sales personnel can be perilous if not managed carefully. The senior consultants need to be well informed and to actively participate in the transition, and the sales team should be built gradually from a nucleus of experienced sales professionals who have come from consulting businesses that have already successfully made the transition.

Any transition will take time. The senior consultants in particular must be gradually weaned off their account-management responsibilities, which should be replaced with incentives that drive a higher utilization rate for them and any other consultants they may lead. The worst thing a leadership team can do is expect the transition to happen overnight. Instead, they should anticipate a 3-9 month transition; the end result is worth it, though.

Friday, October 23, 2015

Hacker Hat-trick at TalkTalk

For the third time this year, the UK broadband provider TalkTalk have seen their online defenses fall to cyber attackers.

While the company has been quick to notify their customers of the breach (it was observed on Wednesday this week and reported the following day) and is currently working with law enforcement, details are still relatively sparse. Given the very short period between detection of the attack and public notification, it is unlikely that any significant cyber forensics exercise has been conducted… so it will likely take those tasked with the investigation a couple of weeks to get a solid understanding of the scope of the breach and what was likely touched or stolen by the attackers.

Regardless, the stories currently being published about the nature of the breach and what was actually stolen are confusing, and the details are often contradictory (see Business Insider, The Telegraph, BBC, and AOL). It would appear that the names, addresses, dates of birth, email addresses, telephone numbers, TalkTalk account information, and credit card and/or bank details of some 4,000,000 subscribers may have been stolen, and that the data may not have been (completely?) encrypted… or maybe the encryption keys were similarly stolen.

Claims for the latest hack are also being attributed by some to a Russian Islamist group (referring to itself as “Th3 W3b 0f H4r4m”), which has posted a claim online along with samples of data purporting to have come from the TalkTalk site (see Pastebin).

Some stories refer to there being a DDoS attack or component. A DDoS attack isn’t going to breach an internet service and result in data theft, but it’s not unheard of for attackers to use one as a mechanism to divert security teams and investigative resources while a more focused and targeted attack is conducted. It’ll be interesting to see whether this actually happened, or whether the DDoS (if there was one) was unrelated… although it would be difficult to tell unless the attackers really messed up and left a trail of breadcrumbs – DDoS services can be procured easily over the Internet for as little as $50 per hour from dozens of illicit (but professional) providers.

If there are lessons to be learned so far from this hat-trick breach, they include:
  • Hackers are constantly looking for easy prey. If you’re easy pickings and you get a reputation for being a soft target, you should anticipate being targeted and breached multiple times and likely by different attackers.
  • There should be no excuse for not carefully encrypting customer data, and for not using cryptographic techniques that make it impractical for attackers who do breach an organization’s defenses to profit from the encrypted data they steal.
  • Calling an attacker or the tools they use “sophisticated” – and expecting the victims of the breach to console themselves with the knowledge that the organization charged with protecting their data was defeated by a supposedly more advanced adversary – is wrong. It simply underlines a failure to understand your adversaries and to invest in the appropriate security strategies.
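
On the second point, one example of such a cryptographic technique – salted key stretching for stored credentials, sketched here with Python’s standard library (illustrative parameters only, and no relation to TalkTalk’s actual scheme) – makes bulk offline cracking of stolen records prohibitively expensive:

```python
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # deliberately slow, to raise the cost of offline cracking

def protect(password: str) -> tuple:
    """Return (salt, digest); only these are stored, never the password."""
    salt = secrets.token_bytes(16)  # unique per record, defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the stretched hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = protect("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("password123", salt, digest))                   # False
```

An attacker who steals only (salt, digest) pairs must grind through the iteration count per guess, per record – which is exactly the property that separates “stolen but impractical to profit from” data from a ready-to-sell database.
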
-- Gunter Ollmann