Tuesday, April 24, 2018

Cyber Scorecarding Services

Ample evidence exists to show that shortcomings in a third party’s cyber-security posture can have an extremely negative effect on the security integrity of the businesses they connect or partner with. Consequently, for a couple of decades there has been a continuous and frustrated desire for some kind of independent verification or scorecard mechanism that can help primary organizations validate and quantify the overall security posture of the businesses they must electronically engage with.

A couple of decades ago organizations could host a small clickable logo on their websites – often depicting a tick or some permutation of a “trusted” logo – that would display an independent validation certificate detailing their trustworthiness. Obviously, such a system was open to abuse. Over the last five or so years, the trustworthiness verification process has migrated from being the third party’s responsibility to a first-party one.

Today, a growing number of brand-spanking-new start-ups is adding to the pool of slightly longer-in-the-tooth companies taking on the mission of independently scoring the security and cyber integrity of organizations doing business over the Web.

The general premise of these companies is that they’ll undertake a wide (and widening) range of passive and active probing techniques to map out a target organization’s online assets, crawl associated sites and hidden crevices (underground, overground, wombling free… like the Wombles of Wimbledon?) to look for leaks and unintended disclosures, evaluate current security settings against recommended best practices, and even dig up social media dirt that could be useful to an attacker – all contributing to a dynamic report and ultimate “scorecard” that is effectively sold to interested buyers or service subscribers.

I can appreciate the strong desire for first-party organizations to have this kind of scorecard on hand when making decisions on how best to trust a third-party supplier or partner, but I do question a number of aspects of the business model behind providing such security scorecards. And, as someone frequently asked by technology investors looking for guidance on the future of such business ventures, there are additional things to consider as well.

Are Cyber Scorecarding Services Worth it?
As I gather my thoughts on the business of cyber scorecarding and prepare to engage with the purveyors of such services again over the coming weeks (post RSA USA Conference), I’d offer up the following points as to why this technology may still have some business wrinkles and why I’m currently questioning the long-term value of the business model.

1. Lack of scoring standards
There is no standard to the scorecards on offer. Every vendor is vying to make their scoring mechanism the future of the security scorecard business. As vendors add new data sources or encounter new third-party services and configurations that could influence a score, they’re effectively making things up as they go along. This isn’t necessarily a bad thing and ideally the scoring will stabilize over time at a per vendor level, but we’re still a long way away from having an international standard agreed to. Bear in mind, despite two decades of organizations such as OWASP, ISSA, SANS, etc., the industry doesn’t yet have an agreed mechanism of scoring the overall security of a single web application, let alone the combined Internet presence of a global online business.

2. Heightened Public Cloud Security
Third-party organizations that have moved to the public cloud, enabled the bulk of the default security features freely available to them, and are using the automated security alerting and management tools provided, are already very secure – much more so than their previous on-premise DIY efforts. As more organizations move to the public cloud, they all begin to share the same security features, so why would a third-party scorecard be necessary? We’re rapidly approaching a stage where just having an IP address in a major public cloud puts your organization ahead of the pack from a security perspective. Moreover, I anticipate that the default security of public cloud providers will continue to advance in ways that are not easily discernible externally (e.g. impossible-travel protection against credential misuse) – and these kinds of ML/AI-led protection technologies may prove more successful than the traditional network-based defense-in-depth strategies the industry has pursued for the last twenty-five years.

3. Score Representations
Not only is there no standard for scoring an organization’s security, it’s not clear what you’re supposed to do with the scores that are provided. This isn’t a problem unique to the scorecard industry – we’ve observed the same phenomenon with CVSS scoring for 10+ years.
At what threshold should I be worried? Is a 7.3 acceptable, while a 7.6 means I must patch immediately? How much more of a risk to my business is an organization with a score of 55 versus a vendor that scores 61?
The thresholds for action (or inaction) based upon a score are arbitrary and will be in conflict with each new advancement or input the scorecard provider includes as they evolve their service. Is the 88.8 of January the same as the 88.8 of May after the provider added new features that factored in CDN provider stability and Instagram crawling? Does this month’s score of 78.4 represent a newly introduced weakness in the organization’s security, or is the downgraded score an artifact of new insights that weren’t accounted for previously by the score provider?
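
To see why January’s 88.8 may not equal May’s 88.8, consider a hypothetical weighted-sum scorer – the factors, weights, and method below are invented for illustration and are not any vendor’s actual model:

```python
def score(org, weights):
    """Hypothetical weighted-sum scorecard; weights are percentages totaling 100."""
    return sum(org[factor] * w for factor, w in weights.items()) / 100

# The organization's measured posture (0-100 per factor) never changes.
org = {"tls_config": 90, "patch_cadence": 85, "leaked_creds": 92, "cdn_stability": 60}

# January's model uses three factors; May's adds CDN stability and rebalances.
jan = {"tls_config": 40, "patch_cadence": 30, "leaked_creds": 30}
may = {"tls_config": 30, "patch_cadence": 25, "leaked_creds": 25, "cdn_stability": 20}

print(score(org, jan))  # 89.1
print(score(org, may))  # 83.25 -- a 5.85-point "downgrade" with zero change by the org
```

The drop says nothing about the organization’s security; it is purely an artifact of the provider’s evolving inputs.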

4. Historical References and Breaches
Then there’s the question of how much of an organizations past should influence its future ability to conduct business more securely. If a business got hacked three years ago and the responsibly disclosed and managed their response – complete with reevaluating and improving their security, does another organization with the same current security configuration have a better score for not having disclosed a past breach?
Organizations get hacked all the time – it’s why modern security now works on the premise of “assume breach”. The remotely visible and attestable security of an organization provides no real insight into whether it is currently hacked or has been recently breached.

5. Gaming of Scorecards
Gaming of the scorecard systems is trivial and difficult to defend against. If I know who my competitors are and which scorecard provider (or providers) my target customer is relying upon, I can adversely affect their scores. A few faked “breached password lists” posted to PasteBin and underground sites, a handful of spam and phishing emails sent, a new domain name registration and craftily constructed website, a few subtle contributions to IP blacklists, etc. and their score is affected.
I haven’t looked recently, but I wouldn’t be surprised if some blackhat entrepreneurs haven’t already launched such a service line. I’m sure it could pay quite well and requires little effort beyond the number of disinformation services that already exist underground. If scorecarding ever becomes valuable, so too will its deception.

6. Low Barrier to Market Entry
The barrier to entry into the scorecarding industry is incredibly low. Armed with “proprietary” techniques and “specialist” data sources, anyone can get started in the business. If for some reason third-party scorecarding becomes popular and financially lucrative, I anticipate that any of the popular managed security service providers (MSSPs) or automated vulnerability assessment (VA) providers could launch a competitive service with as little as a month’s notice and only a couple of engineers.
At some point in the future, if there ever were to be standardization of scorecard scores and evaluation criteria, that’s when the large MSSPs and VA providers would likely add such a service. The problem for all the new start-ups and longer-toothed start-ups is that these MSSPs and VA providers would have no need to acquire their technology or clientele.

7. Defending a Score
Defending the integrity and righteousness of your independent scoring mechanism is difficult and expensive. Practically all the scorecard providers I’ve met like to explain their efficacy of operation as if it were a credit bureau’s credit score – as if that explains the ambiguities of how they score. I don’t know all the data sources and calculations that credit bureaus use in their rating systems, but I’m pretty sure they’re not port scanning websites, scraping IP blacklists, and enumerating service banners – and I doubt the organizations being scored have the same ability to review and correct the data the scoring system relies upon.
My key point here, though, lies with the repercussions of getting the score wrong, or of providing a score that adversely affects an organization’s ability to conduct business online – regardless of the score’s righteousness. The affected business will challenge the score, demand the provider “fix their mistake”, and seek compensation for the damage incurred. In many ways it doesn’t matter whether the scorecard provider is right or wrong – costs are incurred defending each case (in energy expended, financial resources, lost time, and lost reputation). For cases that eventually make it to court, I think the “look at the financial credit bureaus” defense will fall a little flat.

Final Thoughts
The industry strongly wants a scoring mechanism to help distinguish good from bad, and to help prioritize security responses at all levels. If only it were that simple, it would have been solved quite some time ago.

Organizations are still trying to make red/amber/green tagging work for threat severity, business risk, and response prioritization. Every security product tasked with uncovering vulnerabilities, collating misconfigurations, aggregating logs and alerts, or monitoring for anomalies is equally capable of (and likely is) producing its own scores.

Providing a score isn’t a problem in the security world; the problem lies in knowing how to respond to the score you’ve been presented with!

-- Gunter Ollmann

Thursday, March 8, 2018

NextGen SIEM Isn’t SIEM

Security Information and Event Management (SIEM) is feeling its age. Harkening back to a time in which businesses were prepping for the dreaded Y2K and the cutting edge of security technology was bound to DMZs, bastion hosts, and network vulnerability scanning – SIEM has been along for the ride as both defenses and attackers have advanced over the intervening years. Nowadays though it feels less like a ride with SIEM, and more like towing an anchor.

Despite the deepening trench gouged by the SIEM anchor slowing down threat response, most organizations persist in throwing more money and resources at it. I’m not sure whether it’s because of a sunk-cost fallacy or the lack of a viable technological alternative, but they continue to diligently trudge on with their SIEM – complaining with every step. I’ve yet to encounter an organization that feels its SIEM is anywhere close to scratching its security itch.

The SIEM of Today
The SIEM of today hasn’t changed much over the last couple of decades with its foundation being the real-time collection and normalization of events from a broad scope of security event log sources and threat alerting tools. The primary objective of which was to manage and overcome the cacophony of alerts generated by the hundreds, thousands, or millions of sensors and logging devices scattered throughout an enterprise network – automatically generating higher fidelity alerts using a variety of analytical approaches – and displaying a more manageable volume of information via dashboards and reports.

As the variety and scope of devices providing alerts and logs continue to increase (often exponentially), consolidated SIEM reporting has had to focus upon statistical analytics and trend displays to keep pace with the streaming data – increasingly centered on the overall health of the enterprise, rather than threat detection and event risk classification.

Whilst the collection of alerts and logs is conducted in real-time, the ability to aggregate disparate intelligence and alerts to identify attacks and breaches has fallen to offline historical analysis via searches and queries – giving birth to the Threat Hunter occupation in recent years.

Along the way, SIEM has become the beating heart of Security Operations Centers (SOC) – particularly over the last decade – and it is often difficult for organizations to disambiguate SIEM from SOC. Not unlike Frankenstein’s monster, additional capabilities have been grafted onto today’s operationalized SIEMs; advanced forensics and threat hunting capabilities now dovetail into SIEM event archive databases, a new generation of automation and orchestration tools has instantiated playbooks that process aggregated logs, and ticketing systems track responders’ efforts to resolve and mitigate threats.

SIEM Weakness
There is, however, a fundamental weakness in SIEM, and it has become increasingly apparent over the last half-decade as more advanced threat detection tools and methodologies have evolved, facilitated by the widespread adoption of machine learning (ML) and machine intelligence (MI) technologies.

Legacy threat detection systems such as firewalls, intrusion detection systems (IDS), network anomaly detection systems, anti-virus agents, network vulnerability scanners, etc. have traditionally had a high propensity towards false positive and false negative detections. Compounding this, for many decades (and still a large cause for concern today) these technologies have been sold and marketed on their ability to alert in volume – i.e. an IDS that can identify and alert upon 10,000 malicious activities is too often positioned as “better” than one that only alerts upon 8,000 (regardless of alert fidelity). Alert aggregation and normalization is of course the bread and butter of SIEM.

In response, a newer generation of vendors has brought forth new detection products that improve upon and replace most legacy alerting technologies – focused not only on finally resolving the false positive and false negative alert problem, but on moving beyond alerting and into mitigation – using ML and MI to facilitate behavioral analytics, big data analytics, deep learning, expert system recognition, and automated response orchestration.

The growing problem is that these new threat detection and mitigation products don’t output alerts compatible with traditional SIEM processing architectures. Instead, they provide output such as evidence packages, logs of what was done to automatically mitigate or remediate a detected threat, and talk in terms of statistical risk probabilities and confidence values – having resolved a threat to a much higher fidelity than a SIEM could. In turn, “integration” with SIEM is difficult and all too often meaningless for these more advanced technologies.

A compounding failure with the new ML/MI powered threat detection and mitigation technologies lies with the fact that they are optimized for solving a particular class of threats – for example, insider threats, host-based malicious software, web application attacks, etc. – and have optimized their management and reporting facilities for that category. Without a strong SIEM integration hook there is no single pane of glass for SOC management; rather a half-dozen panes of glass, each with their own unique scoring equations and operational nuances.

Next Generation SIEM
If traditional SIEM has failed and is becoming more of a bugbear than ever, and the latest generation of ML and MI-based threat detection and mitigation systems aren’t on a trajectory to coalesce by themselves into a manageable enterprise suite (let alone a single pane of glass), what does the next generation (i.e. NextGen) SIEM look like?

Looking forward, next generation SIEM isn’t SIEM; it’s an evolution of SOC – or, to coin a more prescriptive turn of phrase, “SOC-in-a-box” (and inevitably “Cloud SOC”).

The NextGen SIEM lies in the natural evolution of today’s best hybrid-SOC solutions. The Frankenstein add-ins and bolt-ons that have extended the life of SIEM for a decade are the very fabric of what must ascend and replace it.

For the NextGen SIEM, SOC-in-a-box, Cloud SOC, or whatever buzzword the professional marketers eventually pronounce – to be successful, the core tenets of operation will necessarily include:
  • Real-time threat detection, classification, escalation, and response. Alerts, log entries, threat intelligence, device telemetry, and indicators of compromise (IOC), will be treated as evidence for ML-based classification engines that automatically categorize and label their discoveries, and optimize responses to both threats and system misconfigurations in real-time.
  • Automation is the beating heart of SOC-in-a-box. With no signs of data volumes falling, networks becoming less congested, or attackers slackening off, automation is the key to scaling to the business’s needs. Every aspect of SOC must be designed to be fully autonomous, self-learning, and elastic.
  • The vocabulary of security will move from “alerted” to “responded”. Alerts are merely one form of telemetry that, when combined with overlapping sources of evidence, lay the foundation for action. Businesses need to know which threats have been automatically responded to, and which are awaiting a remedy or response.
  • The tier-one human analyst role ceases to exist, and playbooks will be self-generated. The process of removing false positives and gathering corroborating evidence for true positive alerts can be done much more efficiently and reliably using MI. In turn, threat responses by tier-two or tier-three analysts will be learned by the system – automatically constructing and improving playbooks with each repeated response.
  • Threats will be represented and managed in terms of business risk. As alerts become events, “criticality” will be influenced by age, duration, and threat level, and will sit adjacent to “confidence” scores that take into account the reliability of sources. Device auto-classification and responder monitoring will provide the framework for determining the relative value of business assets, and consequently the foundation for risk-based prioritization and management.
  • Threat hunting will transition to evidence review and preservation. Threat hunting grew from the failures of SIEM to correctly and automatically identify threats in real-time. The methodologies and analysis playbooks used by threat hunters will simply be part of what the MI-based system incorporates in real-time. Threat hunting experts will in turn focus on preservation of evidence in cases where attribution and prosecution become probable or desirable.
  • Hybrid networks become native. The business network – whether it exists in the cloud, on premise, at the edge, or in the hands of employees and customers – must be monitored, managed, and have threats responded to as a single entity. Hybrid networks are the norm and attackers will continue to test and evolve hybrid attacks to leverage any mitigation omission.
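
As a rough sketch of how such risk-based prioritization might combine criticality, confidence, and asset value – the field names, decay model, and weights below are my own assumptions, not any product’s actual scoring:

```python
import math
import time
from dataclasses import dataclass

@dataclass
class ThreatEvent:
    threat_level: float  # 0.0-1.0, from the detection engine
    confidence: float    # 0.0-1.0, reliability of the contributing evidence sources
    asset_value: float   # 0.0-1.0, from device auto-classification
    first_seen: float    # epoch seconds

    def criticality(self, now: float, half_life: float = 3600.0) -> float:
        """Age-weighted threat level: criticality halves every `half_life` seconds."""
        age = now - self.first_seen
        return self.threat_level * math.exp(-age * math.log(2) / half_life)

    def business_risk(self, now: float) -> float:
        """Age-adjusted criticality, discounted by source confidence and asset value."""
        return self.criticality(now) * self.confidence * self.asset_value

# Rank a queue of events by business risk rather than raw alert severity.
now = time.time()
queue = [
    ThreatEvent(0.9, 0.8, 1.0, now),          # fresh, high-confidence hit on a key asset
    ThreatEvent(0.95, 0.4, 0.2, now - 7200),  # older, low-confidence hit on a lab box
]
queue.sort(key=lambda e: e.business_risk(now), reverse=True)
```

The point of the sketch is the ordering: a slightly “weaker” detection on a critical asset, backed by reliable sources, outranks a louder alert on a throwaway machine.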

Luckily, the NextGen SIEM is closer than we think. As SOC operations have increasingly adopted the cloud to leverage elastic compute and storage capabilities, hard-learned lessons in automation and system reliability from the growing DevOps movement have further defined the blueprint for SOC-in-a-box. Meanwhile, the current generation of ML-based and MI-defined threat detection products, combined with rapid evolution of intelligence graphing platforms, have helped prove most of the remaining building blocks.

These are not wholly additions to SIEM, and SIEM isn’t the skeleton of what will replace it.

The NextGen SIEM starts with the encapsulation of the best and most advanced SOC capabilities of today, incorporates its own behavioral and threat detection capabilities, and dynamically learns to defend the organization – finally reporting on what it has successfully resolved or mitigated.

-- Gunter Ollmann

Tuesday, March 6, 2018

Lock Picking at Security Conferences

Both new and returning attendees at technical security conferences are often puzzled by the presence of lock picking break-out areas and the gamut of hands-on tutorials. For an industry primarily focused on securing electronic packets of ones and zeros, an enthusiasm for manual manipulation of mechanical locks seems out of place to many.

Over the years, I’ve heard many reasons and justifications for the presence of lock picking villages, the hands-on training, and the multitude of booths selling the tools of the trade. The answers vary considerably and tend to be weighted by how much of a tinkerer or hacker the respondent thinks they are.

The reality – I think – can be boiled down to two primary reasons.

For most longtime security professionals who now take to the stage to educate attendees on the fragility of the cyber-security domain, or who attempt to mentor and guide the inbound generation of attackers and defenders, locks and lock picking serve as a valuable teaching aid. As such, through our influence, we encourage people to tinker and learn.

By examining how mechanical locks operate and how they have evolved to counter each new picking technique used to subvert earlier models, cyber-security professionals begin to appreciate three fundamentals of security:

  1. Attackers learn by dissecting and studying the intricacies of the defenses before them and must practice, practice, practice to defeat them. 
  2. Defenders must understand the tools and methodologies that the attackers avail themselves of if they are to devise and deploy better defenses, and 
  3. No matter how well thought-out in advance, the limitations of fabrication tolerances and the environments with which the security technology must operate within will introduce new flaws and vectors for attack.

These are incredibly important lessons that must be learned. Would-be professionals seeking to get into penetration testing, red teaming, or reverse engineering can’t just pick up the latest Hacking Exposed edition and complete online Q&A exams – they must roll up their sleeves, accumulate hours of hands-on experience of both failures and successes, and build that muscle memory. Would-be defenders can’t just read the operations manuals of the devices they’ll be entrusted to protect, or sit through vendor training courses on how to operate threat detection systems – they must learn the tools of the attackers and (ideally) gain basic proficiency in their use if they’re to make valuable contributions to defense. Meanwhile, the third point is where both attackers and defenders need to learn humility – no matter how well we think we know a system or how often we’ve practiced against a technology, subtle flaws and unexpected permutations may undermine our best efforts through no fault of our own skills.

As a teaching aid, locks and lock picking are a tactile means of understanding the foibles of cyber security.

But there is a second reason… because it’s exciting and fun!

Lock picking feeds into the historical counter-culture of hacking. There’s a kind of excitement learning how to defeat something near the edge of legitimacy – an illicit knowledge that for centuries has been the trade-craft of criminals.

With a few minutes of guidance and practice, the easiest locks begin to pop open and the hacker is drawn to the challenge of a harder lock, and so on. As frustrations grow, the reward of the final movement and pop of the lock is often as stimulating as scoring a goal in some kind of popular uniformed team sport.

The skills associated with mastering lock picking, however, have little translation to being a good hacker – except perhaps the single-minded intensity and tenaciousness to solve technical challenges.

I have noticed that there are a disproportionate number of hackers who are accomplished lock pickers, (semi) professional magicians, and wall-flower introverts. Arguably, lock picking (and magic tricks) may be the hacker’s best defense at uncomfortable social events. Rather than have an awkward conversation about sports or pop culture, it’s often time to whip out a lock and a pack of picks, and teach instead of prattle.

-- Gunter Ollmann

Tuesday, December 19, 2017

Consumer IoT Security v1.01

They say charity begins at home; well, IoT security probably should too. The number of Internet-enabled and connected devices we populate our homes with grows year on year - yet, with each new device we connect up, the less confident we become in our home security.

The TV news and online newspapers on the one hand extol the virtues of each newly launched Internet-connected technology, yet on the other they tell the tale of how your TV is listening to you and how the animatronic doll your daughter plays with is spying on her while she sleeps.

To be honest, it amazes me that some consumer networking company hasn't been successful in solving this scary piece of IoT real estate, and in winning over the hearts and minds of family IT junkies at the same time.

With practically all these IoT devices speaking over WiFi, and the remainder (let's guess at 10% of home deployments) using Zigbee, Z-Wave, Thread, or WeMo, logically a mix of current-generation smart firewall, IPS, and behavioral log analytics would easily remediate well over 99% of the Internet attacks these IoT devices are likely to encounter, and 90% of the remaining threats conducted from within the local network or residential airwaves.

Why is it that we haven't seen a "standard" WiFi home router employing these security capabilities in a meaningful way - and marketed in a similar fashion to the ads we see for identity protection, insurance companies, and drugs (complete with disclaimers if necessary)?

When I look at the long list of vulnerabilities disclosed weekly for all the IoT devices people are installing at home, it is rare to encounter one that couldn't be protected either by a purpose-built IPS rule or by generic attack-vector rules (such as those covering password brute forcing).
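
As a minimal sketch of what such a generic brute-forcing rule could look like inside a home router - the thresholds, function names, and the IP address are illustrative assumptions, not any vendor's implementation:

```python
from collections import defaultdict, deque

WINDOW_SECS = 60.0   # sliding window length
THRESHOLD = 5        # tolerated failed logins per source IP per window

_failures = defaultdict(deque)

def failed_login(src_ip: str, ts: float) -> bool:
    """Record a failed login; return True once src_ip should be blocked."""
    q = _failures[src_ip]
    q.append(ts)
    while q and ts - q[0] > WINDOW_SECS:  # drop attempts outside the window
        q.popleft()
    return len(q) > THRESHOLD

# e.g. a dictionary attack against a camera's exposed telnet service:
verdicts = [failed_login("203.0.113.7", 1000.0 + i) for i in range(10)]
print(verdicts[-1])  # True -- ten failures in ten seconds trips the rule
```

A rule this generic protects the device behind it regardless of which specific firmware vulnerability or default credential the attacker is hammering on.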

If you also included a current (i.e. 2017) generation of ML-powered log analytics and behavioral detection systems in the home WiFi router, you could easily shut out attack and abuse vectors such as backdoor voyeurism, bitcoin mining, and stolen credential use.

Elevating home IoT security to v1.01 seems so trivial.

The technologies are available, the threat is ever present, the desire for a remedy is there, and I'd argue the money is there too. Anyone installing an app-controllable light bulb, door lock, or coffee maker has obviously already invested several hundred dollars in their WiFi kit, Internet cable/fiber provider, laptop(s), and cell phone(s) - so an incremental hit of $100-200 on the WiFi router unit's RRP, plus a $9.99 or $19.99 monthly subscription fee for IPS signatures, trained classifiers, and behavioral analysis updates, seems like a no-brainer.

You'd think that Cisco/Linksys, D-Link, Netgear, etc. would have solved this problem already... that IoT security (at home) would be "in the bag" and we'd be at v1.01 status already. Maybe market education is lagging, and a focused advertising campaign centered on securing your electronic home would push the market along? Or perhaps these "legacy" vendors need an upstart company to come along and replace them?

Regardless, securing IoT at home is not a technologically challenging problem. It has been solved many times with different tools within the enterprise (for many years), and the limited scope and sophistication of home networking makes the problem much easier to deal with.

I hope some intelligent security vendor can come to the fore with the right mix of security technology. Yes, it costs R&D effort to maintain signatures, train classifiers, and broaden behavioral detection scenarios, but even if only 1% of the homes that have WiFi routers today (approximately 150 million) paid a $9.99 monthly subscription for updates, that $15m per month would be the envy of 95% of security vendors around the world.
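
The back-of-the-envelope math, sketched in integer cents to avoid rounding surprises:

```python
wifi_homes = 150_000_000         # rough estimate of homes with WiFi routers today
subscribers = wifi_homes // 100  # 1% adoption
fee_cents = 999                  # $9.99 monthly subscription
monthly_revenue = subscribers * fee_cents // 100
print(f"${monthly_revenue:,} per month")  # $14,985,000 per month -- roughly $15m
```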

-- Gunter

[Note to (potential) vendors that want to create such a product or add such capabilities to an existing product, I'd happily offer up my expertise, advice, and contact-book to help you along the way. I think this is a massive hole in consumer security that is waiting to be filled by an innovative company, and will gladly help where I can.]

Sunday, December 17, 2017

Deception Technologies: Deceiving the Attacker or the Buyer?

Deception technologies have, over the last three-ish years, come into vogue, with more than a dozen commercial vendors and close to a hundred open source products to choose from. Solutions range from local host canary-file monitoring through to autonomous, self-replicating, and dynamic copies of the defender's network operating like an endless hall of mirrors.

The technologies employed for deception purposes are increasingly broad - but the ultimate goal is for an attacker to be deceived into tripping over or touching a specially deposited file, user account, or networked service and, in doing so, sounding an alarm so that the defenders can start to... umm... well..., often it's not clear what the defender is supposed to do. And that's part of the problem with the deception approach to defense.

I'm interested in, but deeply cautious about, the claims of deception technology vendors - and so should you be. It's incredibly difficult to justify their expense and understand their overall value when incorporated into a defense-in-depth strategy.

There have been many times over the last couple of decades I have recommended to my clients and businesses a quick and dirty canary solution. For example, adding unique user accounts that appear at the start and end of your LDAP, Active Directory, or email contacts list - such that if anyone ever emails those addresses, you know you've been compromised. And similar canary files or shares for detecting the presence of worm outbreaks. But, and I must stress the "but", those solutions only apply to organizations that have not invested in the basics of network hygiene and defense in depth.
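
A minimal sketch of the mail-log side of that canary trick - the addresses and log format below are invented for illustration:

```python
# Canary addresses seeded so they sort to the very top and bottom of the contacts list.
CANARY_ADDRESSES = {
    "aaa-canary@example.com",
    "zzz-canary@example.com",
}

def canary_hits(mail_log_lines):
    """Return log lines addressed to a canary -- each one suggests the
    contacts list has been scraped and is being abused."""
    return [line for line in mail_log_lines
            if any(addr in line.lower() for addr in CANARY_ADDRESSES)]

log = [
    "2017-12-17T10:02:11 rcpt=bob@example.com status=delivered",
    "2017-12-17T10:02:12 rcpt=ZZZ-Canary@example.com status=delivered",
]
print(len(canary_hits(log)))  # 1 -- time to assume compromise and investigate
```

The whole mechanism is a dozen lines of log filtering, which is rather the point: it is a quick-and-dirty control for shops without deeper defenses, not something worth a dedicated product.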

Honeypots, honeynets, canaries, and deception products are HIGHLY prone to false positives. Vendors love to say otherwise, but the practical reality is that there's a near-infinite number of everyday things that'll set them off - in whole or in part. For example:

  • Regular vulnerability scanning,
  • Data backups and file recovery,
  • System patching and updates,
  • Changes in firewall or VPN policies,
  • Curious employees,
  • Anti-virus scanners and suite updates,
  • On-premise enterprise search systems,
  • Cloud file repository configuration changes and synchronization.
The net result is that either you ignore or turn off the system after a short period of time, or you swell your security team's ranks and add headcount to continually manage and tune the system(s).

If you want my honest opinion though, I'd have to say that the time for deception-based products has already passed.

If you're smart, you've already turned on most of the logging features of your desktop computers, laptops, servers, and infrastructure devices, and you're capturing all file, service, user, and application access attempts. You're therefore already capturing more of the raw information necessary to detect any threat your favorite deception technology is proposing to identify for you. Obviously, the trick is being able to process those logs for anomalies and responding to the threat.

This year alone the number of automated log analytics platforms and standalone products that employ AI and machine learning that are capable of real-time (or, worst case, "warm") detection of threats, has grown to outnumber all the tools in the Deception solution category - and they do it cheaper, more efficiently, and with less human involvement. 

Deception vendors were too slow. The log analytics vendors incorporated more advanced detection systems, user behavioral analytics, and were better able to mitigate the false positive problems - and didn't require additional investment in host agents and network appliances to collect the data that the deception technologies needed.

As an enterprise security buyer, I think you can forget about employing deception technologies and instead invest in automated log analytics. Not only will you cover the same threats, but the log analytics platforms will continue to innovate faster and cover a broader spectrum of threats and SecOps use cases without the same propensity for false positives.

-- Gunter Ollman

Saturday, December 16, 2017

What would you do if...

As a bit of a "get to know your neighbor" icebreaker or part of a team-building session, have you ever been confronted with one of those "What would you do if..." scenarios?

My socially awkward and introverted nature (through some innate mechanism of self-preservation) normally helps me evade such team-building exercises, but every so often I do get caught out and am forced to offer up an answer to the posed scenario.

The last couple of times the posed question (or a permutation thereof) has been "What would you do if you were guaranteed to be financially secure and could choose to do anything you wanted to do - with no worries over money?" i.e. money is no object. It surprises me how many people will answer along the lines of building schools in Africa, working with war veterans, helping the homeless, etc.

Perhaps it's a knee-jerk response if you haven't really thought about it and you reactively say something you expect your new-found group of friends and colleagues will appreciate, or maybe it is genuine... but to me, such answers seem shallow.

I've often dwelled on and retrospectively picked over the twists and turns of my career and my family life, and the places where I screwed up more than others, and, along the way, I have thought many, many times about what I'd do if I were ever so financially secure that I could choose to do anything.

Without doubt (OK, maybe with a little trepidation), I'd go back to university and pursue a degree and career in biomedical engineering research. But I have no desire to be a doctor, a surgeon, or a pharmacist.

I'd cast away my information security career to become someone driving research at the forefront of medicine - in the realm of tissue, organ, and limb regrowth... and beyond. And, with enough money, I'd build a research lab to pursue and lead this new area of research.

You see, I believe we're at the cusp of being able to regrow or correct many of the disabilities that limit so many lives today. We're already seeing new biomedical technologies enable children deaf or blind from birth to hear their mother's voice or see their mother's face for the first time. It's absolutely wonderful, and if anyone who's ever seen a video of the first moments a child born with such disabilities experiences such a thing hasn't choked up and felt the tears themselves, then I guess we're cut from different cloth.

But that fusion of technology in solving these disabilities, like the attachment of robotic limbs to amputees, is (in my mind) still only baby steps; not towards the cyborgs of science-fiction fame, but towards the world of biological regrowth and augmentation through biological means.

Today, we see great steps towards the regrowth of ears, hearts, kidneys, bone, and skin. The near future - the future I would so dearly love to learn, excel in, and help advance - lies in what happens next. We'll soon be able to regrow any piece of the human body. Wounded warriors will eventually have lost limbs restored - not replaced with titanium and carbon-fiber fabricated parts.

I believe that the next 20 years of biomedical engineering research will cause medicine to advance more than all of medical history combined. And, as part of that journey, within the 30 years after that (i.e. 21-50 years from now), I believe in the potential of that science not only to allow humans to become effectively immortal (assuming faulty parts are periodically replaced, until our very being finally gives up due to boredom), but also to augment ourselves in many new and innovative ways. For example, using purely biological means, enabling our eyes to view a much broader range of the electromagnetic spectrum, at sensitivities orders of magnitude higher than today, with "built-in" zoom.

Yes, it sounds fantastical, but that's due in part to the opportunities that lie ahead in such a new and exciting field, and it's why I'd choose to drop everything and do exactly that "...if you were guaranteed to be financially secure and could choose to do anything you wanted to do - with no worries over money."

-- Gunter

Sunday, January 15, 2017

Allowing Vendors VPN access during Product Evaluation

For many prospective buyers of the latest generation of network threat detection technologies it may appear ironic that these AI-driven learning systems require so much manual tuning and external monitoring by vendors during a technical “proof of concept” (PoC) evaluation.

Practically all vendors of the latest breed of network-based threat detection technology require varying levels of network accessibility to the appliances or virtual installations of their product within a prospect's (and future customer's) network. Typical types of remote access include:

  • Core software updates (typically a pushed out-to-in update)
  • Detection model and signature updates (typically a scheduled in-to-out download process)
  • Threat intelligence and labeled data extraction (typically an ad hoc per-detection in-to-out connection)
  • Cloud contribution of abstracted detection details or meta-data (often a high frequency in-to-out push of collected data)
  • Customer support interface (ad hoc out-to-in human-initiated supervisory control)
  • Command-line technical support and maintenance (ad hoc out-to-in human-initiated supervisory control)

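To make the directions and initiators above concrete, here's a toy model of that access matrix. The entries and the policy rule are illustrative, not a recommendation - the point is simply that each channel's direction, initiator, and necessity should be enumerated and decided upon explicitly before the PoC begins:

```python
# Each tuple: (channel, direction, initiator, needed_during_poc).
# The needed_during_poc flags are hypothetical - the vendor must
# state these per product, and the buyer should verify them.
ACCESS_TYPES = [
    ("core software updates",    "out-to-in", "vendor",    True),
    ("detection model updates",  "in-to-out", "appliance", True),
    ("threat intel extraction",  "in-to-out", "appliance", False),
    ("cloud meta-data push",     "in-to-out", "appliance", False),
    ("support interface",        "out-to-in", "human",     False),
    ("command-line maintenance", "out-to-in", "human",     False),
]

def poc_policy(access_types):
    """Auto-allow only appliance-initiated, in-to-out channels the
    vendor claims are necessary; flag everything else for explicit
    review (inbound holes and human-driven access in particular)."""
    allowed, review = [], []
    for name, direction, initiator, needed in access_types:
        if needed and direction == "in-to-out" and initiator == "appliance":
            allowed.append(name)
        else:
            review.append(name)
    return allowed, review

allowed, review = poc_policy(ACCESS_TYPES)
print("auto-allow:", allowed)
print("review:", review)
```

Note that even a "needed" channel lands in the review pile if it punches an inbound hole - which is exactly the class of access the questions below are designed to probe.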
Depending upon the product, the vendor, and the network environment, some or all of these types of remote access will be required for the solution to function correctly. But which are truly necessary and which could be used to unfairly manually manipulate the product during this important evaluation phase?

To be flexible, most vendors provide configuration options that control the type, direction, frequency, and initialization processes for remote access.

When evaluating network detection products of this ilk, the prospective buyer needs to very carefully review each remote access option and fully understand the product's reliance upon, and the efficacy associated with, each one. Every remote access option allowed is (unfortunately) an additional hole introduced into the buyer's defenses. Knowing this, it is unfortunate that some vendors will seek to downplay their reliance upon certain remote access requirements – especially during a PoC.

Prior to conducting a technical evaluation of the network detection system, buyers should ask the following types of questions to their prospective vendor(s):

  • What is the maximum period needed for the product to have learned the network and host behaviors of the environment it will be tested within?
  • During this learning period and throughout the PoC evaluation, how frequently will the product's core software and detection models typically be updated?
  • If no remote access is allowed to the product, how long can the product operate before losing detection capabilities and which detection types will degrade to what extent over the PoC period?
  • If remote interactive (e.g. VPN) control of the product is required, precisely what activities does the vendor anticipate to conduct during the PoC, and will all these manipulations be comprehensively logged and available for post-PoC review?
  • What controls and data segregation are in place to secure any meta-data or performance analytics sent by the product to the vendor’s cloud or remote processing location? At the end of the PoC, how does the vendor propose to irrevocably delete all meta-data from their systems associated with the deployed product?
  • If testing is conducted during a vital learning period, what attack behaviors are likely to be missed and may negatively influence other detection types or alerting thresholds for the network and devices hosted within it?
  • Assuming VPN access during the PoC, what manual tuning, triage, or data clean-up processes are envisaged by the vendor – and how representative will these be of the support necessary for a real deployment?

It is important that prospective buyers understand not only the number and types of remote access necessary for the product to correctly function, but also how much “special treatment” the PoC deployment will receive during the evaluation period – and whether this will carry-over to a production deployment.

As vendors strive to battle their way through security buzzword bingo, in this early age of AI-powered detection technology, remote control and manual intervention into the detection process (especially during the PoC period) may be akin to temporarily subscribing to a Mechanical Turk solution; something to be very careful of indeed.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

Friday, January 13, 2017

Machine Learning Approaches to Anomaly and Behavioral Threat Detection

Anomaly detection approaches to threat detection have traditionally struggled to make good on the efficacy claims of vendors once deployed in real environments. Rarely have the vendors lied about their product's capabilities – rather, the examples and stats they provide are typically for contrived and isolated attack instances; not representative of a deployment in a noisy and unsanitary environment.

Where anomaly detection approaches have fallen flat and cast themselves in a negative value context is primarily due to alert overload and "false positives". False positives deserve the quotation marks because (in almost every real-network deployment) the anomaly detection capability is working and alerting correctly – however, the anomalies being reported often have no security context and are unactionable.

Tuning is a critical component to extracting value from anomaly detection systems. While “base-lining” sounds rather dated, it is a rather important operational component to success. Most false positives and nuisance alerts are directly attributable to missing or poor base-lining procedures that would have tuned the system to the environment it had been tasked to spot anomalies in.

Assuming an anomaly detection system has been successfully tuned to an environment, there is still a gap in actionability that needs to be closed. An anomaly is just an anomaly, after all. Closure of that gap is typically achieved by grouping, clustering, or associating multiple anomalies together into a labeled behavior. These behaviors in turn can then be classified in terms of risk.
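That grouping-and-labeling step might be sketched as follows; the anomaly types, behavior signatures, and risk scores here are invented for illustration:

```python
# Group per-host anomalies within a time window, then match the
# group against simple behavior signatures carrying a risk score.
from collections import defaultdict

BEHAVIOR_SIGNATURES = {
    # label: (required anomaly types, risk score) - illustrative only
    "possible data theft": ({"c2_beacon", "port_scan", "bulk_upload"}, 9),
    "recon activity":      ({"port_scan"}, 4),
}

def label_behaviors(anomalies, window=300):
    """anomalies: list of (timestamp, host, anomaly_type).
    Returns (host, behavior_label, risk) for each matched group."""
    by_host = defaultdict(list)
    for ts, host, kind in sorted(anomalies):
        by_host[host].append((ts, kind))
    results = []
    for host, events in by_host.items():
        start = events[0][0]
        kinds = {k for ts, k in events if ts - start <= window}
        for label, (required, risk) in BEHAVIOR_SIGNATURES.items():
            if required <= kinds:  # all required anomalies present
                results.append((host, label, risk))
                break  # take the first (highest-priority) match
    return results

events = [(0, "10.0.0.5", "port_scan"),
          (60, "10.0.0.5", "c2_beacon"),
          (120, "10.0.0.5", "bulk_upload")]
print(label_behaviors(events))
```

Three individually unremarkable anomalies collapse into one risk-scored, actionable behavior - which is the whole point of the exercise.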

While anomaly detection systems dissect network traffic or application hooks and memory calls using statistical feature identification methods, the advance to behavioral anomaly detection systems requires the use of a broader mix of statistical features, meta-data extraction, event correlation, and even more base-line tuning.

Because behavioral threat detection systems require training and labeled detection categories (i.e. threat alert types), they too suffer many of the same operational ill effects as anomaly detection systems. Tuned too tightly, they are less capable of detecting threats than an off-the-shelf intrusion detection system (network NIDS or host HIDS). Tuned too loosely, they generate unactionable alerts more consistent with a classic anomaly detection system.

The middle ground has historically been difficult to achieve. Which anomalies are the meaningful ones from a threat detection perspective?

Inclusion of machine learning tooling into the anomaly and behavioral detection space appears to be highly successful in closing that gap.

What machine learning brings to the table is the ability to observe and collect all anomalies in real-time, make associations to both known (i.e. trained and labeled) and unknown or unclassified behaviors, and to provide “guesses” on actions based upon how an organization’s threat response or helpdesk (or DevOps, or incident response, or network operations) team has responded in the past.

Such systems still require baselining, but are expected to dynamically reconstruct their baselines as they learn over time how the human operators respond to the "threats" they detect and alert upon.

Machine learning approaches to anomaly and behavioral threat detection (ABTD) provide a number of benefits over older statistical-based approaches:

  • A dynamic baseline ensures that as new systems, applications, or operators are added to the environment they are “learned” without manual intervention or superfluous alerting.
  • More complex relationships between anomalies and behaviors can be observed and eventually classified; thereby extending the range of labeled threats that can be correctly classified, have risk scores assigned, and prioritized for remediation for the correct human operator.
  • Observations of human responses to generated alerts can be harnessed to automatically reevaluate the risk and prioritization of detections and events. For example, three behavioral alerts are generated, associated with different aspects of an observed threat (e.g. external C&C activity, lateral SQL port probing, and high-speed data exfiltration). The human operator associates and remediates them together and uses the label "malware-based database hack". The system now learns that clusters of similar behaviors and sequencing are likely to be classified and remediated the same way – so for future alerts the system can assign a risk and probability to the newly labeled threat.
  • Outlier events can be understood in the context of typical network or host operations – even if no “threat” has been detected. Such capabilities prove valuable in monitoring the overall “health” of the environment being monitored. As helpdesk and operational (non-security) staff leverage the ABTD system, it also learns to classify and prioritize more complex sanitation events and issues (which may be impeding the performance of the observed systems or indicate a pending failure).
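The reuse of operator-applied labels described above can be caricatured with a simple nearest-cluster lookup; the historical labels, behavior names, and similarity measure below are purely illustrative of the idea, not any vendor's actual method:

```python
# Past behavior clusters that an analyst labeled during remediation.
# A new cluster is matched to its closest labeled predecessor using
# Jaccard similarity between the sets of observed behaviors.
LABELED_HISTORY = [
    ({"c2_beacon", "lateral_sql_probe", "data_exfil"},
     "malware-based database hack"),
    ({"port_scan", "login_bruteforce"}, "credential attack"),
]

def guess_label(new_behaviors):
    """Return (label, confidence) for the closest past cluster,
    where confidence is the Jaccard overlap in [0, 1]."""
    best_label, best_score = None, 0.0
    for past, label in LABELED_HISTORY:
        score = len(new_behaviors & past) / len(new_behaviors | past)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

print(guess_label({"c2_beacon", "data_exfil", "lateral_sql_probe"}))
```

A production system would of course weigh sequencing, timing, and many more features - but the feedback loop (analyst labels in, prioritized guesses out) is the same.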

It is anticipated that use of these newest generation machine learning approaches to anomaly and behavioral threat detection will not only reduce the noise associated with real-time observations of complex enterprise systems and networks, but also cause security to be further embedded and operationalized as part of standard support tasks – down to the helpdesk level.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

(first published January 13th - "From Anomaly, to Behavior, and on to Learning Systems")