Tuesday, December 19, 2017

Consumer IoT Security v1.01

They say charity begins at home; well, IoT security probably should too. The number of Internet-enabled and connected devices we populate our homes with grows year on year - yet, with each new device we connect, the less confident we become in our home security.

The TV news and online newspapers on the one hand extol the virtues of each newly launched Internet-connected technology, yet on the other they tell the tale of how your TV is listening to you and how the animatronic doll your daughter plays with is spying on her while she sleeps.

To be honest, it amazes me that no consumer networking company has been successful in solving this scary piece of IoT real estate and winning over the hearts and minds of family IT junkies at the same time.

With practically all of these IoT devices speaking over WiFi, and the remainder (let's guess at 10% of home deployments) using Zigbee, Z-Wave, Thread, or WeMo, logically a mix of current-generation smart firewall, IPS, and behavioral log analytics would easily remediate well over 99% of the Internet attacks these IoT devices are likely to encounter, and 90% of the remaining threats conducted from within the local network or residential airwaves.


Why is it that we haven't seen a "standard" WiFi home router employing these security capabilities in a meaningful way - and marketed in a similar fashion to the ads we see for identity protection, insurance companies, and drugs (complete with disclaimers if necessary)?

When I look at the long list of vulnerabilities disclosed weekly for all the IoT devices people are installing at home, it is rare to encounter one that either couldn't have an IPS rule constructed to protect it, or that wouldn't already be covered by generic attack-vector rules (such as password brute forcing).
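
To illustrate just how simple such a generic rule can be, here is a minimal Python sketch of password brute-force detection against a router's authentication log. The log format, window, and threshold are all assumptions for illustration, not any particular product's rule syntax:

    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(seconds=60)   # sliding window (assumed tuning value)
    THRESHOLD = 10                   # failed logins per window before alerting

    def detect_bruteforce(auth_log):
        """Return source IPs exceeding THRESHOLD failed logins within WINDOW.

        auth_log: iterable of (timestamp, source_ip, event) tuples - a
        hypothetical format; real routers log via syslog or similar."""
        recent = defaultdict(list)
        alerts = set()
        for ts, src, event in sorted(auth_log):
            if event != "login_failed":
                continue
            # Keep only failures still inside the sliding window
            recent[src] = [t for t in recent[src] if ts - t <= WINDOW]
            recent[src].append(ts)
            if len(recent[src]) >= THRESHOLD:
                alerts.add(src)
        return alerts

    probe = [(datetime(2017, 12, 19, 3, 0, s), "203.0.113.7", "login_failed")
             for s in range(12)]
    print(detect_bruteforce(probe))  # {'203.0.113.7'}

A rule of this shape protects any device behind the router, regardless of which IoT gadget is being targeted.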

If you also included a current (i.e. 2017) generation of ML-powered log analytics and behavioral detection systems in the home WiFi router, you could easily shut out attack and abuse vectors such as backdoor voyeurism, bitcoin mining, and stolen credential use.
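
As a minimal sketch of the behavioral side (assuming the router can export per-device traffic counters - an assumption, not a standard feature), a learned per-device baseline makes abuse like covert bitcoin mining or bulk video exfiltration stand out immediately:

    import statistics

    # Hypothetical per-hour outbound byte counts learned per device
    history = {
        "lightbulb-01": [12_000, 9_500, 11_200, 10_800, 12_500],
        "doorlock-01": [2_000, 1_800, 2_200, 1_900, 2_100],
    }

    def is_anomalous(device, observed_bytes, sigma=4.0):
        """Flag traffic more than `sigma` deviations above the device's mean."""
        samples = history[device]
        mean = statistics.mean(samples)
        stdev = statistics.pstdev(samples) or 1.0  # guard flat baselines
        return (observed_bytes - mean) / stdev > sigma

    # A "smart" light bulb suddenly uploading 50 MB/hour is a strong abuse hint
    print(is_anomalous("lightbulb-01", 50_000_000))  # True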

Elevating home IoT security to v1.01 seems so trivial.

The technologies are available, the threat is ever present, the desire for a remedy is there, and I'd argue the money is there too. Anyone installing an app-controllable light bulb, door lock, or coffee maker has obviously already invested several hundred dollars in their WiFi kit, Internet cable/fiber provider, laptop(s), and cell phone(s) - so the incremental hit of $100-200 on the WiFi router unit RRP, plus a $9.99 or $19.99 monthly subscription fee for IPS signatures, trained classifiers, and behavioral analysis updates, seems like a no-brainer.

You'd think that Cisco/Linksys, D-Link, Netgear, etc. would have solved this problem already... that IoT security (at home) would be "in the bag" and we'd be at v1.01 status already. Maybe market education is lagging, and a focused advertising campaign centered on securing your electronic home would push the market along? Or perhaps these "legacy" vendors need an upstart company to come along and replace them?

Regardless, securing IoT at home is not a technologically challenging problem. It has been solved many times, with different tools, within the enterprise (and for many years), and the limited scope and sophistication of home networking makes the problem much easier to deal with.

I hope some intelligent security vendor can come to the fore with the right mix of security technology. Yes, it costs R&D effort to maintain signatures, train classifiers, and broaden behavioral detection scenarios, but even if only 1% of the homes that have WiFi routers today (approximately 150 million) paid a $9.99 monthly subscription for updates, that $15m per month would be the envy of 95% of security vendors around the world.

-- Gunter

[Note to (potential) vendors that want to create such a product or add such capabilities to an existing product, I'd happily offer up my expertise, advice, and contact-book to help you along the way. I think this is a massive hole in consumer security that is waiting to be filled by an innovative company, and will gladly help where I can.]

Sunday, December 17, 2017

Deception Technologies: Deceiving the Attacker or the Buyer?

Deception technologies have come into vogue over the last three-ish years, with more than a dozen commercial vendors and close to a hundred open source products to choose from. Solutions range from local host canary-file monitoring through to autonomous, self-replicating, dynamic copies of the defender's network operating like an endless hall of mirrors.


The technologies employed for deception purposes are increasingly broad - but the ultimate goal is for an attacker to be deceived into tripping over or touching a specially deposited file, user account, or networked service and, in doing so, sounding an alarm so that the defenders can start to... umm... well... often it's not clear what the defender is supposed to do. And that's part of the problem with the deception approach to defense.

I'm interested in, but deeply cautious about, the claims of deception technology vendors - and so should you be. It's incredibly difficult to justify their expense and to understand their overall value when incorporated into a defense-in-depth strategy.

There have been many times over the last couple of decades when I have recommended a quick-and-dirty canary solution to my clients and businesses. For example, adding unique user accounts that appear at the start and end of your LDAP, Active Directory, or email contact lists - such that if anyone ever emails those addresses, you know you've been compromised. Or similar canary files and shares for detecting the presence of worm outbreaks. But, and I must stress the "but", those solutions only apply to organizations that have not invested in the basics of network hygiene and defense in depth.
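
For illustration, a canary-mailbox check can be a few lines of Python. This sketch assumes the seeded addresses deliver to a monitored IMAP inbox; the host, account, and addresses are all placeholders:

    import imaplib

    # Hypothetical canary addresses seeded at the start and end of the directory
    CANARIES = ["aaa.canary@example.com", "zzz.canary@example.com"]

    def tripped_canaries(host, user, password):
        """Return canary addresses that have received mail - any hit suggests
        the contact list has been harvested."""
        imap = imaplib.IMAP4_SSL(host)
        imap.login(user, password)
        imap.select("INBOX", readonly=True)
        hits = []
        for addr in CANARIES:
            status, data = imap.search(None, "TO", f'"{addr}"')
            if status == "OK" and data[0].split():
                hits.append(addr)
        imap.logout()
        return hits

    hits = tripped_canaries("mail.example.com", "canary-monitor", "s3cret")
    if hits:
        print("Possible compromise - canary hits:", hits)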

Honeypots, honeynets, canaries, and deception products are HIGHLY prone to false positives. Vendors love to say otherwise, but the practical reality is that there's a near-infinite number of everyday things that'll set them off - in whole or in part. For example:

  • Regular vulnerability scanning,
  • Data backups and file recovery,
  • System patching and updates,
  • Changes in firewall or VPN policies,
  • Curious employees,
  • Anti-virus scanners and suite updates,
  • On-premise enterprise search systems,
  • Cloud file repository configuration changes and synchronization.

The net result is that you either ignore or turn off the system after a short period of time, or you swell your security team's ranks with added headcount to continually manage and tune the system(s).

If you want my honest opinion, though, I'd have to say that the time for deception-based products has already passed.

If you're smart, you've already turned on most of the logging features of your desktop computers, laptops, servers, and infrastructure devices, and you're capturing all file, service, user, and application access attempts. You're therefore already capturing the raw information necessary to detect any threat your favorite deception technology is proposing to identify for you. Obviously, the trick is being able to process those logs for anomalies and respond to the threat.
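
As a sketch of that point, assuming access events are already being shipped to a central log as JSON lines (the field names here are invented), flagging a never-before-seen user/resource pairing needs nothing deception-specific:

    import json

    # (user, resource) pairs observed during a baseline period
    seen = set()

    def first_time_access(event_line):
        """Return the parsed event if this user has never touched this resource."""
        event = json.loads(event_line)
        key = (event["user"], event["resource"])
        if key in seen:
            return None
        seen.add(key)
        return event

    # Hypothetical NDJSON access log, one event per line, e.g.
    # {"user": "jsmith", "resource": "fileserver/finance", "action": "read"}
    with open("access.log") as log:
        for line in log:
            novel = first_time_access(line)
            if novel:
                print("Novel access:", novel["user"], "->", novel["resource"])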

This year alone, the number of automated log analytics platforms and standalone products that employ AI and machine learning, and that are capable of real-time (or, worst case, "warm") detection of threats, has grown to outnumber all the tools in the deception solution category - and they do it cheaper, more efficiently, and with less human involvement.

Deception vendors were too slow. The log analytics vendors incorporated more advanced detection systems and user behavioral analytics, were better able to mitigate the false positive problems, and didn't require additional investment in the host agents and network appliances that the deception technologies needed to collect their data.

As an enterprise security buyer, I think you can forget about employing deception technologies and instead invest in automated log analytics. Not only will you cover the same threats, but the log analytics platforms will continue to innovate faster and cover a broader spectrum of threats and SecOps needs without the same propensity for false positives.

-- Gunter Ollmann

Saturday, December 16, 2017

What would you do if...

As a bit of a "get to know your neighbor" exercise, or as part of a team building session, have you ever been confronted with one of those "What would you do if..." scenarios?

My socially awkward and introverted nature (through some innate mechanism of self-preservation) normally helps me evade such team building exercises, but every so often I do get caught out and am forced to offer up an answer to the posed scenario.

The last couple of times the posed question (or a permutation thereof) has been "What would you do if you were guaranteed to be financially secure and could choose to do anything you wanted to do - with no worries over money?" i.e. money is no object. It surprises me how many people will answer along the lines of building schools in Africa, working with war veterans, helping the homeless, etc.

Perhaps it's a knee-jerk response if you haven't really thought about it, and you reactively say something you expect your new-found group of friends and colleagues will appreciate - or maybe it is genuine... but for me, such a thought seems so shallow.

I've often dwelled on and retrospectively picked over the twists and turns of my career, my family life, and the times I screwed up more than others, and, along the way, I have thought many, many times about what I'd do if I were ever financially secure enough that I could choose to do anything.

Without doubt (OK, maybe a little trepidation), I'd go back to university and pursue a degree and career in biomedical engineering research. But I don't have any desire to be a doctor, a surgeon, or a pharmacist.

I'd cast away my information security career to become someone driving research at the forefront of medicine - in the realm of tissue, organ, and limb regrowth... and beyond. And, with enough money, I'd build a research lab to pursue and lead this new area of research.

You see, I believe we're at the cusp of being able to regrow or correct many of the disabilities that limit so many lives today. We're already seeing new biomedical technologies enabling children deaf or blind from birth to hear their mother's voice or see their mother's face for the first time. It's absolutely wonderful, and if anyone who's ever seen a video of the first moments a child born with such disabilities experiences such a thing hasn't choked up and felt the tears themselves, then I guess we're cut from different cloth.

But that fusion of technology in solving these disabilities, like the attachment of robotic limbs to amputees, is (in my mind) still only baby steps; not towards the cyborgs of science fiction fame, but towards the world of biological regrowth and augmentation through biological means.

Today, we see great steps towards the regrowth of ears, hearts, kidneys, bone, and skin. The near future... the future I would so dearly love to learn, excel, and help advance in... lies in what happens next. We'll soon be able to regrow any piece of the human body. Wounded warriors will eventually have lost limbs restored - not replaced with titanium and carbon-fiber fabricated parts.

I believe that the next 20 years of biomedical engineering research will cause medicine to advance more than all of previous medical history combined. And, as part of that journey, within the 30 years after that (i.e. 21-50 years from now), I believe in the potential of that science not only to allow humans to become effectively immortal (assuming faulty parts are periodically replaced, until our very being finally gives up out of boredom), but also to augment ourselves in many new and innovative ways. For example, using purely biological means, enabling our eyes to view a much broader swath of the electromagnetic spectrum, at sensitivities orders of magnitude higher than today, with "built-in" zoom.

Yes, it sounds fantastical, but that's due in part to the opportunities that lie ahead in such a new and exciting field, and it's why I'd choose to drop everything and enter it "...if you were guaranteed to be financially secure and could choose to do anything you wanted to do - with no worries over money."

-- Gunter

Sunday, January 15, 2017

Allowing Vendors VPN access during Product Evaluation

For many prospective buyers of the latest generation of network threat detection technologies it may appear ironic that these AI-driven learning systems require so much manual tuning and external monitoring by vendors during a technical “proof of concept” (PoC) evaluation.

Practically all vendors of the latest breed of network-based threat detection technology require varying levels of network accessibility to the appliances or virtual installations of their product within a prospect's (and future customer's) network. Typical types of remote access include:

  • Core software updates (typically a pushed out-to-in update)
  • Detection model and signature updates (typically a scheduled in-to-out download process)
  • Threat intelligence and labeled data extraction (typically an ad hoc per-detection in-to-out connection)
  • Cloud contribution of abstracted detection details or meta-data (often a high frequency in-to-out push of collected data)
  • Customer support interface (ad hoc out-to-in human-initiated supervisory control)
  • Command-line technical support and maintenance (ad hoc out-to-in human-initiated supervisory control)

Depending upon the product, the vendor, and the network environment, some or all of these types of remote access will be required for the solution to function correctly. But which are truly necessary, and which could be used to manually manipulate the product unfairly during this important evaluation phase?

To be flexible, most vendors provide configuration options that control the type, direction, frequency, and initialization processes for remote access.
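
To make that concrete, the sketch below shows what a per-channel remote-access policy for a PoC deployment might look like, expressed as a Python structure. Every option name is hypothetical - real products expose different knobs - but each channel from the list above should be individually switchable and auditable:

    # Hypothetical PoC remote-access policy - one entry per channel listed above
    REMOTE_ACCESS_POLICY = {
        "core_software_updates":   {"enabled": True,  "direction": "out-to-in", "schedule": "manual"},
        "detection_model_updates": {"enabled": True,  "direction": "in-to-out", "schedule": "daily"},
        "threat_intel_extraction": {"enabled": False, "direction": "in-to-out", "schedule": "ad-hoc"},
        "cloud_metadata_push":     {"enabled": False, "direction": "in-to-out", "schedule": "continuous"},
        "support_ui_access":       {"enabled": False, "direction": "out-to-in", "schedule": "ad-hoc"},
        "vendor_cli_vpn":          {"enabled": False, "direction": "out-to-in", "schedule": "ad-hoc",
                                    "audit_log": "required"},
    }

    # During the PoC, print what the vendor can actually reach
    for channel, policy in REMOTE_ACCESS_POLICY.items():
        if policy["enabled"]:
            print(f"OPEN: {channel} ({policy['direction']}, {policy['schedule']})")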

When evaluating network detection products of this ilk, the prospective buyer needs to very carefully review each remote access option and fully understand the product's reliance upon, and the efficacy associated with, each one. Every remote access option allowed is (unfortunately) an additional hole introduced into the buyer's defenses. Knowing this, it is unfortunate that some vendors will seek to downplay their reliance upon certain remote access requirements - especially during a PoC.

Prior to conducting a technical evaluation of a network detection system, buyers should ask their prospective vendor(s) the following types of questions:

  • What is the maximum period needed for the product to have learned the network and host behaviors of the environment it will be tested within?
  • During this learning period, and throughout the PoC evaluation, how frequently will the product's core software and detection models typically be updated?
  • If no remote access is allowed to the product, how long can the product operate before losing detection capabilities and which detection types will degrade to what extent over the PoC period?
  • If remote interactive (e.g. VPN) control of the product is required, precisely what activities does the vendor anticipate to conduct during the PoC, and will all these manipulations be comprehensively logged and available for post-PoC review?
  • What controls and data segregation are in place to secure any meta-data or performance analytics sent by the product to the vendor’s cloud or remote processing location? At the end of the PoC, how does the vendor propose to irrevocably delete all meta-data from their systems associated with the deployed product?
  • If testing is conducted during a vital learning period, what attack behaviors are likely to be missed and may negatively influence other detection types or alerting thresholds for the network and devices hosted within it?
  • Assuming VPN access during the PoC, what manual tuning, triage, or data clean-up processes are envisaged by the vendor – and how representative will it be of the support necessary for a real deployment?

It is important that prospective buyers understand not only the number and types of remote access necessary for the product to correctly function, but also how much "special treatment" the PoC deployment will receive during the evaluation period - and whether this will carry over to a production deployment.

As vendors strive to battle their way through security buzzword bingo in this early age of AI-powered detection technology, remote control of, and manual intervention in, the detection process (especially during the PoC period) may be akin to temporarily subscribing to a Mechanical Turk solution; something to be very careful of indeed.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

Friday, January 13, 2017

Machine Learning Approaches to Anomaly and Behavioral Threat Detection

Anomaly detection approaches to threat detection have traditionally struggled to make good on the efficacy claims of vendors once deployed in real environments. Rarely have the vendors lied about their product's capability - rather, the examples and stats they provide are typically for contrived and isolated attack instances, not representative of a deployment in a noisy and unsanitary environment.

Where anomaly detection approaches have fallen flat, and what has cast them in a negative value context, is primarily alert overload and "false positives". "False positives" deserves the quotation marks because, in almost every real-network deployment, the anomaly detection capability is working and alerting correctly - however, the anomalies being reported often have no security context and are unactionable.

Tuning is a critical component of extracting value from anomaly detection systems. While "base-lining" sounds rather dated, it is an important operational component of success. Most false positives and nuisance alerts are directly attributable to missing or poor base-lining procedures that would have tuned the system to the environment it had been tasked to spot anomalies in.

Assuming an anomaly detection system has been successfully tuned to an environment, there is still a gap in actionability that needs to be closed. An anomaly is just an anomaly, after all.

Closure of that gap is typically achieved by grouping, clustering, or associating multiple anomalies together into a labeled behavior. These behaviors can in turn be classified in terms of risk.
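
A minimal sketch of that grouping step, assuming anomalies arrive as (timestamp, host, anomaly_type) tuples and that behavior signatures have been hand-labeled (both assumptions for illustration):

    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=30)   # assumed correlation window

    # Illustrative labeled behaviors: sets of anomaly types that co-occur
    BEHAVIORS = {
        "malware-based database hack": {"c2_beacon", "sql_port_probe", "bulk_exfil"},
        "credential stuffing": {"login_burst", "new_geo_login"},
    }

    def group_by_host(anomalies):
        """Cluster anomalies per host into time-windowed groups."""
        clusters = []
        for ts, host, kind in sorted(anomalies):
            for c in clusters:
                if c["host"] == host and ts - c["last"] <= WINDOW:
                    c["kinds"].add(kind)
                    c["last"] = ts
                    break
            else:
                clusters.append({"host": host, "last": ts, "kinds": {kind}})
        return clusters

    def label(cluster):
        """Name the cluster if its anomaly types cover a known behavior."""
        for name, signature in BEHAVIORS.items():
            if signature <= cluster["kinds"]:   # all signature anomalies present
                return name
        return "unclassified behavior"

    clusters = group_by_host([
        (datetime(2017, 1, 13, 2, 0), "db-07", "c2_beacon"),
        (datetime(2017, 1, 13, 2, 10), "db-07", "sql_port_probe"),
        (datetime(2017, 1, 13, 2, 25), "db-07", "bulk_exfil"),
    ])
    print([label(c) for c in clusters])  # ['malware-based database hack']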

While anomaly detection systems dissect network traffic, or application hooks and memory calls, using statistical feature identification methods, the advance to behavioral anomaly detection systems requires a broader mix of statistical features, meta-data extraction, event correlation, and even more base-line tuning.

Because behavioral threat detection systems require training and labeled detection categories (i.e. threat alert types), they suffer many of the same operational ill effects as anomaly detection systems. Tuned too tightly, they are less capable of detecting threats than an off-the-shelf intrusion detection system (network-based NIDS or host-based HIDS). Tuned too loosely, they generate unactionable alerts more consistent with a classic anomaly detection system.

The middle ground has historically been difficult to achieve. Which anomalies are the meaningful ones from a threat detection perspective?

Inclusion of machine learning tooling in to the anomaly and behavioral detection space appears to be highly successful in closing the gap.

What machine learning brings to the table is the ability to observe and collect all anomalies in real-time, make associations to both known (i.e. trained and labeled) and unknown or unclassified behaviors, and provide "guesses" at actions based upon how an organization's threat response or helpdesk (or DevOps, or incident response, or network operations) team has responded in the past.
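
A toy version of that "guessing" step: past incidents stored as (anomaly-type set, operator action) pairs, with a new cluster scored against history by set overlap. A real product would use a trained classifier; this sketch is only the intuition, and all names are illustrative:

    # Past incidents: anomaly types seen together, and what the operator did
    HISTORY = [
        ({"c2_beacon", "sql_port_probe", "bulk_exfil"}, "isolate host + reset DB creds"),
        ({"login_burst", "new_geo_login"}, "lock account + force MFA"),
    ]

    def jaccard(a, b):
        """Set-overlap similarity in [0, 1]."""
        return len(a & b) / len(a | b)

    def suggest_action(observed):
        """Suggest the historical response most similar to a new cluster."""
        kinds, action = max(HISTORY, key=lambda item: jaccard(observed, item[0]))
        return action, jaccard(observed, kinds)

    action, score = suggest_action({"c2_beacon", "bulk_exfil"})
    print(f"Suggested: {action} (similarity {score:.2f})")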

Such systems still require baselining, but they are expected to dynamically reconstruct their baselines as they learn, over time, how the human operators respond to the "threats" they detect and alert upon.

Machine learning approaches to anomaly and behavioral threat detection (ABTD) provide a number of benefits over older statistical-based approaches:

  • A dynamic baseline ensures that as new systems, applications, or operators are added to the environment, they are "learned" without manual intervention or superfluous alerting (a minimal sketch of this follows the list).
  • More complex relationships between anomalies and behaviors can be observed and eventually classified; thereby extending the range of labeled threats that can be correctly classified, have risk scores assigned, and prioritized for remediation for the correct human operator.
  • Observations of human responses to generated alerts can be harnessed to automatically reevaluate the risk and prioritization of detections and events. For example, three behavioral alerts are generated for different aspects of an observed threat (e.g. external C&C activity, lateral SQL port probing, and high-speed data exfiltration). The human operator associates and remediates them together and uses the label "malware-based database hack". The system then learns that clusters of similar behaviors and sequencing are likely to be classified and remediated the same way - so in future the system can assign a risk and probability to the newly labeled threat.
  • Outlier events can be understood in the context of typical network or host operations - even if no "threat" has been detected. Such capabilities prove valuable in monitoring the overall "health" of the environment being observed. As helpdesk and operational (non-security) staff leverage the ABTD system, it also learns to classify and prioritize more complex sanitation events and issues (which may be impeding the performance of the observed systems or indicate a pending failure).
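
As a sketch of the first benefit above, the simplest possible dynamic baseline is an exponentially weighted running mean and variance per entity: new entities begin accruing their own baseline the moment they appear, with no manual re-baselining. The smoothing factor, warm-up length, and alert threshold are all assumed tuning values:

    class DynamicBaseline:
        """Exponentially weighted mean/variance per entity - a minimal
        stand-in for the learned baselines an ABTD product maintains."""

        def __init__(self, alpha=0.05, warmup=5):
            self.alpha = alpha      # smoothing factor (assumed)
            self.warmup = warmup    # observations before alerting (assumed)
            self.stats = {}         # entity -> [mean, variance, count]

        def update(self, entity, value):
            """Fold in a new observation; return True if it was anomalous."""
            if entity not in self.stats:
                self.stats[entity] = [value, 0.0, 1]   # new entity: learn, don't alert
                return False
            mean, var, n = self.stats[entity]
            anomalous = n >= self.warmup and var > 0 and abs(value - mean) > 4 * var ** 0.5
            delta = value - mean
            mean += self.alpha * delta
            var = (1 - self.alpha) * (var + self.alpha * delta * delta)
            self.stats[entity] = [mean, var, n + 1]
            return anomalous

    baseline = DynamicBaseline()
    for rate in [100, 102, 98, 101, 99, 500]:
        print(baseline.update("host-a:conn_rate", rate))  # False x5, then True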

It is anticipated that the use of this newest generation of machine learning approaches to anomaly and behavioral threat detection will not only reduce the noise associated with real-time observations of complex enterprise systems and networks, but also cause security to be further embedded and operationalized as part of standard support tasks - down to the helpdesk level.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

(first published January 13th - "From Anomaly, to Behavior, and on to Learning Systems")