
Wednesday, January 9, 2019

Hacker History III: Professional Hardware Hacker

Following on from my C64 hacking days, but in parallel to my BBS Hacking, this final part looks at my early hardware hacking and creation of a new class of meteorological research radar...

Ever since that first C64 and through the x86 years, I’d been hacking away – mostly software; initially bypassing copy-protection, then game cracks and cheats, followed by security bypasses and basic exploit development.

Before bug bounty programs were invented in the 2010's, as early as 1998 I used to say that the best way to learn and practice hacking skills was to target porn sites. The "theory" being that they were constantly under attack, tended to have the best security (yes, even better than the banks) and, if you were ever caught, you were never going to end up in court defending your actions in front of a jury - and the folks who ran and built those sites would be the first to tell you that.

In the mid-to-late 1980’s, following France’s 1985 bombing and sinking of the Rainbow Warrior in New Zealand, if you wanted to learn to hack and not worry about repercussions – any system related to the French Government was within scope. It was in that period that war-dialing and exploit development really took off and, in my opinion, the professional hacker was born – at least in New Zealand it was. Through 1989-1991 I had the opportunity to apply those acquired skills in meaningful ways – but those tales are best not ever written down.

Digital Radar

Easily the most fun hardware hacking I've ever done or been involved with ended up being the basis for my post-graduate research and thesis. My mix of hardware hacking and industrial control experience set me up for an extraordinary project: my post-graduate research and eventual Masters in Atmospheric Physics.

I was extremely lucky:
  1. The first MHz digitizer cards were only just hitting the market
  2. PC buses finally had enough speed to handle MHz digitizer cards
  3. Mass storage devices (i.e. hard drives) were finally reaching an affordable capacity/price
  4. My supervisor was the Dean of Physics and had oversight of all departments' "unused budgets"
  5. Digital radar had yet to be built

My initial mission was to build the world’s first digital high-resolution vertically pointing radar and to use it to prove or disprove the “Seeder-feeder mechanism of orographic rainfall”.

The challenge started with taking a commercial analogue X-band marine radar - 25 kilowatts, a range of 50 miles, and a resolution measured in tens of meters - and converting it into a digital radar with an over-sampled resolution of 3.25 cm out to a range of 10 km; successfully delivered nevertheless. That first radar was mounted on the back of a 4x4 Toyota truck – which was great at getting to places no radar had been before. Pointing straight up was interesting – and served its purpose of capturing the Seeder-feeder mechanism in operation – but there was room for improvement.

Back at the (family) factory, flicking through pages of operation specification tables for electric motors (remember – pre-Internet/pre-Google) and harnessing the power of MS-DOS based AutoCAD, I spec'ed out and designed a mounting mechanism for making the radar scan the sky like a traditional meteorological radar – but one that could operate in 80 mph winds, at high altitude, in the rain. Taking a leaf out of my father's design book – it was massively over-engineered ;-)

Home for many months - the mobile high resolution radar + attached caravan. Circa 1994.

This second radar was mounted to an old towable camper-van. It was funny because, while the radar would survive 80+ mph winds, a gust of 50+ mph would have simply blown over the camper-van (and probably down the side of a hill or over a cliff). Anyhow, that arrangement (and the hacks it took to get working) resulted in a few interesting scientific advances:
  • Tracking bumblebees. Back in 1994, while GPS was a thing, it didn't have very good coverage in the southern hemisphere and, due to US military control, its positioning resolution was very poor (due to Selective Availability). So, in order to work out a precise longitude and latitude for the radar system, it was back to ancient ways and tracking the sun. I had code that ran the radar in passive mode, scanned horizontally and vertically until it found that big microwave source in the sky, and tracked its movements – and from there determined the radar's physical location. (Un)fortunately, through a mistake in my programming that left the radar emitting its 25 kW load, I found it could sometimes lock on to and track bright blips near ground level. Through some investigation and poor coding, I'd managed to build a radar tracking system for bumblebees (since bumblebees were comparable in size to the wavelength and over-sampled bin size, they were highly reflective and dominated the sun).
  • Weather inside valleys. The portability of the camper-van and the high resolution of the radar also meant that for the first time ever it was possible to monitor and scientifically measure the weather phenomena within complex mountain valley systems. Old long-range radar, with resolutions measured in thousands of cubic meters per pixel, had only observed weather events above the mountains. Now it was possible to digitally observe weather events below that, inside valleys and between mountains, at bumblebee resolution.
  • Digital contrails. Another side-effect of the high resolution digital radar was its ability to measure water density of clouds even on sunny days. Sometimes those clouds were condensation trails from aircraft. So, with a little code modification, it became possible to identify contrails and follow their trails back to their root source in the sky – often a highly reflective aircraft – opening up new research paths into tracking stealth aircraft and cruise missiles.
It was a fascinating scientific and hacking experience. If you’ve ever stood in a doorway during a heavy rainfall event and watched a curtain of heavier rainfall weave its way slowly down the road and wondered at the physics and meteorology behind it, here was a system that digitally captured that event from a few meters above the ground, past the clouds, through the melting layer, and up to 10 km in the air – and helped reset and calibrate the mathematical models still used today for weather forecasting and global climate modeling.

By the end of 1994 it was time to wrap up my thesis, leave New Zealand, head off on my Great OE, and look for full-time employment in some kind of professional capacity.


When I look back at what led me to a career in Information Security, the 1980's hacking of protected C64 games, the pre-Internet evolution of BBS's and their culture of building and collaboration, and the hardware hacking and construction of a technology that was game changing (for its day) - they're the three things (and time periods) that remind me of how I grew the skills and developed the experience to tackle any number of subsequent Internet security problems - i.e. hack my way through them. I think of it as a unique mix. When I meet other hackers whose passions likewise began in the 1980's or early 1990's, it's clear that everyone has their own equally exciting and unique journey - which makes it all the more interesting.

I hope the tale of my journey inspires you to tell your own story and, for those much newer to the scene, proves that us older hands probably didn't really have a plan on how we got to where we are either :-)

This is PART THREE of THREE.

PART ONE (C64 Hacking)  and PART TWO (BBS Hacking) are available to read too.

--Gunter


Tuesday, January 8, 2019

Hacker History II: The BBS Years

Post-C64 Hacking (in Part 1 of Hacker History)... now on to Part 2: The BBS Years

Late 1986 (a few months before I started my first non-newspaper delivery and non-family-business job – working at a local supermarket) I launched my first bulletin board system (BBS). I can't remember the software that I was running at the time, but it had a single 14k dial-up facility running on all the extra C64 equipment I'd been "gifted" by friends wanting faster/always-on access to my latest cheats and hacks.

The premise behind the BBS was two-fold: I wanted to learn something new (and hacking together a workable and reliable BBS system in the mid-80's was a difficult enough challenge), and I saw it as a time-saving distribution channel for my cheats/hacks; others could dial-in and download them themselves, instead of me messing around with stacks of floppy discs etc.

At some point in 1986 I'd also saved enough money to buy an IBM PC AT clone – a whopping 12 MHz 80286 PC, complete with Turbo button and a 10Mb hard drive. I remember speccing out the PC with the manufacturer. They were stunned that a kid could afford their own PC AT, that he planned to keep it in his bedroom, and that he wanted an astounding 16k of video memory ("what do you need that for? Advanced ACAD?")!

By 1989 the BBS had grown fairly large, with a couple of hundred regular members and several paying monthly subscription fees, but the stack of C64's powering the BBS was showing its age and, in the meantime, my main computing had moved down the PC path from 286, to 386, and on to a brand-spanking new 486.

It was time to move on from C64 and go full-PC – both with the BBS and the hacks/cheats I was writing.

So in 1990, over the Summer/Christmas break from University I set about shifting the BBS over to a (single) PC – running Remote Access, with multiple dial-in lines (14.4k for regular users and 28.8k for subscribers).


The dropping of C64 and move to fully-fledged x86 PC resulted in a few memorable times for me:
  • BBS’s are like pets. Owning and operating a BBS is a lot like looking after an oversized pet that eats everything in its path and has destructive leanings; they’re expensive and something is always going wrong. From the mid-80’s to mid-90’s (pre-“Internet”) having a BBS go down would be maddening to all subscribers. Those subscribers would be great friends when things were running, or act like ungrateful modern-day teenagers being denied “screen-time” if they couldn’t dial-in for more than a couple of days. Keeping a BBS running meant constant tinkering under the covers – learning the intricacies of PC hardware architecture, x86 assembly, live patching, memory management, downtime management, backup/recovery, and “customer management”. The heady “good-old days” of PC development.
  • International Connectivity. With me in University and too-often referred to as the "student that knows more about computers than the campus IT team", in 1991 I added Fidonet and Usenet support to my BBS. There had been a few BBS's in New Zealand before mine to offer these newsgroups, but they were very limited (i.e. a small number of groups) because they were reliant upon US dial-up for synching (which was damned expensive!). My solution was to use a spare modem in the back of a University lab PC to connect semi-permanently to my BBS. From there my BBS used the University's "Internet" undersea cable connectivity to download and synch all the newsgroups. Technically I guess you could call it my first "backdoor" hacking experience – which ended circa 1993 after being told to stop as (by some accounts) the BBS was peak consuming 1/3 of the entire country's academic bandwidth.
  • First Security Disclosure. Setting up Remote Access (RA) was an ordeal. It was only a week later – Christmas Eve 1990 – that I publicly disclosed my first security vulnerability (with a self-developed patch); an authentication bypass to the system that controlled what games or zones a subscriber could access. I can’t remember how many bugs and vulnerabilities I found in RA, QEMM, MS-DOS, modem drivers, memory managers, and the games that ran on RA over those years. Most required some kind of assembly instruction patch to fix.
  • Mailman and Sysop. Ever since those first BBS days in 1986, I'd felt that email (or Email, or E-Mail) would be the future for communications. The tools and skills needed for managing a reliable person-to-person or person-to-group communication system had to be built and learned – as too did the management of trust and the application of security. Some BBS operators loved being Sysops (System Operators – i.e. Admins) because they could indulge their voyeuristic tendencies. I hated BBS's and Sysops that operated that way and it became an early mission of mine to figure out ways of better protecting subscriber messages.

That fumbling about and experimenting with PC hardware, MS-DOS, and Windows at home and with the Bulletin Board System, coupled with learning new systems at University such as DEC Alpha, OpenVMS, Cray OS, and HP-UX in the course of my studies, and the things I had to piece together and program at my parents' factories (e.g. PLC's, ICS's, RTU's, etc.), all combined to give me a unique perspective on operating systems and hardware hacking.

By the time I’d finished and submitted my post-grad research thesis, it was time to tear down the BBS, sell all my computers and peripherals, and leave New Zealand for my Great OE (Overseas Experience) at the end of 1994.

This is PART TWO of THREE.

PART ONE (C64 Hacking) was posted yesterday and PART THREE (Radar Hacking) will be on Wednesday.

Monday, January 7, 2019

Hacker History I: Getting Started as a Hacker

Curiosity is a wonderful thing; and the key ingredient to making a hacker. All the best hackers I know are not only deeply curious creatures but have a driving desire to share the knowledge they uncover. That curiosity and sharing underpins much of the hacker culture today – and is pretty core to people like me and those I trust the most.

Today I continue to get a kick out of mentoring other hackers, (crossed-fingers) upcoming InfoSec stars and, in a slightly different format, providing “virtual CISO” support to a handful of professionals (through my Ablative Security company) that have been thrown headfirst into protecting large enterprise or local government networks.

One of the first questions I get asked as I’m mentoring, virtual CISO’ing, or grabbing beers with a new batch of hacker friends at some conference or other is “how did you get started in computers and hacking?”.

Where did it all start?

The early days of home computing were a mixed bag for me in New Zealand. Before ever having my own computer, a bunch of friends and I would ditch our BMX's daily in the front yard of any friend that had a Commodore VIC20 or Amstrad CPC, throw a tape in the tape reader, and within 15 minutes be engrossed in a game – battling each other for the highest score. School days were often dominated by a room full of BBC Micros – where one of the most memorable early programs I wrote used a sensitive microphone to capture the sounds of bugs eating. I can still remember plotting the dying scream of a stick insect as it succumbed to science!

Image via: WorthPoint

I remember well the first computer I actually owned – a brand-spanking new SpectraVideo SV-328 (complete with cassette tape reader) that Santa delivered for Christmas in 1983. I thought it was great, but quickly tired of it because there weren't many games and all my friends were getting Commodore VIC-20 or Commodore 64 microcomputers – which had oh so many more games. So, come late 1984, I flogged my SpectraVideo and bought (second-hand) my first Commodore 64 (C64).

I can safely say that it was the C64 that lit my inner hacker spark. First off, the C64 had both a tape (then later diskette) capability and a games cartridge port. Secondly, New Zealand is a LONG way from where all the new games were being written and distributed from. Thirdly, as a (pre)teen, a single cartridge game represented 3+ months of pocket money and daily newspaper deliveries.

These three constraints resulted in the following:
  • My first hardware hack. It was possible to solder a few wires and short-circuit the memory flushing and reboot process of the C64 via the games cartridge mechanism to construct a “reset” button. This meant that you could insert the game cartridge, load the game, hold-down your cobbled together reset button, remove the games cartridge, and use some C64 assembly language to manipulate the game (still in memory). From there you could add your own boot loader, save to tape or floppy, and create a back-up copy of the game.
  • "Back-up Copies" and Community. C64 games, while plentiful, were damned expensive and took a long time to get to New Zealand. So a bunch of friends, all with C64's, would pool our money every few weeks to buy the latest game from the UK or US; thereafter creating "back-ups" for each other to hold on to – just in case the costly original ever broke. Obviously, those back-up copies needed to be regularly tested for integrity. Anyhow, that was the basis of South Auckland's community of C64 Hackers back in 1983-1985. A bunch of 10-14 year-olds sharing the latest C64 games.
  • Copy-protection Bypassing. Unsurprisingly, our bunch of kiwi hackers weren't the first or only people to create unauthorized back-ups of games. As floppies replaced cassette tapes as the preferred media for C64 games, the software vendors started their never-ending quest of adding copy-protection to prevent unauthorized copying and back-ups. For me, this was when hacking became a passion. Here were companies of dozens, if not hundreds, of professional software developers trying to prevent us from backing-up the programs we had purchased. For years we learned, developed, and shared techniques to bypass the protections; creating new tools for backing-up, outright removal of onerous copy-protection, and shrinking bloated games to fit on single floppies.
  • Games Hacking. At some point, you literally have too many games and the thrill of the chase changes. Instead of looking forward to playing the latest game for dozens of hours or days and iteratively working through campaigns, I found myself turning to hacking the games themselves. The challenge became partially reversing each game, constructing new cheats and bypasses, and wrapping them up in a cool loader for a backed-up copy of the game. Here you could gain infinite lives, ammo, gold, or whatever, and quickly step through the game – seeing all it had to offer and doing so within an hour.
  • Hacking for Profit. Once some degree of reputation for bypassing copy-protection and creating reliable cheater apps got around, I found that my base of “friends” grew, and monetary transactions started to become more common. Like-minded souls wanted to buy hacks and tools to back-up their latest game, and others wanted to bypass difficult game levels or creatures. So, for $5-10 I’d sell the latest cheat I had.
At some point in 1986 I recognized that I had a bunch of C64 equipment – multiple floppy drives, a few modems, even a new Commodore 64C – and more than enough to start a BBS.

This is PART ONE of THREE. 

PART TWO (BBS Hacking) is up and PART THREE (Radar Hacking) on Wednesday.

Thursday, March 8, 2018

NextGen SIEM Isn’t SIEM


Security Information and Event Management (SIEM) is feeling its age. Harkening back to a time in which businesses were prepping for the dreaded Y2K and where the cutting edge of security technology was bound to DMZ's, Bastion Hosts, and network vulnerability scanning – SIEM has been along for the ride as both defenses and attackers have advanced over the intervening years. Nowadays though it feels less of a ride with SIEM, and more like towing an anchor.

Despite the deepening trench gouged by the SIEM anchor slowing down threat response, most organizations persist in throwing more money and resources at it. I'm not sure whether it's because of a sunk cost fallacy or the lack of a viable technological alternative, but they continue to diligently trudge on with their SIEM – complaining with every step. I've yet to encounter an organization that feels like their SIEM is anywhere close to scratching their security itch.



The SIEM of Today
The SIEM of today hasn't changed much over the last couple of decades. Its foundation is the real-time collection and normalization of events from a broad scope of security event log sources and threat alerting tools. The primary objective is to manage and overcome the cacophony of alerts generated by the hundreds, thousands, or millions of sensors and logging devices scattered throughout an enterprise network – automatically generating higher-fidelity alerts using a variety of analytical approaches – and displaying a more manageable volume of information via dashboards and reports.
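
As a concrete illustration of that normalization step, the sketch below maps two hypothetical raw log formats (a firewall record and an IDS alert) onto one common event schema. The field names, severity mapping, and source formats are assumptions for illustration only, not any particular SIEM's data model.

```python
# A minimal sketch of SIEM-style event normalization, assuming two hypothetical
# raw log formats; field names and severity mappings are illustrative only.
from datetime import datetime, timezone

COMMON_FIELDS = ("timestamp", "source", "src_ip", "dst_ip", "severity", "message")

def normalize_firewall(raw: dict) -> dict:
    """Map a hypothetical firewall log record onto the common schema."""
    return {
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
        "source": "firewall",
        "src_ip": raw["src"],
        "dst_ip": raw["dst"],
        "severity": {"deny": 5, "allow": 1}.get(raw["action"], 3),
        "message": f"{raw['action']} {raw['src']} -> {raw['dst']}:{raw['dport']}",
    }

def normalize_ids(raw: dict) -> dict:
    """Map a hypothetical IDS alert onto the same schema."""
    return {
        "timestamp": datetime.fromisoformat(raw["ts"]),
        "source": "ids",
        "src_ip": raw["attacker"],
        "dst_ip": raw["victim"],
        "severity": int(raw["priority"]),
        "message": raw["signature"],
    }

NORMALIZERS = {"firewall": normalize_firewall, "ids": normalize_ids}

def normalize(kind: str, raw: dict) -> dict:
    """Dispatch to the right normalizer and sanity-check the resulting schema."""
    event = NORMALIZERS[kind](raw)
    assert set(event) == set(COMMON_FIELDS)
    return event

print(normalize("ids", {"ts": "2018-03-08T10:15:00", "attacker": "203.0.113.7",
                        "victim": "10.0.0.5", "priority": 4,
                        "signature": "SQLi attempt against /login"}))
```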

As the variety and scope of devices providing alerts and logs continue to increase (often exponentially), consolidated SIEM reporting has had to focus upon statistical analytics and trend displays to keep pace with the streaming data – increasingly focused on the overall health of the enterprise, rather than threat detection and event risk classification.

Whilst the collection of alerts and logs is conducted in real-time, the ability to aggregate disparate intelligence and alerts to identify attacks and breaches has fallen to offline historical analysis via searches and queries – giving birth to the Threat Hunter occupation in recent years.

Along the way, SIEM has become the beating heart of Security Operations Centers (SOC) – particularly over the last decade – and it is often difficult for organizations to disambiguate SIEM from SOC. Not unlike Frankenstein's monster, additional capabilities have been grafted onto today's operationalized SIEMs; advanced forensics and threat hunting capabilities now dovetail into SIEM event archive databases, a new generation of automation and orchestration tools has instantiated playbooks that process aggregated logs, and ticketing systems track responders' efforts to resolve and mitigate threats.

SIEM Weakness
There is however a fundamental weakness in SIEM and it has become increasingly apparent over the last half-decade as more advanced threat detection tools and methodologies have evolved; facilitated by the widespread adoption of machine learning (ML) technologies and machine intelligence (MI).

Legacy threat detection systems such as firewalls, intrusion detection systems (IDS), network anomaly detection systems, anti-virus agents, network vulnerability scanners, etc. have traditionally had a high propensity towards false positive and false negative detections. Compounding this, for many decades (and still a large cause for concern today) these technologies have been sold and marketed on their ability to alert in volume – i.e. an IDS that can identify and alert upon 10,000 malicious activities is too often positioned as “better” than one that only alerts upon 8,000 (regardless of alert fidelity). Alert aggregation and normalization is of course the bread and butter of SIEM.

In response, a newer generation of vendors has brought forth new detection products that improve upon and replace most legacy alerting technologies – focused not only on finally resolving the false positive and false negative alert problem, but on moving beyond alerting and into mitigation – using ML and MI to facilitate behavioral analytics, big data analytics, deep learning, expert system recognition, and automated response orchestration.

The growing problem is that these new threat detection and mitigation products don’t output alerts compatible with traditional SIEM processing architectures. Instead, they provide output such as evidence packages, logs of what was done to automatically mitigate or remediate a detected threat, and talk in terms of statistical risk probabilities and confidence values – having resolved a threat to a much higher fidelity than a SIEM could. In turn, “integration” with SIEM is difficult and all too often meaningless for these more advanced technologies.

A compounding failure with the new ML/MI powered threat detection and mitigation technologies lies with the fact that they are optimized for solving a particular class of threats – for example, insider threats, host-based malicious software, web application attacks, etc. – and have optimized their management and reporting facilities for that category. Without a strong SIEM integration hook there is no single pane of glass for SOC management; rather a half-dozen panes of glass, each with their own unique scoring equations and operational nuances.

Next Generation SIEM
If traditional SIEM has failed and is becoming more of a bugbear than ever, and the latest generation of ML and MI-based threat detection and mitigation systems aren’t on a trajectory to coalesce by themselves into a manageable enterprise suite (let alone a single pane of glass), what does the next generation (i.e. NextGen) SIEM look like?

Looking forward, next generation SIEM isn't SIEM, it's an evolution of SOC – or, to coin a more prescriptive turn of phrase, "SOC-in-a-box" (and inevitably "Cloud SOC").

The NextGen SIEM lies in the natural evolution of today’s best hybrid-SOC solutions. The Frankenstein add-ins and bolt-ons that have extended the life of SIEM for a decade are the very fabric of what must ascend and replace it.

For the NextGen SIEM, SOC-in-a-box, Cloud SOC, or whatever buzzword the professional marketers eventually pronounce – to be successful, the core tenets of operation will necessarily include:
  • Real-time threat detection, classification, escalation, and response. Alerts, log entries, threat intelligence, device telemetry, and indicators of compromise (IOC), will be treated as evidence for ML-based classification engines that automatically categorize and label their discoveries, and optimize responses to both threats and system misconfigurations in real-time.
  • Automation is the beating heart of SOC-in-a-box. With no signs of data volumes falling, networks becoming less congested, or attackers slackening off, automation is the key to scaling to the business's needs. Every aspect of SOC must be designed to be fully autonomous, self-learning, and elastic.
  • The vocabulary of security will move from “alerted” to “responded”. Alerts are merely one form of telemetry that, when combined with overlapping sources of evidence, lay the foundation for action. Businesses need to know which threats have been automatically responded to, and which are awaiting a remedy or response.
  • The tier-one human analyst role ceases to exist, and playbooks will be self-generated. The process of removing false positives and gathering corroborating evidence for true positive alerts can be done much more efficiently and reliably using MI. In turn, threat responses by tier-two or tier-three analysts will be learned by the system – automatically constructing and improving playbooks with each repeated response.
  • Threats will be represented and managed in terms of business risk. As alerts become events, "criticality" will be influenced by age, duration, and threat level, and will sit adjacent to "confidence" scores that take into account the reliability of sources. Device auto-classification and responder monitoring will provide the framework for determining the relative value of business assets, and consequently the foundation for risk-based prioritization and management (a minimal scoring sketch follows this list).
  • Threat hunting will transition to evidence review and preservation. Threat hunting grew from the failures of SIEM to correctly and automatically identify threats in real-time. The methodologies and analysis playbooks used by threat hunters will simply be part of what the MI-based system incorporates in real-time. Threat hunting experts will in turn focus on preservation of evidence in cases where attribution and prosecution become probable or desirable.
  • Hybrid networks become native. The business network – whether it exists in the cloud, on premise, at the edge, or in the hands of employees and customers – must be monitored, managed, and have threats responded to as a single entity. Hybrid networks are the norm and attackers will continue to test and evolve hybrid attacks to leverage any mitigation omission.
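
As a minimal sketch of the risk-based prioritization tenet above, the snippet below blends a hypothetical threat level, evidence confidence, and asset value into a single business-risk score, with unresolved age nudging priority upward. The fields, weights, and 48-hour aging window are illustrative assumptions, not a prescribed scoring model.

```python
# A minimal sketch of risk-based prioritization, assuming hypothetical 0-1 scores
# for threat level, evidence confidence, and asset value; the weights and the
# 48-hour aging window are illustrative, not a prescribed model.
from dataclasses import dataclass

@dataclass
class ThreatEvent:
    name: str
    threat_level: float   # 0.0 (benign) .. 1.0 (critical), from the detection engine
    confidence: float     # reliability of the contributing evidence sources
    asset_value: float    # relative business value of the affected asset
    age_hours: float      # how long the threat has remained unresolved

def business_risk(event: ThreatEvent) -> float:
    """Blend severity, confidence, and asset value; unresolved age nudges priority up."""
    base = event.threat_level * event.confidence * event.asset_value
    aging_boost = min(event.age_hours / 48.0, 1.0) * 0.2   # cap the boost at +0.2
    return min(base + aging_boost, 1.0)

queue = sorted(
    [ThreatEvent("lateral movement on DB host", 0.9, 0.6, 0.8, 2),
     ThreatEvent("credential stuffing on portal", 0.5, 0.95, 1.0, 30)],
    key=business_risk,
    reverse=True,
)
for e in queue:
    print(f"{business_risk(e):.2f}  {e.name}")
```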

Luckily, the NextGen SIEM is closer than we think. As SOC operations have increasingly adopted the cloud to leverage elastic compute and storage capabilities, hard-learned lessons in automation and system reliability from the growing DevOps movement have further defined the blueprint for SOC-in-a-box. Meanwhile, the current generation of ML-based and MI-defined threat detection products, combined with rapid evolution of intelligence graphing platforms, have helped prove most of the remaining building blocks.

These are not wholly additions to SIEM, and SIEM isn’t the skeleton of what will replace it.

The NextGen SIEM starts with the encapsulation of the best and most advanced SOC capabilities of today, incorporates its own behavioral and threat detection capabilities, and dynamically learns to defend the organization – finally reporting on what it has successfully resolved or mitigated.

-- Gunter Ollmann

Sunday, January 15, 2017

Allowing Vendors VPN access during Product Evaluation

For many prospective buyers of the latest generation of network threat detection technologies it may appear ironic that these AI-driven learning systems require so much manual tuning and external monitoring by vendors during a technical “proof of concept” (PoC) evaluation.

Practically all vendors of the latest breed of network-based threat detection technology require varying levels of network accessibility to the appliances or virtual installations of their product within a prospect's (and future customer's) network. Typical types of remote access include the following (a minimal policy-review sketch follows the list):

  • Core software updates (typically a pushed out-to-in update)
  • Detection model and signature updates (typically a scheduled in-to-out download process)
  • Threat intelligence and labeled data extraction (typically an ad hoc per-detection in-to-out connection)
  • Cloud contribution of abstracted detection details or meta-data (often a high frequency in-to-out push of collected data)
  • Customer support interface (ad hoc out-to-in human-initiated supervisory control)
  • Command-line technical support and maintenance (ad hoc out-to-in human-initiated supervisory control)
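
One way to keep these channels honest during a PoC is to enumerate each one explicitly and flag anything interactive or out-to-in for extra scrutiny. The sketch below is a minimal, hypothetical representation of such a policy review; the channel names, fields, and review rules are assumptions rather than any vendor's configuration format.

```python
# A minimal, hypothetical policy-review sketch: enumerate each remote-access
# channel so it can be explicitly accepted, logged, or disabled before a PoC.
# Channel names, fields, and review rules are assumptions, not a vendor format.
from dataclasses import dataclass

@dataclass
class RemoteAccessChannel:
    name: str
    direction: str        # "in-to-out" or "out-to-in"
    interactive: bool     # human-initiated supervisory control?
    required_for_poc: bool

CHANNELS = [
    RemoteAccessChannel("core software updates", "out-to-in", False, False),
    RemoteAccessChannel("detection model/signature updates", "in-to-out", False, True),
    RemoteAccessChannel("cloud telemetry push", "in-to-out", False, True),
    RemoteAccessChannel("support VPN / CLI access", "out-to-in", True, False),
]

def poc_review(channels):
    """Flag the channels that deserve the most scrutiny during the evaluation."""
    for c in channels:
        if c.interactive or c.direction == "out-to-in":
            print(f"REVIEW: '{c.name}' ({c.direction}, interactive={c.interactive}) "
                  f"- require full logging and post-PoC audit")
        elif not c.required_for_poc:
            print(f"CONSIDER DISABLING: '{c.name}' is not needed for the PoC")

poc_review(CHANNELS)
```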

Depending upon the product, the vendor, and the network environment, some or all of these types of remote access will be required for the solution to function correctly. But which are truly necessary and which could be used to unfairly manually manipulate the product during this important evaluation phase?

To be flexible, most vendors provide configuration options that control the type, direction, frequency, and initialization processes for remote access.

When evaluating network detection products of this ilk, the prospective buyer needs to very carefully review each remote access option and fully understand the product's reliance upon, and the efficacy associated with, each one. Every remote access option eventually allowed is (unfortunately) an additional hole being introduced to the buyer's defenses. Knowing this, it is unfortunate that some vendors will seek to downplay their reliance upon certain remote access requirements – especially during a PoC.

Prior to conducting a technical evaluation of the network detection system, buyers should ask the following types of questions to their prospective vendor(s):

  • What is the maximum period needed for the product to have learned the network and host behaviors of the environment it will be tested within?
  • During this learning period and throughout the PoC evaluation, how frequently will the product's core software and detection models typically be updated?
  • If no remote access is allowed to the product, how long can the product operate before losing detection capabilities and which detection types will degrade to what extent over the PoC period?
  • If remote interactive (e.g. VPN) control of the product is required, precisely what activities does the vendor anticipate to conduct during the PoC, and will all these manipulations be comprehensively logged and available for post-PoC review?
  • What controls and data segregation are in place to secure any meta-data or performance analytics sent by the product to the vendor’s cloud or remote processing location? At the end of the PoC, how does the vendor propose to irrevocably delete all meta-data from their systems associated with the deployed product?
  • If testing is conducted during a vital learning period, what attack behaviors are likely to be missed and may negatively influence other detection types or alerting thresholds for the network and devices hosted within it?
  • Assuming VPN access during the PoC, what manual tuning, triage, or data clean-up processes are envisaged by the vendor – and how representative will it be of the support necessary for a real deployment?

It is important that prospective buyers understand not only the number and types of remote access necessary for the product to correctly function, but also how much “special treatment” the PoC deployment will receive during the evaluation period – and whether this will carry-over to a production deployment.

As vendors strive to battle their way through security buzzword bingo, in this early age of AI-powered detection technology, remote control and manual intervention into the detection process (especially during the PoC period) may be akin to temporarily subscribing to a Mechanical Turk solution; something to be very careful of indeed.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

Friday, January 13, 2017

Machine Learning Approaches to Anomaly and Behavioral Threat Detection

Anomaly detection approaches to threat detection have traditionally struggled to make good on the efficacy claims of vendors once deployed in real environments. Rarely have the vendors lied about their product's capability – rather, the examples and stats they provide are typically for contrived and isolated attack instances; not representative of a deployment in a noisy and unsanitary environment.

Where anomaly detection approaches have fallen flat – and what has cast them in a negative-value context – is primarily alert overload and "false positives". "False positive" deserves to be in quotations because (in almost every real-network deployment) the anomaly detection capability is working and alerting correctly – however, the anomalies being reported often have no security context and are unactionable.

Tuning is a critical component to extracting value from anomaly detection systems. While "base-lining" sounds rather dated, it remains an important operational component of success. Most false positives and nuisance alerts are directly attributable to missing or poor base-lining procedures that would have tuned the system to the environment it had been tasked to spot anomalies in.

Assuming an anomaly detection system has been successfully tuned to an environment, there is still a gap on actionability that needs to be closed. An anomaly is just an anomaly after all.
Closure of that gap is typically achieved by grouping, clustering, or associating multiple anomalies together into a labeled behavior. These behaviors in turn can then be classified in terms of risk.
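
As a minimal sketch of that grouping step, the snippet below rolls individual anomalies up into a per-host behavior within a time window and then assigns a coarse risk label. The anomaly types, the 30-minute window, and the labeling rules are all illustrative assumptions.

```python
# A minimal sketch: group raw anomalies into per-host "behaviors" within a time
# window, then assign a coarse risk label. Types, window, and rules are illustrative.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)

def group_into_behaviors(anomalies):
    """anomalies: iterable of (timestamp, host, anomaly_type), sorted by timestamp."""
    behaviors = []
    open_windows = {}                           # host -> (window_start, set_of_types)
    for ts, host, a_type in anomalies:
        start, types = open_windows.get(host, (ts, set()))
        if ts - start > WINDOW:                 # window expired: emit it, start a new one
            behaviors.append((host, start, types))
            start, types = ts, set()
        types.add(a_type)
        open_windows[host] = (start, types)
    behaviors.extend((h, s, t) for h, (s, t) in open_windows.items())
    return behaviors

def label(types):
    """Assign a coarse risk label to a grouped behavior; rules are illustrative."""
    if {"beaconing", "lateral_scan", "bulk_egress"} <= types:
        return "likely data theft (high risk)"
    if "lateral_scan" in types:
        return "internal reconnaissance (medium risk)"
    return "unclassified anomaly cluster (low risk)"

t0 = datetime(2017, 1, 13, 9, 0)
events = [(t0, "host-a", "beaconing"),
          (t0 + timedelta(minutes=5), "host-a", "lateral_scan"),
          (t0 + timedelta(minutes=20), "host-a", "bulk_egress")]
for host, start, types in group_into_behaviors(events):
    print(host, start, label(types))
```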

While anomaly detection systems dissect network traffic or application hooks and memory calls using statistical feature identification methods, the advance to behavioral anomaly detection systems requires the use of a broader mix of statistical features, meta-data extraction, event correlation, and even more base-line tuning.

Because behavioral threat detection systems require training and labeled detection categories (i.e. threat alert types), they too suffer many of the same operational ill effects of anomaly detection systems. Tuned too tightly, they are less capable of detecting threats than an off-the-shelf intrusion detection system (network NIDS or host HIDS). Tuned too loosely, they generate unactionable alerts more consistent with a classic anomaly detection system.

The middle ground has historically been difficult to achieve. Which anomalies are the meaningful ones from a threat detection perspective?

Inclusion of machine learning tooling into the anomaly and behavioral detection space appears to be highly successful in closing the gap.

What machine learning brings to the table is the ability to observe and collect all anomalies in real-time, make associations to both known (i.e. trained and labeled) and unknown or unclassified behaviors, and to provide “guesses” on actions based upon how an organization’s threat response or helpdesk (or DevOps, or incident response, or network operations) team has responded in the past.

Such systems still require baselining, but are expected to dynamically reconstruct baselines as they learn over time how the human operators respond to the "threats" they detect and alert upon.
Machine learning approaches to anomaly and behavioral threat detection (ABTD) provide a number of benefits over older statistical-based approaches:

  • A dynamic baseline ensures that as new systems, applications, or operators are added to the environment they are “learned” without manual intervention or superfluous alerting.
  • More complex relationships between anomalies and behaviors can be observed and eventually classified; thereby extending the range of labeled threats that can be correctly classified, have risk scores assigned, and prioritized for remediation for the correct human operator.
  • Observations of human responses to generated alerts can be harnessed to automatically reevaluate the risk and prioritization of detections and events. For example, three behavioral alerts are generated, associated with different aspects of an observed threat (e.g. external C&C activity, lateral SQL port probing, and high-speed data exfiltration). The human operator associates and remediates them together and uses the label "malware-based database hack". The system now learns that clusters of similar behaviors and sequencing are likely to be classified and remediated the same way – therefore, in future, the system can assign a risk and probability to the newly labeled threat (a minimal matching sketch follows this list).
  • Outlier events can be understood in the context of typical network or host operations – even if no “threat” has been detected. Such capabilities prove valuable in monitoring the overall “health” of the environment being monitored. As helpdesk and operational (non-security) staff leverage the ABTD system, it also learns to classify and prioritize more complex sanitation events and issues (which may be impeding the performance of the observed systems or indicate a pending failure).
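
A minimal sketch of that response-learning idea: match a new cluster of behaviors against clusters that operators previously remediated together, and reuse the operator's label when the overlap is strong enough. The behavior names, historical cases, and Jaccard threshold are hypothetical.

```python
# A minimal sketch of "learning" a label from past operator responses: new alert
# clusters are matched against previously remediated clusters by Jaccard overlap.
# The behavior names, history, and threshold are hypothetical.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Clusters the operators previously remediated together, with the label they used.
HISTORY = [
    ({"external_c2", "lateral_sql_probe", "fast_exfil"}, "malware-based database hack"),
    ({"failed_logins", "privilege_escalation"}, "credential abuse"),
]

def suggest_label(new_cluster: set, threshold: float = 0.5):
    """Return (label, similarity) for the best-matching historical cluster, if any."""
    best = max(HISTORY, key=lambda h: jaccard(new_cluster, h[0]))
    score = jaccard(new_cluster, best[0])
    return (best[1], score) if score >= threshold else (None, score)

print(suggest_label({"external_c2", "fast_exfil", "lateral_sql_probe"}))
# -> ('malware-based database hack', 1.0)
```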

It is anticipated that use of these newest generation machine learning approaches to anomaly and behavioral threat detection will not only reduce the noise associated with real-time observations of complex enterprise systems and networks, but also cause security to be further embedded and operationalized as part of standard support tasks – down to the helpdesk level.

-- Gunter Ollmann, Founder/Principal @ Ablative Security

(first published January 13th - "From Anomaly, to Behavior, and on to Learning Systems")

Thursday, December 1, 2016

NTP: The Most Neglected Core Internet Protocol

The Internet of today is awash with networking protocols, but at its core lies  a handful that fundamentally keep the Internet functioning. From my perspective, there is no modern Internet without DNS, HTTP, SSL, BGP, SMTP, and NTP.

Of these most important Internet protocols, NTP (Network Time Protocol) is likely the least understood and has received the least attention and support. Until very recently, it was supported (part-time) by just one person - Harlan Stenn - "who had lost the root passwords to the machine where the source code was maintained (so that machine hadn't received security updates in many years), and that machine ran a proprietary source-control system that almost no one had access to, so it was very hard to contribute to".

Just about all secure communication protocols and server synchronization processes require that the machines involved have their internal clocks in agreement. NTP is the protocol that allows this to happen.
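
To make that concrete, the short sketch below measures how far the local clock has drifted from a public NTP server. It assumes the third-party ntplib package is installed (pip install ntplib) and uses pool.ntp.org purely as an illustrative server choice.

```python
# A minimal sketch of why NTP matters: measure how far this machine's clock drifts
# from a public NTP server. Assumes the third-party `ntplib` package is installed;
# the pool.ntp.org server choice is illustrative.
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

print("server time :", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
print("clock offset: %+.3f seconds" % response.offset)

# Certificate validation, Kerberos tickets, log correlation, and distributed
# transactions all start to fail or mislead once this offset grows too large.
```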

ICEI and CACR have gotten involved with supporting NTP and there are several related protocol advancements underway to increase the security of this vital component of the Internet. NTS (Network Time Security), currently in draft version with the Internet Engineering Task Force (IETF), aims to give administrators a way to add security to NTP and promote secure time synchronization.

While there have been remarkably few exploitable vulnerabilities in NTP over the years, the recent growth of DDoS botnets (such as Mirai) utilizing NTP Reflection Attacks shone a new light on its frailties and importance.

Some relevant stories on the topic of how frail and vital NTP has become, and what's being done to correct the problem, can be found at:



Friday, January 29, 2016

Watching the Watchers Watching Your Network

It seems that this last holiday season didn’t bring much cheer or goodwill to corporate security teams. With the public disclosure of remotely exploitable vulnerabilities and backdoors in the products of several well-known security vendors, many corporate security teams spent a great deal of time yanking cables, adding new firewall rules, and monitoring their networks with extra vigilance.

It’s not the first time that products from major security vendors have been found wanting.

It feels as though some vendors' host-based security defenses fail on a monthly basis, while network defense appliances fail less frequently – maybe twice per year. At least that's what a general perusal of press coverage may lead you to believe. However, the reality is quite different. Most security vendors fix and patch security weaknesses on a monthly basis. Generally, the issues are ones that they themselves have identified (through internal SDL processes or the use of third-party code reviews and assessment) or they are issues identified by customers. And, every so often, critical security flaws will be "dropped" on the vendor by an independent researcher or security company and need to be fixed quickly.

Two decades ago, the terms “bastion host”, DMZ, and “firewall” pretty much summed up the core concepts of network security, and it was a simpler time for most organizations – both for vendors and their customers. The threat spectrum was relatively narrow, the attacks largely manual, and an organization’s online presence consisted of mostly static material. Yet, even then, if you picked up a book on network security you were instructed in no short order that you needed to keep your networks separate; one for the Internet, one for your backend applications, one for your backups, and a separate one for managing your security technology.

Since that time, many organizations have either forgotten these basic principles or have intentionally opted for riskier (yet cheaper) architectures, just hoping that their protection technologies are up to the task. Alas, as the events of December 2015 have shown us, every device added to a network introduces a new set of security challenges and weaknesses.

From a network security perspective, when looking at the architecture of critical defenses, there are four core principles:

  1. Devices capable of monitoring or manipulating network traffic should never have their management interfaces directly connected to the Internet. If these security devices need to be managed over the Internet it is critical that only encrypted protocols be used, multi-factor authentication be employed, and that approved in-bound management IP addresses be whitelisted at a minimum (a minimal allowlist check is sketched after this list).
  2. The management and alerting interfaces of security appliances must be on a “management” network – separated from other corporate and public networks. It should not be possible for an attacker who may have compromised a security device to leverage the management network to move laterally onto other guest systems or provide a route to the Internet. 
  3. Span ports and network taps that observe Internet and internal corporate traffic should by default only operate in “read-only” mode. A compromised security monitoring appliance should never be capable of modifying network traffic or communicating with the Internet from such an observation port. 
  4. Monitor your security products and their management networks. Security products (especially networking appliances such as core routers, firewalls, and malware defenses) will always be a high-value target to both external and internal attackers. These core devices and their management networks must be continuously monitored for anomalies and audited. 
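
To illustrate the first principle, the sketch below is a toy gatekeeper that permits management-plane access only from allowlisted networks over encrypted, MFA-authenticated sessions. The networks and connection attributes are illustrative assumptions, not any product's actual policy interface.

```python
# A toy gatekeeper for principle 1: management access only from approved networks,
# over encrypted, MFA-authenticated sessions. The allowlist and connection
# attributes are illustrative assumptions, not any product's policy interface.
import ipaddress

MGMT_ALLOWLIST = [ipaddress.ip_network(n) for n in ("10.20.0.0/24", "203.0.113.8/32")]

def allow_mgmt_connection(src_ip: str, encrypted: bool, mfa_passed: bool) -> bool:
    """Permit management-plane access only when all three conditions hold."""
    src = ipaddress.ip_address(src_ip)
    return encrypted and mfa_passed and any(src in net for net in MGMT_ALLOWLIST)

assert allow_mgmt_connection("10.20.0.15", encrypted=True, mfa_passed=True)
assert not allow_mgmt_connection("198.51.100.7", encrypted=True, mfa_passed=True)
assert not allow_mgmt_connection("10.20.0.15", encrypted=False, mfa_passed=True)
```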

In an age where state-sponsored reverse engineers, security research teams, and online protagonists are actively hunting for flaws and backdoors in the widely deployed products of major security vendors as a means of gaining privileged and secret access to their target’s networks, it is beyond prudent to revisit the core tenets of secure network architecture.

Corporate security teams and network architects should assume not only that new vulnerabilities and backdoors will be disclosed throughout the year, but that those holes may have been accessible and exploited for several months beforehand. As such, they should adopt a robust defense-in-depth strategy including “watchers watching watchers.”

Shodan's Shining Light

The Internet is chock full of really helpful people and autonomous systems that silently probe, test, and evaluate your corporate defenses every second of every minute of every hour of every day. If those helpful souls and systems aren’t probing your network, then they’re diligently recording and cataloguing everything they’ve found so others can quickly enumerate your online business or list systems like yours that are similarly vulnerable to some kind of attack or other.

Back in the dark ages of the Internet (circa the 20th century) everyone had to run their own scans to map the Internet in order to spot vulnerable systems on the network. Today, if you don’t want to risk falling foul of some antiquated hacking law in some country by probing IP addresses and shaking electronic hands with the services you encounter, you can easily find a helpful soul that’s figured it all out on your behalf and turn on the faucet of knowledge for a paltry sum.

One of the most popular services to shine light on and enumerate the darkest corners of the Internet is Shodan. It’s a portal-driven service through which subscribers can query its vast database of IP addresses, online applications and service banners that populate the Internet. Behind the scenes, Shodan’s multiple servers continually scan the Internet, enumerating and probing every device they encounter and recording the latest findings.

As an online service that diligently catalogues the Internet, Shodan behaves rather nicely. Servers that do the scanning aren’t overly aggressive and provide DNS information that doesn’t obfuscate who and what they are. Additionally, they are little more troublesome than Google in its efforts to map out Web content on the Internet.

In general, most people don’t identify what Google (or Microsoft, Yahoo or any other commercial search engine) does as bad, let alone illegal. But if you are familiar with the advanced search options these sites offer or read any number of books or blogs on “Google Dorks,” you’ll likely be more fearful of them than something with limited scope like Shodan.

Unfortunately, Shodan is increasingly perceived as a threat by many organizations. This might be due to its overwhelming popularity or its frequent citation amongst the infosec community and journalists as a source of embarrassing statistics. Consequently, security companies like Check Point have included alerts and blocking signatures in a vain attempt to thwart Shodan and its ilk.

On one hand, you might empathize with many organizations on the receiving end of a Shodan scan. Their Internet-accessible systems are constantly probed, their services are enumerated, and every embarrassing misconfiguration or unpatched service is catalogued and could be used against them by evil hackers, researchers and journalists.

In some realms, you’ll also hear that the bad guy competitors to Shodan (e.g. cyber criminals mapping the Internet for their own financial gain) are copying the scanning characteristics of Shodan so the target’s security and incident response teams assume it’s actually the good guys and ignore the threat.

On the other hand, with it being so easy to modify the scanning process – changing scan types, modifying handshake processes, using different domain names, and launching scans from a broader range of IP addresses – you’d be forgiven for thinking that it’s all a bit of wasted effort… about as useful as a “keep-off-the-grass” sign in Hyde Park.

Although “robots.txt” in its own way serves as a similarly polite request for commercial Web search scanners to not navigate and cache pages on a site, it is most often ignored by scanning providers. It also serves as a flashing neon arrow that directs hackers and security researchers to the more sensitive content.

It’s a sad indictment of current network security practices that a reputable security vendor felt the need and justification to add detection rules for Shodan scans and that their customer organizations may feel more protected for implementing them.

While the virtual "keep-off-the-grass" warning isn't going to stop anyone, it does empower the groundskeeper to shout, "Get off my land!" (in the best Cornish accent they can muster) and feel justified in doing so. In the meantime, the plague of ever-helpful souls and automated systems will continue to probe away to their hearts' content.

Friday, November 20, 2015

Battling Cyber Threats Using Lessons Learned 165 Years Ago

When it comes to protecting the end user, the information security community is awash with technologies and options. Yet, despite the near endless array of products and innovation focused on securing that end user from an equally broad and expanding array of threats, the end user remains more exposed and vulnerable than at any other period in the history of personal computing.

Independent of these protection technologies (or possibly because of them), we’ve also tried to educate the user in how best (i.e. more safely) to browse the Internet and take actions to protect themselves. With a cynical eye, it’s almost like a government handing out maps to their citizens and labeling streets, homes, and businesses that are known to be dangerous and shouldn’t be visited – because not even the police or military have been effective there.

Today we instruct our users (and at home, our children) to be careful what they click-on, what pages or sites they visit, what information they can share, and what files they should download. These instructions are not just onerous and confusing, more often than not they’re irrelevant – as, even after following them to the letter, the user can still fall victim.

The fact that a user can’t click on whatever they want, browse wherever they need to, and open what they’ve received, should be interpreted as a mile-high flashing neon sign saying “infosec has failed and continues to fail” (maybe reworded with a bunch of four-letter expletives for good measure too).
For decades now thousands of security vendors have brought to market technologies that, in effect, are predominantly tools designed to fill vulnerable and exploited gaps in the operating systems lying at the core of devices the end users rely upon. If we're ever to make progress against the threat and reach the utopia of users being able to "carelessly" use the Internet, those operating systems must get substantially better.

In recent years, great progress has been made on the OS front – primarily smartphone OS's. The operating systems running on our most pocket-friendly devices are considerably more secure than those we rely upon for our PC's, notebooks, or servers at home or work. There's a bunch of reasons why of course – and I'll not get into that here – but there's still so much more that can be done.
I do believe that there are many lessons that can be learned from the past; lessons that can help guide future developments and technologies. Reaching back a little further into the past than usual – way before the Internet, and way before computers – there are a couple of related events that could shine a brighter light on newer approaches to protecting the end user.

Back in 1850 a Hungarian doctor named Ignaz Semmelweis was working in the maternity clinic at the General Hospital in Vienna where he noted that many women in maternity wards were dying from puerperal fever - commonly known as childbed fever. He studied two medical wards in the hospital – one staffed by all male doctors and medical students, and the other by female midwives – and counted the number of deaths in each ward. What he found was that death from childbirth was five times higher in the ward with the male doctors.

Dr. Semmelweis tested numerous hypotheses as to the root cause of the deadly difference – ranging from mothers giving birth on their sides versus their backs, through to the route priests traversed the ward and the bells they rang. It appears that his Eureka moment came after the death of a male pathologist who, upon pricking his finger while doing an autopsy on a woman who had died of childbed fever, had succumbed to the same fate (apparently being a pathologist in the mid-19th century was not conducive to a long life). Joining the dots, Dr. Semmelweis noted that the male doctors and medical students were doing autopsies while the midwives were not, and that "cadaverous particles" (this was a period of time before germs were known) were being spread to those birthing mothers.

Dr. Semmelweis's medical innovation? "Wash your hands!" The net result, after doctors and midwives started washing their hands (in lime water, then later in chlorine), was that the rate of childbed fever dropped considerably.

Now, if you’re in the medical trade, washing your hands multiple times per day in chlorine or (by the late 1800’s) carbolic acid, you’ll note that it isn’t so good for your skin or hands.

In 1890 William Stewart Halsted of Johns Hopkins University asked the Goodyear Tire and Rubber Company if they could make a glove of rubber that could be dipped in carbolic acid in order to protect the hands of his nurses – and so were born the first sterilized medical gloves. The first disposable latex medical gloves were manufactured by Ansell and didn't appear until 1964.

What does this foray into 19th century medical history mean for Internet security, I hear you say? Simple really: every time the end user needs to use a computer to access the Internet and do work, it needs to be clean/pristine. Whether that means a clean new virtual image (e.g. "wash your hands") or a disposable environment that sits on top of the core OS and authorized application base (e.g. "disposable gloves"), the assumption needs to be that nothing the user encounters over the Internet can persist on the device they're using after they've finished their particular actions.

This obviously isn’t a solution for every class of cyber threat out there, but it’s an 80% solution – just as washing your hands and wearing disposable gloves as a triage nurse isn’t going to protect you (or your patient) from every post-surgery ailment.

Operating system providers or security vendors that can seamlessly adopt and automatically procure a clean and pristine environment for the end user every time they need to conduct activities on or related to the Internet will fundamentally change the security game – altering the battle field for attackers and the tools of their trade.

Exciting times ahead.


-- Gunter

Monday, November 9, 2015

The Incredible Value of Passive DNS Data

If a scholar was to look back upon the history of the Internet in 50 years’ time, they’d likely be able to construct an evolutionary timeline based upon threats and countermeasures relatively easily. Having transitioned through the ages of malware, phishing, and APT’s, and the countermeasures of firewalls, anti-spam, and intrusion detection, I’m guessing those future historians would refer to the current evolutionary period as that of “mega breaches” (from a threat perspective) and “data feeds”.
Today, anyone can go out and select from a near infinite number of data feeds that run the gamut from malware hashes and phishing URL’s, through to botnet C&C channels and fast-flux IPs. 

Whether you want live feeds, historical data, bulk data, or just APIs you can hook into and query ad hoc, more than one person or organization appears to be offering it somewhere on the Internet; for free or as a premium service.

In many ways security feeds are like water. They're available almost everywhere if you take the time to look; however their usefulness, cleanliness, volume, and ease of acquisition may vary considerably. Hence their value is dependent upon the source and the acquirer's needs. Even then, pure spring water may be free from the local stream, or come bottled and be more expensive than a coffee at Starbucks.

At this juncture in history the security industry is still trying to figure out how to really take advantage of the growing array of data feeds. Vendors and enterprises like to throw around the terms "intelligence feeds" and "threat analytics" as a means of differentiating their data feeds from competitors after they have processed multiple lists and data sources to (essentially) remove stuff – just like filtering water and reducing the mineral count – increasing the price and "value".
Although we're likely still a half-decade away from living in a world where "actionable intelligence" is the norm (where data feeds have evolved beyond disparate lists and amalgamations of data points into real-time sentry systems that proactively drive security decision making), there exist some important data feeds that add new and valuable dimensions to other bulk data feeds; providing the stepping stones to lofty actionable security goals.

From my perspective, the most important additive feed in progressing towards actionable intelligence is Passive DNS data (pDNS).

For those readers unfamiliar with pDNS, it is traditionally a database containing data related to successful DNS resolutions – typically harvested from just below the recursive or caching DNS server.

Whenever your laptop or computer wants to find out the IP address of a domain name your local DNS agent will delegate that resolution to a nominated recursive DNS server (listed in your TCP/IP configuration settings) which will either supply an answer it already knows (e.g. a cached answer) or in-turn will attempt to locate a nameserver that does know the domain name and can return an authoritative answer from that source.

By retaining all the domain name resolution data and collecting from a wide variety of sources for a prolonged period of time, you end up with a pDNS database capable of answering questions such as “where did this domain name point to in the past?”, “what domain names point to a given IP address?”, “what domain names are known by a nameserver?”, “what subdomains exist below a given domain name?”, and “what IP addresses will a domain or subdomain resolve to around the world?”.
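
As a minimal sketch of what such a database looks like and the questions it answers, the snippet below builds a tiny in-memory pDNS table and runs two of those queries. The schema, column names, and sample records are illustrative assumptions, not any pDNS provider's actual format.

```python
# A minimal sketch of a passive DNS store and the questions it can answer.
# The schema and sample rows are illustrative assumptions, not a provider's format.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE pdns (
    qname TEXT, rrtype TEXT, rdata TEXT,
    first_seen TEXT, last_seen TEXT, count INTEGER)""")
db.executemany("INSERT INTO pdns VALUES (?, ?, ?, ?, ?, ?)", [
    ("evil.example.com", "A", "198.51.100.7", "2015-01-02", "2015-03-01", 412),
    ("evil.example.com", "A", "203.0.113.99", "2015-03-02", "2015-11-01", 98),
    ("shop.example.net", "A", "198.51.100.7", "2015-02-10", "2015-04-20", 57),
])

# "Where did this domain name point to in the past?"
history = db.execute(
    "SELECT rdata, first_seen, last_seen FROM pdns WHERE qname = ?",
    ("evil.example.com",)).fetchall()

# "What domain names point(ed) to a given IP address?"
co_hosted = db.execute(
    "SELECT DISTINCT qname FROM pdns WHERE rdata = ?",
    ("198.51.100.7",)).fetchall()

print(history, co_hosted)
```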

pDNS, by itself, is very useful, but when used in conjunction with other data feeds its contributions towards actionable intelligence may be akin to turning water in to wine.

For example, a streaming data feed of suspicious or confirmed malicious URL's (extracted from captured spam and phishing email sources) can provide insight as to whether the customers of a company or its brands have been targeted by attackers. However, because email delivery is asynchronous, a real-time feed does not necessarily translate to a current window of visibility on the threat. By including pDNS in the processing of this class of threat feed it is possible to identify both the current and past states of the malicious URL's and to cluster together previous campaigns by the attackers – thereby allowing an organization to prioritize efforts on current threats and optimize responses.
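
A minimal sketch of that enrichment step: resolve each domain in a (hypothetical) malicious-URL feed against pDNS history and cluster the URLs that share past hosting infrastructure. The pdns_lookup() helper, feed entries, and IP addresses are all stand-ins for a real pDNS source.

```python
# A minimal sketch: enrich a malicious-URL feed with pDNS history so URLs that
# share past hosting infrastructure get clustered into one campaign.
# The feed entries and pdns_lookup() results are hypothetical stand-ins.
from collections import defaultdict
from urllib.parse import urlparse

def pdns_lookup(domain: str) -> set:
    """Stand-in for a real pDNS query: every IP the domain has ever resolved to."""
    fake_db = {
        "login-verify.example.com": {"198.51.100.7", "203.0.113.99"},
        "account-update.example.org": {"203.0.113.99"},
        "cdn.example.net": {"192.0.2.10"},
    }
    return fake_db.get(domain, set())

feed = [
    "http://login-verify.example.com/secure",
    "http://account-update.example.org/confirm",
    "http://cdn.example.net/invoice.pdf",
]

campaigns = defaultdict(set)                      # shared IP -> URLs seen on it
for url in feed:
    for ip in pdns_lookup(urlparse(url).hostname):
        campaigns[ip].add(url)

for ip, urls in campaigns.items():
    if len(urls) > 1:
        print(f"possible shared campaign infrastructure {ip}: {sorted(urls)}")
```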

While pDNS is an incredibly useful tool and intelligence aid, it is critical that users understand that acquiring and building a useful pDNS DB isn't easy and, as with all data feeds, results are heavily dependent upon the quality of the sources. In addition, because historical and geographical observations are key, the longer the pDNS data goes back (ideally 3+ years) and the more the sources cover global ISPs (ideally a few dozen tier-1 operators), the more reliable and useful the data will be. So select your provider carefully – this isn't something you ordinarily build yourself (although you can contribute to a bigger collector if you wish).

If you’re looking for more ideas on how to use DNS data as a source and aid to intelligence services and even threat attribution, you can find a walk-through of techniques I’ve presented or discussed in the past here and here.

-- Gunter