Wednesday, January 9, 2019

Hacker History III: Professional Hardware Hacker

Following on from my C64 hacking days, but in parallel to my BBS hacking, this final part looks at my early hardware hacking and the creation of a new class of meteorological research radar...

Ever since that first C64 and through the x86 years, I’d been hacking away – mostly software; initially bypassing copy-protection, then game cracks and cheats, followed by security bypasses and basic exploit development.

Before bug bounty programs were invented in the 2010's, as early as 1998 I used to say the best way to learn and practice hacking skills was to target porn sites. The "theory" was that they were constantly under attack, tended to have the best security (yes, even better than the banks), and, if you were ever caught, the chances of ever appearing in court and having to defend your actions in front of a jury were effectively nil - and the folks who ran and built the sites would be the first to tell you that.

In the mid-to-late 1980’s, following France’s 1985 bombing and sinking of the Rainbow Warrior in New Zealand, if you wanted to learn to hack and not worry about repercussions – any system related to the French Government was within scope. It was in that period that war-dialing and exploit development really took off and, in my opinion, the professional hacker was born – at least in New Zealand it was. Through 1989-1991 I had the opportunity to apply those acquired skills in meaningful ways – but those tales are best not ever written down.

Digital Radar

Easily the most fun hardware hacking I've ever done or been involved with ended up being the basis for my post-graduate research and thesis. My mix of hardware hacking and industrial control experience set me up for an extraordinary project as part of that research and my eventual Masters in Atmospheric Physics.

I was extremely lucky:
  1. The first MHz digitizer cards were only just hitting the market
  2. PC buses finally had enough speed to handle MHz digitizer cards
  3. Mass storage devices (i.e. hard drives) were finally reaching an affordable capacity/price
  4. My supervisor was the Dean of Physics and had oversight of all departments' "unused budgets"
  5. A digital radar had yet to be built

My initial mission was to build the world's first digital high-resolution vertically pointing radar and to use it to prove or disprove the "seeder-feeder mechanism" of orographic rainfall.

The start of the challenge was taking a commercial analogue X-band marine radar - a 25 kilowatt unit with a range of 50 miles and a resolution measured in tens of meters - and converting it into a digital radar with an over-sampled resolution of 3.25 cm out to a range of 10 km; successfully delivered nevertheless. That first radar was mounted on the back of a 4x4 Toyota truck - which was great at getting to places no radar had been before. Pointing straight up was interesting - and served its purpose of capturing the seeder-feeder mechanism in operation - but there was room for improvement.
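
To give a feel for the numbers, the raw spacing between digitized range bins is set by the digitizer's sample rate; the much finer over-sampled figure quoted above comes from additional processing on top of that raw spacing. A minimal back-of-the-envelope sketch (the sample rates shown are illustrative only, not the actual card used):

```python
# Minimal sketch of the standard range-gating relationship: the raw spacing
# between digital range bins is delta_r = c / (2 * f_s) for sample rate f_s.
# Sample rates below are illustrative, not the actual digitizer card used.
C = 299_792_458.0  # speed of light, m/s

def range_bin_spacing_m(sample_rate_hz: float) -> float:
    """Distance between consecutive range gates for a given ADC sample rate."""
    return C / (2.0 * sample_rate_hz)

if __name__ == "__main__":
    for f_s in (5e6, 20e6, 100e6):  # a few MHz-class digitizer rates
        print(f"{f_s / 1e6:6.0f} MHz -> {range_bin_spacing_m(f_s):7.2f} m per raw range bin")
```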

Back at the (family) factory, flicking through pages of operating specification tables for electric motors (remember - pre-Internet/pre-Google) and harnessing the power of MS-DOS based AutoCAD, I spec'ed out and designed a mounting mechanism for making the radar scan the sky like a traditional meteorological radar - but one that could operate in 80 mph winds, at high altitude, in the rain. Taking a leaf out of my father's design book - it was massively over-engineered ;-)

Home for many months - the mobile high resolution radar + attached caravan. Circa 1994.

This second radar was mounted to an old tow-able camper-van. It was funny because, while the radar would survive 80+ mph winds, a gust of 50+ mph would have simply blown over the camper-van (and probably down the side of a hill or over a cliff). Anyhow, that arrangement (and the hacks it took to get it working) resulted in a few interesting scientific advances:
  • Tracking bumblebees. Back in 1994, while GPS was a thing, it didn't have very good coverage in the southern hemisphere and, due to US military control (Selective Availability), its positioning resolution was very poor. So, in order to work out a precise longitude and latitude for the radar system, it was back to ancient ways and tracking the sun. I had code that ran the radar in passive mode, scanned horizontally and vertically until it found that big microwave in the sky, and tracked its movements - and from there determined the radar's physical location (a toy sketch of the idea follows this list). (Un)fortunately, through a mistake in my programming that left the radar emitting its 25 kW load, I found it could sometimes lock on to and track bright blips near ground level. Through some investigation and poor coding, I'd managed to build a radar tracking system for bumblebees (since bumblebees were comparable in size to the wavelength and over-sampled bin size, they were highly reflective and dominated the sun).
  • Weather inside valleys. The portability of the camper-van and the high resolution of the radar also meant that, for the first time ever, it was possible to monitor and scientifically measure the weather phenomena within complex mountain valley systems. Old long-range radar, with resolutions measured in thousands of cubic meters per pixel, had only observed weather events above the mountains. Now it was possible to digitally observe weather events below that, inside valleys and between mountains, at bumblebee resolution.
  • Digital contrails. Another side-effect of the high-resolution digital radar was its ability to measure the water density of clouds even on sunny days. Sometimes those clouds were condensation trails from aircraft. So, with a little code modification, it became possible to identify contrails and follow their trails back to their root source in the sky - often a highly reflective aircraft - opening up new research paths into tracking stealth aircraft and cruise missiles.
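
As a rough illustration of the sun-tracking idea in the first bullet above, here is a minimal sketch: a low-precision solar-position model is inverted by brute-force grid search to recover latitude and longitude from sun elevations measured at known UTC times. Everything here (the simplified model, the search window around New Zealand, the sample times) is illustrative, not the original radar code.

```python
# Toy sketch: locate yourself by tracking the sun. Given sun elevation angles
# measured at known UTC times, grid-search the lat/lon that best explains them.
import math
from datetime import datetime, timezone

def sun_altitude_deg(lat_deg: float, lon_deg: float, t: datetime) -> float:
    """Approximate solar elevation (degrees) at a location and UTC time."""
    # Days since the J2000 epoch (2000-01-01 12:00 UTC).
    n = (t - datetime(2000, 1, 1, 12, tzinfo=timezone.utc)).total_seconds() / 86400.0
    # Low-precision solar ephemeris (good to a fraction of a degree).
    L = math.radians((280.460 + 0.9856474 * n) % 360.0)        # mean longitude
    g = math.radians((357.528 + 0.9856003 * n) % 360.0)        # mean anomaly
    lam = L + math.radians(1.915) * math.sin(g) + math.radians(0.020) * math.sin(2 * g)
    eps = math.radians(23.439 - 4e-7 * n)                      # obliquity
    dec = math.asin(math.sin(eps) * math.sin(lam))             # declination
    ra = math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))  # right ascension
    gmst = math.radians((280.46061837 + 360.98564736629 * n) % 360.0)
    H = gmst + math.radians(lon_deg) - ra                      # local hour angle
    lat = math.radians(lat_deg)
    alt = math.asin(math.sin(lat) * math.sin(dec) +
                    math.cos(lat) * math.cos(dec) * math.cos(H))
    return math.degrees(alt)

def locate_from_sun(observations, step=0.5):
    """Grid-search the lat/lon that best matches (utc_time, measured_altitude) pairs."""
    best_pos, best_err = None, float("inf")
    # Search window restricted to the area around New Zealand for this toy example.
    lats = [-60.0 + step * i for i in range(int(60.0 / step) + 1)]
    lons = [160.0 + step * i for i in range(int(20.0 / step) + 1)]
    for lat in lats:
        for lon in lons:
            err = sum((sun_altitude_deg(lat, lon, t) - alt) ** 2 for t, alt in observations)
            if err < best_err:
                best_pos, best_err = (lat, lon), err
    return best_pos

if __name__ == "__main__":
    # Fake a morning's worth of "sun fixes" from a known spot, then recover it.
    truth_lat, truth_lon = -43.5, 172.6     # roughly Christchurch, for illustration
    times = [datetime(1994, 6, 1, h, tzinfo=timezone.utc) for h in (21, 23, 1, 3)]
    obs = [(t, sun_altitude_deg(truth_lat, truth_lon, t)) for t in times]
    print("estimated lat/lon:", locate_from_sun(obs))
```

With real (noisy) measurements you would add azimuth observations or a finer second-pass search, but the shape of the inversion is the same.
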
It was a fascinating scientific and hacking experience. If you’ve ever stood in a doorway during a heavy rainfall event and watched a curtain of heavier rainfall weave its way slowly down the road and wondered at the physics and meteorology behind it, here was a system that digitally captured that event from a few meters above the ground, past the clouds, through the melting layer, and up to 10 km in the air – and helped reset and calibrate the mathematical models still used today for weather forecasting and global climate modeling.

By the end of 1994 it was time to wrap up my thesis, leave New Zealand, head off on my Great OE, and look for full-time employment in some kind of professional capacity.


When I look back at what led me to a career in Information Security, the 1980's hacking of protected C64 games, the pre-Internet evolution of BBSs and the culture of collaboration they built, and the hardware hacking and construction of a technology that was game changing (for its day) - they're the three things (and time periods) that remind me of how I grew the skills and developed the experience to tackle any number of subsequent Internet security problems - i.e. hack my way through them. I think of it as a unique mix. When I meet other hackers whose passions likewise began in the 1980's or early 1990's, it's clear that everyone has their own equally exciting and unique journey - which makes it all the more interesting.

I hope the tale of my journey inspires you to tell your own story and, for those much newer to the scene, proves that we older hands probably didn't really have a plan for how we got to where we are either :-)

This is PART THREE of THREE.

PART ONE (C64 Hacking)  and PART TWO (BBS Hacking) are available to read too.

--Gunter


Tuesday, January 8, 2019

Hacker History II: The BBS Years

Post-C64 Hacking (in Part 1 of Hacker History)... now on to Part 2: The BBS Years

In late 1986 (a few months before I started my first non-newspaper-delivery and non-family-business job - working at a local supermarket) I launched my first bulletin board system (BBS). I can't remember the software I was running at the time, but it had a single 14k dial-up facility running on all the extra C64 equipment I'd been "gifted" by friends wanting faster, always-on access to my latest cheats and hacks.

The premise behind the BBS was two-fold: I wanted to learn something new (and hacking together a workable and reliable BBS system in the mid-80's was a difficult enough challenge), and I saw it as a time-saving distribution channel for my cheats/hacks; others could dial in and download them themselves, instead of me messing around with stacks of floppy discs etc.

At some point in 1986 I'd also saved enough money to buy an IBM PC AT clone - a whopping 12 MHz 80286 PC, complete with Turbo button and a 10Mb hard drive. I remember speccing out the PC with the manufacturer. They were stunned that a kid could afford their own PC AT, that he planned to keep it in his bedroom, and that he wanted an astounding 16k of video memory ("what do you need that for? Advanced ACAD?")!

By 1989 the BBS had grown fairly large - a couple hundred regular members, several of them paying monthly subscription fees - but the stack of C64's powering the BBS was showing its age and, in the meantime, my main computing had moved down the PC path from 286, to 386, and on to a brand-spanking new 486.

It was time to move on from C64 and go full-PC – both with the BBS and the hacks/cheats I was writing.

So in 1990, over the Summer/Christmas break from University, I set about shifting the BBS over to a (single) PC - running Remote Access, with multiple dial-in lines (14.4k for regular users and 28.8k for subscribers).


The dropping of C64 and move to fully-fledged x86 PC resulted in a few memorable times for me:
  • BBS’s are like pets. Owning and operating a BBS is a lot like looking after an oversized pet that eats everything in its path and has destructive leanings; they’re expensive and something is always going wrong. From the mid-80’s to mid-90’s (pre-“Internet”) having a BBS go down would be maddening to all subscribers. Those subscribers would be great friends when things were running, or act like ungrateful modern-day teenagers being denied “screen-time” if they couldn’t dial-in for more than a couple of days. Keeping a BBS running meant constant tinkering under the covers – learning the intricacies of PC hardware architecture, x86 assembly, live patching, memory management, downtime management, backup/recovery, and “customer management”. The heady “good-old days” of PC development.
  • International Connectivity. With me at University and too often referred to as the "student that knows more about computers than the campus IT team", in 1991 I added Fidonet and Usenet support to my BBS. There had been a few BBS's in New Zealand before mine to offer these newsgroups, but they were very limited (i.e. a small number of groups) because they were reliant upon US dial-up for synching (which was damned expensive!). My solution was to use a spare modem in the back of a University lab PC to connect semi-permanently to my BBS. From there my BBS used the University's "Internet" undersea cable connectivity to download and synch all the newsgroups. Technically I guess you could call it my first "backdoor" hacking experience - which ended circa 1993 after I was told to stop because (by some accounts) the BBS was, at peak, consuming 1/3 of the entire country's academic bandwidth.
  • First Security Disclosure. Setting up Remote Access (RA) was an ordeal. It was only a week later – Christmas Eve 1990 – that I publicly disclosed my first security vulnerability (with a self-developed patch); an authentication bypass to the system that controlled what games or zones a subscriber could access. I can’t remember how many bugs and vulnerabilities I found in RA, QEMM, MS-DOS, modem drivers, memory managers, and the games that ran on RA over those years. Most required some kind of assembly instruction patch to fix.
  • Mailman and Sysop. Ever since those first BBS days in 1986, I'd felt that email (or Email, or E-Mail) would be the future for communications. The tools and skills needed for managing a reliable person-to-person or person-to-group communication system had to be built and learned - as did the management of trust and the application of security. Some BBS operators loved being Sysops (System Operators - i.e. Admins) because they could indulge their voyeuristic tendencies. I hated BBS's and Sysops that operated that way, and it became an early mission of mine to figure out ways of better protecting subscriber messages.

That fumbling about and experimenting with PC hardware, MS-DOS, and Windows at home and with the Bulletin Board System, coupled with learning new systems at University such as DEC Alpha, OpenVMS, Cray OS, and HP-UX in the course of my studies, and the things I had to piece together and program at my parents' factories (e.g. PLCs, ICSs, RTUs, etc.), all combined to give me a unique perspective on operating systems and hardware hacking.

By the time I’d finished and submitted my post-grad research thesis, it was time to tear down the BBS, sell all my computers and peripherals, and leave New Zealand for my Great OE (Overseas Experience) at the end of 1994.

This is PART TWO of THREE.

PART ONE (C64 Hacking) was posted yesterday and PART THREE (Radar Hacking) will be on Wednesday.

Monday, January 7, 2019

Hacker History I: Getting Started as a Hacker

Curiosity is a wonderful thing; and the key ingredient to making a hacker. All the best hackers I know are not only deeply curious creatures but have a driving desire to share the knowledge they uncover. That curiosity and sharing underpins much of the hacker culture today – and is pretty core to people like me and those I trust the most.

Today I continue to get a kick out of mentoring other hackers and (fingers crossed) upcoming InfoSec stars and, in a slightly different format, providing "virtual CISO" support to a handful of professionals (through my Ablative Security company) who have been thrown headfirst into protecting large enterprise or local government networks.

One of the first questions I get asked as I’m mentoring, virtual CISO’ing, or grabbing beers with a new batch of hacker friends at some conference or other is “how did you get started in computers and hacking?”.

Where did it all start?

The early days of home computing were a mixed bag for me in New Zealand. Before ever having my own computer, a bunch of friends and I would ditch our BMXs daily in the front yard of any friend that had a Commodore VIC-20 or Amstrad CPC, throw a tape in the tape reader, and within 15 minutes be engrossed in a game - battling each other for the highest score. School days were often dominated by a room full of BBC Micros - where one of the most memorable early programs I wrote used a sensitive microphone to capture the sounds of bugs eating. I can still remember plotting the dying scream of a stick insect as it succumbed to science!

Image via: WorthPoint

I remember well the first computer I actually owned - a brand-spanking new SpectraVideo SV-328 (complete with cassette tape reader) that Santa delivered for Christmas in 1983. I thought it was great, but quickly tired of it because there weren't many games and all my friends were getting Commodore VIC-20 or Commodore 64 microcomputers - which had oh so many more games. So, come late 1984, I flogged my SpectraVideo and bought (second-hand) my first Commodore 64 (C64).

I can safely say that it was the C64 that lit my inner hacker spark. First off, the C64 had both a tape (then later diskette) capability and a games cartridge port. Secondly, New Zealand is a LONG way from where all the new games were being written and distributed from. Thirdly, as a (pre)teen, a single cartridge game represented 3+ months of pocket money and daily newspaper deliveries.

These three constraints resulted in the following:
  • My first hardware hack. It was possible to solder a few wires and short-circuit the memory flushing and reboot process of the C64 via the games cartridge mechanism to construct a “reset” button. This meant that you could insert the game cartridge, load the game, hold-down your cobbled together reset button, remove the games cartridge, and use some C64 assembly language to manipulate the game (still in memory). From there you could add your own boot loader, save to tape or floppy, and create a back-up copy of the game.
  • "Back-up Copies" and Community. C64 games, while plentiful, were damned expensive and took a long time to get to New Zealand. So a bunch of friends, all with C64's, would pool our money every few weeks to buy the latest game from the UK or US; thereafter creating "back-ups" for each other to hold on to - just in case the costly original ever broke. Obviously, those back-up copies needed to be regularly tested for integrity. Anyhow, that was the basis of South Auckland's community of C64 hackers back in 1983-1985. A bunch of 10-14 year-olds sharing the latest C64 games.
  • Copy-protection Bypassing. Unsurprisingly, our bunch of kiwi hackers weren't the first or only people to create unauthorized back-ups of games. As floppies replaced tapes and physical cassettes as the preferred media for C64 games, the software vendors started their never-ending quest of adding copy-protection to prevent unauthorized copying and back-ups. For me, this was when hacking became a passion. Here were companies of dozens, if not hundreds, of professional software developers trying to prevent us from backing up the programs we had purchased. For years we learned, developed, and shared techniques to bypass the protections; creating new tools for backing up, outright removal of onerous copy-protection, and shrinking bloated games to fit on single floppies.
  • Games Hacking. At some point, you literally have too many games and the thrill of the chase changes. Instead of looking forward to playing the latest game for dozens of hours or days and iteratively working through campaigns, I found myself turning to hacking the games themselves. The challenge became partially reversing each game, constructing new cheats and bypasses, and wrapping them up in a cool loader for a backed-up copy of the game. Here you could gain infinite lives, ammo, gold, or whatever, and quickly step through the game - seeing all it had to offer and doing so within an hour (a toy sketch of the memory-snapshot trick behind many such trainers follows this list).
  • Hacking for Profit. Once some degree of reputation for bypassing copy-protection and creating reliable cheater apps got around, I found that my base of “friends” grew, and monetary transactions started to become more common. Like-minded souls wanted to buy hacks and tools to back-up their latest game, and others wanted to bypass difficult game levels or creatures. So, for $5-10 I’d sell the latest cheat I had.
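
As promised in the Games Hacking bullet above, here is a rough illustration of the kind of trick behind many such trainers: snapshot memory, "lose a life", diff the snapshots, and keep narrowing until only the counter remains. This is a generic, modern re-telling in Python with made-up addresses and values, not the original C64 tooling.

```python
# Toy "snapshot diff" trainer: locate a lives counter hidden somewhere in memory
# by repeatedly diffing memory before and after the counter is known to decrement.
import random

def snapshot(memory):
    """Copy the current game memory so it can be diffed later."""
    return list(memory)

def narrow_candidates(before, after, expected_delta, candidates=None):
    """Keep only the addresses whose value changed by exactly expected_delta."""
    if candidates is None:
        candidates = range(len(before))
    return [addr for addr in candidates if after[addr] - before[addr] == expected_delta]

if __name__ == "__main__":
    # Fake 64 KB of "game memory" with a hidden lives counter at a random address.
    memory = [random.randint(0, 255) for _ in range(65536)]
    lives_addr = random.randrange(65536)
    memory[lives_addr] = 3

    candidates = None
    for _ in range(3):                          # "lose a life" a few times, diffing each time
        before = snapshot(memory)
        memory[lives_addr] -= 1                 # the game decrements its counter
        for _ in range(2000):                   # unrelated memory churns too (that's the noise)
            addr = random.randrange(65536)
            if addr != lives_addr:              # keep the toy counter intact for the demo
                memory[addr] = random.randint(0, 255)
        candidates = narrow_candidates(before, memory, -1, candidates)

    print("suspected lives counter address:", candidates)  # freeze or patch this address
```

On the real hardware the payoff was typically patching the located address, or the instruction that decrements it, from the custom loader.
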
At some point in 1986 I recognized that I had a bunch of C64 equipment – multiple floppy drives, a few modems, even a new Commodore 64C – and more than enough to start a BBS.

This is PART ONE of THREE. 

PART TWO (BBS Hacking) is up and PART THREE (Radar Hacking) on Wednesday.

Tuesday, December 4, 2018

Ubiquitous Video Surveillance and the Policing Paradigm Change it Brings

Policing in the 21st Century is obviously changing rapidly. New technological advances are fundamentally changing the way in which police forces and related government entities can track, locate, and collect evidence.

Two game changing technologies - working together - perhaps underpin the greatest tool for policing the world over. The combination of high-resolution digital video capture and facial recognition. Both sit at the crux of future policing and bring new societal change.

Projecting forward, what could the next decade or two hold? 

It's easy to get into the realm of Science Fiction and dystopian futures, but when I consider some of the social impact (and "opportunities") such technologies can bring, it naturally feels like the premise for several short stories.

One: Peeking Under Facial Obfuscation
As I watch news of the "Yellow Vest" riots in Paris, it is inevitable that high-resolution digital video capture of protesters - combined with facial recognition - will mean that many group protest actions will become individually attributable. While the face of the perpetrator may not initially be tied to an identity, a portfolio of digital captures can be compiled and (at some future date) associated with the named individual.

New technologies in the realm of full-color night-vision video capture and advanced infrared heat-based body and face mapping lay the basis of radically better tools for associating captured maleficence with an individual. Combine that with the work being done on infrared facial recognition, and we'll soon find that scarves, balaclavas, or even helmets will cease to protect the identity of the perpetrator.

Two: Perpetual Digital Trail
Many large cities are approaching the point where it is impossible to stand in any public space or thoroughfare and not be captured by at least one video camera. Capitalizing on this, many metropolitan police forces are already able to track a surveilled individual or entity in real time through their networked cameras. In addition, some police forces have already combined such capabilities with facial recognition to quickly spot wanted individuals or suspects in crowds and track their movements across cameras in real time.

We can expect the density and prevalence of cameras to grow. We can also expect the video captured from these cameras to increasingly move to the cloud and be retained indefinitely. The AI tooling of today already enables us to intelligently stitch together all the video content and construct a historical trail for any person or physical entity.

When combined with (One), it means that in the near future police forces could track each suspect both forward and backward in time - identifying not only the history of travel and events preceding the crime, but also their origination (e.g. home) address... and from there arrive at a conclusion about identity. Imagine doing this for thousands of protesters simultaneously. Obviously, such a capability would also facilitate the capture of facial images of the suspect before they donned any masks or facial obfuscation tools used during the protest or crime.

Three: Inferring Pending Harm and Stress
As digital video cameras radically increase their capture resolution - moving from an average of 640x480 to 3840x2160 (4K Ultra HD) and beyond over the coming years - facial recognition accuracy will obviously improve, and we can expect other personal traits and characteristics to be accurately inferred too.

Studies of human movement, and the tooling used to create life-like movement in digital movies, naturally lend themselves to identifying changes in an individual's movements and inferring certain things. For example, being able to guess the relative weight and flexibility of contents within a backpack being worn by the surveilled individual, the presence of heavy objects being carried within a suit jacket, or changes in the contents of a bag carried in-hand.

If we also assume that the surveilled individual will be caught by multiple cameras from different locations, the combination of angles and perspective will further help define the unique characteristics and load of the individual. Police forces could use that intelligence to infer the presence of weapons for example.

Higher resolution monitoring also lends itself to identifying and measuring other more personal attributes of the person being monitored. For example, details on the type and complexity of jewelry being worn, tattoos, or unique visible identifiers.

Using physical tells such as sweat density, head-bobbing (see an earlier blog - Body Worn Camera Technologies), and heart-rate, it will be possible to identify whether the person is stressed, under duress, or has recently exerted themselves.

Four: The Reverse Paradigm
Such technologies are not exclusive to police forces and government departments. It is inevitable that these technologies will also be leveraged by civilians and criminals alike - bringing a rather new dynamic to future policing.

For example, what happens if anti-riot police cannot obfuscate their identity during a riot - despite balaclavas and protective helmets? If every retaliatory baton strike, angered shield charge, tear gas spray, or taser use can be individually attributed to the officer that did it (and the officer can be identified by face capture or uniform number), officers become individually responsible and ultimately accountable for their actions.

Imagine for instance that every officer present now had a virtual folder of video captures detailing their actions during the riots. With little effort and likely a little crowd-sourcing, each officer's identity would be discovered and publicly associated with their actions.

It would be reasonable to expect that police officers would adjust their responses to riot situations - and it's likely that many would not want to expose themselves to such risk. While it is possible new or existing laws could be used to protect officers from most legal consequences of "doing their job", the social consequences may be a different story.

We've already seen how doxing can severely affect the life and safety of its victims. It would be reasonable to assume that some percentage of a rioting population would be only too eager to publish a police officer's "crimes" along with their identity, address, and any other personal data they could find. Would we expect police officers with young families to take on the risk of riotous agitators arriving at their family's doorstep - looking for vengeance?

Five: Ubiquitous Consumer-led Video Surveillance
A question will inevitably be raised as to whether the ability to construct digital trails or infer motivations and harm will remain restricted to police and government entities. After all, they're the ones with the budget and authority to install the cameras.

These challenges may be overcome. For example, new legal test cases may force governments to make such video feeds publicly available - after all, funding is through public money. We've seen such shifts in technology access before - e.g. GPS, satellite mapping, satellite imagery.

An interesting model for how consumer video could become an even more complete and ubiquitous capture (and analytics) capability may be relatively simple.

Just as today's home security systems have advanced to include multiple internal and external high-resolution video feeds to the cloud, maybe the paradigm changes again. Instead of a managed security monitoring service, things become community based and crowd sourced.

For example, let's say that for each high-resolution video camera you install and connect to a community cloud service, you gain improved access to the accumulated mesh of every other contributing camera. Overlaying that mesh of camera feeds, additional services augment the video with social knowledge and identity information, and the ability to trace movements around an event - just like the police could - for a small fee. The resultant social dynamics would be very interesting... does privacy end at your doorstep? If every crime in a public space is captured and perpetrators labeled, does that make petty and premeditated crime disappear? Does local policing shift to community groups and vigilantes?

Conclusions
Advances in high-resolution digital video cameras and facial recognition will play a critical role in how the policing of society changes over the next couple of decades.

The anticipated advancements will certainly make it easier for police forces to track, identify, and hold accountable those that perpetrate crimes. But such technologies will also inevitably be utilized by those being policed. While the illustrated scenarios began with the recent riots in France, the repercussions for police forces in Mexico facing wealthy cartels would perhaps be more dire.

It is too early to tell whether the ubiquity of these advancing technologies will tilt the hand in one direction or the other, or whether we'll reach a new technology stalemate. Accountability for an individual's actions - backed by proof - feels like the right societal movement, but it opens doors to entirely new forms of abuse.

-- Gunter Ollmann

Monday, October 1, 2018

The Diet Pill Security Model

The information security industry, lacking social inhibitions, generally rolls its eyes at anything remotely hinting at being a "silver bullet" for security. Despite that obvious hint, marketing teams remain undeterred in labeling their company's upcoming widget as the savior from the next security threat (or the last one - depending on what's in the news today).

I've joked in the past that the very concept of a silver bullet is patently wrong - as if silver would make a difference. No, the silver bullet must in fact be water. After all, chucking a bucket of water on a compromised server is guaranteed to stop the attacker dead in their tracks.

Bad jokes aside, the fundamental problem with InfoSec has less to do with the technology being proposed or deployed to prevent this or that class of threat, and more to do with the lack of buyers willing to change their broken security practices and complement their new technology investment.

Too many security buyers are effectively looking for the diet pill solution. Rather than adjusting internal processes and dropping bad practices, there is eternal hope that the magical security solution will fix all ills and the business can continue to binge on deep-fried Mars bars and New York Cheesecakes.

As they say, "hope springs eternal".

Just as a medical doctor's first-line advice is to exercise more and eat healthily, our corresponding security advice is to harden your systems and keep up to date with patching.

Expecting the next diet pill solution to cure all your security ills is ludicrous. Get the basics done right - and get them right all the time - first, and then expand from there.

-- Gunter

Friday, September 21, 2018

The Missing Piece of the Security Conference Circuit

So far this year I think I've attended 20+ security conferences around the world - speaking at many of them. Along the way I got to chat with hundreds of attendees and gather their thoughts on what they hoped to achieve or learn at each of these conferences.

In way too many cases I think the conference organizers have missed the mark.

I'd like to offer the following thoughts and feedback to the people organizing and facilitating these conferences (especially those catering to local security professionals):


  • Attendees have had enough of stunt hacking presentations. By all means, throw in one or two qualified speakers on some great stunt hack - but use them as sparingly as keynotes.
  • Highly specialized, borderline stunt-hacking topics disenfranchise many of the attendees. Sure, it's fun to have a deep-dive hacking session on voting machines, smart cars, etc., but when every session is focused on (what is essentially an) "edge" security device that most attendees will never be charged with attacking or defending... it's not just overwhelming, it becomes noise that the majority of attendees can't apply in "real life".
  • As an industry we're desperately trying to engage those entering the job market and "sell" them on our security profession. Trinket displays of security (e.g. CTF, lock-picking) sound more interesting to people already in security... and much less so to those just entering the job market. Let's face it, no matter how much they enjoy picking locks, it's unlikely to be a qualification for first-line SOC analysts. Even for those that have been in the industry for a few years, these cliche trinket displays of security "skill" have become tired... and look like wannabe Def Cons.
  • Most attendees really want to LEARN something that they can APPLY to their job. They're looking for nuggets of smartness that can be used tomorrow in the execution of their job.

Here's a few thoughts for security (/hacker) conference organizers:


  • Have a track (or two) specifically focused on attack techniques (or defense techniques) where each presented session can clearly say what new skill or technique attendees will have acquired as they leave the hallowed chamber of security knowledge goodness. This may be as simple as escalating existing skills, e.g. "if you're a 5 on XSS today, by the end of the session you'll have reached a 7 in XSS against SAP installations", or "you'll learn how to use Jupyter Notebooks for managing threat hunt collaboration". The objective is simple: an attendee should be able to apply new skills and expertise tomorrow... at their day job.
  • Get more people presenting, and presenting for less time. Encourage a broader range of speakers to present on practical security topics. I think many attendees would love to see an "open mic" speaker track where security professionals (new and upcoming) can present deep dives on interesting security topics and raise questions to attendees for help/guidance/answers. For example, the speaker has deep-dived into blocking spear-phishing emails using XYZ product but identified that certain types of email vectors evade it... they present proposals for improvement... and the attendees add their collective knowledge. It encourages interaction and (ideally) helps to solve real-world problems.
  • An iteration of the idea above, but focused on students, those job hunting for security roles, or those on the first rung of the security ladder... a track where they can present on a vetted security topic and a panel of security veterans evaluates the presentation - the content and the delivery - and provides rewards. In particular, I'd love to see (and ensure) that the presentation is recorded, and that the presentation material is available for download (including maybe a backup whitepaper). Why? Because I'd encourage these speakers to reference and link to these resources (and conference awards) in their resumes/CVs so they can differentiate themselves in the hiring market.
  • Finally, I'd encourage (and offer myself up for participation in) a track for practicing and refining interview techniques. It's daunting for all new starters in our industry to successfully navigate an interview with experienced and battle-weary security professionals. It takes practice, guidance, and encouragement. In reality, starter interviewees have less than 15 minutes to establish their technical depth, learning capability, and group compatibility. On the flip side, there should be learning and practice sessions for technical security hiring managers on overcoming biases and encouraging diversity. We're an industry full of introverts and know-it-alls that genuinely want to help... but we all need a little help and coaching in this critical area.

-- Gunter Ollmann

The Security Talent Gap is Misunderstood and AI Changes it All

Despite headlines now at least a couple years old, the InfoSec world is still (largely) paying lip service to the lack of security talent and the growing skills gap.

The community is apt to quote and brandish the dire figures but, unless you're actually a hiring manager striving to fill low- to mid-level security positions, you're not feeling the pain - in fact there's a high probability many see the problem as a net positive in terms of their own employment potential and compensation.

I see today's Artificial Intelligence (AI) and the AI-based technologies that'll be commercialized over the next 2-3 years as exacerbating the problem - but also offering up a silver-lining.

I've been vocal for decades that much of the professional security industry is, and should be, methodology based. And, by being methodology based, be reliably repeatable; whether that be bug hunting, vulnerability assessment, threat hunting, or even incident response. If a reliable methodology exists, and the results can be consistently verified as correct, then the process can be reliably automated. Nowadays, that automation lies firmly in the realm of AI - and the capabilities of these newly emerged AI security platforms are already reliably out-performing tier-one (i.e. 0-2 years' experience) security professionals.

In some security professions (such as auditing & compliance, penetration testing, and threat hunting) AI-based systems are already capable of performing at tier-two (i.e. 2-8 years experience) levels for 80%+ of the daily tasks.


On one hand, these AI systems alleviate much of the problem related to the shortage and global availability of security skills at the lower end of the security professional ladder. So perhaps the much-touted and repeated shortage numbers don't matter - and extrapolating current shortages into future open positions overestimates the problem.

However, if AI solutions consume the security roles and daily tasks equivalent to those of 8-year industry veterans, have we also created an insurmountable chasm for recent graduates and those who wish to transition and join the InfoSec professional ladder?

While AI is advancing the boundaries of defense and, frankly, an organization's ability to detect and mitigate threats has never been better (and will be even better tomorrow), there are still large swathes of the security landscape that AI has yet to solve. In fact, many of these new swathes have only opened up to security professionals because AI has made them available.

What I see in our AI Security future is more of a symbiotic relationship.

AIs will continue to speed up the discovery and mitigation of threats, and get better and more accurate along the way. It is inevitable that tier-two security roles will succumb and eventually be replaced by AI. What will also happen is that security professional roles will change from the application of tools and techniques into business risk advisers and supervisors. Understanding the business, communicating with colleagues in other operational facets, and prioritizing risk response are the intangibles that AI systems will struggle with.

In a symbiotic relationship, security professionals will guide and communicate these operations in terms of business needs and risk. Just as Internet search engines have replaced the voluminous Encyclopedia Britannica and Encarta, and the Dewey Decimal system, Security AI is evolving to answer any question a business may raise about defending their organization - assuming you ask the right question, and know how to interpret the answer.

With regards to the skills shortage of today - I truly believe that AI will be the vehicle to close that gap. But I also think we're in for a paradigm change in who we'll be welcoming in to our organizations and employing in the future because of it.

I think that the primary beneficiaries of these next generation AI-powered security professional roles will not be recent graduates. With a newly level playing field, I anticipate that more weathered and "life experienced" people will assume more of these roles.

For example, given the choice between a 19 year-old freshly minted graduate in computer science and a 47 year-old woman with 25 years of applied mechanical engineering experience in the "rust belt" of the US... those life skills will inevitably be more applicable to making risk calls and communicating them to the business.

In some ways the silver lining may be for the middle America that has suffered and languished as technology has moved on from coal mining and phone-book printing. It's quite probable that it will become the hot-spot for newly minted security professionals - leveraging their past (non-security) professional experiences, along with decades of people or business management and communication skills - and closing the security skills gap using AI.

-- Gunter

Tuesday, April 24, 2018

Cyber Scorecarding Services

Ample evidence exists to underline that shortcomings in a third party's cyber security posture can have an extremely negative effect on the security integrity of the businesses they connect or partner with. Consequently, for a couple of decades there's been a continuous and frustrated desire for some kind of independent verification or scorecard mechanism that can help primary organizations validate and quantify the overall security posture of the businesses they must electronically engage with.

A couple decades ago organizations could host a small clickable logo on their websites – often depicting a tick or permutation of a “trusted” logo – that would display some independent validation certificate detailing their trustworthiness. Obviously, such a system was open to abuse. For the last 5 or so years, the trustworthiness verification process has migrated ownership from the third-party to a first-party responsibility.

Today, there are a growing number of brand-spanking-new start-ups adding to the pool of slightly longer-in-the-tooth companies taking on the mission of independently scoring the security and cyber integrity of organizations doing business over the Web.

The general premise of these companies is that they'll undertake a wide (and widening) range of passive and active probing techniques to map out a target organization's online assets, crawl associated sites and hidden crevasses (underground, over ground, wandering free... like the Wombles of Wimbledon?) to look for leaks and unintended disclosures, evaluate current security settings against recommended best practices, and even dig up social media dirt that could be useful to an attacker; all as contributors to a dynamic report and an ultimate "scorecard" that is effectively sold to interested buyers or service subscribers.

I can appreciate the strong desire for first-party organizations to have this kind of scorecard on hand when making decisions on how best to trust a third-party supplier or partner, but I do question a number of aspects of the business model behind providing such security scorecards. And, as someone frequently asked by technology investors looking for guidance on the future of such business ventures, there are additional things to consider as well.

Are Cyber Scorecarding Services Worth it?
As I gather my thoughts on the business of cyber scorecarding and engage with the purveyors of such services again over the coming weeks (post RSA USA Conference), I'd offer up the following points as to why this technology may still have some business wrinkles and why I'm currently questioning the long-term value of the business model.

1. Lack of scoring standards
There is no standard to the scorecards on offer. Every vendor is vying to make their scoring mechanism the future of the security scorecard business. As vendors add new data sources or encounter new third-party services and configurations that could influence a score, they’re effectively making things up as they go along. This isn’t necessarily a bad thing and ideally the scoring will stabilize over time at a per vendor level, but we’re still a long way away from having an international standard agreed to. Bear in mind, despite two decades of organizations such as OWASP, ISSA, SANS, etc., the industry doesn’t yet have an agreed mechanism of scoring the overall security of a single web application, let alone the combined Internet presence of a global online business.

2. Heightened Public Cloud Security
Third-party organizations that have moved to the public cloud, have enabled the bulk of the default security features freely available to them, and are using the automated security alerting and management tools provided, are already very secure - much more so than their previous on-premises DIY efforts. As more organizations move to the public cloud, they all begin to have the same security features, so why would a third-party scorecard be necessary? We're rapidly approaching a stage where just having an IP address in a major public cloud puts your organization ahead of the pack from a security perspective. Moreover, I anticipate that the default security of public cloud providers will continue to advance in ways that are not easily externally discernable (e.g. impossible-travel protection against credential misuse) - and these kinds of ML/AI-led protection technologies may be more successful than the traditional network-based defense-in-depth strategies the industry has pursued for the last twenty-five years.

3. Score Representations
Not only is there no standard for scoring an organization’s security, it’s not clear what you’re supposed to do with the scores that are provided. This isn’t a problem unique to the scorecard industry – we’ve observed the phenomenon for CVSS scoring for 10+ years.
At what threshold should I be worried? Is a 7.3 acceptable, while a 7.6 means I must patch immediately? How much more of a risk to my business is an organization with a score of 55 versus a vendor that scores 61?
The thresholds for action (or inaction) based upon a score are arbitrary and will be in conflict with each new advancement or input the scorecard provider includes as they evolve their service. Is the 88.8 of January the same as the 88.8 of May after the provider added new features that factored in CDN provider stability and Instagram crawling? Does this month’s score of 78.4 represent a newly introduced weakness in the organization’s security, or is the downgraded score an artifact of new insights that weren’t accounted for previously by the score provider?

4. Historical References and Breaches
Then there's the question of how much of an organization's past should influence its future ability to conduct business securely. If a business got hacked three years ago and then responsibly disclosed and managed their response - complete with reevaluating and improving their security - does another organization with the same current security configuration get a better score for not having disclosed a past breach?
Organizations get hacked all the time - it's why modern security now works on the premise of "assume breach". The remotely visible and attestable security of an organization provides no real insight into whether they are currently hacked or have been recently breached.

5. Gaming of Scorecards
Gaming of the scorecard systems is trivial and difficult to defend against. If I know who my competitors are and which scorecard provider (or providers) my target customer is relying upon, I can adversely affect their scores. A few faked “breached password lists” posted to PasteBin and underground sites, a handful of spam and phishing emails sent, a new domain name registration and craftily constructed website, a few subtle contributions to IP blacklists, etc. and their score is affected.
I haven't looked recently, but I wouldn't be surprised if some blackhat entrepreneurs have already launched such a service line. I'm sure it could pay quite well and requires little effort beyond the number of disinformation services that already exist underground. If scorecarding ever becomes valuable, so too will its deception.

6. Low Barrier to Market Entry
The barrier to entry into the scorecarding industry is incredibly low. Armed with "proprietary" techniques and "specialist" data sources, anyone can get started in the business. If for some reason third-party scorecarding becomes popular and financially lucrative, then I anticipate that any of the popular managed security services providers (MSSPs) or automated vulnerability assessment (VA) providers could launch a competitive service with as little as a month's notice and only a couple of engineers.
At some point in the future, if there ever were to be standardization of scorecarding scores and evaluation criteria, that's when the large MSSPs and VAs would likely add such a service. The problem for all the new start-ups and longer-toothed start-ups is that these MSSPs and VAs would have no need to acquire the technology or clientele.

7. Defending a Score
Defending the integrity and righteousness of your independent scoring mechanism is difficult and expensive. Practically all the scorecard providers I've met like to explain their efficacy of operation as if it were a credit bureau's credit score - as if that explains the ambiguities of how they score. I don't know all the data sources and calculations that credit bureaus use in their credit rating systems, but I'm pretty sure they're not port scanning websites, scraping IP blacklists, and enumerating service banners - nor that the people being scored have as much control over the data that the scoring system relies upon.
My key point here, though, lies with the repercussions of getting the score wrong, or of providing a score that adversely affects an organization's ability to conduct business online - regardless of the score's righteousness. The affected business will question the score, request that the score provider "fix their mistake", and seek compensation for the damage incurred. In many ways it doesn't matter whether the scorecard provider is right or wrong - costs are incurred defending each case (in energy expended, financial resources, lost time, and lost reputation). For cases that eventually make it to court, I think the "look at the financial credit bureaus" defense will fall a little flat.

Final Thoughts
The industry strongly wants a scoring mechanism to help distinguish good from bad, and to help prioritize security responses at all levels. If only it were that simple, it would have been solved quite some time ago.

Organizations are still trying to make red/amber/green tagging work for threat severity, business risk, and response prioritization. Every security product tasked with uncovering or collating vulnerabilities or misconfigurations, aggregating logs and alerts, or monitoring for anomalies is equally capable of (and likely is) producing its own scores.

Providing a score isn't a problem in the security world; the problem lies in knowing how to respond to the score you've been presented with!

-- Gunter Ollmann