Wednesday, December 11, 2019

How Commercial Bug Hunting Changed the Boutique Security Consultancy Landscape

It’s been almost a decade since the first commercial “for-profit” bug bounty companies launched, leveraging crowdsourced intelligence to uncover security vulnerabilities and simultaneously creating uncertainty for boutique security companies around the globe.

Not only could crowdsourced bug hunting drive down their consulting rates or result in their best bug hunters going solo, it raised ethics questions: should a consultant previously engaged on a customer’s security assessment also pursue out-of-hours bug hunting against that same customer? What if she held back findings from the day job to claim bounties at night?

With years of bug bounty programs now behind us, it is interesting to see how the information security sector transformed – or didn’t.


The fears of the boutique security consultancies – particularly those offering penetration testing and reverse engineering expertise – proved unfounded. A handful of consultants did slip away and adopt full-time bug bounty lifestyles, but most didn’t. Nor did those companies feel a pinch on their hourly consulting rates. Instead, a few other things happened.

First, the boutiques upped the ante by repositioning their attack-based services – defining aggressive “red team” methodologies and doubling down on the value of combining black-box with white-box testing (or reverse engineering combined with code reviews) to uncover product and application bugs more efficiently. Customers were (and are) encouraged to use bug bounties as a “first-pass filter” for finding common vulnerabilities – and then turn to dedicated experts to uncover (and help remediate) the truly nasty bugs.

Second, they began using bug bounty leaderboard tables as a recruitment vehicle for junior consultants. It was a subtle, but meaningful change. Previously, a lot of recruitment had been based on evaluating in-bound resumes by how many public disclosures or CVEs a security researcher or would-be consultant had made in the past. By leveraging the public leaderboards, suddenly there was a target list of candidates to go after. An interesting and obvious ramification was (and continues to be) that newly rising stars on public bug bounty leaderboards often disappear as they get hired as full-time consultants.

Third, bug bounty companies struggled with their business model. Taking a slice of the vendors’ payments to crowdsourced bug hunters sounded easier and less resource-intensive than it turned out to be. The process of triaging the thousands of bug submissions – removing duplicates, validating proof-of-concept code, classifying criticality, and resolving disparities in hunter expectations – is tough work. It’s also something that tends to require a high degree of security research experience and costly expertise that doesn’t scale as rapidly as a crowdsourced community can. The net result is that many of the bug bounty crowdsource vendors were forced to outsource sizable chunks of the triage work to boutique consultancies – as many in-house bug bounty programs also do.

A fourth (but not final) effect was that some consulting teams found contributing to public bug bounty programs an ideal way of cashing in on consulting “bench time” – periods when a consultant is not directly engaged on a commercial project. Contributing to bug bounties has proven a nice supplement to what was previously lost productivity.

Over the last few years I’ve seen some pentesting companies also turn third-party bug bounty research and contribution into in-house training regimes, marketing campaigns, and an engagement model to secure new customers, e.g., find and submit bugs through the bug bounty program and then reach out directly to the customer with a bag full of more critical bugs.

Given the commercial pressures on third-party bug bounty companies, it was not unexpected that they would seek to stretch their business model towards higher-premium offerings, such as options for customers to engage with their best and most trusted bug hunters before opening up to the public, or more traditional report-based “assessments” of the company’s product or website. More recently, some bug bounty vendors have expanded offerings to encompass community-managed penetration testing and red team services.

The lines continue to blur between the boutique security consultancies and crowdsourcing bug bounty providers. It’ll be interesting to see what the landscape looks like in another decade. While there is a lot to be said for and gained from crowdsourced security services, I must admit that the commercial realities of operating businesses that profit from managing or middle-manning their output strike me as a difficult proposition in the long run.

I think the crowdsourcing of security research will continue to hold value for the businesses owning the product or web application, and I encourage businesses to take advantage of the public resource. But I would balance that with the reliability of engaging a dedicated consultancy for the tougher stuff.

-- Gunter Ollmann

First Published: SecurityWeek - December 11, 2019

Thursday, November 14, 2019

Securing Autonomous Vehicles Paves the Way for Smart Cities

As homes, workplaces, and cities digitally transform during our Fourth Industrial Revolution, many of those charged with securing this digital future can find it difficult to “level up” from the endpoints and focus on defining and solving the larger problem sets. It is easy to get bogged down in the myriad of smart and smart-enough devices that constitute “IoT” in isolation from the overall security scope of the smart city – losing both valuable context and constraints.

While “smart city” can mean different things to different people, for city planners and officials its definition and implementation problems are quite well understood. The vendors that come knocking on their doors promote point solutions – smart traffic control systems, 5G and ultra-high bandwidth wireless communications, driverless vehicles, etc. – leaving the cities’ IT, operational technology (OT), and infosec teams to bring it all together.

An essential part of a security professional’s work is diving deep into the flaws and perils of individual products and clusters of technologies. But trying to “solve security” at a city level is an entirely different paradigm.


A substantial number of my peers and security researchers I’ve worked with over the past couple of decades have focused their energies on securing autonomous vehicles. The threats are varied – ranging from bypassing emission and speed controls to evading the next generation of city road taxes and insurance regulations to malicious remote control of someone else’s vehicle – yet mostly isolated to the vehicles themselves. From what I’m seeing and hearing, they’re doing a great job in securing these vehicles. Their security successes also advance traditional transit solutions, which helps smart cities keep pace with the transportation needs of a growing population. 

Given the continued urbanization of the human population, the growth and attraction of megacities (10 million plus inhabitants), and the strains on traditional transport systems, the thought of increasing personal-use autonomous vehicles in these heavily congested cities is outdated and arguably ludicrous. Today’s megacities are already battling traffic congestion with zoned charging, elimination of fossil fuels, and outright bans on private transport. Tomorrow’s megacities – growing from 33 today, the largest holding 38 million people, to over 100 by 2100, with the largest expected to hold in excess of 88 million people – need to completely rethink their transport systems and the security that goes with them.

Oddly enough, securing mass transit for megacities comes with some advantages. Mass transport systems that evolve from trains, trams, and subways have embedded within them design constraints that positively influence security. For example, driverless cars of today have to navigate and solve all kinds of road and traffic problems, while trams stick to pre-defined paths (i.e. rail networks) with greatly simplified routing and traffic signaling. Research papers covering adversarial AI in recent years have focused on attacking the deep learning and cognitive AI systems used by autonomous vehicles (e.g. adding stickers to a stop sign and making the driverless car think the sign says 45 mph), but these tactics would have negligible to no impact on reasonably scoped public transport systems.

It is reasonable to assume that the smart cities of the near future will consist of trillions of smart devices – each of them semi or fully managed, providing alerts, logs, and telemetry of their operations. For those city leaders – particularly CIOs, COOs, CTOs, CISOs, and CSOs – the changes needed to manage, secure, certify, and govern all these devices and their output are mind-bogglingly huge.

Interestingly enough, the framework for managing data security for millions of chatty networked devices has largely been solved. Having become cloud-native, modern Security Information and Event Management (SIEM) technologies have proved to be remarkably successful in identifying anomalies, attacks, and misconfigurations.

The data handling capabilities and scalability of cloud-native SIEM may be just the right kind of toolkit to begin to solve smart city operations (and security) at the megacity level. In addition, with advanced AI being a core component of SIEM, the systems that identify and construct attack kill chains and mitigate threats through conditional access rules could instead be used and trained to identify surge transport requirements (due to concerts ending on a rainy day) and automatically reroute and optimize tram or bus capacity to deliver citizens safely (and dryly) to their destinations – as an example. 
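
To make the SIEM point concrete, the anomaly-spotting half of the problem is conceptually simple even if the scale is not. The sketch below is a toy illustration of the underlying statistical idea – flag telemetry counts that sit several standard deviations away from a rolling baseline – and not any vendor’s actual detection logic; the sensor data is invented for the example.

    # Toy anomaly detector for device telemetry: flag counts that deviate
    # sharply from a rolling baseline. Illustrative only -- production
    # SIEMs layer far richer models on top of the same statistical idea.
    from statistics import mean, stdev

    def flag_anomalies(counts, window=20, threshold=3.0):
        """Return indices whose value sits more than `threshold` standard
        deviations from the mean of the preceding `window` observations."""
        anomalies = []
        for i in range(window, len(counts)):
            baseline = counts[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(counts[i] - mu) / sigma > threshold:
                anomalies.append(i)
        return anomalies

    # Hourly event counts from a hypothetical fleet of tram-stop sensors.
    telemetry = [100, 98, 103, 97, 101] * 5 + [480]  # sudden surge at the end
    print(flag_anomalies(telemetry))                 # -> [25]

Swap the event counts for login failures, firewall denies, or passenger-flow readings and the same loop is the seed of both the security use case and the surge-rerouting use case above.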

Securing smart cities offers many opportunities to rethink our assumptions on security and “level up” the discussion to solve problems at the ecosystem level. Advancements in AI analytics and automated response technologies can handle the logs, alerts, and streaming telemetry that contribute to OT infrastructure security for megacities. In turn, this increase in data volume fine-tunes anomaly and behavioral-based detection systems to operate with higher efficiency and fidelity, which helps secure city-wide IT infrastructure.

-- Gunter Ollmann

First Published: SecurityWeek - November 14, 2019

Tuesday, August 20, 2019

Harnessing Stunt Hacking for Enterprise Defense

Make Sure You Understand the Root Cause of the Vulnerabilities or Attack Vectors Behind the Next Over-Hyped Stunt Hack

Every year, at least one mediocre security vulnerability surprisingly snatches global media attention, causing CISOs and security researchers to scratch their heads and sigh “who cares?”

Following a trail of overly-hyped and publicized security bugs in smart ovens, household fridges, digital teddy bears, and even multi-function toilet-bidets, the last few weeks have seen digital SLR camera vulnerabilities join the buzz list. Yet this latest hack boils down to a set of simple WiFi-enabled file-sharing flaws in a mid-priced camera that allowed researchers to demonstrate specially crafted ransomware attacks. It is not an obvious or imminent threat to most enterprise networks.

Love it or loathe it, stunt hacking and over-hyped bugs are part of the modern information security landscape. While the vast majority of such bugs represent little real threat to business, they stir up legitimate questions. Does marketing security hacks to a fever pitch cause more harm than good? Are stunts a distraction or an amplifier for advancing enterprise security?


There is little doubt within the security researcher community that a well-staged vulnerability disclosure can quickly advance stalled conversations with reluctant vendors. Staged demonstrations and a flair for showmanship had the healthcare industry hopping as security flaws embedded in surgically implanted insulin pumps and heart defibrillators became overnight dinner-table discussions and murder plots in TV dramas. A couple of years later, prime-time news stories of researchers taking control of a reporter’s car – remotely steering the vehicle and disabling braking – opened eyes worldwide to the threats underlying autonomous vehicles, helping to create new pillars of valued cyber security research.

Novel technologies and new devices draw security researchers like moths to a flame – and that tends to benefit the community as a whole. But it is often difficult for those charged with defending the enterprise to turn awareness into meaningful actions. A CFO who’s been sitting on a proposal for managed vulnerability scanning because the ROI arguments were a little flimsy may suddenly approve it on reading news of how the latest step-tracking watch inadvertently reveals the locations of secret military bases around the world.

In a world of over-hyped bugs, stunt hacking, and branded vulnerability disclosures, my advice to CISOs is to make security lemonade by finding practical next steps to take:

  1. Look beyond the device and learn from the root cause of the security failing. Hidden under most of the past medical device hacks were fundamental security flaws involving outdated plain-text network protocols and passwords, unsigned patching and code execution, replay attacks and, perhaps most worrying, poorly thought through mechanisms to fix or patch devices in the field. The outdated and unauthenticated Picture Transfer Protocol (PTP) was the root cause of the SLR camera hack.
  2. Use threat models to assess your enterprise resilience to recently disclosed vulnerabilities. The security research community waxes and wanes on attack vectors from recent bug disclosures, so it often pays to follow which areas of research are most in vogue. The root cause vulnerabilities of the most recent hacks serve as breadcrumbs for other researchers hunting for similar vulnerabilities in related products. For this reason, build threat models for all form factors the root flaw can affect.
  3. Learn, but don’t obsess, over vulnerable device categories and practice appropriate responses. At the end of the day, a WiFi-enabled digital SLR camera is another unauthenticated removable data storage unit that can potentially attach to the corporate network. As such, the response should be similar to any other roaming exfiltration device. Apply the controls for preventing a visitor or employee roaming a datacenter with a USB key in hand to digital SLR cameras.

Regardless of how you feel about the showmanship of stunt hacking, take the time to understand and learn from the root causes of these hacks. While it is highly unlikely that an attacker will attempt to infiltrate your organization with a digital SLR camera (there are far easier and more subtle hacking techniques that will achieve the same goal), it is still important to invest in appropriate policies and system controls to defend vulnerable vectors.

With more people seeking futures as security researchers, it would be reasonable to assume that more bugs (in a broader range of devices and formats) will be disclosed. What may originally present as a novel flaw in, let us say, a robotic lawnmower, may become the seed vector for uncovering and launching new 0-day exploits against smart power strips in the enterprise datacenter at a later date.

Chuckle or cringe, but make sure you understand the root cause of the vulnerabilities or attack vectors behind the next over-hyped stunt hack and don’t have similar weaknesses in your enterprise.

-- Gunter Ollmann

First Published: SecurityWeek - August 20, 2019

Tuesday, July 2, 2019

Defending Downwind as the Cyberwar Heats up

The last few weeks have seen a substantial escalation of tensions between Iran and the US as regional cyberattacks gain pace and sophistication: Iran downed a US drone, possibly leveraging its previously claimed GPS spoofing and GNSS hacking skills to trick it into Iranian airspace, and a retaliatory US cyberattack knocked out Iranian missile control systems.


While global corporations have been targeted by actors often cited as supported by or sympathetic to Iran, the escalating tensions in recent weeks will inevitably bring more repercussions as tools and tactics change with new strategic goals. Over the last decade, at other times of high tension, sympathetic malicious actors have often targeted the websites or networks of Western corporations – pursuing defacement and denial of service strategies. Recent state-level cyberattacks show actors evolving from long-cycle data exfiltration to include tactical destruction.

State-sponsored attacks are increasingly focused on destruction. Holmium, a Middle Eastern actor, has recently been observed by Microsoft targeting the oil & gas and maritime transportation sectors – using a combination of tactics to gain access to networks, including socially engineered spear phishing operations and password spray attacks – and is increasingly associated with destructive attacks.
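
Password spray, in particular, is detectable with plain counting: the spraying source tries a few common passwords against many accounts, so a single IP accumulating failed logins across an unusually wide set of usernames stands out. A minimal sketch of that idea follows – the log-record shape and threshold here are hypothetical, not any product’s schema.

    # Toy password-spray detector: a spraying source fails against MANY
    # distinct accounts, unlike a brute-forcer hammering one account.
    # The log-record format and threshold are hypothetical.
    from collections import defaultdict

    def find_spray_sources(failed_logins, min_distinct_users=20):
        users_per_ip = defaultdict(set)
        for event in failed_logins:
            users_per_ip[event["src_ip"]].add(event["username"])
        return [ip for ip, users in users_per_ip.items()
                if len(users) >= min_distinct_users]

    failed = [{"src_ip": "203.0.113.7", "username": f"user{n}"}
              for n in range(50)]                      # one IP, 50 accounts
    failed += [{"src_ip": "198.51.100.9", "username": "admin"}] * 30
    print(find_spray_sources(failed))                  # -> ['203.0.113.7']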

Many businesses may be tempted to take a “business as usual” stance, but there is growing evidence that, as nation state cyber forces square off, being downwind of a festering cyberwar inevitably exposes organizations to collateral damage.

As things heat up, organizations can expect attacks to shift from data exfiltration to data destruction and for adversarial tooling to grow in sophistication as they expose advanced tools and techniques, such as zero-day exploits, in order to gain a temporary advantage on the cyber battlefield.

Against this backdrop, corporate security teams and CISOs should focus on the following areas:

  1. Pivot SOC teams from daily worklist and ticket queue response to an active threat hunting posture. As state-sponsored attackers escalate to more advanced tools and break out cherished exploits, some attacks will become more difficult to pick up with existing signature and payload-based threat detection systems. Consequently, SOC teams will need to spend more time correlating events and logs, and hunting for new attack sequences.
  2. Prepare incident responders to investigate suspicious events earlier and to mitigate threats faster. As attackers move from exfiltration to destruction, a timely response becomes even more critical.
  3. Review the organization’s back-up strategy for all critical business data and business systems, and verify their recoverability. As the saying goes, a back-up is only as good as its last recovery. This provides continuity in the event that ransomware actors no longer respond to payment and your data would otherwise be unrecoverable.
  4. Update your business response plan and practice disaster recovery to build your recovery muscle memory. Plan for new threat vectors and rapid destruction of critical business systems, both internal and third-party.
  5. Double-check the basics and make sure they’re applied everywhere. Since so many successful attack vectors still rely on social engineering and password guessing, use anti-phishing and multi-factor authentication (MFA) as front-line defenses for the cyberwar. Every privileged account throughout the organization, and every account entrusted to “trusted” supplier access, should be using MFA by default.
  6. Engage directly with your preferred security providers and operationalize any new TTPs and indicators associated with Middle Eastern attack operators that they can share with you. Make sure that your hunting tools account for the latest threat intelligence and are capable of alerting the right teams should a threat surface.
  7. For organizations that have adopted cyber-insurance policies to cover business threats that cannot be countered with technology, double-check which “acts of war” are covered.

While implementing the above advice will place your organization on a better “cyberwar footing”, history shows that even well-resourced businesses targeted by Iranian state-sponsored groups fall victim to these attacks. Fortunately, there’s a silver lining in the storm clouds. Teaming up in-house security teams with public cloud providers puts companies in a much better position to respond to and counter such threats because doing so lets them leverage the massively scalable capabilities of the cloud provider’s infrastructure and the depth of security expertise from additional responders. For this reason, organizations should consider which critical business systems could be duplicated or moved for continuity and recovery purposes to the cloud, and in the process augment their existing on-premises threat response.

-- Gunter Ollmann

First Published: SecurityWeek - July 2, 2019

Wednesday, January 9, 2019

Hacker History III: Professional Hardware Hacker

Following on from my C64 hacking days, but in parallel to my BBS Hacking, this final part looks at my early hardware hacking and creation of a new class of meteorological research radar...

Ever since that first C64 and through the x86 years, I’d been hacking away – mostly software; initially bypassing copy-protection, then game cracks and cheats, followed by security bypasses and basic exploit development.

Before bug bounty programs were invented in the 2010’s, as early as 1998 I used to say the best way to learn and practice hacking skills was to target porn sites. The “theory” being that they were constantly under attack, tended to have the best security (yes, even better than the banks) and, if you were ever caught, the probability of appearing in court and having to defend your actions in front of a jury was practically nil – and the folks that ran and built the sites would be the first to tell you that.

In the mid-to-late 1980’s, following France’s 1985 bombing and sinking of the Rainbow Warrior in New Zealand, if you wanted to learn to hack and not worry about repercussions – any system related to the French Government was within scope. It was in that period that war-dialing and exploit development really took off and, in my opinion, the professional hacker was born – at least in New Zealand it was. Through 1989-1991 I had the opportunity to apply those acquired skills in meaningful ways – but those tales are best not ever written down.

Digital Radar

Easily the most fun hardware hacking I’ve ever done or been involved with ended up being the basis for my post-graduate research and thesis. My mixed hardware hacking and industrial control experience set me up for an extraordinary project as part of my post-graduate research and eventual Master’s in Atmospheric Physics.

I was extremely lucky:
  1. The first MHz digitizer cards were only just hitting the market
  2. PC buses finally had enough speed to handle MHz digitizer cards
  3. Mass storage devices (i.e. hard drives) were finally reaching an affordable capacity/price
  4. My supervisor was the Dean of Physics and had oversight of all departments’ “unused budgets”
  5. Digital radar had yet to be built

My initial mission was to build the world’s first digital high-resolution vertically pointing radar and to use it to prove or disprove the “Seeder-feeder mechanism of orographic rainfall”.

Taking a commercial analogue X-band marine radar – 25 kilowatts, a 50-mile range, and a resolution measured in tens of meters – and converting it into a digital radar with an over-sampled resolution of 3.25 cm out to a range of 10 km was the start of the challenge – but it was successfully delivered nevertheless. That first radar was mounted on the back of a 4x4 Toyota truck – which was great at getting to places no radar had been before. Pointing straight up was interesting – and served its purpose of capturing the Seeder-feeder mechanism in operation – but there was room for improvement.
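
For the curious, the range-bin spacing of a digitized radar falls out of simple arithmetic: successive samples of the return sit c/(2·fs) meters apart in range, the factor of two accounting for the pulse’s out-and-back path. A back-of-envelope helper – the sample rates below are illustrative placeholders, not the actual digitizer spec:

    # Back-of-envelope range-bin spacing for a digitized radar return.
    # Sample rates below are illustrative, not the actual hardware spec.
    C = 299_792_458.0  # speed of light, m/s

    def range_bin_m(sample_rate_hz):
        """Distance between successive range bins: c / (2 * fs)."""
        return C / (2.0 * sample_rate_hz)

    for fs in (20e6, 100e6, 500e6):
        print(f"{fs / 1e6:>5.0f} MHz digitizer -> {range_bin_m(fs):.2f} m bins")

The 3.25 cm figure above is an over-sampled bin spacing – finer than the pulse-limited resolution – which is how the analogue set’s tens-of-meters resolution could be subdivided so aggressively.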

Back at the (family) factory, flicking through pages of operation specification tables for electric motors (remember – pre-Internet/pre-Google) and harnessing the power of MS-DOS-based AutoCAD, I spec'ed out and designed a mounting mechanism for making the radar scan the sky like a traditional meteorological radar – but one that could operate in 80 mph winds, at high altitude, in the rain. Taking a leaf out of my father’s design book – it was massively over-engineered ;-)

Home for many months - the mobile high resolution radar + attached caravan. Circa 1994.

This second radar was mounted to an old tow-able camper-van. It was funny because, while the radar would survive 80+ mph winds, a gust of 50+ mph would have simply blown over the camper-van (and probably down the side of a hill or over a cliff). Anyhow, that arrangement (and the hacks it took to get working) resulted in a few interesting scientific advances:
  • Tracking bumblebees. Back in 1994, while GPS was a thing, it didn’t have very good coverage in the southern hemisphere and, thanks to the US military’s Selective Availability, its positioning resolution was very poor. So, in order to work out a precise longitude and latitude for the radar system, it was back to ancient ways: tracking the sun. I had code that ran the radar in passive mode, scanned horizontally and vertically until it found that big microwave in the sky, and tracked its movements – and from there determined the radar’s physical location (a toy sketch of the underlying sun geometry follows this list). (Un)fortunately, through a mistake in my programming that left the radar emitting its 25 kW load, I found it could sometimes lock on and track bright blips near ground level. Through some investigation of that poor coding, I realized I’d managed to build a radar tracking system for bumblebees (since bumblebees were proportional in size to the wavelength and the over-sampled bin size, they were highly reflective and outshone the sun).
  • Weather inside valleys. The portability of the camper-van and the high resolution of the radar also meant that, for the first time ever, it was possible to monitor and scientifically measure weather phenomena within complex mountain valley systems. Old long-range radars, with resolutions measured in thousands of cubic meters per pixel, had only observed weather events above the mountains. Now it was possible to digitally observe weather events below that, inside valleys and between mountains, at bumblebee resolution.
  • Digital contrails. Another side-effect of the high resolution digital radar was its ability to measure water density of clouds even on sunny days. Sometimes those clouds were condensation trails from aircraft. So, with a little code modification, it became possible to identify contrails and follow their trails back to their root source in the sky – often a highly reflective aircraft – opening up new research paths into tracking stealth aircraft and cruise missiles.
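
As a flavor of the sun-tracking geometry behind the first bullet: the sun’s elevation at any moment follows from its declination (a function of the date) and its hour angle (a function of the time), given a latitude. Observe the sun’s actual angles with the radar and the relationship can be inverted – e.g. scanning over candidate latitudes – to recover where the truck is parked. A toy forward model using the standard low-precision approximations (good to roughly a degree, which was ample for siting a truck-mounted radar):

    # Toy solar-elevation model: declination + hour angle -> elevation.
    # Standard low-precision formulas; invert numerically (scan candidate
    # latitudes against observed sun angles) to recover position, which is
    # effectively what the radar's passive sun-tracking mode enabled.
    import math

    def solar_elevation_deg(lat_deg, day_of_year, solar_hour):
        decl = 23.44 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
        hour_angle = 15.0 * (solar_hour - 12.0)   # degrees from solar noon
        lat, dec, ha = map(math.radians, (lat_deg, decl, hour_angle))
        sin_el = (math.sin(lat) * math.sin(dec)
                  + math.cos(lat) * math.cos(dec) * math.cos(ha))
        return math.degrees(math.asin(sin_el))

    # Mid-January solar-noon elevation at 43.5 degrees south:
    print(f"{solar_elevation_deg(-43.5, 15, 12.0):.1f} degrees above horizon")
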
It was a fascinating scientific and hacking experience. If you’ve ever stood in a doorway during a heavy rainfall event and watched a curtain of heavier rainfall weave its way slowly down the road and wondered at the physics and meteorology behind it, here was a system that digitally captured that event from a few meters above the ground, past the clouds, through the melting layer, and up to 10 km in the air – and helped reset and calibrate the mathematical models still used today for weather forecasting and global climate modeling.

By the end of 1994 it was time to wrap up my thesis, leave New Zealand, head off on my Great OE, and look for full-time employment in some kind of professional capacity.


When I look back at what led me to a career in Information Security, the 1980's hacking of protected C64 games, the pre-Internet evolution of the BBS and its culture of building and collaboration, and the hardware hacking and construction of a technology that was game-changing (for its day) - they're the three things (and time periods) that remind me of how I grew the skills and developed the experience to tackle any number of subsequent Internet security problems - i.e. hack my way through them. I think of it as a unique mix. When I meet other hackers whose passions likewise began in the 1980's or early 1990's, it's clear that everyone has their own equally exciting and unique journey - which makes it all the more interesting.

I hope the tale of my journey inspires you to tell your own story and, for those much newer to the scene, proves that us older hands probably didn't really have a plan on how we got to where we are either :-)

This is PART THREE of THREE.

PART ONE (C64 Hacking)  and PART TWO (BBS Hacking) are available to read too.

--Gunter


Tuesday, January 8, 2019

Hacker History II: The BBS Years

Post-C64 Hacking (in Part 1 of Hacker History)... now on to Part 2: The BBS Years

Late 1986 (a few months before I started my first non-newspaper-delivery and non-family-business job – working at a local supermarket) I launched my first bulletin board system (BBS). I can’t remember the software that I was running at the time, but it had a single 14k dial-up facility running on all the extra C64 equipment I’d been “gifted” by friends wanting faster/always-on access to my latest cheats and hacks.

The premise behind the BBS was two-fold: I wanted to learn something new (and hacking together a workable and reliable BBS system in the mid-80’s was a difficult enough challenge), and I saw it as a time-saving distribution channel for my cheats/hacks; others could dial in and download them themselves, instead of me messing around with stacks of floppy discs etc.

At some point in 1986 I’d also saved enough money to buy an IBM PC AT clone – a whopping 12MHz 80286 PC, complete with Turbo button and a 10Mb hard drive. I remember speccing out the PC with the manufacturer. They were stunned that a kid could afford his own PC AT, that he planned to keep it in his bedroom, and that he wanted an astounding 16k of video memory (“what do you need that for? Advanced ACAD?”)!

By 1989 the BBS had grown fairly large, with a couple of hundred regular members – several paying monthly subscription fees – but the stack of C64’s powering the BBS was showing its age and, in the meantime, my main computing had moved down the PC path from 286, to 386, and on to a brand-spanking-new 486.

It was time to move on from C64 and go full-PC – both with the BBS and the hacks/cheats I was writing.

So in 1990, over the Summer/Christmas break from University, I set about shifting the BBS over to a (single) PC – running Remote Access, with multiple dial-in lines (14.4k for regular users and 28.8k for subscribers).


The dropping of the C64 and the move to a fully-fledged x86 PC resulted in a few memorable times for me:
  • BBS’s are like pets. Owning and operating a BBS is a lot like looking after an oversized pet that eats everything in its path and has destructive leanings; they’re expensive and something is always going wrong. From the mid-80’s to mid-90’s (pre-“Internet”) having a BBS go down would be maddening to all subscribers. Those subscribers would be great friends when things were running, or act like ungrateful modern-day teenagers being denied “screen-time” if they couldn’t dial-in for more than a couple of days. Keeping a BBS running meant constant tinkering under the covers – learning the intricacies of PC hardware architecture, x86 assembly, live patching, memory management, downtime management, backup/recovery, and “customer management”. The heady “good-old days” of PC development.
  • International Connectivity. With me in University and too often referred to as the “student that knows more about computers than the campus IT team”, in 1991 I added Fidonet and Usenet support to my BBS. There had been a few BBS’s in New Zealand before mine to offer these newsgroups, but they were very limited (i.e. a small number of groups) because they were reliant upon US dial-up for synching (which was damned expensive!). My solution was to use a spare modem in the back of a University lab PC to connect semi-permanently to my BBS. From there my BBS used the University’s “Internet” undersea cable connectivity to download and synch all the newsgroups. Technically I guess you could call it my first “backdoor” hacking experience – which ended circa 1993 after being told to stop as (by some accounts) the BBS was consuming, at peak, a third of the entire country’s academic bandwidth.
  • First Security Disclosure. Setting up Remote Access (RA) was an ordeal. It was only a week later – Christmas Eve 1990 – that I publicly disclosed my first security vulnerability (with a self-developed patch); an authentication bypass to the system that controlled what games or zones a subscriber could access. I can’t remember how many bugs and vulnerabilities I found in RA, QEMM, MS-DOS, modem drivers, memory managers, and the games that ran on RA over those years. Most required some kind of assembly instruction patch to fix.
  • Mailman and Sysop. Ever since those first BBS days in 1986, I’d felt that email (or Email, or E-Mail) would be the future for communications. The tools and skills needed for managing a reliable person-to-person or person-to-group communication system had to be built and learned – as too did the management of trust and the application of security. Some BBS operators loved being Sysops (System Operators – i.e. Admins) because they could indulge their voyeuristic tendencies. I hated BBS’s and Sysops that operated that way and it became an early mission of mine to figure out ways of better protecting subscriber messages.

That fumbling about and experimenting with PC hardware, MS-DOS, and Windows at home and with the Bulletin Board System, coupled with learning new systems at University such as DEC Alpha, OpenVMS, Cray OS, and HP-UX in the course of my studies, and the things I had to piece together and program at my parents’ factories (e.g. PLC’s, ICS’s, RTU’s, etc.), all combined to give me a unique perspective on operating systems and hardware hacking.

By the time I’d finished and submitted my post-grad research thesis, it was time to tear down the BBS, sell all my computers and peripherals, and leave New Zealand for my Great OE (Overseas Experience) at the end of 1994.

This is PART TWO of THREE.

PART ONE (C64 Hacking) was posted yesterday and PART THREE (Radar Hacking) will be on Wednesday.

Thursday, January 15, 2015

A Cancerous Computer Fraud and Abuse Act

As I read through multiple postings covering the proposed Computer Fraud and Abuse Act amendments, such as the ever-insightful writing of Rob Graham in his Obama's War on Hackers or the EFF's analysis, and the deluge of Facebook discussion threads where dozens of my security-minded friends shriek at the damage passing such an act would bring to our industry, I can't help but think that surely it's an early April Fools joke.

The current draft/proposal for the Computer Fraud and Abuse Act amendments reads terribly and, in Orin Kerr's analysis, is "awkward".

The sentiment behind the act appears to be a lashing-out response to the evils recently perpetrated by hackers - such as the mega breaches, DDoS's, password dumps, etc. - without any understanding of how the "good guys" do their work and operate at the forefront of stopping these evil-doers.

For those non-security folks, the best analogy I can think of is that a bunch of politicians have been reading how attackers are using knives to cut and stab people in their criminal endeavors, and that without knives those crimes would not have happened. Therefore, to prevent knife-based crime, they legislate that carrying a knife, manufacturing a knife, or using a knife to cut flesh is punishable with 20 years in prison.

Unfortunately, the legislation is written so poorly and generically that the definition of "knife" includes butter knives and scalpels - and overnight the medical profession of surgery becomes illegal. Even the poor souls that have been stabbed by a criminal can no longer be saved by a scalpel-wielding doctor.

That, in a nutshell, is what many feel the impact of this act will be on the Internet security industry. Penetration testing, bug hunting, and vulnerability research will be caught by this and, as Rob Graham postulates, there is reason to speculate that even posting a link to a vulnerability could land both the poster and the clicker on the wrong side of the law.

One of the budding industries that will feel this the most will be threat analysis and companies/services that focus on early alerting and attribution of cybercrime. And that in my mind is particularly ominous.

Now, with that all said, is the act salvageable? Maybe - but it'll need a lot of work. I've heard a few folks argue that this US proposal is very similar to the UK's Computer Misuse Act of 1990. I mostly agree that a parallel act in the US would be helpful in dealing with the current plague of cybercrime, but what's been proposed thus far has the polish and refinement of a rusty piece of barbed wire.

The only organization that'll benefit from the act as proposed right now is the US' privatized incarceration services.

-- Gunter

Monday, October 6, 2014

The Pillars of Trust on the Internet

As readers may have seen recently, I've moved on from IOActive and joined NCC Group. Here is my first blog under the new company... first published September 15th 2014...

The Internet of today in many ways resembles the lawless Wild West of yore. There are the land-rushes as corporations and innovators seek new and fertile grounds, over yonder there are the gold-diggers panning for nuggets in the flow of big data, and crunching underfoot are the husks of failed businesses and discarded technology.

For many years various star-wielding sheriffs have tried to establish a brand of law and order over the Internet, but for every step forward a menagerie of robbers and scoundrels have found new ways to pick-pocket and harass those trying to earn a legitimate crust. Does it really have to continue this way?

Over the years I’ve seen many technologies invented and embraced with the goal of thwarting the attackers and miscreants that inhabit the Internet.

I’m sure I’m not alone in the feeling that with each new threat (or redefinition of a threat) that comes along someone volunteers another “solution” that’ll provide temporary relief; yet we continue to find ourselves in a never-ending swatting match with the tentacles of cyber crime.

With so many threats to be faced and a slew of jargon to wade through, it shouldn’t be surprising to readers that most organisations (and their customers) often appear baffled and bewildered when they become victims of cyber crime – whether that is directly or indirectly.

While the newspapers and media outlets may discuss the scale of stolen credit cards from the latest batch of mega-breaches and strive to provide common sense (and utterly ignored) advice on password sophistication and how to be mindful of what we’re clicking on, the dynamics of the attack are easily glossed over and subsequently lost to those that are in the best position to mitigate the threat.

The vast majority of successful breaches begin with deception, and depend upon malware. The deception tactics usually take the form of social engineering – such as receiving an email pretending to be an invoice from a trusted supplier – with the primary objective being the installation of a malicious payload.

The dynamics of the trickery and the exploits used to install the malware are ingeniously varied but, all too often, it’s the capabilities of the malware that dictate the scope and persistence of the breach.

While there exists a plethora of technologies that can be layered one atop another like some gargantuan wedding cake to combat each tactic, tool, or subversive technique the cyber criminal may seek to employ in their exploitation of a system, doing so successfully is as difficult as attempting to stack a dozen feral cats – and just as likely to leave you scratched and scarred.

In the past I’ve publicly talked about the paradigm change in the way organisations have begun to approach breaches… to accept that they will happen repeatedly and to prioritise the rapid (and near instantaneous) detection and automated remediation of the compromised systems, rather than waste valuable cycles analysing yesterday’s malware or exploits, or churning over attribution possibilities.

But I think there’s a second paradigm change underway – one which doesn’t attempt to change the entire Internet, but instead focuses on mitigating the deception tactics used by the attackers at the root and creating a safe and trusted environment to conduct business within.

I think the time has come to build (rather than give lip-service to) a safe corner of the Internet and expand from there. It’s the reason I’ve come and joined NCC Group. It is my hope and aspiration that the Domain Services division will provide that anchor point, that Rock of Gibraltar, that technical credibility and wherewithal necessary to regain trust in doing business over the Internet once again.

A core tenet of building a trusted and safe platform for business is to start with the core building blocks of the Internet. The Domain Name System (DNS) and domain registration lie at the very heart of the Internet and yet, from a security perspective, they’ve been largely neglected as a means of neutering the most common and vile social engineering vectors of attack.

Couple tight control of domain registration and DNS with perpetual threat monitoring and scanning, merge it with vigilant policing of secure configuration policies and best practices (not some long-in-the-tooth consensus-strained minimum standards of a decade ago), and you have the pillars necessary to elevate a corner of the Internet beyond the reach of the general lawlessness that’s plaguing business today. And that’s before we get really innovative.

It wasn’t guns or graves that tamed the West of yore, it was the juggernaut of technology that began with railway lines and the telegraph. The mechanisms for restoring business trust in the Internet are now in play. Exciting times lie ahead.

Thursday, July 31, 2014

Smart homes still not "smarter than a fifth-grader"

Smart Home technologies continue to make their failures headline news. Only yesterday the BBC ran the story "Smart home kit proves easy to hack, says HP study", laying out a litany of vulnerabilities and weaknesses uncovered in popular internet-connected home gadgetry by HP's Fortify security division. If nothing else the story proves that household vulnerabilities are now worthy of attention - no matter how late HP and the BBC are to the party.


As manufacturers try to figure out how to cram internet connectivity into their (formerly) inanimate appliances and turn them into something you can manage from your iPad while flying from Atlanta to Seattle over the in-air WiFi system, you've got to wonder "do we deserve this?"

I remember a study done several years ago about consumer purchasing of Blu-ray players. The question seeking an answer at the time was why some brands of player were outselling others when they were all at the same price point and did the same thing. Was brand loyalty or familiarity a critical factor? The answer turned out to be much simpler. The Blu-ray player with the highest sales simply had a longer list of "functions" than the competitors. If one player's box listed 50 carefully bulleted pieces of techno-jargon and another listed 55 - then obviously the latter had to be better, even if the consumer had no understanding of what more than a dozen of those bullets even meant.

In many ways both the manufacturers and consumers of Smart Home technologies continue to fall into that same trap. Choosing a new LCD HiDef TV is mostly about long lists of word-soup techno-babble, but that babble now extends into all the new features your replacement TV can perform via the Internet. How did we ever survive before we could issue a command via the TV (hidden 5 levels deep in menus and after 3 agonizing minutes of waiting for the various apps to initialize) in order to make the popcorn machine switch from unsalted to salted butter?

For as much thought as goes into the buying decision over one long list of features against another, the manufacturers of Smart Home devices appear to expend about the same effort on securing the features they're trying to cram in. That is to say, very little.

In some ways it's not even the product engineering teams that are at fault. It's more than likely they've been honing their product for 20+ years from an electrical engineering perspective. But now they've been forced to find some way of wedging a TCP/IP stack into the device and construct a mobile Web app for its remote management. They aren't software engineers, they certainly aren't cyber-security engineers, and you can bet they've never had to adhere to a Security Development Lifecycle (SDL).

How do I characterize the state of Smart Home device security today? I think Richard O'Brien summed it up best in The Rocky Horror Show - let's do the Time Warp again!!! The overall state of Smart Home security today is as if we've jumped back 20 years in time to Windows 95.

Wednesday, March 26, 2014

A Bigger Stick To Reduce Data Breaches

On average I receive a postal letter from a bank or retailer every two months telling me that I’ve become the unfortunate victim of a data theft or that my credit card is being re-issued to protect against future fraud. When I quiz my friends and colleagues on the topic, it would seem that they too suffer the same fate on a recurring schedule. It may not be that surprising to some folks. 2013 saw over 822 million private records exposed, according to the folks over at DatalossDB – and that’s just the ones that were disclosed publicly.

It’s clear to me that something is broken and it’s only getting worse. When it comes to the collection of personal data, too many organizations have a finger in the pie and are ill-equipped (or ill-prepared) to protect it. In fact I’d question why they’re collecting it in the first place. All too often these organizations – of which I’m supposedly a customer – are collecting personal data about “my experience” doing business with them and are hoping to figure out how to use it to their profit (effectively turning me into a product). If these corporations were some bloke visiting a psychologist, they’d be diagnosed with a hoarding disorder. For example, consider what criteria the DSM-5 diagnostic manual uses to identify the disorder:

  • Persistent difficulty discarding or parting with possessions, regardless of the value others may attribute to these possessions.
  • This difficulty is due to strong urges to save items and/or distress associated with discarding.
  • The symptoms result in the accumulation of a large number of possessions that fill up and clutter active living areas of the home or workplace to the extent that their intended use is no longer possible.
  • The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
  • The hoarding symptoms are not due to a general medical condition.
  • The hoarding symptoms are not restricted to the symptoms of another mental disorder.

Whether or not the organizations hoarding personal data know how to profit from it, it’s clear that even the biggest of them are increasingly inept at protecting it. The criminals that are pilfering the data certainly know what they’re doing. The gray market for identity laundering has expanded phenomenally since I talked about it at Blackhat in 2010.

We can moan all we like about the state of the situation now, but we’ll be crying in the not-too-distant future when, statistically, we progress from being victims of data loss to being victims of (unrecoverable) fraud.

The way I see it, there are two core components to dealing with the spiraling problem of data breaches and the disclosure of personal information. We must deal with the “what data are you collecting and why?” questions, and incentivize corporations to take much more care protecting the personal data they’ve been entrusted with.

I feel that the data hoarding problem can be dealt with fairly easily. At the end of the day it’s about transparency and the ability to “opt out”. If I were to choose a role model for making a sizable fraction of this threat go away, I’d look to the basic components of the UK’s Data Protection Act as the cornerstone of a solution – especially here in the US. I believe the key components of personal data collection should encompass the following:

  • Any organization that wants to collect personal data must have a clearly identified “Data Protection Officer” who not only is a member of the executive board, but is personally responsible for any legal consequences of personal data abuse or data breaches.
  • Before data can be collected, the details of the data sought for collection, how that data is to be used, how long it would be retained, and who it is going to be used by, must be submitted for review to a government or legal authority. I.e. some third-party entity capable of saying this is acceptable use – a bit like the ethics boards used for medical research etc.
  • The specifics of what data a corporation collects and what they use that data for must be publicly visible. Something similar to the nutrition labels found on packaged foods would likely be appropriate – so the end consumer can rapidly discern how their private data is being used.
  • Any data being acquired must include a date of when it will be automatically deleted and removed.
  • At any time any person can request a copy of any and all personal data held by a company about themselves.
  • At any time any person can request the immediate deletion and removal of all data held by a company about themselves.

If such governance existed for the collection and use of personal data, then the remaining big item is enforcement. You’d hope that the morality and ethics of corporations would be enough to ensure they protected the data entrusted to them with the vigor necessary to fight off the vast majority of hackers and organized crime, but this is the real world. Apparently the “big stick” approach needs to be reinforced.

A few months ago I delved into how the fines being levied against organizations that had been remiss in doing all they could to protect their customers’ personal data should be bigger and divvied up. Essentially I’d argue that half of the fine should be pumped back into the breached organization and used for increasing their security posture.

Looking at the fines being imposed upon the larger organizations (that could have easily invested more in protecting their customers’ data prior to their breaches), the amounts are laughable. No noticeable financial pain occurs, so why should we be surprised if (and when) it happens again? I’ve become a firm believer that the fines businesses incur should be based upon a percentage of valuation. Why should a twenty-billion-dollar business face the same fine for losing 200,000,000 personal records as a ten-million-dollar business does for losing 50,000 personal records? If the fine were something like two percent of valuation, I can tell you that the leadership of both companies would focus more firmly on the task of keeping your data and mine much safer than they do today.
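
A quick worked example shows why percentage-of-valuation fines change the incentive. The flat fine and the two-percent rate below are illustrative numbers only:

    # Illustrative arithmetic: a flat fine vs. a fine set as a percentage
    # of company valuation. All figures are invented for the example.
    def valuation_fine(valuation_usd, pct=2.0):
        return valuation_usd * pct / 100.0

    companies = {
        "$20B business (200,000,000 records lost)": 20_000_000_000,
        "$10M business (50,000 records lost)": 10_000_000,
    }
    for name, valuation in companies.items():
        print(f"{name}: flat fine $500,000 vs. "
              f"2% of valuation = ${valuation_fine(valuation):,.0f}")

Two percent stings the ten-million-dollar shop ($200,000) and genuinely hurts the twenty-billion-dollar one ($400,000,000) – in both cases enough to concentrate the board’s attention, which a one-size-fits-all penalty never will.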

-- Gunter Ollmann

First Published: IOActive Blog - March 26, 2014

Thursday, February 6, 2014

An Equity Investor’s Due Diligence

Information technology companies constitute the core of many investment portfolios nowadays. With so many new startups popping up, and some highly visible IPO’s and acquisitions by public companies egging things on, many investors are clamoring for a piece of the action and looking for new ways to rapidly qualify or disqualify an investment; particularly so when it comes to the hottest of hot investment areas – information security companies.

Over the years I’ve found myself working with a number of private equity investment firms – helping them to review the technical merits and implications of products being brought to the market by new security startups. In most cases it’s not until the B or C investment rounds that the money being sought by the fledgling company starts to get serious to the investors I know. If you’re going to be handing over money in the five to twenty million dollar range, you’re going to want to do your homework on both the company and the product opportunity.

Over the last few years I’ve noted that a sizable number of private equity investment firms have built into their portfolio review the kind of technical due diligence traditionally associated with the formal acquisition processes of Fortune-500 technology companies. It would seem that the $20,000 to $50,000 price tag for a quick-turnaround technical due diligence report is proving to be a valuable investment in a somewhat larger investment strategy.

When it comes to performing the technical due diligence on a startup (whether it’s a security or social media company for example), the process tends to require a mix of technical review and tapping past experiences if it’s to be useful, let alone actionable, to the potential investor. Here are some of the due diligence phases I recommend, and why:

  1. Vocabulary Distillation – For some peculiar reason new companies go out of their way to invent their own vocabulary as descriptors of their value proposition, or they go to great lengths to disguise the underlying processes of their technology with what can best be described as word-soup. For example, a “next-generation big-data derived heuristic determination engine” can more than adequately be summed up as “signature-based detection”. Apparently using the word “signature” in your technology description is frowned upon, so the product management folks avoid using the word (however applicable it may be). Distilling the word soup is a key component of being able to compare apples with apples.
  2. Overlapping Technology Review – Everyone wants to portray their technology as unique, ground-breaking, or next generation. Unfortunately, when it comes to the world of security, next year’s technology is almost certainly a progression of the last decade’s worth of invention. This isn’t necessarily bad, but it is important to determine the DNA and hereditary path of the “new” technology (and subcomponents of the product the start-up is bringing to market). Being able to filter through the word-soup of the first phase and determine whether the start-up’s approach duplicates functionality from IDS, AV, DLP, NAC, etc. is critical. I’ve found that many start-ups position their technology (i.e. advancements) against antiquated and idealized versions of these prior technologies. For example, simplifying desktop antivirus products down to signature engines – while neglecting things such as heuristic engines, local-host virtualized sandboxes, and dynamic cloud analysis.
  3. Code Language Review – It’s important to look at the languages that have been employed by the company in the development of their product. Popular rapid prototyping technologies like Ruby on Rails or Python are likely acceptable for back-end systems (as employed within a private cloud), but are potential deal killers to future acquirer companies that’ll want to integrate the technology with their own existing product portfolio (i.e. they’re not going to want to rewrite the product). Similarly, a C or C++ implementation may not offer the flexibility needed for rapid evolution or integration into scalable public cloud platforms. Knowing which development technology has been used where and for what purpose can rapidly qualify or disqualify the strength of the company’s product management and engineering teams – and help orientate an investor on future acquisition or IPO paths.
  4. Security Code Review – Depending upon the size of the application and the due diligence period allowed, a partial code review can yield insight into a number of increasingly critical areas – such as the stability and scalability of the code base (and consequently the maturity of the development processes and engineering team), the number and nature of vulnerabilities (i.e. security flaws that could derail the company publicly), and the effort required to integrate the product or proprietary technology with existing major platforms.
  5. Does it do what it says on the tin? – I hate to say it, but there’s a lot of snake oil being peddled nowadays. This is especially so for new enterprise protection technologies. In a nutshell, this phase focuses on the claims being made by the marketing literature and product management teams, and tests both the viability and technical merits of each of them. Test harnesses are usually created to monitor how well the technology performs in the face of real threats – ranging from the samples provided by the company’s user acceptance testing (UAT) team (i.e. the stuff they guarantee they can do), through to common hacking tools and tactics, and on to a skilled adversary with key domain knowledge.
  6. Product Penetration Test – Conducting a detailed penetration test against the start-up’s technology, product, or service delivery platform is always strongly recommended. These tests tend to unveil important information about the lifecycle-maturity of the product and the potential exposure to negative media attention due to exploitable flaws. This is particularly important for consumer-focused products and services, because they are the most likely to be uncovered and exposed by external security researchers and hackers, and any public exploitation can easily set the start-up back a year or more in brand equity alone. For enterprise products (e.g. appliances and cloud services) the hacker threat is different; the focus should be more upon what vulnerabilities could be introduced into the customer’s environment and how much effort would be required to re-engineer the product to meet security standards.
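
To make phase 5 concrete, here is a minimal sketch of the kind of test harness I’m describing. Everything in it is hypothetical – the directory layout, the tier names, and the detect() stub – and in a real engagement the stub is wired to the start-up’s actual product API or appliance.

```python
import json
from pathlib import Path

# Hypothetical sample corpora, tiered from vendor-guaranteed samples up to
# samples curated to mimic a skilled, domain-aware adversary.
CORPORA = {
    "vendor_uat": Path("samples/vendor_uat"),
    "common_tools": Path("samples/common_tools"),
    "skilled_adversary": Path("samples/skilled_adversary"),
}

def detect(sample: bytes) -> bool:
    """Stand-in for the product under test - replace with a call to the
    vendor's API, appliance, or agent. Returns True if the threat is caught."""
    return False

def run_harness() -> dict:
    """Replay every sample tier through the product and tally detections."""
    results = {}
    for tier, path in CORPORA.items():
        samples = [p for p in path.glob("*") if p.is_file()]
        caught = sum(1 for s in samples if detect(s.read_bytes()))
        results[tier] = {"total": len(samples), "detected": caught}
    return results

if __name__ == "__main__":
    print(json.dumps(run_harness(), indent=2))
```

The interesting output isn’t the raw detection rate on the vendor’s own samples – it’s how quickly the numbers fall away as the tiers become more adversarial.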

Obviously there’s a lot of variety in the technical capabilities of the various private equity investment firms (and private investors). Some have people capable of sifting through the marketing hype and discerning the actual intellectual property powering the start-up’s technology – but many do not. Regardless, in working with these investment firms and performing the technical due diligence on their potential investments, I’ve yet to encounter a situation where they didn’t “win” in some way or other. A particular favorite of mine is when, following a code review and penetration test that unveiled numerous serious vulnerabilities, the private equity firm was still intent on investing in the start-up but was able to use the report to negotiate much better buy-in terms with the existing investors – gaining a larger percentage of the start-up for the same amount.

-- Gunter Ollmann

First Published: IOActive Blog - February 6, 2014

Saturday, December 7, 2013

Divvy Up the Data Breach Fines

There are now a bunch of laws that require companies to publicly disclose a data breach and provide guidance to the victims whose data was lost. In a growing number of cases there are even fines to be paid for very large, or very public, or very egregious data breaches and losses of personal information.

I often wonder what happens to the money once the fines have been paid. I'm sure there's some formula or stipulation as to how the monies are meant to be divided up and to which coffers they're destined to fill. But, apart from paying for the bodies that brought forth the case for a fine, is there any consistency to where the money goes and, more to the point, does that money get applied to correcting the problem?

In some cases I guess the fine(s) are being used to further educate the victims on how to better protect themselves, or to go towards third-party credit monitoring services. But come on, apart from a stinging slap on the wrist for the organization that was breached, do these fines actually make us (or anyone) more secure? In many cases the organization that got breached is treated like the villain – it was their fault that some hackers broke in and stole the data (it reminds me a little of the "she dressed provocatively, so deserved to be raped" arguments). I fail to see how the present "make 'em pay a big fine" culture helps to prevent the next one.

A couple of years ago, during some MAAWG conference or other, I remember hearing a tale of how Canada was about to bring out a new law affecting the way fines were actioned against organizations that had suffered a data breach. I have no idea whether those proposals were happening, about to happen, or were merely wishful thinking... but the more I've thought on the topic, the more I find myself advocating their application.

The way I envisage a change in the way organizations are fined for data breaches is very simple. Fine them more heavily than we do today – but half of the fine goes back to the breached company and must be used within 12 months to increase the information security of the company. There... it's as simple as that. Force the breached organizations to spend their money making their systems (and therefore your and my personal data) more secure!

Yes, the devil is in the detail. Someone needs to define precisely what that money can be spent on in terms of bolstering security – but I'm leaning towards investments in technology and the third-party elbow-grease to set up, tune, and make it hum.

I can see some folks saying "this is just a ploy to put more money in the security vendors' pockets!". If it's a ploy, it's hardly very transparent of me, is it? No, these organizations are victims of data breaches because their attackers are better prepared, more knowledgeable, and more sophisticated than they are. The organizations paying the fine would need to be smart about how they (forcibly) spend their money – or they'll suffer again at the hands of their attackers, have to pay more, and be forced to make wiser investments the second time round.

I've dealt with way too many of these breached organizations in my career. The story is the same each time. The IT departments know (mostly) what needs to be done to make their business more secure, but an adequate budget has never been forthcoming. A big data breach occurs; the company spends triple what it would have cost to secure the systems in the first place on forensics to determine the nature and scope of the breach; it spends another big chunk of change on legal proceedings trying to protect itself from lawsuits and limit liabilities and future fines; and then it gets lumbered with a marginal fine. The IT department gets a dollop of lucre to do the minimum to prevent the same attack from happening again, and is starved again until the next data breach.

No, I'd much sooner see the companies being fined more heavily, but with half of that wrist-slapping money being forcibly applied to securing the organization from future attacks and limiting the scope for subsequent data breaches. I defy anyone to come up with a better way of making these organizations focus on their security problems and reduce the likelihood of future data breaches.

-- Gunter Ollmann

Friday, December 6, 2013

The CISSP Badge of Security Competency

It can be a security conference anywhere around the world and, after a few beers with the attendees, you can guarantee the topic of CISSP will come up. Very rarely will it be positive. You see, CISSP has become the cockroach of the security community and it just won't die. They say that cockroaches could survive a nuclear winter... I'm pretty sure CISSP is just as resilient.

Personally, I think CISSP gets an unfair hearing. I don't see CISSP as a security competency certification (regardless of those folks who sell it or perceive it as such), rather I interpret it like a badge on a Girl Scout's sash that signifies completion of a rote task... like learning how to deliver CPR. It's a certification that reflects an understanding of the raw concepts and vocabulary, not a measure of competency. Just like the Girl Scout with the CPR badge has the potential to be a competent medic in the future, for now it's a "well done, you understand the concepts" kind of deal.

If that's the case, then why wouldn't security practitioners be lining up for their own CISSP accreditation? In a large way, it's a bit like requiring that aforementioned (and accomplished) professional medic to sit the Girl Scout CPR exam and to proudly show off her new badge afterwards. To many folks, both scenarios are likely to be interpreted as an insult. I think this is one of the reasons why the professional security practitioner community is so set against CISSP (and other security accreditations) – and causes the resultant backlash. The fact that many businesses now ask for the CISSP qualification as part of their recruitment vetting processes just adds salt to the wounds.

I see the CISSP certification as a great program for IT professionals (web developers, system administrators, backup operators, etc.) to gain the minimum level of security understanding they need to do their jobs.

Drawing once again from the CPR badge analogy, I think that everyone who works in an office should do a first aid course and be competent in CPR. It just makes sense to have that basic understanding available in a time of need. However, the purpose of gaining those skills is to keep the patient alive until a professional can arrive and take over. This is exactly how I see CISSP operating in modern IT departments.

I think that if CISSP were positioned more appropriately as an "IT health" badge of minimum competency, then much of the backlash from the security community would die down.

-- Gunter Ollmann

Thursday, June 20, 2013

FDA Safety Communication for Medical Devices

The US Food and Drug Administration (FDA) released an important safety communication targeted at medical device manufacturers, hospitals, medical device user facilities, health care IT and procurement staff, and biomedical engineers, warning of the risk of device failure due to cyberattack – such as through malware or unauthorized access to configuration settings in medical devices and hospital networks.

Have you ever gone to see a much-anticipated movie based upon an exciting book you happened to have read when you were younger, only to be sorely disappointed by what the director finally pulled together on the big screen? Well, that’s how I feel when I read this newest alert from the FDA. Actually, it’s not even called an alert… it’s a “Safety Communication”… it’s analogous to Peter Jackson deciding that his own interpretation of JRR Tolkien’s ‘The Hobbit’ wasn’t really worthy of the title, so to forestall criticism he named the movie ‘Some Dwarves and a Hobbit do Stuff’.

This particular alert (and I’m calling it an alert because I can’t lower myself to call it a safety communication any longer) is a long time coming. Almost a decade ago, my teams and I raised the red flag over the woeful security of hospital networks. Back in 2005, my then research teams raised new red flags related to the encroachment of unsecured WiFi into medical equipment. For the last couple of years, IOActive’s research team has been raising red flags over the absence of security within implantable medical devices. And then, on June 13th 2013, the FDA released a much watered-down alert whose primary recommendations and actions section simply states “[m]any medical devices contain configurable embedded computer systems that can be vulnerable to cybersecurity breaches”. It’s as if the hobbit has been interpreted as a midget with hairy feet.

Yes, I joke a little, but I am very disappointed with the status of this alert covering such an important topic.

The vulnerabilities being uncovered on a daily basis within hospital networks, medical equipment and implantable devices by professional security teams and researchers are generally more serious than outsiders give them credit for. Much of the public cybersecurity discussion as it relates to the medical field to date has been about people hacking hospital data systems for patient records and, most recently, the threat of targeted slayings of people who happen to have vulnerable implanted insulin pumps and heart defibrillators. While both are certainly possible, they’re what I would associate with fringe events.

I believe that the biggest and most likely threats lie in non-malicious actors – the tinkerers, the cyber-crooks, and the “in the wrong place at the wrong time” events. These medical systems are so brittle that even the slightest knock or tire-kicking can cause them to fail. I’ll give you some examples:

  • Wireless heart and drug monitoring stations within emergency wards that have open WiFi connections – where anyone with an iPhone searching for an Internet connection can make an unauthenticated connection and have their web browser bring up the admin portal of the station.
  • Remote surgeon support and web camera interfaces used for emergency operations brought down by everyday botnet malware because someone happened to surf the web one day and hit the wrong site.
  • Internet auditing and scanning services run internationally that encounter medical devices connected directly to the Internet through routable IP addresses – devices being used as drop-boxes by file-sharing groups (oblivious to the fact that it’s a medical device under their control).
  • Common WiFi and Bluetooth auditing tools (available for Android smartphones and tablets) identifying medical devices during simple “war driving” exercises and leaving the discovered devices in a hung state.
  • Medical staff’s iPads, without authentication or GeoIP-locking of hospital applications, that “go missing” or are borrowed by kids and have applications (and games) installed from vendor markets that conflict with the use of the authorized applications.
  • NFC-enabled smartphones and payment systems that can record, play back, and interfere with the communications of implanted medical devices.

These are really just the day-to-day noise of an Internet connected life – but one that much of the medical industry is currently ill prepared to defend against. Against an experienced attacker or someone determined to cause harm – well, it’s as one sided as a lone hobbit versus the combined armies of Middle Earth.

I will give the alert some credit though: it did clarify a rather important point that may have been a stumbling block for many device vendors in the past:

“The FDA typically does not need to review or approve medical device software changes made solely to strengthen cybersecurity.”

IOActive’s experience when dealing with a multitude of vulnerable medical device manufacturers had often been disheartening in the past. A handful of manufacturers have made great strides in securing their devices and controlling software recently – and there has been a change in the hearts and minds over the last 6 months (pun intended) as more publicity has been drawn to the topic. The medical clients we’ve been working most closely with over recent months have made huge leaps in making their latest devices more secure, and their next generation of devices will be setting the standard for the industry for years to come.

In the meantime though, there’s a tremendous amount of work to be done. The FDA’s alert is significant. It is a formal recognition of the poor state of security within the industry – providing some preliminary guidance. It’s just not quite the call to arms I’d have liked to see after so many years – but I guess they don’t want to raise too much fear, nor the ire of vendors that could face long and costly FDA re‑evaluations of their technologies. Gandalf would be disappointed.

(BTW I actually liked Peter Jackson’s rendition of The Hobbit).

-- Gunter Ollmann

First Published: IOActive Blog - June 20, 2013

Tuesday, May 7, 2013

Bypassing Geo-locked BYOD Applications

In the wake of increasingly lenient BYOD policies within large corporations, there’s been a growing emphasis upon restricting access to business applications (and data) to specific geographic locations. Over the last 18 months more than a dozen start-ups in North America alone have sprung up seeking to offer novel security solutions in this space – essentially looking to provide mechanisms for locking application usage to a specific location or distance from an office, and ensuring that key data or functionality becomes inaccessible outside these prescribed zones.

These “Geo-locking” technologies are in hot demand as organizations try desperately to regain control of their networks, applications and data.

Over the past 9 months I’ve been asked by clients and potential investors alike for advice on the various technologies and the companies behind them. There’s quite a spectrum of available options in the geo-locking space; each start-up has a different take on the situation and has proposed (or developed) a unique way of tackling the problem. Unfortunately, in the race to secure a position in this evolving security market, much of the literature being thrust at potential customers is heavy in FUD and light in technical detail.

It may be because marketing departments are riding roughshod over the technical folks in order to establish these new companies, but in several of the solutions being proposed I’ve had concerns over the scope of the security element being offered. It’s not that the approaches being marketed aren’t useful or won’t work; it’s more that they’ve defined the problem they’re aiming to solve so narrowly that they’ve developed what I can only describe as tunnel vision to the spectrum of threats organizations are likely to face in the BYOD realm.

In the meantime I wanted to offer this quick primer on the evolving security space that has become BYOD geo-locking.

Geo-locking BYOD

The general premise behind the current generation of geo-locking technologies is that each BYOD gadget will connect wirelessly to the corporate network and interface with critical applications. When the device is moved away from the location, those applications and data should no longer be accessible.

There are a number of approaches, but the most popular strategies can be categorized as follows:

  • Thick-client – A full-featured application is downloaded to the BYOD gadget and typically monitors physical location elements using telemetry from GPS or the wireless carrier directly. If the location isn’t “approved” the application prevents access to any data stored locally on the device.
  • Thin-client – A small application or driver is installed on the BYOD gadget to interface with the operating system and retrieve location information (e.g. GPS position, wireless carrier information, IP address, etc.). This application then incorporates the location information into requests to access applications or data stored on remote systems – either through another on-device application or over a Web interface.
  • Share-my-location – Many mobile operating systems include opt-in functionality to “share my location” via their built-in web browser. Embedded within the page request is a short geo-location description.
  • Signal proximity – The downloaded application or driver will only interface with remote systems and data if the wireless channel being connected to by the device is approved. This is typically tied to WiFi and nanocell routers with unique identifiers and has a maximum range limited to the power of the transmitter (e.g. 50-100 meters).

The critical problem with the first three geo-locking techniques can be summed up simply as “any device can be made to lie about its location”.

The majority of start-ups have simply assumed that the geo-location information coming from the device is correct – and have not included any means of securing the integrity of that device’s location information. A few have even tried to tell customers (and investors) that it’s impossible for a device to lie about its GPS location or a location calculated off cell-tower triangulation. I suppose it should not be a surprise though – we’ve spent two decades trying to educate Web application developers to not trust client-side input validation and yet they still fall for web browser manipulations.
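
To make the flaw concrete, the server-side check inside many of these products reduces to something like the sketch below – a hypothetical geofence (the coordinates, names, and threshold are all invented) in which every input arrives from the device itself, and is therefore trivially spoofable:

```python
import math

APPROVED_SITE = (47.6205, -122.3493)  # hypothetical office location (lat, lon)
MAX_DISTANCE_M = 100                  # hypothetical approved radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def grant_access(reported_lat: float, reported_lon: float) -> bool:
    # The critical weakness: both coordinates are whatever the device chooses
    # to report. A "fake location" app lies, and this check happily passes.
    return haversine_m(reported_lat, reported_lon, *APPROVED_SITE) <= MAX_DISTANCE_M
```

It’s client-side input validation all over again – the enforcement point trusts data the attacker fully controls.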

A quick search for “fake location” on the Apple and Android stores will reveal the prevalence and accessibility of GPS fakery. Any other data being reported from the gadget – IP address, network MAC address, cell-tower connectivity, etc. – can similarly be manipulated. In addition to manipulation of the BYOD gadget directly, alternative vectors that make use of private VPNs and local network jump points may be sufficient to bypass thin-client and “share-my-location” geo-locking application approaches.

That doesn’t mean these geo-locking technologies should be considered unicorn pelts, but it does mean that organizations seeking to deploy them need to invest some time in determining the category of threat (and opponent) they’re prepared to combat.

If the worst case scenario is a nurse losing a hospital iPad, where an inept thief may try to access patient records from another part of the city, then many of the geo-locking approaches will work quite well. However, if the scenario is a tech-savvy reporter paying the nurse for access to the hospital iPad and prepared to install a few small applications that manipulate the geo-location information in order to remotely access celebrity patient records… well, then you’ll need a different class of defense.

Given the rapid evolution of BYOD geo-locking applications and the number of new businesses offering security solutions in this space, my advice is two-fold – determine the worst case scenarios you’re trying to protect against, and thoroughly assess the technology prior to investment. Don’t be surprised if the marketing claims being made by many of these start-ups are a generation or two ahead of what the product is capable of performing today.

Having already assessed or reviewed the approaches of several start-ups in this particular BYOD security realm, I believe some degree of skepticism and caution is warranted.

-- Gunter Ollmann

First Published: IOActive Blog - May 7, 2013

Monday, March 25, 2013

Tales of SQLi

As attack vectors go, very few are as significant as obtaining the ability to insert bespoke code into an application and have it automatically execute upon “inaccessible” backend systems. In the Web application arena, SQL Injection vulnerabilities are often the scariest threat that developers and system administrators come face to face with (albeit way too regularly). In fact, the OWASP Top 10 list of Web threats places SQL Injection in first place.

More often than not, when security professionals discuss SQL Injection threats and attack vectors, they focus upon the Web application context. So it was with some amusement last week that I came across a photo of a slightly unorthodox SQL Injection attempt – that of someone attempting to subvert a traffic monitoring system by crafting a rather novel vehicle license plate.

My original tweet got retweeted a couple of thousand times – which just goes to show how many security nerds there are out there in the twitterverse.

This “in the wild” SQL Injection attempt was based upon the premise that video cameras are actively monitoring traffic on a road, reading license plates, and issuing driver warnings, tickets, or fines as deemed appropriate by local law enforcement.

At some point the video captures of the passing vehicle’s license plate must be converted to text and stored – almost certainly in some kind of backend database. The hacker who devised this attack was hoping that the process would be vulnerable to SQL Injection, and crafted a simple SQL statement that could potentially cause the backend database to drop (i.e. “delete”) the table containing all of the license plate information.
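
Nobody outside the project knows how that traffic system was actually built, but the bet the plate-crafter was making looks roughly like the first query below – a hypothetical sketch (the table name, plate text, and use of sqlite3 are purely illustrative). The parameterized statement underneath is the standard fix: bound values are treated as data, never parsed as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate TEXT, seen_at TEXT)")

# Hypothetical OCR output from a maliciously crafted plate.
plate = "ZU 0666', ''); DROP TABLE plates; --"

# Vulnerable pattern: the OCR'd text is concatenated straight into the SQL.
# Against a database/driver that permits stacked statements, the embedded
# quote closes the VALUES clause and the attacker's DROP TABLE executes.
# (Built but not run here - sqlite3's execute() refuses stacked statements.)
unsafe = "INSERT INTO plates (plate, seen_at) VALUES ('%s', datetime('now'))" % plate

# The fix: the plate text is bound as a parameter, so the embedded quote
# and DROP TABLE are stored verbatim rather than executed.
conn.execute("INSERT INTO plates (plate, seen_at) VALUES (?, datetime('now'))", (plate,))
print(conn.execute("SELECT plate FROM plates").fetchone()[0])
```
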
Whether or not this particular attempt worked, I have no idea (probably not, if I have to guess an outcome); but it does help nicely to raise attention to this category of vulnerability.

As surveillance systems become more capable – digitally storing information, distilling meta-data from image captures, and sharing observation data between systems – they open many new doors for mischievous and malicious attack.

The physical nature of these systems, coupled with the complexities of integration with legacy monitoring and reporting systems, often makes them open to attacks that would be classed as fairly simple in the world of Web application security.

A common failure of system developers is to assume that the physical constraints of the data acquisition process are less flexible than they really are. For example, if you’re developing a traffic monitoring system it’s easy to assume that license plates are a fixed size and shape, and can only contain 10 alphanumeric characters. Meanwhile, the developers of the third-party image processing code had no such assumptions and will digitize any image. It reminds me a little of the story in which reuse of some object-oriented code a decade ago resulted in kangaroos firing Stinger missiles during a military training simulation.
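
A cheap mitigation for exactly this failure is to enforce the assumed physical constraints in software before the captured text ever reaches the database. A minimal sketch – the plate format below is invented for illustration:

```python
import re
from typing import Optional

# Invented plate format: 2-10 characters of uppercase letters, digits,
# spaces, or dashes - i.e. only what the physical plate could carry.
PLATE_RE = re.compile(r"^[A-Z0-9][A-Z0-9 -]{0,8}[A-Z0-9]$")

def normalize_plate(ocr_text: str) -> Optional[str]:
    """Reject anything the physical plate format could never have produced."""
    candidate = ocr_text.strip().upper()
    return candidate if PLATE_RE.fullmatch(candidate) else None
```

Whitelist validation like this doesn’t replace parameterized queries – it simply shrinks the input space to what the physical world can actually present.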
While the image above is amusing, I’ve encountered similar problems before when physical tracking systems integrate with digital backend processes – opening the door to embarrassing and fraudulent events. For example, in the past I’ve encountered similar SQL Injection vulnerabilities within systems such as:

  • Toll booths reading RFID tags mounted on vehicle windshields – where the tag readers would accept up to 2k of data from each tag (even though the system was only expecting a 16 digit number).
  • Credit card readers that would accept pre-paid cards with negative balances – which resulted in the backend database crediting the wrong accounts.
  • RFID inventory tracking systems – where a specially crafted RFID token could automatically remove all record of the previous hours’ worth of inventory logging information from the database, allowing criminals to “disappear” with entire truckloads of goods.
  • Luggage barcode scanners within an airport – where specially crafted barcodes placed upon the baggage would automatically be conferred the status of “manually checked by security personnel” within the backend tracking database.
  • Shipping container RFID inventory trackers – where SQL statements could be embedded to adjust fields within the backend database to alter Customs and Excise tracking information.

Unlike the process of hunting for SQL Injection vulnerabilities within Internet-accessible Web applications, you can’t just point an automated vulnerability scanner at the application and have at it. Assessing the security of complex physical monitoring systems is generally not a trivial task and requires some innovative approaches. Experience goes a long way.

-- Gunter Ollmann