Wednesday, December 11, 2019

How Commercial Bug Hunting Changed the Boutique Security Consultancy Landscape

It’s been almost a decade since the first commercial “for-profit” bug bounty companies launched, leveraging crowdsourced intelligence to uncover security vulnerabilities and simultaneously creating uncertainty for boutique security companies around the globe.

Not only could crowdsourced bug hunting drive down their consulting rates or result in their best bug hunters going solo, it also raised ethics questions, such as whether a consultant previously engaged on a customer security assessment should also pursue out-of-hours bug hunting against that same customer. What if she held back findings from the day job to claim bounties at night?

With years of bug bounty programs now behind us, it is interesting to see how the information security sector transformed – or didn’t.


The fears of the boutique security consultancies – particularly those offering penetration testing and reverse engineering expertise – were proven unfounded. A handful of consultants did slip away and adopt full-time bug bounty pursuit lifestyles, but most didn’t. Nor did those companies feel a pinch on their hourly consulting rates. Instead, a few other things happened.

First, the boutiques upped the ante by repositioning their attack-based services – defining aggressive “red team” methodologies and doubling down on the value of combining black-box with white-box testing (or reverse engineering combined with code reviews) to uncover product and application bugs in a more efficient manner. Customers were (and are) encouraged to use bug bounties as a “first-pass filter” for finding common vulnerabilities – and then turn to dedicated experts to uncover (and help remediate) the truly nasty bugs.

Second, they began using bug bounty leaderboard tables as a recruitment vehicle for junior consultants. It was a subtle, but meaningful, change. Previously, a lot of recruitment had been based on evaluating inbound resumes by how many public disclosures or CVEs a security researcher or would-be consultant had made in the past. By leveraging the public leaderboards, suddenly there was a target list of candidates to go after. An interesting and obvious ramification was (and continues to be) that newly rising stars on public bug bounty leaderboards often disappear as they get hired as full-time consultants.

Third, bug bounty companies struggled with their business model. Taking a slice of vendors’ payments to crowdsourced bug hunters sounded easier and less resource-intensive than it turned out to be. The process of triaging the thousands of bug submissions – removing duplicates, validating proof-of-concept code, classifying criticality, and resolving disparities in hunter expectations – is tough work. It’s also something that tends to require a high degree of security research experience and costly expertise that doesn’t scale as rapidly as a crowdsourced community can. The net result is that many of the bug bounty crowdsource vendors were forced to outsource sizable chunks of the triage work to boutique consultancies – as many in-house bug bounty programs also do.

A fourth (but not final) effect was that some consulting teams found contributing to public bug bounty programs an ideal way of cashing in on consulting “bench time” when a consultant is not directly engaged on a commercial project. Contributing to bug bounties has proven a nice supplement to what was previously lost productivity.

Over the last few years I’ve seen some pentesting companies also turn third-party bug bounty research and contribution into in-house training regimes, marketing campaigns, and an engagement model to secure new customers, e.g., find and submit bugs through the bug bounty program and then reach out directly to the customer with a bag full of more critical bugs.

Given the commercial pressures on third-party bug bounty companies, it was not unexpected that they would seek to stretch their business model towards higher-premium offerings, such as options for customers to engage with their best and most trusted bug hunters before opening up to the public, or offering more traditional report-based “assessments” of the company’s product or website. More recently, some bug bounty vendors have expanded offerings to encompass community-managed penetration testing and red team services.

The lines continue to blur between the boutique security consultancies and crowdsourcing bug bounty providers. It’ll be interesting to see what the landscape looks like in another decade. While there is a lot to be said and gained from crowdsourced security services, I must admit that the commercial realities of operating businesses that profit from managing or middle-manning their output strikes me as a difficult proposition in the long run.

I think the crowdsourcing of security research will continue to hold value for the businesses owning the product or web application, and I encourage businesses to take advantage of the public resource. But I would balance that with the reliability from engaging a dedicated consultancy for the tougher stuff.

-- Gunter Ollmann

First Published: SecurityWeek - December 11, 2019

Thursday, November 14, 2019

Securing Autonomous Vehicles Paves the Way for Smart Cities

As homes, workplaces, and cities digitally transform during our Fourth Industrial Revolution, many of those charged with securing this digital future can find it difficult to “level up” from the endpoints and focus on defining and solving the larger problem sets. It is easy to get bogged down in the myriad of smart and smart-enough devices that constitute “IoT” in isolation from the overall security scope of the smart city – losing both valuable context and constraints.

While “smart city” can mean different things to different people, for city planners and officials its definition and implementation problems are quite well understood. The vendors that come knocking on their doors promote point solutions – smart traffic control systems, 5G and ultra-high bandwidth wireless communications, driverless vehicles, etc. – leaving the cities’ IT, operational technology (OT), and infosec teams to bring it all together.

An essential part of a security professional’s work is diving deep into the flaws and perils of individual products and clusters of technologies. But trying to “solve security” at a city level is an entirely different paradigm.


A substantial number of my peers and security researchers I’ve worked with over the past couple of decades have focused their energies on securing autonomous vehicles. The threats are varied – ranging from bypassing emission and speed controls to evading the next generation of city road taxes and insurance regulations to malicious remote control of someone else’s vehicle – yet mostly isolated to the vehicles themselves. From what I’m seeing and hearing, they’re doing a great job in securing these vehicles. Their security successes also advance traditional transit solutions, which helps smart cities keep pace with the transportation needs of a growing population. 

Given the continued urbanization of the human population, the growth and attraction of megacities (10 million plus inhabitants), and the strains on traditional transport systems, the thought of adding more personal-use autonomous vehicles to these heavily congested cities is outdated and arguably ludicrous. Today’s megacities are already battling traffic congestion with zoned charging, the elimination of fossil fuels, and outright bans on private transport. Tomorrow’s megacities – growing from 33 cities today, the largest holding 38 million people, to over 100 by 2100, with populations in excess of 88 million – need to completely rethink their transport systems and the security that goes with them.

Oddly enough, securing mass transit for megacities comes with some advantages. Mass transport systems that evolve from trains, trams, and subways have embedded within them design constraints that positively influence security. For example, driverless cars of today have to navigate and solve all kinds of road and traffic problems, while trams stick to pre-defined paths (i.e. rail networks) with greatly simplified routing and traffic signaling. Research papers covering adversarial AI in recent years have focused on attacking the deep learning and cognitive AI systems used by autonomous vehicles (e.g. adding stickers to a stop sign and making the driverless car think the sign says 45 mph), but these tactics would have negligible to no impact on reasonably scoped public transport systems.

It is reasonable to assume that the smart cities of the near future will consist of trillions of smart devices – each of them semi- or fully managed, providing alerts, logs, and telemetry of their operations. For those city leaders – particularly CIOs, COOs, CTOs, CISOs, and CSOs – the changes needed to manage, secure, certify, and govern all these devices and their output are mind-bogglingly huge.

Interestingly enough, the framework for managing data security for millions of chatty networked devices has largely been solved. Having become cloud-native, modern Security Information and Event Management (SIEM) technologies have proved remarkably successful in identifying anomalies, attacks, and misconfigurations.

The data handling capabilities and scalability of cloud-native SIEM may be just the right kind of toolkit to begin to solve smart city operations (and security) at the megacity level. In addition, with advanced AI being a core component of SIEM, the systems that identify and construct attack kill chains and mitigate threats through conditional access rules could instead be used and trained to identify surge transport requirements (due to concerts ending on a rainy day) and automatically reroute and optimize tram or bus capacity to deliver citizens safely (and dryly) to their destinations – as an example. 

Securing smart cities offers many opportunities to rethink our assumptions on security and “level up” the discussion to solve problems at the ecosystem level. Advancements in AI analytics and automated response technologies can handle the logs, alerts, and streaming telemetry that contribute to OT infrastructure security for megacities. In turn, this increase in data volume fine-tunes anomaly and behavioral-based detection systems to operate with higher efficiency and fidelity, which helps secure city-wide IT infrastructure.

-- Gunter Ollmann

First Published: SecurityWeek - November 14, 2019

Tuesday, October 8, 2019

Cloud is Creating Security and Network Convergence

Network Security Expertise is Needed More Than Ever Inside Security Operations Centers and on DevOps Teams

Digital transformation forces many changes to a business as it migrates to the public cloud. One of the most poorly examined is the convergence of network and security administration tasks and responsibilities in the public cloud.


On premises, the division between roles is pretty clear. The physical nature of networking infrastructure – the switches, routers, firewall appliances, network taps, WiFi hubs, and miles upon miles of cable – makes it easy to separate responsibilities. If it has power, stuff connects to it, it routes packets, and it weighs more than 5 pounds, it probably belongs to the networking team.

In the cloud, where network connectivity features are defined by policies and code, the network is ephemeral. More importantly, the network is a security boundary – protecting services, applications, and data.

For many organizations, an early steppingstone in their digital transformation is virtualizing all their on-premises applications, infrastructure, and administrative and monitoring processes. Operating almost entirely within an Infrastructure-as-a-Service (IaaS) mode, previously favored network vendors provide virtual machine (VM) versions of their on-premises networking and security appliances – effectively making the transition to public cloud the equivalent of shifting to a new co-hosting datacenter.

This early stage takes very little advantage of public cloud. VMs remain implanted in statically defined networking architectures and old-style network monitoring remains largely the same. However, as organizations embrace continuous integration and continuous delivery (CI/CD), DevOps, serverless functions, and other cloud-native services, the roles of network and security administrator converge rapidly. At that point, network topology ceases to be the grid that servers and applications must snap to. Instead, leveraging the software defined network (SDN) nature of the cloud, the network becomes ephemeral – continuously defined, created, and disposed of in code.

With zero trust running core to modern CI/CD and DevOps security practices in the cloud, SDN has become a critical framework for protecting data, identities, and access controls.
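
To make that concrete, here is a minimal sketch of a network boundary expressed purely in code, the kind of ephemeral, policy-defined construct described above. It assumes an AWS-style environment and the boto3 SDK purely for illustration; the workload name, VPC ID, and address ranges are hypothetical placeholders.

    # Minimal sketch: a network security boundary expressed as code rather than hardware.
    # Assumes an AWS-style environment and the boto3 SDK; names and CIDR ranges are illustrative.
    import boto3

    ec2 = boto3.client("ec2")

    # Create an ephemeral security group scoped to a single application tier.
    sg = ec2.create_security_group(
        GroupName="payments-api-tier",          # hypothetical workload name
        Description="Ephemeral boundary for the payments API",
        VpcId="vpc-0123456789abcdef0",          # placeholder VPC id
    )

    # Permit only the load balancer subnet to reach the service port.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8443,
            "ToPort": 8443,
            "IpRanges": [{"CidrIp": "10.20.0.0/24", "Description": "LB subnet only"}],
        }],
    )

    # When the pipeline tears the stack down, the boundary disappears with it.
    ec2.delete_security_group(GroupId=sg["GroupId"])

The boundary that a firewall appliance once embodied now lives and dies with the deployment pipeline.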

Today, a cloud security architect, security analyst, or compliance officer cannot fulfill their security responsibilities without being a cloud network expert too. And, vice versa, a systems architect or network engineer cannot bring value to cloud operations without being comfortable wearing size 15 cloud security shoes.

For networking professionals transitioning to the cloud, I offer the following advice:

  • Partner extensively with your peers on the security team – they too are undergoing a transformation and are destined to become network experts.
  • Plan to transition from VM-infested IaaS environments to cloud-native services as fast as possible – they are easier to understand, manage, and deploy.
  • Become familiar with the portal management experience of each new network (security) service, but plan on day-to-day management being at the command line.
  • Brush up your scripting language expertise and get comfortable with code management tools. In a CI/CD workplace, GitHub and its ilk are where the real action happens (see the short sketch after this list).
  • Throw out the old inhibitions of consuming valuable network bandwidth with event logs and streaming service health telemetry. In the age of cloud SIEM, data is king and storage is cheap, and trouble-shooting ephemeral network problems requires both in abundance.
  • Forget thumbing through network security books to learn. Training is all online. Watch the cloud provider’s workshop videos and test the lessons in real-time online.
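
As a small illustration of the scripting habit encouraged above, the sketch below scans exported flow-log records for rejected traffic toward a sensitive subnet. The field names and subnet are assumptions rather than any particular provider's schema.

    # Minimal sketch of day-to-day network security scripting: scan exported flow-log
    # lines for rejected traffic to a sensitive subnet. Field layout is illustrative.
    import csv

    SENSITIVE_PREFIX = "10.50."   # hypothetical data-tier subnet

    with open("flow_logs.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            if row["action"] == "REJECT" and row["dst_ip"].startswith(SENSITIVE_PREFIX):
                print(f'{row["src_ip"]} -> {row["dst_ip"]}:{row["dst_port"]} rejected')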

With so many cloud critical controls existing at the network layer, network security expertise is needed more than ever inside security operations centers and on DevOps teams.

The faster in-house network administrators can transition to becoming public cloud network security engineers, architects, or analysts, the faster their organizations can implement digital transformation.

-- Gunter Ollmann

First Published: SecurityWeek - October 8, 2019

Tuesday, September 10, 2019

Stop Using CVSS to Score Risk

The mechanics of prioritizing one vulnerability’s business risk over another has always been fraught with concern. What began as securing business applications and infrastructure from full-disclosure bugs a couple of decades ago, has grown to encompass vaguely referenced flaws in insulin-pumps and fly-by-wire aircraft with lives potentially hanging in the balance.

The security industry has always struggled to “score” the significance of the threat posed by a newly discovered vulnerability and recent industry practices have increased pressure on how this should be done.

With the growth of bug bounty programs and vertical industry specialization at boutique security consultancies, vulnerability discoveries with higher severity often translate directly into greater financial reward for the discoverers. As such, there is immense pressure to increase both the significance and perceived threat posed by the vulnerability. In a growing number of cases, marketing teams will conduct world-wide campaigns to alert, scare, and drive business to the company.

It’s been close to 25 years since the first commercial vulnerability scanners started labeling findings in terms of high, medium, and low severity. Even back then, security professionals stumbled by confusing severity with “risk.”

At the turn of the last century, as companies battled millennium bugs, the first generation of professional penetration testing consultancies started to include factors such as “exploitability,” “likelihood of exploitation,” and “impact of exploitation” in their daily and end-of-engagement reports as a way of differentiating between vulnerabilities with identical severity levels. Customers loved the additional detail, yet the system of scoring was highly dependent on the skills and experience of the consultant tabulating and reporting the results. While the penetration testing practices of 20 years ago have been rebranded Red Teaming and increasingly taken in-house, risk scoring vulnerabilities remains valuable – but continues to be more art than science.

Perhaps the most useful innovation in terms of qualifying the significance of a new vulnerability (or threat) has been the Common Vulnerability Scoring System (CVSS). It’s something I feel lucky to have contributed to and helped drive across products when I led X-Force at Internet Security Systems (acquired by IBM in 2006). As the (then) premier automated scanner and managed vulnerability scanning vendor, the development and inclusion of CVSS v1 scoring back in 2005 changed the industry – and opened up new contentions in the quantitative weighting of vulnerability features that are still wrestled with today in CVSS version 3.1.


CVSS is intended to summarize the severity of vulnerabilities in the context of the software or device – not the systems that are dependent upon the software or device. As a result, it worries me deeply when I hear that CVSS scores are wrongly being used to score the risk a vulnerability poses to an organization, device manufacturer, or end user.

That misconception was captured recently in an article arguing that vulnerability scoring flaws put patients’ lives at risk. On one hand, the researchers point out that though the CVSS score for their newly disclosed vulnerability was only middling (5.8 out of 10), successful exploitation could enable an attacker to adjust medicine dosage levels and potentially kill a patient. And, on the other hand, medical device manufacturers argue that because the score was relatively low, the vulnerability may not require an expedited fix and subsequent regulatory alerting.

As far as CVSS is concerned, both the researchers and the medical device vendor were wrong. CVSS isn’t, and should never be used as, a risk score.

Many bright minds over two decades have refined CVSS scoring elements to make it more accurate and useful as a severity indicator, but have stalled in searching for ways to stretch environmental factors and the knock-on impacts of a vulnerability into quantifiable elements for determining “risk.” Today, CVSS doesn’t natively translate to a risk score – and it may never because every industry assesses risk differently and each business has its own risk factor qualifications that an external party won’t know.

I would caution any bug hunter, security analyst, software vendor, or device manufacturer to not rely on CVSS as the pointy end of the stick for prioritizing remediation. It is an important variable in the risk calculation – but it is not an adequate risk qualifier by itself.
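
To illustrate the distinction, here is a deliberately naive sketch of how a CVSS base score might feed a broader risk calculation rather than stand in for one. The weighting factors are hypothetical and would differ for every business and industry.

    # Illustrative only: CVSS base severity treated as one input to a broader risk score.
    # The weighting factors are hypothetical and would differ for every business.
    def risk_score(cvss_base, asset_criticality, exposure, exploit_available):
        """cvss_base: 0-10; asset_criticality and exposure: 0.0-1.0; exploit_available: bool."""
        score = cvss_base / 10.0                      # severity of the flaw itself
        score *= 0.5 + asset_criticality              # what the flaw sits on
        score *= 0.5 + exposure                       # who can reach it
        score *= 1.5 if exploit_available else 1.0    # whether it is being weaponized
        return round(min(score * 10, 10), 1)

    # A "middling" 5.8 CVSS bug on an insulin pump dosing controller...
    print(risk_score(5.8, asset_criticality=1.0, exposure=0.8, exploit_available=True))   # ~10
    # ...versus the same bug on an isolated test bench.
    print(risk_score(5.8, asset_criticality=0.1, exposure=0.1, exploit_available=False))  # ~2

The same “middling” severity lands at opposite ends of the priority queue once asset criticality, exposure, and exploit availability are considered.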

-- Gunter Ollmann

First Published: SecurityWeek - September 10, 2019

Tuesday, August 20, 2019

Harnessing Stunt Hacking for Enterprise Defense

Make Sure You Understand the Root Cause of the Vulnerabilities or Attack Vectors Behind the Next Over-Hyped Stunt Hack

Every year, at least one mediocre security vulnerability surprisingly snatches global media attention, causing CISOs and security researchers to scratch their heads and sigh “who cares?”

Following a trail of overly-hyped and publicized security bugs in smart ovens, household fridges, digital teddy bears, and even multi-function toilet-bidets, the last few weeks have seen digital SLR camera vulnerabilities join the buzz list. Yet, this latest hack boils down to a set of simple WiFi-enabled file-sharing flaws in a mid-priced camera that allowed researchers to demonstrate specially crafted ransomware attacks. It is not an obvious or imminent threat to most enterprise networks.

Love it or loathe it, stunt hacking and over-hyped bugs are part of the modern information security landscape. While the vast majority of such bugs represent little real threat to business, they stir up legitimate questions. Does marketing security hacks to a fever-pitch cause more harm than good? Are stunts a distraction or an amplifier for advancing enterprise security?


There is little doubt within the security researcher community that a well-staged vulnerability disclosure can quickly advance stalled conversations with reluctant vendors. Staged demonstrations and a flair for showmanship had the healthcare industry hopping as security flaws embedded in surgically implanted insulin pumps and heart defibrillators became overnight dinner-table discussions and murder plots in TV dramas. A couple years later, prime time news stories of researchers taking control of a reporter’s car – remotely steering the vehicle and disabling braking – opened eyes worldwide to the threats underlying autonomous vehicles, helping to create new pillars of valued cyber security research.

Novel technologies and new devices draw security researchers like moths to a flame – and that tends to benefit the community as a whole. But it is often difficult for those charged with defending the enterprise to turn awareness into meaningful actions. A CFO who’s been sitting on a proposal for managed vulnerability scanning because the ROI arguments were a little flimsy may suddenly approve it on reading news of how the latest step-tracking watch inadvertently reveals the locations of secret military bases around the world.

In a world of over-hyped bugs, stunt hacking, and branded vulnerability disclosures, my advice to CISOs is to make security lemonade by finding practical next steps to take:

  1. Look beyond the device and learn from the root cause of the security failing. Hidden under most of the past medical device hacks were fundamental security flaws involving outdated plain-text network protocols and passwords, unsigned patching and code execution, replay attacks and, perhaps most worrying, poorly thought through mechanisms to fix or patch devices in the field. The outdated and unauthenticated Picture Transfer Protocol (PTP) was the root cause of the SLR camera hack.
  2. Use threat models to assess your enterprise resilience to recently disclosed vulnerabilities. The security research community waxes and wanes on attack vectors from recent bug disclosures, so it often pays to follow which areas of research are most in vogue. The root cause vulnerabilities of the most recent hacks serve as breadcrumbs for other researchers hunting for similar vulnerabilities in related products. For this reason, build threat models for all form factors the root flaw can affect.
  3. Learn, but don’t obsess, over vulnerable device categories and practice appropriate responses. At the end of the day, a WiFi-enabled digital SLR camera is another unauthenticated removable data storage unit that can potentially attach to the corporate network. As such, the response should be similar to any other roaming exfiltration device. Apply the controls for preventing a visitor or employee roaming a datacenter with a USB key in hand to digital SLR cameras.

Regardless of how you feel about the showmanship of stunt hacking, take the time to understand and learn from their root causes. While it is highly unlikely that an attacker will attempt to infiltrate your organization with a digital SLR camera (there are far easier and more subtle hacking techniques that will achieve the same goal), it is still important to invest in appropriate policies and system controls to defend vulnerable vectors.

With more people seeking futures as security researchers, it would be reasonable to assume that more bugs (in a broader range of devices and formats) will be disclosed. What may originally present as a novel flaw in, let us say, a robotic lawnmower, may become the seed vector for uncovering and launching new 0-day exploits against smart power strips in the enterprise datacenter at a later date.

Chuckle or cringe, but make sure you understand the root cause of the vulnerabilities or attack vectors behind the next over-hyped stunt hack and don’t have similar weaknesses in your enterprise.

-- Gunter Ollmann

First Published: SecurityWeek - August 20, 2019

Monday, July 22, 2019

Digital Transformation Makes the Case for Log Retention in Cloud SIEMs

As organizations pursue their digital transformation dreams, they’ll migrate from on-premises SIEM to cloud-based SIEM. In the process of doing so, CISOs are taking a closer look at their previous security incident and event log retention policies, and revisiting past assumptions and processes.

For organizations needing to maintain a smorgasbord of industry compliance and regulatory requirements, overall event log retention will range from one year through to seven. Many organizations find that a minimum of one year meets most mandated requirements but err on the side of retaining three to four years – depending on what their legal counsel advises.

With public cloud, data retention spans many different options, services, and price points. Backups, blob storage, “hot” access, “cold” access, etc. – there are endless ways to store and access security events and logs. With cloud storage dropping in price year-on-year, it’s cheap and easy to just store everything forever – assuming there’s no rush or requirement to inspect the stored data. But hot data, more expensive than the cold option, gives defenders the quick access they need for real-time threat hunting. Keeping data hot for SIEM use is inevitably one of the more expensive data storage options. A balance needs to be struck between having instant access to SIEM for queries and active threat hunting, and long-term regulatory-driven storage of event and log data. Can an optimal storage balance be achieved?
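
One way to reason about that balance is a back-of-envelope cost model. The sketch below is illustrative only; the ingest volume and per-gigabyte prices are assumptions, not any provider's actual rates.

    # Back-of-envelope sketch of the hot/cold balance. All prices and volumes are assumptions,
    # not any provider's actual rates; plug in your own figures.
    DAILY_LOG_GB   = 200          # assumed ingest volume
    HOT_PER_GB_MO  = 2.50         # assumed cost of SIEM-queryable ("hot") retention, per GB-month
    COLD_PER_GB_MO = 0.02         # assumed cost of archival ("cold") storage, per GB-month

    def monthly_cost(hot_days, total_retention_years):
        hot_gb  = DAILY_LOG_GB * hot_days
        cold_gb = DAILY_LOG_GB * (total_retention_years * 365 - hot_days)
        return hot_gb * HOT_PER_GB_MO + cold_gb * COLD_PER_GB_MO

    for hot_days in (90, 220, 365):
        print(f"{hot_days} hot days, 4y total retention: ${monthly_cost(hot_days, 4):,.0f}/month")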


Widely available public threat reports for the last couple of years provide a “mean-time” to breach discovery ranging from 190 to 220 days and a breach containment window of between 60 to 100 days. Therefore, keeping 220 days of security event logs “hot” and available in a cloud SIEM would statistically only help with identifying half of an organization’s breaches. Obviously, a higher retention period makes sense – especially for organizations with less mature or less established security operations capabilities.

However, a sizable majority of SIEM-discoverable threats and correlated events are detectable in a much shorter timeframe – and rapidly detecting these breaches naturally makes it considerably more difficult for an adversary to maintain long-term persistence. For example, automatically piecing together the kill chain for an email phishing attack that led to a malware installation, which phoned home to a malicious C&C, which had then brute-forced administrative access to a high-value server, is almost trivial for cloud SIEM (assuming appropriate logging was enabled). Nowadays, such a scenario (or a permutation of it) likely accounts for nearly half of all enterprise network breaches.
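
A toy version of that correlation looks something like the sketch below: events sharing an entity (here, the compromised host) are stitched into an ordered candidate kill chain. Real SIEM correlation is far richer, and the event records here are invented for illustration.

    # Toy illustration of kill-chain correlation: stitch related events together by
    # following a shared entity (the compromised host) and ordering them by stage.
    events = [
        {"type": "phish_click",  "user": "alice", "host": "wks-12"},
        {"type": "malware_exec", "host": "wks-12", "process": "invoice.exe"},
        {"type": "c2_beacon",    "host": "wks-12", "dest": "203.0.113.9"},
        {"type": "brute_force",  "src_host": "wks-12", "target": "sql-prod-01"},
    ]

    CHAIN_ORDER = ["phish_click", "malware_exec", "c2_beacon", "brute_force"]

    def build_chain(events, host):
        related = [e for e in events if host in (e.get("host"), e.get("src_host"))]
        return sorted(related, key=lambda e: CHAIN_ORDER.index(e["type"]))

    for step in build_chain(events, "wks-12"):
        print(step["type"])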

My advice to organizations new to cloud SIEM is to begin with a rolling window of one year’s worth of event logs while measuring both the frequency of breaches and time to mitigate. All older event logs can be stored using cheaper cloud storage options and needn’t be immediately available for threat hunting.

Depending on the security operations teams’ capacity for mitigating the events raised by cloud SIEM, it may be financially beneficial to reduce the rolling window if the team is overwhelmed with unresolvable events. I’d be hesitant to reduce that rolling window. Instead, I would recommend CISOs with under-resourced teams find and engage a managed security services provider to fill that skills gap.

A question then arises as to the value of retaining multiple years of event logs. Is multi-year log retainment purely a compliance tick-box?

While day-to-day cloud SIEM operations may focus on a one-year rolling window, it can be beneficial to organize a twice-annual threat hunt against several years of event logs using the latest available threat intelligence and indicator of compromise (IoC) information as seeds for investigation. These periodic events have two objectives: reduce your average monthly cloud SIEM operating costs (by temporarily loading and unloading the historic data) and allow teams to change mode and “deep dive” into a broader set of data while looking for “low and slow” compromises. If an older breach is detected, incrementally older event logs could be included in the quest to uncover the origin point of an intruder’s penetration or full spectrum of records accessed.

Caution over infinite event log retention may be warranted, however. If the breached organization only has a couple years of logs, versus being able to trace breach inception to, say, four years earlier, their public disclosure to customers may sound worse to some ears (including regulators). For example, disclosing “we can confirm customers over the last two years are affected” is a weaker disclosure than “customers since July 4th 2015 are affected”. Finding the sweet-spot in log retention needs to be a board-level decision.

Having moved to cloud SIEM, CISOs also need to decide what logs should be included and what log settings should be used.

Ideally, all event logs should be passed to the cloud SIEM. That is because the AI and log analytics systems powering threat detection and automated response thrive on data. Additionally, inclusion of logs from the broadest spectrum of enterprise devices and applications will help reduce detection times and remove potential false positives, which increase overall confidence in the system’s recommendations.

Most applications and networked appliances allow for different levels of logging – scaling from bare error messages and alerts through to warnings, status messages, and debugging information. In general, the greater the detail in the event logs, the greater the value they bring to cloud SIEM. In this way, upgrading from “normal” to “verbose” log settings can offer several threat response advantages – particularly when it comes to handling misconfigurations and criticality determination.
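
The difference is easy to see with a small example. Using Python's standard logging module as a stand-in for any application's log settings, the verbose level preserves the configuration and session detail a SIEM can correlate on, while the normal level surfaces only the problems.

    # Small illustration of "normal" versus "verbose" settings using Python's logging module.
    import logging

    logging.basicConfig(level=logging.DEBUG)   # switch to logging.WARNING for the "normal" view
    log = logging.getLogger("app.gateway")

    log.debug("config reloaded from /etc/gateway.yaml (tls_min=1.2)")    # verbose-only detail
    log.info("session established for user=svc-backup from 10.8.4.17")   # verbose-only detail
    log.warning("certificate expires in 10 days")                        # visible at both levels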

The symbiotic development of cloud SIEM and cloud AI innovation continues at an astounding pace. While cloud SIEM may be new for most organizations, its ability to harness the innate capabilities of public cloud is transforming security operations. Not only are threats being uncovered quicker and responses managed more efficiently, but continual advancements in the core AI make the technology more valuable while the costs of operating SIEM and storing data in the cloud continue to drop. This makes it possible for companies to make pragmatic use of the intelligent cloud by operating on a one-year window of hot data while getting value out of older data, stored cold, via twice-yearly threat hunts.

-- Gunter Ollmann

First Published: SecurityWeek - July 22, 2019

Tuesday, July 2, 2019

Defending Downwind as the Cyberwar Heats up

The last few weeks have seen a substantial escalation of tensions between Iran and the US as regional cyberattacks gain pace and sophistication – with Iran’s downing of a US drone, possibly leveraging its previously claimed GPS spoofing and GNSS hacking skills (to trick it into Iranian airspace), and a retaliatory US cyberattack knocking out Iranian missile control systems.


While global corporations have been targeted by actors often cited as supported by or sympathetic to Iran, the escalating tensions in recent weeks will inevitably bring more repercussions as tools and tactics change with new strategic goals. Over the last decade, at other times of high tension, sympathetic malicious actors have often targeted the websites or networks of Western corporations – pursuing defacement and denial of service strategies. Recent state-level cyberattacks show actors evolving from long-cycle data exfiltration to include tactical destruction.

State-sponsored attacks are increasingly focused on destruction. Holmium, a Middle Eastern actor, has recently been observed by Microsoft targeting the oil & gas and maritime transportation sectors – using a combination of tactics to gain access to networks, including socially engineered spear phishing operations and password spray attacks – and is increasingly associated with destructive attacks.

Many businesses may be tempted to take a “business as usual” stance, but there is growing evidence that, as nation state cyber forces square off, being downwind of a festering cyberwar inevitably exposes organizations to collateral damage.

As things heat up, organizations can expect attacks to shift from data exfiltration to data destruction and for adversarial tooling to grow in sophistication as they expose advanced tools and techniques, such as zero-day exploits, in order to gain a temporary advantage on the cyber battlefield.

Against this backdrop, corporate security teams and CISOs should focus on the following areas:

  1. Pivot SOC teams from daily worklist and ticket queue response to an active threat hunting posture. As state-sponsored attackers escalate to more advanced tools and break out cherished exploits, some attacks will become more difficult to pick up with existing signature and payload-based threat detection systems. Consequently, SOC teams will need to spend more time correlating events and logs, and hunting for new attack sequences.
  2. Prepare incident responders to investigate suspicious events earlier and to mitigate threats faster. As attackers move from exfiltration to destruction, a timely response becomes even more critical.
  3. Review the organization’s back-up strategy for all critical business data and business systems, and verify their recoverability. As the saying goes, a back-up is only as good as its last recovery. This will provide continuity in the event that ransomware actors no longer respond to payment, leaving your data otherwise unrecoverable.
  4. Update your business response plan and practice disaster recovery to build your recovery muscle memory. Plan for new threat vectors and rapid destruction of critical business systems, both internal and third-party.
  5. Double-check the basics and make sure they’re applied everywhere. Since so many successful attack vectors still rely on social engineering and password guessing, use anti-phishing and multi-factor authentication (MFA) as front-line defenses for the cyberwar. Every privileged account throughout the organization and those entrusted to “trusted” supplier access should be using MFA by default.
  6. Engage directly with your preferred security providers and operationalize any new TTPs and indicators associated with Middle Eastern attack operators that they can share with you. Make sure that your hunting tools account for the latest threat intelligence and are capable of alerting the right teams should a threat surface (see the short sketch after this list).
  7. For organizations that have adopted cyber-insurance policies to cover business threats that cannot be countered with technology, double-check which and what “acts of war” are covered.
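
As a minimal sketch of point 6, the snippet below sweeps a batch of proxy log lines for indicator values supplied by a provider. The domains and log format are placeholders, not real threat intelligence.

    # Minimal sketch of operationalizing shared indicators: sweep recent proxy or DNS
    # log lines for known-bad domains supplied by a provider. Values are placeholders.
    iocs = {"bad-updates.example", "cdn-telemetry.example"}   # hypothetical provider feed

    def sweep(log_lines, indicators):
        hits = []
        for line in log_lines:
            if any(ioc in line for ioc in indicators):
                hits.append(line)
        return hits

    sample = [
        "10.1.2.3 GET http://bad-updates.example/patch.bin",
        "10.1.2.9 GET https://intranet/home",
    ]
    for hit in sweep(sample, iocs):
        print("ALERT:", hit)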

While implementing the above advice will place your organization on a better “cyberwar footing”, history shows that even well-resourced businesses targeted by Iranian state-sponsored groups fall victim to these attacks. Fortunately, there’s a silver lining in the storm clouds. Teaming up in-house security teams with public cloud providers puts companies in a much better position to respond to and counter such threats because doing so lets them leverage the massively scalable capabilities of the cloud provider’s infrastructure and the depth of security expertise from additional responders. For this reason, organizations should consider which critical business systems could be duplicated or moved for continuity and recovery purposes to the cloud, and in the process augment their existing on-premises threat response.

-- Gunter Ollmann

First Published: SecurityWeek - July 2, 2019

Tuesday, June 11, 2019

The Symbiosis Between Public Cloud and MSSPs

To the surprise of many, public cloud appears to be driving a renaissance in adoption and advancement of managed security service providers (MSSP).

For several years, the major public cloud providers have settled upon a regular rhythm of rolling out new security features for inclusion in their workload management tooling – adding new detections and alerting capabilities that, for want of a better description, are designed to help subscribers clean up an expanding corpus of horrible little mistakes that expose confidential information or make it easy for an attacker to abuse valuable resources and steal intellectual property. To my mind, this incremental rollout of embedded security features represents perhaps the single most valuable advantage of moving to the cloud.


Many of these security features are simple and non-intrusive. For example, they could alert the subscriber that they just created a publicly accessible data storage device that is using a poor administrator password, or that they’re about to spin up a virtual machine (VM) that hasn’t been patched or updated in nine months. Moving beyond alerts, the cloud security tooling could also propose (or force, if enforcing a compliance mandate) that a stronger password be used and that multi-factor authentication be applied at the click of a button or, in the case of a dated VM, auto-patch the OS and install an updated security suite on the image.
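
A simple example of the class of check such tooling automates is flagging storage that is readable by everyone. The sketch below assumes an AWS-style environment and the boto3 SDK purely for illustration; every major cloud offers equivalent native checks.

    # Illustrative example of the class of check such tooling performs: flag storage
    # that is readable by everyone. Assumes an AWS-style environment and boto3.
    import boto3

    s3 = boto3.client("s3")
    PUBLIC_GROUPS = {"http://acs.amazonaws.com/groups/global/AllUsers",
                     "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"}

    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
                print(f'{bucket["Name"]}: publicly accessible ({grant["Permission"]})')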

Getting these security basics done right and applied consistently across millions of subscribers and tens of millions of workloads has, year over year, proved that businesses operating in the public cloud are more secure than those that are solely on-premises. Combining the cloud’s security benefits with MSSP solutions unlocks even greater value, the most common of which are:

Small and medium businesses (SMB), prior to moving to the cloud, were lucky to have a couple of IT support staff who probably between them managed three or four security technologies (e.g. anti-virus, firewall, VPN, and an anti-phishing gateway). Upon moving to the cloud, the IT team are presented with 20+ security services running by default and another 50+ security product options within a single click’s reach, and are simply overwhelmed by the volume of technology presented to them and the responsibility of managing such a diverse portfolio of security products.

The move to the cloud is not the flick of a switch, but a journey. The company’s in-house security team must continue to support the legacy on-premises security technology while learning and mastering an even larger set of cloud-based security options and technologies. These teams are stretched too thin and cannot afford the time to “retrain” for the cloud.

Businesses embracing DevOps strive to optimize value and increase the pace of innovation in the cloud. Operationalizing a DevOps culture typically requires the business to re-orient their internal security team and have them master SecDevOps. As in-house security expertise focuses on SecDevOps, daily security operational tasks and incident response require additional resourcing.

Locating, hiring, and retaining security talent is becoming more difficult – especially for SMBs. Companies moving to the cloud typically either hire new security expertise to carry the organization into the cloud or retrain their smartest and most valuable in-house security talent to try to backfill those “legacy” security roles.

Traditionally, MSSPs’ value lay in their ability to manage a portfolio of security products that they sold into and installed in their customers’ environments. To ensure service level quality and depth of knowledge, the most successful MSSPs would be highly selective and optimize the portfolio of security products they could support.

As their customers move workloads to the public cloud, larger MSSPs are retraining their technical teams in the cloud-native security offerings from the top public cloud providers. In tandem, the MSSPs are updating their internally developed SOC, NOC, and incident handling tools to embrace the default public cloud provider’s APIs and security products. 

At the same time, MSSPs appear to be doing better with hiring and retaining security expertise than SMBs. Not only are they able to pay higher salaries but, perhaps more importantly, they’re able to provide the career development paths not present in smaller businesses through a diverse spectrum of security challenges spread over multiple customer environments.

The parallel growth of default public cloud security capabilities and MSSP adoption offers a solution for the dearth of entry-level information security personnel and access to experienced incident responders. Combining cloud efficiencies with MSSP delivery creates advanced capabilities beyond what on-premises-only defense can achieve.

Smart MSSPs are embracing cloud operations for their own optimizations and service delivery. Many are taking advantage of the built-in AI and elastic compute capabilities to provide more advanced and personalized security services to customers – without needing to scale their pool of human experts. In this way businesses embracing the efficiencies of the public cloud and on-demand security expertise gain a critical advantage in working around the shortage of security professionals.

Today we have fewer horses than a century ago and consequently fewer trained farriers, but more qualified welders. As businesses move to the cloud and embrace MSSPs, it will become possible to deliver advanced capabilities that help fill entry-level security requirements, which account for the majority of security vacancies around the world. As a result, existing defenders can work on higher-level problems, enabling companies to cover more ground.

-- Gunter Ollmann

First Published: SecurityWeek - June 11, 2019

Tuesday, May 21, 2019

From APES to Bespoke Security Automated as a Service

Many of the most innovative security start-ups I come across share a common heritage – their core product evolved from a need to automate the delivery of an advanced service that had begun as a boutique or specialized consulting offering. Start-ups with this legacy tend to have bypassed the “feature looking for a problem” phase that many others struggle with and often launch their products on day one alongside a parade of satisfied marquee accounts.

While there isn’t a universal formula for success, over my years delivering boutique professional security services, I have been very lucky to encounter that product evolution several times, usually resulting from consultants intelligently automating the repetitive parts of their jobs away and creating a new class of product.

For example, around the turn of the millennium, when penetration testing came to the fore as the cutting edge in security consulting, the need for automating away the drudgery of port scans and vulnerability scanning was obvious. The first foray led to tooling that freed up consultants to focus on the “art” of bug hunting, and to the recognition that some customers’ needs were satisfied with those basic capabilities. During my time at Internet Security Systems, that first automation came to be known as the “monkey scan” – because of how easy it was to run. Of course, once the marketing team got wind of customers purchasing the scanning service, a more sensible name was needed and so Automated Perimeter and Enterprise Scanner (APES) was born. From humble beginnings, that X-Force managed service line grew and, through acquisition, its legacy continues today as part of IBM’s Managed Security Services Provider (MSSP) business.


Automation of repetitive consulting tasks is an obvious and critical element, but so too is the need to ensure consistency and exhaustive completion of delivery. Along my own journey I’ve seen former colleagues spawn companies such as SPI Dynamics and PortSwigger Web Security (i.e. Burp Proxy) to bring to market new web application security testing tools, Continuum Security SL to solve SDLC-based threat modelling and risk management challenges, Endgame to kickstart the nation-state threat intelligence market, and AttackIQ to construct and define the Attack and Breach Simulation category – all springing from the imagination of talented consultants looking to make life just a little bit easier.

To understand what the next innovative security technology will be, we should look closely at the premium service offerings of specialized boutique consulting companies and pay attention to those services that have a documented and repeatable methodology. While many young specialist service lines may present themselves as more “art” than “science”, the turning point comes with the development and enforcement of a standardized methodology.

A service methodology ensures consistency of delivery. Consistency means that the differentiated elements of the service can be, or must be, repeatable. If they are repeatable, then they almost always can be automated. If key elements of the service are automatable, then they can be productized. 

The depth of service that automation can deliver roughly defines whether the product will be most effectively delivered as a managed security service, a self-service SaaS offering, or a stand-alone product.

In the past, that evolution from boutique consulting service to top-right corner market leading product has taken a few years – typically three years for productization and market awareness, then another three to five before analysts label and assign a market segment. I anticipate that more consulting services will mature into products and the overall pace will increase over coming years because public cloud and AI are rapidly accelerating the gestation of these products. 

Just as many of the most innovative companies launch as cloud-native, security consultants have similarly embraced and applied their expertise in cloud environments. Consultants were often constrained by their clients’ hardware and physical locations. Now, when consultants need to automate repetitive tasks (e.g. enumerating APIs, fuzzing payloads, etc.) or to test a hypothesis, they already have the tools in front of them – with no energy lost in applying them. This greatly shortens the time needed to prototype new cross-client solution sets and capabilities.
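
As a small, hedged example of that kind of repetitive task being scripted away, the sketch below replays a handful of malformed payloads against a hypothetical, explicitly authorized API endpoint and records unexpected server errors. The target URL and payloads are placeholders.

    # Toy sketch of the repetitive work being automated: probe a hypothetical, in-scope
    # API with a list of malformed payloads and record anything that errors unexpectedly.
    import requests

    TARGET = "https://api.example.test/v1/orders"      # placeholder; authorized targets only
    payloads = ["'", "../../etc/passwd", "A" * 4096, '{"qty": -1}']

    for p in payloads:
        r = requests.post(TARGET, data={"note": p}, timeout=5)
        if r.status_code >= 500:
            print(f"server error with payload {p!r} -> {r.status_code}")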

But automation will only go so far. A successful product needs to capture and distill the expertise and experience that a specialist consultant applies when interpreting the output of all those automated tasks. This is where advances in AI are accelerating the product creation process and transforming managed services businesses.

Off-the-shelf AI libraries and cloud services are allowing innovators to move from linear content creation modes (e.g., each threat requires a unique signature) and decades-old if-then-else logic to training classifier systems capable of identifying and labeling swathes of the problem space they are seeking to solve, and teaching systems to learn new responses directly from the actions the consultants  are already undertaking to solve their customers’ problems.
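
A minimal sketch of that classifier idea, using off-the-shelf scikit-learn components: previously triaged finding descriptions train a model that labels new findings by severity. The training data here is a toy stand-in for the consultant-generated history such a product would actually learn from.

    # Hedged sketch of the classifier idea: learn severity labels from previously triaged
    # finding descriptions, then label new ones. The training data is a toy stand-in.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    findings = ["SQL injection in login form", "verbose server banner",
                "hardcoded admin password", "missing cache-control header"]
    labels   = ["high", "low", "high", "low"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(findings, labels)

    # Label a new finding; on this toy data it should lean toward "high".
    print(model.predict(["password disclosure in admin login form"]))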

In my time as a CISO for organizations that often required security consulting expertise, I’ve engaged in reviewing the methodology that consultants will be applying to my systems. Lack of a detailed methodology will inevitably lead to inconsistent results and lack of repeatability, the death knell of compliance. When reviewing a proposed methodology, a CISO should also ask about the automation process framework and whether those automated tasks can be separated from consultant billing. This could possibly reduce overall job costs, but also prompts your consulting partners to accelerate an important services transition into a more versatile product.

For my former consulting brethren, take a critical look at the innovative services you are delivering. Stop playing the “art” card and instead focus on the detailed methodology that’ll promote repeatability and confidence in your service. From there, invest time in applying the resources of public cloud to bring automation, scalability, and AI to solving the given problem as a platform for all customers – past, present, and future.

-- Gunter Ollmann

First Published: SecurityWeek - May 21, 2019

Tuesday, April 30, 2019

To Reach SIEM’s Promise, Take a Lesson From World War II

With two of the largest public cloud providers having launched their cloud Security Information and Event Management (SIEM) products and an inevitability that the remainder of the top 5 cloud providers will launch their own permutations some time this year, 2019 is clearly the year of the cloud SIEM.

For an on-premises technology that has been cursed with a couple decades of over-promising, under-achieving, and eye-watering cost escalation, modernizing SIEM into a cloud native security technology is a watershed moment for the InfoSec community.

The promise of finally being able to analyze all the logs, intelligence, and security data of an enterprise in real-time opens the door to many great and obvious things. We can let the SIEM vendors shout about all the obvious defensive value cloud SIEM brings. Instead, I’d like to focus on a less obvious but arguably more valuable long-term contribution that a fully capable cloud SIEM brings to enterprise defense.

Assuming an enterprise invests in bringing all their network logs, system events, flow telemetry, and security events and alerts together into the SIEM, businesses will finally be able to track threats as they propagate in an environment. Most importantly, they’ll be able to easily identify and map the “hotspots” of penetration and compromise, and remedy accordingly.

A unified view will also allow analysts and security professionals to pinpoint the spots where compromises remain hidden from peering eyes. As enterprises strive to deploy and manage an arsenal of threat detection, configuration management, and incident response tools in increasingly dynamic environments, visibility and coverage wax and wane with each employee addition, wireless router hook-up, application installation, or SaaS business connection. Those gaps, whether temporary or permanent, tend to attract an unfair share of compromise and harm.
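
Once inventory and telemetry sit side by side, finding those gaps can begin with something as simple as the sketch below: assets that produce no events at all stand out as the blind spots. The inventory and counts are invented for illustration.

    # Simple sketch of the "cold spot" idea: with the asset inventory and event sources in
    # one place, assets that produce no telemetry stand out. Names and counts are illustrative.
    inventory = {"web-01", "web-02", "db-01", "hr-laptop-07", "plant-plc-03"}

    events_last_7d = {           # events ingested per asset over the window
        "web-01": 48210,
        "web-02": 46995,
        "db-01": 9120,
        "hr-laptop-07": 310,
    }

    cold_spots = sorted(inventory - set(events_last_7d))
    print("no telemetry received from:", cold_spots)   # the unmonitored gap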

In World War II, a gentleman by the name of Abraham Wald was a member of Columbia University’s Statistical Research Group (SRG). One problem SRG was tasked with was examining the distribution of damage to returning aircraft and advising on how to minimize bomber losses to enemy fire. A premise of the research was that the areas of bombers that were most damaged, and therefore most susceptible to flak, should be redesigned and made more robust. Wald noted that such a study was biased toward only the aircraft that survived their missions and that, if you were to assume damage was more uniformly distributed across all aircraft, those that returned had actually been hit in the less vulnerable parts. By mapping the damage done to the surviving aircraft, the “undamaged” areas represented the most vulnerable parts of the aircraft that didn’t survive to return.


Wald’s revelations and work were seminal in the early days of Operational Research – a discipline of applying advanced analytical methods to help make better decisions. I expect cloud SIEM and the integration of AI systems to usher Operational Research and its associated disciplines into the information security sector. Securing an enterprise is a highly complex and dynamic problem and, because Operational Research is focused on optimizing solutions for complex decision-making problems, it is well suited to finding solutions that balance the multi-faceted aspects of business continuity and risk.

As we’re in the early days for cloud SIEM, I’ve yet to see much in the area of employing native AI to address the cold-spots in enterprise threat visibility. The focus to date is applying AI to threat hunting and to automating the reconstruction of the kill chain associated with an in-progress attack, supplementing that visualization with related threat intelligence and historical data artifacts.

Putting on a forecasting hat, I expect much of the immediate adoption and growth of cloud SIEM will be driven by desire to realize the promises of on-premises SIEM, in particular, using supervised-learning systems to automate the detection and mitigation of the threats that have pestered security operations teams for twenty-plus years. Infusing SIEM natively on the cloud provider’s platform also creates end to end visibility into security related events inside a business’ environment and pieces in valuable intelligence from the cloud provider’s operations – thereby harnessing the “cloud effects” of collective intelligence and removing the classic requirement for a “patient zero” to initiate an informed response.

What I hope is, once engineering teams have matured those hunting and mitigation capabilities by weaving in AI decision systems and real-time data processing, the “science” of information security can finally come up for air and move forward.

Leveraging the inherent power and scale of public cloud for real-time analytics of enterprise security data at streaming rates means that we’re at the cusp of finally calculating the ROI of each security technology deployed inside an enterprise. That alone should have many CISOs and CFOs jumping for joy. With all the enterprise security data flowing to one place, the cloud SIEM also becomes the anchor for IT operations – such as tracking the “mean time between failures” (MTBF) of protected systems, providing robustness metrics for software assets and system updates, and surfacing the latent risks of the environments being monitored.

75 years may separate World War II from cloud SIEM, but we’re on the cusp of being able to apply the hard-earned learnings from Abraham Wald in our latest adversarial conflict – the cyberwar.

-- Gunter Ollmann

First Published: SecurityWeek - April 30, 2019

Tuesday, April 9, 2019

Get Ready for the First Wave of AI Malware

While viruses and malware have stubbornly stayed as a top-10 “things I lose sleep over as a CISO,” the overall threat has been steadily declining for a decade. Unfortunately, WannaCry, NotPetya, and an entourage of related self-propagating ransomware abruptly propelled malware back up the list and highlighted the risks brought by modern inter-networked business systems and the explosive growth of unmanaged devices.

The damage wrought by these autonomous (not yet AI-powered) threats should compel CISOs to contemplate the defenses to counter such a sophisticated adversary.


The threat of a HAL-9000 intelligence directing malware from afar is still the realm of fiction; so too is the prospect of an uber-elite hacker collective that has been digitized and shrunken down to an email-sized AI package filled with evil and rage. However, over the next two to three years, I see six economically viable and “low hanging fruit” uses for AI-infused malware – all focused on optimizing efficiency in harvesting valuable data, targeting specific users, and bypassing detection technologies.

  • Removing the reliance upon frequent C&C communications – Smart automation and basic logic processing could be employed to automatically navigate a compromised network, undertake non-repetitive and selective exploitation of desired target types and, upon identification and collection of desired data types, perform a one-off data push to a remote service controlled by the malware owner. While not terribly magical, such AI-powered capabilities would not only undermine all perimeter blacklist and enforcement technologies, but also sandboxing and behavioral analysis detection.
  • Use of data labeling and classification capabilities to dynamically identify and capture the most interesting or valuable data –  Organizations use these types of data classifiers and machine learning (ML) to label and protect valuable data assets. But attackers can exploit the same search efficiencies to find the most valuable business data being touched by real users and systems and to reduce the size of data files for stealthy exfiltration. This enables attackers to sidestep traffic anomaly detection technologies as well as common deception and honeypot solutions.
  • Use of cognitive and conversational AI to monitor local host email and chat traffic and to dynamically impersonate the user – The malware’s AI could insert new conversational content into email threads and ongoing chats with the objective of socially engineering other employees into disclosing secrets or prompting them to access malicious content. Since most email and chat security solutions focus on inbound and outbound content, inspection of internal communications is rare. Additionally, conversational AI is advancing quickly enough that socially engineering IT helpdesk and support staff into disclosing secrets or making temporary configuration changes becomes a high-probability outcome.
  • Use of speech-to-text AI to capture user and work environment secrets – Through a physical microphone, the AI component could convert all discussions within range of the compromised device to text. In some environments the AI may even be able to capture the audio of keystrokes on nearby systems and deduce which keys are being pressed. Such an approach also allows attackers to be more selective about which secrets to capture, further minimizing the volume of data that must be egressed from the business and reducing the odds of triggering network-based detection technologies.
  • Use of embedded cognitive AI in applications to selectively trigger malicious payloads – Since cognitive AI systems can not only recognize a specific face or voice but also estimate a person’s race, sex, and age, a malware author can be very specific about whom they choose to target. Such malware may only turn malicious for the CFO of the company, or may only manifest itself if the interactive user is a pre-teen female. Because the trigger mechanism is embedded within complex AI, it becomes almost impossible for automated or manual investigation processes to determine the criteria for initiating the malicious behaviors.
  • Use of AI to capture the behavioral characteristics and traits of system users – AI learning systems could observe the unique cadence, timbre, and characteristics of a user’s typing, mouse movements, vocabulary, misspellings, and so on, and create a portable “bio-profile” of that user. Such “bio-profiles” could then be replayed by attackers to bypass the current generation of advanced behavioral monitoring systems increasingly deployed in high-security zones.
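
To make the data-classification item above concrete, here is a minimal sketch of the kind of document classifier organizations use to label sensitive data – the same building block the bullet argues an attacker could repurpose to prioritize what to steal. The training snippets, labels, and choice of scikit-learn are my own illustrative assumptions, not a reference to any specific product.

    # Minimal "valuable data" classifier sketch (illustrative labels and training snippets).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny, made-up training set; a real deployment would train on a labeled corpus of its own documents.
    documents = [
        "quarterly revenue forecast and board presentation",
        "merger and acquisition due diligence summary",
        "cafeteria menu for next week",
        "office holiday party planning notes",
    ]
    labels = ["sensitive", "sensitive", "routine", "routine"]

    classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
    classifier.fit(documents, labels)

    # Label newly observed files; only "sensitive" hits would be prioritized for protection
    # (or, in the attack scenario above, for quiet exfiltration).
    print(classifier.predict(["draft acquisition term sheet"]))    # likely ['sensitive']
    print(classifier.predict(["parking lot maintenance notice"]))  # likely ['routine']

The point of the sketch is scale rather than sophistication: the same few lines that help a defender tag crown-jewel documents can run silently on a compromised host to shrink terabytes of candidate data down to the handful of files worth stealing.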

These AI capabilities are all commercially available today, and each can be embedded as code within malicious payloads – singly or in combination.

Because deep neural networks, cognitive AI, and trained machine-learning classifiers are incredibly complex to decipher, the trigger mechanism for malicious behaviors may be deeply buried and effectively impossible to uncover through conventional reverse engineering.

The baseline for defending against these attacks will lie in ensuring all parts of the organization are visible and continually monitored. In addition, CISOs need to invest in tooling that brings speed and automation to threat discovery through AI-powered detection and response.

As malware writers harness AI for cybercrime, the security industry must push forward with a new generation of dissection and detonation technologies to prepare for this coming wave. A couple of promising areas for implementing defensive AI include threat intelligence mining and autonomous response (more on this later).

-- Gunter Ollmann

First published: SecurityWeek - April 9, 2019

Wednesday, January 9, 2019

Hacker History III: Professional Hardware Hacker

Following on from my C64 hacking days, but in parallel to my BBS Hacking, this final part looks at my early hardware hacking and creation of a new class of meteorological research radar...

Ever since that first C64 and through the x86 years, I’d been hacking away – mostly software; initially bypassing copy-protection, then game cracks and cheats, followed by security bypasses and basic exploit development.

Before bug bounty programs were invented in the 2010’s, as early as 1998 I used to say the best way to learn and practice hacking skills was to target porn sites. The “theory” being that they were constantly under attack, tended to have the best security (yes, even better than the banks) and, if you were ever caught, the odds of actually ending up in court and defending your actions in front of a jury were essentially zero – and the folks that ran and built the sites would be the first to tell you that.

In the mid-to-late 1980’s, following France’s 1985 bombing and sinking of the Rainbow Warrior in New Zealand, if you wanted to learn to hack and not worry about repercussions – any system related to the French Government was within scope. It was in that period that war-dialing and exploit development really took off and, in my opinion, the professional hacker was born – at least in New Zealand it was. Through 1989-1991 I had the opportunity to apply those acquired skills in meaningful ways – but those tales are best not ever written down.

Digital Radar

Easily the most fun hardware hacking I’ve ever done or been involved with ended up being the basis for my post-graduate research and thesis. My mix of hardware hacking and industrial control experience set me up for an extraordinary project as part of that research and an eventual Master’s in Atmospheric Physics.

I was extremely lucky:
  1. The first MHz digitizer cards were only just hitting the market
  2. PC buses finally had enough speed to handle MHz digitizer cards
  3. Mass storage devices (i.e. hard drives) were finally reaching an affordable capacity/price
  4. My supervisor was the Dean of Physics and had oversight of all departments’ “unused budgets”
  5. Digital radar had yet to be built

My initial mission was to build the world’s first digital high-resolution vertically pointing radar and to use it to prove or disprove the “Seeder-feeder mechanism of orographic rainfall”.

Taking a commercial analogue X-band marine radar – a 25 kilowatt unit with a range of 50 miles and a resolution measured in tens of meters – and converting it to a digital radar with an over-sampled resolution of 3.25 cm out to a range of 10 km was the first challenge, but it was successfully delivered nevertheless. That first radar was mounted on the back of a 4x4 Toyota truck – which was great for getting to places no radar had been before. Pointing straight up was interesting – and served its purpose of capturing the Seeder-feeder mechanism in operation – but there was room for improvement.

Back at the (family) factory, flicking through pages of operating-specification tables for electric motors (remember – pre-Internet/pre-Google) and harnessing the power of MS-DOS-based AutoCAD, I spec'ed out and designed a mounting mechanism for making the radar scan the sky like a traditional meteorological radar – but one that could operate in 80 mph winds, at high altitude, in the rain. Taking a leaf out of my father’s design book – it was massively over-engineered ;-)

Home for many months - the mobile high resolution radar + attached caravan. Circa 1994.

This second radar was mounted on an old towable camper-van. It was funny because, while the radar would survive 80+ mph winds, a gust of 50+ mph would have simply blown over the camper-van (and probably sent it down the side of a hill or over a cliff). Anyhow, that arrangement (and the hacks it took to get it working) resulted in a few interesting scientific advances:
  • Tracking bumblebees. Back in 1994, while GPS was a thing, it didn’t have very good coverage in the southern hemisphere and, thanks to US military control and Selective Availability, its positioning resolution was very poor. So, in order to work out a precise longitude and latitude for the radar system, it was back to the ancient ways: tracking the sun. I had code that ran the radar in passive mode, scanned horizontally and vertically until it found that big microwave source in the sky, and tracked its movements – and from there determined the radar’s physical location (a toy sketch of this position-from-the-sun calculation follows this list). (Un)fortunately, through a mistake in my programming that left the radar emitting its 25 kW pulses, I found it could sometimes lock on to and track bright blips near ground level. Through some investigation of that poor coding, I realized I’d managed to build a radar tracking system for bumblebees – since bumblebees are comparable in size to the wavelength and the over-sampled bin size, they were highly reflective and drowned out the sun.
  • Weather inside valleys. The portability of the camper-van and the high resolution of the radar also meant that, for the first time ever, it was possible to monitor and scientifically measure weather phenomena within complex mountain valley systems. Old long-range radars, with resolutions measured in thousands of cubic meters per pixel, had only observed weather events above the mountains. Now it was possible to digitally observe weather events below that – inside valleys and between mountains – at bumblebee resolution.
  • Digital contrails. Another side-effect of the high resolution digital radar was its ability to measure water density of clouds even on sunny days. Sometimes those clouds were condensation trails from aircraft. So, with a little code modification, it became possible to identify contrails and follow their trails back to their root source in the sky – often a highly reflective aircraft – opening up new research paths into tracking stealth aircraft and cruise missiles.
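For the curious, the sun-tracking trick in the first bullet boils down to inverting the standard solar-position equations: measure the sun’s elevation at a few known UTC times, then search for the latitude/longitude that best reproduces those measurements. Below is a toy sketch of that idea using simplified declination and hour-angle formulas and a coarse grid search; the observation numbers are invented to roughly mimic a southern-hemisphere summer day and are not the original radar code.

    import math

    def solar_elevation(lat_deg, lon_deg, day_of_year, utc_hours):
        """Approximate solar elevation in degrees (ignores the equation of time and refraction)."""
        decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
        solar_time = utc_hours + lon_deg / 15.0            # crude local solar time
        hour_angle = math.radians(15.0 * (solar_time - 12.0))
        lat, dec = math.radians(lat_deg), math.radians(decl)
        sin_el = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(hour_angle)
        return math.degrees(math.asin(max(-1.0, min(1.0, sin_el))))

    # Invented observations: (day_of_year, utc_hours, measured_elevation_degrees)
    observations = [(40, 21.0, 37.0), (40, 23.0, 56.0), (41, 1.0, 61.0)]

    # Coarse grid search (0.5 degree steps) for the location that best explains the measurements.
    best = None
    for lat10 in range(-700, 701, 5):
        for lon10 in range(-1800, 1801, 5):
            lat, lon = lat10 / 10.0, lon10 / 10.0
            err = sum((solar_elevation(lat, lon, d, t) - el) ** 2 for d, t, el in observations)
            if best is None or err < best[0]:
                best = (err, lat, lon)

    print(f"Estimated position: lat {best[1]:.1f}, lon {best[2]:.1f}")

The original system had the easier half of the problem – the sun’s true position is known for any time and place – so once the radar could measure where the sun appeared, the location fell out of a fit much like this one.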
It was a fascinating scientific and hacking experience. If you’ve ever stood in a doorway during a heavy rainfall event and watched a curtain of heavier rainfall weave its way slowly down the road and wondered at the physics and meteorology behind it, here was a system that digitally captured that event from a few meters above the ground, past the clouds, through the melting layer, and up to 10 km in the air – and helped reset and calibrate the mathematical models still used today for weather forecasting and global climate modeling.

By the end of 1994 it was time to wrap up my thesis, leave New Zealand, head off on my Great OE, and look for full-time employment in some kind of professional capacity.


When I look back at what led me to a career in Information Security, the 1980's hacking of protected C64 games, the pre-Internet evolution of the BBS and its culture of building and collaboration, and the hardware hacking and construction of a technology that was game changing (for its day) – they're the three things (and time periods) that remind me of how I grew the skills and developed the experience to tackle any number of subsequent Internet security problems – i.e. hack my way through them. I think of it as a unique mix. When I meet other hackers whose passions likewise began in the 1980's or early 1990's, it's clear that everyone has their own equally exciting and unique journey – which makes it all the more interesting.

I hope the tale of my journey inspires you to tell your own story and, for those much newer to the scene, proves that us older hands probably didn't really have a plan on how we got to where we are either :-)

This is PART THREE of THREE.

PART ONE (C64 Hacking)  and PART TWO (BBS Hacking) are available to read too.

--Gunter


Tuesday, January 8, 2019

Hacker History II: The BBS Years

Post-C64 Hacking (in Part 1 of Hacker History)... now on to Part 2: The BBS Years

Late 1986 (a few months before I started my first non-newspaper-delivery and non-family-business job – working at a local supermarket) I launched my first bulletin board system (BBS). I can’t remember the software I was running at the time, but it had a single 14k dial-up line running on all the extra C64 equipment I’d been “gifted” by friends wanting faster/always-on access to my latest cheats and hacks.

The premise behind the BBS was two-fold: I wanted to learn something new (hacking together a workable and reliable BBS system in the mid-80’s was a difficult enough challenge), and I saw it as a time-saving distribution channel for my cheats/hacks – others could dial in and download them themselves, instead of me messing around with stacks of floppy discs etc.

At some point in 1986 I’d also saved enough money to buy an IBM PC AT clone – a whopping 12 MHz 80286 PC, complete with Turbo button and a 10 MB hard drive. I remember speccing out the PC with the manufacturer. They were stunned that a kid could afford his own PC AT, that he planned to keep it in his bedroom, and that he wanted an astounding 16k of video memory (“what do you need that for? Advanced ACAD?”)!

By 1989 the BBS had grown fairly large – a couple hundred regular members, several of whom paid monthly subscription fees – but the stack of C64’s powering the BBS was showing its age and, in the meantime, my main computing had moved down the PC path from 286 to 386 to a brand-spanking-new 486.

It was time to move on from C64 and go full-PC – both with the BBS and the hacks/cheats I was writing.

So in 1990, over the Summer/Christmas break from University, I set about shifting the BBS over to a (single) PC – running Remote Access, with multiple dial-in lines (14.4k for regular users and 28.8k for subscribers).


The dropping of C64 and move to fully-fledged x86 PC resulted in a few memorable times for me:
  • BBS’s are like pets. Owning and operating a BBS is a lot like looking after an oversized pet that eats everything in its path and has destructive leanings; they’re expensive and something is always going wrong. From the mid-80’s to mid-90’s (pre-“Internet”), having a BBS go down was maddening for all subscribers. Those subscribers would be great friends when things were running, or act like ungrateful modern-day teenagers being denied “screen time” if they couldn’t dial in for more than a couple of days. Keeping a BBS running meant constant tinkering under the covers – learning the intricacies of PC hardware architecture, x86 assembly, live patching, memory management, downtime management, backup/recovery, and “customer management”. The heady “good old days” of PC development.
  • International Connectivity. With me at University and too often referred to as the “student that knows more about computers than the campus IT team”, in 1991 I added Fidonet and Usenet support to my BBS. There had been a few BBS’s in New Zealand before mine to offer these newsgroups, but they were very limited (i.e. a small number of groups) because they relied on US dial-up for synching (which was damned expensive!). My solution was to use a spare modem in the back of a University lab PC to connect semi-permanently to my BBS. From there my BBS used the University’s “Internet” undersea cable connectivity to download and synch all the newsgroups. Technically I guess you could call it my first “backdoor” hacking experience – which ended circa 1993 after I was told to stop because (by some accounts) the BBS was, at peak, consuming a third of the entire country’s academic bandwidth.
  • First Security Disclosure. Setting up Remote Access (RA) was an ordeal. It was only a week later – Christmas Eve 1990 – that I publicly disclosed my first security vulnerability (with a self-developed patch): an authentication bypass in the system that controlled which games or zones a subscriber could access. I can’t remember how many bugs and vulnerabilities I found in RA, QEMM, MS-DOS, modem drivers, memory managers, and the games that ran on RA over those years. Most required some kind of assembly-instruction patch to fix.
  • Mailman and Sysop. Ever since those first BBS days in 1986, I’d felt that email (or Email, or E-Mail) would be the future of communications. The tools and skills needed for managing a reliable person-to-person or person-to-group communication system had to be built and learned – as did the management of trust and the application of security. Some BBS operators loved being Sysops (System Operators – i.e. Admins) because they could indulge their voyeuristic tendencies. I hated BBS’s and Sysops that operated that way, and it became an early mission of mine to figure out ways of better protecting subscriber messages.

That fumbling about and experimenting with PC hardware, MS-DOS, and Windows at home and with the Bulletin Board System, coupled with learning new systems at University – DEC Alpha, OpenVMS, Cray OS, and HP-UX – in the course of my studies, and the things I had to piece together and program at my parents’ factories (e.g. PLC’s, ICS’s, RTU’s, etc.), all combined to give me a unique perspective on operating systems and hardware hacking.

By the time I’d finished and submitted my post-grad research thesis, it was time to tear down the BBS, sell all my computers and peripherals, and leave New Zealand for my Great OE (Overseas Experience) at the end of 1994.

This is PART TWO of THREE.

PART ONE (C64 Hacking) was posted yesterday and PART THREE (Radar Hacking) will be on Wednesday.

Monday, January 7, 2019

Hacker History I: Getting Started as a Hacker

Curiosity is a wonderful thing; and the key ingredient to making a hacker. All the best hackers I know are not only deeply curious creatures but have a driving desire to share the knowledge they uncover. That curiosity and sharing underpins much of the hacker culture today – and is pretty core to people like me and those I trust the most.

Today I continue to get a kick out of mentoring other hackers, (crossed-fingers) upcoming InfoSec stars and, in a slightly different format, providing “virtual CISO” support to a handful of professionals (through my Ablative Security company) who have been thrown headfirst into protecting large enterprise or local government networks.

One of the first questions I get asked as I’m mentoring, virtual CISO’ing, or grabbing beers with a new batch of hacker friends at some conference or other is “how did you get started in computers and hacking?”.

Where did it all start?

The early days of home computing were a mixed bag for me in New Zealand. Before ever having my own computer, a bunch of friends and I would ditch our BMX’s daily in the front yard of any friend that had a Commodore VIC-20 or Amstrad CPC, throw a tape in the tape reader, and within 15 minutes be engrossed in a game – battling each other for the highest score. School days were often dominated by a room full of BBC Micros – where one of the most memorable early programs I wrote used a sensitive microphone to capture the sounds of bugs eating. I can still remember plotting the dying scream of a stick insect as it succumbed to science!


I remember well the first computer I actually owned – a brand-spanking new SpectraVideo SV-328 (complete with cassette tape reader) that Santa delivered for Christmas in 1983. I thought it was great, but quickly tired of it because there weren’t many games and all my friends were getting Commodore VIC-20 or Commodore 64 microcomputers – which had oh so many more games. So, come late 1984, I flogged my SpectraVideo and bought (second-hand) my first Commodore 64 (C64).

I can safely say that it was the C64 that lit my inner hacker spark. First off, the C64 had both a tape (then later diskette) capability and a games cartridge port. Secondly, New Zealand is a LONG way from where all the new games were being written and distributed. Thirdly, for a (pre)teen, a single cartridge game represented 3+ months of pocket money and daily newspaper deliveries.

These three constraints resulted in the following:
  • My first hardware hack. It was possible to solder a few wires and short-circuit the memory-flushing and reboot process of the C64 via the games cartridge mechanism to construct a “reset” button. This meant you could insert the game cartridge, load the game, hold down your cobbled-together reset button, remove the games cartridge, and use some C64 assembly language to manipulate the game (still in memory). From there you could add your own boot loader, save to tape or floppy, and create a back-up copy of the game.
  • “Back-up Copies” and Community. C64 games, while plentiful, were damned expensive and took a long time to get to New Zealand. So a bunch of friends, all with C64’s, would pool our money every few weeks to buy the latest game from the UK or US, thereafter creating “back-ups” for each other to hold on to – just in case the costly original ever broke. Obviously, those back-up copies needed to be regularly tested for integrity. Anyhow, that was the basis of South Auckland’s community of C64 hackers back in 1983-1985: a bunch of 10-14 year-olds sharing the latest C64 games.
  • Copy-protection Bypassing. Unsurprisingly, our bunch of kiwi hackers weren’t the first or only people to create unauthorized back-ups of games. As floppies replaced tapes and physical cassettes as the preferred media for C64 games, the software vendors started their never-ending quest of adding copy-protection to prevent unauthorized copying and back-ups. For me, this was when hacking became a passion. Here were companies of dozens, if not hundreds, of professional software developers trying to prevent us from backing up the programs we had purchased. For years we learned, developed, and shared techniques to bypass the protections; creating new tools for backing up, outright removal of onerous copy-protection, and shrinking bloated games to fit on single floppies.
  • Games Hacking. At some point, you literally have too many games and the thrill of the chase changes. Instead of looking forward to playing the latest game for dozens of hours or days and iteratively working through campaigns, I found myself turning to hacking the games themselves. The challenge became partially reversing each game, constructing new cheats and bypasses, and wrapping them up in a cool loader for a backed-up copy of the game. Here you could gain infinite lives, ammo, gold, or whatever, and quickly step through the game – seeing all it had to offer and doing so within an hour.
  • Hacking for Profit. Once some degree of reputation for bypassing copy-protection and creating reliable cheater apps got around, I found that my base of “friends” grew, and monetary transactions started to become more common. Like-minded souls wanted to buy hacks and tools to back up their latest game, and others wanted to bypass difficult game levels or creatures. So, for $5-10 I’d sell the latest cheat I had.
At some point in 1986 I recognized that I had a bunch of C64 equipment – multiple floppy drives, a few modems, even a new Commodore 64C – and more than enough to start a BBS.

This is PART ONE of THREE. 

PART TWO (BBS Hacking) is up and PART THREE (Radar Hacking) on Wednesday.