
Wednesday, January 9, 2019

Hacker History III: Professional Hardware Hacker

Following on from my C64 hacking days, but in parallel to my BBS Hacking, this final part looks at my early hardware hacking and creation of a new class of meteorological research radar...

Ever since that first C64 and through the x86 years, I’d been hacking away – mostly software; initially bypassing copy-protection, then game cracks and cheats, followed by security bypasses and basic exploit development.

Before bug bounty programs were invented in the 2010’s, as early as 1998 I used to say the best way to learn and practice hacking skills was to target porn sites. The “theory” was that they were constantly under attack, tended to have the best security (yes, even better than the banks) and, if you were ever caught, the chance of the case ever reaching a court and you having to defend your actions in front of a jury was effectively zero - and the folks who ran and built the sites would be the first to tell you that.

In the mid-to-late 1980’s, following France’s 1985 bombing and sinking of the Rainbow Warrior in New Zealand, if you wanted to learn to hack and not worry about repercussions, any system related to the French Government was considered within scope. It was in that period that war-dialing and exploit development really took off and, in my opinion, the professional hacker was born – at least in New Zealand. From 1989 to 1991 I had the opportunity to apply those acquired skills in meaningful ways – but those tales are best never written down.

Digital Radar

Easily the most fun hardware hacking I’ve ever done or been involved with ended up being the basis for my post-graduate research and thesis. My mix of hardware hacking and industrial control experience set me up for an extraordinary project as part of that research and my eventual Master’s in Atmospheric Physics.

I was extremely lucky:
  1. The first MHz digitizer cards were only just hitting the market
  2. PC buses finally had enough speed to handle MHz digitizer cards
  3. Mass storage devices (i.e. hard drives) were finally reaching an affordable capacity/price
  4. My supervisor was the Dean of Physics and had oversight of all departments’ “unused budgets”
  5. Digital radar had yet to be built

My initial mission was to build the world’s first digital high-resolution vertically pointing radar and to use it to prove or disprove the “Seeder-feeder mechanism of orographic rainfall”.

Taking a commercial analogue X-band marine radar – 25 kilowatts, a range of 50 miles, and a resolution measured in tens of meters – and converting it into a digital radar with an over-sampled resolution of 3.25 cm out to a range of 10 km was the start of the challenge – but it was successfully delivered nevertheless. That first radar was mounted on the back of a 4x4 Toyota truck – which was great for getting to places no radar had been before. Pointing straight up was interesting – and served its purpose of capturing the Seeder-feeder mechanism in operation – but there was room for improvement.

Back at the (family) factory, flicking through pages of operating specification tables for electric motors (remember – pre-Internet/pre-Google) and harnessing the power of MS-DOS based AutoCAD, I spec'ed out and designed a mounting mechanism for making the radar scan the sky like a traditional meteorological radar – but one that could operate in 80 mph winds, at high altitude, in the rain. Taking a leaf out of my father’s design book – it was massively over-engineered ;-)

Home for many months - the mobile high resolution radar + attached caravan. Circa 1994.

This second radar was mounted on an old towable camper-van. It was funny because, while the radar would survive 80+ mph winds, a gust of 50+ mph would have simply blown over the camper-van (and probably down the side of a hill or over a cliff). Anyhow, that arrangement (and the hacks it took to get working) resulted in a few interesting scientific advances:
  • Tracking bumblebees. Back in 1994, while GPS was a thing, it didn’t have very good coverage in the southern hemisphere and, thanks to the US military’s Selective Availability, its positioning resolution was very poor. So, in order to work out a precise longitude and latitude for the radar system, it was back to ancient ways and tracking the sun. I had code that ran the radar in passive mode, scanned horizontally and vertically until it found that big microwave in the sky, and tracked its movements – and from there determined the radar’s physical location (a rough sketch of the idea follows this list). (Un)fortunately, through a mistake in my programming that left the radar emitting its 25 kW load, I found it could sometimes lock on to and track bright blips near ground level. Through some investigation and poor coding, I’d managed to build a radar tracking system for bumblebees – since bumblebees were comparable in size to the wavelength and the over-sampled bin size, they were highly reflective and dominated the sun.
  • Weather inside valleys. The portability of the camper-van and the high resolution of the radar also meant that for the first time ever it was possible to monitor and scientifically measure the weather phenomenon within complex mountain valley systems. Old long-range radar, with resolutions measured in thousands of cubic meters per pixel, had only observed weather events above the mountains. Now it was possible to digitally observe weather events below that, inside valleys and between mountains, at bumblebee resolution.
  • Digital contrails. Another side-effect of the high resolution digital radar was its ability to measure water density of clouds even on sunny days. Sometimes those clouds were condensation trails from aircraft. So, with a little code modification, it became possible to identify contrails and follow their trails back to their root source in the sky – often a highly reflective aircraft – opening up new research paths into tracking stealth aircraft and cruise missiles.
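
For the curious, the sun-based geolocation boils down to inverting a solar-position model: given the UTC time and the elevation/azimuth at which the passive scan found the sun, search for the latitude/longitude that best reproduce those angles. A minimal modern sketch (not the original radar control code – the solar formulas are low-precision approximations and the search box and sample location are illustrative only):

```python
# A minimal sketch of sun-based geolocation, assuming a passive scan has given
# us the sun's elevation and azimuth at a known UTC time. The solar-position
# formulas are low-precision approximations (the equation of time is ignored).
import math
from datetime import datetime, timezone

def solar_alt_az(lat_deg, lon_deg, when_utc):
    """Approximate solar elevation and azimuth (degrees) at a location/time."""
    day = when_utc.timetuple().tm_yday
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10)))
    solar_hours = (when_utc.hour + when_utc.minute / 60.0
                   + when_utc.second / 3600.0 + lon_deg / 15.0) % 24.0
    hour_angle = math.radians(15.0 * (solar_hours - 12.0))
    lat, dec = math.radians(lat_deg), math.radians(decl)
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    alt = math.asin(max(-1.0, min(1.0, sin_alt)))
    cos_az = ((math.sin(dec) - math.sin(alt) * math.sin(lat))
              / (math.cos(alt) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if hour_angle > 0:                      # afternoon: sun has passed the meridian
        az = 2.0 * math.pi - az
    return math.degrees(alt), math.degrees(az)

def locate(measured_alt, measured_az, when_utc, step=0.1):
    """Brute-force the lat/lon that best reproduces the measured sun angles."""
    best, best_err = None, float("inf")
    lat = -60.0
    while lat <= 0.0:                       # coarse box over the southern hemisphere...
        lon = 160.0
        while lon <= 180.0:                 # ...and New Zealand longitudes
            alt, az = solar_alt_az(lat, lon, when_utc)
            err = (alt - measured_alt) ** 2 + (az - measured_az) ** 2
            if err < best_err:
                best, best_err = (lat, lon), err
            lon += step
        lat += step
    return best

when = datetime(1994, 1, 15, 22, 30, tzinfo=timezone.utc)
# Generate a fake "measurement" for a known spot so the example is self-consistent.
alt, az = solar_alt_az(-43.5, 172.6, when)
print(locate(alt, az, when))                # recovers roughly (-43.5, 172.6)
```
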
It was a fascinating scientific and hacking experience. If you’ve ever stood in a doorway during a heavy rainfall event and watched a curtain of heavier rainfall weave its way slowly down the road and wondered at the physics and meteorology behind it, here was a system that digitally captured that event from a few meters above the ground, past the clouds, through the melting layer, and up to 10 km in the air – and helped reset and calibrate the mathematical models still used today for weather forecasting and global climate modeling.

By the end of 1994 it was time to wrap up my thesis, leave New Zealand, head off on my Great OE, and look for full-time employment in some kind of professional capacity.


When I look back at what led me to a career in Information Security, the 1980's hacking of protected C64 games, the pre-Internet evolution of the BBS and its culture of building and collaboration, and the hardware hacking and construction of a technology that was game-changing (for its day) - they're the three things (and time periods) that remind me of how I grew the skills and developed the experience to tackle any number of subsequent Internet security problems - i.e. hack my way through them. I think of it as a unique mix. When I meet other hackers whose passions likewise began in the 1980's or early 1990's, it's clear that everyone has their own equally exciting and unique journey - which makes it all the more interesting.

I hope the tale of my journey inspires you to tell your own story and, for those much newer to the scene, proves that us older hands probably didn't really have a plan for how we got to where we are either :-)

This is PART THREE of THREE.

PART ONE (C64 Hacking)  and PART TWO (BBS Hacking) are available to read too.

--Gunter


Monday, June 18, 2012

Botnet Metrics: Learning from Meteorology

As ISPs continue to spin up their anti-botnet defenses and begin taking a more active role in dealing with the botnet menace, more and more interested parties are looking for statistics that help define both the scale of the threat and the success of the various tactics being deployed. But, as I discussed earlier in the year (see “Household Botnet Infections“), it’s not quite so easy to come up with accurate infection (and subsequent remediation) rates across multiple ISPs.

There are several initiatives trying to grapple with this problem at the moment – and a number of perspectives, observations and opinions have been offered. Obviously, if every ISP was using the same detection technology, in the same way, at the same time, it wouldn’t be such a difficult task. Unfortunately, that’s not the case.

One of the methods I’m particularly keen on is leveraging DNS observations to enumerate the start-up of conversations between the victim’s infected device and the bad guys’ command and control (C&C) servers (a toy sketch of the idea follows the list below). There are of course a number of pros & cons to the method – such as:
  • DNS monitoring is passive, scalable, and doesn’t require deep-packet inspection (DPI) to work [Positive],
  • Can differentiate between and monitor multiple botnets simultaneously from one location without alerting the bad guys [Positive],
  • Is limited to botnets whose malware makes use of domain names for locating and communicating with the bad guys’ C&C [Negative],
  • Not all DNS lookups for a C&C domain are for C&C purposes [Negative].
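
To make the DNS-observation idea concrete, here’s a toy sketch of the counting step – reducing a passive DNS query feed to distinct subscriber IPs per botnet family. The log format, domain list and botnet labels are invented placeholders, not any particular vendor’s system:

```python
# Toy passive-DNS sketch: count distinct subscriber IPs seen resolving known
# C&C domains, grouped by botnet family. The log format, domain list and
# botnet labels are invented placeholders, not any real feed or product.
from collections import defaultdict

CNC_DOMAINS = {                               # hypothetical reputation list
    "update-checker.example.com": "botnet-A",
    "img-cdn-sync.example.net":   "botnet-A",
    "stat-counter.example.org":   "botnet-B",
}

def parse_line(line):
    """Each log line: '<epoch> <client_ip> <queried_name>' (illustrative format)."""
    ts, client_ip, qname = line.strip().split()
    return float(ts), client_ip, qname.lower().rstrip(".")

def victims_by_botnet(log_lines):
    """Return {botnet: set of client IPs} for queries matching the C&C list."""
    victims = defaultdict(set)
    for line in log_lines:
        _, client_ip, qname = parse_line(line)
        family = CNC_DOMAINS.get(qname)
        if family:
            victims[family].add(client_ip)
    return victims

sample_log = [
    "1340000000.1 203.0.113.10 update-checker.example.com.",
    "1340000001.4 203.0.113.10 www.example.com.",
    "1340000002.9 203.0.113.77 stat-counter.example.org.",
]
for family, ips in sorted(victims_by_botnet(sample_log).items()):
    print(family, "-", len(ips), "suspected victim IP(s)")
```
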
On top of all this lies the added complexity that such observations are conducted at the IP address level (and things like DHCP churn can be troublesome). This isn’t really a problem for the ISP of course – since they can uniquely tie the IP address to a particular subscriber’s network at any time.
One problem that persists though is that a “subscriber’s network” is increasingly different from “a subscriber’s infected device”. For example, a subscriber may have a dozen IP-enabled devices operating behind their cable modem – and it’s practically impossible for an external observer to separate one infected device from another operating within the same small network without analyzing traffic with intrusive DPI-based systems.

Does that effectively mean that remote monitoring and enumeration of bot-infected devices isn’t going to yield the accurate statistics everyone wants? Short of being omnipresent, the answer has to be yes – but that shouldn’t stop us. What it means is that we need to use a combination of observation techniques to arrive at a “best estimate” of what’s going on.

In reality we already have a similarly complex monitoring (and prediction) system that everyone is happy with – one whose measurement problems parallel those faced with botnets – even if most people don’t understand it. When it comes to monitoring the botnet threat, the security industry could learn a great deal from atmospheric physicists and professional meteorologists. Let me explain…

When you look up the weather for yesterday, last week or last year for the 4th of July, you’ll be presented with numerous statistics – hours of sunshine, inches of rainfall, wind velocities, pollen counts, etc. – for a particular geographic region of interest. The numbers being presented to you are composite values of sparse measurements.

To arrive at the conclusion that 0.55 inches of rainfall fell in Atlanta yesterday and 0.38 inches fell in Washington DC over the same period, it’s important to note that there wasn’t any measurement device sitting between the sky and land that accurately measured that rainfall throughout those areas. Instead, a number of sparsely distributed land-based point observations of the liquid volume of the rain (e.g. rain gauges) were combined with a number of indirect methods (e.g. flow-meter gauges within major storm-drain systems) and broad “water effect” methods (e.g. radar), all used in concert to determine an average for the rainfall. This process was repeated throughout the country, using similar (but not necessarily identical) techniques, and an “average” was derived for the event.

That all sounds interesting, but what are the lessons we can take away from the last 50 years of modern meteorology? First and foremost, the use of accurate point measurements as a calibration tool for broad, indirect monitoring techniques.

For example, one of the most valuable and accurate tools modern meteorology uses for monitoring rainfall doesn’t even monitor rain, or even the liquid component of the droplets – instead it monitors electromagnetic reflectivity. Yes, you guessed it – it’s the radar of course! I could get all technical on the topic, but essentially a meteorological radar measures the reflection of energy waves from (partially) reflective objects in the sky. By picking the right wavelength, a radar gets better at detecting differently sized objects in the sky (e.g. planets, aircraft, water droplets, pollutant particulates, etc.). So, when it comes to measuring rain (well, lots of individual raindrops simultaneously, to be more precise), the radar system measures how much energy of a radar pulse was returned and at what time (the time component helps to determine distance).
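
In sketch form, the raw per-pulse measurement is just returned energy versus time, and the echo delay converts directly to range because the pulse has to travel out and back (the timing figure below is purely illustrative):

```python
# Sketch: a radar's raw per-pulse sample is returned energy versus time, and
# the echo delay converts directly to range (the pulse travels out and back).
# The 66.7 microsecond figure below is just an illustrative number.
C = 299_792_458.0                      # speed of light, m/s

def echo_range_m(round_trip_seconds):
    """Distance to the reflector: halve the out-and-back travel time."""
    return C * round_trip_seconds / 2.0

print(echo_range_m(66.7e-6))           # an echo after ~66.7 us is ~10 km away
```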

Now radar is a fantastic tool – but by itself it doesn’t measure rainfall. Without getting all mathematical on you, the larger an individual raindrop, the disproportionately bigger the energy reflection (it scales roughly with the sixth power of the drop diameter) – which means that a few slightly larger raindrops in the sky will completely skew the energy measurements of the radar. Meanwhile, the physical state of the “raindrop” also affects reflectivity. For example, a wet hailstone reflects much more energy than an all-liquid drop. There are a whole bunch of non-trivial artifacts of rainfall (check out things like “hail spikes” for example) that have to be accounted for if the radar observations are to be used to derive the rainfall at ground level.

In order to overcome much of this, point measurements at ground level are required to calibrate the overall radar observations. In the meteorological world there are two key technologies – rain gauges and disdrometers. Rain gauges measure the volume of water observed at a single point, while disdrometers measure the size and shape of the raindrops (or hail, or snow) that are falling. Disdrometers are pretty cool inventions really – and the last 15 years have seen some amazing advancements, but I digress…
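
Putting those two pieces together: the radar gives you reflectivity, while the gauges and disdrometers supply the ground truth used to tune the empirical reflectivity-to-rain-rate conversion. The classic form is a Z–R power law, Z = aR^b; here’s a minimal sketch using the textbook Marshall–Palmer coefficients – which, in practice, are exactly the parameters you’d re-fit against the gauge data:

```python
# Minimal sketch of a reflectivity-to-rain-rate conversion using an empirical
# Z-R power law, Z = a * R**b. The Marshall-Palmer coefficients (a=200, b=1.6)
# are the textbook defaults; in practice a and b are re-fitted against the
# rain-gauge / disdrometer ground truth for the local drop-size distribution.
def rain_rate_mm_per_hr(dbz, a=200.0, b=1.6):
    z = 10.0 ** (dbz / 10.0)           # dBZ -> linear reflectivity (mm^6 / m^3)
    return (z / a) ** (1.0 / b)        # invert Z = a * R**b for R (mm / hr)

for dbz in (20, 30, 40, 50):
    print(f"{dbz} dBZ ~ {rain_rate_mm_per_hr(dbz):.1f} mm/hr")
```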

How does this apply to Internet security and botnet metrics? From my perspective DNS observations are very similar to radar systems – they cover a lot of ground, at high resolution, but they measure artifacts of the threat. However, those artifacts can be measured to high precision and, when calibrated against sparse ground truths, they yield a highly economical and accurate measurement system.

In order to “calibrate” the system we need to use a number of point observations. By analogy, C&C sinkholes could be considered the rain gauges. Sinkholes provide accurate measurements of the victims of a specific botnet (albeit only a botnet C&C that has already been “defeated” or replaced) – and can be used to calibrate DNS observations across multiple ISPs. ISPs that each observe DNS slightly differently (e.g. using different static reputation systems, outdated blacklists, or advanced dynamic reputation systems) could use third-party sinkhole data for a specific botnet they are already capable of detecting via DNS as a calibration point – scaling and deriving victim populations for all the other botnets within their networks.
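
As a back-of-the-envelope sketch of that calibration step (all names and numbers below are invented): the one botnet you can measure both ways gives you a scaling factor, which you then apply to the DNS-only counts.

```python
# Back-of-the-envelope sinkhole calibration: for one botnet we have both a
# DNS-based victim count and an authoritative sinkhole count, giving a scaling
# factor that is applied to the DNS-only counts for the other botnets.
# All names and numbers are invented for illustration.

dns_observed = {          # distinct subscriber IPs seen querying each botnet's C&C domains
    "botnet-A": 1200,     # sinkholed, so ground truth is available
    "botnet-B": 800,
    "botnet-C": 150,
}
sinkhole_truth = {"botnet-A": 3000}   # victims in this ISP's address space per the sinkhole

def calibrated_estimates(observed, truth):
    """Scale DNS-derived counts by the true/observed ratio of the sinkholed botnet."""
    reference = next(iter(truth))                   # the botnet measured both ways
    scale = truth[reference] / observed[reference]  # how much the DNS view under-counts
    return {botnet: round(count * scale) for botnet, count in observed.items()}

print(calibrated_estimates(dns_observed, sinkhole_truth))
# -> {'botnet-A': 3000, 'botnet-B': 2000, 'botnet-C': 375}
```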

Within their own networks, ISPs could also employ limited-scale and highly targeted DPI systems to gauge a specific threat within a specific set of circumstances. This is a little analogous to the disdrometer in meteorology – determining the average size and shape of events at a specific point, without measuring the liquid content of the rainfall directly either. Limited DPI techniques could target a specific botnet’s traffic – concluding, for example, that the bot agent installs 5 additional malware packages upon installation, each of which in turn attempts to resolve 25 different domain names, yet all are part of the same botnet infection.

Going forward, as ISPs face increased pressure not only to alert their subscribers to botnet infections but to protect them, there will be increased pressure to disclose metrics relating to infection and remediation rates. Given a consumer choice of three local ISPs offering the same bandwidth for the same price per month, the tendency is to go for the provider that offers the most online protection. In the past that may have been measured by how many dollars’ worth of free security software they bundled in. Already people are looking for proof that one ISP is better than another at securing them – and this is where botnet metrics will become not only important, but also public.

Unfortunately it’s still early days for accurately measuring the botnet threat across multiple ISPs – but that will change. Meteorology is a considerably more complex problem, but meteorologists and atmospheric physicists have developed a number of systems and methods to derive numbers that the majority of us are more than happy with. There is a lot to be learned from the calibration techniques used and perfected in the meteorological field when it comes to deriving accurate and useful botnet metrics.