Breach Counts: We Don’t Know What We Don’t Know (Foghorn Leghorn Edition)

I asked a question last week on Twitter that provoked some interesting discussion and even a slap on the hand.  I thought my question was relatively simple and sensible:

Is it reasonable to wonder whether the breaches we know about – the ones where the adversary was caught, for lack of a better term – represent only a sample of the less well conceived and/or constructed attacks?

Seemed reasonable.  I asked the question because I use the various breach reports for statistics, and they of course report only on breaches that are discovered.  Think back to the hide-and-seek games of your childhood.  In my experience, the worst hiders were very likely the first caught.  I even mentioned the old Monty Python “How to Hide” sketch.  So it seemed sensible to ask if the reports were skewed toward the worst hiders of the attack population.  Or to quote that great security analyst and philosopher Foghorn Leghorn: “that breach is about as sharp as a bowling ball”.

I try very hard to stay away from fear, uncertainty and doubt (FUD), but my question pushed the FUD detector of Pete Lindstrom (@SpireSec), a security analyst and founder of Spire Security, past his tolerance point.  Pete’s contention was that raising the question without supporting evidence was a form of FUD, because I was raising a level of uncertainty and perhaps fear.  Point taken, but that does not stop my intellectual curiosity, because I still believe there is a bit of a Gordian Knot at play here.  I raised the question because I really do study the reports and use the presented statistics to support my points about Triumfant, not to spread FUD.  Foghorn would likely say that I am “more mixed up than a feather in a whirlwind”.  But the more I look at the statistics, the more I see unanswered questions that lie beyond the available evidence.

Which takes me back to the point of my original question: it is impossible to gauge the problem we collectively face in IT security because we do not know what we do not know.  And what we do not know is the proportion of detected to undetected breaches.  I raised a similar question in a blog post about malware detection rates two years ago and noted that an undetected attack is still an attack, even if we can’t count it.

The breach counts in the collective reports actually rely on two things: detection and disclosure.  The Verizon Business report is based on the Verizon caseload and cooperation from law enforcement agencies in several countries.  How many breaches are detected that do not show up in the Verizon report or the others?  How many breaches are not reported to the authorities?  There are regulatory mandates that require an organization to disclose breaches that involve the loss of certain types of data, but what happens when those regulatory lines are not crossed?  The Verizon report is, after all, called the Data Breach Investigations Report – it can only count breaches that were detected and investigated.

I go back to what we don’t know.  How many breaches go undiscovered?  How many breaches are discovered and not disclosed?  Are the detected and disclosed breaches representative of the broader population, or are they representative of the less well conceived and less well executed attacks?  Are the breaches in the reports 99% of the breaches? 50%? The tip of the proverbial iceberg?
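To make the sampling concern concrete, here is a toy back-of-the-envelope calculation.  The numbers are invented purely for illustration – nobody knows the real detection rates, which is the whole point – but they show how a gap in detection rates between clumsy and careful attacks would skew the observed sample.

```python
# Toy illustration of sampling bias (all numbers are invented assumptions).
clumsy_attacks, careful_attacks = 1_000, 1_000      # assumed true population
p_detect_clumsy, p_detect_careful = 0.80, 0.05      # assumed detection rates

detected_clumsy = clumsy_attacks * p_detect_clumsy
detected_careful = careful_attacks * p_detect_careful
clumsy_share = detected_clumsy / (detected_clumsy + detected_careful)

print(f"Clumsy attacks are 50% of the population, "
      f"but {clumsy_share:.0%} of the breaches we actually observe.")
```

Under those assumed rates, roughly 94% of the observed breaches would be the clumsy ones, even though they are only half of what is really out there.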

These questions have ramifications, particularly when we put them in the context of what evidence we do have.  For example, if the breaches we do discover are not exactly, as Foghorn would note, the sharpest knives in the drawer, what does it say about the ability of organizations to detect breaches when the average time from infiltration to detection is 173.5 days, as reported by the Trustwave report?

I agree with Pete – we need evidence.  Unfortunately, a reasonable conclusion that can be drawn from the collective evidence of these studies is that most organizations are not equipped to detect breaches.  Which of course adds to the conundrum: the evidence points to the fact that we will struggle to gather the proper evidence.

I don’t think the collective industry will answer these questions, because they are the uncomfortable detritus of years of placing so much emphasis on prevention.  The “2011: Year of the Breach” declarations have been an uncomfortable public realization for the industry and for organizations.  Even if we were better at detecting breaches, organizations would not self-disclose unless required to do so, for a variety of valid reasons.

So, FUD accusations aside, I stand by my question.  Of course, Foghorn would likely say that I “Got a mouth like a cannon. Always shooting it off”.

2011 – The Year We Recognized We Were Getting Breached

I just read the Symantec 2011 Internet Security Threat Report from cover to cover; it is a great report with a lot of great information.  But I have the same problem with it as I do with the reports from Verizon Business, IBM X-Force, Trustwave, and Mandiant (also all great reports with great information) and with several of the writers and general industry pundits.  In its report, Symantec calls 2011 “The Year of the Breach,” which is consistent with the other reports and other discussions in the broader market.

I am sorry, but I just hate that term.  Hate it.  The fact that the industry, in many cases begrudgingly, has had to publicly acknowledge that shields are being evaded and organizations are getting breached does not make 2011 a milestone for breaches.  Companies were getting breached in 2010 and prior, and will be breached in 2012 and beyond.  Breaches are not a 2011 thing, or some annual phase we entered, watched peak, and ultimately saw ebb away.

I will agree that 2011 is the year that the IT security industry came to terms with the fact that vendors that sold preventative software could no longer conveniently ignore that organizations were being breached.  Many of the statistics that have been a consistent theme of reports like the Verizon Business 2012 Data Breach Investigations Report seem to have suddenly found resonance.  Statistics such as the 173.5 days on average from breach to detection reported in the Trustwave 2012 Global Security Report became impossible to ignore.

Therefore, calling 2011 “The Year of the Breach” seems disingenuous to me.  In fairness, calling 2011

“The Year We Stated the Obvious” or

“The Year We Woke Up and Smelled the Coffee” or

“The Year We Got Our Heads Out of Our Collective… (filters engaging) the Sand” or

“The Year Vendors Realized They Could No Longer Sell Just Shields”

is clearly not as catchy.

For the record, this is not a criticism of the reports or the people that produce them.  These reports are hugely informative and I respect the efforts of those who produce them.  As I noted previously, the relentless presentation of the statistics in those reports was at least partially responsible for changing the predominant messaging in the market.  The hype could no longer shout down the reality presented by the numbers.  Notice I said messaging, because I think most pragmatic, right-thinking folks in IT security already knew about the breach situation.

Don’t get me wrong; I am happy that the market has decided to recognize that organizations are being breached.  I work for the company that I think offers the best and most innovative solution for detecting breaches at the point of infiltration.  And with one child about to leave for college, I am all about contributions to the Ivers Foundation.

Which leads me to another comment about these reports.  The reports – rightfully so – talk about detected breaches.  The reports indicate that a high percentage (>90%) of breaches are discovered by someone outside of the organization, indicating that organizations are not equipped to detect breaches.  One could make the case that the breaches that get detected do not represent the best and brightest, because they were detected.  Without descending into hype or FUD, what percentage of breaches do we really detect? All? Half? 10%?  It is a question worth asking, and as organizations begin to put breach detection capability in place, the resulting statistics will be interesting.

By the way – anyone want to place bets that 2012 will be “The Year of the Targeted Attack”?

I Smell a RAT – Breaking Into Your House to Prove a Point About Breaches

I am going to break into your house.  This is obviously a hypothetical, so there is no need to report me to the local authorities. But stay with me.

As I said, I am going to break into your house.  I can get in one of two ways.  I could use simple psychology to entice you into essentially opening the door and letting me in (social engineering), or I could use some basic information gathered about you to learn where you are vulnerable and force my way in (hacking).  I say force, but I am a pro, and in spite of your protections, if I want in I will get in and the amount of force used will be minimal.

Either way, I will break into your house undetected.

The funny thing is that once I am in, all of the money you have spent on technology to keep me out will be useless.  Not one of those technologies will be able to detect that I have evaded them and am now inside.  Since I am now inside, I could turn them all off, but why bother? They are no longer of consequence to me.  The thought of that makes me chuckle as I take steps to further obfuscate my presence from the inside.

If this scenario unsettles you, I am afraid it gets worse.  Because once I am inside and have had sufficient time to cover my tracks, I am, for all intents and purposes, undetectable.  That gives me full access to your home, and I will now live with you for as long as I choose.  What you see, I will see, and eventually I will know where everything in your home is, including your secret stuff.  Access to all of your accounts? Well, I was looking over your shoulder every time you logged into an account, so I have all of your IDs and passwords.  When you are not home I will even have time to rummage around the house at will.  Remember that valuable thing you thought you lost? I found it.

After a while, I do not even have to watch, because you decided that all of that stuff about not using the same User ID and password for your accounts was just a bunch of scare tactics.  Anyway, even if you got the slightest bit suspicious and changed anything, I am right there and will actually watch you change your password in real-time.

If I am found, odds say it will not be by you.  You would never find me on your own.  A business partner might notice something odd, or law enforcement may get a lead on my whereabouts, but you only have a one in sixteen chance of finding me.  Even if I am found out, my average stay is about six months.  Not much more to see here anyway.

And good luck getting rid of me.  Did you think I spent all of my time eating bon-bons on the couch watching Dr. Phil? Nope.  I created a little thing I like to call persistence.  There are little bits of me inside the house, so if you do sweep me out I can sweep right back in.  Like those little ants that come back under your sink.  I have also used your house to control other houses I occupy.  After all, yours was not the first.
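To translate the house metaphor back to the machine: persistence usually means footholds left in places the system executes automatically at startup.  Below is a minimal, Windows-only sketch – not Triumfant’s method, just a generic illustration under my own assumptions – of how a defender might enumerate a couple of the common autorun registry locations where footholds like this tend to hide.

```python
# Illustrative sketch only (Windows): list entries in two common "Run" keys,
# the kind of autorun locations where persistence mechanisms often lurk.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_autoruns():
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key missing or inaccessible
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            print(f"{path}\\{name} -> {value}")
            index += 1

if __name__ == "__main__":
    list_autoruns()
```

A real adversary has far more hiding places than two registry keys, of course – services, scheduled tasks, DLL hijacks – which is exactly why sweeping once rarely gets all the ants.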

I write this because when we do demos, we use Poison Ivy, a generally available Remote Administration Tool (RAT), to build a RAT Trojan and take over a machine.  I am surprised to learn that this is often the first time many people see exactly what it means when a hacker owns a system: the hacker can see the screen, capture everything that is typed, and access every application and file.  People hear about RAT tools, but in my experience they only have an academic understanding of what it means.  Showing them firsthand gives them a very jarring emotional understanding.  If you would like to see more, we have a short (5 minute) demo video that shows exactly that.

When (not if, kids) I access your system, bypass your defenses, and install a RAT on that machine, I am by definition now a malicious insider, a topic I will expand on further in my next post.  I am not after Grandma’s jewels, I am after the Crown Jewels.  I am after your intellectual property and your most sensitive data.  I am looking to steal things that can set your company back financially and strategically.  I am not on your couch – I am in your boardroom and in your labs and on your production line, and I am watching every keystroke your CFO makes.

And I am a malicious insider with staying power.  A recent statistic published in the Trustwave 2012 Global Security Report said that on average a breach lasts 173.5 days before being discovered.  Furthermore, studies show that organizations are not equipped to discover such breaches on their own.  The 2011 Verizon Business Data Breach Investigations Report states that breaches are discovered by the breached organization only 6% of the time.

I would tell you to wake up and smell the coffee but you are out of coffee and you should pick up a gallon of milk while you are out.  And those new curtains? Please.  I would also tell you to lock the door on the way out, but somehow that would be a bit too ironic.

The Worldwide Malware Signature Counter Lives On

At the bottom of the Triumfant home page is the Worldwide Malware Signature Counter, a fixture on the site since May of 2009.  The Counter was designed, according to the associated blog post marking its debut, “to graphically reinforce what many in the IT security industry believe is a growing problem that is being largely ignored – that the reliance on signatures to protect endpoints and servers against malicious attack is simply unsustainable”.  My only regret is that I never found a way to add the hard clunking sound from the timer on “24” for emphasis.

I periodically check the Counter against reported malware counts to ensure that it is an accurate and fair representation of the signature story.  Truthfully, the Counter was designed to err on the side of understatement to avoid the impression of FUD or sensationalism, so I normally have to correct it up instead of down. Yes, IT security folks, there are actually marketing people with restraint.  Go figure.

Last week I updated the Counter to track the signature counts reported by Symantec at the close of 2011.  Doing so led to a time of reflection on the genesis and objective of the Counter, and on the changes in the threat landscape between then and now.
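Mechanically, the Counter is just arithmetic.  Here is a minimal sketch of the kind of linear extrapolation a ticker like this could use; the baseline and annual totals below are placeholders for illustration, not the figures the actual Counter tracks.

```python
# Toy sketch of a signature ticker (placeholder numbers, not the real Counter).
import time

BASELINE_COUNT = 250_000_000            # assumed total at the start of the year
ANNUAL_NEW_SIGNATURES = 75_000_000      # assumed new signatures added per year
YEAR_START = time.mktime((2012, 1, 1, 0, 0, 0, 0, 0, -1))

def estimated_count(now=None):
    """Linearly extrapolate the running total from the assumed annual rate."""
    now = time.time() if now is None else now
    per_second = ANNUAL_NEW_SIGNATURES / (365 * 24 * 3600)
    return int(BASELINE_COUNT + per_second * (now - YEAR_START))

print(f"Estimated signature count: {estimated_count():,}")
```

At the assumed rate above, the ticker climbs by a bit more than two signatures every second – which is why the thing never stops moving.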

When Triumfant introduced the Counter three years ago, the world was still coming to terms with the evolution of malicious attacks and the hard realization that signature based protections could no longer be their primary shield. I would hope that there are very few serious members of the IT security community who need further convincing today.

Ironically, in the past three years the large vendors that owe their market presence largely to selling AV software have shifted their messaging.  Most dropped signature counts from their annual threat reports, in spite of such counts being a featured staple in years past.  I noted in one blog post that one such vendor dropped any mention of the word “signature” completely.  In an interesting twist, some of these vendors now use the large malware sample numbers to sell other products and solutions in their portfolio.  The flood of annual reports that are the precursor to the RSA Conference scream numbers such as 75 million and 250 million malware samples.  You have to feel for signature software: it made these vendors market leaders and it is now being dismissively kicked to the curb.  Think Sunset Boulevard for security software.

Meanwhile, the battle to protect sensitive data and intellectual property continues to rapidly evolve.  The first malware sprang to life when sensitive information moved from corporate systems to the first personal computers.  Those early attacks now seem laughable against the volume and sophistication of the threats we face today, and things will only get more complicated when you consider the flood of mobile devices and BYOD machines that will soon be accessing corporate systems.  Furthermore, the adversary has changed from basement hackers to well organized, well funded, and highly motivated groups driven by monetary gain or political motives.  The sum total of this evolution creates a gap between signature based protections and the current reality that grows faster than a simple signature counter can capture.

The Counter was a great visual to help people grasp the shift in the IT security world and helped bring attention to Triumfant’s ability to detect malware without signatures.  The Counter often provoked people to ask if we were a replacement for signature based protections, and we always said no.  Signature based protections are a logical brick in the wall around IT assets, but they are just a brick, not the entire wall.  I should add that the Counter now serves as a symbol for all solutions that base their detection capability on some form of prior knowledge, not just AV.
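For readers who have never seen detection without prior knowledge, the general idea is simple: instead of matching what is on a machine against a list of known-bad signatures, you compare the machine against a known-good baseline and investigate what changed.  The sketch below is a generic illustration of that concept under my own simplifying assumptions – it is not Triumfant’s engine, which does far more than hash a few files.

```python
# Generic illustration of signature-less change detection: diff a machine's
# current state against a known-good baseline (not Triumfant's actual method).
import hashlib

def snapshot(paths):
    """Map each file path to a SHA-256 digest of its contents (None if unreadable)."""
    state = {}
    for path in paths:
        try:
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
        except OSError:
            state[path] = None
    return state

def anomalies(baseline, current):
    """Return every path that appeared, disappeared, or changed since the baseline."""
    return sorted(path for path in set(baseline) | set(current)
                  if baseline.get(path) != current.get(path))

# Hypothetical usage: take a baseline today, compare against it later.
watched = ["/etc/hosts", "/etc/passwd"]          # example paths, not a real policy
baseline = snapshot(watched)
print(anomalies(baseline, snapshot(watched)))    # prints [] until something changes
```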

My next thoughts went to the Counter itself and its continued existence on the Triumfant site.  After some consideration, I decided to keep it around because, while the thinking of the IT security world has evolved, there are still plenty of business people outside of security who are coming to terms with the concept.  Truth be told, I have an emotional fondness for the Counter, and it is still a place for people to discover Triumfant and the uniqueness of our approach.

The Triumfant Worldwide Malware Signature Counter will live on.  Maybe I will finally add that sound effect.  Clunk…Clunk…Clunk…

The Evidence is Overwhelming: Organizations are not Prepared for the Inevitable Breach

84 and 173.5.

These are two significant statistics I picked up from the “Trustwave 2012 Global Security Report”.  I downloaded the report yesterday to review the analysis and the salient numbers from the study.  If you read this blog, you know I quote liberally from the Verizon Business “2011 Data Breach Investigations Report”.  I felt it prudent to see if the Trustwave report aligned with the VBDBIR and my frequent calls to wake up and smell the coffee about breaches.

The short answer is that they do and it does.  84 represents the percentage of breaches that were discovered by someone other than the breached organization.  This aligns with the VBDBIR number of 86%.  I noted that the 84% is actually up from the 2011 Trustwave Report number of 80%.

The numbers on self-detection are of interest to me for two reasons.  One, they scream that organizations are quite ill-equipped to detect a breach and the problem is getting worse.  They dump money in pursuit of the perfect shield, but are essentially unable to know when those shields fail.  And frankly, if I have to convince you that your shields are failing, you may be in the wrong profession.

Second, they underscore that when an organization gets breached, knowledge of the breach is not being contained within the organizational walls.  If a third party finds it, the secret is out.  Organizations cannot ignore the reputational risk that comes from a breach. And there is a coming storm of breach notification legislation that will make the problem even harder to ignore.

The real thunderbolt comes from the 173.5.  Because 173.5 is the average number of days between the initial infiltration and discovery for those attacks discovered by third parties.  173.5 represents the average amount of time that the adversary has free access to the systems and confidential information of the attacked organization.  The report notes that for companies with active discovery initiatives, this number goes down to 43 days.  Better, but no less unacceptable.

I will say it again (and again, and again).  Organizations are going to be breached.  Organizations are not equipped to detect breaches, and once a breach is detected, organizations are not equipped and prepared to respond.  Stop trying to build the perfect shield, step back, and address your exposure to breaches now.  Embrace the fact that you will be breached, and build a rapid detection and response capability.

Need to see something beyond statistics? Just today an article on the Wall Street Journal Online noted that Nortel had been breached without detection for over ten years.  The article discusses SEC breach notification guidelines and the impact on acquiring companies, the potential impact of the breach on Nortel equipment, and implies that the breaches may have contributed to the ultimate decline of the company.

The lesson is simple really.  The Trustwave report and the Nortel story show (again) that while you are busily trying to build that perfect shield, you may already have an adversary working undetected on your systems with relative impunity.

Prediction Regarding Data Breach Detection – Soon to be a Regulatory Requirement

In a post last week titled “Proposed EU Data Protection Fines Push the Lack of Breach Detection Capabilities into the Light”, I noted that the proposed European Union data protection rules would impose fines on organizations that did not report data breaches in a timely manner.  After that post I came across a story (“Companies worry about SEC’s advice to disclose cyberthreats”) in the San Jose Mercury News that noted that the SEC is continuing to amp up the pressure on companies to disclose breaches in their public filings.

I am not usually in the prediction business, but I noted in a blog post on February 25, 2010 titled “Intel Notes Attack on 10K – Are We Heading to Mandated Disclosure of Cyber Attacks?” that the SEC might soon mandate disclosure of breaches.  Given the increasingly digital economy, it would make sense that investors would consider breaches material information.

I am old enough to have seen patterns like this through the years.  Guidance by the SEC is one very public data breach away from becoming regulation, and those organizations that read the tea leaves and are prepared have a distinct advantage over those who ignore the signs and signals and are forced to play catch-up.

So I will break from form and make a prediction: by the New Year, we will either have or will be on the way to having multiple regulatory provisions that will require prompt (24 hour) notification of breaches.  Organizations can scramble then, or they can start looking at technologies (like Triumfant) that are focused on detecting the attacks that evade their protection software (shields).  Given that knowing when (again, the IF ship has sailed) you have been breached is critical information that every organization should want and have anyway, this is not the worst initiative ever catalyzed by regulatory mandate.

Why not beat the rush?

The American Airlines Phishing Attack – Front Row Seat to the Psychology of an Attack

Today I came face to face with a phishing attack and was able to watch firsthand as the attack worked on the human element of IT security.  This morning I was contacted by a friend who had received an email that confirmed the purchase of a flight on American Airlines.  The friend was now convinced that a credit card had been compromised and that immediate steps were necessary.

Savvy IT security guy that I am, I immediately smelled a rat and asked that my friend (who lives close by) bring the PC with the email to me.  After all, I did not want potentially malicious stuff on my machine.

Sure enough, everything about the email spoke of fraud.  The appearance and format of the email was not even close to looking like a professional email from a large company that does lots of business online.  The email address was suspect, and having been on an airplane or two (or a thousand), I noted that the flight number did not even come close to fitting the American Airlines flight numbering system.  Lastly, there was the ubiquitous .zip file attached, just waiting to be clicked.  An example of the email can be found on the American web site here.
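Each of those red flags is mechanical enough that you could check for it in code.  Here is a toy sketch of that kind of heuristic triage; the sender domain, the flight-number rule, and the sample values are my own assumptions for illustration, not American’s actual rules or the actual email.

```python
# Toy phishing triage (illustration only; the rules and values are assumptions).
import re

def phishing_red_flags(sender, body, attachments):
    flags = []
    if not sender.lower().endswith("@aa.com"):            # assumed legitimate domain
        flags.append(f"sender domain looks wrong: {sender}")
    match = re.search(r"\bflight\s*#?\s*(\d+)", body, re.IGNORECASE)
    if match and len(match.group(1)) > 4:                  # assume real flight numbers are 1-4 digits
        flags.append(f"flight number does not fit the numbering scheme: {match.group(1)}")
    if any(name.lower().endswith(".zip") for name in attachments):
        flags.append("unsolicited .zip attachment")
    return flags

# Hypothetical example resembling the email described above.
print(phishing_red_flags("confirmation@airlines-tickets.example.com",
                         "Your order is confirmed. Flight #73925 departs at 10:30 AM.",
                         ["AA_ticket.zip"]))
```

None of these checks would survive a determined attacker for long, which is the real point: heuristics help, but the human in front of the inbox is still the control that matters.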

The interesting study was the reaction of my friend to all of this.  I have had a credit card stolen, so I knew it was not the end of the world.  I also knew that the credit card companies actually handle fraud pretty well, so every second did not really count.  My friend was very nervous about the credit card being used to buy all manner of unseemly things, all the while laying waste to credit ratings.

But most of all, I noted that the .zip file hung like a ball of yarn in front of an over-caffeinated kitten.  My friend so wanted to click on that file.  The psychological pull was palpable.

I walked my friend through the process of recognizing such an attack, and went to the American Airlines web page to demonstrate that the flight number on the email did not exist.  In fact, it was a digit longer than the field on the site for checking flight status.  Next I listened as my friend called American, and then the credit card company.  Both verified that no transaction had occurred and that this was part of a wide-reaching scheme.  The American agent actually spent a lot of time walking my friend through the phishing concept at a high level and provided steps on how to dispose of the email without releasing the malware.  I was impressed.

I had several takeaways from the experience.  First, while the attack seemed amateurish and hackneyed to me, I was taken by how quickly my friend swallowed the hook and was prepared to react.  The simple psychology involved was brutally effective, and I saw why such attacks succeed.  If a wide enough net is cast, someone will react the way the bad guys want.

Second, it reinforced the critical nature of the human element in IT security.  My friend is bright, educated, and computer savvy.  Yet that same person immediately and kinetically reacted to what was a cut-rate phishing attack.  People will always be the X-factor in IT security whether it be opening .zip files, shutting off their AV software, or gleefully inserting USB devices from any and every source.

Lastly, the experience screamed the need for Rapid Detection and Response, because in spite of shields and protections, the human factor can be leveraged to bypass or evade those defenses.  Stuff gets through, and in front of me was a simple example of how.

I have to go, I just received another email from another friend who says he just got a confirmation for a flight to Atlanta he did not buy.  Seriously.