Breach Counts: We Don’t Know What We Don’t Know (Foghorn Leghorn Edition)

I asked a question last week on Twitter that provoked some interesting discussion and even a slap on the hand.  I thought my question was relatively simple and sensible:

Is it reasonable to wonder whether the breaches we know about – those where the adversary was caught, for lack of a better term – represent only a sample of the less well conceived and/or constructed attacks?

Seemed reasonable.  I asked the question because I use the various breach reports for statistics, and they of course report only on breaches that are discovered.  Think back to the hide-and-seek of your childhood.  In my experience, the worst hiders were very likely the first caught.  I even mentioned the old Monty Python “How to Hide” sketch.  So it seemed sensible to ask if the reports were skewed toward the worst hiders of the attack population.  Or to quote that great security analyst and philosopher Foghorn Leghorn: “that breach is about as sharp as a bowling ball”.

I try very hard to stay away from fear, uncertainty and doubt (FUD), but my question pushed the FUD detector of Pete Lindstrom (@SpireSec), a security analyst and founder of Spire Security, past his tolerance point.  Pete’s contention was that raising the question without supporting evidence was a form of FUD, because I was raising a level of uncertainty and perhaps fear.  Point taken, but that does not stop my intellectual curiosity, because I still believe there is a bit of a Gordian Knot at play here.  I raised the question because I really study the reports and use the presented statistics to support my points about Triumfant, so I am not spreading FUD.  Foghorn would likely say that I am “more mixed up than a feather in a whirlwind”.  But the more I look at the statistics, the more I see unanswered questions that lie beyond the available evidence.

Which takes me back to the point of my original question: it is impossible to gauge the problem we collectively face in IT security because we do not know what we do not know.  And what we do not know is the proportion of detected to undetected breaches.  I raised a similar question in a blog post about malware detection rates two years ago and noted that an undetected attack is still an attack, even if we can’t count it.

The breach counts in the collective reports actually rely on two things: detection and disclosure.  The Verizon Business report is based on the Verizon caseload and cooperation from law enforcement agencies in several countries.  How many breaches are detected that do not show up in the Verizon report or the others?  How many breaches are not reported to the authorities?  There are regulatory mandates that require an organization to disclose breaches that involve the loss of certain types of data, but what happens when those regulatory lines are not crossed?  The Verizon report is, after all, called the Data Breach Investigations Report – it can only count breaches that were detected, investigated, and shared.

I go back to what we don’t know.  How many breaches go undiscovered?  How many breaches are discovered and not disclosed?  Are the detected and disclosed breaches representative of the broader population or are they representative of the less well written and less well executed breaches? Are the breaches in the report 99% of the breaches? 50%? The tip of the proverbial iceberg?
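Just to make the uncertainty concrete, here is a back-of-the-envelope sketch.  The numbers are entirely hypothetical (none of them come from the reports), but they show how wildly the implied total swings depending on the detection rate we assume:

```python
# Hypothetical illustration: the published reports can only count detected
# breaches, so the implied true total depends entirely on the detection rate
# we assume, a number none of the reports can tell us.

reported_breaches = 1000  # made-up count of detected-and-disclosed breaches

for assumed_detection_rate in (0.99, 0.50, 0.10):
    implied_total = reported_breaches / assumed_detection_rate
    undetected = implied_total - reported_breaches
    print(f"At a {assumed_detection_rate:.0%} detection rate, {reported_breaches} "
          f"reported breaches imply roughly {implied_total:,.0f} breaches in total, "
          f"leaving about {undetected:,.0f} we never saw.")
```

The point is not the made-up numbers; it is that the answer moves by an order of magnitude based on an assumption for which we currently have no evidence.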

These questions have ramifications, particularly when we put them in the context of what evidence we do have.  For example, if it turns out that the discovered breaches are not exactly, as Foghorn would note, the sharpest knives in the drawer, what does it say about the ability of organizations to detect breaches when the average time from infiltration to detection is 173.5 days, as reported by the Trustwave report?

I agree with Pete – we need evidence.  Unfortunately, a reasonable conclusion that can be drawn from the collective evidence of these studies is that most organizations are not equipped to detect breaches.  Which of course adds to the conundrum: the evidence points to the fact that we will struggle to gather the proper evidence.

I don’t think the collective industry will answer these questions, because they are the uncomfortable detritus of years of placing so much emphasis on prevention.  The “2011: Year of the Breach” declarations have been an uncomfortable public realization for the industry and for organizations.  Even if we were better at detecting breaches, organizations would not self-disclose unless required to do so, for a variety of reasons they consider valid.

So, FUD accusations aside, I stand by my question.  Of course, Foghorn would likely say that I “Got a mouth like a cannon. Always shooting it off”.

2011 – The Year We Recognized We Were Getting Breached

I just read the Symantec 2011 Internet Security Threat Report from cover to cover – a great report with a lot of great information.  But I have the same problem with this report as I do with the ones from Verizon Business, IBM X-Force, Trustwave, and Mandiant (also all great reports with great information) and with several of the writers and general industry pundits.  In their report, Symantec calls 2011 “The Year of the Breach,” which is consistent with the other reports and other discussions in the broader market.

I am sorry, but I just hate that term.  Hate it.  The fact that the industry, in many cases begrudgingly, has had to publicly acknowledge that shields are being evaded and organizations are getting breached does not make 2011 a milestone for breaches.  Companies were getting breached in 2010 and prior, and will be breached in 2012 and beyond.  Breaches are not a 2011 thing, or some annual phase we entered, watched peak, and then saw ebb away.

I will agree that 2011 is the year the IT security industry came to terms with the fact that vendors selling preventative software could no longer conveniently ignore that organizations were being breached.  Many of the statistics that have been a consistent theme of reports like the Verizon Business 2012 Data Breach Investigations Report seem to have suddenly found resonance.  Statistics such as the 173.5 days on average from breach to detection reported in the Trustwave 2012 Global Security Report became impossible to ignore.

Therefore, calling 2011 “The Year of the Breach” seems disingenuous to me.  In fairness, calling 2011

“The Year We Stated the Obvious” or

“The Year We Woke Up and Smelled the Coffee” or

“The Year We Got Our Heads Out of Our Collective… (filters engaging) the Sand” or

“The Year Vendors Realized They Could No Longer Sell Just Shields”

is clearly not as catchy.

For the record, this is not a criticism of the reports or the people that produce them.  These reports are hugely informative and I respect the efforts of those who produce them.  As I noted previously, the relentless presentation of the statistics in those reports was at least partially responsible for changing the predominant messaging in the market.  The hype could no longer shout down the reality presented by the numbers.  Notice I said messaging, because I think most pragmatic, right-thinking folks in IT security already knew about the breach situation.

Don’t get me wrong; I am happy that the market has decided to recognize that organizations are being breached.  I work for the company that I think offers the best and most innovative solution for detecting breaches at the point of infiltration.  And with one child about to leave for college, I am all about contributions to the Ivers Foundation.

Which leads me to another comment about these reports.  The reports – rightfully so – talk about detected breaches.  The reports indicate that a high percentage (>90%) of breaches are discovered by someone outside of the organization, indicating that organizations are not equipped to detect breaches.  One could make the case that the breaches that get detected do not represent the best and brightest because they were detected.  Without dissolving into hype or FUD, what percentage of breaches do we really detect? All? Half? 10%?  It is a question worth asking, and as organizations begin to put breach detection capability in place, the resulting statistics will be interesting.

By the way – anyone want to place bets that 2012 will be “The Year of the Targeted Attack”?

Detection is the Horse, Investigation is the Cart – Use in That Order

I received some interesting responses to last week’s post (Incident Detection, Then Incident Response), so let me try to answer them all collectively.

No, my post was not a knock against incident response (IR) or forensics tools.  My point is that we are getting things out of order: it is about detection first.  Better analysis? Good.  Better response? Good.  But it all starts with breach detection.  In fact, if we had better breach detection, organizations would actually get more value out of their IR/forensics tools.

The inability of organizations to detect breaches is easily explained.  The picture below is my attempt to illustrate what I call the Breach Detection Gap.  This gap exists between the numerous layers of prevention solutions and IR/forensics tools, leaving organizations unable to detect breaches at the point of infiltration.

The IT security market has been fixated – technically and emotionally – on prevention.  Hence the numerous “usual suspects” on the left side of the breach.  I think my position is clear (crystal) that a prevention-centric strategy is doomed to failure.  Tradecraft relentlessly and rapidly evolves to evade any gains in prevention, and targeted attacks and the Advanced Persistent Threat are engineered to evade the specific defenses deployed by their target.

IR and forensics tools provide deep insight and valuable analysis to the breach investigation process, but are only brought to bear after the breach is detected.  Unfortunately, this is where most organizations spend the meager budget slice that is set aside for post-infiltration capabilities.

The Breach Detection Gap is the critical exposure between prevention tools and IR/forensics tools that leaves organizations without the means necessary to detect breaches in real time.  Obviously, without detection there can be no timely response.  Which was the point of last week’s post: re-packaging IR tools as the solution for breach detection problems is not the answer.  The answer must start with faster and more accurate detection.

Someone also asked why I don’t name names.  I try to write this blog to stimulate thought, and while I unashamedly say where Triumfant solves specific issues, I try very hard to keep this from being an ongoing advertisement.  I also have never believed that there is any value in directly speaking in a negative manner about any other vendor.  There are some good IR/forensics tools in the market that are very hot right now, and when products get hot, the market begins to act strangely around them.  My post was not a knock on those products, but on the efforts I see in the market to position those tools plus professional services as the solution to the Breach Detection Gap.  Make no mistake, the organizations around these hot products and even the vendors behind these products see this as a chance to sell professional services projects to hunt down breaches.  I will leave it to you to figure out who those vendors are, but I think in most cases the answer will be easily discerned if organizations resist the hype.

What I did not say in last week’s post is that Triumfant is positioned to detect breaches in real time.  There are ample posts that address that directly, as well as a new whitepaper on our site, so I won’t go into details here.  I will say that while heuristic, behavioral, and IPS/HIPS approaches are also being directed at the problem, I think that Triumfant’s use of change detection and the analysis of change in the context of the host machine population is uniquely suited to the role of breach detection.  You get rapid detection (real time), and within minutes we provide detailed information to help formulate an informed response and custom-build a remediation to stop the attack and repair the machine.  That is rapid detection and response.
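For readers who want a feel for what change detection in the context of a machine population can look like, here is a minimal sketch.  To be clear, this is illustrative only and not the Triumfant implementation (those details live in the earlier posts and the whitepaper).  It simply assumes each host reports a snapshot of attributes (files, registry keys, services, and so on) and flags new changes that are rare across the rest of the population:

```python
# Illustrative sketch only, not the Triumfant implementation.  Each host
# reports a snapshot: the set of attributes (files, registry keys, services,
# etc.) currently present on the machine.

from collections import Counter

def detect_anomalous_changes(baseline, current, population_snapshots,
                             rare_threshold=0.05):
    """Return attributes that appeared on this host but are rare in the population."""
    new_attributes = current - baseline          # what changed on this host
    population_size = len(population_snapshots)
    prevalence = Counter()
    for snapshot in population_snapshots:
        prevalence.update(new_attributes & snapshot)

    # A change present on almost every machine (a patch, for example) is
    # probably benign; a change present almost nowhere else is worth a look.
    return {attr for attr in new_attributes
            if prevalence[attr] / population_size < rare_threshold}

# Toy usage: three healthy hosts, and one host that picked up an unfamiliar DLL
# alongside a patch that the whole population received.
population = [
    {"svchost.exe", "patch_kb123.dll"},
    {"svchost.exe", "patch_kb123.dll"},
    {"svchost.exe", "patch_kb123.dll"},
]
baseline = {"svchost.exe"}
current = {"svchost.exe", "patch_kb123.dll", "dropper_x.dll"}

print(detect_anomalous_changes(baseline, current, population))
# prints {'dropper_x.dll'}: the patch is common across the population, the dropper is not
```

A real system obviously does far more (scoring, correlation, building the remediation), but the core idea that context across the machine population separates routine change from suspicious change survives even in a toy like this.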

And while Triumfant provides a wealth of IR/forensics data, we fully endorse the use of IR/forensics tools to provide the full range of post-breach investigative work.

But it all starts with detection.

Incident Detection, Then Incident Response

There seems to be an interesting and, I believe unfortunate, trend emerging in IT security:  Incident Response (IR) and Forensics tools are being wrapped in professional services and being sold as the solution to the breach detection problem. While I am happy that there is growing understanding that there is a breach detection problem, the reaction to that recognition is disappointing and misses the mark.

I think the point is obvious and is right there in the name “Incident Response”.  Response is not detection.  It is a step after detection – 1. Detect the problem. 2. Analyze the problem. 3. Fix the problem.  You could group #2 and #3 as respond, but they still follow detect.

You see, I thought detection was the issue.  While coming up with faster and more efficient ways to respond is laudable, I did not think what we needed was a better response to breaches that go undetected for an average of 173.5 days (Trustwave report).  Just to make sure I was not missing something, I reviewed all of the excellent breach investigations and reports (Verizon Business, Trustwave, IBM X-Force, and Mandiant).  While some note the time from detection to containment, it is certainly not the focus.  The consistent message I take from my reading is that organizations are getting breached and are not prepared to detect those breaches.

Unfortunately, there are several organizations making hay by selling professional services engagements under the umbrella of incident response.  The IT security market has a long history of seeing a success and rushing to copy it.  This is one of those cases.  Then marketing kicks in, and the opportunity for the market to take constructive steps forward is squelched by vendors rushing toward the next pot of gold and organizations being swept into the hype.  Then these same reports will come out next year and there will be collective head scratching as to why the numbers have not improved.

The winner is the adversary, who is quite fine with 173.5 days of undetected access to organizational networks.

A simple analogy is firefighting.  Firefighters diligently and continuously train to better respond to a fire when called.  There are constant technological breakthroughs in equipment that also help them respond to a fire when called.  All of that training and equipment is put into use when they are called (the fire is detected).  Firefighters are not responsible for detection, they are all about the response. And while I am not a firefighter, my guess is that firefighters would tell you that the sooner the fire is detected, the better their response.  I would also guess that rapid detection is a key component to reducing loss.  Having a better, more expensive fire investigator will not reduce loss.

The first step to solving the breach detection problem is deploying tools that rapidly detect breaches at the point of infiltration.  Studies prove that prevention tools cannot provide that detection, and IR/Forensic tools are not built for detection.  Detection must be addressed first.  Then you can deploy all of these marvelous response offerings.

Another explanation is that organizations have twisted themselves into a really unfortunate Gordian knot.  Maybe they are just beginning to understand the problem, but have reconciled themselves to taking action if and when they are breached.  This is not a good strategy, because the statistics say it is likely they already have been breached but simply don’t know it yet, because they lack the tools to detect breaches.  There is no more “if”, and the “when” has likely already happened.  That is not FUD; that is what the statistics say.  Once a breach is detected – the statistics say that 92% of those breaches will be detected by a third party and not the breached organization – they will spend enormous amounts of money to have someone come in, do lots of expensive analysis, and make recommendations that they will likely ignore.  The organization of course must still deal with the financial, regulatory, and reputational effects of the 173.5 days the adversary had access to their confidential data and intellectual property.

To paraphrase a quote from Churchill I have used before, people frequently stumble over the truth; unfortunately, they often pick themselves up and carry on as if nothing happened.  I fear this is one of those collective moments when organizations have stumbled onto the truth and will not be the better for it.

The Evidence is Overwhelming: Organizations are not Prepared for the Inevitable Breach

84 and 173.5.

These are two significant statistics I picked up from the “Trustwave 2012 Global Security Report”.  I downloaded the report yesterday to review the analysis and the salient numbers from the study.  If you read this blog, you know I quote liberally from the Verizon Business “2011 Data Breach Investigations Report”.  I felt it prudent to see if the Trustwave report aligned with the VBDBIR and my frequent calls to wake up and smell the coffee about breaches.

The short answer is that it does, on both counts.  84 represents the percentage of breaches that were discovered by someone other than the breached organization.  This aligns with the VBDBIR number of 86%.  I noted that the 84% is actually up from the 2011 Trustwave report number of 80%.

The numbers on self-detection are of interest to me for two reasons.  One, they scream that organizations are quite ill-equipped to detect a breach and the problem is getting worse.  They dump money in pursuit of the perfect shield, but are essentially unable to know when those shields fail.  And frankly, if I have to convince you that your shields are failing, you may be in the wrong profession.

Second, they underscore that when an organization gets breached, knowledge of the breach is not being contained within the organizational walls.  If a third party finds it, the secret is out.  Organizations cannot ignore the reputational risk that comes from a breach. And there is a coming storm of breach notification legislation that will make the problem even harder to ignore.

The real thunderbolt comes from the 173.5, because 173.5 is the average number of days between the initial infiltration and discovery for those attacks discovered by third parties.  It represents the average amount of time that the adversary has free access to the systems and confidential information of the attacked organization.  The report notes that for companies with active discovery initiatives, this number goes down to 43 days.  Better, but still unacceptable.

I will say it again (and again, and again).  Organizations are going to be breached.  Organizations are not equipped to detect breaches, and once a breach is detected, organizations are not equipped and prepared to respond.  Stop trying to build the perfect shield, step back, and address your exposure to breaches now.  Embrace the fact that you will be breached, and build a rapid detection and response capability.

Need to see something beyond statistics? Just today an article on the Wall Street Journal Online noted that Nortel had been breached without detection for over ten years.  The article discusses SEC breach notification guidelines and the impact on acquiring companies, the potential impact of the breach on Nortel equipment, and implies that the breaches may have contributed to the ultimate decline of the company.

The lesson is simple really.  The Trustwave report and the Nortel story show (again) that while you are busily trying to build that perfect shield, you may already have an adversary working undetected on your systems with relative impunity.

Targeted Attacks Versus Advanced Persistent Threat – Pragmatic Versus Dogmatic

In some circles of IT security, debating the exact definition of what constitutes an Advanced Persistent Threat (APT) is far more incendiary than debating politics or religion.   I was forced to wade into these tumultuous waters this week as I was making updates to the Triumfant Web site.   Specifically, I was curious to see if there was some industry consensus as to the dividing line between the two classifications. Silly me.  I should have known better.

The volatile nature of the definition of APT makes the dividing line between targeted attacks and APT equally volatile.  The industry has not settled on any one dimension to distinguish an APT attack, much less a specific point on that dimension.  For some, APT is determined by the nature of the attack, or the target of the attack.  Some, most notably Richard Bejtlich (@taosecurity), define APT by the threat actor.

After some research, it became obvious that the one thing the debate needed was yet another attempt to differentiate APT attacks and targeted attacks, and being shallow and self-centered, I knew I was just the guy for the job.  My simple classification came down to pragmatic (targeted attacks) versus dogmatic (APT) and actually incorporates most of the elements of the debate.

At a high level, I consider APT attacks a subset of the broader category of targeted attacks, as both are attacks written to perform a specific purpose against a specific target.  Both value stealth and seek long-term infiltration.  Both involve sophisticated adversaries that often use many of the same techniques.  Given that the two categories are not mutually exclusive, what I am attempting to capture is the point where a targeted attack becomes an APT.

Targeted attacks are pragmatic because their motivation, and therefore their approach and behavior, lies in monetary gain.  A targeted attack is likely designed to extract confidential information or intellectual property.  It is conceivable that the attack could be disruptive, but pragmatically, disruption does not provide a return on investment.   Targeted attacks value stealth and long-term infiltration, but only to the point where they serve the pragmatic need.   Not quite smash and grab, but not the longer-term persistence sought with APT.  Targeted attacks rely heavily on techniques that leverage human nature (social engineering) because the adversary lacks access to the human-gathered intelligence available to the APT threat actor. Finally, a targeted attack may be reusable against other targets, albeit with some modification and mutation of the malware.

I use the term dogmatic to describe APT attacks because APT attacks are largely driven by emotional/philosophical motivations, primarily politics.  This places a higher value on stealth and persistence than a targeted attack, because it gives the adversary the freedom to alter post-infiltration activity in response to evolving external events.  This is the proverbial low-and-slow approach that places high value on maintaining an established presence in the targeted system or network.  APT attacks may also be broader in their impact on the targeted organization, because disruption may provide the same political impact as exfiltration.  APT attacks often consist of multiple parallel attacks to ensure infiltration and to ensure that discovery of one path does not cut off presence in the network.  That is because a pragmatic adversary may be able to move on to the next target, but the target for a dogmatic adversary is dictated by the politics of the moment.
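If it helps to see the distinction laid out side by side, here is the same argument condensed into a small structure.  The dimensions and values are simply my characterization from the two paragraphs above, not any industry-standard taxonomy:

```python
# The pragmatic-versus-dogmatic distinction from the discussion above,
# condensed into a simple lookup.  My characterization only, not a standard.

ATTACK_PROFILES = {
    "targeted attack (pragmatic)": {
        "motivation": "monetary gain",
        "stealth and persistence": "valued, but only as far as they serve the payoff",
        "disruption": "unlikely; it rarely provides a return on investment",
        "intelligence": "leans on social engineering and human nature",
        "reuse": "often reusable against other targets with some mutation",
    },
    "APT (dogmatic)": {
        "motivation": "emotional/philosophical, primarily political",
        "stealth and persistence": "paramount; low and slow, maintain presence",
        "disruption": "viable; can deliver the same political impact as exfiltration",
        "intelligence": "backed by human-gathered intelligence on the target",
        "reuse": "target is dictated by the politics of the moment",
    },
}

for profile, traits in ATTACK_PROFILES.items():
    print(profile)
    for dimension, value in traits.items():
        print(f"  {dimension}: {value}")
```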

I am going to be very candid and say that I have no real emotional or professional stake in this debate.  Triumfant excels at detecting these attacks, and the dividing line has no effect on that capability.  I simply was creating a web page on targeted attack detection and a separate page for APT detection, and I was doing the due diligence to be as accurate as possible.  Why separate pages?  Both terms (“targeted attacks” and “advanced persistent threat”) are frequently used search terms, so it was all about providing information to those who get to the Triumfant site through organic search.

So there is my take on the debate.  Not sure if the pragmatic versus dogmatic designation helps, but it resonated with me, so who am I to not feed the fire?


Proposed EU Data Protection Fines Push the Lack of Breach Detection Capabilities into the Light

Recently proposed updates to the European Union’s data protection rules may force companies in the U.S. and abroad to take a hard look at solutions that tell them when they have been breached.  According to a WSJ article, the proposed updates will affect U.S. companies that “are active in the EU and offer their services to EU citizens”.

Of specific note is the requirement to notify authorities and customers of data breaches within 24 hours.  Breach notification laws are not new and there are notification statutes in the U.S. at the state level.  But the breadth of the EU provisions, the 24-hour requirement, and the fines for noncompliance have seriously amplified the debate.

In particular, the 24-hour requirement has companies really nervous.  This is justified when you consider that the Verizon Business “2011 Data Breach Investigations Report” showed that less than 5% of data breaches were discovered in the first 24 hours.   An article on the EU updates in CSO Online leads with the subheading “Many companies don’t have the sophisticated systems for identifying breaches in the first place”.

I have no sympathy here.  There are solutions that can detect an intrusion into corporate systems within minutes of the infiltration, so the lack of capability does not stem from a lack of technology.  Companies have long settled for shielding the perimeter with traditional defenses from the usual suspects of IT security.  Forgive my lack of compassion, but the EU requirements are the bill coming due for stubbornly sticking with old approaches to new problems and blindly relying on the large IT security vendors rather than considering innovative solutions.

In the interest of disclosure, Triumfant does provide a solution that will detect a breach within minutes of the infiltration.  Triumfant is not a DLP tool, but what Triumfant will do is quickly detect an attack that gets past the company’s shields and provide a very detailed analysis of the attack within minutes.  Triumfant uses change detection and contextual analytics to detect the attacks that evade other security software, making Triumfant able to detect new malware attacks, detect targeted attacks, and detect the advanced persistent threat.  Security professionals tell me that the analysis Triumfant returns would take a seasoned security professional hours or days to produce.  We call this Rapid Detection and Response: the ability to detect the problem, provide actionable analysis, and remediate the attack within minutes of the infection.  Once the point of entry is identified, the company can then determine if data has been compromised, and if so, the extent of that compromise.

Companies continue to ignore the realities in front of them (such as the 5% statistic) and continue to pour their resources into shields.  Plugging another appliance into the network or installing another solution that requires prior knowledge to detect attacks won’t fix the problem.  Nor will blindly trusting the large IT security companies.

The time to look beyond traditional approaches and the usual suspects has not only come, it has passed.  Companies have resisted change for reasons only they know, but I suspect they are not willing to look past traditional approaches and embrace technologies that re-write their perceptions of how IT security tools work.

The EU requirements are not causing the problem; they are pushing the problem into the light.  And in doing so, they are also dragging into the light the companies that have too long ignored the changing realities of security.  Companies that were unwilling or unable to step into the light themselves.