2011 – The Year We Recognized We Were Getting Breached

I just read the Symantec 2011 Internet Security Threat Report from cover to cover; it is an excellent report with a lot of useful information. But I have the same problem with this report as I do with the ones from Verizon Business, IBM X-Force, Trustwave, and Mandiant (also excellent reports full of useful information), and with several of the writers and general industry pundits. In their report, Symantec calls 2011 “The Year of the Breach,” which is consistent with the other reports and other discussions in the broader market.

I am sorry, but I just hate that term. Hate it. The fact that the industry, in many cases begrudgingly, has had to publicly acknowledge that shields are being evaded and organizations are getting breached does not make 2011 a milestone for breaches. Companies were getting breached in 2010 and prior, and will be breached in 2012 and beyond. Breaches are not a 2011 thing, or some annual phase we entered, watched peak, and ultimately saw ebb away.

I will agree that 2011 is the year the IT security industry came to terms with the fact that vendors selling preventative software could no longer conveniently ignore that organizations were being breached. Many of the statistics that have been a consistent theme of reports like the Verizon Business 2012 Data Breach Investigations Report seem to have suddenly found resonance. Statistics such as the average of 173.5 days from breach to detection reported in the Trustwave 2012 Global Security Report became impossible to ignore.

Therefore, calling 2011 “The Year of the Breach” seems disingenuous to me.  In fairness, calling 2011

“The Year We Stated the Obvious” or

“The Year We Woke Up and Smelled the Coffee” or

“The Year We Got Our Heads Out of Our Collective… (filters engaging) the Sand” or

“The Year Vendors Realized They Could No Longer Sell Just Shields”

is clearly not as catchy.

For the record, this is not a criticism of the reports or the people who produce them. These reports are hugely informative and I respect the effort that goes into them. As I noted previously, the relentless presentation of the statistics in those reports was at least partially responsible for changing the predominant messaging in the market. The hype could no longer shout down the reality presented by the numbers. Notice I said messaging, because I think most pragmatic, right-thinking folks in IT security already knew about the breach situation.

Don’t get me wrong; I am happy that the market has decided to recognize that organizations are being breached.  I work for the company that I think offers the best and most innovative solution for detecting breaches at the point of infiltration.  And with one child about to leave for college, I am all about contributions to the Ivers Foundation.

Which leads me to another comment about these reports. The reports – rightfully so – talk about detected breaches. They indicate that a high percentage (>90%) of breaches are discovered by someone outside of the organization, which suggests that organizations are not equipped to detect breaches themselves. One could make the case that the breaches that get detected do not represent the best and brightest, precisely because they were detected. Without devolving into hype or FUD, what percentage of breaches do we really detect? All? Half? 10%? It is a question worth asking, and as organizations begin to put breach detection capability in place, the resulting statistics will be interesting.

By the way – anyone want to place bets that 2012 will be “The Year of the Targeted Attack”?

Malware Counts – Shock, Yawn, or a Useful Reminder of Today’s IT Security Reality?

5 million new threats in Q3 2011!

This was one of the hot lead statistics from the Q3 2011 PandaLabs Report released at the beginning of this month. Instead of pondering the number itself, I found myself pondering how the market reacts to it as we move toward the end of 2011. Shock? Knowing nod of the head? Yawn?

When I joined Triumfant in November of 2008, the world had entered that year with less than 1 million signatures according to Symantec’s Internet Threat Report series.  Those were simpler times.   In 2009, the number of new signatures exceeded the number of total signatures reported in 2008.  The statistics were sobering and captured the attention of the market as organizations began to internalize that the malware game had changed dramatically across multiple dimensions – volume, velocity, and sophistication.  Threats were also shifting from broad, opportunistic blunt instruments to targeted attacks, some written for a single target.  The term Advanced Persistent Threat moved from the MIC into the broader consciousness.

As we close out 2011, my impression is that the 5 million number from PandaLabs generates very little response and such numbers no longer resonate. Maybe the numbers have gotten large enough that they lose any sense of connection. Maybe they have been overused to the point that they no longer have any impact (the marketing bashers so prevalent in IT security will quickly form a line here). Or maybe most right-thinking people have seen the weight of evidence and have accepted the new threat reality. Regardless, the numbers appear to no longer capture the imagination.

What the numbers continue to say is that the world of IT security has changed dramatically and continues to rapidly evolve. The numbers dictate that organizations need to be open-minded about new solutions and must stay nimble to keep up with this evolution. For example, I think organizations now academically understand that the notion of the 100% shield is obsolete, but far too many have yet to emotionally accept that reality and take action accordingly.

The numbers also remind us of the relentless nature of the adversary, who never stops trying to broaden the always-present gap between offense and defense. The numbers indicate that your defenses have plenty to do, so make sure that they are stood up and properly configured on every machine so as not to give the bad guys a beachhead. There is no 100% shield, but you should ensure that your shields stop what they can.

The numbers reinforce the fact that you should expect to be breached.  Accept that there will be attacks written specifically to evade your shields and get to your sensitive data and IP.  Think beyond shields and have rapid detection and response software in place for those times when you are breached.

In the end, the only number that is truly significant is how many breaches go undetected and result in loss of revenue, loss of customer confidence, or loss of intellectual property. All you have to do is read this very frank assessment of the cost of the RSA breach to know that the number “1” may be far more impactful than 5 million.

What is Missing from the Symantec Internet Threat Report – Signatures

Life has been busy at Triumfant, and I finally had enough of a break to read the Symantec Internet Security Threat Report for 2010, released last month. The Symantec report is extremely thorough, very well done, and provides a lot of insight into what those of us in IT security battle daily.

After reading the document, it struck me that there was no mention of signatures. I have used the Symantec report as the baseline for the Triumfant Worldwide Signature Counter, so I am very much attuned to the signature statistics in the report each year. So I did a search to be sure and, sure enough, no mentions. It is like the scene from “The Ten Commandments” when Seti the King decrees “Let the name of Moses be stricken from every book and tablet”.

Just for comparison’s sake, I then brought up the 2009 report (released in 2010) and performed a search on the word “signature,” which returned 37 hits. To be fair, some of those mentions were in footnotes, but there was also this direct reference:

“Symantec created 2,895,802 new malicious code signatures in 2009, a 71 percent increase over 2008; the 2009 figure represents 51 percent of all malicious code signatures ever created by Symantec.” (Symantec Global Internet Security Threat Report Volume XV, P. 17)

Could it be? Did the Triumfant Worldwide Signature Counter drive Symantec to take the entire notion of signatures underground? Or was it the idea of having to enumerate a signature count that now exceeds 10,000,000 (11,143,811 as of this morning)? Don’t worry – even though I am admittedly shallow and self-centered, even I would not take full credit for this shift. However…

In a May 4, 2011 press release about their updated reputation based offerings, Symantec said the following: “The sheer volume of sophisticated attacks targeting organizations of all sizes poses a daunting challenge for traditional signature-based security solutions that can’t keep up.”

Triumfant agrees, and has been saying so for the past three years. When we introduced the Signature Counter in 2009, the web site said: “The point of the counter is simple: malicious attacks are growing in both volume and complexity, and the sheer volume is reaching a point where it begins to surpass the collective capability of security vendors to keep pace.”

Okay, so maybe we influenced the issue. The idea behind the Signature Counter was to create awareness.

Now that we have clearly come to grips with the signature problem, the discussion has to shift to detection taking a prominent seat next to prevention. We call that “The New Math of Endpoint Protection” and you can get more details here (rather than read about it from one of the big vendors in 2014). The Symantec report for 2010 says that they recorded three billion malicious attacks last year. Those are the ones that were detected – the mind boggles at how many more got through undetected. Moving past signatures was a big step – now we must embrace the notion of detecting the attacks that get through. More on that soon.

1.6 Reasons Why Triumfant’s Automated Remediation Approach is Superior

Remediation is becoming a hot topic and already the FUD is flying.  Of course, we are excited about our remediation story and I am often asked why our approach to remediation is different from others on the market.  Let me see if I can help by borrowing a statistic.

I was at a meeting at Symantec headquarters on Friday where Francis deSouza, senior vice president of the Enterprise Security Group at Symantec, was first on the agenda.  In his presentation, deSouza noted that Symantec research indicated that attacks are morphing so quickly that any given variation of an attack is used against 1.6 machines before a new variant appears.

Most companies (maybe everyone but Triumfant) take an approach to remediation that relies on previously written scripts matched to detected attacks. This approach, of course, means that scripts exist only for known attacks. While there are some generic approaches that may apply to previously unknown attacks, for any moderately complex unknown attack there will likely be no remediation script.

Now let us put deSouza’s statistic to work in the discussion about remediation. Viewed in that context, the script-based approach means any given remediation script is good for about 1.6 machines. That makes sense: if the attack is morphing, then it follows that the remediation needs would also change. A new variant requires a new script.

I am already reluctant to believe that any pre-written script can be completely effective for attacks of even moderate complexity, because attacks may cause varying primary and secondary damage depending on the unique combination of factors on any given machine, such as OS version, installed applications, and differences in configuration. Add the restriction to previously known attacks and Mr. deSouza’s statistic, and the logical conclusion is that scripted remediations will fall short. Even if a script does apply, it is reasonable to doubt that it can remediate the machine without leaving behind one or more artifacts that keep the machine vulnerable. That doubt normally translates into organizations re-imaging the machine as standard practice.

There are other differences, such as the need for context. For example, a process may be part of an attack. A generic script may mark that process for deletion when it is in fact shared by other benign applications. A script would have to either shoot it on sight, potentially corrupting other applications, or contain the logic required to know which other applications share the process and then determine whether those applications are installed on the machine. Accounting for every “except for” would certainly be a challenge.

Triumfant constructs a remediation that is specific to the identified incident on that machine and requires no prior knowledge to build it. We correlate all of the changes to the machine to build a remediation so complete that you should not have to reimage the machine. The remediation is surgical, contextual, and specific. As a bonus, our remediations can leverage our patent-pending donor technology to restore deleted or corrupted files.

There is more, but I feel like the point has been made and anything else would be showing off. The differences between common remediation solutions and Triumfant’s approach are profound. Now I need to figure out how you attack 0.6 of a machine.

The Worldwide Malware Signature Counter – A One Year Report Card

About a year ago we had the idea of the Worldwide Malware Signature Counter as a graphical representation of how reliance on previous knowledge of attacks for detection was no longer a serviceable approach to protecting endpoint machines. Much of the actual data used to build the math behind the counter (yes, it has thoughtfully constructed, sound mathematical principles behind it) was taken from the Symantec Internet Security Threat Report (ISTR). Since Symantec just released the annual update to that report, it seemed an appropriate time to look back at the counter and see how well our analysis held up a year later.

All things considered, the counter was remarkably accurate.  Charting both the year-over-year and cumulative signature counts through 2008, we concluded that new signatures were growing at 60% of the cumulative rate on an annual basis.  This proved to be a bit aggressive, as Symantec’s actual numbers showed 2009 growth to be 51% of the cumulative number, or just under 2.9M new signatures.   But because we conceived the counter to be instructional and not hyperbole, we built the calculations on the conservative side and the counter in fact lagged just slightly behind the actual numbers reported by Symantec throughout the year and eventually in the ISTR.

When I first did the math on the numbers from the ISTR in 2009, I was struck by how the signature numbers broke down as a practical drag on the resources of the AV companies. Of course, a 51% increase year-over-year only exacerbates the problem. Using the numbers from the recent ISTR, that burden translates to 241,316 signatures per month, roughly 7,934 signatures per day, 5.5 signatures per minute, and ultimately one new signature roughly every 11 seconds. It is a model that is simply not sustainable, and by every indication, it will only get worse.
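
If you want to check the back-of-the-envelope arithmetic yourself, here is a minimal sketch in Python. The only input is the 2,895,802 figure from the ISTR; everything else is just division by the number of months, days, and minutes in a year, so the per-interval figures differ from the quoted ones only by rounding.

```python
# Back-of-the-envelope breakdown of the 2,895,802 new signatures Symantec
# reported for 2009. Assumes a 365-day year.
NEW_SIGNATURES_2009 = 2_895_802

per_month = NEW_SIGNATURES_2009 / 12          # ~241,317
per_day = NEW_SIGNATURES_2009 / 365           # ~7,934
per_minute = per_day / (24 * 60)              # ~5.5
seconds_per_signature = 86_400 / per_day      # ~10.9

print(f"{per_month:,.0f} per month, {per_day:,.0f} per day, "
      f"{per_minute:.1f} per minute, one new signature every "
      f"{seconds_per_signature:.0f} seconds")
```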

The bigger question after a year is “so what?”.  Well, the language from the AV vendors has certainly changed.  In fact, the following is a direct quote from the Symantec document:

Signature-based detection is lagging behind the creation of malicious threats—something which makes newer antivirus technologies and techniques, such as behavioral-based detection, increasingly important. …. This trend suggests that security technologies that rely on signatures should be complemented with additional heuristics, behavioral monitoring techniques, and reputation-based security. (page 48, Symantec Global Internet Threat Report – Trends for 2009,  Volume XV, Published April 2010)

During his keynote at this year’s RSA Conference, Symantec CEO Enrique Salem was quoted as saying “Traditional signature-based approaches to security are not keeping up.” Of course, such admissions are directly colored by the alternative technologies the AV companies recently introduced to the market after ignoring calls from the rest of the industry for alternative detection methods. But at least they have stepped away from their defense of signature-based AV in the face of all evidence to the contrary. I am not claiming they were driven to such mea culpas by our signature counter, but I do think we helped point out the issue.

Unfortunately, while organizations have also come to terms with the limitations of signature-based AV, many are adopting the alternatives provided by the AV vendors instead of looking to more promising technologies. Symantec brought Quorum to the market, so reputation-based security is their answer. McAfee bought a whitelisting technology so – surprise! – whitelisting is their answer. I guess I was hoping that organizations would see past the entrenched vendors to other alternatives, given that those vendors were so slow to come to terms with the signature problem, but factors such as risk avoidance have kept some innovative alternatives from getting play.

Meanwhile, the counter continues to increment, and recently passed 7 million signatures, on pace to add over 4 million signatures in 2010. I was recently asked if we planned on retiring the counter given the shift in sentiment toward signature-based AV. We still see enough executives and security people who don’t yet understand the problem, so the counter will live on to help us make the point.

So as for a grade, how about a gold star for creativity, an “A” for the math, and an “I” (incomplete) for changing the world.

Antivirus Detection Rates – Undetected Attacks Are Still Attacks

I came across an article in The Business Times this morning that contained a quote that caught my eye. The article, called “Singapore a growing platform for cyber attacks on region,” talked about the growing number of cyber attacks originating in Singapore. In the article there was a definition attributed to Symantec:

“By Symantec’s definition, an attack denotes any malicious activity carried out over a network that has been detected by a firewall, intrusion detection or prevention systems.”

Obviously, the word that stuck out in this definition was “detected”. Why? Because I have news for you – malicious activity that goes undetected is also an attack. In fact, I would place undetected attacks in a higher tier of the definition, because Rule One of criminal behavior is Don’t Get Caught. Attacks that fall under the characterization of an Advanced Persistent Threat are engineered to evade detection and are very much attacks.

(This reminds me of one of my favorite movie scenes.  In Stripes, Harold Ramis and Bill Murray are sitting in the Army recruitment office and the recruiter asks them if they have “ever been convicted of a felony?”.  Bill Murray’s response: “Convicted?”.)

In fairness to Symantec, I am not sure if this quote from the article was paraphrased or misquoted, and I am not out to pick on Symantec. What I do want to point out is a huge flaw in how the industry measures malicious activity. Let me explain.

Both AV software vendors and internal security groups often report on what was detected. Makes sense, right? If you could count undetected attacks, they would by definition become detected attacks. But according to the Symantec Internet Security Threat Report: “Symantec created 2,895,802 new malicious code signatures in 2009, a 71 percent increase over 2008”. It therefore makes sense that the number of detected attacks would go up proportionately with the number of identified signatures. An organization could be doing a worse job detecting attacks year over year, but its raw volume of detected attacks would still go up, giving a perception of success.

Executives look at the bulk score and are mollified that the organization is protected. But if the number of attacks grew by 71%, the number of attacks detected by the organization had better track to that same 71% or the organization is losing ground. If you think it through, even that 71% may be deceiving, because what Symantec and the other AV vendors don’t tell you is how long your organization was exposed between when the attack was first introduced and when they finally detected it and wrote a signature. It could have been six hours, but it could also have been six months.
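
To make the “losing ground” point concrete, here is a small hypothetical illustration in Python. The attack and detection counts are invented purely for the example; only the 71 percent growth figure comes from the Symantec report.

```python
# Hypothetical illustration: raw detections rise, but the detection *rate*
# falls when the threat population grows faster. All machine counts below
# are made up for the example; only the 71% growth figure is from the ISTR.
THREAT_GROWTH = 0.71

attacks_last_year = 100_000      # hypothetical attacks against the org
detected_last_year = 60_000      # hypothetical detections (60%)

attacks_this_year = attacks_last_year * (1 + THREAT_GROWTH)  # 171,000
detected_this_year = 80_000      # raw detections are up 33%...

rate_last = detected_last_year / attacks_last_year   # 60%
rate_this = detected_this_year / attacks_this_year   # ~47%

print(f"Detections grew {detected_this_year / detected_last_year - 1:.0%}, "
      f"but the detection rate fell from {rate_last:.0%} to {rate_this:.0%}")
```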

In short, gauging success from bulk detection numbers is a quick way to obfuscate the real risk to any organization. But if you are selling a shield that has known flaws, it is a great way to use the steadily growing malware volume to present your software or your organization’s effectiveness in a favorable light.

Because Triumfant uses change detection to identify malicious attacks, we have always been open about our inability to see attacks that were already resident prior to our installation. That being said, soon after we are installed we inevitably see anomalies that are artifacts of attacks that have passed through the organization’s shields. Once installed, we can readily detect what does make it through those shields, as well as attacks carried out by maliciously intended insiders. It is eye-opening to organizations just how many attacks have gotten through and continue to get through.

Don’t let yourself be lulled to sleep by bulk detection rate numbers.  A lot of attacks are getting through, so counting detected attacks is potentially a false gauge of success.

It is Raining and You Will Get Wet

Ever walk down the street on a rainy day? You can have the best umbrella in the world and you will still get wet. When I get asked the question “Why do I need Triumfant when I have other defensive software?” the answer is found in that rainy walk – because you will still get wet. Malicious stuff will get through your defensive shields, and when it does, you need something that will address those problems on your endpoint machines.


Notice that I am not looking to convince you I have a better umbrella, because we never portray Triumfant as a shield.  Nor am I telling you to throw away your existing umbrella, because we never position Triumfant as a replacement for antivirus software, nor do we claim that having Triumfant means you no longer need AV.

But you do need to recognize it is raining and you will get wet.  I have touched on the proof points separately at times but I have never laid them end to end until now.  So here they are:

  • It rains harder every day. Symantec reported in their Global Internet Security Threat Report, 2009 that there were 1.6M new malware instances in 2008, exceeding the 1M counted as the total for all previous years combined. Both McAfee and Symantec show that this 1.6M number was passed sometime in mid-summer 2009. If you graph the numbers you will see that they increase geometrically. For example, McAfee saw twice as many attacks in the second half of 2008 as in the first half of that same year.
  • It is raining sideways more than ever. McAfee Avert Labs noted in a recent blog post that they see 6,000 new malware instances per day that slip past their signatures, generic filters, and heuristics. Extrapolating that rate across an entire year gets you to over 2M attacks that pass through the traditional protections (see the quick arithmetic after this list).
  • The rain comes from a different direction every second. An August 13 article in SC Magazine notes a study that found that cyber criminals are now designing malware to last 24 hours before becoming inactive. The study noted that 52 percent of samples spread for just 24 hours, 19 percent last for two days, and 9 percent persist for three days. Malware designers produce hundreds of unique samples that carry the malicious payload in order to evade detection. Essentially, by the time the malware is detected, analyzed, and a signature created, the cyber criminals have long since moved on.
  • The rain is straining the capacity of your umbrella. A recent white paper, the Cyveillance Cyber Intelligence Report, August 2009, provided average daily detection rates for the period of 5/12/09 through 6/10/09. Cyveillance fed active attacks consisting of confirmed malicious files they had detected from the Web into 13 of the top antivirus solutions and tracked the detection rates. The results are, to say the least, eye-opening: the average detection rate reported was roughly 30 percent.
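
For the McAfee extrapolation in the second bullet, the arithmetic is simple enough to show directly. This is a rough sketch that assumes the 6,000-per-day rate holds constant across a full year.

```python
# Extrapolating McAfee Avert Labs' figure of 6,000 new malware instances per
# day that slip past signatures, generic filters, and heuristics.
# Assumes the daily rate holds for all 365 days of the year.
DAILY_MISSES = 6_000

annual_misses = DAILY_MISSES * 365
print(f"{annual_misses:,} instances per year")   # 2,190,000 -- over 2M
```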

It is raining hard and relentlessly on your endpoints, and sometimes it is coming down sideways. But it is not just the traditional attack vectors that you must address in the fight for endpoint protection. There are increasingly nasty rootkits that evade traditional defenses. There are polymorphic attacks with rotating binaries that automatically morph so they never look the same on any two machines. There are new classes of attacks like drive-by SQL injections and registry-based attacks. There is the work of the maliciously intended insider who either directly corrupts the machine or alters its defenses so it can be corrupted by outside influences. There are new ways to subvert software assurance and the software supply chain to embed malicious code in what is thought to be trusted software. And as always, there is the most nefarious problem of them all – the carbon-based life form installing peer-to-peer software, using Facebook, and going to Jessica Biel picture sites. It is not just raining sideways; sometimes it must feel like it is raining up!

What is clear is that bad things will get past the traditional defenses to the endpoint, and it is time to consider what will protect your organization when that happens. That is where we come in – we see the malicious attacks that make it to your endpoints: the stuff that falls through the other defenses, the zero-day attacks, the newest variations of existing attacks, and all of the attacks that come through exotic vectors that defensive endpoint security software may not yet address. We build a normative whitelist of your environment and can tell you if something is installed that does not exist anywhere else in your environment.
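
To give a feel for the “normative whitelist” idea, here is a deliberately simplified sketch of population-based anomaly detection in general. It is not Triumfant’s actual implementation, and the host names and file names are invented; the point is simply that an artifact present on only one machine in the population stands out as an anomaly candidate.

```python
# Conceptual sketch of population-based anomaly detection: flag artifacts that
# exist on only one machine in the environment. A simplification for
# illustration only, not Triumfant's actual analytics.
from collections import Counter

# Hypothetical inventories: machine name -> set of observed artifacts.
inventories = {
    "host-a": {"winword.exe", "outlook.exe", "svchost.exe"},
    "host-b": {"winword.exe", "outlook.exe", "svchost.exe"},
    "host-c": {"winword.exe", "outlook.exe", "svchost.exe", "x9fj2k.dll"},
}

# Count how many machines each artifact appears on.
prevalence = Counter(a for inv in inventories.values() for a in inv)

# Anything seen on exactly one machine is an anomaly candidate worth a look.
for host, inv in inventories.items():
    rare = sorted(a for a in inv if prevalence[a] == 1)
    if rare:
        print(f"{host}: anomaly candidates {rare}")   # host-c: ['x9fj2k.dll']
```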

And once we detect it, we can also remediate it.  The context provided by our patented analytics enables Resolution Manager to see all of the changes to a machine that are part of the attack, making our solution uniquely able to build a remediation to address the entire scope of the attack and restore the machine to its pre-attack condition.  BTW, that context I speak of is what really sets us apart – for example it allows us to beat the false positive problem – so you may want to look at the associated post.

Folks, it is raining, and don’t look for the rain to quit or even subside because it gets worse by the day.  And you will get wet.  That is the value of Triumfant – we are that last line of defense when you do.