See-And-Be-Seen: Triumfant’s CEO John Prisco to Present at Upcoming DC-Area Cyber Security Events

It’s a busy spring season with several industry and regional events on the docket. As a prominent security professional and head of one of DC’s fastest-growing security vendors, John Prisco has been invited to participate as a featured panelist at upcoming events taking place in the region and on the national stage.

First up is the FS-ISAC Annual Summit, taking place this week on Amelia Island in Florida. The event is the only security conference created by members, for members, to present the latest information on cyber security-related threats, trends, and technology. During the event’s Solutions Showcases, John will demonstrate to the nearly 500 industry executives and practitioners anticipated to be in attendance how to stop memory-based attacks before they become persistent.

On May 22, the CyberMontgomery event will take place at the Universities at Shady Grove (USG) Conference Center in Rockville, Md. CyberMontgomery Forum events examine cyber security as a major growth engine for Montgomery County and how to bring together federal government, industry and academic assets so that they can coalesce and elevate the cyber ecosystem to a level of national prominence. John will participate in the Innovative Cyber Solutions from Montgomery County portion of the event.

At the invitation-only Breakfast Discussion on Continuous Monitoring on June 4 at the Ronald Reagan Center, John will join executives from IDC, Websense and SMS to discuss the Office of Management and Budget (OMB) security mandate 14-3 requiring compliance by 2017. The panel will address challenges and confusion around the implementation and discuss the technologies, best practices and services that can support a successful deployment.

During The Wall Street Journal’s DC Metro Security Summit, June 5 at the Sheraton in Tyson’s Corner, John will lend his experience and expert commentary to the panel on cyber policy.  Establishing a cohesive national cyber-security initiative has become one of the major emerging security challenges of the new century.  This panel will analyze the ongoing debate surrounding the implementation and enforcement of strategies, policies and emerging areas of enterprise security architecture that will govern people and information in the years ahead.

We hope to see you there!

 

Top Security Threats in 2014

Using the dynamic events of 2013 as a baseline and future indicator, we’ve set out to predict the security threats and headline-making trends that will plague the industry in 2014.  

1.      The Rise of In-Memory Attacks or Advanced Volatile Threats (AVTs)

A growing number of cyber-exploits are designed to elude current defenses by attacking computers in their volatile memory. Triumfant refers to this technique as Advanced Volatile Threats (AVTs). These memory-based attacks enable a hacker to access a computer’s random access memory (RAM) or other volatile memory in order to redirect its behavior. AVTs allow attackers to steal data or insert malware, but because they are never written to persistent storage, they can be difficult to detect. Triumfant cautions organizations to invest in endpoint defense solutions that continuously scan for objects that may be manipulated in memory, so that memory-based attacks never become persistent threats.
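One widely used heuristic for spotting code that lives only in memory is to look for executable regions that are not backed by any file on disk. The sketch below illustrates that idea on Linux using the /proc filesystem; it is an illustrative approximation, not the Triumfant product, and legitimate JIT runtimes will also trip it.

```python
#!/usr/bin/env python3
"""Illustrative heuristic (Linux-only): flag processes that map anonymous
executable memory, a common trait of code injected into RAM rather than
loaded from a file on disk. Expect false positives from JIT runtimes."""
import os
import re

# /proc/<pid>/maps line format: address perms offset dev inode [pathname]
MAPS_LINE = re.compile(r"^(\S+)\s+(\S+)\s+\S+\s+\S+\s+\S+\s*(.*)$")

def anonymous_exec_regions(pid):
    """Return executable memory regions of `pid` with no backing file."""
    regions = []
    try:
        with open(f"/proc/{pid}/maps") as maps:
            for line in maps:
                match = MAPS_LINE.match(line)
                if not match:
                    continue
                addr, perms, path = match.groups()
                # Executable and anonymous (empty path or an [anon:*] mapping)
                if "x" in perms and (not path or path.startswith("[anon")):
                    regions.append((addr, perms))
    except (FileNotFoundError, PermissionError):
        pass  # process exited, or we lack privileges to read its maps
    return regions

if __name__ == "__main__":
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            hits = anonymous_exec_regions(int(entry))
            if hits:
                print(f"pid {entry}: {len(hits)} anonymous executable region(s)")
```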

2.      World Sporting Events Create Opportunities for Mischief and Harm

The upcoming Winter Olympics and World Cup provide sophisticated hackers and nation-state actors with a high-profile venue to ramp up criminal and cyber-espionage activities. Taking a cue from the 2013 media industry breaches, in which The New York Times and other major media companies were compromised by the Chinese military seeking information on Chinese leaders, governments, media outlets and commercial organizations should be on high alert, instituting new services to detect, counter and mitigate threats. A layered approach to security is needed to protect sensitive systems and data, one that includes endpoint security measures as part of the overall defense-in-depth strategy.

3.      Mobile Malware and Network-Connected Devices Ripe with Vulnerabilities

As corporate cloud-based networks proliferate and more people work from home, hackers will develop new types of attacks on remote platforms. The rapid adoption of network-connected devices, by consumers and businesses alike, will make the “Internet of Things” more attractive to cybercriminals. Security vulnerabilities are rampant in embedded devices as manufacturers hurry to bring new products to market, all too often making security an afterthought. This need for speed may also have contributed to the Target breach, where a three-year “smart card” pilot was cancelled because it was shown to slow check-out times. The Target breach also points to the flaws of the payment card industry’s data security standard (PCI-DSS), under which audits are conducted only on a monthly basis. Major retailers should deploy endpoint security on check-out terminals, in addition to the processing servers, to ensure continuous monitoring for breaches and keep the systems audit-ready every day.

4.      Rapid Detection Becomes the New Prevention

Attacks happen. The security industry is beginning to rethink its focus on protecting the perimeter, shifting its mindset and resources toward rapid detection and remediation. Endpoint security is the final frontier, picking up where network-based tools fall short. With the understanding that breaches are going to happen, 2014 will see more resources devoted to detection and remediation than in years past. According to research from Enterprise Strategy Group (ESG), 51 percent of enterprise organizations say they will add a new layer of endpoint software to protect against zero-day and other types of advanced malware.

Malware Counts – Shock, Yawn, or a Useful Reminder of Today’s IT Security Reality?

5 million new threats in Q3 2011!

This was one of the hot lead statistics from the Q3 2011 PandaLabs Report released at the beginning of this month.  Instead of pondering that number, I found myself pondering how the market reacts to that number as we move toward the end of 2011.  Shock? Knowing nod of the head? Yawn?

When I joined Triumfant in November of 2008, the world had entered that year with fewer than 1 million signatures according to Symantec’s Internet Threat Report series. Those were simpler times. In 2009, the number of new signatures exceeded the total number of signatures reported in 2008. The statistics were sobering and captured the attention of the market as organizations began to internalize that the malware game had changed dramatically across multiple dimensions – volume, velocity, and sophistication. Threats were also shifting from broad, opportunistic blunt instruments to targeted attacks, some written for a single target. The term Advanced Persistent Threat moved from the MIC into the broader consciousness.

As we close out 2011, my impression is that the 5 million number from PandaLabs generates very little response and that such numbers no longer resonate. Maybe these numbers have gotten large enough that they lose any sense of connection. Maybe the numbers have been overused to the point that they no longer have any impact (the marketing bashers so prevalent in IT security will quickly form a line here). Or maybe most right-thinking people have seen the weight of evidence and have accepted the new threat reality. Regardless, the numbers appear to no longer capture the imagination.

What the numbers continue to say is that the world of IT security has changed dramatically and continues to rapidly evolve. The numbers dictate that organizations need to be open-minded to new solutions and must stay nimble to keep up with this evolution. For example, I think organizations now academically understand that the notion of the 100% shield is obsolete, but far too many have yet to emotionally accept that reality and take action accordingly.

The numbers also remind us of the relentless nature of our adversaries, who never stop trying to broaden the always-present gap between offense and defense. The numbers indicate that your defenses have plenty to do, so make sure that they are stood up and properly configured on every machine so as not to give the bad guys a beachhead. There is no 100% shield, but you should ensure that your shields stop what they can.

The numbers reinforce the fact that you should expect to be breached.  Accept that there will be attacks written specifically to evade your shields and get to your sensitive data and IP.  Think beyond shields and have rapid detection and response software in place for those times when you are breached.

In the end, the only number that is truly significant is how many breaches go undetected and result in loss of revenue, loss of customer confidence, or loss of intellectual property. All you have to do is read this very frank assessment of the cost of the RSA breach to know that the number “1” may be far more impactful than 5 million.

You Need a Plan B for Endpoint Security

You need a Plan B.

Plan A in endpoint security is to prevent malicious software from infiltrating a machine. Most of the software on the exhibit floor of any IT security show is Plan A software, with the remainder aimed at identity management. As the number and complexity of attacks steadily increase, the number of Plan A tools deployed at any given site has gone up proportionately. Every year brings out a new “it” Plan A product and another layer of shields.

In spite of all of this Plan A activity, the number of successful infiltrations is on the rise. Malware detection rates vary from study to study, but if you are RSA, NASDAQ, Sony, or any of the scores of recently breached organizations, you realize that the bickering over the numbers in these studies is meaningless once you are attacked. Add targeted attacks and the Advanced Persistent Threat to the mix, and the picture is less than rosy.

You need a Plan B. Plan B is not a difficult concept to grasp or justify. It simply says that there are no 100% shields and no fool-proof Plan A. It accepts the hard truth that motivated, well-funded attackers will infiltrate your systems. Therefore, you need a Plan B to detect the attacks that evade your Plan A software so that you can take informed action based on that knowledge.

The “Verizon Business 2011 Data Breach Investigations Report,” published in May 2011, contained two interesting facts that scream for a Plan B:

  • 60% of the breaches they studied went undetected for over a month.  The bad guys had free access to internal systems for extended periods.
  • 86% of the breaches were discovered by an external party.  The organizations would have never known they had been breached if someone from the outside had not told them.

Don’t take for granted that you have not been infiltrated because your Plan A software has not detected the presence of an attack.  That is self-deceiving logic.  If the attack gets past the protection of Plan A it has already evaded the detection capabilities of Plan A.

Here is something else to consider: most Plan A software consists of shields that defend the increasingly porous perimeter, while successful infiltrations obviously happen at the endpoint. Furthermore, the shields are often concerned with the attack vector and not the payload. Once an attack makes it to the machine, it is all about the payload. So again, we are back to the need for a Plan B that has a different focus and methodology than Plan A.

Having a Plan B is not an admission of failure or a white flag raised over the idea of prevention. It is a prudent, pragmatic and necessary response to the current threat environment. You need a Plan B that focuses on detecting successful attacks and provides the analysis necessary to take immediate and informed action. You need a Plan B that is not tied to traditional techniques that rely on prior knowledge, such as signatures. Finally, you need a Plan B that lives where the attacks happen – the endpoint.

It all goes back to the opening line: You need a Plan B.

Needle in a Haystack? How to Find an Unknown in an Ill-Defined, Shifting Maelstrom

In the March 17, 2011, post, I demolished the “Finding a Needle in a Haystack” analogy by pointing out that in IT security we don’t know what we are looking for (the needle) and our haystack is not a homogeneous pile of hay but is instead a continuously changing, utterly non-homogeneous population of one-off configurations and application combinations. We went from “Finding a Needle in a Haystack” to “Finding an <unknown> in an <ill-defined, shifting maelstrom>”.

I ended by promising you a solution and that is where I begin.

The first step toward a solution is getting your hands around the “ill-defined, shifting maelstrom” that is your endpoint population. To find what is unwanted or anomalous in that population, you first need a way to establish what is normal for it. You could build and dictate normal, and then enforce that normal in a total lockdown, but that is expensive and hard to do, and in my many travels I have seen exactly two such environments. The alternative is to monitor the machines in that population and accurately create a baseline learned from the environment itself, one that captures all of the exceptions and disparities in all their glory. The end result is a normalized, well-defined representation of your ill-defined, shifting maelstrom. A normalized haystack, as it were.
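As a rough sketch of what “learning normal from the population” can look like, the toy code below counts how prevalent each endpoint attribute is across a set of machine snapshots and treats anything rare as a candidate anomaly. The snapshot format, the attribute encoding, and the threshold are illustrative assumptions, not the actual Triumfant model.

```python
"""Toy sketch of deriving 'normal' from the endpoint population itself.
Snapshots, attribute encoding, and the prevalence threshold are all
illustrative assumptions; a real model tracks far more attribute types."""
from collections import Counter

def build_baseline(snapshots):
    """snapshots: {machine_id: set of attributes, e.g. 'path|sha256' strings}.
    Returns how many machines exhibit each attribute."""
    prevalence = Counter()
    for attrs in snapshots.values():
        prevalence.update(attrs)
    return prevalence

def anomalous_attributes(machine_attrs, prevalence, population_size, threshold):
    """Attributes on this machine that are rare across the population."""
    return {a for a in machine_attrs
            if prevalence[a] / population_size < threshold}

# Three machines; host-a carries an attribute nobody else has.
snapshots = {
    "host-a": {"c:/windows/system32/kernel32.dll|aaa1", "c:/temp/dropper.dll|f00d"},
    "host-b": {"c:/windows/system32/kernel32.dll|aaa1"},
    "host-c": {"c:/windows/system32/kernel32.dll|aaa1"},
}
baseline = build_baseline(snapshots)
# A tiny population needs a generous threshold; a real one would use a small fraction.
print(anomalous_attributes(snapshots["host-a"], baseline, len(snapshots), threshold=0.5))
# -> {'c:/temp/dropper.dll|f00d'}
```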

Easy, right? Not really. You have to remember that your target is unknown, so you have no idea where it will appear or in what form. You must also consider that whoever is putting the unknown in your haystack does not want it to be found, and will design the unknown specifically to evade detection. Zero-day attacks don’t show up as shiny needles. You can assume nothing; therefore, you must monitor everything as part of your normalized haystack. You must also remember that the population shifts (wanted change) and drifts (unwanted change) by the moment, so you will need to keep it current.

In short, you will need continuous monitoring that is comprehensive and granular. Not the kind the scanner vendors sell you that sees some of the machines in weekly or monthly increments, or the kind the AV vendors sell you that sees parts of the machine and not the entire picture. You will need comprehensive and truly continuous monitoring.

In yesterday’s post, I noted that if you had a homogeneous haystack you could remove everything that was hay, and what is left should be the thing you are looking for, even if you do not know what that thing is. Our haystack is not homogeneous, but now we have created a baseline that provides the next best thing. We can’t throw out the hay, so we need a slightly modified approach that uses changes to the machines as our potential indicators of compliance issues and malicious attacks.

If we are smart, we can use this approach to our advantage, because once we establish our normative haystack we can continuously monitor the machines and identify changes. This fuels our detection process and drives efficiency in managing the shift (we want to control the drift, but that is another post) in the population. Capturing changes keeps the image of the population current with minimal drag on the endpoints and the network, since only the changes move across the wire. No need to move large images when incrementally smaller change captures will do.
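A minimal illustration of “moving only changes across the wire”: diff the current snapshot against the previous one and ship just the delta. The flat attribute-to-value mapping is a simplifying assumption.

```python
"""Toy sketch of incremental change capture: report only what changed since
the last snapshot, not the full machine image."""
def snapshot_delta(previous, current):
    """Both arguments map an attribute name to its value (e.g. file path -> hash)."""
    added = {k: current[k] for k in current.keys() - previous.keys()}
    removed = {k: previous[k] for k in previous.keys() - current.keys()}
    modified = {k: (previous[k], current[k])
                for k in current.keys() & previous.keys()
                if current[k] != previous[k]}
    return {"added": added, "removed": removed, "modified": modified}

prev = {"/etc/hosts": "ab12", "/usr/bin/ssh": "77fe"}
curr = {"/etc/hosts": "ab12", "/usr/bin/ssh": "9c01", "/tmp/.hidden": "dead"}
print(snapshot_delta(prev, curr))
# Only the new and modified attributes travel to the server.
```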

Once we identify the changes, we will need analytics that assess the impact of those changes on the associated machine. These analytics will leverage the context provided by the normalized model of the haystack to identify those changes that are anomalous. Changes identified as anomalous are further analyzed to gauge their effect on the state of the machine and identify those changes believed to be malicious. We can use the context and other analytic processes to group changes so that we see the malicious code and all of the damage done to the machine by the malware.

We have successfully identified the unknown in our ill-defined, shifting maelstrom, which, like I said yesterday, is infinitely harder than finding a needle in a haystack. We did not just find the unknown; we have detailed its composition, analyzed its effect on the machine, and identified its path of destruction.

I think we are onto something here.  This could revolutionize malware detection, creating a detection capability that is agnostic to attack type, vector, and delivery.

But wait, there is more…

The Nasdaq Breach Illustrates the Need for Continuous Monitoring

Dear Nasdaq, call me.  I am here to help.

The Wall Street Journal reported late Friday that Nasdaq had discovered that they had been hacked.  The hackers never made it to the trading programs, but instead infiltrated an area of Nasdaq called Directors Desk where directors of publicly traded firms share documents about board meetings.

What caught my eye was the following quote from the AP story filed about the attack: “…Nasdaq OMX detected “suspicious files” during a regular security scan on U.S. servers unrelated to its trading systems and determined that Directors Desk was potentially affected.”

People, people, people.  You have got to get on the continuous scanning bandwagon.  Seriously.

Connect the dots. The story says that “the hackers broke into the service repeatedly over more than a year”. Notice that the scans that found the suspicious files were “regular,” meaning periodic. Monthly? Quarterly? How many of these regular scans were run before the activity was discovered? I understand the need for network-based, agentless scans. I also know their limits, and deep down inside, in a place most IT security people don’t want to admit, so do you. “Regular” is not continuous.

Don’t stop yet, because the story says that the scan determined that the systems were “potentially affected”.  The diagnosis was partial because agentless scans, even credentialed scans, only get part of the story and therefore can only point out “potential” exploitation.

I have zero data about the actual attack and therefore am speaking in general terms.  But I am confident that a granular, continuous scanning tool should have been able to detect enough anomalous and exceptional artifacts on the Nasdaq servers to spot an attack like this.  The story says that suspicious files were ultimately discovered, so we know that there were persistent artifacts created by the attack.

This is a prime example of why you must have continuous, granular monitoring of endpoints and servers. Periodic scans, while useful, leave too many blind spots. A continuous scanning tool should have found the artifacts. And if the tool used change detection like Triumfant, it would have flagged the files as anomalous within 24 hours of the attack at a minimum.

Don’t throw the shield argument at me here.  These attacks went on for over a year.  Triumfant would have spotted the artifacts in 24 hours or less.  If you can’t see that difference and want to live the lie of the perfect shield, you are on the wrong blog.  In fact, if those files triggered our continuous scan that looks for malicious actions (an autostart mechanism, opening a port, etc.), Triumfant would have flagged the files within 60 seconds.
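To make the idea of a continuous scan for malicious actions concrete, here is a hedged sketch that polls for two of the markers mentioned above, new listening sockets and new files in autostart locations, and alerts on anything that appears between polls. The directory list, the interval, and the use of the third-party psutil library are assumptions for illustration; this is not how Triumfant’s agent is implemented.

```python
"""Illustrative watcher (not the Triumfant agent): poll for two classic
malicious-action markers, a new listening socket and a new autostart entry.
Requires the third-party psutil package and may need elevated privileges
on some platforms."""
import os
import time
import psutil

AUTOSTART_DIRS = ["/etc/cron.d", "/etc/systemd/system"]  # illustrative, Linux

def listening_sockets():
    return {(c.laddr.ip, c.laddr.port)
            for c in psutil.net_connections(kind="inet")
            if c.status == psutil.CONN_LISTEN}

def autostart_entries():
    found = set()
    for directory in AUTOSTART_DIRS:
        if os.path.isdir(directory):
            found.update(os.path.join(directory, name) for name in os.listdir(directory))
    return found

def watch(interval=30):
    known_sockets, known_autostarts = listening_sockets(), autostart_entries()
    while True:
        time.sleep(interval)
        sockets, autostarts = listening_sockets(), autostart_entries()
        for sock in sockets - known_sockets:
            print("ALERT: new listening socket", sock)
        for entry in autostarts - known_autostarts:
            print("ALERT: new autostart entry", entry)
        known_sockets, known_autostarts = sockets, autostarts

if __name__ == "__main__":
    watch()
```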

Regardless of which of our continuous scans would have detected the incident, Triumfant would have performed a deep analysis of the files and been able to show any changes to the affected machine that were associated with the placement of the suspicious files on the machine.  You likely could have deleted the word “potentially” from the conversation almost immediately.  I would also add that we would have built a remediation to end the attack.

Strong words for someone who has no details?  Perhaps.  But I would bet the farm that we would have found this attack in less than a year.

I don’t understand how we have arrived at a place where organizations don’t implement continuous scanning. Innovative solutions like Triumfant get throttled by old predispositions and the disconnect between IT security and the operations people who manage the servers and endpoints. The security teams are forced to use agentless tools because the ops people refuse to consider a new agent, even if that agent is unobtrusive and allows them to remove other agents in a trade of functionality. As a result, the IT security people are left to protect machines with periodic scans that cannot possibly see the detail available when an agent is used.

Machines get hacked, the organization is placed at risk, countless hours and dollars are spent investigating the problem, and then more hours and dollars are spent putting useless spackle over the cracks. Is that really worth dismissing even the consideration of an agent?

Let me put it a different way. We allow users to run whatever they want on endpoint machines, yet we block IT security from deploying granular, continuous scanning tools that can actually detect attacks such as the one we saw at Nasdaq.

What am I missing here?

Dear Nasdaq, call me.  Don’t rinse, repeat and be in the WSJ again.  I can help.  Promise.

Triumfant and Operation Aurora – Detecting the Advanced Persistent Threat

When new malicious attacks get a lot of attention in the press, we get asked the same question: “Would Triumfant have seen that attack?” Such is the case with the recent Google attack, aka Operation Aurora. Given the discussions around the Advanced Persistent Threat (APT) and attacks like Aurora, I asked our CTO, Dave Hooks, to analyze the available data and provide details on how Triumfant would respond if Resolution Manager had been deployed on an endpoint machine or server that was exposed to this attack. Dave’s response is illustrative of how Triumfant works in the context of an actual attack and how our unique capabilities enable Triumfant to detect an attack with characteristics common to those seen in APT.

I offer Dave’s analysis with the full disclosure that it is based solely on detailed analysis of the attack, and that we had no firsthand exposure to the attack itself. Dave broke his analysis into four parts: initial detection, diagnosis, knowledge base, and remediation, showing how Triumfant can identify an attack without prior knowledge, diagnose the attack and correlate all of the changes to the machine associated with it, and build a situational and contextual remediation to return the machine to its pre-attack condition.

———-

Analysis of Operation Aurora

Initial Detection

Operation Aurora creates several service keys during three specific steps: execution of the dropper, the first stage of installation, and the second stage of installation. Some of these keys are subsequently deleted, but at least one is persistent. The appearance of one or more of these keys would trigger the Triumfant agent’s 30-second scan cycle for markers of malicious activity, resulting in the agent requesting permission to execute a fast scan. The Triumfant server would respond within seconds, green-lighting the scan. The agent would then capture the state of the machine immediately after infection and send the data to the server for analysis within 3 minutes.
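For a rough sense of what watching for new service keys on a short cycle might look like, the Windows-only sketch below polls the Services hive and hands any newly created key to a trigger_scan callback, a hypothetical placeholder for requesting the fast scan described above. It illustrates the concept only; it is not the actual Triumfant agent.

```python
"""Windows-only sketch: poll the Services registry hive every 30 seconds and
treat any newly created service key as a marker of possibly malicious
activity. trigger_scan is a hypothetical callback standing in for the
'request a fast scan' step described above."""
import time
import winreg

SERVICES = r"SYSTEM\CurrentControlSet\Services"

def service_keys():
    keys = set()
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES) as hive:
        index = 0
        while True:
            try:
                keys.add(winreg.EnumKey(hive, index))
                index += 1
            except OSError:  # no more subkeys
                break
    return keys

def watch_services(trigger_scan, interval=30):
    known = service_keys()
    while True:
        time.sleep(interval)
        new_keys = service_keys() - known
        if new_keys:
            trigger_scan(new_keys)  # e.g. ask the server for permission to fast-scan
        known |= new_keys

if __name__ == "__main__":
    watch_services(lambda keys: print("new service key(s):", sorted(keys)))
```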

Diagnosis

The Triumfant server would receive the snapshot, recognize that it was executed as a result of suspicious behavior, and immediately compare it to the adaptive reference model (the unique context built by our patented analytics). The result of this comparison would be a set of anomalous files and registry keys. The fact that the files and keys associated with Operation Aurora have random names would guarantee that they would be perceived as anomalous, despite the fact that humans might tend to confuse them with legitimate Windows services. Further analysis would then be applied to the anomaly set to identify important characteristics and functional impacts. In this case the salient characteristics would be an anomalous service and a number of anomalous system32 files.

The discovery of an anomalous service would cause the Triumfant server to launch a probe requesting the Triumfant agent to explore the service further.  The probe would contain a list of all of the anomalous attributes found by the server during its analysis.  The Triumfant agent would activate a series of correlation functions to partition the anomalous attributes into related groups.  In this case it would group all of the anomalous attributes related to Operation Aurora.  It would then perform a threat analysis on this group and discover, for example, that it was communicating over the internet.  The results of the correlation and threat analysis would then be sent back to the Triumfant server.
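The correlation and threat-analysis steps can be pictured with the small, data-only sketch below: anomalous attributes that share a common token (here, a made-up random service name) are partitioned into one group, and the group is marked threatening if any member is tied to observed network activity. The record format and the network-activity input are hypothetical stand-ins for data a real agent would gather from the live machine.

```python
"""Data-only sketch of the correlation step: group anomalous attributes that
share a token, then apply a simple threat test to each group. The records
and the set of tokens with network activity are hypothetical inputs."""
from collections import defaultdict

def correlate(anomalies):
    """anomalies: list of dicts with 'kind', 'path', and a shared 'token'."""
    groups = defaultdict(list)
    for anomaly in anomalies:
        groups[anomaly["token"]].append(anomaly)
    return groups

def is_threatening(group, tokens_with_network_activity):
    """Flag the group if any member is tied to observed network activity."""
    return any(a["token"] in tokens_with_network_activity for a in group)

anomalies = [
    {"kind": "service_key", "path": r"HKLM\SYSTEM\CurrentControlSet\Services\svc4f2a",
     "token": "svc4f2a"},
    {"kind": "file", "path": r"C:\Windows\system32\4f2a.dll", "token": "svc4f2a"},
]
for token, group in correlate(anomalies).items():
    verdict = "malicious" if is_threatening(group, {"svc4f2a"}) else "benign"
    print(token, verdict, [a["path"] for a in group])
```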

At this point the diagnosis would be complete, and the Triumfant server would alert the appropriate personnel that an “Anomalous Application” had been discovered and that the data was available on the console. An analyst could then view all of the persistent attributes of Operation Aurora along with the corresponding threat analysis, and readily share the data with CIRT and forensics teams.

Knowledge Base

An analyst can save the analysis for an Anomalous Application such as Operation Aurora to the Triumfant database.  This would allow the analysis to be converted into a new recognition filter.  Recognition filters have a number of benefits.  First, they provide a very precise mechanism for storing and sharing knowledge about an incident.  Second, they allow the system to search for any other instances of that particular condition in other environments.  Third, they enable the operator to pre-authorize automatic responses such as remediation should that incident be detected again in the future.
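A recognition filter can be thought of as a persisted fingerprint of a diagnosed incident plus a pre-authorized response. The sketch below is one hypothetical way to store and match such a fingerprint; the field names and the overlap threshold are assumptions, not Triumfant’s actual format.

```python
"""Hypothetical recognition filter: persist the attribute set of a diagnosed
incident plus a pre-authorized response, then match future anomaly sets
against it. Field names and the overlap threshold are assumptions."""
import json

def save_filter(name, attributes, action, path):
    record = {"name": name, "attributes": sorted(attributes), "action": action}
    with open(path, "w") as f:
        json.dump(record, f)

def match_filter(anomaly_set, path, min_overlap=0.8):
    with open(path) as f:
        record = json.load(f)
    known = set(record["attributes"])
    overlap = len(anomaly_set & known) / len(known)
    return record["action"] if overlap >= min_overlap else None

save_filter("anomalous-app-001",
            {r"HKLM\SYSTEM\CurrentControlSet\Services\svc4f2a",
             r"C:\Windows\system32\4f2a.dll"},
            "auto_remediate", "filter.json")
print(match_filter({r"HKLM\SYSTEM\CurrentControlSet\Services\svc4f2a",
                    r"C:\Windows\system32\4f2a.dll",
                    r"C:\Windows\system32\extra.tmp"}, "filter.json"))
# -> auto_remediate (both stored attributes were seen again)
```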

Remediation

If a Triumfant server detected Operation Aurora as an anomalous application, it would have sufficient knowledge of the anomalous attributes to synthesize a remediation response. This remediation would be custom-built to exactly match the attributes of the anomalous application on an attribute-by-attribute basis. The ability to create remediations on the fly would enable the Triumfant system to surgically and reliably remove the components of Operation Aurora without reimaging the machine. It would also enable follow-on variants to be addressed without the need for new signatures.
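Conceptually, synthesizing the remediation means walking the diagnosed anomaly group and emitting a matching removal step for each attribute, as in the hedged sketch below. The step vocabulary is illustrative; a real remediation would also stop the service and verify each action.

```python
"""Sketch of building a remediation from the diagnosed anomaly group: each
anomalous attribute maps to a matching removal step, so the response mirrors
the attack's own footprint instead of relying on a signature."""
def build_remediation(anomaly_group):
    steps = []
    for attr in anomaly_group:
        if attr["kind"] == "file":
            steps.append(("delete_file", attr["path"]))
        elif attr["kind"] == "service_key":
            steps.append(("delete_registry_key", attr["path"]))
    return steps

group = [
    {"kind": "service_key", "path": r"HKLM\SYSTEM\CurrentControlSet\Services\svc4f2a"},
    {"kind": "file", "path": r"C:\Windows\system32\4f2a.dll"},
]
for action, target in build_remediation(group):
    print(action, target)
```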

———-

Again, let me state for the record that this is based on Dave’s analysis and not actual “live fire” data of our software responding to an actual attack.  But we are quite confident that Triumfant would have responded as described, detecting the attack and building a situational and contextual remediation.