South Korea Cyber Attacks: Incident Response or Proactive Monitoring?

Last week’s malware attacks against several South Korean banks and television networks have left security experts questioning how malware continues to penetrate these “well-protected” networks. But how do we define “well protected”? Incident response teams, such as those deployed after the recent attacks on the NYT and WSJ, are part of the solution, and no one disputes the value of their work; the problem is that retroactive source tracking has become the tool enterprises rely on exclusively. As a Network World article on the South Korea attacks rightly notes, “companies need to constantly examine hardware and software audit logs to track information that has left the network to look for abnormalities.” Constant, proactive monitoring at the endpoint is what was lacking in each of these recent attacks.

A recent Threatpost article discussed the specific Wiper malware used in the attack. While Wiper malware is nothing new, this variant is advanced in that it wipes every trace of itself from the infected computer, leaving incident response teams almost no way of detecting it. By the time incident responders arrived, the entire network had been shut down; had a proactive monitoring tool been in place, the malware might have been detected and remediated before that point. Once again, these attacks underline the necessity of analytics-based, constant monitoring to help mitigate similar cyber threats… not to mention the millions of dollars spent on incident response.

As with many of these large-scale attacks, the exact cause is unknown. In this particular case, experts speculate that an offline extraction attack was used. Offline attacks have become less common, as most major breaches use tools like spear-phishing emails to break into networks. Criminal networks operate on an insider, undercover plan that is rarely detected until the attack has finished. There is no silver bullet here, but the first step should always be detection in real time, not after the fact. So why are these attacks still successful? With the most sophisticated technology in the U.S., most malware can be detected in less than 15 seconds, yet we still rely on incident response teams that arrive hours or days after the attack. The bottom line: we should look to proactive endpoint protection, not retroactive scrambling.


Till next time,

John Prisco, President & CEO

APTs vs. AVTs? Cutting Through the Hype

Last month, security company Mandiant released a major report revealing that several organized cybercrime groups in China are actively trying to hack into U.S. entities. The report drew widespread attention because it is the first time there has been direct evidence – attribution, if you will – that the Chinese are responsible for what will likely become a very heated cyber war at some point.

The Chinese attacks that Mandiant found are commonly known as “Advanced Persistent Threats” or APTs, and these threats have been around for years. While, yes, APTs have successfully allowed other countries to steal U.S. intellectual property for cyber espionage, the security community has been battling these threats for quite some time. In the meantime, while our attention has been diverted towards APT1-style attacks, a more sophisticated and dangerous attack vector has emerged and will likely become more and more commonplace among cyber criminals: the Advanced Volatile Threat or AVT.

Unlike APTs, which create a pathway into the system and then automatically execute every time you reboot, an AVT comes in, exfiltrates the data it is looking for and then immediately wipes its “hands” clean, leaving no trace behind when the computer is shut down.  An AVT executes within the volatile memory of a computer, which means that once the machine is turned off, the AVT is gone.  It’s important to note that all malware starts in memory, but most of it doesn’t stay there. AVTs never install themselves on the hard drive; they take what they need from memory and are gone when the computer shuts down, before anyone even knows they were there.
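One heuristic defenders use against memory-only code (my own illustration, not a description of any particular product) is to look for running processes whose executable image has been unlinked from disk. On Linux, /proc exposes this directly, because the kernel appends " (deleted)" to the exe symlink target when the backing file is gone. A minimal sketch:

```python
import os

def memory_only_processes(proc_root="/proc"):
    """List PIDs whose executable image has been unlinked from disk --
    a common memory-forensics heuristic for code living only in RAM."""
    suspects = []
    for entry in os.listdir(proc_root):
        if not entry.isdigit():          # only per-process directories
            continue
        try:
            exe = os.readlink(os.path.join(proc_root, entry, "exe"))
        except OSError:
            continue                     # kernel thread or permission denied
        if exe.endswith(" (deleted)"):   # image no longer exists on disk
            suspects.append(int(entry))
    return suspects
```

This catches only one flavor of in-memory tradecraft, and only on Linux; code injected into a legitimate host process would not show up here.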

These “in-memory” attacks have been done for years, but what’s happening now is that attackers are getting more sophisticated and looking for creative ways to beat current defenses. In-memory attacks are a great way to do that, because most signature or behavior-based tools won’t detect them.  Based on our own research at Triumfant and what we’re seeing with our customers, we believe that over time, the use of AVTs will increase as the preferred attack vector among very intelligent and diligent cyber criminals. Each time they want to run an AVT attack, the cyber criminals have to get creative and find a new way to re-enter the system with an exploit. APTs, like the ones Mandiant identified, are already in the system and stay in the system, and oftentimes leave telltale fingerprints behind.

Given the level of sophistication involved, AVTs are often executed by state-run cyber criminals (as opposed to clumsy hackers) specifically to stay under the radar and remain undetectable. Everything about the AVT shouts “real time” – you have to catch it in the act, red-handed. Unlike an APT, which can take weeks or months to execute, if you don’t catch an AVT in real time, you’ve already lost.

We’re well aware that the security community has raised its eyebrows at AVTs – mainly because most pen testers and the like already know about these in-memory attack types, and there are some tools out there that address them. To be clear, we’re not saying AVTs are new. The problem is that up until this point, the industry as a whole has not been very good at detecting attacks in memory. Since cyber criminals are always ten steps ahead of us, we know they are constantly looking for creative ways to defeat our best defenses.  When APTs are no longer successful because our defenses have actually improved enough to detect them, we firmly believe AVTs will take the limelight and become the root cause of cyber espionage and other damaging threats in the future.

Simply put, you’ve been warned.

Till next time,

John Prisco, President & CEO

Why Security Technology Continues To Fail – And How We Can Stop The Cycle: Part 2

In our last post we addressed the fundamental failure of signature-based technologies, but an effective solution is within reach.

There is a slew of new technology emerging on the market that promises to solve the “signature problem,” but the truth is that some of it doesn’t fix the problem at all. The following are a few tips and observations to help you and your organization evaluate the available solutions and choose the ones that will best defend your enterprise.

1. Current signature-based security technologies are increasingly failing to stop malware. Evaluating what current technologies actually target is a key first step in determining whether they will work for your enterprise. Many modern signature-based technologies are geared primarily toward classic consumer nuisance attacks – the “throw malware at a machine and hope it sticks” variety – and that focus leaves a wide-open door for targeted attacks engineered by an adversary with a specific end goal in mind. Targeted attacks give signature-based tools nothing to work with: if a signature must be created first, it will likely arrive too late to confront the problem. Cyber criminals have specific targets. It’s our job as security pros to be just as specific.

2. Older vendors and technologies are being recast as solutions – but are no better at stopping the problem. Signature-based security tools look at millions of signatures, but a signature has to be written before the technology can recognize the malware and stop it. With cloud computing, older vendors are recasting solutions that neglect new platforms: cloud-based signature repositories offer more of the same – an inelegant solution to the problem. Remember, the attacker only needs one piece of malware to slip through, and your system has been compromised. Few security companies will admit to this “one and done” reality, because they worry their product can’t effectively fight off every attack – and with good reason. Even with wonderful, sophisticated signature databases, criminals can come up with the one exploit that bypasses a network.

3. Technologies that detect specific types of behavior and system changes have the best chance to actually find and eradicate next-generation threats. Behavior detection is up and coming, but focusing solely on behavior can quickly leave a system vulnerable. Products that analyze the intelligence of an attack do have the capability to find zero-day exploits – they raise a red flag you wouldn’t expect from detection systems that are solely anomaly-based. Combining behavior detection with anomaly-based detection and removal is the essential strategy.
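As a concrete illustration of that combination (a toy sketch with made-up rule names and thresholds, not any vendor's algorithm), a detector can fuse a rule-based behavior score with a statistical anomaly score and flag an endpoint event when either signal alone is strong, or when both are moderately elevated:

```python
from statistics import mean, pstdev

# Hypothetical behavior rules for illustration only.
SUSPICIOUS_BEHAVIORS = {"registry_autorun_added", "process_hollowing", "lsass_read"}

def behavior_score(events):
    """Fraction of observed events matching known-bad behavior rules."""
    if not events:
        return 0.0
    return sum(e in SUSPICIOUS_BEHAVIORS for e in events) / len(events)

def anomaly_score(value, baseline):
    """Standard deviations `value` sits away from the baseline history."""
    sd = pstdev(baseline) or 1.0
    return abs(value - mean(baseline)) / sd

def should_flag(events, value, baseline, behavior_cut=0.5, anomaly_cut=3.0):
    b = behavior_score(events)
    a = anomaly_score(value, baseline)
    # Either detector firing strongly, or both moderately, raises a flag.
    return b >= behavior_cut or a >= anomaly_cut or (b >= 0.25 and a >= 1.5)
```

The point of the third clause is that two weak signals, taken together, can justify a flag that neither detector would raise on its own.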

4. Companies and government agencies can build a new strategy that not only warns about new threats, but actually helps prevent them. Although complete prevention is unattainable, companies and government agencies need to focus on detecting AND removing the threat. Most products on the market focus on the detection side and omit removal, leaving systems open to exploitation. Taking measures on both the network and endpoint fronts is crucial if you don’t want to leave your systems exposed. The network is the easy part. Endpoint removal is the challenge, and the key.

The sophistication of today’s malware calls for a fundamental shift in the way anti-malware technology detects and remediates new threats – and in the way people and processes respond. As long as technology and people continue to rely on what they know – such as signatures – they will continue to be defeated by what they don’t know, such as polymorphic malware. And as long as that trend continues, the tide of new breaches and infections will continue to rise.

It’s time for real change in security thinking, both at the technology level and at the process level. And if we don’t take action soon, 2013 is likely to be the worst year of malware yet.

Till the next post,

John Prisco, CEO

Why Security Technology Continues To Fail – And How We Can Stop The Cycle: Part 1


In 2012, as in previous years, commercial industry and government agencies spent record numbers of dollars on information security. Yet in 2012, as in previous years, the issue of breaches and malware infections grew more acute than in any year before.

Just look at the numbers. The most recent Verizon Data Breach Investigations Report indicates that breaches involving hacking and malware were both up considerably last year, with hacking involved in 81 percent of incidents and malware involved in 69 percent. According to the Ponemon Institute’s Cost of a Data Breach Report, malicious attacks on enterprise data rose last year, and the cost of a breach is at an all-time high ($222 per lost record). According to figures posted last month by Panda Labs, more than 6 million new malware samples were detected in the third quarter alone — and more than a third of machines across the globe are already infected.

So what does this tell us? Security technology is fundamentally failing. And we, as an industry, need to take action.

One of the chief reasons for this failure is our continued reliance on signature-based anti-malware technologies, such as traditional antivirus and intrusion prevention systems. Such systems block malware by blacklisting it – an approach that works only when the malware has been recognized and its “signature” is recorded in memory. Today’s sophisticated malware avoids this defense by constantly changing, morphing into new “zero-day” exploits that have not been detected or recorded.
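The mechanics are easy to see in a few lines. The sketch below (illustrative bytes, not real malware) blacklists a sample by its SHA-256 hash, then shows that a single-byte mutation yields a hash the blacklist has never seen, which is exactly the gap polymorphic malware exploits:

```python
import hashlib

blacklist = set()

def signature(payload: bytes) -> str:
    """A signature in its simplest form: a cryptographic hash of the sample."""
    return hashlib.sha256(payload).hexdigest()

def is_blocked(payload: bytes) -> bool:
    return signature(payload) in blacklist

original = b"\x90\x90malicious-payload"
blacklist.add(signature(original))   # the sample has been seen and recorded

mutated = b"\x91" + original[1:]     # trivial one-byte mutation

assert is_blocked(original) is True
assert is_blocked(mutated) is False  # effectively a zero-day to the blacklist
```

Real antivirus signatures are more elaborate than a whole-file hash, but the underlying weakness is the same: any variant not yet recorded passes through.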

Over the past month, several news organizations have once again pointed out the flaws in signature-based technologies, but even these reports largely miss some fundamental points. A recent piece in the New York Times, for example, discusses the failure of antivirus software to stop next-generation malware. But antivirus software’s imperfections have been known for years, and the Times did very little to advance the discussion of actual solutions.

Dark Reading on Dec. 27 took a more current view of the problem, discussing the flaws in today’s “layered” antimalware defenses. This article points out the flaws in today’s signature-reliant enterprise security strategies, but again, it fails to deliver much depth on how to solve the problem.

The fact is that signature-based technologies such as AV and IPS – still the cornerstones of many enterprise security strategies – are actually getting *worse* at preventing malware infections. A study published last month by Imperva indicates that the initial rate of detection of new viruses by AV solutions was less than 5 percent. While AV vendors took issue with the methods of this study, the substance of the findings is clear: signature-based solutions are failing at record rates.

With compliance regulations drowning our enterprise security professionals, proactive threat management falls by the wayside, and new technology solutions continue to neglect the data right in front of our eyes. It’s time to end the failed approach and address the real issues. How will you protect your corporate network?

In our next post we’ll discuss how we, as security practitioners, can implement technology that truly combats the constant cyber threat cycle.

Till next time,

John Prisco, CEO

Story on Targeted Attacks Dispels the Presumption of Complexity

I came across a story today that really speaks to the mythology of targeted attacks and their much-hyped subset, the Advanced Persistent Threat.  In a story on the Threatpost Blog by Paul Roberts (@paulroberts) called “Attackers Reused Adobe Reader Exploit Code From 2009 In Extremely Targeted Hacks,” Roberts provides insightful details on a targeted attack that used an Adobe exploit to go after system integrators that specialize in working with the DoD.

The story nicely shows that targeted attacks don’t have to use a cutting-edge zero-day exploit or some new DeathRay-level malware to succeed.  In this attack, the attackers went after an Adobe vulnerability (since patched) designated CVE-2011-2642 (first reported December 9, 2011) and leveraged exploit code dating back to 2009.  The malware planted was the Sykipot Trojan, malicious code already well known to the IT security industry.

Too often, I think, business people hear “Targeted Attack” or “Advanced Persistent Threat” and get a visual image of super-smart adversaries in white lab coats creating exceedingly complex and sophisticated attacks.  They assume that targeted means specialty-built attacks that take enormous effort to conceive, construct and deploy.  They see it as rocket science.  And in some ways, I think they use these misconceptions to talk themselves into believing that no one would expend such effort to target their systems, creating a false sense of security.  They apply the business concept of “barriers to entry” to presume they are safe.

As this analysis shows, a targeted attack can be cobbled together from the spare parts on an attacker’s workbench. The barriers to entry on the technical side of targeted attacks are nominal and easily scaled. All it takes is a motivated, intentional adversary who believes your systems hold something of value, and you can be the victim of a targeted attack.

As Roberts’s story shows, companies cannot hide behind the false presumption that inherent complexity reduces the odds they will be the victim of a targeted attack or APT.  Companies need to step up to a rapid detection and response strategy as part of their IT security thinking.  Triumfant excels at detecting targeted attacks and the advanced persistent threat, and is an example of a solution that can close the security gaps that leave companies open to such attacks.

USB Drives – Cool Tool or Malware Delivery Device?

Behold the USB drive. Simple. Functional. Efficient. The USB device is also a symbol of all that makes IT security so difficult. But take heart, because the USB device is also illustrative of the functions and benefits of Triumfant.

Why does the USB key represent the difficulties with IT security? Because a USB device is an infiltration and exfiltration method wrapped into one tidy package. The bad guys are using USB devices to deliver malicious payloads to host machines because this vector readily evades perimeter network defenses that use techniques like deep packet inspection and sandboxing. Those techniques require that the attack come across the wire to work, so attacks delivered by a USB device easily fly under their radar. The USB device has become a very effective mechanism for delivering the targeted, sophisticated zero-day attacks and advanced persistent threats that are becoming increasingly difficult to detect.

For an example, start with Stuxnet, the malicious attack that grabbed more headlines than a Britney Spears midnight trip for a haircut. Stuxnet evaded protection by using USB drives for transport to the host machines from which the attack spawned.

In regards to exfiltration, there is no simpler tool for offloading data than a USB device. While this has great utility, it is a major problem in the context of data loss prevention (DLP) activities, as once data is loaded onto a device there is absolutely no control of where that data may land. All bets are off.

You would think USB devices would be the bane of every IT security person on the planet, yet security vendors give them away at industry tradeshows. Most people will pop in a USB key with little thought of the risk, so a “just say no” approach is not effective. Our CTO was at a customer site recently and was told that USB devices were not allowed there. Minutes later he produced a report showing that USB devices had been used on over 20% of the machines in the past two weeks. So much for strongly worded guidelines.

The problems surrounding USB devices are useful in pointing out the value of Triumfant:

Malware detection and remediation. Triumfant will detect attacks that are delivered to a machine via a USB device, analyze the attack, and build a remediation to stop the attack and repair all of the damage to the machine. Infection to remediation in minutes. Remember, Triumfant detects attacks by identifying and analyzing changes to the machine, and is therefore attack vector agnostic.

Continuous enforcement of policies and configurations. With Triumfant you can build and enforce policies that disable the use of removable media like USB devices. Triumfant will set the policy and remediate any machine found to be out of compliance.

Continuous monitoring/situational awareness. Your organization may choose not to disable USB devices; Triumfant can report which machines have had a USB device inserted and can identify machines with unusually high levels of data movement. Alternatively, if you do disable the devices, you may still have users with admin rights to their machines, enabling them to change the configuration and override the policies; Triumfant can identify the machines where the policy has been altered. Triumfant is not a data loss prevention (DLP) tool and therefore cannot tell you what, if any, data was exfiltrated, but it can tell you that such an exfiltration was possible.

In summary, Triumfant is able to protect machines from attacks delivered by USB devices, to enforce configurations that disable the use of USB devices, and to provide insight into usage patterns of USB devices.

If only Triumfant could help me find the numerous USB devices my teenagers borrow and never return. Of course, once they have them, perhaps it is best I don’t plug them into my machine.

Continuously Monitoring the Haystack (Needle Inventory Report)

In the previous two posts (Part 1 here and Part 2 here) I first shot down the much-overused “Finding a Needle in a Haystack” analogy by showing that the problem facing IT security professionals is far more complex, and then defined a new approach that will find the <unknown> in an <ill-defined, shifting maelstrom>.  I closed the second post by adding a “but wait, there is more,” so here it is: what if I told you that the approach I described not only solves the needle-in-the-haystack problem, it also provides an innovative approach to continuous monitoring and situational awareness.  Without changing the core process one iota.

Think of it – this is like inventing cold fusion and finding out that the solution repairs the ozone layer.  Or inventing a beer that tastes great and is less filling.  Or something like that.

To recap, my thesis is that the analogy assumes you are looking for a known thing in a well-defined and completely homogenous population, when in fact we do not know what we are looking for in most cases, and the machine population that doubles as our proverbial haystack is anything but well-defined and homogenous.  Therefore, the proper expression of “Finding a Needle in a Haystack” is in fact “Finding an <unknown> in an <ill-defined, shifting maelstrom>”.

I then outlined how you could solve the problem by first building a normalized view of the endpoint population to create a baseline of the ill-defined, shifting maelstrom, effectively giving you a haystack.  Then you could continuously monitor machines for changes under the assumption that any change may be someone introducing something to your haystack.  By analyzing and grouping the changes and using the baseline as context, you could then assess the impact of those changes to the machine and determine if the changes represented a needle (malicious attack) or just more hay.
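That pipeline can be sketched in a few lines (my toy construction for illustration; the real analytics are obviously far richer): snapshot the endpoint's persistent attributes, diff against the normalized baseline, and group the changes for assessment:

```python
def diff_against_baseline(baseline: dict, snapshot: dict):
    """Group endpoint changes as added / removed / modified attributes.

    Keys are attribute names (hypothetical ones in the example below),
    values are the observed settings."""
    added    = {k: v for k, v in snapshot.items() if k not in baseline}
    removed  = {k: v for k, v in baseline.items() if k not in snapshot}
    modified = {k: (baseline[k], snapshot[k])
                for k in baseline.keys() & snapshot.keys()
                if baseline[k] != snapshot[k]}
    return {"added": added, "removed": removed, "modified": modified}
```

A new autorun entry would surface under "added", a stopped service under "modified", and it is this grouped change set, judged in the context of the baseline, that gets classified as needle or hay.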

Now for the continuous monitoring part.  Because I do not know what I am looking for, I have no choice but to scan everything persistent on the host machine.  Since I am building my normalized model on the server, I have to move the raw data to the server and store that data into some form of data repository.  Logic dictates that the repository that I use to power the process of finding needles in the haystack can be used as the data source for all forms of situational awareness and continuous monitoring activities.
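The repository idea is easy to demonstrate. In the hedged sketch below (SQLite and the attribute names are my stand-ins, not the actual product schema), a situational-awareness question becomes a query against the server-side store, with no new endpoint scan:

```python
import sqlite3

# A server-side repository of per-endpoint scan data (in-memory for demo).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE attributes
              (machine TEXT, attr TEXT, value TEXT)""")
rows = [
    ("host-01", "app:java", "1.6.0_29"),
    ("host-01", "usb:seen", "2013-03-18"),
    ("host-02", "app:java", "1.7.0_17"),
]
db.executemany("INSERT INTO attributes VALUES (?, ?, ?)", rows)

# "Which machines had a USB device inserted?" -- answered entirely from
# the repository, without touching any endpoint.
hits = [m for (m,) in db.execute(
    "SELECT machine FROM attributes WHERE attr = 'usb:seen'")]
```

Application inventories, patch reports and compliance checks all reduce to the same pattern: another SELECT over data already collected.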

It gets better!  I can perform those activities without incurring any additional burden on the endpoints or on the network.  I can ask questions of the repository without the need to collect additional data from the endpoints.  Most solutions use an agent that scans only segments of the data, or use an agentless scan to collect segments of the data.  If either is not currently scanning the data needed, the process has to be altered and repeated.  For example, a new scan script might have to be pushed to the agents.  Furthermore, organizations often run multiple scans using different tools, each dutifully thrumming the endpoints for data and often times collecting much of the same information collected by the scan that ran an hour earlier.

Of course, I must continuously refresh the data in my repository for it to stay accurate.  Luckily, I already thought of that: I am using my agent-based precision to detect and cache changes on each host machine in a very efficient way.  I then send those changes to the server once per day and use them to refresh the repository.  Given that a large number of data attributes rarely change, sending only the changes across the wire keeps the network burden to a minimum.  Obviously, I have to move down a full scan when I initially deploy the agent, but subsequent updates using the change-data-capture approach produce comparatively small answer sets per machine per day.
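The change-data-capture step can be sketched as follows (a simplification under my own naming): each daily sync ships only the attributes whose values changed since the last sync, and the server applies that small delta to refresh its copy:

```python
def daily_delta(previous: dict, current: dict) -> dict:
    """Attributes that changed since the last sync; None marks a removal."""
    delta = {k: v for k, v in current.items() if previous.get(k) != v}
    delta.update({k: None for k in previous if k not in current})
    return delta

def apply_delta(repository: dict, delta: dict) -> None:
    """Refresh the server-side copy in place from a daily delta."""
    for k, v in delta.items():
        if v is None:
            repository.pop(k, None)   # attribute disappeared from the host
        else:
            repository[k] = v         # new or changed attribute value
```

Since most attributes are stable day to day, the delta is tiny relative to the full attribute set, which is what keeps the network burden minimal.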

My result?  I have a complete repository of the granular data for each endpoint machine, efficiently collected and managed and available for reporting and analysis.  I can feed my highly sophisticated information portals, scorecarding processes, and advanced analytics.  I can build integrations to feed other systems and applications in my security ecosystem.  I can create automated data feeds, such as the feed required to comply with the FISMA CyberScope monthly reporting requirement.  Best of all, this activity does not require me to go back to the endpoint ceaselessly for more information.

I have implemented true continuous monitoring and comprehensive situational awareness without running any incremental data collection processes.  I am continuously scanning every persistent attribute on every machine.  My data collection routine to find the needles in my haystack is the only data collection process required!  You want situational awareness?  From this data I can readily produce reports for application inventories, patch inventories, vulnerabilities, and non-compliance with policies and configurations.  I can tell you the machines that have had a USB key plugged into them in the past week.

I can keep a history of my machine scans and show you the detail of the granular elements of what I am scanning.  Think of the impact for investigating incidents on a machine.  You could pull up the snapshot for any given day, or select two dates and generate a summary of the diffs between the two images so you could see exactly what changed on the machine.

I am just getting started.  Since I have my normative baseline in place to interpret the changes I detect on each machine, I can also provide reports on anomalous applications and other activity that is exceptional.

To recap, the data collection process required to implement my approach to finding the <unknown> in an <ill-defined, shifting maelstrom> provides true continuous monitoring and broad situational awareness.  Or you could say that my approach is a continuous monitoring solution that can identify the <unknown> in an <ill-defined, shifting maelstrom>.  Either way, what results is an accurate picture of my <ill-defined, shifting maelstrom> without having to run any additional scans or data collection, so I get the benefits without incremental burden to the endpoint machines or the network.

But wait, there is more.