Triumfant Expands Partner Connect Program with Addition of Defensative

We are pleased to announce that Defensative has joined the Triumfant Partner Connect Program. A leading provider of proactive, enterprise-level security management services, Defensative is the latest MSSP to join Triumfant’s expanding partner roster.

Offering around-the-clock monitoring of client networks, Defensative gives organizations the ability to review real-time security alarms and vulnerabilities, along with guidance on how to better protect their networks. By strategically adding companies like Defensative to Partner Connect, Triumfant can offer small and mid-size businesses access to advanced malware protection, elevating their network security to a level on par with Fortune 1000 enterprises.

Triumfant CEO John Prisco explains: “Triumfant understands that in an evolving marketplace, no single company can deliver all parts of a comprehensive, critical solution.  With the addition of Defensative to our MSSP program, we are able to provide more complete end-to-end IT solutions for our customers.”

Since its launch in October 2014, Triumfant has signed six new resellers to Partner Connect and expects to expand its MSSP program to 15 partners worldwide by year’s end. For more information about Triumfant and the Partner Connect program, please visit



Triumfant’s John Prisco Talks IRS Breach on Knowledge@Wharton SiriusXM


The data breach at the IRS that left the personal information of 104,000 taxpayers in the hands of thieves was the topic of the May 28 Knowledge@Wharton program broadcast on BusinessRadio Channel 111, SiriusXM.  Triumfant’s President and CEO John Prisco joined host Dan Loney to discuss the breach, how it happened, what it means for the future of government agencies and how this breach impacts the average individual.

The unprecedented surge in online tax scams by increasingly sophisticated criminals, potentially backed by the Russian government, has challenged the IRS to respond quickly to get ahead of the fraudsters, especially during this year’s tax season after hackers targeted TurboTax, the country’s largest online filing service. Tax officials estimate that the government has lost billions of dollars in recent years to fraudulent refunds filed by hackers who steal personal information on tax returns, then use it to claim a refund in a taxpayer’s name before they file.

Loney: How notable or worrisome is the IRS breach?

Prisco: The IRS breach indicates a more difficult and worrisome problem:  companies (and government agencies) don’t practice enough cyber hygiene to prevent these types of breaches.  Had the IRS had two-factor authentication in place, this breach wouldn’t have occurred. Now the public is paying the price.

This breach was really a perfect storm. Not only was information gathered by very patient adversaries, but tax preparation software was hacked as well. TurboTax was too complacent and didn’t build enough security measures into its software to properly guard against skilled adversaries.

Loney: How difficult is it to put in two-factor authentication?

Prisco:  Not difficult at all.  Many banks do it today, where they send a text message to your phone with a code to complete your transaction, login or filing.  We’re seeing with recent breaches, the IRS and Anthem breaches in particular, that very rich and personal information like Social Security Numbers, medical records, email addresses, credit card numbers, are being targeted. I think we’re going to see this same data being used by perpetrators in years to come.  Adversaries are skilled and patient. But this can be avoided, if prudent steps are taken by the good guys to make it harder for the bad guys to succeed.
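For the technically curious: the text-message or app codes Prisco describes are typically time-based one-time passwords. Below is a minimal sketch of the standard TOTP scheme (RFC 6238) in Python, using only the standard library – illustrative only, not any particular bank’s implementation, and the secret and skew window shown are assumptions:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def verify(secret: bytes, submitted: str, at: float) -> bool:
    """Accept the code for the current or previous 30-second step (clock skew)."""
    return any(hmac.compare_digest(submitted, totp(secret, at - step_off * 30))
               for step_off in (0, 1))
```

A server that texts you a code and checks your reply is doing essentially this comparison; the hard part is key distribution and delivery, not the math.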

Loney: So if companies aren’t fully invested in IT security, they are dropping the ball?

Prisco: Very true. We see examples of major breaches occurring on a monthly basis, and they will continue to occur. Take the Sony hack, for example. It was almost a ‘man amongst boys’ scenario: North Korea had very sophisticated capabilities, while Sony was running outdated, unsupported, and unpatched Windows XP machines. Once again, basic cyber hygiene was ignored, making it very easy for the attackers not only to take huge amounts of data but also to cripple systems like payroll. If companies aren’t invested at the CEO or board level in taking proper security measures, it can be a disaster for the company.

Loney:  The IRS said it will contact the 104,000 taxpayers whose information was compromised, as well as the 100,000 for whom attempts were unsuccessful. The first group will be offered credit monitoring, while the second will be warned that thieves have their personal information.  Is this enough?

Prisco: Unfortunately, this seems a lot like taking home the home version of the game after you’ve lost on the game show. The problem isn’t going away. We’re tossing 20th-century technologies at 21st-century adversaries. The class of security products used today, like anti-virus, relies on prior knowledge or signatures, which is effective against only 20-25% of attacks. The future of cybersecurity depends on the new products entering the market now – products that can analyze large data sets, leverage machine learning and examine the behaviors taking place on the endpoint, then take action based on those behaviors. As long as we continue to use old technology, it will be easy for adversaries to beat us.

Loney: What recourse is there for the IRS?

Prisco: Besides installing stronger security systems and flagging anything suspicious in a taxpayer’s return – from addresses that don’t match what the government has on file to unusually large deductions for self-employed people – they should also be looking at machine behaviors. If the Get Transcript function on their website is being used more frequently than normal, that is an indicator of possible malicious activity; something could be amiss, requiring further investigation. I like to say, “never send a human to do a machine’s job.” Install endpoint security software that continuously monitors machine behavior, investigates anomalous activity and performs automatic remediation.
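The frequency check Prisco describes can be sketched in a few lines. The window size and threshold below are invented for illustration, but the idea – compare the current volume of requests to a function like Get Transcript against a learned baseline – really is this simple:

```python
from collections import deque

class RateMonitor:
    """Flag when request volume in the current window far exceeds the recent baseline."""

    def __init__(self, history: int = 24, threshold: float = 3.0):
        self.windows = deque(maxlen=history)   # request counts from prior windows
        self.threshold = threshold             # multiple of baseline that counts as anomalous

    def observe(self, count: int) -> bool:
        """Record one window's request count; return True if it looks anomalous."""
        if len(self.windows) == self.windows.maxlen:
            baseline = sum(self.windows) / len(self.windows)
            anomalous = baseline > 0 and count > self.threshold * baseline
        else:
            anomalous = False                   # still learning a baseline
        self.windows.append(count)
        return anomalous
```

Fed hourly request counts, the monitor stays quiet through normal traffic and fires when volume jumps well past the learned average – the “more frequency than normal” signal described above.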

Loney: It seems like we’ve only reached the tip of the iceberg and have a long way to go before we can claim victory, if ever?

Prisco: If you look at BYOD – the Bring Your Own Device to work phenomenon – it gets a lot of media attention as a possible security gap. But the truth is that mobile devices aren’t often targeted yet, because it’s still so easy to penetrate a regular work computer. Here again, companies don’t practice basic hygiene like patching their systems. Take, for example, Microsoft’s decision to end support for Windows Server 2003. It’s like ringing the dinner bell for hackers. If you are running Windows Server 2003, expect it to be hacked.

Loney: Is it understood among large companies that security needs to be their #1 priority?

Prisco: A small percentage of companies feel and act that way. Most run a skeleton security crew; they don’t have sufficient staff or budget to properly prevent targeted attacks. Security is still viewed as a cost center rather than a strategic necessity. Attitudes need to change if we are to be triumphant against hackers.


Endpoints: Cyber Security’s Growing Blind Spot

While attacks on user devices become a favorite point of entry for attackers, most enterprises can’t see what’s happening to them

For most enterprises, the endpoint has become the weakest link – and the attacker’s target of choice. Take a look at this year’s Verizon Data Breach Investigations Report. Endpoints – desktops, laptops, and ATMs – accounted for more than three quarters (77 percent) of all breaches last year. The top two categories of people associated with breaches were end users and customer service representatives (63 percent). Almost all (95 percent) of the breaches in the study involved some type of social engineering attack on a user.

Yet, while most of the bad guys have recognized the endpoint as the low-hanging fruit on the enterprise data tree, most enterprises have not. While corporations have spent billions of dollars on technology for monitoring and managing security in the data center, on servers, and in the network, most of them have very little visibility into the endpoint. From a security perspective, in fact, the endpoint has become the enterprise’s most dangerous blind spot.

Think about it. In your enterprise, the desktops, laptops, and smartphones might have antivirus software, authentication tools, or even full-disk encryption. But do you have a way to spot users who have turned off their personal firewalls?  Can you tell if an end station has changed its configuration to connect to an insecure wi-fi network? If an infected PC became a zombie in a new botnet, how long would it take your administrators to detect this activity?

The fact is that most enterprises have instrumented their servers and networks to report anomalous activity, but they generally have not instrumented their endpoints to do the same. Oh, they might have a way to track a lost laptop or spot known infections, but there generally is no way to detect configuration changes or behaviors that might indicate zero-day malware activity. As the Verizon report indicates, they may go for months, even years, before spotting a problem.
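Detecting the configuration changes described above comes down to a simple pattern: snapshot the endpoint’s state, then diff later snapshots against that baseline. Here is a minimal sketch in Python – the configuration keys are hypothetical, and a real agent would track far more state (files, registry keys, services, network settings):

```python
import hashlib
import json

def snapshot(config: dict) -> dict:
    """Hash each configuration item so changes can be spotted without storing raw values."""
    return {key: hashlib.sha256(json.dumps(value, sort_keys=True).encode()).hexdigest()
            for key, value in config.items()}

def diff(baseline: dict, current: dict) -> dict:
    """Report items added, removed, or modified since the baseline was taken."""
    return {
        "added":    sorted(current.keys() - baseline.keys()),
        "removed":  sorted(baseline.keys() - current.keys()),
        "modified": sorted(k for k in baseline.keys() & current.keys()
                           if baseline[k] != current[k]),
    }
```

A user switching off a personal firewall, for instance, would surface as a “modified” item on the next snapshot – exactly the kind of change that goes unnoticed when the endpoint isn’t instrumented.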

Much of the problem stems from enterprises’ resistance to the concept of “agents,” those small pieces of software that sit on the endpoint and report anomalous behavior back to the security team. Years ago, agents were bulky and intrusive, often creating performance problems on the endpoint device or even preventing devices from operating properly. And there were so many different products that proposed to add an agent to the end station that many enterprises boycotted them altogether.

Today, however, agents have become the last, best hope for tracking risky end user behavior. Yesterday’s signature-based tools simply no longer cut the mustard – with so many new threats emerging every day, they have become bloated and unable to stop the rapidly-morphing types of attacks that are being sent against them. Perhaps even more importantly, most endpoint defense tools don’t flag the security team when key features are turned off by the user, or when telltale malware behaviors have been initiated. The only way to recognize these developments is through an agent that is tuned to recognize them and cannot be turned off by the end user.

And agent technology has improved significantly. My company, Triumfant, has developed a lightweight agent that can report virtually any change on the endpoint without affecting performance, adding appreciable memory overhead, or incurring high costs. And we are not the only company that is doing so – a number of other security vendors also are using lightweight agents in their products that help enterprises monitor the security posture of the end station without inhibiting the user experience or adding heavy overhead. Today, it is practical to instrument the endpoint with real monitoring and change management technology – the same types of technology that previously were limited to servers and enterprise networks.

As long as attackers and malware developers know that their exploits at the endpoint will not be detected, they will continue to take advantage of them. It’s time for enterprises to extend their visibility into endpoint security and stop the attacks before they happen – instead of months or years later, after Verizon or some other third party has been called in to pick up the pieces after a breach.

– John Prisco, CEO, Triumfant

South Korea Cyber Attacks: Incident Response or Proactive Monitoring?

Last week’s malware attacks against several South Korean banks and television networks have left security experts questioning how malware continues to penetrate these “well-protected” networks. The problem is, how do we define “well protected”? Incident response teams, such as those used after the recent attacks on the NYT/WSJ, are part of the solution – no one is saying they don’t do good work – but tracking the source after the fact has become the only tool many enterprises fall back on. As a Network World article on the South Korea attacks accurately put it, “companies need to constantly examine hardware and software audit logs to track information that has left the network to look for abnormalities”. Constant, proactive monitoring of the endpoint is what was lacking in each of these recent attacks.

A recent Threatpost article discussed the specific Wiper malware used in the attack. While Wiper malware is nothing new, this variant is advanced in that it wipes any trace of itself from the infected computer, leaving incident response teams almost no way to detect it. By the time incident responders arrived, the entire network was shut down; had a proactive monitoring tool been in place, the malware could have been detected and potentially remediated. Once again, these attacks underline the necessity of analytics-based, continuous monitoring to help mitigate similar cyber threats – not to mention the millions of dollars spent on incident response.

As with many large-scale attacks, the root cause is unknown. In this particular case, experts speculate that an offline extraction attack was used. Offline attacks have become less common, as most major breaches use tools like spear-phishing emails to break into networks. Criminal networks operate from an insider, undercover plan that is rarely detected until the attack has finished. There is no silver bullet here, but the first step should always be detection in real time, not after the fact. So why are these attacks still successful? With today’s most sophisticated technology, most malware can be detected in under 15 seconds, yet we still rely on incident response teams that come in hours or days after the attack. The bottom line: we should look to proactive endpoint protection, not retroactive scrambling.


Till next time,

John Prisco, President & CEO

APTs vs. AVTs? Cutting Through the Hype

Last month, security company Mandiant released a major report revealing that several organized cybercrime groups in China are actively trying to hack into U.S. entities. The report drew widespread attention because it marked the first time direct evidence – attribution, if you will – tied the Chinese to attacks that will likely fuel a very heated cyber war at some point.

The Chinese attacks that Mandiant found are commonly known as “Advanced Persistent Threats” or APTs, and these threats have been around for years. While, yes, APTs have successfully allowed other countries to steal U.S. intellectual property for cyber espionage, the security community has been battling these threats for quite some time. In the meantime, while our attention has been diverted towards APT1-style attacks, a more sophisticated and dangerous attack vector has emerged and will likely become more and more commonplace among cyber criminals: the Advanced Volatile Threat or AVT.

Unlike APTs that create a pathway into the system and then automatically execute every time you reboot, an AVT comes in, exfiltrates the data it is looking for and then immediately wipes its “hands” clean – leaving no trace behind as the computer is shut down.  An AVT executes within the volatile memory of a computer, which means that once it is turned off, the AVT is gone.  It’s important to note that all malware STARTS in the memory, but it doesn’t stay there. AVTs take what they need from the memory and get out once the computer shuts down and before anyone even knows they were there – they don’t install themselves on the hard drive.

These “in-memory” attacks have been done for years, but what’s happening now is that attackers are getting more sophisticated and looking for creative ways to beat current defenses. In-memory attacks are a great way to do that, because most signature or behavior-based tools won’t detect them.  Based on our own research at Triumfant and what we’re seeing with our customers, we believe that over time, the use of AVTs will increase as the preferred attack vector among very intelligent and diligent cyber criminals. Each time they want to run an AVT attack, the cyber criminals have to get creative and find a new way to re-enter the system with an exploit. APTs, like the ones Mandiant identified, are already in the system and stay in the system, and oftentimes leave telltale fingerprints behind.

Given the level of sophistication involved, AVTs are often executed by state-run cyber criminals (as opposed to clumsy hackers), specifically to make sure they remain under the radar and completely undetectable. Everything about the AVT shouts “real time” – you have to catch it in the act, red-handed. If you don’t catch it in real time, you’ve already lost, unlike an APT, which can take weeks or months to execute.

We’re well aware that the security community has raised its eyebrows at AVTs – mainly because most pen testers and the like already know about these types of in-memory attacks, and some tools out there address them. To be clear, we’re not saying AVTs are new. The problem is that up until this point, the industry as a whole has not been very good at detecting attacks in memory. Since cyber criminals are always ten steps ahead of us, we know they are constantly looking for creative ways to defeat our best defenses. When APTs are no longer successful because our defenses have actually improved to better detect them, we firmly believe AVTs will take the limelight and become the root cause of cyber espionage and other damaging threats in the future.

Simply put, you’ve been warned.

Till next time,

John Prisco, President & CEO

Why Security Technology Continues To Fail – And How We Can Stop The Cycle: Part 2

In our last post we addressed the fundamental failure of signature-based technologies – but an effective solution is within reach.

There is a slew of new technology emerging on the market that promises to solve the “signature problem,” but the truth is that some of it doesn’t fix the problem at all. The following are a few tips and observations to help you and your organization evaluate the available solutions and choose the ones that will best defend your enterprise.

1. Current signature-based security technologies are increasingly failing to stop malware. Evaluating the target of current technologies is a key first step in determining whether they will work for your enterprise. Many modern signature-based technologies are geared primarily toward consumer nuisance attacks, not targeted malware. A targeted attack is engineered by an adversary with a specific end goal in mind, and defenses built for the classic “throw malware at a machine and hope it sticks” attack leave it a wide-open door. For a truly targeted attack, no signature exists in advance – and if one must be created after the fact, it will likely arrive too late to confront the problem. Cyber criminals have specific targets. Now it’s our job as security pros to be just as specific.

2. Older vendors and technologies are being re-cast as solutions – but are no better at stopping the problem. Signature-based security tools check against millions of signatures – but a signature has to be written before the tool can recognize an attack, let alone stop it. With the rise of cloud computing, older vendors are recasting solutions that neglect new platforms; cloud-based signature repositories offer more of the same – an inelegant solution to the problem. Remember, the attacker only needs one piece of malware to slip through, and your system has been compromised. Many security companies won’t make a “one and done” promise because they worry, with good reason, that their product can’t fight off every attack. Even with a wonderfully sophisticated database, criminals can come up with one exploit that bypasses a network.

3. Technologies that detect specific types of behavior and system changes have the best chance to actually find and eradicate next-generation threats. Behavior detection is gaining ground, but focusing solely on behavior changes can quickly leave a system vulnerable. Products that analyze the intelligence of an attack can find zero-day exploits, raising red flags that purely anomaly-based detection systems would miss. Combining behavior detection with anomaly-based detection and removal is the vital, necessary strategy.

4. Companies and government agencies can build a new strategy that not only warns about new threats, but actually helps prevent them. Although complete prevention is unattainable, companies and government agencies need to focus on detecting AND removing the threat. Most products on the market focus on detection and omit removal, leaving systems open to exploitation. Taking measures on both the network and the endpoint is crucial if you don’t want to leave your systems exposed. The network is the easy part. Endpoint removal is the challenge, and the key.
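The combination that point 3 calls for can be sketched as a simple scoring scheme: blend rule-based behavior flags with a statistical anomaly score before deciding on an action. The flag names, weights and threshold below are invented purely for illustration:

```python
def combined_score(behavior_flags: list, anomaly_score: float,
                   behavior_weight: float = 0.6) -> float:
    """Blend rule-based behavior flags with a 0-1 anomaly score into one risk score."""
    SUSPICIOUS = {"persistence_added", "process_injection", "outbound_beacon"}
    behavior_score = len(SUSPICIOUS & set(behavior_flags)) / len(SUSPICIOUS)
    return behavior_weight * behavior_score + (1 - behavior_weight) * anomaly_score

def verdict(score: float, threshold: float = 0.5) -> str:
    """Decide on an action: quarantine (detect AND remove) or keep monitoring."""
    return "quarantine" if score >= threshold else "monitor"
```

The point of the blend is that neither signal alone decides: a machine showing classic malware behaviors plus statistically unusual activity crosses the threshold, while either signal on its own merely raises the score.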

The sophistication of today’s malware calls for a fundamental shift in the way anti-malware technology detects and remediates against new threats – and in the way people and processes respond. As long as technology and people continue to rely on what they know – such as signatures – they will continue to be defeated by what they don’t know, such as polymorphic malware. And as long as that trend continues, the tide of new breaches and infections will continue to rise.

It’s time for real change in security thinking, both at the technology level and at the process level. And if we don’t take action soon, 2013 is likely to be the worst year of malware yet.

Till the next post,

John Prisco, CEO

Why Security Technology Continues To Fail – And How We Can Stop The Cycle: Part 1


In 2012, as in previous years, commercial industry and government agencies spent record amounts on information security. Yet in 2012, as in previous years, breaches and malware infections grew more acute than in any year before.

Just look at the numbers. The most recent Verizon Data Breach Investigations Report indicates that breaches involving hacking and malware were both up considerably last year, with hacking involved in 81 percent of incidents and malware involved in 69 percent. According to the Ponemon Institute’s Cost of a Data Breach Report, malicious attacks on enterprise data rose last year, and the cost of a breach is at an all-time high ($222 per lost record). According to figures posted last month by Panda Labs, more than 6 million new malware samples were detected in the third quarter alone — and more than a third of machines across the globe are already infected.

So what does this tell us? Security technology is fundamentally failing. And we, as an industry, need to take action.

One of the chief reasons for this failure is our continued reliance on signature-based anti-malware technologies, such as traditional antivirus and intrusion prevention systems. Such systems block malware by blacklisting it – an approach that works only when the malware has been recognized and its “signature” is recorded in memory. Today’s sophisticated malware avoids this defense by constantly changing, morphing into new “zero-day” exploits that have not been detected or recorded.
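To see why blacklisting fails against morphing malware, consider this toy hash-based signature check. The payload strings are made up, but the mechanics are the essence of signature matching: change a single byte and the hash no longer matches, so the variant sails through as a “zero-day”:

```python
import hashlib

# A blacklist of hashes for malware samples that have already been seen and recorded.
KNOWN_BAD = {hashlib.sha256(b"malicious-payload-v1").hexdigest()}

def signature_match(sample: bytes) -> bool:
    """Flag a sample only if its hash is already on the blacklist."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD
```

The known sample is caught, but a trivially mutated copy with identical behavior is not – which is exactly the gap that polymorphic malware exploits, and why defenses keyed to prior knowledge keep losing.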

Over the past month, several news organizations have once again pointed out the flaws in signature-based technologies, but even these reports largely miss some fundamental points. A recent piece in the New York Times, for example, discusses the failure of antivirus software to stop next-generation malware. But antivirus software’s imperfections have been known for years, and the Times did very little to advance the discussion of actual solutions.

Dark Reading on Dec. 27 took a more current view of the problem, discussing the flaws in today’s “layered” antimalware defenses. This article points out the flaws in today’s signature-reliant enterprise security strategies, but again, it fails to deliver much depth on how to solve the problem.

The fact is that signature-based technologies such as AV and IPS – still the cornerstones of many enterprise security strategies – are actually getting *worse* at preventing malware infections. A study published last month by Imperva indicates that the initial rate of detection of new viruses by AV solutions was less than 5 percent. While AV vendors took issue with the methods of this study, the substance of the findings is clear: signature-based solutions are failing at record rates.

With compliance regulations drowning our enterprise security professionals, proactive threat management falls by the wayside, and new technology solutions continue to neglect the data right in front of our eyes. It’s time to end this failed approach and address the real issues. How will you protect your corporate network?

In our next post we’ll discuss how we, as security practitioners, can implement technology that truly combats this constant cycle of cyber threats.

Till next time,

John Prisco, CEO