The Winter of Detection Techniques

Author: Ariel Waissbein
Senior Researcher
@SeniorWata


Security frameworks (e.g., NIST SP 800-53, the ISO 27000 series, COBIT, SOC 2) help organizations implement processes and policies for managing information security controls. They have guided the implementation of security for years while helping organizations comply with laws and government regulations.

Yet securing an organization has never been more difficult. Supply-chain attacks (e.g., 2020’s Sunburst) and pervasive client-side attacks have rendered defense-in-depth strategies fruitless on their own. Moreover, even criminals looking for financial gain can break into companies regardless of their size or location, as shown by the recent wave of ransomware attacks (e.g., the Colonial Pipeline attack).

At the center of security controls is the detection of security-related events and the triage of this information. Take, for example, NIST’s Cybersecurity Framework, which defines five functions (Identify, Protect, Detect, Respond, Recover). A standard path, then, after identifying data, assets, and potential threats, is to deploy sensors throughout the network, collecting data and producing security information from it.

Sensors

Sensors feed the data that is collected for analysis. They are tools that run within production servers, personal computers, and network equipment, and are generally responsible for generating events (e.g., incident alarms). They include network routers, firewalls, web applications, servers, web-application firewalls, antivirus software, intrusion-detection systems, and more. Sometimes a sensor is coupled with a prevention mechanism and can also stop an (alleged) attack action. More often, though, the data produced by a single sensor is not enough to confirm an attack, and blocking the event could make needed resources unavailable.

Malware detection on hosts is a classical example of signature-based detection; antivirus products have used these techniques since their inception. Roughly put, signature-based prevention looks for “strings” that are characteristic of an attack in a file, web, or network session and blocks their use, as sketched below. It requires vendors working around the clock to capture new attack samples, derive their signatures, and update their software to include them. One such product rarely covers all known infections and relies on its sources of information to recognize new attacks.
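To make this concrete, here is a minimal sketch of a signature scanner in Python. The signature database is an illustrative placeholder (one entry borrows a substring of the harmless EICAR test string); real engines add unpacking, heuristics, and constant signature updates.

```python
# Minimal sketch of signature-based detection: scan files for known
# byte patterns. The signatures below are illustrative placeholders,
# not real malware indicators.
from pathlib import Path

# Hypothetical signature database: name -> characteristic byte pattern.
SIGNATURES = {
    "eicar-test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
    "fake-dropper": b"\x4d\x5a\x90\x00evil_payload",
}

def scan_file(path: Path) -> list[str]:
    """Return the names of all signatures found in the file."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_tree(root: Path) -> None:
    """Walk a directory tree and report every signature hit."""
    for path in root.rglob("*"):
        if path.is_file():
            for hit in scan_file(path):
                print(f"ALERT: {path} matches signature '{hit}'")

if __name__ == "__main__":
    scan_tree(Path("."))
```

The limitation discussed above is visible in the sketch: the scanner can only flag what is already in SIGNATURES, so coverage is only as good as the vendor’s sample collection and update cadence.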

Adding a second, analogous product from another company (e.g., a second antivirus) may slightly improve coverage while duplicating the required resources and the false-positive alerts.

Detection based on network events addresses both attack signatures (as antivirus does) and event patterns. Anomaly detection learns normal usage patterns and flags everything else as a potential attack; misuse detection does the opposite, learning attack patterns and flagging whatever matches them. Either way, both techniques depend on their training set and on the underlying AI or statistical-inference technique used for detection. Neither ensures full coverage, and since they rely on sensors at one (or a few) points in the network and on time-bound training in an ever-changing environment, they are bound to raise false positive alarms, as the toy example below suggests.
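A minimal sketch of the anomaly-detection side, assuming a simple statistical profile (mean and standard deviation of per-minute event counts) rather than a full AI model; the baseline data and the threshold k are illustrative, and a baseline trained at one point in time will misfire as the environment drifts.

```python
# Minimal sketch of anomaly detection over a stream of per-minute
# event counts: learn a normal profile from a training window, then
# flag observations that deviate beyond a threshold.
import statistics

def train(baseline: list[int]) -> tuple[float, float]:
    """Learn a normal profile (mean, stddev) from historical counts."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(count: int, mean: float, std: float, k: float = 3.0) -> bool:
    """Flag counts more than k standard deviations from the mean."""
    return abs(count - mean) > k * std

# Counts of, say, login events per minute during a quiet week.
baseline = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
mean, std = train(baseline)

for minute, count in enumerate([14, 13, 97, 15]):
    if is_anomalous(count, mean, std):
        print(f"minute {minute}: {count} events -- possible attack (or a false positive)")
```

Note that a legitimate change in usage (a marketing campaign, a new office coming online) trips the same threshold as an attack: the false positives discussed above are built into the approach.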

We could continue listing different sensor types and their pros and cons. Yet, the consensus appears to be that they are necessary but not sufficient (regardless of their number and placement).

SIEM

Threat modeling has thus pointed us toward layered detection techniques, mixing technologies and, for the last decade (at least), adding a good SIEM (security information and event management) suite to concentrate and analyze all of this information securely.

Nonetheless, the search for automation appears to be doomed. Simple collection and rule-based detection strategies (sketched below) yield false positive alarms that require manual inspection, which in turn implies spending additional resources to deal with them. To cope with this problem and improve the precision and recall of alarms, artificial intelligence has been used for some time. It helps to surface true positive alarms and to detect (and ignore) those that are false positives. An immediate cost of this approach is that there is no first-hand access to the information: the AI, which is not fail-proof, outputs a processed analysis that hides the raw data, and therefore hides some attack patterns from the analysts.
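As an illustration of why rule-based strategies behave this way, here is a minimal sketch of a SIEM-style correlation rule in Python; the event fields, the threshold, and the window are assumptions for the example.

```python
# Minimal sketch of a rule-based SIEM correlation: raise an alarm
# when one source IP produces N failed logins within a time window.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10  # failed logins per source before we alert

recent: dict[str, deque] = defaultdict(deque)

def ingest(event: dict) -> None:
    """Feed one normalized log event to the rule engine."""
    if event["action"] != "login_failure":
        return
    ts, src = event["timestamp"], event["src_ip"]
    q = recent[src]
    q.append(ts)
    # Drop timestamps that have fallen out of the sliding window.
    while q and ts - q[0] > WINDOW:
        q.popleft()
    if len(q) >= THRESHOLD:
        # A user with a stale password in a password manager trips this
        # rule as easily as a brute-force attack: a false positive that
        # still costs analyst time to triage.
        print(f"ALARM: {len(q)} failed logins from {src} within {WINDOW}")

# Example: a burst of failures from one IP trips the rule.
now = datetime.now()
for i in range(12):
    ingest({"action": "login_failure",
            "timestamp": now + timedelta(seconds=10 * i),
            "src_ip": "203.0.113.7"})
```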

Moreover, once an attacker knows there is AI analyzing logs on the other side, he can craft his attack accordingly, steering the AI into recognizing his actions as benign so that no alarms are raised. For example, an attacker could use generative adversarial networks (GANs) to train his own AI to defeat the defender’s AI. This is a subject of active research, and we have yet to discover how far AI can improve defense.
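GANs are beyond a short sketch, but the underlying idea can be shown with a much simpler stand-in: if the attacker knows (or can approximate) the detector’s model, he can perturb his activity’s features until the detector scores them as benign. The weights and feature values below are illustrative assumptions, and the linear detector is a deliberate simplification of the richer models real attacks target.

```python
# Minimal sketch of evasion against a learned detector: given a
# linear (logistic-regression) classifier over numeric features,
# an attacker who knows the weights can nudge a malicious sample's
# features along the weight vector until it scores as benign.
import numpy as np

w = np.array([1.2, -0.8, 2.0])  # detector weights (1 = malicious)
b = -0.5                        # detector bias

def score(x: np.ndarray) -> float:
    """Probability the detector assigns to 'malicious'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([1.0, 0.2, 1.5])   # feature vector of a malicious action
print(f"before evasion: {score(x):.3f}")

# Move against the gradient of the score until classified benign;
# the gradient of (w @ x + b) with respect to x is simply w.
step = 0.1
while score(x) > 0.5:
    x = x - step * w
print(f"after evasion:  {score(x):.3f}, features now {x.round(2)}")
```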

Persuading attackers not to do damage

A technique often used in physical security yet overlooked in (computer) information security is dissuasion. A burglar alarm on a house biases a thief toward going elsewhere, to the unprotected neighbors.

Is there an analogy to dissuasion-by-burglar-alarm in computer security? The Building Security In Maturity Model (BSIMM) interestingly proposes scoring companies across 12 different dimensions and comparing them within their vertical. This should work against financially-motivated hackers casing companies in a vertical, which is a portion of the attacker population, but not all of it.

In computer security, attackers may automate their attacks, using a handful of exploits or attack vectors they have acquired to compromise as many assets as possible; alternatively, they may go after one specific target. Sometimes they are financially motivated, sometimes not. BitTrap is an attack detection technique with dissuasion at its core. In brief, BitTrap places a wallet holding a risk-adjusted bounty on every device of a company and automatically monitors each wallet; if an attacker hacks into a device, finds the wallet, and cashes the bounty, BitTrap detects which device was hacked and triggers alerts. Moreover, the risk-adjusted bounty is calibrated carefully to entice the company’s attacker population.

Since the solution allows an attacker to desist from the attack and collect a financial reward, it dissuades him from staying “unnecessarily” to monitor the hacked device or from damaging the company through ransomware or other means. In short, BitTrap allows an attacker who has compromised a computer host, internet account, or other kind of information asset to claim a monitored bitcoin unspent transaction. When the hacker sees this unspent transaction in the asset he compromised, he is at a crossroads: he immediately understands that he can take this money anonymously, and that at that instant his attack will be noticed; but if he waits (e.g., while canvassing the network or looking for more valuable information), someone else may take the money in his stead, and he could leave empty-handed.
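A minimal sketch of the monitoring half of such a scheme, assuming an Esplora-style block-explorer API (e.g., blockstream.info) and hypothetical per-device addresses; this illustrates the idea, and is not BitTrap’s actual implementation.

```python
# Minimal sketch of a wallet-based tripwire monitor: poll a public
# block explorer for spends from per-device addresses and alert when
# a planted bounty moves.
import time
import requests

API = "https://blockstream.info/api"  # Esplora-style REST API (assumed)

# Hypothetical mapping of planted wallet addresses to devices.
TRIPWIRES = {
    "bc1qexampleaddress0": "laptop-finance-07",
    "bc1qexampleaddress1": "build-server-02",
}

def spent_sum(address: str) -> int:
    """Satoshis ever spent from this address (0 until the bounty moves)."""
    stats = requests.get(f"{API}/address/{address}", timeout=10).json()
    return stats["chain_stats"]["spent_txo_sum"]

while True:
    for address, device in TRIPWIRES.items():
        if spent_sum(address) > 0:
            print(f"ALERT: bounty at {address} was cashed -- {device} is compromised")
    time.sleep(60)
```

The design point is that the alert condition is unambiguous: a spend from a tripwire address is public, machine-checkable evidence that someone with access to that device cashed the bounty, with no signature database or training set involved.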

Conclusion

The mantra for risk management seems to be “identify threats, then manage them.” The reliance on identifying the attacker’s steps a priori in order to detect and prevent attacks is too strong. In this ideation, resources go to sensors, sensor data collection, and analysis guided by threat identification and risk assessment. Once properly armed, sensors generate valuable data, and one can build intelligence to extract security-relevant information from this data, and in particular to stop attacks or generate alerts with information about potential ongoing attacks.

But we know this is not sufficient. We know that more (sensors, SIEM) is not tantamount to more detection. Sensors generate both false positive and true positive alarms, and no artificial intelligence or other technology has correctly dealt with the mix of alarms and data pieces coming from all the sensors combined. They inevitably fail at some point.

Dissuasion, then, is a layer of security that targets the attacker population and, as such, complements the insufficient detection and prevention mechanisms. It calls for a more profound analysis of attackers, their motivations, and how to offer them an alternative to damaging more companies.
