Author: Ariel Waissbein
The recent hacks of Colonial Pipeline, Electronic Arts, and JBS made the news, and so did attacks on platforms and protocols managing crypto coins, like the EasyFi network, Uranium Finance, and PancakeBunny, along with many other attacks that happen every hour. Yet these do not represent the majority of hacks, even though they occupy the news headlines.
In the case of Colonial Pipeline, attackers disrupted the service for several days, and it was restored only after the company paid a ransom. At JBS, again, a ransomware attack stopped meat production for several days; and in the case of EA, the attackers stole trade secrets that they could sell on the black market. Attackers somehow retrieved mnemonics and stole $59M in coins from the EasyFi network, and found bugs in contracts of Uranium Finance and PancakeBunny that allowed them to make $57M and $45M, respectively (see link).
All of these attacks were, most probably, targeted. In the case of the crypto contracts, for example, the attackers spent resources finding mnemonics or bugs, and ways to use them to steal funds, and then executed the attacks. The attackers knew there was a seven-figure bounty behind their prey before they started to work.
How well do the aforementioned threats represent your own threat distribution? The threat distribution of each entity, be it a person or an organization, is different, and so is the impact these potential threats could cause if exercised. Moreover, these threats depend on the adversary exercising them, in the same sense that playing a basketball game against an NBA team is different from playing against an average street team.
Generally speaking, your organization is at risk of being attacked at any time, or even of receiving collateral damage from attacks on other organizations. For example, the Stuxnet worm family ([wikipedia: Operation Olympic Games], [Zetter, 2014]), which targeted the Natanz uranium enrichment facility in Iran, infected other networks hosting SCADA systems. Each organization suffers, or is threatened by, different incidents. Incidents and threats originate from entities we call adversaries. One adversary may select different tactics, toolsets, and even objectives than another and, more importantly, may select different kinds of targets. Hence, one organization may face only a subset of adversaries, and some of these adversaries are more likely to attack than others. This is what we call the adversarial distribution: the adversary types and the associated probabilities that they engage in an attack.
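As a toy illustration, an adversarial distribution can be modeled as a discrete probability distribution over adversary types. The types and probabilities below are invented for the example; a real distribution would come from threat intelligence about your organization.

```python
import random

# Hypothetical adversarial distribution for one organization:
# adversary type -> probability that an attack in a given period
# comes from that type. All numbers are illustrative only.
adversarial_distribution = {
    "opportunistic_ransomware_crew": 0.55,
    "credential_harvesting_botnet": 0.30,
    "disgruntled_insider": 0.10,
    "state_sponsored_taskforce": 0.05,
}

# A valid distribution sums to 1.
assert abs(sum(adversarial_distribution.values()) - 1.0) < 1e-9

def sample_adversary(dist, rng=random):
    """Draw one adversary type according to the distribution."""
    types = list(dist.keys())
    weights = list(dist.values())
    return rng.choices(types, weights=weights, k=1)[0]
```

Sampling from this distribution (e.g., in a tabletop exercise or simulation) then reflects which adversary your defenses are most likely to meet.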
It is these adversaries, the ones among your adversarial population, who may change their behavior from time to time and yet constantly be behind the damage we want to analyze, not individually but as a group. That is, given an organization, we are interested in finding out what kinds of adversaries it needs to worry about. Explicitly: what is its adversarial population, and how is it (statistically) distributed?
The importance of understanding your adversarial population stems from the fact that protection needs to be tailored to the threats this population poses. Put differently, we want to understand the adversarial population we are facing with a view to improving our cybersecurity stance.
For example, we would probably do assessment, protection, detection, response, and recovery differently for two different adversarial populations. Consider two organizations: on the one hand, a retailer processing hundreds of low-value transactions at a time for thousands of customers; on the other, a political party in the middle of a presidential campaign, holding emails and hundreds of pieces of information that include strategy, donors, and insights.
So we ask: which adversaries may attack you or your organization?
Before any adversary targets a host or other asset, he makes some preparations, including obtaining exploits and post-exploitation tools and acquiring general and technical knowledge.
Adversaries, generally speaking, pick their victims through one of two procedures. In the first, a sampling procedure, victims are drawn randomly from a given population; all hosts in the population have a priori the same chance of being targeted. Some research may go into determining this population so that it aligns with the adversary's expectations. For example, if the adversary comes up with an exploit or tool that is (more) effective against a given operating system or cloud provider, then that choice defines the population.
A second selection procedure concerns specifically chosen targets: the adversary starts with a target person or organization in mind. He then needs to investigate the target and invest resources so that every active action, be it an exploitation attempt or a probe that may be noticed by the target's sensors, is likely to succeed.
When the target of an attack is picked from a wide population and, more generally, when the adversary's objective is not associated with a specific target, the adversary is after something else: financial returns, causing damage, or some other goal. In that case, one victim is as good as another, and the adversary may run attacks against several targets in parallel. Crucially, if a specific target demands considerably more resources than the others, the adversary may simply abandon it. That is, the adversary is bound to optimize resource allocation to fulfill his objective. This logic applies to the time he spends, the computational resources he allocates (e.g., brute-forcing a password for hours versus spending 5 minutes on each target and moving on), the use of zero-day exploits or other novel capabilities that may lose value once used in the open, et cetera.
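The abandon-or-continue logic of an opportunistic adversary can be sketched as a simple decision rule. The function, its threshold (a target costing more than three typical targets is dropped), and the "resource unit" accounting are all assumptions made for illustration:

```python
def worth_pursuing(expected_payoff, cost_so_far, est_remaining_cost,
                   typical_cost_per_target):
    """Decide whether an opportunistic adversary keeps working on a target.

    An untargeted attacker abandons a victim when finishing it would cost
    excessively more than moving on to the next one. Units are arbitrary
    "resource units"; the 3x threshold is an illustrative assumption.
    """
    total_cost = cost_so_far + est_remaining_cost
    # Abandon if this target costs far more than a typical one...
    if total_cost > 3 * typical_cost_per_target:
        return False
    # ...otherwise continue only while the expected payoff still
    # exceeds the remaining effort.
    return expected_payoff > est_remaining_cost
```

Under this rule, a target that resists (driving its cost well past the population average) gets dropped even when the payoff is large, which matches the "5 minutes per target and move on" behavior described above.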
Of course, the resources are going to depend on the adversarial type. For example, an adversary may be a hobbyist who spends a few moments on his attacks now and then, a kid spending some time every day after school, a hacker working nine-to-five on this endeavor, a team of dedicated technicians, or even a multidisciplinary taskforce coordinating an attack.
So the size and constitution of the team matter, as does its budget. For example, one hacker could use the same exploit and toolset first to compromise a machine and later to inject a payload that replicates, encrypts disks, and displays a ransomware message; or he may craft a special attack using an exploit for a brand-new zero-day vulnerability and an equally new toolset for which endpoint protection systems and off-the-shelf detection tools have no detection signature.
Time and opportunity are also essential. Consider the time an attacker invests in an attack. Returning to the example above, say the hacker reusing his toolset aims to compromise hundreds of computers with ransomware and extract a few thousand US dollars in one out of every eight cases. This hacker, or hacker team, will probably devote little time to each new compromise as long as the numbers fall within range (say, his bot compromises between six and ten computers per day and, on average, 1 out of 8 victims pays the ransom). In the other case, assume a government-sponsored actor that may devote months or even years to a single hack. These are just two examples in a wide range of possibilities.
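To make the opportunistic hacker's economics concrete, a quick back-of-the-envelope calculation using the figures from the running example; the $3,000 ransom is an assumed value for "a few thousand US dollars":

```python
# Back-of-the-envelope expected revenue for the opportunistic
# ransomware operator described above. The compromise rate and pay
# rate come from the running example; the ransom amount is assumed.
compromises_per_day = 8        # midpoint of the 6-10 range
pay_rate = 1 / 8               # 1 out of every 8 victims pays
ransom_usd = 3_000             # assumed "few thousand US dollars"

expected_daily_revenue = compromises_per_day * pay_rate * ransom_usd
print(expected_daily_revenue)  # 3000.0
```

With roughly one paying victim per day, every extra hour spent on a stubborn target directly erodes this throughput, which is why such an attacker tolerates detections and moves on quickly.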
Regarding opportunity, this is basically the distinction between a targeted attack and one with no specific target. But, as we said above, the size of the population of potential victims is also relevant in the untargeted case. For example, a hacker will sometimes procure an exploit for a specific version of an application or platform: WordPress and Drupal are both popular content management systems, but WordPress is used by 41% of the top 10M websites and Drupal by less than 1%, according to this report.
Last, but not least, is the intent of an attack. Contrary to what some may think, the vast majority of attacks are perpetrated by criminal hackers doing business (e.g., harvesting credit cards, siphoning sensitive information, demanding ransoms). In these cases, the hacker uses his resources to optimize for his reward: money. So perhaps a hacker that used to harvest credit card information switched to ransomware some time ago, and will probably switch again when business mandates. This hacker may reuse his tools (e.g., exploits) for as long as possible, and does not mind that some of his attacks are stopped or detected. By contrast, a hacker with a set intent, be it siphoning intellectual property from one company or secret information from a government organization, optimizes for success. He may have only one shot at his endeavor, so he will minimize the chance of being detected and the chance that a failed exploit leaves a trace, and he may invest more time, money and, generally speaking, resources to achieve his goal.
So we have established that adversarial populations differ from organization to organization, and therefore defenses need to be tailored to the threats each population poses. If we think, e.g., following NIST's Cybersecurity Framework, in terms of the core functions Identify, Protect, Detect, Respond, and Recover, then it should be clear that tuning their implementation to the wrong adversarial distribution would be counterproductive. For example, the Identify tasks:
ID.GV-4: Governance and risk management processes address cybersecurity risks
ID.RA-3: Threats, both internal and external, are identified and documented
ID.RA-4: Potential business impacts and likelihoods are identified
ID.RA-5: Threats, vulnerabilities, likelihoods, and impacts are used to determine risk
ID.RA-6: Risk responses are identified and prioritized
all depend on this adversarial population. Something similar happens if the organization measures its security stance using the BSIMM framework. In that case, there is one specific Intelligence activity,
AM1.3 Identify potential attackers.
that addresses the adversarial population, and other related activities:
AM1.5 Gather and use attack intelligence.
AM2.5 Build and maintain a top N possible attacks list.
AM2.6 Collect and publish attack stories.
Other security frameworks define analogous tasks and activities. The requirement of understanding one’s attacker population is undisputed when designing a defense system.
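The ID.RA-5/ID.RA-6 style of reasoning, combining likelihoods drawn from your adversarial distribution with impacts to rank risks, can be sketched as a simple prioritization. Every threat name, likelihood, and impact score below is invented for illustration:

```python
# Toy ID.RA-5-style calculation: risk = likelihood x impact, where the
# likelihood of each threat reflects the organization's adversarial
# distribution. All entries are hypothetical examples.
threats = [
    {"name": "commodity ransomware",      "likelihood": 0.50, "impact": 6},
    {"name": "card-skimming on checkout", "likelihood": 0.35, "impact": 4},
    {"name": "targeted IP theft",         "likelihood": 0.05, "impact": 9},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]

# ID.RA-6: identify and prioritize risk responses, highest risk first.
prioritized = sorted(threats, key=lambda t: t["risk"], reverse=True)
print([t["name"] for t in prioritized])
```

Note how the ranking changes with the adversarial distribution: the retailer above would weight commodity threats highly, while the political party would assign targeted theft a much larger likelihood and end up with the opposite ordering.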
These frameworks, which do help improve your security stance, thus require an understanding of your adversarial population; what is more, they provide steps for obtaining one. The inventory of attacker properties provided above, while probably not exhaustive, does cover a considerable portion of the global population, so that one's attacker population is, minus a negligible set, a subset of these.
We provided a non-exhaustive catalog of adversaries and pointed to some techniques with which one can discover one's adversarial population. We further showed examples of how an understanding of one's adversarial population is used in security frameworks to instantiate defenses. This thought process prepares us to understand which threats we need to protect against, which assets those threats target, and how to balance the resources we invest in protecting against them. Improving one's security stance requires thoughtful administration of resources, and the adversarial population is a great candidate for weighing that administration.