Threat Modeling: An Introduction




I have previously written about categorizing attackers based on their levels of skill and focus. I have also written about categorizing security measures to defeat attackers with a given level of skill or focus. Both of these posts tie in closely with (and were early attempts at) a topic that I want to explore more fully in coming months: threat modeling.

Featured image courtesy of https://commons.wikimedia.org/wiki/File:Orange_Standout_(15948709611).jpg, used under a Creative Commons 2.0 license.

Threat Modeling

Threat modeling is the examination of two things as they relate to each other: an adversary and a security measure. The effectiveness of the security measure is weighed against the skill and capabilities, focus, and time available to the attacker. Threat modeling allows you to understand what you “look like” to your opposition, understand his or her capabilities, and select effective mitigations.

Whether we realize it or not, we all participate in some level of threat modeling every day. The example we will use here is one that many of us may have rationalized at some point: “I’m just running to the corner store; I don’t need to lock the deadbolt (or set the alarm, or close the windows, or shut down my computer; choose your option as appropriate, the principle remains the same).” The assumption made in this case is that locking the knobset will defeat a threat and a break-in will not occur. This is not necessarily true, but why do we assume it is? What logical process drives this conclusion and its subsequent decisions? Rightly or wrongly, this decision and the presuppositions that drive it provide an example of threat modeling. The speaker has made some baseline assumptions: “The mitigations I have chosen will be effective given:

  1. my likelihood of being targeted,
  2. the skill level and motivation (focus) of a potential attacker who would target me,
  3. the opportunity presented to the attacker based on time of day, location, operational window, and other environmental factors, and
  4. my judgment that additional mitigations (the deadbolt) do not provide a payoff equal to or greater than the time spent engaging and disengaging them.”

Security measures are all too often discussed in a vacuum; I am guilty of this in my own writing. Little attention is paid to exactly what threat a security measure is designed or intended to defeat. In some cases it doesn’t matter and the measure in question makes sense regardless of threat model. A good example of this is a passcode on your iPhone. It doesn’t matter if your adversary is the kid down the street or the government of the nation you are traveling to – the mitigation is the same in both cases. It just so happens that the mitigation to defeat the kid down the street is also effective against very powerful adversaries, like governments. This example is an outlier; the effectiveness of a technique typically rises in tandem with its level of difficulty or inconvenience.

So How Do We Model a Threat?

Threat modeling is done by asking and answering two questions, and by continuing to do so through a constant cycle of evaluation and reevaluation. They are:

  1. Who is my adversary?
  2. Who am I?

Who is my adversary? Understanding your adversary is the first, and perhaps most important, component of an effective threat model. Understanding the adversary involves more than just an awareness of who “they” are. To fully understand your adversary you must also have a grasp of his or her capabilities; underestimating those capabilities can be catastrophic if you are operating against effective opposition. The threat and his/her/its capabilities may vary widely depending on the financial resources that can be brought to bear against you, and the willingness to do so. That willingness is based on the actor’s focus, which depends heavily on the next question in the sequence. I recommend reading my Attacks and Attackers, Categorized post before proceeding.

Who am I? The true nature of this question may or may not be immediately apparent, but put another way, “who am I?” is essentially asking, “what do I look like to the opposition?” or “what is my level of exposure/heat state, from the opposition’s vantage point?” This is possibly the hardest question to answer and one that must constantly be reevaluated. How you look to your opposition depends on operational successes and failures in both the physical and digital worlds. It can depend on an intercepted communication, a compromised member of the operational unit, or physical evidence.

How you “look” to the opposition can also depend on personal motivation. Your opposition may be a scorned business associate or a spurned lover; this list could continue ceaselessly, and it creates a feedback loop with the first question, “who is my adversary?” (your adversary is ultimately a confluence of both skill and focus). Unfortunately, a threat with personal motivations to attack you may be incredibly hard to model. He or she may be unpredictable; a period of relative calm may be shattered by a Facebook post, an errant comment by a mutual friend, or any number of unguessable and unknowable reasons. If you face an adversary in this category your model should skew toward assuming a very high level of motivation. This can be balanced somewhat by the skill level he or she presents; the confluence of irrational personal motivation and very high levels of skill is somewhat rare in the world of advanced persistent threats.
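
To make these two questions concrete, here is a minimal sketch in Python. It is purely illustrative: the field names and 1-to-5 scales are my own assumptions, not a scoring system from this post. It simply captures the adversary as a confluence of skill and focus, with your own exposure (as seen through the adversary’s eyes) raising or lowering the resulting threat.

    from dataclasses import dataclass

    @dataclass
    class Adversary:
        skill: int      # 1 (casual opportunist) to 5 (advanced persistent threat)
        focus: int      # 1 (untargeted) to 5 (personally motivated, fixated on you)
        resources: int  # 1 (none) to 5 (well funded)

    @dataclass
    class SelfAssessment:
        exposure: int   # 1 (low profile) to 5 (high heat state, from the adversary's vantage point)

    def threat_level(adversary: Adversary, me: SelfAssessment) -> int:
        # The adversary is ultimately a confluence of skill and focus; your exposure
        # determines how likely that skill and focus are to actually be applied to you.
        return adversary.skill * adversary.focus * me.exposure

    # A personally motivated adversary: skew focus high even if skill is modest.
    scorned_associate = Adversary(skill=2, focus=5, resources=2)
    me = SelfAssessment(exposure=3)
    print(threat_level(scorned_associate, me))  # reevaluate whenever circumstances change

The numbers themselves matter far less than the habit of revisiting them whenever something changes how you look to the opposition.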

Is Threat Modeling Always Necessary?

To deliberately model every possible threat would be a nearly crippling exercise. Sometimes the ability to rely on one’s “gut,” instincts, or conventional wisdom may be sufficient, or even necessary. When it comes to information security, however, I believe this is a faulty approach. Answering any of these questions with one-hundred percent accuracy may never be possible, depending on your adversary and his or her level of secrecy. Motivation can also be hard to ascertain; human emotion is incredibly unpredictable. The best course of action is to answer each question as honestly as possible and to err on the side of caution. There is a large amount of risk associated with underestimating your threat. There is also risk associated with overestimating your threat.

The risks of underestimating your threat are obvious: your operational security measures will fail and you will be caught by your opposition. The risks of overestimating your threat are less obvious. Overestimating your adversary’s focus and capabilities can cripple your ability to operate. It can also make your mitigations needlessly complex, which introduces other operational security problems. Perhaps the most prevalent of these is that the complexity of your operational security measures is directly proportional to your likelihood of making a mistake. Even if your mitigations far exceed the capabilities of your threat actor, a single mistake could undo them all. The second major factor is that bringing new members into your group can become more difficult than it needs to be. Time is required to train them, and new members unaccustomed to using advanced security measures are extremely mistake-prone.

A third potential risk in overestimating your threat is the risk of profile elevation. If you are employing mitigations that are incongruent with your appearance to your threat, you may make yourself a more interesting or attractive target. This requires a full reassessment of your adversary, yourself through the adversary’s lens, and the effectiveness of your mitigations.

The Bottom Line

Gather the requisite information on your adversary and assess yourself honestly through the lens of your adversary. Choose mitigations that are effective but within your technical reach. Also attempt to choose mitigations that do not elevate your profile and cause the adversary to become more focused or to dedicate more resources to you. Reassess based on actions taken in the digital and physical worlds. Rinse. Lather. Repeat.
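
If it helps to see that cycle written out, here is a minimal sketch in Python. The candidate mitigations, scores, and selection rules are hypothetical examples of mine, not a formula from this post; they only exist to express the three criteria above: effective, within your reach, and not profile-elevating.

    def assess_adversary():
        # Who is my adversary? A judgment call, expressed here as rough 1-5 scores.
        return {"skill": 2, "focus": 4}

    def assess_self(adversary):
        # Who am I, as seen through the adversary's lens?
        return {"exposure": 3}

    def choose_mitigations(adversary, me):
        # Hypothetical candidates with made-up scores for effectiveness,
        # complexity (inconvenience), and how much they elevate your profile.
        candidates = [
            {"name": "deadbolt",             "effectiveness": 2, "complexity": 1, "profile": 0},
            {"name": "full-disk encryption", "effectiveness": 4, "complexity": 2, "profile": 0},
            {"name": "exotic custom comms",  "effectiveness": 5, "complexity": 5, "profile": 3},
        ]
        # Crude rule of thumb: match the adversary's stronger attribute.
        required = max(adversary["skill"], adversary["focus"])
        return [c["name"] for c in candidates
                if c["effectiveness"] >= required   # strong enough for the threat
                and c["complexity"] <= 3            # within reach; complexity breeds mistakes
                and c["profile"] == 0]              # does not make you a more interesting target

    adversary = assess_adversary()
    me = assess_self(adversary)
    print(choose_mitigations(adversary, me))        # then reassess: rinse, lather, repeat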

