Threat Modeling for Simplicity

Dave Soldera
|
Security Architect, Electronic Arts (EA)
September 13, 2024

Threat modeling as a practice isn’t as common as it needs to be.  A security design analysis (a.k.a. threat modeling) should be an integral part of every software development lifecycle, and until it is, we, the consumers in this software world, will continue to bear the burden of the threats that get exploited.

Here is one idea to help encourage threat modeling as a practice:

Threat modeling should be simpler.

This is exactly the sort of broadly agreeable statement that I’ve seen put to paper numerous times, followed by a lot of generic words that are neither insightful, nor particularly useful to anyone.

So why am I stating what many might consider banal or obvious? Because arguably many of the most popular approaches to threat modeling are not simple, either to incorporate as a new security activity, or in the level of effort they require from participants.  That means we as a community need to make them simpler if we want threat modeling as a practice to be more common - but I'm not sure that we’d agree on what “simpler” means.

The path less traveled 

I do think “simple” is the right direction, not because of some epiphany, but because I came to it as a journey: walking through the threat modeling wilderness, taking the path less traveled, then continuing with no path at all, trying to get my practice of threat modeling to a place where I felt I could justify my approach to any and all who would question its worthiness.  It was only after I found my own threat modeling oasis, after I had made decisions about how and why every aspect of my process existed, that I came across something to help contextualize those decisions, something to show me they were part of a framework for improving a process: striving for simplicity.

What I found most appealing was that it wasn’t something from the world of security, or even really from the world of technology; it was broader than that, and yet still felt so applicable.  What I happened upon was a framework for ‘simple’, and, this is almost embarrassing, it came from a TED talk.  Not all TED talks are equal, but this one blew my mind.  It might not blow yours, but it blew mine because it was like discovering a unifying theory for hard-earned decisions I had made.  It was like cracking a code where you finally see how the cribs fit together, or like the climax of a good mystery novel where the detective finally explains the mystery, linking together clues you intrinsically knew were relevant but couldn’t quite connect.

At this stage it’s worth watching the talk, Toward a science of simplicity; the summary below assumes you have.  The basic properties that make something ‘simple’ are:

  • Predictable(/Reliable): It’s got to work, and work in an expected way.  Take or make two of the thing, and the same input will give the same output.
  • Cheap: It can’t “cost” a lot to do/make.  If it is expensive then no one can afford it, or the number of ways it can/will be used will be severely limited.
  • High performance or value for cost: In other words, it must have a useful function; it can’t just have minimal cost, it must also deliver value.
  • Stackable ("Building Blocks"): It is composable, or at least it can be easily used as a material/component of something else.

If you watched the talk then hopefully you’ll agree that there is huge potential in simplicity, especially if like me you want to see an activity like threat modeling be more broadly adopted.  This speaks to one of the core reasons I think the properties of simplicity should be more closely adopted by the threat modeling community - threat modeling as a process can only flourish if it is adopted, and I think there is a strong correlation between widely-adopted processes and simple processes.

Applying the science of simplicity to threat modeling

The talk gives some examples, but it’s easy to think of examples from the computing world, such as:

  • Unix software: built under the philosophy of “do one thing well”, with the output of one program usable as the input to another.
  • OSS libraries: all modern software is built from reliable, free, useful and stackable software libraries.
  • APIs: exposing APIs is a great way to make functionality available for composing into other purposes, and if reliable and inexpensive it unlocks amazing potential e.g. OpenAI, AWS S3.
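
To make “stackable” concrete in code terms, here’s a toy sketch of the Unix idea (my own illustration, not from the talk): small, predictable functions whose outputs are valid inputs to each other, composed like a shell pipeline.

```python
# Toy illustration of "do one thing well" composability (hypothetical
# functions written for this post). Each step is predictable, cheap,
# useful on its own, and stackable into a pipeline.

def read_lines(text: str) -> list[str]:
    """Split raw text into lines (like `cat`)."""
    return text.splitlines()

def grep(lines: list[str], needle: str) -> list[str]:
    """Keep only lines containing `needle` (like `grep`)."""
    return [line for line in lines if needle in line]

def count(lines: list[str]) -> int:
    """Count lines (like `wc -l`)."""
    return len(lines)

log = "INFO start\nERROR disk full\nINFO done\nERROR timeout\n"
print(count(grep(read_lines(log), "ERROR")))  # -> 2
```

Each function satisfies all four properties in miniature, and that is exactly what makes composing them trivial.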

There is still some room for interpretation about what these properties actually mean in the context of threat modeling, but this would be my interpretation:

  • Predictable(/Reliable): If the same threat modeling process were completed by two independent teams on the same system, the resulting output would be remarkably similar.
  • Cheap: The threat modeling process should be cheap in terms of time and resources to complete, specifically for the target user of the process.  When people talk about simplifying or making the process easier, they are optimizing for this property. 
  • High performance or value for cost: The threat modeling process has to do a good job at finding relevant threats.  For any given threat modeling process this is dictated by the “model” (the information required to represent the system), and the “security properties” (i.e. types of threats) it attempts to ascertain from the “model”.
  • Stackable: Is the threat modeling process composable? Not only allowing multiple threat models to be used to analyze a large system, but does it complement other security activities essential to determining the overall risk posture of a system?

Now, with this refined understanding of the properties in the context of threat modeling, the question is whether we can use the framework to evaluate our threat modeling processes and determine how ‘simple’ they are (not that the determination will be a clean ‘yes’ or ‘no’, mind you).  Let’s evaluate two of the most common approaches to threat modeling: brainstorming and the Microsoft Threat Modeling Tool (MSTMT).

Threat modeling in practice - brainstorming 

Threat modeling via brainstorming involves getting all the relevant people into a room for a significant period of time (e.g. hours) and using diagrams and notes to capture a description of the system, analyze it from the point of view of someone malicious, and discuss and evaluate any threats.  Let’s analyze brainstorming for simplicity:

  • Predictable(/Reliable): Two different brainstorming sessions can easily have quite varying results depending on what parts of the system get focus and who is available on the day to be involved. The actual captured output from the session varies according to the experience of the facilitator and note-taker.
  • Cheap: Getting a large number of relevant people in a room at the same time, often senior people, is expensive (in terms of opportunity cost).  However, minimal training/upskilling is required in order for people to take part in the process.
  • High performance or value for cost: It can be effective at finding novel threats, but it can be ineffective at getting good coverage of common threats because time is limited so usually not everything is examined in equal detail.  Conversations can also easily get derailed with multiple people heading off down various rabbit holes.
  • Stackable: If you gave the output of a brainstorming session to someone who wasn’t part of the session and asked them to use it as input to a larger security analysis, it would likely be unusable, because it lacks the fidelity of scope to offer assurance of what was and wasn’t covered.

I am not a huge fan of brainstorming as a threat modeling process, and that bias is probably clear in the analysis above, but I would be surprised if anyone thought all of the points made were invalid.  Which is to say, I think a reasonable conclusion is that brainstorming doesn’t align particularly well with this definition of a simple process.  That doesn’t mean you shouldn’t use brainstorming; it just means brainstorming is unlikely to be the approach that makes threat modeling a ubiquitous security activity.

Threat modeling in practice - Microsoft Threat Modeling Tool 

Now let’s analyze the Microsoft Threat Modeling Tool (MSTMT).  MSTMT lets someone model a system by drawing a data flow diagram (DFD), which the tool then evaluates using the STRIDE security properties, plus any custom threats for specific technologies the tool knows about (a toy sketch of this per-element idea appears after the evaluation below).  Let’s analyze this process for simplicity:

  • Predictable(/Reliable): If two people draw the same diagram they largely get the same set of threats, and the output is in a consistent format, so I think it is predictable.
  • Cheap: For Security people who are familiar with STRIDE, the process is relatively cheap.  For non-Security people, however, there is a learning curve that effectively requires becoming knowledgeable in security: learning about repudiation, understanding when something is tampering versus elevation of privilege, working out what a trust boundary is, etc.  My experience is that leveraging MSTMT is expensive if non-Security folk are the target users.
  • High performance or value for cost: MSTMT is noisy; it produces a lot of threats that are often not relevant, and this significantly lowers its value due to the cost of dealing with the noise.
  • Stackable: As the tool relies on DFDs, and DFDs can naturally be created at different layers of granularity, I think it is stackable.
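
To make the DFD-plus-STRIDE mechanics concrete, here is a deliberately tiny sketch of per-element threat enumeration.  To be clear, this illustrates the general technique, not MSTMT’s actual rule engine, and the element types, names and rule table are hypothetical.

```python
# Illustrative sketch of STRIDE-per-element enumeration (not MSTMT's
# actual implementation). The rule table maps DFD element types to the
# STRIDE categories commonly considered applicable to them.
STRIDE_BY_ELEMENT = {
    "process":    ["Spoofing", "Tampering", "Repudiation",
                   "Information disclosure", "Denial of service",
                   "Elevation of privilege"],
    "data_store": ["Tampering", "Repudiation", "Information disclosure",
                   "Denial of service"],
    "data_flow":  ["Tampering", "Information disclosure",
                   "Denial of service"],
    "external":   ["Spoofing", "Repudiation"],
}

def enumerate_threats(elements: list[dict]) -> list[str]:
    """Emit one candidate threat per (element, applicable category) pair."""
    return [
        f"{category} against {element['name']}"
        for element in elements
        for category in STRIDE_BY_ELEMENT[element["type"]]
    ]

# A three-element diagram already yields 6 + 4 + 3 = 13 candidate threats.
diagram = [
    {"name": "web app", "type": "process"},
    {"name": "user DB", "type": "data_store"},
    {"name": "app->DB", "type": "data_flow"},
]
for threat in enumerate_threats(diagram):
    print(threat)
```

The combinatorial output of even this toy version, 13 candidate threats from 3 elements whether relevant or not, is the noise problem in miniature.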

For me, it was the very challenges of using MSTMT that started me on my own journey to find a better threat modeling process, and I think it is a common story that your needs outgrow its capabilities.  Nevertheless, it’s a popular tool and serves its purpose up to a point, but I don’t think it offers good value (for cost) at a certain scale and beyond a certain period of time.  Again, that’s not to say don’t use it, just that it isn’t going to be what takes threat modeling to the next level of adoption.

A new approach to achieve simplicity 

Of course it is all too easy to be critical of others' work, so I want to talk about how my own process aligns (and doesn’t align) with the properties of simplicity.  By no means do I think my approach fully meets the goal of simplicity we should strive for in threat modeling, but it’s the alignment I did see that got me excited about this framework.  My process is detailed here, and it’s full of design decisions that align with the properties of simplicity:

  • Predictable(/Reliable)
    • Information about a system is captured in a Google Doc or Confluence page using a known document structure, so the output of all threat models is predictable.
    • Just using a structured document doesn’t mean the information captured will be consistent and valid, and initially ensuring this manually was a laborious part of the process.  To solve it I wrote a tool that parses the threat model document and validates different aspects (e.g. fields aren’t missing, fields have the correct format), and also ensures referential consistency between different sections (e.g. threats must reference a previously declared component).  A minimal sketch of this kind of validation appears after this list.
  • Cheap: My threat modeling process targets developers, so I made several design decisions to optimize the process for that audience:
    • Use as many familiar tools as possible - learning a new tool is a cost and can also lead to vendor lock-in.  My process uses common documentation tooling (i.e. Google Docs, Confluence) to capture threat models, as virtually everyone is familiar with these tools and how to use them.
    • No new diagrams - the threat model must have a diagram, but any relevant or mostly accurate diagram will be fine.  The diagram is not the main analytical tool in my process, so it doesn’t have to be perfect.
    • No unfamiliar terminology - developers shouldn’t have to be security experts, so security-specific terminology (e.g. STRIDE) is a cost to them.  This is hard, as it’s difficult to avoid all security language, but I settled on developers needing to understand Confidentiality, Integrity, Authentication and Authorization (and to be fair, authn/z can still be difficult concepts in certain situations).  These are explained as simply as possible (e.g. confidentiality = read access, integrity = write access).
    • Design focused, not vulnerability focused.  The focus is just on asking questions about how their system works, i.e. populating the “model” (the information required to represent the system), and the model is chosen specifically to elicit design issues.  Other threats can of course still be captured, but a variety of tooling is already used to find vulnerabilities (e.g. SAST, DAST), so we let that tooling do its job.
    • Threat focused, not risk focused. Don’t (initially) ask for information that solely helps evaluate risk (e.g. threat actors, data classification, compliance requirements etc.), especially if teams are unlikely to know the answers, but delay gathering it until the Security team can look at the captured threats and existing controls and decide if a risk needs to be raised.  Sometimes there are adequate controls in place already, so gathering information (to evaluate risk) turns out to be irrelevant in these cases.
    • Small scope - scope is critical!  Threat models become more of a burden when they cover the work of multiple teams, so reducing scope to the work of a single team (and environment) means that team can really own the threat model, and, as the experts on their system, they already know the answers to everything that needs to be captured.
    • Reusable templates.  A common structure is good, but similar systems (for example, web apps) also share a lot of information, context, threats, controls, etc., so creating specific templates (e.g. a web app template) that are already partially populated makes the process easier.
    • Share all threat models.  Teams learn best from examples, and since the format is common for all threat models, other threat models can be leveraged to accelerate the process.
    • Detailed documentation.  By writing detailed documentation on the threat modeling process, teams can solve common problems themselves, which is faster for them and lowers the burden on Security.
  • High performance or value for cost
    • Focused security model.  The focus is just on design issues relating to access control, so access to systems (authn/z) and access to data (confidentiality/integrity).  This focus allows the process to get excellent coverage of these specific issues.
    • Aggregate threats. Evaluating security properties for every permutation in a model can lead to an explosion of threats.  Using conventions of expression and tooling, threats can be aggregated, which greatly reduces the resulting list of threats and makes reviewing and comprehending them easier and more meaningful.
    • Assurance.  By structuring the document in a way that the same parts of the systems are captured in multiple locations, inconsistencies can be detected (using tooling), which then leads to assurance when no inconsistencies are found.
    • Sharing is caring.  Making complete, Security-approved threat models available to other teams, so they can learn from and reuse content, drives higher consistency and higher quality across all threat models in an organization.
  • Stackable
    • Clearly scoped.  Everything in-scope in a threat model is clearly marked as such, and the goal is for each component of a system to be in-scope in exactly one threat model.  This means numerous threat models can be created without overlap, making them complementary when analyzing the security of a larger system.
    • Calling out controls.  A threat model that captures all existing security controls is a useful document for other security activities, like security testing or penetration testing, as it lays out a set of controls to evaluate for effectiveness.
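
Since the validation tooling does a lot of heavy lifting in the points above, here is a minimal sketch of the kinds of checks it performs.  My actual tool isn’t published in this post, so the document structure, field names and valid values below are hypothetical stand-ins.

```python
# Minimal sketch of threat model validation (hypothetical schema, not my
# actual tool): required-field checks, field-format checks, and
# referential consistency between sections.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    components: set[str] = field(default_factory=set)  # declared components
    threats: list[dict] = field(default_factory=list)  # parsed threat entries

REQUIRED_THREAT_FIELDS = {"id", "component", "property"}
VALID_PROPERTIES = {"confidentiality", "integrity",
                    "authentication", "authorization"}

def validate(model: ThreatModel) -> list[str]:
    """Return human-readable validation errors (empty list = valid)."""
    errors = []
    for threat in model.threats:
        missing = REQUIRED_THREAT_FIELDS - threat.keys()
        if missing:
            errors.append(f"threat {threat.get('id', '?')}: "
                          f"missing fields {sorted(missing)}")
            continue
        # Referential consistency: threats must reference a declared component.
        if threat["component"] not in model.components:
            errors.append(f"threat {threat['id']}: "
                          f"unknown component '{threat['component']}'")
        # Field format: the security property must be one the process uses.
        if threat["property"] not in VALID_PROPERTIES:
            errors.append(f"threat {threat['id']}: "
                          f"invalid property '{threat['property']}'")
    return errors

model = ThreatModel(
    components={"web app", "user DB"},
    threats=[
        {"id": "T1", "component": "web app", "property": "authentication"},
        {"id": "T2", "component": "billing svc", "property": "integrity"},
        {"id": "T3", "component": "user DB"},  # missing 'property'
    ],
)
for err in validate(model):
    print(err)  # flags T2 (undeclared component) and T3 (missing field)
```

The real tool parses the threat model document itself rather than in-memory objects, but the principle is the same: mechanical checks give the predictability and assurance described above without a human proofreading every document.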

Some of these design decisions you’ll find in other threat modeling approaches; others may seem more controversial.  But all of them have contributed, at least in my experience, towards making my process simpler.  However, no process is perfect, and mine is no exception:

  • It works best for systems that can be easily decomposed, and becomes more unwieldy the more components are in-scope.
  • It doesn’t work well if you want to threat model a new feature and there isn’t an existing threat model of the system.
  • Asking for all the controls to be captured may not be required for some businesses (although I would argue the effort is worthwhile).
  • Using a document format means that controlling the format of the fields is difficult, as mistakes can only be detected when validation is run.  Threat modeling inside a dedicated application would give real-time feedback.
  • The range of threats generated doesn’t include novel or creative threats; while these can be added, doing so depends on the knowledge of the team or the Security person reviewing the threat model.

I highlight my own work as an example that we can make clear design decisions (and tradeoffs) in our approach to threat modeling that focus on making the process better for those involved, and absolutely yes, even at the cost of finding fewer threats.  A process that gets used will always find more threats than one that doesn’t.  

A community challenge for creating simple frameworks 

So I challenge the rest of the threat modeling community to look critically at your own approaches and evaluate them against this framework for simplicity.  But bear in mind the following quote, which I think is good guidance:

Observe that you do not make things simple in general, but for specific persons, often here and now. To do this, empathy has no replacement; you must imagine yourself in the other’s place.

http://wisdom.tenner.org/the-power-to-make-things-simple.html 

Here are some questions, framed by the four properties, that you can use to critique your threat modeling process:

  • Predictable(/Reliable)
    • If the threat model was done twice, would you get very similar results?  
    • If not, what can you change in your process or approach that would improve this?
  • Cheap
    • What do the people involved need to know in order to complete the process?  
    • How can you minimize what they need to know?  This is in terms of both security knowledge and the amount of information they need to provide.
  • High performance or value for cost
    • What sort of threats does your process discover, and are they of value?  ‘Value’ means the threats relate to your security requirements, but also that they are novel and not found by other security activities (e.g. scanning tools).
    • How can you change your process to find more relevant threats and ignore more irrelevant threats?
  • Stackable
    • Are your threat models composable with each other?  
    • How does threat modeling integrate with other security activities, and what can be changed to make it of more value to other activities?

Please share your conclusions!  We will all benefit from the lessons learned, and if it creates a dialogue, all the better.  If we focus on designing our threat modeling processes for simplicity, it will help get the adoption that’s required to make threat modeling core to the software and systems that run the modern world.