For those engaged in the timely production of high-quality software, threat modeling is an invaluable method for minimizing rework. Design defects can include useless code, or “cruft,” and can be costly to fix. But the manual process of threat modeling doesn’t always fit well into ever-tightening iterative development methodologies.
Fortunately, our industry is making big strides in the direction of automating threat modeling. I’ve explored as many of these tools as I can while looking for something that works best for us here at NetSPI. While I’m quite optimistic about the very near future of threat modeling automation, I’ve got my reservations. These reservations can be generalized as conjectures.
If you are buying or building threat modeling automation, here are three conjectures to take into consideration.
Conjecture 1: The Only Automation is Semi-Automation
Don’t expect to run a threat modeling process entirely free of human care and attention. No matter your methodology, threat modeling operates on ideas about software – and the outputs of threat modeling are other, better ideas about software. Completely automating the improvement of these ideas requires expressing them in a useful format and yielding the resulting better ideas in a format suitable for implementation by another automated system.
You see where this is going.
It also requires producing genuine improvements that would unquestionably result in a better system overall: error-free output.
So, let’s consider how semi-automation is a more realistic expectation than full automation by looking at this input-process-output (IPO) model in more detail, from bottom to top.
Automating the Outputs
The results of a threat modeling assessment need not be consumed solely as non-functional requirements (NFRs). They can also inform coding standards, product roadmaps, build procedures, test plans, monitoring activities, and more.
For example: let’s say the consumer-grade IoT device your team is building requires customizations to the kernel-level containerization system for over-the-air (OTA) updates. Your product manager sees this as an indicator of how cutting-edge this device is compared to the market, while your architect sees it as a necessary annoyance. But what the threat modeler sees is an unmanaged attack surface, accessible from the network, written in C, executing in Ring 0.
What can your team do with this information, besides draft security requirements? Adjust your static analysis strategy? Amend your vendor management boilerplate to mandate relevant training? Whip up a fuzzing protocol? How many of these represent automatable opportunities?
Threat modeling is a decision support process. You can automate aspects of it, but you’ll be limited by the amount of decision-making that is automated.
Now, you may have scripts available to automate the creation of backlog items, such as Jira tickets. Keep in mind that security tooling is notorious for false positives, and threat modeling automation systems make no promise of being any different. So, you can either devote human care and attention to triaging the results, or let the implementation team do the triage work themselves. Either way, there’s still work to be done.
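As a sketch of what that backlog automation might look like, the snippet below maps a threat-model finding onto a Jira create-issue payload (Jira’s REST API v2 accepts a POST to /rest/api/2/issue). The project key, labels, and the finding itself are hypothetical placeholders; note that nothing here does the triage for you.

```python
import json

def build_backlog_item(threat: dict) -> dict:
    """Map one threat-model finding onto a Jira create-issue payload.

    The structure under "fields" follows Jira's REST API v2 schema;
    the project key "SEC" and the labels are placeholders.
    """
    return {
        "fields": {
            "project": {"key": "SEC"},          # placeholder project key
            "issuetype": {"name": "Task"},
            "summary": f"[Threat model] {threat['title']}",
            "description": threat["description"],
            "labels": ["threat-model", threat.get("category", "uncategorized")],
        }
    }

# A hypothetical finding from the IoT example above.
finding = {
    "title": "Unmanaged OTA update surface in Ring 0",
    "description": "Custom kernel-level containerization reachable from the network.",
    "category": "elevation-of-privilege",
}
payload = build_backlog_item(finding)
print(json.dumps(payload, indent=2))

# The actual submission (not run here) would look something like:
#   requests.post(f"{JIRA_URL}/rest/api/2/issue",
#                 json=payload, auth=(USER, API_TOKEN))
```

The script files the ticket; a human still has to decide whether the finding is real.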
Automating the Processing
Threat modeling is a security process, and security is one of many aspects of quality. We used to think of the interaction between security and the software development process as one of trade-offs. Perhaps some still do.
Many organizations are beginning to approach their ongoing software concerns by finding an optimal balance within known limitations. It isn’t security versus usability. It’s making sure our products are suitably usable, secure, performant, testable, resilient, scalable, marketable, et cetera.
So, your turnkey end-to-end threat modeling automation has to be able to recognize and accommodate other requirements in terms of the product’s usability, reliability, marketability, scalability, et ceterability. If it doesn’t, it will fall to you to strike the right balance. And if you’re the one striking a balance, you don’t have a fully automated system.
Automating the Inputs
What tools do your security architects use? The ones I work with mostly use whiteboards. Many use team collaboration / CMS software like Confluence. Some use drawing tools like Visio. Does anyone still use Rational Rose?
If your threat modeling automation can meaningfully parse this information, great. If not, and you have to reproduce the architect’s design, then you won’t achieve full automation.
Otherwise, what inputs can be automatically fed into your threat modeling tool?
Automatic scanning of Infrastructure-as-Code files can bring to light threats to the infrastructure. They may not have much to say about the actual software, though. And automatic code scanners tend to ignore those values of quality that I enumerated above.
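To make the IaC-scanning point concrete, here is a minimal, illustrative check of the kind such scanners perform, flagging a Terraform security-group rule that is open to the internet. The snippet and resource name are hypothetical, and a real scanner would parse HCL properly rather than pattern-match; the point is that this tells you about the infrastructure, not the software running on it.

```python
import re

# A toy Terraform snippet standing in for a real IaC file.
SNIPPET = """
resource "aws_security_group_rule" "ssh" {
  type        = "ingress"
  from_port   = 22
  to_port     = 22
  cidr_blocks = ["0.0.0.0/0"]
}
"""

def find_open_ingress(hcl_text: str) -> list[str]:
    """Flag security-group rules whose ingress cidr_blocks include 0.0.0.0/0."""
    findings = []
    pattern = r'resource\s+"aws_security_group_rule"\s+"(\w+)"\s*{([^}]*)}'
    for match in re.finditer(pattern, hcl_text):
        name, body = match.groups()
        if '"ingress"' in body and '"0.0.0.0/0"' in body:
            findings.append(f"{name}: ingress open to the internet")
    return findings

print(find_open_ingress(SNIPPET))  # flags the "ssh" rule
```

Notice what this check cannot see: whether the service listening on port 22 is hardened, usable, or performant.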
Finally, threat modeling tools that scan implementation artifacts often lack efficiency. You’ve already built to your design. Any findings produced by a scanner are opportunities for rework, and as I said at the beginning, threat modeling is supposed to minimize rework.
Conjecture 2: Your Tool’s Diagrams and Your Team’s Diagrams Should Be Compatible
Whether your tool consumes or emits them, diagrams of the subject system must be recognizable by the implementation team as a genuine, faithful reflection of that system. Tools that invite you to re-invent or re-think the system’s architecture in a new schema tend to miss the mark.
This is not to say that re-diagramming is always problematic. Architecture diagrams must reflect the values of the organization, such as structure, redundancy, symmetry, priority, urgency, or flow. This helps them present the system—especially its attack surfaces—naturally. Automatically generated diagrams tend to disregard these values.
Conjecture 3: Your Tool’s Guidance Should Be Delivered with Humility
As mentioned earlier, threat modeling operates on ideas about software and its outputs include better ideas about software. The best tools and techniques will lead the threat modeler to the best ideas, faster.
But architecture works with abstractions about systems. Lacking a complete architecture description, any threat modeling tool is working on incomplete input. And who has time to produce complete architecture documents?
Have you seen a 300-page architecture document? Probably. But have you ever seen a 300-page architecture document that was up to date?
The problem arises when a threat modeling tool can’t adjust to the subtleties of your software. If a tool mistakes design elements for threats, you’ll be required to spend time adjusting its output.
Sometimes your tool will simply be wrong through no fault of its own, and it is easier to ignore the tool than to correct it.
Your Threat Modeling System Shouldn’t Be Repudiating Raisins
Some design intricacies are difficult to articulate. Consider the ‘R’ in STRIDE: Repudiation.
The Orange Book lists accountability as a fundamental requirement of computer security:
“Audit information must be selectively kept and protected so that actions affecting security can be traced to the responsible party. A trusted system must be able to record the occurrences of security-relevant events in an audit log. The capability to select the audit events to be recorded is necessary to minimize the expense of auditing and to allow efficient analysis. Audit data must be protected from modification and unauthorized destruction to permit detection and after-the-fact investigations of security violations.”
Clearly, the non-repudiation of audit logs is an important aspect of a system, and conventions around logging should be designed to be of adequate depth and granularity, and resilient against forging and deletion.
But what’s true for audit logs isn’t true for every single aspect of every single software product.
Suppose you are threat modeling a smart appliance, like a smart toaster, and you want to make a simple change with little security impact, perhaps extending its capabilities to handle raisin bread. What are your repudiation concerns? What would that even mean? Someone fakes a raisin? The question is trivial, and pondering it is not a great use of time.
A little time spent deciding what actions really warrant logging is time well spent. Applying a blanket repudiation standard to every system element, on the other hand, is tedious. By extension, tools that alert to every form of threat every time you make an adjustment to your architecture are tedious. A tool should be able to measure the threat at the proper scale. Tool output should be non-punitive.
Threats Can Be Features
Moreover, sometimes repudiation is not an attack but a feature. Consider repudiation in the following system contexts: ballot secrecy, civil-rights-related anonymity, digital cash, drive encryption.
For these systems, the implementation of some non-repudiation controls is antithetical to the business goal of the system.
Similarly, many systems offer user-impersonation features for support purposes, basically spoofing-as-a-service. Such functionality must thread the needle of security attention carefully. Uniformly treating all forms of spoofing as threats is incorrect.
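One way such a feature threads that needle is by keeping the real actor and the impersonated subject distinct in the audit trail, so the spoofing is sanctioned but never anonymous. The sketch below illustrates the idea; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ImpersonationSession:
    """A sanctioned spoof: the subject is impersonated, but the
    real actor and their reason are recorded, not hidden."""
    actor: str    # the support engineer actually acting
    subject: str  # the user being impersonated
    reason: str   # e.g. a support-ticket reference

    def audit_line(self) -> str:
        ts = datetime.now(timezone.utc).isoformat()
        return (f"{ts} IMPERSONATE actor={self.actor} "
                f"subject={self.subject} reason={self.reason}")

session = ImpersonationSession("support-42", "alice", "ticket-1234")
print(session.audit_line())
```

A tool that flags every spoofing capability as a defect would miss that the accountability lives in the audit line, not in forbidding the feature.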
Should tooling let users treat threats as security features? Maybe. These are edge cases. Perhaps this is a nice-to-have. It would suffice to have a tool treat its recommendations as suggestions for consideration.
Threat modeling is a time-consuming process and deserving of as much automation as we can throw at it. The teams making the current generation of tooling are right to be proud of their products. But these tools have limitations to be kept in mind, whether you are building or buying them.