I read two things last week that made me think in different ways and planted the seed for this blog post. The first was Dan North's post "McKinsey Developer Productivity Review," a response to a McKinsey "insight" that attempted to tackle some pretty hard problems about measuring developers and understanding the development process. The second was "Software Supply Chain Vendor Landscape" by Clint Gibler and Francis Odum, which, true to its title, captured a useful overview of security tooling around our software supply chain.
Together, those two articles got me thinking about how developers relate to security tools today.
One thing I took away from Dan North's post was a healthy reminder of the nuance in the art and science of software engineering. Several times he reminded readers that software developers should spend much of their time thinking about how to solve the problem, not just continually writing code and running tools. On a successful team, it also means spending a lot of time mentoring, training, and guiding others.
In security, people often oversimplify what they think developers should do. Just run this tool, do something with the output, and you'll be more secure! The problem with this point of view is that, in my experience, every AppSec tool produces a huge amount of noise - that is, information I really shouldn't act on. Not only that, it is often hard to get the tools running in a meaningful way in the first place.
An interesting question to ask is how security can engage with the rich problem-solving part of software development. I used to think about this back when we were talking about Rugged Software, before DevOps even existed as a popular concept. But even that was more of a mindset than a practical guide. I think there is really no substitute for having folks think hard about security while defining software requirements. All kinds of important problems surface here and not in any of the tools I've seen.
None of these problems get solved with tools or automation, and all of them require developers to stop and think.
More importantly, I believe that for all practical purposes they always will. Some security problems are inherently conceptual and therefore harder for a tool to detect. It might be interesting to see whether you can train AI to identify common pitfalls, but I don't personally believe such solutions will exist on a near-term horizon relevant to developers.
So committing to training, defining security requirements, budgeting security time into our roadmaps, and taking time to think about security as part of software development is an important and worthy cause. Maybe there should be a security jam session once a week.
What about automation?
My general impression is that Application Security practitioners think of security automation as an unequivocal Good Thing™. If you can automate something, then you should. Often, security organizations adopt this view and start with "let's do the security we can automate first". I believe this position is fraught for several reasons.
Of course, this position also plays to vendors and high-growth business models. If something requires human time (e.g. the time spent thinking about a problem to understand it better), then it isn't going to be conducive to a SaaS-based, hockey-stick outcome. The result is that a lot of tools in the landscape focus on problems that are reducible (another concept Dan North touched on) to simpler problems - or at least problems they claim are reducible. Those that aren't tend to be noisy.
Some good questions to ask (as a developer) when you think about adopting security tooling center on whether you understand what the tool checks and whether your team could find those issues themselves by reading the code. In my experience, if you can't mentor your team to do these kinds of things through their own understanding of the code, you may be heading toward a hard time when you turn the tool on.
I also advise people to build the automation they can imagine first and then plug in commercially excellent tools, instead of the other way around. Too often, we build crappy automation that development teams won't adopt and pair it with false positives that make the whole thing fall down.
Security tools are great for developers where they solve a reducible problem. Meaning, they make something definite (but otherwise hard or time-consuming) very fast. A reducible problem might be as simple as naive software composition analysis (SCA): are you referencing an old, vulnerable version of Log4j? That is an easy question to answer quickly in most cases. While I was writing this, I added a license check for LGPL (the only license you probably need to actually worry about) to the open source Crush tool so that we can instantly see if a bad thing is present.
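To make "naive SCA" concrete, here is a minimal sketch of the idea - not the actual Crush implementation; the file name and the version cutoff are my assumptions - that scans a Gradle/Maven-style dependency listing for pre-fix versions of log4j-core:

```python
#!/usr/bin/env python3
"""Naive SCA sketch: flag old log4j-core versions in a dependency listing.
Illustrative only - the cutoff and file format are simplifying assumptions."""
import re
import sys

# The Log4Shell-era CVEs were fixed in log4j-core 2.17.1 on the 2.x line;
# anything below that (including 1.x, which is end-of-life) gets flagged.
FIXED = (2, 17, 1)
PATTERN = re.compile(r"org\.apache\.logging\.log4j:log4j-core:(\d+)\.(\d+)\.(\d+)")

def vulnerable(path):
    """Return every log4j-core coordinate in the file older than FIXED."""
    hits = []
    with open(path) as f:
        for line in f:
            m = PATTERN.search(line)
            if m and tuple(map(int, m.groups())) < FIXED:
                hits.append(m.group(0))
    return hits

if __name__ == "__main__":
    bad = vulnerable(sys.argv[1] if len(sys.argv) > 1 else "dependencies.txt")
    for hit in bad:
        print(f"vulnerable dependency: {hit}")
    sys.exit(1 if bad else 0)
```

The point is that the question is definite and the answer is fast - exactly the kind of reducible problem a tool should own.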
Security tools also work great for developers when they are built right into the tools they are already using. If my command-line tool automatically does (quick) work when I do things, that is a bonus for me. For example, if I can check for secrets in a pre-commit hook, that is sweet.
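As a sketch of what that can look like - this is a toy, not a replacement for a dedicated scanner like gitleaks, and the regexes are illustrative assumptions - a git pre-commit hook might grep the staged diff for secret-shaped strings:

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: save as .git/hooks/pre-commit and make it executable.
Scans only the lines being added in this commit for secret-looking strings."""
import re
import subprocess
import sys

# Two example patterns: AWS access key IDs, and generic key/secret/password
# assignments with a quoted value. Real scanners ship far richer rule sets.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

# Unified diff of what is staged; --unified=0 keeps it to just changed lines.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

findings = [
    line for line in diff.splitlines()
    if line.startswith("+") and not line.startswith("+++")
    and any(p.search(line) for p in PATTERNS)
]

for line in findings:
    print(f"possible secret in staged change: {line[1:].strip()}")
if findings:
    print("commit blocked; use 'git commit --no-verify' to override")
sys.exit(1 if findings else 0)
```

Because it runs only on the staged diff, it stays fast enough that developers barely notice it until it catches something.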
The more I make this true at an infrastructure level, the better. For example, if I can preferentially pull dependencies from a registry that hosts vetted, secure versions, we improve the overall outcome with minimal changes required from the developer. If I can check Terraform with OPA before deploying, that seems like a good thing - as long as it doesn't create a lot of noise.
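Here's a rough sketch of what that OPA gate could look like in a deploy script. It assumes the `terraform` and `opa` binaries are on the PATH, that `terraform plan -out=tfplan` has already run, and that a local `policy.rego` defines a `deny` rule in a `terraform` package - all of those names are my assumptions, not a standard:

```python
#!/usr/bin/env python3
"""Sketch: fail a deploy if OPA policy denies the Terraform plan."""
import json
import subprocess
import sys

# Render the saved plan as JSON so OPA can evaluate it as input.
plan = subprocess.run(
    ["terraform", "show", "-json", "tfplan"],
    capture_output=True, text=True, check=True,
).stdout

# Evaluate the (assumed) data.terraform.deny rule against the plan.
result = subprocess.run(
    ["opa", "eval", "--format", "json",
     "--stdin-input", "--data", "policy.rego", "data.terraform.deny"],
    input=plan, capture_output=True, text=True, check=True,
).stdout

out = json.loads(result)["result"]
denies = out[0]["expressions"][0]["value"] if out else []

for msg in denies:
    print(f"policy violation: {msg}")
sys.exit(1 if denies else 0)
```

Wired into CI before `terraform apply`, this blocks a deploy only when the policy produces explicit deny messages, which keeps the noise floor low.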
Of course, if you have the resources to run a great Application Security team at your company, then these tools can be amazingly useful. You can dig in and really understand the output. You can tune them to work well. You can use them where you want and ignore them where you don't.
Some of the opportunities I see to improve how we think about security and software development are so fundamental, and so against the grain of the industry, that they make me seem like a Luddite or anti-automation (which I'm not). They involve empowering developers and giving them time to think about and solve problems ... which just happen to be security problems.
Some specific opportunities include: