Software Security Insurance

Matt Konda


Last week, a well-established application security company (one I respect) announced the availability of a $1,000,000 insurance policy covering breach-related costs for applications on which it performs security source code reviews.  I assume the idea is that the review carries more value if it has some financial assurance behind it.  Some folks who are cornerstones of the application security community, like Jeremiah Grossman, voiced strong support.

On the one hand, I think this is innovative.  As a consumer or business partner, I would like to know that a vendor is going to stand behind their work.  This type of guarantee suggests that.

On the other hand, I think this is an untenable, even counterproductive, approach to real-world software security.  There are a few reasons, which I’ll lay out in this post.

Double Down Reaction

Some online reaction (also from people I really respect) suggested that the software companies themselves should provide this kind of quality assurance.  Why should the security vendor offer this when the software company should be offering it to begin with!

This also resonates.  Let’s just build secure software in the first place and stand by our products, right?


But as a developer (or a boutique shop delivering a whole system), I would never sign a contract holding me responsible for security issues arising from the software I was building.  The reasons are myriad:

  1. On any non-trivial project, there are numerous developers with different skill levels and training
  2. Tools don’t actually catch many classes of security issues, let alone all the cases within the classes they do catch
  3. I’m using libraries, and I don’t have time to read the code or even the EULAs for all of them
  4. I’m using 3rd party services I can’t control
  5. I’m using compute and storage I can’t control
  6. I don’t get to interview the IT, Support and other people who will interact with the system
  7. Chances are strong that we haven’t funded all the infrastructure the design calls for
  8. My window on the project is almost always limited in time or to parts of the system
  9. There is no way we can get contracts in place that spell out who is liable for what within a large, complex system

Here’s another big one:  there are types of vulnerabilities even the security ninjas don’t know about yet.  As of what point in time do I get a free pass because nobody knew XYZ wasn’t secure yet?  It’s all a big moving target.

Believe it or not, even though I’m questioning this, I’m a developer who is highly committed to security.  I’m going to do my best.  I’m never going to knowingly introduce an issue.  I’m going to fight to do the right thing.  But my job is already hard.  Like I said back in Builders Vs. Breakers at AppSec in Austin in 2012:  nobody is giving me a bonus for a secure system.  OK, hardly anybody.

The Cynic

The cynic in me believes that there will be a huge increase in CyberInsurance and that this whole offer is self-serving.  We can offer you an insurance assessment, and oh, by the way, conveniently, we sell insurance too.

This in a day when I have to pay an extra fee to get a reasonable warranty on electronics at any major big box store.  I’m not sure quality guarantees (or insurance, but I’ll get to that next) work the way people expect them to.  Not to mention, most software I use still has bugs in it.  We’re not even close to being able to hold software firms accountable for quality, let alone security.  Most contracts I see use language like “professional and workmanlike” to describe the level of quality.  I realize lawyers think this sets a standard, but I’m not sure there is any broadly shared idea of what that means in the software industry.

Insurance and Accountability

Sometimes with things like this, the devil is in the details.

I’ve seen well-insured neighbors lose their house to a fire and then fight tooth and nail to get back some portion of the costs to rebuild.  And that is a relatively well understood market, where the cost of the house and of the materials to rebuild can be estimated fairly easily.  Furthermore, legal proceedings to resolve differences are probably deterministic at this point because there are precedents.

If you are buying CyberInsurance, did you expose every ugly truth to the insurer during their assessment?  If not, will those become issues that complicate policy review later?  With the general level of fantasy I see in the industry around standards and compliance, I’m skeptical.

Ultimately, insurance exists because the high cost to an individual can be spread across a much larger group and the companies issuing policies generally understand the chances for different events – and they make money.

Do we think the Cybersecurity market is stable enough to make really safe bets?  I’m not sure I do.

No Choices

Another concern is that most consumers don’t really think of themselves as having choices related to security.  My banking app may not be secure, but I can’t choose another one.  Most social apps are used without any clear cost structure.  I can choose to install a game or a desktop tool or use a SaaS-based CRM, but how am I really supposed to reason about security for any of these choices?

Take this to its logical conclusion and it would seem that only the biggest and richest companies will be able to build secure apps.  Hat tip to Wendy Nather for her article about living below the security poverty line.  I’m worried that the innovation in the system will be limited to small, edgy applications.


I’m not going to take a stand one way or the other on regulation in general.  Whether regulations are good or bad always depends.  That said, in areas that are highly complex or change quickly, my observation is that regulations are hard to follow and lose value very fast.  I believe software security is just such a domain.  It is so hard to reason about that even security experts and developers usually know only a small part of it.  We lack tools to assess security in any kind of consistent or meaningful way.

Alternative:  Markets

I’m sitting in an airport as I write this.  My phone, the plane, the food ... all represent healthy businesses that strive to keep me coming back, because they know they won’t have customers if they mess up.  Sure, they may have insurance, but they’re there to make money, and people can reason about how what they pay translates into quality or security.  The better food costs more.  Simple as that.

When it comes to software, or IT in general, rather than regulate or insure, I would argue that we need to find ways to make the difference in security visible so that people buying can reason about it.  Then it can factor into prices and businesses can rightly prioritize it.

Suppose we invested in developers and training to address this issue and genuinely worked to build a secure system in the first place?  When the market rewards that, we’ll see it.

AppSec Qualifications

Matt Konda

At Jemurai, we often find ourselves in situations where a company wants to build its own application security program but doesn’t really know how.  That’s a common and very understandable problem given the trends in the industry (increasing focus on app security) and the inherent complexity of doing application security well.  We take great pride in teaching and coaching organizations like these to build successful programs.  Inevitably there comes a point where they want to hire someone to “run AppSec.”  Often, we’ll be asked for feedback on resumes or candidates.  This happens often enough that I wanted to take a minute to write down some of the things we’ve learned and how we approach these situations.

Our Point of View

We fundamentally believe that to succeed with application security, a team needs to have development & SDLC expertise, excellent communication skills and security knowledge.  Tools are a part of the package, but only a part, one we use to save time so we can focus on the hard problems as a team of people.  The number of times we have seen “last mile” problems, where issues are known but can’t be fixed for organizational reasons, is too many to dismiss.  Emphasize relationships, communication and the human element of making security fixes happen, and the tools part will come along easily.  Starting with expert developers who can win the respect of the technical team while engaging in a non-invasive way is our goal.


Usually, one of the first questions I’ll ask in this situation is whether there is anyone internal who would be a good fit.  Most medium to large development groups have a few developers who are proven but might be looking for their next thing.  Giving them a way to pivot into application security can be a great way to leverage their organizational knowledge, domain expertise and familiarity with the current process.  If they’ve been building things, they almost certainly understand the SDLC and the main programming languages in use.  If they understand the domain, they can communicate with developers and business stakeholders.  Provided that they have the disposition to work with a variety of teams and communicate constructively, this is the kind of person we can coach into being a core contributor for an AppSec team.


Not surprisingly, we have also made mistakes and seen things go sour while building a program.  The following are some things to watch out for:

  • If we build around someone who is not experienced enough, developers will run circles around them.  This could be technical or process (e.g. JIRA) knowledge.
  • If we don’t emphasize both written and verbal communication, we may end up with a technically qualified candidate that can’t effectively build a program.
  • Empathy is foundational to organizational change.  Weaving security into a development process can’t be done without meeting developers part way.


In order to assess candidates, we pulled together some questions that reflect our point of view more concretely.  Of course the list is imperfect, but we’re sharing it here because we think it is interesting.

Internal question:  Do we want the person to be able to get their hands dirty?  Our answer would generally be “yes”.

General questions:

  • Any mention of OWASP?  SANS?
  • Any experience with AppSec Tools?
    • Pick your names of popular tools in the industry.  As we said, tools are secondary, but a lack of them can reflect limited hands-on experience
  • Any mention of programming languages on their resume?
  • Any product development experience?  Does it sound like they grew out of that?
  • Does the candidate have recent experience working with developers?  Do they seem to enjoy that?
  • Does the candidate know about any security standards?  ISO, NIST, Top 10, ASVS.
  • SDLC Experience?
    • Agile / Scrum / JIRA?
    • Where can security fit into an SDLC?
      • Looking for an answer that suggests familiarity with the SDLC and emphasizes phases OTHER than testing at the end, waterfall-style.
  • Can the candidate write code?
    • What is their language of choice?
    • IDE?
    • What have they built?
    • Any open source contributions?
  • How would they do code review?
    • Looking for some strategy for digesting code and following logical paths, e.g. data flow, authorization, input validation, etc.
    • What would they do with a finding?  How have they interacted with other teams in the past around issues?
      • Looking for a discussion/relationship based approach – not a cold handoff via a spreadsheet or tool.
  • Can the candidate talk about architecture?
    • Is any of their experience relevant to web development?
    • Does the candidate know about CI/CD?  Cloud?  Mobile?
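To make the code review questions above concrete, here is the kind of finding we’d hope a candidate could spot and explain.  This is a minimal Python sketch (the table and function names are hypothetical, purely for illustration): tracing data flow from untrusted input into string-built SQL, and the parameterized fix.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: untrusted input is concatenated into the SQL text,
    # so a value like "x' OR '1'='1" changes the query's meaning.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query keeps data out of the SQL grammar.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
    # The unsafe version returns every row for a crafted input,
    # while the parameterized version correctly returns nothing.
    print(find_user_unsafe(conn, "x' OR '1'='1"))
    print(find_user_safe(conn, "x' OR '1'='1"))
```

What we listen for is not just “use parameterized queries,” but the data-flow reasoning: where did the value come from, and where does it cross a trust boundary?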

2017 Strategies

Matt Konda

As we have worked with clients in the back half of 2016, we have started to help them think about their 2017 strategies.  There are a couple of major themes we see again and again that are interesting.


We commonly see that getting budget allocated is reactionary.  Often next year’s budget is 2016 + 10%.  We generally assert that our threat models have been wrong for years, and that our adversaries, our dependence on vendor tools and other key assumptions mean this year’s budget was so far off that there is no way a 10% change will fix it.  We tend to suggest that our partners leading security teams stop and develop their own mental model for how to come up with a budget, and then ask for that – without reference to past years or numbers.
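As a purely illustrative sketch (every number below is made up), a bottom-up budget built from the activities a program actually needs can land well away from last year plus 10%:

```python
# Hypothetical numbers only: compare "last year + 10%" incremental
# budgeting with a bottom-up estimate built from the activities
# the program actually needs.
last_year = 500_000
incremental = round(last_year * 1.10)  # the "2016 + 10%" default

bottom_up = {
    "training": 80_000,
    "code_review_tooling": 120_000,
    "external_assessments": 150_000,
    "dedicated_appsec_hire": 180_000,
    "incident_readiness": 90_000,
}
needed = sum(bottom_up.values())

print(f"incremental: ${incremental:,}")
print(f"bottom-up:   ${needed:,}")
print(f"shortfall:   ${needed - incremental:,}")
```

The point is not the arithmetic; it is that the bottom-up list forces the conversation about what the program needs, rather than anchoring on last year’s number.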

Maturity Model

We are big fans of the OWASP Maturity Model conceptually.  We find that in many cases it can be helpful to step back even further to think about top level areas in a simpler way.  As an example, people that come from compliance tend to favor spending in certain ways – while people that come from network tools tend to favor other kinds of spending.  We can help make sure there is a broad mental model so that investments can be conscious tradeoffs and not a reflection of past biases.

To that end, we have developed a modeling process where we build a dashboard and talk through it with clients.
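As one hedged sketch of what such a dashboard might compute (the area names and scores below are hypothetical, not our actual model), scoring each top-level area against a target surfaces where the investment gaps are:

```python
# Hypothetical maturity dashboard sketch: score each top-level area
# 0-3 and compare current state against a target to rank the gaps.
AREAS = {
    "Governance":     {"current": 1, "target": 2},
    "Design":         {"current": 0, "target": 2},
    "Implementation": {"current": 2, "target": 3},
    "Verification":   {"current": 1, "target": 3},
    "Operations":     {"current": 1, "target": 2},
}

def gaps(areas):
    # Rank areas by how far current practice lags the target.
    return sorted(
        ((name, s["target"] - s["current"]) for name, s in areas.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

for name, gap in gaps(AREAS):
    print(f"{name:15s} gap={gap}")
```

A dashboard like this makes the tradeoffs visible, so investment becomes a conscious choice across areas rather than a reflection of whichever background (compliance, network tools) the team came from.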