It’s a Trap! Avoiding the Security Budget Trap.

Matt Konda

It’s a trap.  You know it’s a trap.  But you don’t know how to avoid the trap.

It is budget season.  You need to start defining your budget for 2019.  There are two main ways I’ve seen this play out.


The Wants approach:  You take a look at your program and think about a couple of tools you want to add.  They’ll cost, what, $100K-$200K each?  Maybe account for some raises, maybe one or two new hires, and we’re already looking at $500K-$1M.  Cool.  That’s a significant increase, let’s pull the trigger.


The Percentages approach:  Most IT budgets increase somewhat year over year as companies grow and expand their investment in IT.  So … let’s just make our security budget = last year’s security budget + 10%.  Maybe we’ll ask for 25% and hope to actually get 10%.

The Trap

The trap is that neither of these approaches gets you anywhere near:

  • The right budget
  • The right argument for the right budget

Without the right argument for the right budget, you’re probably not going to be able to make major changes or, more importantly, the right major changes.


Every year (maybe every quarter) we should be looking at risk.  We should use risk to inform the whole direction and composition of our program.  Our investment in security should be a well-informed function of risk.

Guess what: it’s really hard to understand risk without having conversations with business stakeholders about what would happen if X, Y or Z occurred.  You don’t want to scare them, but you need to be on the same page with them.

Whether we use the Percentages approach or the Wants approach, it is quite likely that our security program will continue to lag well behind where it should be.  That’s because it is likely that your program, as it exists today, is extremely underfunded.  So adding even a good year over year percentage won’t get you there.  What if your security investment should be 20X what it is right now?  How would you even know?


  • A $20M program.  10% = $2M.  That’s a significant program with a significant growth investment by classic standards.
  • But what if the right size of that program is $50M?  You’re still less than halfway there!
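To make the gap concrete, here is a quick back-of-the-envelope sketch.  The dollar figures mirror the bullets above and are purely illustrative:

```python
def years_to_target(current, target, growth=0.10):
    """Years of compound growth until `current` reaches `target`."""
    years = 0
    while current < target:
        current *= 1 + growth
        years += 1
    return years

# A $20M program growing 10% a year takes about a decade to reach
# a $50M right-size, and that assumes the target itself stands still.
print(years_to_target(20_000_000, 50_000_000))  # -> 10
```

The point is not the specific numbers; it is that incremental growth compounds far too slowly to close a large gap.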

The same argument applies for larger and smaller budgets by the way.

So let’s say your program is underfunded.  One way to deal with that is to whine about not having enough resources.  Another is to go to your stakeholders and ask them to explicitly accept the risks they are facing.  Like, with a specific monetary figure in mind.


The fundamental nature of the trap is that the common approaches to sizing a security program lend themselves to incremental additions of resources in new areas, but not corrections in existing ones.  Some tools need to be retired, but rarely is the person running those tools going to advocate for that.  You can be sure the company selling them wants to keep them there.  And sometimes, if you try to re-allocate resources from one area to another, you risk losing them entirely, so the whole program is built on a foundation that can never be shifted.

Learn To Negotiate

Sometimes whole sectors shift.  Consider AV.  What should you really be spending on AV?  Firewalls?  If your investment in AV and firewalls hasn’t changed over the past few years, you are probably missing something, and that should be reflected in the contracts you are signing.  Think about how to secure AWS.  Maybe that actually has more to do with your internal DevOps practices than with an external vendor.  (Shocking, I know!)  If you know the value of a given service and the risk of not having it, you can negotiate cleanly with vendors and stakeholders to get your best outcome.

By the way, this doesn’t mean the same thing as “shake vendors down”.  We’ve seen a few scenarios where it was obvious after the fact that the company was just talking to us to get a better price from a competitor, or simply to get the lowest price.  While you should absolutely do your best to know the value of what you are paying for and hold your vendors accountable, not all services can be compared apples to apples.  There are few security firms that can truly say they partner with developers the way we do, for instance.  We don’t do physical tests or hardware hacking at all.  If you didn’t consider the right factors for differentiation when you made the choice, they can’t be part of your evaluation.  Beware of anyone who says they are good at everything…


Budgeting is hard, but maybe not for the reason many people think.  Learn the mechanics of it early to empower yourself to understand how you got to the present moment.  The hard part of budgeting is the negotiation and prioritization.  Be up front and understand what your budgets mean.  Be able to translate that into a narrative or a story for your boss and your stakeholders.  Few companies have great stories here, but it might be the most important place where we can make a huge difference.



Equifax: What’s the Score

Matt Konda


Late last week (around 9/15) it was reported that the CIO and CSO at Equifax “resigned”.  Equifax stock is down by around 30%.  The FTC is launching an investigation, and findings and settlements are likely to be in the hundreds of millions of dollars or more.  Clearly there will be short- and medium-term impacts from Equifax’s security bumbling.  In this post, we present a longread with our thoughts about Equifax.


One of the first reactions people had was to talk about this Atlantic article and ask the question:  “Isn’t our data already out there?  What can we actually do?”  

Consumer data breaches have become so frequent, the anger and worry once associated with them has turned to apathy. So when Equifax revealed late Thursday that a breach exposed personal data, including social-security numbers, for 143 million Americans, public shock was diluted by resignation.

I think this article is bordering on irresponsible.  Yes, your data may be out there.  Yes, you should not be shocked by another breach.  But absolutely, it is incumbent upon us to watch our credit and work to improve security so that this doesn’t continue to be expected.

There is a term I like to use for where we are as an industry:  technical debt.  In Agile development projects, technical debt is the term we use for work we put off in order to get the main project done faster.  It could be something like testing, or refactoring to ensure we are building a more robust system.  We know we’re prioritizing delivering the system and cutting some corners.  Those corners we cut are technical debt that we may have to circle back and pay down.  Often this happens (we circle back and deal with the debt) when a company gets acquired or partners with a larger firm.  Many times, this technical debt never actually gets paid back.  I would argue that in most corporations, the vast technical debt waiting to be paid off represents one of the biggest overall security issues.

Another way to look at this is through the lens of budgets.  If a company increases its IT budget 5% each year, it may seem to be growing and investing.  If, after 20 years, the IT security budget starts to grow at 2x the pace of the regular IT budget (so 10% each year), then the leadership of the company feels they are being aggressive and strongly supporting security.  But from a security perspective, we were underinvesting all along, so even incremental 10% increases in budget leave us woefully underfunded to truly provide security.
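A quick simulation makes the point.  The growth rates mirror the paragraph above; the starting 2% security share is an assumption chosen for illustration:

```python
# Illustrative assumption: security starts at 2% of a 100-unit IT
# budget, IT grows 5% a year, and security grows twice as fast at 10%.
it_budget, sec_budget = 100.0, 2.0
for year in range(20):
    it_budget *= 1.05
    sec_budget *= 1.10

share = sec_budget / it_budget
print(f"after 20 years, security is {share:.1%} of IT")  # -> about 5.1%
```

Even growing at double the IT rate for two decades, the security share barely crosses 5% of IT, still nowhere near a right-sized level if the program started out deeply underfunded.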

We need to be able to come up with an independent model for what we should be doing for security and then fund the right work.  Usually that’s going to cost a lot more than people want.

What Can Consumers Do?

Consumers can do a lot to keep themselves safe – but it will be a fair amount of work and takes time, energy and money.

  1. Consumers can be smart about what services they use.  Ultimately, when we put our information online – especially for free – we should know that we are trusting a company with information that can be used against us.  So … maybe don’t let your phone know where you are all the time for any application … don’t give arbitrary numbers of companies your social graph …
  2. Watch your statements.  From a pure financial perspective, there’s no substitute for your own due diligence and fast reporting.  Use a credit card that has known and favorable process for addressing claims of false charges.
  3. Lock your credit.  If you want to try to ensure that the data Equifax had can’t be used to open new accounts in your name, you can lock your credit.  It is not free and it is time consuming.  A good writeup that explains the overall ecosystem is here:

The FTC launching an investigation is a good sign for consumers because it suggests that Equifax and other companies handling this type of data can expect punishments (fines) that are likely to spur them to invest more in security.

Far be it from me to say this – but it is possible that the whole credit reporting system should be re-envisioned.  Not to mention identity and SSN.

What Can Companies Do?

For companies, some key takeaways from the Equifax breach include:

  • Incremental improvement (10% budget increases) may not be enough.  IT and Security leadership are clearly vulnerable in case of failure.
  • Security should be baked into SDLC and product delivery processes.  Your products are living works and need to be maintained – security needs to be part of that.
  • Keep an inventory of your applications to avoid surprises and make sure you don’t lose track of technical debt.

If we address security from the beginning and as a first class part of every project, we can design for security, test for security, write requirements for security and ultimately deliver more secure applications.

As incidents like Equifax show, consumers and regulators expect us to be thinking about privacy and security.

A significant part of Jemurai’s business is trying to help companies reshape their application delivery to be secure.  Every client is different.  But we see a surge in demand for security that is only increasing.

What Was the Issue?

From what we can tell, the root issue was an application built with an old version of Struts.  So the immediate action Equifax could have taken would have been to keep their libraries up to date.  Of course, whenever the answer is “to patch” it suggests an underlying vulnerability that is interesting in its own right.

Behind the immediate action of staying up to date, the vulnerability had to do with remote code execution.  In layman’s terms, that means the vulnerability allowed an attacker to get their own code to run on the server.  There are a variety of ways this happens in the real world these days … sometimes applications take strings (a bunch of characters) and process them without checking that they fit certain rules.
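As a hedged sketch of what "checking that strings fit certain rules" can look like, here is an allowlist check applied before a value ever reaches a downstream parser.  This is our illustration, not the actual Struts fix; the pattern and function names are made up for the example:

```python
import re

# Only accept something shaped like a plain type/subtype pair;
# anything with expression syntax or other surprises is rejected.
CONTENT_TYPE_RE = re.compile(r"^[A-Za-z0-9!#$&^_.+-]+/[A-Za-z0-9!#$&^_.+-]+$")

def check_content_type(value: str) -> str:
    """Raise if the string does not fit the expected rules."""
    if not CONTENT_TYPE_RE.fullmatch(value):
        raise ValueError("string does not fit the expected rules")
    return value

check_content_type("multipart/form-data")   # passes
# check_content_type("%{(#x=...)}")         # would raise ValueError
```

Allowlisting what a value may look like, rather than trying to blocklist every dangerous construct, is the general idea; patching the library remains the actual remediation.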

For more detail, check out our previous blog post and video.


Another swirling discussion has been about the CSO, whose LinkedIn page cited a music degree as a qualification.  Many folks questioned that as a qualification.  I think this is basically a red herring argument.  The CSO may or may not have been qualified, but the academic background is unlikely to have factored into the result.  Many great programmers come from music backgrounds – it is actually something some companies look for.

The more important question here is how the Equifax CEO and Board have responded to requests for funding for security.

Likely Long Term Impact

It is very hard to say what the long term impact of the Equifax breach will be.  In the case of Target, if we look at the stock chart it is hard to even know when the infamous breach occurred.  If we look closely we can see a dip around December 2013 but changes in the stock price are largely unrelated to security events and it certainly didn’t destroy the value of the company the way many thought that it might.

Equifax, on the other hand, is at least in some sense specifically responsible for consumers’ security.  There could be no doubt that Equifax had sensitive data that would be coveted by schemers.  It is possible that they will be held accountable to a different standard than Target was.  I wouldn’t hold my breath.

The Score

As humans, it is natural to simplify events, imagine them as a game and maybe even as having an end or closure.  So, if this is the end of the game, I would say that the score is Equifax 213 – Hackers 25.  

Commercially, Equifax has still done a lot for its shareholders.  It is most likely to continue to see success as a business.  On some level, Equifax is still winning.

On the other hand, this is a very significant breach.  Nobody probably would have thought that the Equifax attackers could get 25 points.  That might represent a surprising vulnerability.  Regulators will likely add to that opposing score as they impose oversight and fines.

In order to protect their lead, Equifax is going to need to spend a lot more money and time ensuring they don’t give up more points.  The game may slow down.  The profit may decline.  This will probably have ripple effects with their immediate competitors as well.

In the scheme of things, it is a test of our collective will to improve and sacrifice.  If we can envision better systems for identity and more inherently secure systems for credit scoring (blockchain anyone?), there are opportunities to dramatically improve security while realizing the benefits of an increasingly technologically advanced and connected world.

Of course, the game will go on.  Equifax may give up a lot of points, but if they keep their business running – it is likely that they’ll still come out with a higher score and “win”.  In some ways, I just hope the game isn’t that much fun to watch.

What is Security Engineering?

Matt Konda

Security Engineer is an interesting title.  Across our customers, it has different meanings to different people.  At one end of the spectrum, it is a synonym for a security analyst, which we think of as a skilled resource focused on a very specific portion of security – maybe monitoring the SIEM, maybe running static analysis, maybe feet on the ground doing vulnerability management.  At the other end of the spectrum, security engineering is software engineering around security related features.

If you know Jemurai well, you know we focus at the engineering end of the spectrum.  We’re developers at heart, working on security.  We’ve increasingly engaged in projects where writing code to do security has been foundational to the role we’re in.

As we have grown, we have started to extract some of the most common of these engineering tasks – things like:

  1. Improving application security signal
  2. Managing secrets in Vault
  3. Broadening input validation
  4. Data protection in big data environments
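As a flavor of the first item, here is a minimal sketch of what "application security signal" can mean in code.  The event names and fields are ours, purely illustrative, not from any client system:

```python
import json
import logging

security_log = logging.getLogger("security")

def security_event(event: str, **details) -> str:
    """Emit a structured, machine-searchable security event."""
    line = json.dumps({"event": event, **details}, sort_keys=True)
    security_log.warning(line)
    return line

# Called from application code at security-relevant moments.
security_event("auth.failure", user="alice", source_ip="203.0.113.7")
```

The value is that a SIEM or log pipeline can filter and alert on these events precisely, instead of grepping free-form log text.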

Clients have appreciated not just advice or gap analysis, but implementation help doing this work.  It turns out to be extremely rewarding as well.


Software Security Insurance

Matt Konda


Last week a well established application security company (that I respect) announced the availability of a $1,000,000 insurance policy covering breach-related costs for applications it performs security source code review on.  I assume the idea is that the review has more value if it has some financial assurance behind it.  Some folks who are cornerstones of the application security community, like Jeremiah Grossman, voiced strong support.

On the one hand, I think this is innovative.  As a consumer or business partner, I would like to know that a vendor is going to stand behind their work.  This type of guarantee suggests that.

On the other hand, I think this is an untenable and even negative approach in the real world for software security.  There are a few reasons.  I’ll lay them out in this post.

Double Down Reaction

Some online reaction (also from people I really respect) suggested that the software companies themselves should provide such quality assurance.  Why should the security vendor offer this when the software company should be standing behind its product to begin with!

This also resonates.  Let’s just build secure software in the first place and stand by our products, right?


But as a developer (or boutique shop delivering a whole system), I would never sign a contract that would hold me responsible for security issues arising from the software I was building.  The reasons are myriad:

  1. On any non-trivial project, there are numerous developers with different skill levels and training
  2. Tools don’t actually catch many classes of security issues, let alone all the cases within the classes they do catch
  3. I’m using libraries and I don’t have time to read the code or even the EULAs for all of them
  4. I’m using 3rd party services I can’t control
  5. I’m using compute and storage I can’t control
  6. I don’t get to interview the IT, Support and other people that are going to interact with the system
  7. Chances are strong that we’ve not funded all the infrastructure indicated
  8. My window on the project is almost always limited in time or to parts of the system
  9. There is no way we can get contracts in place that express the details of who is liable for what within a large complex system

Here’s another big one:  there are types of vulnerabilities even the security ninjas don’t know about yet.  As of what point in time do I get a free pass because nobody knew XYZ wasn’t secure yet?  It’s all a big moving target.

Believe it or not, even though I’m questioning this, I’m a developer that is highly committed to security.  I’m going to do my best.  I’m never going to knowingly introduce an issue.  I’m going to fight to do the right thing.  But my job is already hard.  Like I said back in Builders Vs. Breakers at AppSec in Austin in 2012:  Nobody is giving me a bonus for a secure system.  OK, hardly anybody.

The Cynic

The cynic in me believes that there will be a huge increase in CyberInsurance and that this whole offer is self-serving.  We can offer you an insurance assessment, and oh by the way, conveniently, we have insurance too.

This is in a day when I have to pay an extra fee to add a reasonable warranty to electronics at any major big box store.  I’m not sure quality guarantees (or insurance, but I’ll get to that next) work the way people expect them to.  Not to mention, most software I use still has bugs in it.  We’re not even close to being able to hold software firms accountable for quality, let alone security.  Most contracts I see use language like “professional and workmanlike” to describe the level of quality.  I realize lawyers think this sets a standard, but I’m not sure there is any broadly shared idea of what that means in the software industry.

Insurance and Accountability

Sometimes with things like this, the devil is in the details.

I’ve seen well insured neighbors lose their house to a fire and fight tooth and nail to get some portion of the costs to rebuild back.  That is a relatively well understood market where the cost of the house and of materials to rebuild can be estimated fairly easily.  Furthermore, legal proceedings to resolve differences are probably deterministic at this point because there are precedents.

If you are buying CyberInsurance, did you expose every ugly truth to the insurer during their assessment?  If not, will those become issues that complicate policy review later?  With the general level of fantasy I see in the industry around standards and compliance, I’m skeptical.

Ultimately, insurance exists because the high cost to an individual can be spread across a much larger group and the companies issuing policies generally understand the chances for different events – and they make money.

Do we think the Cybersecurity market is stable enough to make really safe bets?  I’m not sure I do.

No Choices

Another concern is that most consumers don’t really think of themselves as having choices related to security.  My banking app may not be secure but I can’t choose another one.  Most social apps are used without any clear cost structure.  I can choose to install a game or desktop tool or use a SaaS based CRM but how am I really supposed to reason about security for any of these choices?

Take this to a logical conclusion and it would seem like only the biggest and richest companies will be able to build secure apps.  Hat tip to Wendy Nather for her article about living below the security poverty line.  I’m worried that the innovation in the system will be limited to small, edgy applications.


Regulation

I’m not going to take a stand one way or the other on regulation in general.  Whether regulations are good or bad always depends.  That said, in areas that are highly complex or change quickly, my observation is that regulations are hard to follow and lose value very fast.  I believe that software security is just such a domain.  It is so hard to reason about that even security experts and developers usually only know a small part of it.  We lack tools to assess it in any kind of consistent or meaningful way.

Alternative:  Markets

I’m sitting in an airport as I write this.  My phone, the plane, the food … all represent healthy businesses that strive to keep me coming back, because they know they won’t have customers if they mess up.  Sure, they may have insurance, but they’re there to make money, and people can reason about how what they pay translates into quality or security.  The better food costs more.  Simple as that.

When it comes to software, or IT in general, rather than regulate or insure, I would argue that we need to find ways to make the difference in security visible so that people buying can reason about it.  Then it can factor into prices and businesses can rightly prioritize it.

Suppose we invested in developers and training to address this issue and genuinely worked to build a secure system in the first place?  When the market rewards that, we’ll see it.

AppSec Qualifications

Matt Konda

At Jemurai, we often find ourselves in situations where a company wants to build their own application security program but doesn’t really know how.  That’s a common and very understandable problem given the trends in the industry (increasing focus on app security) and the inherent complexity of doing application security well.  We take great pride in teaching and coaching organizations such as these to build successful programs.  Inevitably there comes a point where they want to hire someone to “run AppSec.”  Often, we’ll be asked for feedback on resumes or about candidates.  This happens often enough that I wanted to take a minute and write down some of the things we’ve learned and how we approach situations such as these.

Our Point of View

We fundamentally believe that to succeed with application security, a team needs development and SDLC expertise, excellent communication skills and security knowledge.  Tools are a part of the package, but only a part, used to save time so we can focus on the hard problems as a team of people.  The number of times we have seen “last mile” problems, where issues are known but can’t be fixed for organizational reasons, is too many to dismiss.  Emphasize relationships, communication and the human element of making security fixes happen, and the tools part will come along easily.  Our goal is to start with expert developers that can win the respect of the technical team while engaging in a non-invasive way.


Usually, one of the first questions I’ll ask in this situation is if there is anyone internal that would be a good fit.  Most medium to large development groups have a few developers that are proven but might be looking for their next thing.  Giving them a way to pivot into application security can be a great way to leverage their organizational knowledge, domain expertise and familiarity with the current process.  If they’ve been building things, they almost certainly understand the SDLC and the main programming languages in use.  If they understand the domain, they can communicate with developers and business stakeholders.  Provided that they have the disposition to work with a variety of teams and communicate constructively, this is the kind of person that we can coach into being a core contributor for an AppSec team.


Not surprisingly, we have also made mistakes and seen things go sour while building a program.  The following are some things to watch out for:

  • If we build around someone that is not experienced enough, developers will run circles around them.  This could be technical or process (e.g. JIRA) knowledge.
  • If we don’t emphasize both written and verbal communication, we may end up with a technically qualified candidate that can’t effectively build a program.
  • Empathy is foundational to organizational change.  Weaving security into a development process can’t be done without meeting developers part way.


In order to assess candidates, we pulled together some interesting questions that reflect our point of view more concretely.  Of course it is imperfect, but we’re sharing it here because we think it is interesting.

Internal question:  Do we want the person to be able to get their hands dirty?  Our answer would generally be “yes”.

General questions:

  • Any mention of OWASP?  SANS?
  • Any experience with AppSec Tools?
    • Pick your names of popular tools in the industry.  As we said, tools are secondary, but a complete lack of tool experience can reflect limited hands-on exposure.
  • Any mention of programming languages on their resume?
  • Any product development experience?  Does it sound like they grew out of that?
  • Does the candidate have recent experience working with developers?  Do they seem to enjoy that?
  • Does the candidate know about any security standards?  ISO, NIST, Top 10, ASVS.
  • SDLC Experience?
    • Agile / Scrum / JIRA?
    • Where can security fit into an SDLC?
      • Looking for an answer that suggests familiarity with SDLC and emphasizes aspects OTHER than at the end during waterfall testing.
  • Can the candidate write code?
    • What is their language of choice?
    • IDE?
    • What have they built?
    • Any open source contributions?
  • How would they do code review?
    • Looking for some strategy for digesting code and following logical paths, eg. data flow, authorization, input validation, etc.
    • What would they do with a finding?  How have they interacted with other teams in the past around issues?
      • Looking for a discussion/relationship based approach – not a cold handoff via a spreadsheet or tool.
  • Can the candidate talk about architecture?
    • Is any of their experience relevant to web development?
    • Does the candidate know about CI/CD?  Cloud?  Mobile?
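To make the code review questions concrete, here is the kind of data-flow finding we would hope a candidate can trace and explain.  This is an illustrative snippet we wrote for this post, not code from any client:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # BAD: user input flows straight into the SQL string - injectable.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # GOOD: a bound parameter keeps the input as data, not code.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # -> 2 (every row leaks)
print(len(find_user_safe(conn, payload)))    # -> 0 (no such user)
```

A strong candidate should be able to follow `name` from its source to the query, explain why the first version is exploitable, and then talk about how to raise the finding with the owning team constructively.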

2017 Strategies

Matt Konda

As we have worked with clients in the back half of 2016, we have started to help them think about their 2017 strategies.  There are a couple of major themes we see again and again that are interesting.


We commonly see that getting budget allocated is something that is reactionary.  Often next year’s budget is 2016 + 10%.  We generally assert that our threat models have been wrong for years, and that our adversaries, our dependence on vendor tools and other key assumptions mean that this year’s budget was so far off that there is no way a 10% change will fix it.  We tend to suggest that our partners leading security teams stop and develop their own mental model for how to come up with a budget, and then ask for that – without reference to past years or numbers.

Maturity Model

We are big fans of the OWASP Maturity Model conceptually.  We find that in many cases it can be helpful to step back even further to think about top level areas in a simpler way.  As an example, people that come from compliance tend to favor spending in certain ways – while people that come from network tools tend to favor other kinds of spending.  We can help make sure there is a broad mental model so that investments can be conscious tradeoffs and not a reflection of past biases.

To that end, we have developed a modeling process where we build a dashboard and talk through it with clients.