Popular Media Coverage of Software and Formal Methods

Matt Konda

It is interesting … in the wake of Equifax and other recent news, The Atlantic has published several articles about software.

I say it is interesting because I am completely torn about both of them.  On the one hand, they are correct.  The Equifax breach should not really be a surprise, and the fact that there are coding errors in any system of significant size is something most software developers or security professionals would accept without argument.

On the other hand, complacency or acceptance is the last thing that I would advocate for developers, consumers or companies after the Equifax breach.  I’ve already written about that here.

Furthermore, while formal methods present an interesting direction for software verification, in practice they are limited to very specific use cases.  I’ve never seen them employed professionally for any widely used application.  That doesn’t mean they aren’t or couldn’t be, but if I haven’t seen it, it’s probably not real or accessible for common developers yet.

An interesting side effect of these articles being in The Atlantic is that people who wouldn’t usually ask about these things are asking.  I’ve heard about each of these articles from numerous people at clients and partners.  I suppose that is a benefit of having the discussion – provided people have the attention span to continue the discussion.

The “Saving the World From Code” article also included a general quote which I think probably should have been attributed to Marc Andreessen in The Wall Street Journal in 2011:

It’s been said that software is “eating the world.”

The fact that it is not attributed makes me wonder just a bit about the context the article was written from.  One thing I can’t argue with is the substance of that quote, which again was from 2011.  I would perhaps add to it that software is flawed everywhere.  I just don’t buy that formal systems or rigorous modeling are a realistic near-term solution for that.  Many of our clients are adopting new languages or technology – sometimes with more security issues – even as we work to secure their systems.  The idea of a 4GL, which has been around for almost my whole professional career, where we can assemble a program in an increasingly sophisticated IDE with visual blocks like the hacking scene in the movie Swordfish, seems unachievable in practice.  If anything, I prefer simpler text editors than ever before.

Ultimately, there is a lot that we can do to secure our systems.  Things like threat modeling to identify and then isolate scope, actively working on architecture, building common reliable blocks, teaching developers, building cultures that value security, using tools and smarts to think about scenarios, teaching practices that encourage security to be a first class part of the SDLC … all of these are real things people in the real world are doing to make software safer.  I doubt there is a silver bullet that somehow avoids the people understanding the problem – we have to accept that as a cost or accept the insecurity of the software we use.  I guess that’s why people hire us to help them secure their software.

Incubator: Canary Data

Matt Konda

Incubator

At Jemurai, we have started incubating products.  We love security consulting and the engineering we do there, but there is something amazing about building a product.  In particular, I constantly crave the experience of pushing the limit and trying something new and a little different.  I’m even embracing marketing and failing fast.  So each month, we take an idea out of our product backlog of ideas and try pushing ourselves with it a bit.

Last month, we released a simple Security Policy Bundle for $249 that you can download here.  This month, we’re pushing the canary.

Canary in the Coal Mine

What is the canary in the coal mine all about anyway?  Well, miners used to take a canary with them into the mine so that if carbon monoxide levels rose enough to be dangerous, they would know.  The canary would die and they would hopefully get out before the CO caused problems for them too.

In short, the canary is an early warning signal.

How Does Canary Data Work?

The way we envision canary data working is that we provide known bad data.  Sounds silly, right?  Except that we track it and know who we gave it to, when, and for which of their environments.  Then we search for that known bad (canary) data in increasingly sophisticated ways, and when we find it, it is a strong indication that a client has had a breach (of any kind!) at a certain point in time, in a certain location: an application, a part of their network, a cloud environment, etc.

By tracing which canary data shows up, we can not only notify clients early of potential issues but also pinpoint where and which parts of their operations may have issues.  It’s an early warning signal.
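As a minimal sketch of the idea (in Python, with entirely hypothetical names and structure – this is not our actual implementation), issuing a canary record to a specific client and environment, then later checking observed data against the registry, might look like this:

```python
import secrets
from datetime import datetime, timezone

class CanaryRegistry:
    """Tracks which canary values were issued to which client/environment."""

    def __init__(self):
        # token -> (client, environment, issued_at)
        self._issued = {}

    def issue(self, client, environment):
        """Generate a unique canary value and record its provenance."""
        token = secrets.token_hex(16)
        self._issued[token] = (client, environment, datetime.now(timezone.utc))
        # Embed the token in a plausible-looking fake record, e.g. an email.
        return {"email": f"user.{token[:12]}@example.com", "token": token}

    def check(self, observed_value):
        """If observed data contains a known canary, return its provenance."""
        for token, provenance in self._issued.items():
            if token[:12] in observed_value:
                return provenance
        return None

registry = CanaryRegistry()
record = registry.issue("acme-corp", "staging")
# Later, data found in the wild is scanned for planted values:
hit = registry.check(record["email"])
```

Because each value is unique per client and environment, a single hit is enough to tell both who was breached and roughly where the data came from.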

Input?

As with any “incubator” project, we have a lot of fresh ideas about how it could work, but it will have to be tested in the wild – so we’re interested in input, or in anyone who would like to help us test it in the realz.  Contact me to talk further.

Software Security Insurance

Matt Konda

Background

Last week a well-established application security company (that I respect) announced a $1,000,000 insurance policy covering breach-related costs for applications it provides security source code review for.  I assume that the idea is that the review has more value if it has some financial assurance behind it.  Some folks who are cornerstones of the application security community, like Jeremiah Grossman, voiced strong support.

On the one hand, I think this is innovative.  As a consumer or business partner, I would like to know that a vendor is going to stand behind their work.  This type of guarantee suggests that.

On the other hand, I think this is an untenable and even negative approach in the real world for software security.  There are a few reasons.  I’ll lay them out in this post.

Double Down Reaction

Some online reaction (also from people I really respect) suggested that the software companies themselves should provide such quality assurance.  Why should the security vendor offer this when the software company should be doing so to begin with?

This also resonates.  Let’s just build secure software in the first place and stand by our products, right?

Developers

But as a developer (or boutique shop delivering a whole system), I would never sign a contract that would hold me responsible for security issues arising from the software I was building.  The reasons are myriad:

  1. On any non-trivial project, there are numerous developers with different skill levels and training
  2. Tools don’t actually work to catch many classes of security issue, let alone all the cases within the classes they do catch
  3. I’m using libraries and I don’t have time to read the code or even the EULAs for all the code
  4. I’m using 3rd party services I can’t control
  5. I’m using compute and storage I can’t control
  6. I don’t get to interview IT, Support and other people that are going to interact with the system
  7. Chances are strong that we’ve not funded all the infrastructure indicated
  8. My window on the project is almost always limited in time or to parts of the system
  9. There is no way we can get contracts in place that express the details over who is liable for what within a large complex system

Here’s another big one:  there are types of vulnerabilities even the security ninjas don’t know about yet.  As of what point in time do I get a free pass because nobody knew XYZ wasn’t secure yet?  It’s all a big moving target.

Believe it or not, even though I’m questioning this, I’m a developer who is highly committed to security.  I’m going to do my best.  I’m never going to knowingly introduce an issue.  I’m going to fight to do the right thing.  But my job is already hard.  Like I said back in Builders Vs. Breakers at AppSec in Austin in 2012:  Nobody is giving me a bonus for a secure system.  OK, hardly anybody.

The Cynic

The cynic in me believes that there will be a huge increase in CyberInsurance and that this whole offer is self-serving.  We can offer you an insurance assessment, and oh, by the way, conveniently, we have insurance too.

This is in a day when I pay an extra fee to add a reasonable warranty to electronics at any major big box store.  I’m not sure quality guarantees (or insurance, but I’ll get to that next) work the way people expect them to.  Not to mention, most software I use still has bugs in it.  We’re not even close to being able to hold software firms accountable for quality, let alone security.  Most contracts I see use language like “professional and workmanlike” to describe the level of quality.  I realize lawyers think this sets a standard, but I’m not sure there is any broadly shared idea of what that is in the software industry.

Insurance and Accountability

Sometimes with things like this, the devil is in the details.

I’ve seen well insured neighbors lose their house to a fire and fight tooth and nail to get some portion of the costs to rebuild back.  That is a relatively well understood market where the cost of the house and of materials to rebuild can be estimated fairly easily.  Furthermore, legal proceedings to resolve differences are probably deterministic at this point because there are precedents.

If you are buying CyberInsurance, did you expose every ugly truth to the insurer during their assessment?  If not, will those become issues that complicate policy review later?  With the general level of fantasy I see in the industry around standards and compliance, I’m skeptical.

Ultimately, insurance exists because the high cost to an individual can be spread across a much larger group and the companies issuing policies generally understand the chances for different events – and they make money.

Do we think the Cybersecurity market is stable enough to make really safe bets?  I’m not sure I do.

No Choices

Another concern is that most consumers don’t really think of themselves as having choices related to security.  My banking app may not be secure but I can’t choose another one.  Most social apps are used without any clear cost structure.  I can choose to install a game or desktop tool or use a SaaS based CRM but how am I really supposed to reason about security for any of these choices?

Take this to a logical conclusion and it would seem like only the biggest and richest companies will be able to build secure apps.  Hat tip to Wendy Nather for her article about living below the security poverty line.  I’m worried that the innovation in the system, which tends to come from small, edgy applications, will be limited.

Regulations

I’m not going to take a stand one way or the other on regulation in general.  Whether they are good or bad always depends.  That said, in areas that are highly complex or change quickly, my observation is that regulations are hard to follow and lose value very fast.  I believe that software security is just such a domain.  It is so hard to reason about that even security experts and developers usually only know a small part of it.  We lack tools to assess in any kind of consistent or meaningful way.

Alternative:  Markets

I’m sitting in an airport as I write this.  My phone, the plane, the food … all represent healthy businesses that strive to keep me coming back because they know that they won’t have consumers if they mess up.  Sure, they may have insurance, but they’re there to make money, and people can reason about how what they pay translates into quality or security.  The better food costs more.  Simple as that.

When it comes to software, or IT in general, rather than regulate or insure, I would argue that we need to find ways to make the difference in security visible so that people buying can reason about it.  Then it can factor into prices and businesses can rightly prioritize it.

Suppose we invested in developers and training to address this issue and genuinely worked to build a secure system in the first place?  When the market rewards that, we’ll see it.