Security Policies Rebooted

Matt Konda
  Security Policy

Here’s a deep dark secret:  I don’t particularly like security policy.  I don’t always follow policy.  Goodness knows that with the 50-250 page policies I’ve seen, I didn’t even understand the whole policy at a legal level – and if you don’t understand a policy at that level, can you really say you’re following it?  Not to mention when one policy contradicts another.

Even at companies with very robust security programs that include policy, it is very common that I approach developers and they don’t understand their company’s policy either – for example, what data they need to protect.  At a previous employer, we used to tease the folks who worked on PCI as having a “passion for compliance.”  That was not a compliment.  Policy came to sort of feel like a necessary evil at best.

Then I met and started to work with our CISO, Rocio Baeza.  I didn’t know that I’d end up hiring her as an internal policy, governance and risk resource for Jemurai, but I’m lucky I did.  Initially, we did policy because many of our clients that needed technical help also needed policies – some kind of rules to follow.

As we challenged Rocio to “get meta” on the problems with policy the way we try to “get meta” with the technical issues we see, she extended and then surpassed our expectations by developing an approach for Agile Governance.  She implemented policies for clients that were short, to the point, readable and in our collective judgment captured the important things they needed to think about even better than the policy “books” we saw.

Writing policy in layman’s terms, with a focus on simplicity, was something that wasn’t immediately easy to appreciate.  The shorter, simpler policy reads easily and doesn’t feel like it hurts the same way some policies do.  It’s like the old quote from Blaise Pascal:

 “If I had more time, I would have written a shorter letter.”

We worked hard to make it shorter.  Does that mean it doesn’t work?  On the contrary, we think it works even better.  In fact, it works so well that we captured the policy in a more digestible way so that people could get access to it without a whole consulting engagement.  You can now purchase the policy bundle, which includes the core policy, a license and a simple one-page implementation guide, right off our website for less than an hour of a security pro’s time.  Check it out:  https://jemurai.com/product/general-security-policy-bundle/ and let us know what you think.

Incubator: Canary Data

Matt Konda
  Incubator

At Jemurai, we have started incubating products.  We love security consulting and the engineering we do there, but there is something amazing about building a product.  In particular, I constantly crave the experience of pushing the limit and trying something new and a little different.  I’m even embracing marketing and failing fast.  So each month, we take an idea out of our backlog of product ideas and try pushing ourselves with it a bit.

Last month, we released a simple Security Policy Bundle for $249 that you can download here.  This month, we’re pushing the canary.

Canary in the Coal Mine

What is the canary in the coal mine all about anyway?  Well, miners used to take a canary with them into the mine so that if carbon monoxide levels rose enough to be dangerous, they would know.  The canary would die and they would hopefully get out before the CO caused problems for them too.

In short, the canary is an early warning signal.

How Does Canary Data Work?

The way we envision canary data working is that we provide data that is known to be bad.  Sounds silly, right?  Except that we track it and know who we gave it to, when, and for which of their environments.  Then we search for that known-bad (canary) data in increasingly sophisticated ways, and when we find it, it is a strong indication that a client has had a breach (of any kind!) at a certain point in time and in a certain location: an application, a part of their network, their cloud, etc.

By tracing which canary data shows up, we can both notify clients early of potential issues and pinpoint which parts of their operations may have problems.  It’s an early warning signal.
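
To make the idea concrete, here is a minimal sketch (in Python, purely illustrative and not our actual implementation) of planting unique canary values per client and environment, keeping a registry of what was planted where, and matching any value that later turns up in the wild back to its source:

```python
import secrets
import datetime
from dataclasses import dataclass


@dataclass
class Canary:
    """A single planted canary value, traceable to a client and environment."""
    client: str
    environment: str          # e.g. "staging-db", "prod-s3", "crm-export"
    value: str
    planted_at: datetime.datetime


class CanaryRegistry:
    """Tracks which canary values were given to whom, and matches sightings back."""

    def __init__(self) -> None:
        self._by_value: dict[str, Canary] = {}

    def plant(self, client: str, environment: str) -> Canary:
        # A fake but realistic-looking record; a real system would mimic the
        # kind of data it sits alongside (emails, card-like numbers, etc.).
        value = f"canary-{secrets.token_hex(8)}@example.com"
        canary = Canary(client, environment, value,
                        datetime.datetime.now(datetime.timezone.utc))
        self._by_value[value] = canary
        return canary

    def match(self, observed_text: str) -> list[Canary]:
        """Return any planted canaries that appear in text found in the wild."""
        return [c for v, c in self._by_value.items() if v in observed_text]


# Plant a canary in a client's staging database, then later check a
# suspicious dump or paste for planted values.
registry = CanaryRegistry()
planted = registry.plant("acme-corp", "staging-db")

suspicious_dump = f"jdoe@example.com,{planted.value},admin@example.com"
for hit in registry.match(suspicious_dump):
    print(f"Possible breach: {hit.client} / {hit.environment} "
          f"(planted {hit.planted_at:%Y-%m-%d})")
```

The real work is in the “increasingly sophisticated” searching; this only shows the bookkeeping that makes a sighting traceable.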

Input?

As with any “incubator” project, we have a lot of fresh ideas about how it could work, but it will have to be tested in the wild – so we’re interested in input, and in anyone who would like to help us test it in the realz.  Contact me to talk further.

Glue 0.9.4 and Scout2

Matt Konda
  Engineering, OpenSource

Glue

We spend a fair amount of time building and using OWASP Glue to improve security automation at clients.  The idea is generally to make it easy to run tools from CI/CD (e.g. Jenkins) and collect results in JIRA.  In a way, Glue is like ThreadFix or other frameworks that collect results from different tools.  Recently, we thought it would be cool to extend some of what we were doing to AWS.  We have our own scripts we use to examine AWS via APIs, but we realized that Scout2 was probably ahead of us and would be a good place to start.

Scout2

The fine folks at NCC Group wrote and open sourced a tool for inspecting AWS security called Scout2.  You can use it directly, and we recommend it, based on the description here:  https://github.com/nccgroup/Scout2.  It produces an HTML report summarizing the findings.

For most programmers, running Scout2 is easy.  It just requires a little bit of Python setup and an appropriate AWS profile.  So it wasn’t the barrier to entry that made us want to integrate it into Glue so much as the idea that we could take the results and pull them into the workflow (JIRA) that we are using for findings from other tools.  We thought that having an easy way to pull the results together and publish them from Jenkins would be pretty useful.
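
As a rough sketch of that flow, here is what “run Scout2, then push findings toward JIRA” can look like.  Glue itself is Ruby; this is Python purely for illustration, and the Scout2 invocation, the omitted report parsing, and the JIRA project key are assumptions that will vary with your setup:

```python
import subprocess
import requests  # pip install requests


def run_scout2(profile: str) -> None:
    """Run Scout2 against an AWS profile.

    Assumes the Scout2 console script is on the PATH; check the Scout2 README
    for the exact flags in the version you have installed.
    """
    subprocess.run(["Scout2", "--profile", profile], check=True)


def file_in_jira(finding: dict, jira_url: str, auth: tuple) -> None:
    """Create a JIRA issue for one finding via the standard JIRA REST API."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},                  # hypothetical project key
            "summary": f"[Scout2] {finding['title']}",
            "description": finding.get("description", ""),
            "issuetype": {"name": "Bug"},
        }
    }
    response = requests.post(f"{jira_url}/rest/api/2/issue", json=payload, auth=auth)
    response.raise_for_status()


if __name__ == "__main__":
    run_scout2(profile="default")
    # Parsing Scout2's report data files is omitted here; their layout has
    # changed between versions, so check the output of the version you run.
    example_finding = {"title": "S3 bucket is world-readable",
                       "description": "Reported by Scout2 for profile 'default'."}
    file_in_jira(example_finding,
                 "https://jira.example.com",
                 ("ci-bot", "api-token"))
```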

What’s Coming with Glue

Glue has been a fun project that we’ve used opportunistically.  The next set of goals with Glue is to clean it up, improve tests and documentation, and prepare for a legitimate 1.0 release.  At that point, we’ll probably also try to get Glue submitted for Lab status as an OWASP project.

I Don’t Need A Security Policy…Right?

Keely Caldwell
  Security Policy

By: Rocio Baeza

At some point, security policies will become an area that you will need to address in your company. If you are reading this, you are probably rummaging around the internet for security policies. It’s likely that a client or investor is conducting some type of due diligence on your company and you’re looking to give them what they need so you can close the deal. Or maybe you’ve reached the line item in your business plan to tackle security. Regardless of the reason you’re here, we hope to provide you with more information to help you figure out your next step.

Let’s start out by addressing some basic areas:

What is a security policy?

Security policies establish your company’s position on protecting data. If you’re in a regulated industry, this is likely “required” for you to stay in business. If you’re on the cutting edge of a new idea or product, you probably want to make sure that the valuable information you are creating is well-guarded. A security policy should be a document that captures your position on securing the data you process. The intended audience for the document is the employees (and/or contractors) who are helping you run your business.

When do you need a security policy?

The typical security professional will argue that you need a security policy as soon as you start to collect data. In the ideal world, yes, I too would agree with that position. However, let’s be realistic. Creating a company has many moving parts. You need to create an MVP (minimum viable product) before your business is able to generate revenue from customers or raise funding from investors. If you ask us, you need a security policy when your gut tells you that you need to address this.  Some of our customers find out they need a policy when they do their first big deal.

Our philosophy on policy …

We do policy differently than a lot of other security companies.  Many of our bigger customers have huge existing policy sets, written by a legion of consultants, that were actually copied and pasted from previous clients.  The policies don’t fit, and they cost an arm and a leg to develop and maintain.

We aim to make policy simple.  If anyone on the team can’t understand it, it is not serving its purpose.  If it is more than a few pages, it is not serving its purpose because people won’t read it.

Where do you start?

Luckily, you have several options:

Option 1: Continue to run searches on Google for free security templates. Yes, there are many out there. Go on, go ahead and download the endless pages of Word documents. Warning: You may need some eye drops after you’re done reading those documents.

Option 2: Find a big firm that charges a ton of money for their policy templates. It’s our experience that they tend to be heavy, super long, filled with jargon, and as a result will only fit after some heavy-duty customization.

Option 3: Try our policy bundle.  Our team of experts distilled the most important security policy content into the simplest possible document.  We believe you will understand it and be able to apply it out of the gate.

Signal, Audit and Logging – Introduction

Matt Konda
  Engineering

At clients, we work to make sure the best information is available to:

  1. Debug an application
  2. Track what happens in an application
  3. Produce security signal for monitoring

Often, developers or security folks think of these as overlapping.  We hear:  “We’re using Log4J wrapped with SLF4J, it seems really redundant to do anything else.”

In practice, we believe the information is different and needs to be stored and reviewed in different ways by different people.  That’s why we build libraries to help integrate these things into applications easily – each as a first class piece of information.  As we examine each in further detail, we’ll call out the technology, audience involved and typical content.

Logging

Let’s start with logging because every developer knows about logging, right?  We work with some companies that log every request that they process.  That seems like a lot and should start to trigger alarm bells about what information lives in the logs – but let’s not be mad at logging.  For the most part, there are established solutions for doing it.  The logs need to get aggregated or centralized somewhere and then we can try to see what happened.

We would be remiss here not to point out that it is really important to keep sensitive data out of logs.  We’ve seen everything from card numbers to passwords to reset tokens to session ids …

But the point is, there isn’t anything wrong with a little log.debug(“XYZ”); or log.warn(“Data is stale”);.  From a maintenance and debugging perspective, this information is valuable – generally to operations.

Technology:  Typically file based, then aggregated.  Need text search.  High volume.  Retained for relatively short periods of time (weeks).

Audience:  Developers, Operations

Content:  Freeform – anything a developer might think is useful.
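
To make the point about keeping sensitive data out of logs concrete, here is a minimal Python sketch of a logging filter that scrubs a few obviously sensitive values before anything reaches the log file.  The patterns are only examples; the right list depends on your application.

```python
import logging
import re

# Patterns for values we never want in a log line; extend for your own data.
REDACTIONS = [
    (re.compile(r"password=\S+"), "password=[REDACTED]"),
    (re.compile(r"session_id=\S+"), "session_id=[REDACTED]"),
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-PAN]"),   # card-number-like digit runs
]


class RedactingFilter(logging.Filter):
    """Scrub known-sensitive values out of log messages before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern, replacement in REDACTIONS:
            message = pattern.sub(replacement, message)
        record.msg, record.args = message, None
        return True


logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("Data is stale")                               # fine as-is
logger.warning("login failed user=jdoe password=hunter2")   # password is scrubbed
```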

Audit

Some applications explicitly need to be able to produce an audit record for the objects they manage.  This might be who created an object, when it changed and how – at whose direction.  It might even be who accessed the data.  Consider the Stripe interface where they let you access your secret.  The secret is obscured and you have to take an extra action to see it.  Pretty sure they audit that so they know who saw it and when.

Technically, you could write audit messages to logs.  This results in tedious work getting the detail back out of the logs, and in any system where logs are not smoothly aggregated or can’t be managed at scale, this approach falls down.  Furthermore, someone looking for the messages needs to sift through lots of unrelated data.

A deeper issue is that if you want to produce a true audit record, like for a partner or a customer or an auditor, you can’t just give them all your dev logs!  We want to be able to produce a report tailored for the question they are asking and containing only data for the users they should be able to ask about.  Also, audit records need to be stored for a lot longer than log messages.

Technology:  Queryable interface, centralized long term storage, retained “for a long time”

Audience:  Compliance, Partners, Auditors

Content:  Specific object reads and changes (ideally with before / after visibility) associated to users & time.

Two deeper notes here:

  1. There is a lot more that you can do with structured auditing to be proactive, but that is Aaron Bedra magic so we’ll leave that to him to describe in some venue of his choosing.
  2. Audit information is richer and more valuable the closer it is to actions.  So we discourage general filters that merely indicate which endpoint got called, for example, because they can’t provide the rich context that a deeper integration can.
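
As a sketch of what a first-class audit record can look like (illustrative only, not our library), each entry ties a specific object change or access to a user and a timestamp, with before/after values, and lives in its own queryable, long-lived store instead of the dev logs:

```python
import datetime
import json
import sqlite3  # stand-in for whatever centralized, long-term store you use

db = sqlite3.connect("audit.db")
db.execute("""CREATE TABLE IF NOT EXISTS audit (
    at TEXT, actor TEXT, action TEXT, object_type TEXT,
    object_id TEXT, before TEXT, after TEXT)""")


def audit(actor, action, object_type, object_id, before=None, after=None):
    """Record who did what to which object, when, with before/after visibility."""
    db.execute(
        "INSERT INTO audit VALUES (?, ?, ?, ?, ?, ?, ?)",
        (datetime.datetime.now(datetime.timezone.utc).isoformat(),
         actor, action, object_type, object_id,
         json.dumps(before), json.dumps(after)),
    )
    db.commit()


# Called right where the action happens, e.g. a user viewing an API secret
# or an account record being changed.
audit("jdoe", "read", "api_secret", "acct-42")
audit("admin", "update", "account", "acct-42",
      before={"plan": "free"}, after={"plan": "enterprise"})

# Producing a tailored report for an auditor is then a query, not a log grep.
for row in db.execute(
        "SELECT at, actor, action FROM audit WHERE object_id = ?", ("acct-42",)):
    print(row)
```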

Signal

When I say signal, I really mean security signal.  As in, the opposite of noise.  Let’s face it, most of what is in logs is noise.  Even cutting edge technology built to collect and analyze logs produces a ton of noise.  When we get into the application and signal specific events there, we can break out of the noise and give security monitoring teams lots of rich data.

For example, we may want to know about failed logins.  A stream of failed logins looks bad.  A similar stream followed by a successful login looks worse.  (Exercise for the reader.)  Either way, this is information the security team probably doesn’t see right now.  Go a step deeper – what if input validation fails?  What if someone tries to do something they shouldn’t have permission to do?  Should security get notified?  Obviously the goal would be to take some defensive actions autonomously, and in systems we’ve worked on, this works best when you can capture the actual events you care the most about.  Where can you do that?  In the application.

Another key thing with Signal is that it needs to go to the right place.  Often security operations teams are using their own SIEM that is different from the log collector used by developers.  That is smart.  They are optimized for different things.  But we need to help developers get the security events to the SIEM.

Technology:  Push signal to syslog + SIEM, ideally not retained for more than weeks but aggregated and processed for future context.

Audience:  Security Operations Team (The Watchers), Automated Security Response

Content:  Specific security events only.
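
A minimal sketch of emitting security signal from application code, using a dedicated channel pushed toward syslog so the security team’s SIEM can pick it up (the hostname and event names here are assumptions, not any particular SIEM’s API):

```python
import json
import logging
import logging.handlers

# A separate channel for security events, distinct from the normal app logs.
security = logging.getLogger("security.signal")
security.setLevel(logging.INFO)
security.addHandler(
    logging.handlers.SysLogHandler(address=("siem.example.com", 514)))


def signal(event: str, **details) -> None:
    """Emit one structured security event (specific events only, not noise)."""
    security.info(json.dumps({"event": event, **details}))


# Called at the points where the application actually knows something interesting:
signal("login.failed", user="jdoe", source_ip="203.0.113.7")
signal("authz.denied", user="jdoe", action="export_all_customers")
signal("input.validation_failed", field="account_id", value_length=4096)
```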

Our Conclusion

At companies that have the capability and resources (say, they have compliance and security monitoring teams), separating these application-generated streams has value because they are used by different people, for different things, in different tools.

We may circle back in the future with another post about our libraries for doing these things and some of the more extended benefits or specific examples of data to track.  Let us know if you are interested!

Automate All The Things

Matt Konda
  OpenSource

Today I gave a talk at a company’s internal security conference about automation.  The slides are on Speaker Deck.  A video is on Vimeo.

The point of the talk was threefold:

  1. Explain where automation works well and examples of where we use it with OWASP Glue
  2. Explain newish cool automation like cloud analysis and pre-audit preparation
  3. Talk about how automation can really only get us so far, because we need interaction and communication to fix things

I’d be interested to hear feedback!

The 10 OWASP Commandments

Matt Konda
  OpenSource

Here at Jemurai, we have at least a few Hamilton fans.  OK, I might be the biggest … but I’m definitely not alone.

At our quarterly meeting in early April, we were talking about our window of opportunity and “not throwing away our shot”, and somehow we started talking about “The Ten Duel Commandments” song and how cool it would be to do a version of it for the OWASP Top 10.

No more than a few days later, one of our key contributors, Corregan Brown, had written lyrics.  A week later we had an audio version.  Now here’s a video to back it up.  All written and produced by Corregan.  I enjoy it because it is factual, educational, clever and fun.  Thanks, Corregan!

Of course, this is just an artistic rendition to draw attention to the great work OWASP and the Top 10 project team have done.

Ten OWASP Commandments from Jemurai on Vimeo.

What is Security Engineering?

Matt Konda
  Strategy

Security Engineer is an interesting title.  Across our customers, it has different meanings to different people.  At one end of the spectrum, it is a synonym for a security analyst, which we think of as a skilled resource focused on a very specific portion of security – maybe monitoring the SIEM, maybe running static analysis, maybe feet on the ground doing vulnerability management.  At the other end of the spectrum, security engineering is software engineering around security related features.

If you know Jemurai well, you know we focus at the engineering end of the spectrum.  We’re developers at heart, working on security.  We’ve increasingly engaged in projects where writing code to do security has been foundational to the role we’re in.

As we have grown, we have started to extract some of the most common of these engineering tasks – things like:

  1. Improving application security signal
  2. Managing secrets in Vault (see the sketch after this list)
  3. Broadening input validation
  4. Data protection in big data environments
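
For the Vault item, here is a small sketch using the community hvac client; the Vault address, token handling, mount point and path are assumptions for illustration only.

```python
import hvac  # pip install hvac

# Connect to Vault. In a real deployment the token comes from an auth method
# (AppRole, Kubernetes, etc.) or the environment, never a literal in code.
client = hvac.Client(url="https://vault.example.com:8200", token="s.example-token")

# Read a secret from the KV v2 engine; the path and field name are illustrative.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = secret["data"]["data"]["password"]

# The application uses the value at runtime instead of baking it into config
# files or source control.
print("Fetched a database credential of length", len(db_password))
```

The point is less the client library than the pattern: secrets live in Vault, and the application fetches them when it needs them.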

Clients have appreciated not just advice or gap analysis, but implementation help doing this work.  Turns out it is extremely rewarding as well.

Glue Update

Matt Konda
  OpenSource

There have been several recent improvements to Glue.  It’s been awesome to have more people committing to the project and contributing in different ways.

One is related to ZAP integration, which is finally getting more of the attention it needs.  Another is related to reporting to JIRA.  Still another is a way to fail builds only on certain thresholds of errors.  We have also been working on integrations for Contrast and Burp.  We’ve added a more representative Jenkins Build Pipeline integration example.

We added support for finding high-entropy strings, like hardcoded passwords and keys, via TruffleHog.
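
The idea behind that kind of check is that generated secrets (API keys, tokens, machine-generated passwords) have much higher Shannon entropy than ordinary identifiers.  A toy version of the calculation, not TruffleHog’s actual implementation:

```python
import math
from collections import Counter


def shannon_entropy(value: str) -> float:
    """Bits of entropy per character for the string's character distribution."""
    if not value:
        return 0.0
    counts = Counter(value)
    length = len(value)
    return -sum((n / length) * math.log2(n / length) for n in counts.values())


# High-entropy strings (random keys) stand out against ordinary identifiers.
# The 3.5 bits/char threshold is arbitrary and only for illustration.
for candidate in ["staging", "password123",
                  "AKIAIOSFODNN7EXAMPLE",
                  "9f2c7a1e4b8d5f6a0c3e7b2d9a4f8e1c"]:
    score = shannon_entropy(candidate)
    verdict = "likely secret" if score > 3.5 else "probably fine"
    print(f"{candidate!r}: {score:.2f} bits/char -> {verdict}")
```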

What would you like to see in Glue?  Where do you think we need to be to get to a credible 1.0?

Software Security Insurance

Matt Konda
  Strategy

Background

Last week a well established application security company (that I respect) published availability of a $1,000,000 insurance policy covering breach-related costs for applications it provides security source code review for.  I assume that the idea is that the review has more value if it has some financial assurance behind it.  Some folks who are cornerstones of the application security community, like Jeremiah Grossman, voiced strong support.

On the one hand, I think this is innovative.  As a consumer or business partner, I would like to know that a vendor is going to stand behind their work.  This type of guarantee suggests that.

On the other hand, I think this is an untenable and even negative approach in the real world for software security.  There are a few reasons.  I’ll lay them out in this post.

Double Down Reaction

Some online reaction (also from people I really respect) suggested that the software companies themselves should provide such quality assurance.  Why should the security vendor offer this when the software company should be doing it to begin with?

This also resonates.  Let’s just build secure software in the first place and stand by our products, right?

Developers

But as a developer (or boutique shop delivering a whole system), I would never sign a contract that would hold me responsible for security issues arising from the software I was building.  The reasons are myriad:

  1. On any non trivial project, there are numerous developers with different skill levels and training
  2. Tools don’t actually work to catch many classes of security issue, let alone all the cases within the classes they do catch
  3. I’m using libraries and I don’t have time to read the code or even the EULAs for all of them
  4. I’m using 3rd party services I can’t control
  5. I’m using compute and storage I can’t control
  6. I don’t get to interview IT, Support and other people that are going to interact with the system
  7. Chances are strong that we’ve not funded all the infrastructure indicated
  8. My window on the project is almost always limited in time or to parts of the system
  9. There is no way we can get contracts in place that express the details over who is liable for what within a large complex system

Here’s another big one:  there are types of vulnerabilities even the security ninjas don’t know about yet.  As of what point in time do I get a free pass because nobody knew XYZ wasn’t secure yet?  It’s all a big moving target.

Believe it or not, even though I’m questioning this, I’m a developer that is highly committed to security.  I’m going to do my best.  I’m never going to knowingly introduce an issue.  I’m going to fight to do the right thing.  But my job is already hard.  Like I said back in Builders Vs. Breakers at AppSec in Austin in 2012:  Nobody is giving me a bonus for a secure system.  OK, hardly anybody.

The Cynic

The cynic in me believes that there will be a huge increase in CyberInsurance and that this whole offer is self-serving.  We can offer you an insurance assessment, and oh by the way, conveniently, we have insurance too.

This is in a day when I pay an extra fee to add a reasonable warranty to electronics at any major big-box store.  I’m not sure quality (or insurance, but I’ll get to that next) works the way people expect it to.  Not to mention, most software I use still has bugs in it.  We’re not even close to being able to hold software firms accountable for quality, let alone security.  Most contracts I see use language like “Professional and workmanlike” to describe the level of quality.  I realize lawyers think this sets a standard, but I’m not sure there is any broadly shared idea of what that is in the software industry.

Insurance and Accountability

Sometimes with things like this, the devil is in the details.

I’ve seen well insured neighbors lose their house to a fire and fight tooth and nail to get some portion of the costs to rebuild back.  That is a relatively well understood market where the cost of the house and of materials to rebuild can be estimated fairly easily.  Furthermore, legal proceedings to resolve differences are probably deterministic at this point because there are precedents.

If you are buying CyberInsurance, did you expose every ugly truth to the insurer during their assessment?  If not, will those become issues that complicate policy review later?  With the general level of fantasy I see in the industry around standards and compliance, I’m skeptical.

Ultimately, insurance exists because the high cost to an individual can be spread across a much larger group and the companies issuing policies generally understand the chances for different events – and they make money.

Do we think the Cybersecurity market is stable enough to make really safe bets?  I’m not sure I do.

No Choices

Another concern is that most consumers don’t really think of themselves as having choices related to security.  My banking app may not be secure but I can’t choose another one.  Most social apps are used without any clear cost structure.  I can choose to install a game or desktop tool or use a SaaS based CRM but how am I really supposed to reason about security for any of these choices?

Take this to a logical conclusion and it would seem like only the biggest and richest companies will be able to build secure apps.  Hat tip to Wendy Nather for her article about living below the security poverty line.  I’m worried that the innovation in the system will be limited to small, edgy applications.

Regulations

I’m not going to take a stand one way or the other on regulation in general.  Whether they are good or bad always depends.  That said, in areas that are highly complex or change quickly, my observation is that regulations are hard to follow and lose value very fast.  I believe that software security is just such a domain.  It is so hard to reason about that even security experts and developers usually only know a small part of it.  We lack tools to assess in any kind of consistent or meaningful way.

Alternative:  Markets

I’m sitting in an airport as I write this.  My phone, the plane, the food … all represent healthy businesses that strive to keep me coming back because they know that they won’t have consumers if they mess up.  Sure, they may have insurance, but they’re there to make money and people can reason about how what they pay translates into quality or security.  The better food costs more.  Simple as that.

When it comes to software, or IT in general, rather than regulate or insure, I would argue that we need to find ways to make the difference in security visible so that people buying can reason about it.  Then it can factor into prices and businesses can rightly prioritize it.

Suppose we invested in developers and training to address this issue and genuinely worked to build a secure system in the first place?  When the market rewards that, we’ll see it.