Signal, Audit and Logging – Introduction

Matt Konda

At clients, we work to make sure the best information is available to:

  1. Debug an application
  2. Track what happens in an application
  3. Produce security signal for monitoring

Often, developers or security folks think of these as overlapping.  We hear:  “We’re already using SLF4J over Log4J – it seems really redundant to do anything else.”

In practice, we believe the information is different and needs to be stored and reviewed in different ways by different people.  That’s why we build libraries to help integrate these things into applications easily – each as a first class piece of information.  As we examine each in further detail, we’ll call out the technology, audience involved and typical content.

Logging

Let’s start with logging because every developer knows about logging, right?  We work with some companies that log every request that they process.  That seems like a lot and should start to trigger alarm bells about what information lives in the logs – but let’s not be mad at logging.  For the most part, there are established solutions for doing it.  The logs need to get aggregated or centralized somewhere and then we can try to see what happened.

We would be remiss here not to point out that it is really important to keep sensitive data out of logs.  We’ve seen everything from card numbers to passwords to reset tokens to session ids …

But the point is, there isn’t anything wrong with a little log.debug("XYZ") or log.warn("Data is stale").  From a maintenance and debugging perspective, this information is valuable – generally to operations.
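
As a concrete (and hedged) sketch in Ruby, here is one way to scrub obvious secrets before a message ever reaches the log.  The pattern list and names here are illustrative only – a real deployment needs a vetted, application-specific list:

```ruby
require 'logger'
require 'stringio'

# Illustrative patterns only -- not an exhaustive or production-ready list.
SENSITIVE_PATTERNS = [
  /\b\d{13,16}\b/,    # possible card numbers
  /password=\S+/i,    # credentials leaked into messages
  /session_id=\S+/i   # session identifiers
].freeze

# Scrub a message before it is written anywhere.
def redact(message)
  SENSITIVE_PATTERNS.reduce(message.to_s) { |m, re| m.gsub(re, '[REDACTED]') }
end

buffer = StringIO.new
logger = Logger.new(buffer)
logger.formatter = proc { |severity, _time, _prog, msg| "#{severity} #{redact(msg)}\n" }

logger.warn('Data is stale')                   # passes through untouched
logger.info('login attempt password=hunter2')  # the secret never hits the log
```

Centralizing redaction in the formatter means individual log calls stay as simple as the log.debug above.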

Technology:  Typically file based, then aggregated.  Need text search.  High volume.  Retained for relatively short periods of time (weeks).

Audience:  Developers, Operations

Content:  Freeform – anything a developer might think is useful.

Audit

Some applications explicitly need to be able to produce an audit record for the objects they manage.  This might be who created an object, when it changed and how – and at whose direction.  It might even be who accessed the data.  Consider the Stripe interface, where your secret key is obscured and you have to take an extra action to reveal it.  They almost certainly audit that action so they know who saw the secret and when.

Technically, you could write audit messages to logs.  This results in tedious work getting the detail back out of the logs and in any system where logs are not smoothly aggregated or can’t be managed at scale, this approach falls down.  Furthermore, someone looking for the messages needs to sift through lots of unrelated data.

A deeper issue is that if you want to produce a true audit record, like for a partner or a customer or an auditor, you can’t just give them all your dev logs!  We want to be able to produce a report tailored for the question they are asking and containing only data for the users they should be able to ask about.  Also, audit records need to be stored for a lot longer than log messages.

Technology:  Queryable interface, centralized long term storage, retained “for a long time”

Audience:  Compliance, Partners, Auditors

Content:  Specific object reads and changes (ideally with before / after visibility) associated to users & time.

Two deeper notes here:

  1. There is a lot more that you can do with structured auditing to be proactive, but that is Aaron Bedra magic so we’ll leave that to him to describe in some venue of his choosing.
  2. Audit information is richer and more valuable the closer it is to actions.  So we discourage general filters that indicate which endpoint got called, for example, because it can’t provide the rich context that a deeper integration can.
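
To make the idea of audit as a first class record concrete, here is a minimal sketch (class and field names are hypothetical, not one of our actual libraries): each record ties an actor and action to an object, with before/after state and a timestamp, and reports are filtered to just the object being asked about.

```ruby
# Hypothetical structured audit record -- names are illustrative.
AuditEvent = Struct.new(:actor, :action, :object_type, :object_id,
                        :before, :after, :at, keyword_init: true)

class AuditTrail
  def initialize
    @events = []   # in practice: centralized, queryable, long-term storage
  end

  def record(actor:, action:, object_type:, object_id:, before: nil, after: nil)
    @events << AuditEvent.new(actor: actor, action: action,
                              object_type: object_type, object_id: object_id,
                              before: before, after: after, at: Time.now.utc)
  end

  # A tailored report: only events for the object being asked about.
  def report(object_type:, object_id:)
    @events.select { |e| e.object_type == object_type && e.object_id == object_id }
  end
end
```

Reading a Stripe-style secret becomes `trail.record(actor: 'alice', action: 'read', object_type: 'api_secret', object_id: 7)`, and an auditor’s question becomes a `report` call rather than a grep through dev logs.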

Signal

When I say signal, I really mean security signal.  As in, the opposite of noise.  Let’s face it, most of what is in logs is noise.  Even cutting edge technology built to collect and analyze logs produces a ton of noise.  When we get into the application and signal specific events there, we can break out of the noise and give security monitoring teams lots of rich data.

For example, we may want to know about failed logins.  A stream of failed logins looks bad.  A similar stream followed by a successful login looks worse.  (Exercise for reader)  Either way, this is information the security team probably doesn’t see right now.  Go a step deeper – what if input validation fails?  What if someone tries to do something they shouldn’t have permission to do?  Should security get notified?  Obviously the goal would be to take some defensive actions autonomously and in systems we’ve worked on, this works best when you can capture the actual events you care the most about.  Where can you do that?  In the application.
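
A minimal sketch of that failed-login example (the threshold, window and event names are illustrative; a real integration would push events to syslog or the SIEM rather than a block):

```ruby
# Sketch: track recent failed logins per user and emit security signal.
class LoginSignal
  WINDOW    = 300   # seconds -- illustrative value
  THRESHOLD = 5     # failures within the window -- illustrative value

  def initialize(&emitter)
    @failures = Hash.new { |h, k| h[k] = [] }
    @emitter  = emitter   # in production: push to syslog / the SIEM
  end

  def failed(user, at: Time.now.to_i)
    @failures[user] << at
    @failures[user].reject! { |t| t < at - WINDOW }   # drop stale failures
    @emitter.call(type: 'auth.failed_burst', user: user) if @failures[user].size >= THRESHOLD
  end

  # A burst of failures followed by a success looks worse -- signal it.
  def succeeded(user, at: Time.now.to_i)
    recent = @failures[user].count { |t| t >= at - WINDOW }
    @emitter.call(type: 'auth.success_after_failures', user: user) if recent >= THRESHOLD
  end
end
```

Because the check lives in the application, the event carries the user and the context, not just a raw log line for someone to correlate later.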

Another key thing with Signal is that it needs to go to the right place.  Often security operations teams are using their own SIEM that is different from the log collector used by developers.  That is smart.  They are optimized for different things.  But we need to help developers get the security events to the SIEM.

Technology:  Push signal to syslog + SIEM, ideally not retained for more than weeks but aggregated and processed for future context.

Audience:  Security Operations Team (The Watchers), Automated Security Response

Content:  Specific security events only.

Our Conclusion

At companies that have the capability and resources (say, compliance and security monitoring teams), separating these application-generated log streams has value because they are used by different people, for different things, in different tools.

We may circle back in the future with another post about our libraries for doing these things and some of the more extended benefits or specific examples of data to track.  Let us know if you are interested!

Automate All The Things

Matt Konda

Today I gave a talk at a company’s internal security conference about automation.  The slides are on speakerdeck.  A video is on vimeo.

The point of the talk was threefold:

  1. Explain where automation works well and examples of where we use it with OWASP Glue
  2. Explain newish cool automation like cloud analysis and pre-audit preparation
  3. Talk about how really, automation can only get us so far because we need the interaction and communication to fix things

I’d be interested to hear feedback!

The 10 OWASP Commandments

Matt Konda

Here at Jemurai, we have at least a few Hamilton fans.  OK, I might be the biggest … but I’m definitely not alone.

At our quarterly meeting in early April, we were talking about our window of opportunity and “not throwing away our shot”, and somehow we started talking about “The Ten Duel Commandments” song and how cool it would be to do a version of it for the OWASP Top 10.

No more than a few days later, one of our key contributors, Corregan Brown, had written lyrics.  A week later we had an audio version.  Now here’s a video to back it up.  All written and produced by Corregan.  I enjoy it because it is factual, educational, clever and fun.  Thanks, Corregan!

Of course, this is just an artistic rendition to draw attention to the great work OWASP and the Top 10 project team have done.

Ten OWASP Commandments from Jemurai on Vimeo.

What is Security Engineering?

Matt Konda

Security Engineer is an interesting title.  Across our customers, it has different meanings to different people.  At one end of the spectrum, it is a synonym for a security analyst, which we think of as a skilled resource focused on a very specific portion of security – maybe monitoring the SIEM, maybe running static analysis, maybe feet on the ground doing vulnerability management.  At the other end of the spectrum, security engineering is software engineering around security related features.

If you know Jemurai well, you know we focus at the engineering end of the spectrum.  We’re developers at heart, working on security.  We’ve increasingly engaged in projects where writing code to do security has been foundational to the role we’re in.

As we have grown, we have started to extract some of the most common of these engineering tasks – things like:

  1. Improving application security signal
  2. Managing secrets in Vault
  3. Broadening input validation
  4. Data protection in big data environments

Clients have appreciated not just advice or gap analysis, but implementation help doing this work.  It turns out to be extremely rewarding as well.

Glue Update

Matt Konda

There have been several recent improvements with Glue.  It’s been awesome to have more people committing to the project and adding in different ways.

One is related to ZAP integration, which is finally getting more of the attention it needs.  Another is related to reporting to JIRA.  Still another is a way to fail builds only on certain thresholds of errors.  We have also been working on integrations for Contrast and Burp.  We’ve added a more representative Jenkins Build Pipeline integration example.

We added support for finding high-entropy strings (likely hardcoded secrets) via TruffleHog.

What would you like to see in Glue?  Where do you think we need to be to get to a credible 1.0?

Software Security Insurance

Matt Konda

Background

Last week a well established application security company (that I respect) announced the availability of a $1,000,000 insurance policy covering breach related costs for applications it performs security source code review on.  I assume that the idea is that the review has more value if it has some financial assurance behind it.  Some folks who are cornerstones of the application security community, like Jeremiah Grossman, voiced strong support.

On the one hand, I think this is innovative.  As a consumer or business partner, I would like to know that a vendor is going to stand behind their work.  This type of guarantee suggests that.

On the other hand, I think this is an untenable and even negative approach in the real world for software security.  There are a few reasons.  I’ll lay them out in this post.

Double Down Reaction

Some online reaction (also from people I really respect) suggested that the software companies themselves should provide such quality assurance.  Why should the security vendor offer this when the software company should be standing behind its own product to begin with?

This also resonates.  Let’s just build secure software in the first place and stand by our products, right?

Developers

But as a developer (or boutique shop delivering a whole system), I would never sign a contract that would hold me responsible for security issues arising from the software I was building.  The reasons are myriad:

  1. On any non trivial project, there are numerous developers with different skill levels and training
  2. Tools don’t actually work to catch many classes of security issue, let alone all the cases within the classes they do catch
  3. I’m using libraries and I don’t have time to read the code or even the EULA’s for all the code
  4. I’m using 3rd party services I can’t control
  5. I’m using compute and storage I can’t control
  6. I don’t get to interview IT, Support and other people that are going to interact with the system
  7. Chances are strong that we’ve not funded all the infrastructure indicated
  8. My window on the project is almost always limited in time or to parts of the system
  9. There is no way we can get contracts in place that express the details over who is liable for what within a large complex system

Here’s another big one:  there are types of vulnerabilities even the security ninjas don’t know about yet.  As of what point in time do I get a free pass because nobody knew XYZ wasn’t secure yet?  It’s all a big moving target.

Believe it or not, even though I’m questioning this, I’m a developer that is highly committed to security.  I’m going to do my best.  I’m never going to knowingly introduce an issue.  I’m going to fight to do the right thing.  But my job is already hard.  Like I said back in Builders Vs. Breakers at AppSec in Austin in 2012:  Nobody is giving me a bonus for a secure system.  OK, hardly anybody.

The Cynic

The cynic in me believes that there will be a huge increase in CyberInsurance and that this whole offer is self-serving.  We can offer you an insurance assessment, and oh, by the way, conveniently, we sell insurance too.

This is in a day when I pay an extra fee to add a reasonable warranty to electronics at any major big box store.  I’m not sure quality (or insurance, but I’ll get to that next) works the way people expect it to.  Not to mention, most software I use still has bugs in it.  We’re not even close to being able to hold software firms accountable for quality, let alone security.  Most contracts I see use language like “professional and workmanlike” to describe the level of quality.  I realize lawyers think this sets a standard, but I’m not sure there is any broadly shared idea of what that is in the software industry.

Insurance and Accountability

Sometimes with things like this, the devil is in the details.

I’ve seen well insured neighbors lose their house to a fire and fight tooth and nail to get some portion of the costs to rebuild back.  That is a relatively well understood market where the cost of the house and of materials to rebuild can be estimated fairly easily.  Furthermore, legal proceedings to resolve differences are probably deterministic at this point because there are precedents.

If you are buying CyberInsurance, did you expose every ugly truth to the insurer during their assessment?  If not, will those become issues that complicate policy review later?  With the general level of fantasy I see in the industry around standards and compliance, I’m skeptical.

Ultimately, insurance exists because the high cost to an individual can be spread across a much larger group and the companies issuing policies generally understand the chances for different events – and they make money.

Do we think the Cybersecurity market is stable enough to make really safe bets?  I’m not sure I do.

No Choices

Another concern is that most consumers don’t really think of themselves as having choices related to security.  My banking app may not be secure but I can’t choose another one.  Most social apps are used without any clear cost structure.  I can choose to install a game or desktop tool or use a SaaS based CRM but how am I really supposed to reason about security for any of these choices?

Take this to a logical conclusion and it would seem like only the biggest and richest companies will be able to build secure apps.  Hat tip to Wendy Nather for her article about living below the security poverty line.  I’m worried that the innovation in the system will be limited to small edgy applications.

Regulations

I’m not going to take a stand one way or the other on regulation in general.  Whether they are good or bad always depends.  That said, in areas that are highly complex or change quickly, my observation is that regulations are hard to follow and lose value very fast.  I believe that software security is just such a domain.  It is so hard to reason about that even security experts and developers usually only know a small part of it.  We lack tools to assess in any kind of consistent or meaningful way.

Alternative:  Markets

I’m sitting in an airport as I write this.  My phone, the plane, the food … all represent healthy businesses that strive to keep me coming back because they know that they won’t have consumers if they mess up.  Sure, they may have insurance, but they’re there to make money, and people can reason about how what they pay translates into quality or security.  The better food costs more.  Simple as that.

When it comes to software, or IT in general, rather than regulate or insure, I would argue that we need to find ways to make the difference in security visible so that people buying can reason about it.  Then it can factor into prices and businesses can rightly prioritize it.

Suppose we invested in developers and training to address this issue and genuinely worked to build a secure system in the first place?  When the market rewards that, we’ll see it.

Glue 0.9.3

Matt Konda

Introduction

At Jemurai, we contribute extensively to OWASP Glue and use it on some of our projects where it makes sense to tie together automation around security.  We kept seeing the same types of integration challenges and found that it was useful to have a common starting point to solve them.  It is far from perfect, and we would refer people to alternatives like ThreadFix and OWTF.

What is Glue?

Glue is basically a Ruby gem (library) that knows how to run a variety of security tools, normalize the output to a set structure and then push the output to known useful places like Jira.  We package Glue in a docker image to try to make it easy to set up all the different important moving parts (e.g. Java, Python, Ruby and tools).  You can get and run Glue from docker as easily as:

docker run owasp/glue:0.9.3

Or, for a more helpful example, we can run brakeman and get the output as follows:

docker run --rm owasp/glue:0.9.3 -t brakeman https://github.com/Jemurai/triage.git

Architecture

The idea behind Glue is to be able to process different types of files via Mounters, then to analyze using different Tasks, filter with different Filters and report with different Reporters (CSV, Jira, Pivotal, etc.).  Ultimately, there is a concept of stages that can be easily extended.  The reason I’m writing the post today is because I wanted to add a Bandit task, and it was so easy that all I had to do was add this one file:  https://github.com/OWASP/glue/blob/master/lib/glue/tasks/bandit.rb.

When I was done, you could run bandit from the Glue docker image and push results to Jira or anywhere else:

docker run --rm owasp/glue:0.9.3 -t bandit https://github.com/humphd/have-fun-with-machine-learning.git

The following diagram illustrates the stages of the pipeline of functions that Glue performs.
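
The same flow can also be sketched in a few lines of Ruby.  This is a deliberate simplification for illustration – the real stage APIs live in the owasp/glue repository, not here:

```ruby
# Simplified illustration of Glue's staged flow -- not the actual internals.
class Pipeline
  def initialize(mounter:, tasks:, filters:, reporters:)
    @mounter, @tasks, @filters, @reporters = mounter, tasks, filters, reporters
  end

  def run(target)
    workspace = @mounter.call(target)                      # fetch / unpack the code
    findings  = @tasks.flat_map { |t| t.call(workspace) }  # run each security tool
    findings  = @filters.reduce(findings) { |f, flt| flt.call(f) }
    @reporters.each { |r| r.call(findings) }               # CSV, Jira, Pivotal, ...
    findings
  end
end
```

Seen this way, adding a new tool like bandit amounts to adding one more entry to the list of tasks, which is why a single new file was enough.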

Check out Glue or reach out if you want to talk about some of the common challenges in security automation.

AppSec Qualifications

Matt Konda

At Jemurai, we often find ourselves in situations where a company wants to build their own application security program but doesn’t really know how.  That’s a common and very understandable problem given the trends in the industry (increasing focus on app security) and the inherent complexity of doing application security well.  We take great pride teaching and coaching organizations such as these to build successful programs.  Inevitably there comes a point where they want to hire someone to “run AppSec.”  Often, we’ll be asked for feedback on resumes or about candidates.  This happens often enough that I wanted to take a minute and write down some of the things we’ve learned and how we approach situations such as these.

Our Point of View

We fundamentally believe that to succeed with application security, a team needs to have development & SDLC expertise, excellent communication skills and security knowledge.  Tools are a part of the package, but only a part we use to help save time so we can focus on the hard problems as a team of people.  The number of times we have seen “last mile” problems – where issues are known but can’t be fixed for organizational reasons – is too many to dismiss.  Emphasize relationships, communication and the human element of making security fixes happen, and the tools part will come along easily.  Starting with expert developers who can win the respect of the technical team while engaging in a non-invasive way is our goal.

Internal

Usually, one of the first questions I’ll ask in this situation is if there is anyone internal that would be a good fit.  Most medium to large development groups have a few developers that are proven but might be looking for their next thing.  Giving them a way to pivot into application security can be a great way to leverage their organizational knowledge, domain expertise and familiarity with the current process.  If they’ve been building things, they almost certainly understand the SDLC and main programming languages in use.  If they understand the domain, they can communicate with developers and business stakeholders.  Provided that they have the disposition to work with a variety of teams and communicate constructively, this is the kind of person that we can coach into being a core contributor for an AppSec team.

Gotchas

Not surprisingly, we have also made mistakes and seen things go sour while building a program.  The following are some things to watch out for:

  • If we build around someone that is not experienced enough, developers will run circles around them.  This could be technical or process (eg. JIRA) knowledge.
  • If we don’t emphasize both written and verbal communication, we may end up with a technically qualified candidate that can’t effectively build a program.
  • Empathy is foundational to organizational change.  Weaving security into a development process can’t be done without meeting developers part way.

Questions

In order to assess candidates, we pulled together some interesting questions that reflect our point of view more concretely.  Of course it is imperfect, but we’re sharing it here because we think it is interesting.

Internal question:  Do we want the person to be able to get their hands dirty?  Our answer would generally be “yes”.

General questions:

  • Any mention of OWASP?  SANS?
  • Any experience with AppSec Tools?
    • Pick your names of popular tools in the industry.  As we said, tools are secondary, but a total lack of exposure to them can be telling.
  • Any mention of programming languages on their resume?
  • Any product development experience?  Does it sound like they grew out of that?
  • Does the candidate have recent experience working with developers?  Do they seem to enjoy that?
  • Does the candidate know about any security standards?  ISO, NIST, Top 10, ASVS.
  • SDLC Experience?
    • Agile / Scrum / JIRA?
    • Where can security fit into an SDLC?
      • Looking for an answer that suggests familiarity with SDLC and emphasizes aspects OTHER than at the end during waterfall testing.
  • Can the candidate write code?
    • What is their language of choice?
    • IDE?
    • What have they built?
    • Any open source contributions?
  • How would they do code review?
    • Looking for some strategy for digesting code and following logical paths, eg. data flow, authorization, input validation, etc.
    • What would they do with a finding?  How have they interacted with other teams in the past around issues?
      • Looking for a discussion/relationship based approach – not a cold handoff via a spreadsheet or tool.
  • Can the candidate talk about architecture?
    • Is any of their experience relevant to web development?
    • Does the candidate know about CI/CD?  Cloud?  Mobile?

Predictions Sure To Go Wrong for 2017

Matt Konda

I don’t have much time to listen to Sports Radio anymore, but I used to love to listen to Mike & Mike on ESPN Radio.  They had a segment called Predictions, Sure to Go Wrong which was clearly their way of having fun making predictions while making fun of themselves and admitting they really had a strong likelihood of being wrong.  In that spirit, I offer these predictions for 2017.

Easy Predictions

Ransomware will continue to explode and countermeasures will evolve.

Phishing and Social attacks will continue to be a common and easy attack vector.

Vendors will continue to sell “Security in a Box” ™ despite the fact that this hasn’t worked for years.  People will continue to buy “Security in a Box” ™ even though they know it doesn’t work well because they don’t have any other options.

Technical debt will continue to grow and realizations about the scope of technical debt will explode.

Security leaders will continue to be underfunded not only because of the asymmetric nature of security but also because they will fail to own up to planning for the wrong adversary for the last few years.  Even substantial increases in budget (eg. 25% increase) will be a pittance compared to what is needed.

Lots of household name companies will get hacked.  Security will continue to be visible in geopolitical sphere.

Harder Predictions

Cloud providers – both at the platform and the security level – will continue to innovate and be able to provide some of the best security solutions available.  They already provide identity, WAF, key management, logging and network controls; automated monitoring and platform level predictive algorithms will advance and become more accessible to common users.

Efforts to build warranties will fail.  The idea of accountability for software vulnerabilities is well founded.  It’s just that software development is so complicated that a clear line of responsibility seems almost impossible to establish.  In cases where it might be, software firms I know would never sign on because they can’t control each and every developer to a level where they can absorb the inevitable breach.

Industry Wide

There will be active growth and consolidation in events, communities and vendors.

There will be emerging certifications for developers around security.

There will be broad training for people to get into security.

Vision

Companies will see the need for engineering work specific to security.  Things like the following will be increasingly interesting:

  • Authentication service
  • Authorization service
  • Managing secrets
  • Security automation
  • Application level signal for logs
  • Frameworks for mobile infrastructure

Hiring in 2017

Matt Konda

In 2017, we will be hiring both governance associates and security engineers / application security specialists.

Governance

Our governance team, led by Rocio Baeza, is looking for people to assist with risk registers, policy, vendor management, and audit preparation work.

Application Security | Security Engineers

The technical consulting team is looking for software developers with security interest or a deep interest in security to help us with projects around security automation, building security components, etc.  For additional detail or to apply, see our job listing.