Announcing Jemurai Security Automation

Matt Konda

It’s been an exhilarating few weeks.  I had to remind myself to take a breath and blog today.

What’s new-ish is that we have a core team working on a new platform for security automation.  It extends the work we’ve done with Glue, along with ideas drawn from a broad sample of client consulting engagements.  Across all of those engagements, it never felt like any solution was complete or as intuitive as it should have been.  We were always bending rules to live in Jenkins or writing custom scripts to check things that aren’t really that unique.

I’ve tried to always remind my team:  get meta.  What’s the problem behind the problem?  That vuln isn’t the problem.  It’s the SDLC that allows the vuln to be introduced, and not reviewed or noticed, that is the problem.

Frankly, as a team we all started to think that the tooling we’re using for DevOps and security automation is too opinionated.  None of it is designed at its root to be the backbone of a flexible solution.  Usually the existing tooling is pushing you toward a particular vendor’s security tool.

We don’t yet have a rich inventory, accessible through an API, combined with a flexible, abstracted way to work with source code, containers, live systems, and clouds that can be assembled into a better security automation solution.  The opportunity presented itself to start working on exactly that, and as a team we couldn’t be more excited to be building the next generation of tooling.  We’ve got prototypes flying that look at this problem in a whole new way.

If you’re interested in hearing more, are struggling with security automation, or have ideas you want to share to help us, I would personally love to hear from you.  We’re also actively looking for beta testers.  Catch me at matt at the company domain dot com.

Impersonation Failures

Matt Konda

Several times in the last few weeks we have looked at applications that have significant issues with “impersonation” features.  What is impersonation?

an act of pretending to be another person for the purpose of entertainment or fraud.  (From Google)

In practice, impersonation is a feature that is commonly implemented to allow Customer Support to “see what the user sees”.  Typically, it involves the support user effectively logging in as the actual user.  As a feature, it can be invaluable.

It can also go quite wrong. 

We reviewed one application that allowed any user to impersonate any other user.  Only certain users should be able to perform impersonation.  This is an issue of authorization.  Rarely do we want regular users to be able to impersonate internal users.

It is almost a surprise if we see an application that has proper auditing around impersonation and actions during the impersonated session.

Things that should be auditable include:

  • When the impersonation started and who initiated it.
  • What actions were taken on behalf of the user by the impersonating user.
  • When the impersonation stopped.

To extend the point on auditing a bit:  it is generally advisable that logging facilities record both the acting user and the logged-in user.  Imagine that Joe wants to impersonate Matt’s account to help troubleshoot an issue.  All of the actions Joe takes should show changes to Matt’s data, but there should also be a record that Joe is the acting / impersonating user.
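
To make this concrete, here is a minimal sketch in Python of an audit helper that records both identities.  The function and field names (audit, subject_user, acting_user) are illustrative assumptions, not from any particular framework.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("audit")

def audit(action, subject_user, acting_user=None, **details):
    """Record an action against subject_user's data, noting who actually performed it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "subject_user": subject_user,                 # whose data changed
        "acting_user": acting_user or subject_user,   # who really did it
        "impersonated": bool(acting_user) and acting_user != subject_user,
        "details": details,
    }
    logger.info(json.dumps(record))

# Joe impersonates Matt to troubleshoot; both identities end up in the log.
audit("update_billing_address", subject_user="matt", acting_user="joe")
```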

This is also a good reminder that we often don’t need to show sensitive data, even to users themselves.  If we avoid showing specific sensitive data without an extra event (click), we can ensure that during impersonation sessions support personnel don’t see data that they don’t need to.

Another failure mode for impersonation is to have a common account with a “bypass” password, kind of like a shared super user for impersonation.  We’ve recently seen systems that allow users to get a bypass password/session and then do whatever they want, with no audit trail of what they were looking at.  Using a shared user for this means that the audit trail converges on a service account that we can’t tie back to a particular person.  It is always a good exercise to walk through negative abuse case scenarios and ensure that the information needed to make sense of them is being collected.

Let’s think about the risk model for a moment:

  • A malicious internal user wants to collect information from application users
  • A non-malicious internal user helps lots of people, ends up with their data on their own system, and then becomes the victim of phishing or another type of attack
  • A malicious external user can elevate privileges by assuming another identity

Since impersonation is a way to bypass authentication and authorization, it should receive significant scrutiny.  If you have impersonation code, it is worth reviewing it to ensure that only the users you intend to allow can impersonate, and that when they do, their actions are tracked.
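
As a hedged sketch of what that review might enforce, building on the audit helper sketched above:  the user attributes (permissions, is_internal) and session shape here are assumptions for illustration, not a prescription.

```python
class ImpersonationError(Exception):
    pass

def start_impersonation(session, support_user, target_user):
    """Begin an impersonation session, enforcing authorization and auditing it."""
    # Authorization: only explicitly granted support staff may impersonate.
    if "impersonate" not in support_user.permissions:
        raise ImpersonationError("not authorized to impersonate")
    # Regular and support users should rarely be able to impersonate internal users.
    if target_user.is_internal:
        raise ImpersonationError("internal accounts may not be impersonated")
    session["acting_user"] = support_user.id
    session["subject_user"] = target_user.id
    audit("impersonation_started", subject_user=target_user.id,
          acting_user=support_user.id)

def stop_impersonation(session):
    """End the impersonation session and record when it stopped."""
    audit("impersonation_stopped",
          subject_user=session.pop("subject_user"),
          acting_user=session.pop("acting_user"))
```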

Dependency Management for Developers

Matt Konda

I recently got asked about best practices for dependency management by an old colleague.  In response, I wrote the following which I realized might be useful to a broader audience.


So … things like GitHub’s new security notifications have been automated for a long time, via platforms like CodeClimate or via Jenkins internally at dev shops, using tools such as:

  • Retire.js (JavaScript)
  • Bundler Audit (Ruby)
  • Safety Check (Python)
  • Dependency Check (Java)

Generally, we write these checks into a build so that the build fails if there are vulnerabilities in libraries.  We have contributed to an open source project called Glue that makes it easy to run tools such as these and then push the results into JIRA or a CSV for easier integration with normal workflows.
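
As an illustration of the build-breaking step, the sketch below shells out to these tools and fails the build on findings.  It relies on the fact that each tool exits nonzero when it finds vulnerable dependencies; exact command names and flags vary by tool version, so treat them as assumptions to verify.

```python
import subprocess
import sys

# Audit commands per ecosystem; each exits nonzero when it finds vulnerabilities.
# Command names and flags are illustrative and vary by tool version.
CHECKS = {
    "ruby": ["bundle", "audit", "check", "--update"],
    "python": ["safety", "check"],
    "javascript": ["retire"],
}

def run_checks(ecosystems):
    failed = []
    for eco in ecosystems:
        if subprocess.run(CHECKS[eco]).returncode != 0:
            failed.append(eco)
    return failed

if __name__ == "__main__":
    failed = run_checks(sys.argv[1:] or ["python"])
    if failed:
        print("Vulnerable dependencies found in: " + ", ".join(failed))
        sys.exit(1)  # nonzero exit fails the CI build
```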

Note that there are also commercial tools that do this, ranging from Sonatype to BlackDuck to Veracode’s Software Composition Analysis.  I generally recommend starting with an open source tool to prove you can do the process around the tool, and then improving the tool itself.

At a higher level, I pretty much always recommend that companies adopt a tiered response system such as the following:

  1. Critical – This needs to get fixed ASAP, interrupts current development and represents a new branch of whatever is in Prod.  Typical target turnaround is < 24 hours.  Examples of vulnerabilities in this category might be remote code execution in a library – especially if it is weaponized.
  2. Flow – This should get fixed in the next Sprint or a near-term unit of work, within the flow of normal development.  These types of issues might need to get addressed within a week or two.  A typical example is XSS (on non-major sites; major sites treat XSS like an emergency).
  3. Hygiene – These are things that really aren’t severe issues, but if we don’t step back and handle them, bad things could happen.  If we don’t update libraries when minor updates come out, we get behind.  The problem with being far behind (e.g. a 3-year-old jQuery) is that if there is an issue and the best fix is to update to current, the actual work to remediate could involve API changes that require substantial development work.  So philosophically, I think of this as the equivalent of keeping ourselves in a position where we could realistically meet our Critical SLA (say 24 hours) on updating to address any given framework bug.

An important part of this is figuring out how a newly identified issue gets handled in the context of the tiering system.  In other words, if a new issue arises, we want to know who (a team?) gets to determine which tier it should be handled as.  We definitely do not want to be defining the process and figuring this all out while an active major issue looms in the background.
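
One way to avoid deciding under pressure is to encode the routing ahead of time.  This is a hypothetical sketch, not a standard:  the tier names and SLAs mirror the list above, and the inputs (severity, weaponized, internet exposure) are just one plausible set of triage signals.

```python
from datetime import timedelta

# Tier names and target SLAs mirror the tiers described above (illustrative).
TIER_SLA = {
    "critical": timedelta(hours=24),
    "flow": timedelta(weeks=2),
    "hygiene": timedelta(days=90),
}

def triage(severity, weaponized=False, internet_facing=False):
    """Route a newly identified dependency issue to a response tier."""
    if severity == "high" and (weaponized or internet_facing):
        return "critical"
    if severity in ("high", "medium"):
        return "flow"
    return "hygiene"

assert triage("high", weaponized=True) == "critical"  # e.g. weaponized RCE in a library
assert triage("medium") == "flow"                     # e.g. XSS on a non-major site
assert triage("low") == "hygiene"                     # e.g. minor version drift
```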

Of course, with all of this and particularly the hygiene part, we need to be pragmatic and have a way to negotiate with dev teams.  We can weigh the cost of major updates to a system against the cost of keeping it running.  Planned retirement of applications can be the right answer for reducing overall risk.

Ultimately, we want to translate the risk we see with this into terms stakeholders can understand so that they don’t balk at the hygiene work and they are prepared when we need to drop current work to accommodate critical updates.

Using the OWASP Top 10 Properly

Matt Konda

I have gone to great lengths to strictly separate my OWASP activities from my Jemurai activities in an effort to honor the open and non-commercial aspects of OWASP to which I have committed so much volunteer time and energy.

Today I want to cross the streams for a very specific reason, not to promote Jemurai but to stop and think about how some of OWASP’s tools are used in the industry and hopefully prevent damage from misuse.

I want to address perspectives reflected in the following statements, which I’ve heard from a few folks:

I want a tool that is OWASP Compliant

And:

I need a tool to test for the OWASP Top 10

And:

I want an OWASP Report

Or alternatively:

Our tool tests for the OWASP Top 10

OWASP Is Generally Awesome

First of all, let’s ground ourselves.  OWASP provides terrific open resources for application security ranging from the Top 10 to ZAP to ASVS to OpenSAMM to Juice Shop to Cheat Sheets and many more.  The resources are invaluable to all sorts of folks and when used as intended are extremely valuable and awesome.

The Top 10 Is Not Intended To Be Automated

Let me be very clear:  there is no tool that can find the OWASP Top 10.  The following items among the Top 10 can rarely, if ever, be identified by tools.

  • #3:  Sensitive Data Exposure
    • Tools can only find some predefined categories of sensitive data exposure.  In my experience, that is a small subset.  One reason is that what counts as sensitive is contextual to a company and system.  Another is that exposure can mean anything from internal employees seeing data to data not being encrypted.
  • #5:  Broken Access Control
    • Tools can’t generally find this at all.  This is because authorization is custom to a business domain.  A tool can’t know which users are supposed to have access to which things to be able to check.
  • #6:  Security Misconfiguration
    • Tools can find certain known misconfigurations that are always wrong (e.g. old SSL), but things like which subnets should talk to which other subnets aren’t going to be identified by Burp or ZAP.  We build custom tools that can find them, but they are just that:  custom.
  • #10:  Insufficient Logging & Monitoring
    • What does this even mean?  Our team has been delivering custom “security signal” for a while, but this isn’t a binary where you either have it or you don’t.  No company I have ever seen has comprehensive evidence.  There’s no tool you can plug in to immediately “get it”.

Even among the things that can be identified, there is no one tool that can find all of them.

Stepping back, it’s actually a good thing that the Top 10 isn’t easily identified by a tool.  That reflects the thought and expert human opinion that went into it.  It wasn’t just a bunch of canned stuff that got put together.

What The Top 10 is Great For

The Top 10 is a great resource to help frame a conversation among developers, to serve as the basis for training content, and to be thought-provoking for people actively thinking about the issues at hand.  More than 10 items is too much.  These approximate the best ideas we currently have.  Of course there is always room for improvement, but the Top 10 is a great resource when used well.

Please just understand it as a guide, not a strict compliance standard or something a tool can identify.

References

https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

The Importance of Your Inventory

Matt Konda

We work with companies building security programs a lot.  Across all aspects of those programs, “inventory” is a term that seems to have a surprisingly high level of general awareness but a surprisingly low level of common definition.

in·ven·to·ry  /ˈinvənˌtôrē/
noun  1.  a complete list of items such as property, goods in stock, or the contents of a building.  (Google:  https://www.google.com/search?q=define+inventory)
synonyms: list, listing, catalog, record, register, checklist, log, archive

In particular, it is always interesting to ask whether:

  1. All apps from all business units are represented.
  2. The inventory is updated automatically.

Unless the answer to 2 is “Yes”, the answer to 1 is almost always “No”.  If the answer to “Are all apps from all business units represented?” is no, then how can the inventory serve as the cornerstone of your security program?  I’m curious what else you might be using as a cornerstone instead…

Let’s step back – what’s the point of the inventory?  The most important reason to have an inventory might be to make sure that we know what we need to do and what we’ve done.  If you have a list of all of your applications that is updated (ideally automatically) and tracks security activities against those, that is a great starting point.

Many people track this in Excel or some form of spreadsheet.  That isn’t inherently bad, unless copies are spread throughout various teams in different states.  Most security vendors want you to use their tool for your inventory.  The reason is that if they own the main place you go for your TODO list, you’ll be coming back to them a lot.  In what I’ve seen, all of those inventories and dashboards leave a lot to be desired.  You might actually be better off in Excel, where you can keep your vendor blinders off.

Let’s focus a bit on getting an inventory.  We’ve built them by hand, by automatically pulling from GitHub, from a service metadata tier, or from an artifact store, and in other ways.  In larger companies, it is often a combination of the above.  We would generally advise that the inventory be treated as a living dataset that can serve as a reference throughout purchasing decisions.
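
As one example of the automatic route, here is a minimal sketch that pulls an organization’s repositories from the GitHub REST API into a CSV as a starting inventory.  It assumes a token with read access to the org; the fields written out are just a plausible starting set.

```python
import csv
import requests

def github_inventory(org, token):
    """Pull all repositories for an org as a starting application inventory."""
    repos = []
    url = f"https://api.github.com/orgs/{org}/repos?per_page=100"
    headers = {"Authorization": f"token {token}"}
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        repos.extend(resp.json())
        url = resp.links.get("next", {}).get("url")  # follow pagination
    return repos

def write_inventory(repos, path="inventory.csv"):
    """Write a simple inventory CSV that security activities can be tracked against."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "language", "last_push", "archived"])
        for repo in repos:
            writer.writerow([repo["name"], repo["language"],
                             repo["pushed_at"], repo["archived"]])
```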

In future posts we’ll talk about the conversations we need to have to triage an inventory and how we can turn an inventory into a budget and a roadmap.

 

Commercial Software Using Open Source

Matt Konda

Here’s a slightly different spin on the otherwise tired debate over whether “Open Source” or “Closed Source” is more secure!

The topic is inspired by a conversation with a client that is using a whole slew of old open source libraries.  They know they need to update those libraries, but it is very difficult because the libraries come as part of a distribution of a commercial product that they are paying for.  So they buy product XYZ and pay good money for it.  Product XYZ brings in all of these problematic dependencies.  The customer can’t update the libraries because doing so may break the commercial product, and because the commercial product is large and not open source, they can’t see or edit the code to fix issues that arise.  Ultimately, it seems like that’s the vendor’s responsibility.  But that doesn’t mean the vendor sees it that way.

The obvious avenue for recourse is to tie payment to keeping the libraries up to date.  That isn’t standard contract language … yet.

Another way to hedge in this situation is, of course, to avoid commercial platforms that don’t have a strong track record of updating their dependencies.  It would be interesting to see a scorecard or standardized assessment method to keep track of which vendors and which products do well keeping up with updates in their dependencies.  It seems like it might be relatively easy to ascertain …

In this case, we have an organization that went all in on a commercial platform and even has their own code built on top of it.  Now they wonder if they should have built from a lower level base so that they wouldn’t be locked into a platform that they really can’t update.  Interesting design decision, right!?

Tend Your Digital Garden

Matt Konda

Something that is really hard about application security is that it isn’t something you can just point a tool at and be finished at some point in time.  It is always going to take ongoing work.  I like to use the analogy of a garden.  Both the plants in the garden and the conditions around them change no matter what we do.  Maintaining a beautiful garden is a labor of love and an ongoing investment in time.  We could think of our applications in the same way.

Unfortunately, many applications look more like an overgrown garden.  The original intent of many applications tends to get bent, expanded, or even lost as systems evolve.  In some cases, the original beauty and architecture are lost in the complexity and the difficulty of managing the result.

When we think about application security, we are always looking for ways to make it a habit – something that people naturally think about and take care of.  I’d even go so far as to say that tending our security garden needs to be a labor of love.

So what do we do?  There are many layers to these examples that we can learn from:

  • We get tools to help us: clippers, weed whackers, fertilizer, hoses, wheelbarrows, etc.  We learn how to use the tools.
  • We plan to work in the garden periodically.  If we don’t, we know it is going to take more work dedicated to clean up.
  • We plan the garden and take out the plants that aren’t working.
  • We balance our time around different areas.  One wildly overgrown plant can make the whole garden less pleasant.  We know some plants take more work than others.
  • We aren’t afraid to get dirty.  We know it is work.  We’re satisfied when we’re done.

Unfortunately, with software, outside of the development team, it is often difficult to tell whether the garden looks great and is well tended or if it is a bit of a mess…

That’s one of the key reasons Jemurai is built the way it is:  around expert software developers who know good software.  Only very strong developers can look at systems and help make them beautiful, like a well-tended Japanese garden.

 

Top 5.5 AppSec Predictions Sure To Go Wrong

Matt Konda

In keeping with the all too popular industry practice of producing year-end Top 10 lists, at Jemurai we developed a list of the Top 5.5 Application Security Trends for 2018.  It is obviously meant to be a little bit of fun, given the “Top 5.5” title, but we tried to capture what we think are the most important things to keep in mind.

#1.  Continued Framework Level Vulnerabilities

  • Expect to see additional massive breaches related to framework-level vulnerabilities (old and new) that are slow to be identified and patched.
  • Recommendations:
    • Actively stay up to date on libraries
    • Use a mechanism to detect in CI/CD that your libraries are aging
    • Commit to maintenance

#2.  Innovation Applying Artificial Intelligence and Machine Learning to Security

  • Expect to see more threat intelligence, smarter intrusion detection, better malware detection, improved identity – all through these technologies.
  • Recommendations:
    • If you are very mature and have money, look to these tools.
    • If you are not very mature or don’t have money, work on the basics first.
    • If you are a security company, figure out where these fit for your tools.

#3.  Changes to Static Analysis Market

  • Companies will adopt smaller, purpose built static code analysis tools
  • Companies will start developing their own tooling to perform checks in a DevOps fashion, especially for their growing cloud environments.
  • Commercial tools will continue to have high false positive rates, be too slow to include in developer workflows and will work well with only a few programming languages.
  • Recommendations:
    • Think twice before adopting a new static tool.
    • Look at the API and make sure it is usable (REST / JSON).
    • Leverage open tools to get the basics done and prove a process.
    • Teach your developers and ops (DevOps folks) ways to think about security.

#4.  Security Engineering

  • Companies will start to see the value in security libraries for things like:
    • Audit information
    • Application security signal
    • Encryption
    • Honey Data
    • Custom cloud auditing and assurance
  • Recommendations:
    • Look for places where security impacts architecture and consider building reusable components to handle it properly (a sketch of one such component follows below).
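
As a sketch of what one such reusable component might look like, here is a tiny “application security signal” helper.  The event names and logging choices are assumptions for illustration, not a library we are pointing you to.

```python
import json
import logging

signal_log = logging.getLogger("security.signal")

# Security-relevant events worth emitting from application code (illustrative names).
KNOWN_EVENTS = {"login_failed", "password_reset", "authz_denied", "impersonation_started"}

def security_signal(event, user_id=None, **context):
    """Emit a structured security event that monitoring can alert on."""
    if event not in KNOWN_EVENTS:
        raise ValueError("unknown security event: " + event)
    signal_log.warning(json.dumps({"event": event, "user_id": user_id, **context}))

# One line at each security-relevant decision point in application code:
security_signal("authz_denied", user_id="matt", resource="/admin/reports")
```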

#5.  Software for Risk and Security Program Management

  • Just like companies use systems for procurement, recruiting, HR, finance and business flows, companies will start using software to help them manage their risk and security programs.
  • Recommendations:
    • Keep an eye out for these.  Try to identify your best practices and assess if the tools can help keep programs moving.

#5.5  Some Things That Should Not Be Forgotten Will Be Lost

  • Tools are never a panacea but we will increasingly focus on tools.
  • Awesome instructor-led, hands-on training is expensive and hard to find, but worth it.  Computer-based training is widely hated by developers, but it will grow much faster.
  • Authorization is hard and tools don’t find gaps.  No advances will be made.
  • It doesn’t matter what you find, it matters what you fix.  We’ll continue to see a focus on finding problems instead of fixing them.
  • People will reuse passwords.  This will undermine all sorts of other controls but we won’t see substantial change.

Turns Out Policy in Markdown in Github Works!

Matt Konda

I’ve seen policies from lots of companies, big and small.  Generally, I’m a techie engineer, so I don’t love policy.  I’ve also seen a fair number of companies that clearly don’t follow their policy.  And I’ve seen companies that get certifications like SOC2 and ISO that are meaningless because they systematically lie and their auditors (not us, we don’t do auditing) never check lots of basic things we see.  Sometimes the security teams at those companies aren’t lying; they just don’t know the truth about their own company.  I get it:  there are all kinds of reasons we can’t always have nice things.

In response to that, we spent a few years at Jemurai trying to write minimal policies that people could understand and follow.  I even published a blog post last summer about it and we tried selling a minimal policy bundle off of our website.  It seemed like a good idea at the time.  I think the philosophy was generally sound in a pure sense.

The problem is, people use policy as a defense against auditors, and without more explicit direction you can’t say you have controls around a variety of things.  You don’t even know that you need to know the answers to questions about data loss protection or mobile devices on your network.  Inevitably, sooner or later, someone runs up against a SIG Lite, a more exhaustive partner checklist, or some other trigger that forces them to articulate a more complicated policy.

To update our position on this, while staying at arm’s length from auditing and full-on policy work, we developed policies in Markdown and published them to a private Github repo.  They look nice, and everybody can immediately see what the policies are and who changed them when.  We can also track approvals using pull requests.  For smaller tech companies this makes for a simple, more digestible way to get, use, and publish policy.  It keeps policy in a relevant and accessible place.  We can share it with a client’s security point of contact by letting them fork our policy in Github.  They can subscribe to updates and merge our new best practices in as they evolve.  So far, this seems to be a good direction.

 

Your Vulnerability Spreadsheet Says More Than You Think

Matt Konda

More often than I’d care to say, I work on projects where a client has a vulnerability spreadsheet to rule them all.  They’re using the spreadsheet to track all of the open items that were found across all of their projects with different tools and pentests.

One interesting initial observation is that these companies don’t seem to consider this data to be particularly sensitive.  The spreadsheets get mailed around, with different tabs for different apps or organizations, to teams across the company.  Good thing we just told all of our IT and development resources where the known problems are …

Going a little deeper, I can often tell a lot about a company based on what I see in the spreadsheet.  Maybe a simple Apache web server patch hasn’t been applied in 9 months.  Maybe some teams respond and others don’t.  Maybe it’s hard to find owners or they keep shifting.

Experienced security folks can often map vulnerabilities in a report back to the tools that find them.  You know the X-Frame-Options item came from ZAP or Burp.  The listable directory too.  Content types, password fields that don’t have autocompletion turned off, JS from foreign sites, etc.  You know the drill.

Something that I also find very interesting is what is NOT in the report.  If there aren’t ANY authorization related findings or other items that I wouldn’t expect a tool to find, I can often be quite confident that either the application testing was very time-limited or the testing methodology did not include human-driven testing.  This should be a red flag.

Unless you are deliberately paying for only an automated test, look for a vulnerability you know a tool can’t find but ANY tester should.  Maybe even consider adding something you can use as a canary to test the tester.  There are lots of examples, but an easy one is a user from one access role or organization being able to access data or a function from another access role or organization – basically an authorization failure.  Tools can’t find these.  If you don’t have any, it might be because your testers are only using tools.
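
Here is a hedged sketch of such a canary, written as a test:  log in as one user and request a resource belonging to another organization, expecting a denial.  The URL, endpoints, and credentials are hypothetical placeholders for your application.

```python
import requests

BASE = "https://app.example.com"  # hypothetical application under test

def login(username, password):
    """Return a session authenticated as the given user (form login assumed)."""
    session = requests.Session()
    resp = session.post(f"{BASE}/login",
                        data={"username": username, "password": password})
    resp.raise_for_status()
    return session

def test_cross_org_access_is_denied():
    alice = login("alice", "alice-password")  # Alice belongs to org 1
    # Order 42 belongs to another organization; Alice must not be able to read it.
    resp = alice.get(f"{BASE}/orders/42")
    assert resp.status_code in (403, 404), "authorization canary failed"
```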
