Implementing Authorization Properly

Matt Konda

Almost every time we do a penetration test or code review, we find problems with authorization.  Sometimes we call these horizontal or vertical privilege escalation; sometimes we call them instance based or function based restriction gaps.  Ultimately, many applications fail to implement clear restrictions on who can do what.  This post revisits these types of findings, explains a bit about how we test for them, and talks about some standard ways of addressing them.

Dispelling A Dangerous Assumption

Sometimes we hear developers say that a particular issue is not important because a user has to be authenticated (usually by presenting a username and password) before they could ever cause the issue we are showing.  Yet in many systems it is trivially easy to enroll with a throwaway account and learn how the system works.  That means I can get an account easily and generally hide who I really am while I'm testing.  So the truth is that in many, many systems it is perfectly possible to experiment with authorization as an anonymous but authenticated user.

In cases where the authentication process is more rigorous, or in a business to business setting, the risk may be reduced but it is probably not eliminated.  We certainly see people trying to guess and misuse API tokens.

Instance Based Authorization Failures (Horizontal Privilege Escalation)

The easiest way to think about instance based authorization (authz) failures is to use an example.  Consider the case where I am working with a timesheet.  The system presents me with a form for my timesheet.  As I fill out the form, the JavaScript front end watches for changes and saves them in the background as I go.  Here is an abbreviated example of a background POST that saves the current work in progress on a timesheet:

POST /entry/save HTTP/1.1
X-CSRF-Token: ABC--
Cookie: _time_sess=abc

user_id=307476

What is interesting here is that as a legitimate user, I can capture any number of realistic requests like this.  Once I have a few, I can examine them and replay them quite easily within a valid session.  In this case, note the user id (307476) in the request body.  That appears to be tied to my logged in user.  It is quite likely that other numbers would cause this request to be applied to other users.  The question is: is there anything in the back end logic to prevent a request for user 307477 from successfully updating that user's data from my session?

The simplest possible prevention for this type of problem is to tie each instance of a timesheet to its owning user and to only allow users to update their own timesheets.  This might look like the following in CanCan, a handy authorization library for Ruby on Rails that we like for illustration because it is direct:

class TimesheetController < ApplicationController
  load_and_authorize_resource   # CanCan loads @timesheet and checks ownership

  def show
    # @timesheet is already loaded and authorized
  end
end

Typically this depends on the Timesheet object having a column tying it to a user that is the owner.  The magic here is that CanCan does all of the work to load and check the owner automatically for any kind of model object.

This might look like this in Java with Apache Shiro:

Subject currentUser = SecurityUtils.getSubject();
currentUser.checkPermission("timesheet:update:" + timesheet.getId());  // instance-level check; permission string is illustrative

We also see this custom coded in many client applications.

One way or another, we need to make sure to control access to objects at an instance level.
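To make that concrete, here is a minimal sketch in Python, with hypothetical names, of what a server-side ownership check looks like: the session user must own the timesheet, regardless of any id that appears in the request payload.

```python
# Minimal instance-level (ownership) check; the data model is hypothetical.
class AuthorizationError(Exception):
    pass

class Timesheet:
    def __init__(self, owner_id):
        self.owner_id = owner_id
        self.entries = []

def save_entry(session_user_id, timesheet, entry):
    # Compare the owner column to the logged in user from the session,
    # never to a user id supplied in the request body.
    if timesheet.owner_id != session_user_id:
        raise AuthorizationError("not the owner of this timesheet")
    timesheet.entries.append(entry)
```

The key design point is that the comparison uses the session's identity, so replaying the request with someone else's id in the payload changes nothing.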

Function Based Authorization Failures (Vertical Privilege Escalation)

Typically, function based access control is implemented with role mappings.  Vertical privilege escalation happens when a user can call a function they shouldn't be allowed to call.  For example, team member Natalie is not a manager, so she should not be able to approve timesheets.  We would never expect the following POST to come from her browser, because we wouldn't expect the button that triggers it even to be shown to her.

POST /approve/finalize?return_to=…I wonder if this is open redirect but that is another topic…  HTTP/1.1
Cookie: _timesheet_sess=ABC
Upgrade-Insecure-Requests: 1

user_id=1770921


Since the Approve button is only visible to people in the admin or manager role, developers wouldn't expect this POST ever to happen, so they might not write code to prevent it.

Note that the user id is also in this payload.  It is easy enough to imagine taking a request like this and replaying it for every id in the system for every period.  Burp Intruder would make that an easy exercise that requires no programming.

Something to keep in mind is that Single Page Apps (SPAs) like those written in Angular typically ship a controller to the client, which can be easily explored to identify potential endpoints that might not otherwise be obvious.  This only raises the importance of implementing comprehensive authorization on the server.

The Spring Security annotations illustrate how roles are often enforced:

@PreAuthorize("hasRole('MANAGER')")  // role name is illustrative
public void approve(Timesheet timesheet);

This might look like this in Java with Apache Shiro:

Subject currentUser = SecurityUtils.getSubject();
currentUser.checkRole("manager");  // throws AuthorizationException if the role is missing

You get the idea.

Note that we most commonly have to do both function and instance based checks.  Meaning, I can approve some timesheets, but only those from people on my team – not across the company or all of the tenants in the solution.  That means that the real logic needs to check both:

  • The POST /approve/finalize …
  • The user_id=1770921

I must be a user that has the role required to approve timesheets AND I should be specifically allowed to approve this user’s timesheet.
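The two checks above can be sketched together; this is illustrative Python with a made-up data model, not any particular framework.

```python
# Combined function-based (role) and instance-based (team) authorization.
class AuthorizationError(Exception):
    pass

def can_approve(approver, timesheet_owner):
    # Function-based: must hold the manager role at all.
    if "manager" not in approver["roles"]:
        return False
    # Instance-based: may only approve timesheets for their own team.
    return timesheet_owner["team"] == approver["team"]

def approve(approver, timesheet_owner):
    if not can_approve(approver, timesheet_owner):
        raise AuthorizationError("approval denied")
    return "approved"
```

Both conditions must hold; failing either one should block the request.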

Defining Intended Authorization

As is often the case with software, the authorization gaps we find often come down to a lack of communication between a product manager (stakeholder) and the engineering team.  The requirements said that Matt should be able to see and submit his timesheet.  The requirements did not say that Matt should NOT be able to see Joe's timesheet or delete Brian's timesheet.  Since those requirements were never captured, the engineers may not have stopped to think about what could go wrong.  Sometimes making these requirements work is easy; sometimes it is hard.

Once we’ve defined the intended authorization logic, we can also test it.  This is a place where programmatic unit and functional testing can be extremely effective.  We always recommend writing security unit tests and incorporating things such as horizontal and vertical access attempts (and blocking) in the tests.
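A sketch of what such security unit tests can look like, assuming a hypothetical authorize() function standing in for the application's real authorization logic:

```python
# Security unit tests asserting that horizontal and vertical access attempts
# are blocked. authorize() and the data model are hypothetical stand-ins.
import unittest

def authorize(user, action, resource):
    if action == "approve" and "manager" not in user["roles"]:
        return False  # vertical: approving requires the manager role
    if resource["owner_id"] != user["id"] and "manager" not in user["roles"]:
        return False  # horizontal: non-managers only touch their own records
    return True

class AuthorizationTests(unittest.TestCase):
    def test_horizontal_access_is_blocked(self):
        alice = {"id": 1, "roles": []}
        bobs_timesheet = {"owner_id": 2}
        self.assertFalse(authorize(alice, "update", bobs_timesheet))

    def test_vertical_access_is_blocked(self):
        alice = {"id": 1, "roles": []}
        own_timesheet = {"owner_id": 1}
        self.assertFalse(authorize(alice, "approve", own_timesheet))

    def test_owner_can_update_own_timesheet(self):
        alice = {"id": 1, "roles": []}
        own_timesheet = {"owner_id": 1}
        self.assertTrue(authorize(alice, "update", own_timesheet))
```

Tests like these run in CI, so a regression that loosens an authorization rule fails the build instead of shipping.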

Bonus Points

Whenever authorization fails, we need to know.  Most systems don't make it easy to call something you are not authorized for.  If you click a button to delete something as an admin, we presume that you saw the button.  If, as described above, an unauthorized user sends a request that suggests they clicked the button and we detect and block it, we should also LOG that and escalate, because there is almost certainly something nefarious going on.  Even in a horizontal access case, if I try to access a timesheet for another user with an ID I shouldn't have access to and you block it, the event should be logged in such a way that security will see it.
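A small sketch of that idea: the blocked check emits a security event before returning.  The logger name and wiring here are illustrative, not a prescription.

```python
# Treat a blocked authorization attempt as a security event worth escalating,
# not just a routine 403. Logger name and data model are illustrative.
import logging

security_log = logging.getLogger("security.authz")
security_log.setLevel(logging.WARNING)

def check_timesheet_access(session_user_id, owner_id):
    if session_user_id != owner_id:
        # The UI never offers this action, so a blocked request here is
        # almost certainly probing; log where security will see it.
        security_log.warning(
            "authz failure: user %s tried to access timesheet of user %s",
            session_user_id, owner_id)
        return False
    return True
```

In practice that logger would feed a SIEM or alerting pipeline rather than a local file.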

Another area that makes a big difference when it comes to access control is simplifying management of user roles.  In practice, this might translate to a single source of truth like Active Directory tied to an SSO solution that transmits group memberships from AD, with end solutions that translate those AD group memberships into local group memberships or roles.  This way, by removing a user from a group in Active Directory, I can remove them from the admin role in the timesheet system.
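The translation step can be as simple as a lookup table; the group and role names below are made up for illustration.

```python
# Hypothetical mapping of AD group memberships (as delivered via SSO claims)
# to local application roles.
AD_GROUP_TO_ROLE = {
    "Timesheet-Admins": "admin",
    "Timesheet-Managers": "manager",
    "Timesheet-Users": "user",
}

def local_roles(ad_groups):
    # Unknown groups are ignored; removing a user from an AD group removes
    # the corresponding local role the next time their session is built.
    return sorted({AD_GROUP_TO_ROLE[g] for g in ad_groups if g in AD_GROUP_TO_ROLE})
```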

Advanced Paradigms

A lot of the time, instance based access control boils down to keeping a more detailed authorization database (or store of some kind) that understands user roles, groups, and object ownership.  Function based access control can be implemented purely on roles and coded with annotations or fairly broad controller level directives in most languages.  Sometimes what we really want, though, is a more contextual understanding of the user.  Attribute based access control (ABAC) extends the idea of RBAC by allowing a "yes" or "no" decision on whether a user can do something to incorporate other attributes – things like geolocation, time of day, effective role, recent activity, etc.
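A toy ABAC decision might look like the following; every attribute name here is illustrative, and real policy engines express this declaratively rather than in code.

```python
# ABAC sketch: the role is necessary but not sufficient; contextual
# attributes (time of day, network) also factor into the decision.
def abac_allows(subject, action, context):
    if action != "approve_timesheet":
        return False
    if "manager" not in subject["roles"]:
        return False                      # the RBAC part
    if not (9 <= context["hour"] < 18):
        return False                      # only during business hours
    if context["network"] != "corporate":
        return False                      # only from the office network
    return True
```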

We also see changes happening in terms of roles being assigned for a limited time.  Systems like AWS STS or Hashicorp Vault can help implement session based role assignment.  In this paradigm, a user is given the right to assume a role.  The user itself does not have any privileges directly assigned.  Rather, when they need privileges they ask, and assuming they have been cleared to ask for that access, they are granted specific access to what they requested for a limited time.  Since the user doesn't have any rights directly and the request for any session is audited, this reduces exposure from lost credentials and makes any role usage visible at a granular level.
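The shape of that paradigm can be sketched in a few lines; this is a conceptual illustration in the spirit of STS or Vault, not their actual APIs.

```python
# Session-based role assignment sketch: no standing privileges; a request
# mints a grant that expires. All names and the TTL are illustrative.
import time

def assume_role(user, role, allowed_roles, ttl_seconds=900, now=None):
    # allowed_roles maps users to the roles they are cleared to assume.
    if role not in allowed_roles.get(user, ()):
        raise PermissionError("user %s may not assume role %s" % (user, role))
    issued = now if now is not None else time.time()
    # In a real system this grant request would be audited.
    return {"user": user, "role": role, "expires_at": issued + ttl_seconds}

def grant_valid(grant, now=None):
    current = now if now is not None else time.time()
    return current < grant["expires_at"]
```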

While we’re talking, we should also think about how we implement authorization on our computing platforms.  Kubernetes offers RBAC based controls to define who can do what in a given cluster, pod, etc.  As defining the environment moves toward being a developer task, we need to keep these authorization scenarios top of mind.

One area where we haven't seen a good, consistent solution is service to service authorization.  Usually, within a given set of boundaries, microservices authenticate (or not!) and then return data or don't.  We believe there is room to grow frameworks for controlling what data can be requested by which microservices, and this is an area where we're putting in some research work.


As developers, we always need to check that the logged in user has the right to perform the action requested against the instance(s) requested.  That implies both checking the function and the instance.



JASP Check Deep Dive: Redshift

Matt Konda


Redshift is Amazon's data warehousing solution.  Here's how they describe it:

Redshift delivers ten times faster performance than other data warehouses by using machine learning, massively parallel query execution, and columnar storage on high-performance disk. You can setup and deploy a new data warehouse in minutes, and run queries across petabytes of data in your Redshift data warehouse, and exabytes of data in your data lake built on Amazon S3.

Obviously, anywhere you have lots of data is a place where security matters.  So let's talk about what JASP will check in a Redshift environment.  Before we do that, we should point out that with Redshift we're usually talking about clusters, and many of the parameters or settings for those are managed by parameter groups.


So … if you have lots of data, especially if you think there might be anything sensitive in it, you should probably think about encrypting that data.  Redshift makes that relatively easy, but people often don't do it.  JASP checks each cluster to see if it has been configured with encryption – which is literally a radio button in the cluster configuration.

For most organizations, using KMS is totally reasonable.  You may want to have different keys for different environments or purposes.


Another thing we check with JASP is whether Redshift is accessible publicly.  We would never expect public access to a Redshift cluster to be intended.  In practice, this looks like a cluster with a VPC and Security Group with open ports to the outside world.  It is easy to check via the API as well.

aws redshift describe-clusters


Redshift also has a setting that allows version upgrades to be applied automatically.  This has obvious risk, in the case that some kind of change breaks something.  It also has an obvious upside: if there are any security issues that indicate an update is needed, the update will be applied automatically.  JASP checks this setting as well.


We can check that SSL is required when connecting to Redshift by checking each parameter group and ensuring that it requires SSL.  Generally speaking, we would expect connections that access sensitive data to go over an SSL/TLS connection.  To get more information about parameter groups from the CLI, we can do this:

aws redshift describe-cluster-parameter-groups

Activity Logging

Redshift makes it easy to log user connections, changes to users and queries run.  Having this logging on provides an audit trail and is strongly indicated for any data stores with sensitive or regulated data.  JASP checks this on each parameter group.


AWS Redshift has a pretty basic profile in terms of security.  Without diving deeper into what data is present, we can still make some initial observations and very general security recommendations.



Don’t rely on X-XSS-Protection to protect you from XSS

Brian Fore


The X-XSS-Protection header only helps protect against certain reflected XSS attacks. It does nothing for stored XSS attacks. Don’t rely on it to protect your site from XSS!

What it can do: Block reflected XSS attacks

Reflected XSS occurs when a malicious query parameter in a page’s URL is rendered unsanitized on the page.

The XSS protection against this is not implemented uniformly across browsers (and it's not supported at all by Firefox).  But the basic idea is that the browser searches through the HTML looking for a script tag that exactly matches the script tag in the URL.  If it finds a match, the page is prevented from loading.

As an example, suppose our page takes the ‘user’ query parameter from the URL and displays it in a welcome message:
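A minimal sketch of that kind of vulnerable rendering (hypothetical code, not the actual demo app): the 'user' query parameter is echoed into the page without sanitization.

```python
# Reflected XSS sketch: the query parameter lands in the HTML unescaped.
from urllib.parse import parse_qs, urlparse

def render_welcome(url):
    user = parse_qs(urlparse(url).query).get("user", [""])[0]
    return "<h1>Welcome, %s!</h1>" % user  # unsanitized: reflected XSS

# A URL like /todos?user=<script>alert(1)</script> puts a live script tag
# into the rendered page.
```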

We’ll turn off Chrome’s default XSS auditor by including the following header in our server responses (this is not something you should normally do):

X-XSS-Protection: 0

Now we’ll replace ‘Mario’ with a script tag and see what happens:

With the X-XSS-Protection turned off, the script runs and an alert box appears.

Now we’ll turn the X-XSS-Protection back on by including the following header in our server responses (this is the usual setting we recommend):

X-XSS-Protection: 1; mode=block

We’re blocked when we attempt to reload the page:

We see the following message in the Chrome dev console:

The XSS Auditor blocked access to ‘http://localhost:8080/todos?user=%3Cscript%3Ealert(1)%3C/script%3E’ because the source code of a script was found within the request. The server sent an ‘X-XSS-Protection’ header requesting this behavior.

Here we have an example of the X-XSS-Protection in action.  It has signaled the browser to block the loading of the page, because the script tag appears both in the URL and within the page's HTML.

We generally recommend setting X-XSS-Protection: 1; mode=block, which (as we saw above) prevents the loading of the page if a reflected script is detected. An alternative is to set X-XSS-Protection: 1, which signals the browser to sanitize the reflected script and then continue to load the page.

What it can’t do: Block stored XSS attacks

Stored XSS occurs when a user submits malicious input to the website, the website stores this input, and then redisplays it, unsanitized, on one or more pages on the site.

As an example, suppose the todos on our site are displayed unsanitized, and the user submits a new todo with an onmouseover action:

When the user clicks Submit, the todo is stored server-side and the page refreshes with the updated list of todos from the server. When we hover over the malicious todo we see the alert popup:

The X-XSS-Protection does nothing against this sort of attack.


Generally speaking, you should include the X-XSS-Protection header in your server responses:

X-XSS-Protection: 1; mode=block

But realize this only helps protect against certain reflected XSS attacks, and it’s not implemented uniformly across browsers. It will not protect against stored XSS attacks.

So how can you protect your site from stored XSS attacks?

  • Have a strong Content-Security-Policy.
  • Sanitize (or reject) malicious user input before it is stored server-side.
  • Always sanitize user input before it is displayed client-side. (Modern frameworks will usually take care of this for you, assuming you follow their best practices.)
  • Don’t assume 3rd party libraries (markdown editors, chart creators, etc.) are properly sanitizing all possible user input.
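For the "sanitize before display" point, the fix can be as simple as escaping on output; this sketch uses Python's stdlib, and the todo example is hypothetical.

```python
# Escape user-supplied todo text before interpolating it into HTML;
# html.escape neutralizes payloads like the onmouseover example above.
import html

def render_todo(todo_text):
    return "<li>%s</li>" % html.escape(todo_text, quote=True)
```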



JASP Check Deep Dive: S3

Matt Konda

It is very common to find Amazon S3 buckets misconfigured. 

We found one in a pen test this week.  We find them frequently.  The most common things we see with S3 buckets are that people leave them open to the world and don't encrypt them.  The one we found this week also let us delete and write files.

Something cool about using a tool like JASP is that it will not only detect the kinds of settings we're about to go deeper on, but it will also check them every day and alert you if something changes.  Finally, you should be able to look at reports to determine when the bucket first showed up with that config (though ideally you could get that from CloudTrail too).

Why encrypt S3 drives?

S3 is shared infrastructure in AWS.  Even though we expect AWS to prevent anyone from ever being able to read raw disk attached to any system we run in our account, there could be shared host systems or infrastructure underneath, so we need to take extra precautions to make sure the data we write isn't mistakenly available to another tenant.  This is also why we advise encrypting any other data storage as well.  Fundamentally, if a rogue user were able to identify a problem, break out of their guest instance, and read raw disk, I don't want them to know what I have.  If I encrypt the disk, they shouldn't be able to.
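Encryption can also be enforced at upload time.  As a hedged illustration (the bucket name is hypothetical), a bucket policy along these lines denies PutObject calls that don't request KMS server-side encryption:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyWrongEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }
      }
    },
    {
      "Sid": "DenyMissingEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*",
      "Condition": {
        "Null": { "s3:x-amz-server-side-encryption": "true" }
      }
    }
  ]
}
```

Two statements are needed because a StringNotEquals condition alone does not catch uploads that omit the encryption header entirely.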

Thinking about permissions for S3 drives

Sometimes S3 drives are used to host files like images, videos or web content.  In that case, you need the files to be readable to use the service the way you want.  Even then, we would recommend double checking that the directory is not listable.  In general, we don't want directories to be listable.  We would also recommend using different buckets if you intend to have some files be readable and others not.  Finally, and this sounds obvious when we say it like this, but if the intent is for people to be able to read the files, don't let them write or delete them!

Other times we have S3 buckets that are more like file sharing drives.  We want a limited group of people to be able to read those buckets.  Of course, we also want a limited number of people to be able to write or delete in those buckets as well.


A couple of things are worth noting about S3 and logging.  First, your CloudTrail logs that get stored in S3 should not be publicly readable.  Second, access to any non-web files should probably have access logging going to CloudTrail.  That will come in handy if you ever need to know who read a given file.


These are just some examples of things that JASP can identify for you related to S3 buckets.  However you choose to manage your environment, you may want to implement some sort of automatic check.

JASP Meta: November 2018 Edition

Matt Konda

Building JASP has been a really interesting experience for all of us at Jemurai.  This post captures some of what I think we're seeing and learning right now.

We bootstrapped.  Lots of people think raising venture capital for an idea is the best way to build and grow.  We still bootstrapped.  That means we paid for all of the development of the tool by also working our tails off on client projects.  Some of those client projects were cloud security audits, and that's where the whole idea came from.  Keeping close to clients' needs has actually helped us.  Bootstrapping also means we've stayed very lean, which I hope means the team feels an organic sense of ownership and learning as we go.  We're hungry.  Not scared.  But hungry.

As consultants and engineers, we are sure there is a lot of value in the JASP tool.  It makes it easy to find and fix AWS security issues.  It provides ongoing visibility and notifications when something isn't right.  We have learned a lot from the awesome open source tools in this space, and you can run a lot of those by hand to get some of the results JASP gives.  But you have to figure out how to run those tools and digest the results.  We tried to build a tool around a lot of those same ideas that would take away a lot of the complexity of running the tools and exchange that for clearer information and resources for the user.  We believe there is a significant need for easier-to-use tools that don't take a dev team or DevOps process and pull them out of the things they know.

I believe someone can get started in JASP in < 10 minutes and get the benefit of daily checking and alerting around AWS security items, for a price that is a small fraction of having someone do it by hand.  – MK

What I am personally learning is that having a tool with value isn’t enough.  I can show people the results, help them fix real security issues and while they are happy to have fixed them – they don’t necessarily feel a clear and present need to buy the tool.  It is still a “nice to have”.  Even most people who might want to try it for free can’t make the time.  People that do try it for free, rarely opt in to purchase unless they need the deeper support we can provide at the higher tiers.  It could be discouraging but I understand, there is so much noise and everyone is so busy, we need to make it easier and find the best ways to communicate about it.

Of course, one big goal was to be able to provide this value in a leveraged way, with a software platform.  We didn’t want to be constrained by what our team could do by hand.  (And of course, we wanted to be a SaaS business with a great hockey stick J curve)  So we’ve been trying to find ways to get the message out about our product.

As we’ve engaged with firms to help us with marketing and growing the idea beyond our technical ideation, we’ve learned more.  We digested their suggestions and it is clear that almost any framing of the tool comes off as a gross oversimplification.  Very common advice is to scare people or drive people to the tool with Compliance.  There’s a place for fear and compliance, but I told myself a long time ago I didn’t want to be one of those security firms.  I hate the idea of using people’s fear and ignorance to get them to buy my tool.  From the beginning, I have wanted to find developers and put tools in their hands that made their lives remarkably better.  I wanted to approach security as a constructive partner, helping people to do the right thing with a positive spin.  I believe by approaching the space this way, we can earn long term trust from partners.

But of course, like I said we also want to grow beyond what we can do directly with word of mouth.  So, following marketing advisors input, we have run ads on various platforms.  We have tried several messaging approaches.  All suggest that what we’re doing isn’t resonating for people in a major way yet.  We have done a couple of feature releases we think might help (better prioritization, user management, dashboards, reporting) but ultimately, we’re at that place where we’re not chasing users by adding more features – we’re trying to find the right messaging, framing, pricing, etc. to make the tool relevant and useful to people.

When I started my career, I hated having to build new features that sales people had already promised to customers.  Now, I’m laughing at my old self and thinking about how hard I would work for each new customer.  That’s not to say I am willing to lose my identity chasing them, but I definitely underestimated the complexity of building and running a software business – in particular the ability to engage with customers and have lots of great learning conversations.  I assumed that with a free tier in the tool we’d be having lots of those conversations.

There is a lot more to do here in terms of building a platform for automation.  The vision has always been to be able to do Azure, GCP and even integrate Glue for source code analysis.  We want to make these kinds of analysis really easy so that they just show up where developers need them.  We want to be an API first backbone for security automation that makes it possible to quickly apply new tools, rules, etc.  Yes, we can build some of these things into one off Jenkins jobs or run them as scripts, but there is a lot more value when the results are aggregated, stored over time, compared to industry standards, and get escalated with support to fix.

It will be very interesting to continue to learn what the industry has an appetite for.  The good news is that it's all been fun along the way, and it's the journey, not only the destination, that matters.


JASP Check Deep Dive: ECR

Matt Konda

As we build JASP, we’re brainstorming and learning about security (so far, primarily in AWS).  This is the first in a series of “Check Deep Dive” posts that talk about things we are checking for in JASP.  It seems like an interesting area to share information.  Incidentally, we’re also going to post more meta posts about the Jemurai and JASP journey.

The first simple check we’ll talk about is around AWS ECR or Elastic Container Registry.  If you are using Docker containers and managing their lifecycle in AWS, you may be using ECR.  You may also be using Docker Hub or other container registries.  This check really demonstrates some of the power of checking security things through an API.  By using the ECR API, we can know some things about the containers hosted in AWS ECR just by asking, the way we do about anything else.

Specifically, we can know the age of the image, any tags and when it was last pushed.  We can easily iterate across regions and find older tagged images.  The idea for most clients we work with is that they want their docker images to be recent.  Older images suggest that they are not patching or updating a given container.  Especially older tagged images are likely places that need to be updated.

Essentially, JASP will check each region for images that are old and alert you to that.
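The core of that check can be sketched as follows; in practice the push timestamps would come from the ECR DescribeImages API, but here they are supplied directly so the logic stands alone.

```python
# Flag images whose last push is older than a threshold; the 30-day
# threshold and image names are illustrative.
from datetime import datetime, timedelta

def stale_images(images, now, max_age_days=30):
    # images maps "repo:tag" -> datetime of the last push.
    cutoff = now - timedelta(days=max_age_days)
    return [name for name, pushed_at in images.items() if pushed_at < cutoff]
```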

Now, AWS allows you to set lifecycle policies for ECR.  This is a really cool feature.  It can allow you to expire and track this right in AWS.  We totally recommend doing this.  That said, we only have one client that lives this hardcore and actually automatically removes expired images every 30 days.  In that case, if they haven't built an updated image within 30 days, too bad for them.  They're in it to win it.  And frankly, they are walking the walk.

On a side note, we have another client that is using Docker heavily and claimed to be patching every 30 days because they pushed new Docker images every 30 days.  When we dove a layer deeper though, we realized that they were hard setting to a very old version of Alpine Linux, which removed many of the benefits of updating frequently.  In other words, they were updating the layer they were building but not the layers they were building on.  To be crystal clear, this “check” won’t identify this issue – you’ll want to look at your dependencies with a tool like dive to do that.


JASP Dashboards

Matt Konda

JASP is a platform for security automation.  We currently focus on monitoring AWS environments for potential security issues.

Throughout September and October, we have been refining JASP dashboards.  The goal is to give users the simplest possible summary view of how they are doing.  We wanted to help convey a sense of how a user's environment stacks up.  We found that people want to know not just what their issues are but how they are doing … relative to other companies and relative to where they should be.  To do that, we did some number crunching and analysis of typical security issues we find.

To implement this, we drew a bit from SSL Labs Grades and Github contributions.

The goal was to show:

  • Roughly how you are doing in simple terms.  (Nobody wants a “D”!)
  • How we calculated your grade and some history so you can see improvement or quality sliding right there in the dashboard
  • Our “The One Thing™” idea – the one thing you should go fix today.

Below is an example from one of the intentionally insecure environments we test with.  Note that the services are clickable and drill through to the specific security issues that are causing the grade for the given service.

JASP users should be able to see their dashboards now.  We are adding notifications to alert users if their grade changes and to ensure that we communicate what their grades are periodically.

Live Coding a Glue Task at AppSecUSA – Video

Matt Konda

Here is the video from the Glue and live coding talk at AppSecUSA.

Live Coding a New Glue Task at AppSecUSA

Matt Konda

At AppSecUSA, OWASP Glue, a project we contribute heavily to, was asked to present in the project showcase.  I put together an overview talk about how the tool is architected and what we use it for.  Then we added a new task live during the talk.  I thought that was enough fun that it was worth blogging about.  I would like to start by acknowledging Omer Levi Hevroni of Soluto, who has also contributed to and supported Glue over the past year and more.

Why Glue?

We started working on Glue because we found that most security tools don’t integrate all that cleanly with CI/CD or other developer oriented systems.  Our goal is always to get the checking as close to the time of code authorship as possible.  Further, we thought that having it wrap open source tools would make them more accessible.  Philosophically, I’m not that excited about tools (see earlier post) but we should definitely be using them as much as we can provided they don’t get in our way.  Ultimately, we wrote Glue to make it easy to run a few security tools and have the output dump into Jira or a CSV.

Glue Architecture

So we defined Glue with the following core concepts:

  • Mounters – these make the code available.  Could be pulling from GitHub or opening a docker image.
  • Tasks:  Files – these analyze files.  Examples are ClamAV and hashdeep.
  • Tasks:  Code – these are some level of code analysis.  Some are first class static tools, others are searching for dependencies or secrets.
  • Tasks:  Live – these are running some kind of live tool like Zap against a system.
  • Filters – filters take the list of findings and eliminate some based on whatever the filter criteria are.  An example is limiting ZAP X-Frame-Options findings to 1 per scan instead of 1 per page.
  • Reporters – reporters take the list of findings and push them wherever you want them to end up.  JIRA, TeamCity, Pivotal Tracker, CSV, JSON, etc.

Live Coding

OK, so a common task is to say

“Hey, this is cool but I want to do this with a tool that Glue doesn’t yet support.”

Great.  That should be easy.  Let’s walk through the steps.  We’ll do this with a python dependency checker:  safety.

Set Up the Task

A good way to start a new task is to copy an existing one.  In this case, we're going to do a Python-related task, so let's copy the bandit task and alter it to match:

require 'glue/tasks/base_task'  # We need these libraries.  The base_task gives us a report method.
require 'json'
require 'glue/util'
require 'pathname'

# This was written live during AppSecUSA 2018.
class Glue::SafetyCheck < Glue::BaseTask  # Extend the base task.

  Glue::Tasks.add self  # Glue discovers tasks from a list.  This adds our task to the list.

  def initialize(trigger, tracker)
    super(trigger, tracker)
    @name = "SafetyCheck"                 # The name of the check:  -t safetycheck (lowered)
    @description = "Source analysis for Python"
    @stage = :code                        # Stage indicates when the check should run.
    @labels << "code" << "python"        # Labels allow a user to run python related tasks:  -l python
  end

Now we have the start of a class that defines our SafetyCheck task.

Run the Tool: Safety Check

Here we just need to implement the run method and tell the task how to call the tool we want to run.  In this case, we want it to emit JSON.  The resulting code is:

def run
  rootpath = @trigger.path
  @result = runsystem(true, "safety", "check", "--json", "-r", "#{rootpath}/requirements.txt")
end

The @trigger is set when the Task is initialized (see above) and includes information about where the code is.  We use that to find the path we want to analyze.  Then we use one of the provided util methods, runsystem, to invoke safety check with the parameters we want.

Note that we are putting the result in the @result instance variable.

Parse the Results

Once the tool runs, we have the output in our @result variable.  So we can look at it and parse out the JSON as follows:

def analyze
  results = clean_result(@result)
  parsed = JSON.parse(results)
  parsed.each do |item|
    source = { :scanner => @name, :file => "#{item[0]} #{item[2]} from #{@trigger.path}/requirements.txt", :line => nil, :code => nil }
    report "Library #{item[0]} has known vulnerabilities.", item[3], source, severity("medium"), fingerprint(item[3])
  end
rescue Exception => e
  Glue.warn e.message
  Glue.warn e.backtrace
  Glue.warn "Raw result: #{@result}"
end

Here, we call a clean_result method on the result first.  You can look here for detail, but it is just stripping out the warnings the tool emits that would make the output invalid JSON.  We do this to make sure the JSON is parseable for our output.  This is a common issue with open source tools, and I don’t underestimate the value of making these things just work.

The magic here is really in the two lines that set the source and then report it.  The source in Glue terms is the thing that found the issue and where it found it.  In this case, we set it to our Task (SafetyCheck) plus the library name from the file containing the dependencies (requirements.txt).

The report method is supplied by the parent BaseTask class and takes as arguments:  description, detail, source, severity and fingerprint.

def report description, detail, source, severity, fingerprint
  finding = Glue::Finding.new(@trigger.appname, description, detail, source, severity, fingerprint)
  @findings << finding
end

You can see if you look closely that we set all of the severities to Medium here because safety check doesn’t give us better information.  We also use the supplied fingerprint method to make sure that we know if we have a duplicate.  You can also see that the result of calling report is that we have a new finding and the finding is added to the array of findings that were created by this task.  We get the trigger name, the check name, a timestamp, etc. just by using the common Finding and BaseTask classes.

Making Sure the Tool is Available

In addition to run and analyze, the other method we expect to have in our Task is a method called supported?.  The purpose of this method is to check that the tool is available.  Here’s the implementation we came up with for safety check, which doesn’t have a version call in its CLI.

def supported?
  supported = runsystem(true, "safety", "check", "--help")
  if supported =~ /command not found/
    Glue.notify "Install python and pip."
    Glue.notify "Run: pip install safety"
    Glue.notify "See:"
    return false
  else
    return true
  end
end

The idea here is to run the tool in a way that tests if it is available and alerts the user if it is not.  Graceful degradation, as it were…

The Meta

Here is the code from the Tasks ruby file that runs all the tasks.

if task.stage == stage and task.supported?
  if task.labels.intersect? tracker.options[:labels] or      # Only run tasks with labels
    ( run_tasks and run_tasks.include? task_name.downcase )  # or that are explicitly requested.

    Glue.notify "#{stage} - #{task_name} - #{task.labels}"

Here you see the supported?, run and analyze methods getting called.  You also see the labels and tasks being applied.  It’s not magic, but it might look weird when you first jump in.


We wanted to create a new task.  We did it in about 20 minutes in a talk.  Of course, I wrote the code around it so it was easy.  But it does illustrate how simple it is to add a new task to Glue.

If you want to see this check in action, you can run something like:

bin/glue -t safetycheck

We’re getting the Docker image updated, but once available you can run:

docker run owasp/glue -t safetycheck

We hope people can digest and understand Glue.  We think it is an easy way to get a lot of automation running quickly.  We even run a cluster of Kubernetes nodes doing analysis, in addition to integrating into Jenkins builds or git hooks.  Happy Open Source coding!!!

How it Works: TOTP Based MFA

Aaron Bedra
  Application Security Engineering OpenSource


Multi-Factor Authentication has become a requirement for any application that values security. In fact, it has become a regulatory requirement in some industries and is being adopted as a requirement in several others. We often discover misconceptions or downright misunderstandings about how MFA works and think this is a topic worth diving into. This particular article will focus on one of the most common second factors, Time-based One-Time Password, or TOTP.

TOTP authentication uses a combination of a secret and the current time to derive a predictable multi-digit value. The secret is shared between the issuer and the user, and the issuer compares generated values to determine if the user in fact possesses the required secret. You may have heard this incorrectly referred to as “Google Authenticator”. While Google had a major part in popularizing this method, it has nothing to do with how TOTP actually works. Any site may create and issue tokens and any mobile application with a correct implementation of TOTP generation may produce a one time value. In this article we will implement server side TOTP token issuing and discuss its security requirements.

To read more about TOTP token generation, please take a look at RFC 6238.

The example code in this article is written in Java. This task can be accomplished in any programming language that supports the underlying cryptographic functions.

Establishing a Seed

The foundation for the security of a TOTP token begins with the seed. This value is used in conjunction with the current time to derive the instance of the token. Because time can be calculated it is not suitable as the only value for our token. Choosing a seed is incredibly important and should not be left up to the user. Seeds should be randomly generated using a Cryptographically Secure Pseudo Random Number Generator. You can determine the number of bytes you want to use, and in this example we are using 64. We will use the SecureRandom implementation provided by the Java language.

static String generateSeed() {
    SecureRandom random = new SecureRandom();
    byte[] randomBytes = new byte[SEED_LENGTH_IN_BYTES];
    random.nextBytes(randomBytes);  // fill the buffer with random bytes
    return printHexBinary(randomBytes);
}

To make the seed easier to consume, we return the hex representation of the generated bytes. This allows us to pass the value around a bit more easily.
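The code in this article relies on printHexBinary and hexToBytes helpers that are never shown. A minimal sketch of what they might look like (the class name HexUtil and the implementations are mine, not from the article):

```java
class HexUtil {
    // Hex-encode a byte array, two upper-case characters per byte.
    static String printHexBinary(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b & 0xFF));
        }
        return sb.toString();
    }

    // Decode a hex string back into the original bytes.
    static byte[] hexToBytes(String hex) {
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return bytes;
    }
}
```

Any equivalent hex codec (for example javax.xml.bind.DatatypeConverter in older JDKs) would do; the important property is that encode and decode round-trip cleanly.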

Establishing a counter

The other side of TOTP token generation relies on the current time. We take the current time represented as a long, which is the number of seconds since epoch. This can be derived using System.currentTimeMillis() / 1000L. Next, we take the value and divide it by our period, or the number of seconds the token will be valid before rotating. We will use a value of 30 in our example, which is the recommended setting. Finally, we need to put the value into a byte array. There are a few ways to do this, but the following method is on the conservative side, accounting for non-64-bit longs and possible endianness differences.

private static byte[] counterToBytes(long time) {
    long counter = time / PERIOD;
    byte[] buffer = new byte[Long.SIZE / Byte.SIZE];
    for (int i = 7; i >= 0; i--) {
        buffer[i] = (byte)(counter & 0xff);
        counter = counter >> 8;
    }
    return buffer;
}

Once we have this value we can execute our HMAC operation to produce the long form of our OTP value.

Generating a Value

Using the seed as a key and the counter as a message, we will derive our long form OTP value. The value returned from our HMac operation will be truncated in order to produce the 6 digit value we will compare against in the end. The following code is an HMacSHA1 operation using the standard Java encryption libraries. While RFC 6238 describes the possible options of HMacSHA256 and HMacSHA512, they are not viable when distributing the secret for use on most mobile authenticator applications.

private static byte[] hash(final byte[] key, final byte[] message) {
    try {
        Mac hmac = Mac.getInstance("HmacSHA1");
        SecretKeySpec keySpec = new SecretKeySpec(key, "RAW");
        hmac.init(keySpec);  // initialize the mac with the seed as the key
        return hmac.doFinal(message);
    } catch (NoSuchAlgorithmException | InvalidKeyException e) {
        log.error(e.getMessage(), e);
        return null;
    }
}

With the ability to hash our seed and message we can now derive our TOTP value:

static String generateInstance(final String seed, final byte[] counter) {
    byte[] key = hexToBytes(seed);
    byte[] result = hash(key, counter);

    if (result == null) {
        throw new RuntimeException("Could not produce OTP value");
    }

    int offset = result[result.length - 1] & 0xf;
    int binary = ((result[offset]     & 0x7f) << 24) |
                 ((result[offset + 1] & 0xff) << 16) |
                 ((result[offset + 2] & 0xff) << 8)  |
                 ((result[offset + 3] & 0xff));

    StringBuilder code = new StringBuilder(Integer.toString(binary % POWER));
    while (code.length() < DIGITS) code.insert(0, "0");
    return code.toString();
}

Using the seed as the key and the counter as the message, we perform the necessary conversions, perform the hash, and truncate the message according to the specification. Finally, we take the result modulo 10 to the power of the expected digits (in our case 6) and convert that to a string value. If the result is less than 6 digits we pad it with zeros.
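A good way to sanity-check an implementation like this is the RFC 6238 test vectors: with the ASCII seed 12345678901234567890 at T = 59 seconds (counter 1 at a 30 second period), the 8-digit SHA-1 value is 94287082, so a 6-digit implementation should produce 287082. A self-contained sketch of the same algorithm (names and structure are mine) that can be run against the vectors:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class TotpVector {
    // 6-digit TOTP for the given key and Unix time, 30s period, HmacSHA1.
    static String totp(byte[] key, long unixTime) throws Exception {
        long counter = unixTime / 30;
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) {        // big-endian 8-byte counter
            msg[i] = (byte) (counter & 0xff);
            counter >>= 8;
        }
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(key, "RAW"));
        byte[] h = hmac.doFinal(msg);
        int offset = h[h.length - 1] & 0xf;   // dynamic truncation per RFC 4226
        int binary = ((h[offset] & 0x7f) << 24) | ((h[offset + 1] & 0xff) << 16)
                   | ((h[offset + 2] & 0xff) << 8) | (h[offset + 3] & 0xff);
        return String.format("%06d", binary % 1_000_000);
    }
}
```

Checking against the published vectors is the quickest way to catch endianness or truncation-offset mistakes.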

Providing the Secret to the User

In order to provide the secret to the user, we need to provide a consistent string in a format that allows the user to generate tokens reliably. There are several ways to do this, including simply providing the secret and issuer to the user directly. The most common method is by providing a QR code that contains the information. For this example we will use the Zebra Crossing library.

class QrCode {
    private static final int WIDTH = 350;
    private static final int HEIGHT = 350;

    static void generate(String applicationName, String issuer, String path, String secret) {
        try {
            String qrdata = String.format("otpauth://totp/%s?secret=%s&issuer=%s", applicationName, secret, issuer);
            generateQRCodeImage(qrdata, path);
        } catch (WriterException | IOException e) {
            System.out.println("Could not generate QR Code: " + e.getMessage());
        }
    }

    private static void generateQRCodeImage(String text, String filePath) throws WriterException, IOException {
        QRCodeWriter qrCodeWriter = new QRCodeWriter();
        BitMatrix bitMatrix = qrCodeWriter.encode(text, BarcodeFormat.QR_CODE, WIDTH, HEIGHT);
        Path path = FileSystems.getDefault().getPath(filePath);
        MatrixToImageWriter.writeToPath(bitMatrix, "PNG", path);
    }
}

Executing this code will save a PNG with the corresponding QR code. In a real world situation, you would render this image directly to the user for import by their application of choice. Note that authenticator applications following the de facto otpauth key URI format expect the secret parameter to be Base32 encoded, so a hex-encoded seed should be re-encoded before it is embedded in the QR code.

Consuming the Token

In order to test our implementation we will need a program that can accept an otpauth:// string or a QR Code. This can be done a number of ways. If you want to do this via a mobile device, you can use Google Authenticator or Authy. Both of these programs will scan a QR code. If you want to try locally, 1Password provides a way to import a QR image by adding a label to a login and selecting One-Time Password as the type. You can import the created image using the QR code icon and selecting the path to the generated image. Once the secret is imported it will start producing values. You can use these as your entry values into the example program.

Protecting the Seed

It is important to respect the secret for what it is: a secret. With any secret we must do our part to protect it from misuse. How do we do this? Like any other piece of persisted sensitive information, we encrypt it. Because these secrets are not large in size, we have a number of options at our disposal. The important part is not to manage this step on your own. Take advantage of a system that can encrypt and decrypt for you and just worry about storing the encrypted secret value. There are cloud based tools like Amazon KMS, Google Cloud Key Management, and Azure Key Vault, as well as services you can run like Thycotic Secret Server, CyberArk Conjur, and Hashicorp Vault. All of these options require some kind of setup, and some are commercial products.

Setting up Secret Storage

To keep this example both relevant and free of cost to run, we will use Hashicorp Vault. Vault is an open source project with an optional enterprise offering. It’s a wonderful project with capabilities far beyond this example. There are a number of ways to install Vault, but since it is a single binary, the easiest way is to download the binary and run it. Start vault with the development flag:

λ vault server -dev

During the boot sequence you will be presented with an unseal key and a root token. The example program will expect the VAULT_TOKEN environment variable to be set to the root token provided. Your output will be similar to the following:

λ vault server -dev
==> Vault server configuration:

Api Address:
Cgo: disabled
Cluster Address:
Listener 1: tcp (addr: "", cluster address: "", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: (not set)
Mlock: supported: false, enabled: false
Storage: inmem
Version: Vault v0.11.2
Version Sha: 2b1a4304374712953ff606c6a925bbe90a4e85dd

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:


The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: mkniY94IlJngQz07gfPZlQnZnvEHMXWQ3/MiFegsfr8=
Root Token: 4uYnD1vVZZcNkbYe03t0cLkh

Development mode should NOT be used in production installations!

Once Vault is booted you will need to enable the Transit Backend. This allows us to create an encryption key inside of Vault and seamlessly encrypt and decrypt information.

λ export VAULT_ADDR=
λ vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/

λ vault write -f transit/keys/how_it_works_totp
Success! Data written to: transit/keys/how_it_works_totp

λ echo "my secret" | base64

λ vault write transit/encrypt/myapp plaintext=Im15IHNlY3JldCIgDQo=
Key           Value
---           -----
ciphertext    vault:v1:/HeILzBTv+JbxdaYeKLVB9RVH9o/b+Lilrja88VhCuaSSlvUY+IzHp2U

λ vault write -field=plaintext transit/decrypt/myapp ciphertext=vault:v1:/HeILzBTv+JbxdaYeKLVB9RVH9o/b+Lilrja88VhCuaSSlvUY+IzHp2U | base64 -d
"my secret"

We can now encrypt and decrypt our TOTP secrets. The only thing left is to persist those secrets so that they can be referenced on login. For this example we will not be creating a complete user system, but we will setup a database and create an entry with an encrypted seed value to show what the end to end process will resemble.

To encrypt and decrypt our seed we can use a Vault library. The following example demonstrates the essential pieces:

String encryptSeed(String seed) throws VaultException {
    final Map<String, Object> entry = new HashMap<>();
    entry.put("plaintext", seed);
    final LogicalResponse response = client.logical().write("transit/encrypt/myapp", entry);
    return response.getData().get("ciphertext");
}

String decryptSeed(String ciphertext) throws VaultException {
    final Map<String, Object> entry = new HashMap<>();
    entry.put("ciphertext", ciphertext);
    final LogicalResponse response = client.logical().write("transit/decrypt/myapp", entry);
    return response.getData().get("plaintext");
}

Note the lack of error handling. In a production system you would want to handle the negative and null cases appropriately.

Finally, we take the output of the vault encryption operation and store it in our database. The sample code contains database handling logic, but it is typical boilerplate database code and not directly relevant to explaining the design of a TOTP system.


By now it should be pretty obvious that time synchronization is of the utmost importance. If the server and client differ more than the period, the token comparison will fail. The RFC describes methods for determining drift and tolerance for devices that have drifted for too many periods. This example does not address drift or resynchronization, but it is recommended that a production implementation address this issue.
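One common accommodation for modest clock drift is to accept codes from a small window of adjacent periods around the current counter. The following is a sketch of that idea, not the article's implementation; the class and method names are mine, and the 6-digit HOTP helper mirrors the truncation logic shown earlier:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class TotpWindow {
    private static final int PERIOD = 30;

    // Accept a submitted code if it matches the current period or any
    // period within +/- window steps, tolerating modest clock drift.
    static boolean validate(byte[] key, String submitted, long unixTime, int window) throws Exception {
        for (int step = -window; step <= window; step++) {
            if (code(key, unixTime / PERIOD + step).equals(submitted)) {
                return true;
            }
        }
        return false;
    }

    // 6-digit HOTP value for an explicit counter (RFC 4226 truncation).
    private static String code(byte[] key, long counter) throws Exception {
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) {
            msg[i] = (byte) (counter & 0xff);
            counter >>= 8;
        }
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(key, "RAW"));
        byte[] h = hmac.doFinal(msg);
        int offset = h[h.length - 1] & 0xf;
        int binary = ((h[offset] & 0x7f) << 24) | ((h[offset + 1] & 0xff) << 16)
                   | ((h[offset + 2] & 0xff) << 8) | (h[offset + 3] & 0xff);
        return String.format("%06d", binary % 1_000_000);
    }
}
```

In production the string comparison should also be constant-time (for example via MessageDigest.isEqual on the byte representations) to avoid timing side channels, and accepted codes should be marked used to prevent replay within the window.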

Running the Example

The source code for this example is available on Github. Make sure you have read and executed all of the steps above before attempting to run the example. Additionally, you will need to have PostgreSQL installed and running. If this is your first time running the example, you will need to be sure to import the generated token using your preferred application before attempting to type in a value. You should have your MFA token generator application open and the test token selected. You can setup and execute the program by running:

createdb totp
mvn flyway:migrate
mvn compile
# For Unix users
# For Windows users
mvn -q exec:java -Dexec.mainClass=com.jemurai.howitworks.totp.Main

You will be prompted to enter your token value. After pressing return the program will echo the value you entered, the expected token value, and if the values match. This is the core logic necessary to confirm a TOTP based MFA authentication sequence. If your token values do not match, make sure to enter your token value with plenty of time to spare on the countdown. Because we have not implemented a solution that accounts for drift, the value must be entered during the same period the server generates the expected value. If this is your first time running the example you will need to import the QR code that was generated before the input prompt. If everything was done correctly you will see output similar to the following:

MFA Token:
Entered: 808973 : Generated: 808973 : Match: true

At this point you have successfully implemented server side TOTP based MFA and used a client side token generator to validate the implementation.

Security Pitfalls of TOTP

For a long time, TOTP (or really, just OTP) based MFA was the best option. It was popularized by RSA long before smartphones were capable of generating tokens. This method is fundamentally secure but is open to human error. Well crafted phishing attacks can obtain and replay TOTP based MFA responses. Several years ago FIDO and U2F were introduced, and this is now the “most secure” option available. It comes with benefits and drawbacks and, like all solutions, should be carefully considered before use.


Multi-Factor Authentication is an important part of the security of your information systems. Any system providing access to sensitive information should employ the use of MFA to protect that information. With credential theft via phishing on the rise, this could be one of the most important controls you establish. While this article examines an implementation of TOTP token based MFA, you should seek an established provider like Okta, OneLogin, Duo, Auth0, etc. to provide a production ready solution.