Announcing “Inside Out” Tech Talks

Matt Konda

As a small, growing, and disruptive company, we place a major focus on training our employees.

We’ve tried a lot of different things:  Capture the Flag games, internal videos, weekly tech talks, etc.  It’s an ongoing challenge and a continually improving process.  In a recent team discussion, we realized that there might be real value in making some of those tech talks public.  It’s a way for us to provide something valuable to our community while giving our team a platform to present and cross-train on the technology and software security problems we’re facing.  For example, we’re seeing HashiCorp Vault and Marathon at Client X, or we’re using OWASP Glue with Jenkins at Client Y.

Somehow we came up with the idea of Inside Out Tech Talks, where we take one of our regular tech talks and make it open to the public.

The first will be 12/13 at 1:00 PM CST.

Join us on Zoom:  https://zoom.us/meeting/register/cd9408314686923e7510d14dfea9e911.

The topic is Security Automation.


Free Developer Security Training Wed, 11/15 @ 1pm CST

Keely Caldwell

Jemurai is hosting a free Dev Security training on “3 Open Source Tools for Secrets Management.” Join us at 1 pm CST on 11/15, and learn from Jemurai CEO and application security expert, Matt Konda.

In this training you will learn:

  • Security vulnerabilities that emerge from storing secrets in Git
  • 3 open source tools for managing secrets
  • Solutions in clear language, applicable to both engineering & security

This training is beneficial to the leadership and staff of both engineering and security teams.

Sign up here: https://www.jemurai.com/webinar/3topopensourcetoolsforsecretsmanagement

Thinking About Secrets

Matt Konda

Introduction

We have two types of projects that often uncover secrets being shared in ways that aren’t well thought through.

  1. During code review, it is actually rare that we do not find some sort of secret.  Maybe a database password, maybe an SSH key.  Sometimes, it is AWS credentials.  We’ve even built our own code review assist tool to check for all the ones we commonly see.
  2. During security engineering or appsec automation projects, we end up wiring static analysis tools into source code repositories and JIRA, and this often uncovers plaintext secrets in git.

So, generally, plaintext secrets are everywhere.

Best Practice

I said above that people are sharing secrets in ways that aren’t well thought through.  Let me expand on that.  If all of your developers have access to secrets, that’s a problem.  Of course, most developers aren’t going to do anything nefarious, but they might, especially after they leave.  Most companies face a further challenge: it is very difficult to change system passwords.  So when a developer leaves the company, chances are low that any secrets actually change.  And what happens if a laptop with source code on it gets stolen?

The other problem with having secrets around is that it makes it easy for an attacker to pivot and find other things they can target.  Suppose I get code execution on Server 1.  If all of the secrets Server 1 uses are stored in files on the server that the code uses, it makes that pivot to get on Server 2 via SSH or pull data from DB Server 3 trivial.

Testing

Here are two instant ways to look for secrets in your code:

docker run owasp/glue -t sfl https://github.com/Jemurai/triage.git

This runs a check for sensitive files that are often in the source code.

docker run owasp/glue -t trufflehog https://github.com/Jemurai/triage.git

This looks for entropy in the files in a project.  It can take a while to run but is a good way to find things like keys or generated passwords.

Nothing beats a person who is looking carefully and knows what to look for, but grep is your friend for sure.  We try to find these and offer alternatives for storing secrets.
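To illustrate the kind of check these tools perform, here is a minimal sketch in Python.  This is not how Glue or truffleHog are actually implemented; the patterns and the entropy threshold here are illustrative assumptions, not the real tools’ rules.

```python
import math
import re

def shannon_entropy(s):
    # Bits per character: random keys score high, ordinary prose scores low.
    if not s:
        return 0.0
    counts = {}
    for ch in s:
        counts[ch] = counts.get(ch, 0) + 1
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

# Illustrative patterns loosely modeled on common findings.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key header
]

def suspicious_findings(text, entropy_threshold=4.5, min_len=20):
    findings = []
    # Pattern checks run against the whole text (some secrets contain spaces).
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            findings.append(("pattern", match.group()))
    # Entropy checks run per whitespace-separated token.
    for token in re.split(r"\s+", text):
        if len(token) >= min_len and shannon_entropy(token) > entropy_threshold:
            findings.append(("entropy", token))
    return findings
```

Running this over a checkout would flag both recognizable credential shapes and long, random-looking strings, which is roughly the division of labor between the sensitive-file/pattern checks and the truffleHog entropy check above.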

The Alternatives

Generally, we want to keep secrets in a place where:

  • People can’t just go read them
  • It is easy to change them
  • We know if anyone uses them (audit)

We see a lot of projects adopting Vault and tools like it to store secrets.

Even better is a scenario where credentials don’t even exist but get generated for a particular action and then automatically expired.  Ideally, we require MFA.  99designs’ aws-vault does this with AWS and its sessions in an elegant way.  This pattern, in general, allows us to know that there aren’t standing passwords out there that people could use without our realizing it.  It also reduces the challenge of responding to, for example, a stolen laptop.
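The ephemeral-credential pattern can be sketched in a few lines.  This is a toy illustration, not aws-vault’s or Vault’s actual implementation; the class and method names are made up for the example.

```python
import secrets
import time

class EphemeralCredentialBroker:
    """Toy sketch: issue short-lived tokens instead of storing
    long-lived passwords, and record every issuance for audit."""

    def __init__(self, ttl_seconds=900):
        self.ttl = ttl_seconds
        self.audit_log = []   # (principal, purpose, expiry) - who asked, for what
        self._active = {}     # token -> expiry timestamp

    def issue(self, principal, purpose):
        token = secrets.token_urlsafe(32)   # random, never stored in code or config
        expiry = time.time() + self.ttl
        self._active[token] = expiry
        self.audit_log.append((principal, purpose, expiry))
        return token

    def is_valid(self, token):
        expiry = self._active.get(token)
        return expiry is not None and time.time() < expiry
```

Note how this hits all three properties from the list above: nobody can “just go read” a credential that doesn’t exist yet, rotation is automatic because tokens expire, and every issuance leaves an audit trail.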

References

An older reference from a ThoughtWorker:  https://danielsomerfield.github.io/turtles/

A tool:  https://www.vaultproject.io/

Another tool:  https://github.com/99designs/aws-vault

An upcoming Jemurai Tech Talk:  https://www.jemurai.com/webinar/3topopensourcetoolsforsecretsmanagement

Free Developer Security Training: Improve Your Application Security Wed, 10/10 @ 1PM CST

Keely Caldwell


We have a free developer security training: “3 Practices Your Dev Team Can Adopt Today to Improve Application Security.” It takes place this Wednesday, October 11 at 1 PM CST.

You will learn:

  • Why it’s important for developers & development management to add security into the SDLC: architecture, user stories, code review, unit & integration tests and QA
  • Actionable activities the Dev Team can implement today
  • Solutions described in clear language, applicable to both engineering and security

Sign up here: https://www.jemurai.com/webinar/3itemsdevscanaddtodaytoimproveapplicationsecurity

Jemurai takes an agile, iterative approach to implementing security into our clients’ code & SDLC. This free training will provide tips for doing so in your environment.

Popular Media Coverage of Software and Formal Methods

Matt Konda

It is interesting … in the wake of Equifax and other recent news, The Atlantic has published several articles about software.

I say it is interesting because I am completely torn about both of them.  On the one hand, they are correct.  The Equifax Breach should not really be a surprise and the fact that there are coding errors in any system of significant size is something that most software developers or security professionals would accept without argument.

On the other hand, complacency or acceptance is the last thing that I would advocate for developers, consumers or companies after the Equifax breach.  I’ve already written about that here.

Furthermore, while formal methods present an interesting direction for software verification, in practice they are limited to very specific use cases.  I’ve never seen them employed professionally for any widely used application.  That doesn’t mean they aren’t or couldn’t be, but if I haven’t seen it, it’s probably not real or accessible for common developers yet.

An interesting side effect of these articles being in The Atlantic is that people who wouldn’t usually ask about these things are asking.  I’ve heard about each of these articles from numerous people at clients and partners.  I suppose that is a benefit of having the discussion – provided people have the attention span to continue the discussion.

The “Saving the World From Code” article also included a general quote which I think probably should have been attributed to Marc Andreessen in the Wall Street Journal in 2011:

It’s been said that software is “eating the world.”

The fact that it is not attributed makes me wonder just a bit about the context the article is written from.  One thing I can’t argue with is the substance of that quote, which again dates from 2011.  I would perhaps add to it that software is flawed everywhere.  I just don’t buy that formal systems or rigorous modeling are a realistic near-term solution for that.  Many of our clients are adopting new languages or technology – sometimes with more security issues – even as we work to secure their systems.  The idea of a 4GL, which has been around for almost my whole professional career – that we could assemble a program in an increasingly sophisticated IDE with visual blocks, like the hacking scene in the movie Swordfish – seems unachievable in practice.  If anything, I prefer simpler text editors than ever before.

Ultimately, there is a lot that we can do to secure our systems.  Things like threat modeling to identify and then isolate scope, actively working on architecture, building common reliable blocks, teaching developers, building cultures that value security, using tools and smarts to think about scenarios, teaching practices that encourage security to be a first class part of the SDLC … all of these are real things people in the real world are doing to make software safer.  I doubt there is a silver bullet that somehow avoids the people understanding the problem – we have to accept that as a cost or accept the insecurity of the software we use.  I guess that’s why people hire us to help them secure their software.

Equifax: What’s the Score

Matt Konda

Introduction

Late last week (around 9/15) it was reported that the CIO and CSO at Equifax “resigned”.  Equifax stock is down by around 30%.  The FTC is launching an investigation, and findings and settlements are likely to be in the hundreds of millions of dollars or more.  Clearly there are going to be short- and medium-term impacts from Equifax’s security bumbling.  In this post, we present a long read with our thoughts about Equifax.

Malaise

One of the first reactions people had was to talk about this Atlantic article and ask the question:  “Isn’t our data already out there?  What can we actually do?”  

Consumer data breaches have become so frequent, the anger and worry once associated with them has turned to apathy. So when Equifax revealed late Thursday that a breach exposed personal data, including social-security numbers, for 143 million Americans, public shock was diluted by resignation.

I think this article is bordering on irresponsible.  Yes, your data may be out there.  Yes, you should not be shocked by another breach.  But absolutely, it is incumbent upon us to watch our credit and work to improve security so that this doesn’t continue to be expected.

There is a term I like to use for where we are as an industry:  technical debt.  In Agile development projects, technical debt is the term we use for work we put off in order to get the main project done faster.  It could be something like testing, or refactoring to ensure we are building a more robust system.  We know we’re prioritizing delivering the system and cutting some corners.  Those corners we cut are technical debt that we may have to circle back and do.  Often this happens (we circle back and deal with debt) when a company gets acquired or partners with a larger firm.  Many times, this technical debt never actually gets paid back.  I would argue that in most corporations, the vast technical debt waiting to be paid off represents one of the biggest overall security issues.

Another way to look at this is through the lens of budgets.  If a company increases its IT budget 5% each year, it may seem to be growing and investing.  After 20 years, if the IT Security Budget starts to grow at a pace of 2x the regular IT budget (so 10% each year) then the leadership of a company feels that they are being aggressive and strongly supporting security.  But from a security perspective, we were underinvesting all along so even incremental 10% increases in budget are leaving us woefully underinvested to truly provide security.
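The compounding math above is easy to check.  This back-of-the-envelope calculation uses purely illustrative starting numbers (security at 5% of an IT budget of 100 units):

```python
# Illustrative only: 20 years of 5% IT budget growth versus a security
# budget that starts small but grows at 2x that rate (10% per year).
it_budget = 100.0       # arbitrary starting units
security_budget = 5.0   # e.g. security starts at 5% of IT spend
for year in range(20):
    it_budget *= 1.05
    security_budget *= 1.10

print(round(it_budget, 1))        # IT budget after 20 years
print(round(security_budget, 1))  # security budget after 20 years
print(round(100 * security_budget / it_budget, 1))  # security as % of IT
```

Even after two decades of “aggressive” 10% increases, security lands at roughly 33.6 units against an IT budget of roughly 265.3 – still only about 12.7% of IT spend, which is the underinvestment point in the paragraph above.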

We need to be able to come up with an independent model for what we should be doing for security and then fund the right work.  Usually that’s going to cost a lot more than people want.

What Can Consumers Do?

Consumers can do a lot to keep themselves safe – but it will be a fair amount of work and takes time, energy and money.

  1. Consumers can be smart about what services they use.  Ultimately, when we put our information online – especially for free – we should know that we are trusting a company with information that can be used against us.  So … maybe don’t let your phone know where you are all the time for any application … don’t give arbitrary numbers of companies your social graph …
  2. Watch your statements.  From a pure financial perspective, there’s no substitute for your own due diligence and fast reporting.  Use a credit card that has known and favorable process for addressing claims of false charges.
  3. Lock your credit.  If you want to try to ensure that the data Equifax had can’t be used to open new accounts in your name, you can lock your credit.  It costs money and is time consuming.  A good writeup that explains the overall ecosystem is here:  http://www.kalzumeus.com/2017/09/09/identity-theft-credit-reports/

The FTC following this is a good sign for consumers because it suggests that Equifax and other companies handling this type of data will expect to see punishments (fines) that are likely to spur them to invest more in security.

Far be it from me to say this – but it is possible that the whole credit reporting system should be re-envisioned.  Not to mention identity and SSN.

What Can Companies Do?

For companies, some key takeaways from the Equifax breach include:

  • Incremental improvement (10% budget increases) may not be enough.  IT and Security leadership are clearly vulnerable in case of failure.
  • Security should be baked into SDLC and product delivery processes.  Your products are living works and need to be maintained – security needs to be part of that.
  • Keep an inventory of your applications to avoid surprises and make sure you don’t lose track of technical debt.

If we address security from the beginning and as a first class part of every project, we can design for security, test for security, write requirements for security and ultimately deliver more secure applications.

As incidents like Equifax show, consumers and regulators expect us to be thinking about privacy and security.

A significant part of Jemurai’s business is trying to help companies reshape their application delivery to be secure.  Every client is different.  But we see a surge in demand for security that is only increasing.

What Was the Issue?

From what we can tell, the root issue was an application built with an old version of Struts.  So the immediate action Equifax could have taken would have been to keep their libraries up to date.  Of course, whenever the answer is “to patch” it suggests an underlying vulnerability that is interesting in its own right.

Behind the immediate action of staying up to date, the vulnerability had to do with remote code execution.  In layman’s terms, that means the vulnerability allowed a user to get code to run on the server.  There are a variety of ways that this happens in the real world these days … sometimes people take strings (a bunch of characters) and process those without checking that they fit certain rules.
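As a generic illustration of the “check that strings fit certain rules” idea – not the actual Struts flaw, which was more involved – an allow-list validation looks like this.  The username format here is an arbitrary example.

```python
import re

# Allow-list validation: accept only input matching an expected shape,
# rather than trying to filter out known-bad input.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")

def validate_username(raw):
    """Return the input if it matches the expected format, else raise."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("input does not match expected format")
    return raw
```

The design point is that the allow-list rejects anything unexpected by default – including expression-language metacharacters like `$`, `{` and `(` – instead of trying to enumerate every dangerous payload.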

For more detail, check out our previous blog post and video.

Qualifications

Another swirling discussion has been about the CSO, whose LinkedIn page cited a music degree as qualifications.  Many folks questioned that as a qualification.  I think this is basically a red herring argument.  The CSO may have been qualified and may not have been, but the academic background is unlikely to have factored into the result.  Many great programmers come from music backgrounds – it is actually something some companies look for.

The more important question here is how the Equifax CEO and Board have responded to requests for funding for security.

Likely Long Term Impact

It is very hard to say what the long term impact of the Equifax breach will be.  In the case of Target, if we look at the stock chart it is hard to even know when the infamous breach occurred.  If we look closely we can see a dip around December 2013 but changes in the stock price are largely unrelated to security events and it certainly didn’t destroy the value of the company the way many thought that it might.

Equifax, on the other hand, is at least in some sense specifically responsible for consumers’ security.  There could be no doubt that Equifax had sensitive data that would be coveted by schemers.  It is possible that they will be held accountable to a different standard than Target was.  I wouldn’t hold my breath.

The Score

As humans, it is natural to simplify events, imagine them as a game and maybe even as having an end or closure.  So, if this is the end of the game, I would say that the score is Equifax 213 – Hackers 25.  

Equifax has still been commercially doing a lot for its shareholders.  It is most likely to continue to see success as a business.  On some level, Equifax is still winning.

On the other hand, this is a very significant breach.  Nobody probably would have thought that the Equifax attackers could get 25 points.  That might represent a surprising vulnerability.  Regulators will likely add to that opposing score as they impose oversight and fines.

In order to protect their lead, Equifax is going to need to spend a lot more money and time ensuring they don’t give up more points.  The game may slow down.  The profit may decline.  This will probably have ripple effects on their immediate competitors as well.

In the scheme of things, it is a test of all of our collective will to improve and sacrifice.  If we can envision better systems for identity and more inherently secure systems for credit scoring (blockchain anyone?), there are opportunities to dramatically improve security while realizing the benefits of a continually more technology advanced and connected world.

Of course, the game will go on.  Equifax may give up a lot of points, but if they keep their business running – it is likely that they’ll still come out with a higher score and “win”.  In some ways, I just hope the game isn’t that much fun to watch.

Mitigating the Vulnerability Widely Thought to Have Caused the Equifax Breach

Keely Caldwell

By: Warren Chain

The recent Equifax data breach may have exposed Personally Identifiable Information (PII) on over 143 million Americans.

It appears that this breach was caused by a Struts vulnerability – which allows a remote user to run code on a site. This vulnerability would be categorized under #9 of the OWASP Top 10 list of the Most Critical Web Application Security Risks.

Matt Konda, Jemurai CEO & OWASP Global Chair, created a short video training for developers, where he shares his thoughts on mitigating this vulnerability.

Check it out.

Mitigating the Vulnerability Widely Thought to Have Caused the Equifax Breach from Jemurai on Vimeo.

Insecure About Your Apps Security?

Keely Caldwell

Here at Jemurai, we take a human-based approach to cybersecurity.

So, what does that mean? Security tools catch some vulnerabilities, but not all of them. For example, tools typically miss vulnerabilities related to business logic and user authorization and authentication. Addressing these vulnerabilities requires embedding security into your software development life cycle and code.

Want to learn more about securing your code?

Our CEO and the chair of OWASP, Matt Konda, is speaking on “3 Vulnerabilities That Security Tools Can’t Catch” at our free webinar on Wednesday, Sept. 13 at 1 pm CT.

This training will be valuable to the staff and leadership of both engineering and security teams.

Get information you can use today to improve the security of your code by signing up here.

You don’t want to miss this!

Security Policies Rebooted

Matt Konda

Here’s a deep dark secret:  I don’t particularly like security policy.  I don’t always follow policy.  Goodness knows that with the 50-250 page policies I’ve seen, I didn’t even understand the whole policy at a legal level – and if you don’t understand them at a legal level can you really say you’re following them?  Not to mention when one policy contradicts another.

Even at companies with very robust security programs that include policy, it is very common that I approach developers and they don’t understand their company’s policy either – for example, what data they need to protect.  At a previous employer, we used to tease the folks that worked on PCI as having a “passion for compliance.”  That was not a compliment.  Policy came to feel like a necessary evil at best.

Then I met and started to work with our CISO Rocio Baeza.  I didn’t know that I’d end up hiring her as an internal policy, governance and risk resource for Jemurai but I’m lucky I did.  Initially, we did policy because many of our clients that needed technical help also needed policies – some kind of rules to follow.

As we challenged Rocio to “get meta” on the problems with policy the way we try to “get meta” with the technical issues we see, she extended and then surpassed our expectations by developing an approach for Agile Governance.  She implemented policies for clients that were short, to the point, readable and in our collective judgment captured the important things they needed to think about even better than the policy “books” we saw.

Writing policy in layman’s terms, with a focus on simplicity, was something that wasn’t immediately easy to appreciate.  The shorter, simpler policy reads easily and doesn’t feel like it hurts the way some policies do.  It’s like the old quote from Blaise Pascal:

 “If I had more time, I would have written a shorter letter.”

We worked hard to make it shorter.  Does that mean it doesn’t work?  On the contrary, we think it works even better.  In fact, it works so well that we captured the policy in a more digestible way so that people could get access to the policies without a whole consulting engagement.  You can now purchase the policy bundle, which includes the core policy, a license and a simple one page implementation guide right off of our website for less than an hour of a security pro’s time.  Check it out:  https://jemurai.com/product/general-security-policy-bundle/ and let us know what you think.

Incubator: Canary Data

Matt Konda

Incubator

At Jemurai, we have started incubating products.  We love security consulting and the engineering we do there, but there is something amazing about building a product.  In particular, I constantly crave the experience of pushing the limit and trying something new and a little different.  I’m even embracing marketing and failing fast.  So each month, we take an idea out of our product backlog of ideas and try pushing ourselves with it a bit.

Last month, we released a simple Security Policy Bundle for $249 that you can download here.  This month, we’re pushing the canary.

Canary in the Coal Mine

What is the canary in the coal mine all about anyway?  Well, miners used to take a canary with them into the mine so that if carbon monoxide levels rose enough to be dangerous, they would know.  The canary would die and they would hopefully get out before the CO caused problems for them too.

In short, the canary is an early warning signal.

How Does Canary Data Work?

The way we envision canary data working is that we provide known data that is bad.  Sounds silly, right?  Except that we track it and know who we gave it to, when and for which of their environments.  Then we search for the known bad (canary) data in increasingly sophisticated ways and when we find it, it is a strong indication that a client has had a breach (of any kind!) at a certain point in time, in a certain location, application, part of their network, cloud, etc.

By tracing which canary data shows up, we can not only notify clients early of potential issues but also pinpoint where and which parts of their operations may have issues.  It’s an early warning signal.
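The core of the idea fits in a small sketch.  This is a toy illustration of the concept, not our product’s implementation; the class and the canary format are invented for the example.

```python
import secrets

class CanaryRegistry:
    """Toy sketch of canary data: mint unique fake values per client
    and environment, then recognize them if they ever show up."""

    def __init__(self):
        self._issued = {}   # canary value -> (client, environment)

    def mint(self, client, environment):
        # A fake "credential" that is unique and traceable to where we planted it.
        value = "canary-" + secrets.token_hex(8)
        self._issued[value] = (client, environment)
        return value

    def scan(self, text):
        # Search arbitrary text (a dump, a log, a paste) for any canary
        # we have ever issued, returning the value and where it was planted.
        return [(value, origin)
                for value, origin in self._issued.items()
                if value in text]
```

Because each minted value is unique to a client and environment, a single hit tells you both that something leaked and roughly where it leaked from – which is the pinpointing described above.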

Input?

As with any “incubator” project, we have a lot of fresh ideas about how it could work, but it will have to be tested in the wild – so we’re interested in input from anyone who would like to help us test it in the realz.  Contact me to talk further.