
JASP Check Deep Dive: S3

Matt Konda

It is very common to find Amazon S3 buckets misconfigured. 

We found one in a pen test this week; we find them frequently.  The most common issues we see with S3 buckets are that people leave them open to the world and don’t encrypt them.  The one we found this week also let us delete and write files.

Something cool about using a tool like JASP (https://app.jasp.cloud) is that it will not only detect the kinds of settings we’re about to go deeper on, but also check them every day and alert you if something changes.  Finally, you should be able to look at reports to determine when the bucket first showed up with that configuration (though ideally you could get that from CloudTrail too).

Why encrypt S3 buckets?

S3 is a shared service in AWS.  Even though we expect AWS to prevent anyone from ever being able to read the raw disk attached to any system we run in our account, there could be shared host systems or infrastructure underneath, so we need to take extra precautions to make sure the data we write isn’t mistakenly available to another tenant.  This is also why we advise encrypting any other data storage as well.  Fundamentally, if a rogue user were able to identify a problem, break out of their guest instance, and read raw disk, I don’t want them to know what I have.  If I encrypt the disk, they shouldn’t be able to.
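
To make that concrete, here is a minimal sketch of this kind of check in Python with boto3.  The bucket name is a placeholder, and this is not necessarily how JASP implements it: it looks for a default encryption configuration and turns on SSE-S3 if none exists.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def ensure_default_encryption(bucket):
    """Enable SSE-S3 default encryption if the bucket has none."""
    try:
        s3.get_bucket_encryption(Bucket=bucket)
        print(f"{bucket}: default encryption already enabled")
    except ClientError as e:
        if e.response["Error"]["Code"] != "ServerSideEncryptionConfigurationNotFoundError":
            raise
        s3.put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={
                "Rules": [
                    {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                ]
            },
        )
        print(f"{bucket}: enabled SSE-S3 default encryption")

ensure_default_encryption("my-example-bucket")  # hypothetical bucket name
```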

Thinking about permissions for S3 buckets

Sometimes S3 buckets are used to host files like images, videos or web content.  In that case, the files need to be readable for the service to work the way you want.  Even then, we recommend double checking that the bucket is not listable; in general, we don’t want buckets to be listable.  We would also recommend using different buckets if you intend to have some files be readable and others not.  Finally, and this sounds obvious when we say it like this, but if the intent is for people to be able to read the files, don’t let them write or delete them!
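
As a rough illustration of checking for a listable bucket, here is a short boto3 sketch (again, the bucket name is hypothetical).  A READ grant to the AllUsers group on the bucket ACL is what makes a bucket listable by anyone:

```python
import boto3

s3 = boto3.client("s3")
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def world_grants(bucket):
    """Return ACL permissions granted to everyone on the internet."""
    acl = s3.get_bucket_acl(Bucket=bucket)
    return [
        g["Permission"]
        for g in acl["Grants"]
        if g["Grantee"].get("URI") == ALL_USERS
    ]

perms = world_grants("my-example-bucket")  # hypothetical bucket name
if "READ" in perms:
    print("Bucket is listable by anyone on the internet")
if {"WRITE", "FULL_CONTROL"} & set(perms):
    print("Anyone can write or delete objects!")
```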

Other times we have S3 buckets that are more like file sharing drives.  We want a limited group of people to be able to read those buckets.  Of course, we also want a limited number of people to be able to write or delete in those buckets as well.
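
One way to express that kind of limited access is a bucket policy.  The sketch below (account ID, role, and bucket name are all placeholders) grants read access to a single role and grants nothing else, so writes and deletes stay limited to whatever your IAM policies explicitly allow:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "team-share-bucket"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TeamReadOnly",
            "Effect": "Allow",
            # Placeholder account ID and role name.
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/team-readers"},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```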

Logging

A couple of things are worth noting about S3 and logging.  First, your CloudTrail logs that get stored in S3 should not be publicly readable.  Second, access to any non-web files should probably have access logging going to CloudTrail.  That will come in handy if you ever need to know who read a given file.
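
If you want object-level access logged, CloudTrail data events are one way to get there.  The following boto3 sketch (trail and bucket names are placeholders) turns on object-level read and write logging for a single bucket:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record object-level (data event) reads and writes for one bucket,
# so CloudTrail can answer "who read that file?"
cloudtrail.put_event_selectors(
    TrailName="my-trail",  # hypothetical trail name
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::team-share-bucket/"]}
            ],
        }
    ],
)
```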

Conclusion

These are just some examples of things that JASP (https://app.jasp.cloud) can identify for you related to S3 buckets.  However you choose to manage your environment, you may want to implement some sort of automated check.

JASP Check Deep Dive: ECR

Matt Konda

As we build JASP, we’re brainstorming and learning about security (so far, primarily in AWS).  This is the first in a series of “Check Deep Dive” posts about things we check for in JASP.  It seems like an interesting area in which to share information.  Incidentally, we’re also going to write more meta posts about the Jemurai and JASP journey.

The first simple check we’ll talk about is around AWS ECR, the Elastic Container Registry.  If you are using Docker containers and managing their lifecycle in AWS, you may be using ECR.  You may also be using Docker Hub or other container registries.  This check demonstrates some of the power of checking security settings through an API: by using the ECR API, we can learn things about the containers hosted in AWS ECR just by asking, the way we can about nearly anything else in AWS.

Specifically, we can know an image’s age, its tags, and when it was last pushed.  We can easily iterate across regions and find older tagged images.  For most clients we work with, the idea is that they want their Docker images to be recent.  Older images suggest that a given container is not being patched or updated.  Older tagged images in particular are likely places that need updating.

Essentially, JASP will check each region for images that are old and alert you to that.
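
For illustration, a check along these lines can be written in a few lines of boto3.  This is just a sketch, not JASP’s actual implementation, and the 90-day threshold is an assumption:

```python
from datetime import datetime, timedelta, timezone

import boto3
from botocore.exceptions import BotoCoreError, ClientError

MAX_AGE = timedelta(days=90)  # assumed threshold; tune to your patch cadence
cutoff = datetime.now(timezone.utc) - MAX_AGE

for region in boto3.session.Session().get_available_regions("ecr"):
    ecr = boto3.client("ecr", region_name=region)
    try:
        repos = ecr.describe_repositories()["repositories"]
    except (BotoCoreError, ClientError):
        continue  # region not enabled for this account, etc.
    for repo in repos:
        pages = ecr.get_paginator("describe_images").paginate(
            repositoryName=repo["repositoryName"]
        )
        for page in pages:
            for image in page["imageDetails"]:
                # Flag tagged images pushed before the cutoff.
                if image.get("imageTags") and image["imagePushedAt"] < cutoff:
                    print(region, repo["repositoryName"],
                          image["imageTags"], image["imagePushedAt"])
```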

Now, AWS allows you to set lifecycle policies for ECR.  This is a really cool feature: it lets you expire and track images right in AWS, and we totally recommend doing it.  That said, we only have one client that lives this hardcore and actually removes any image automatically once it is 30 days old.  In that case, if they haven’t built an updated image within 30 days, too bad for them.  They’re in it to win it, and frankly, they are walking the walk there.
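
For reference, a lifecycle policy like the aggressive one described above can be set through the API as well.  This sketch uses a placeholder repository name and the documented sinceImagePushed rule format (see the lifecycle policy examples in the references):

```python
import json
import boto3

ecr = boto3.client("ecr")

# Expire any image more than 30 days old -- the aggressive policy
# described above. The repository name is a placeholder.
policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire images older than 30 days",
            "selection": {
                "tagStatus": "any",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 30,
            },
            "action": {"type": "expire"},
        }
    ]
}
ecr.put_lifecycle_policy(
    repositoryName="my-service",
    lifecyclePolicyText=json.dumps(policy),
)
```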

On a side note, we have another client that is using Docker heavily and claimed to be patching every 30 days because they pushed new Docker images every 30 days.  When we dove a layer deeper, though, we realized that they were pinned to a very old version of Alpine Linux, which removed many of the benefits of updating frequently.  In other words, they were updating the layer they were building but not the layers they were building on.  To be crystal clear, this check won’t identify that issue – you’ll want to look at your image layers with a tool like dive to do that.

References

https://docs.aws.amazon.com/cli/latest/reference/ecr/index.html

https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_GetLifecyclePolicy.html

https://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html

https://docs.aws.amazon.com/AmazonECR/latest/userguide/lifecycle_policy_examples.html

https://github.com/wagoodman/dive