Recently I’ve been working on a series of unit tests in Ginkgo (a popular BDD testing framework for Golang) and thought it might make for an interesting point of reference.
The tests ensure that my builds detect security misconfigurations in our AWS / Kubernetes environment. I write them just like regular test cases.
The upshot is that with this base set of tests, we can drop into your organization and within just a few days supply unit tests that run in your build pipeline to provide assurance around the state of your AWS or Kubernetes environment.
Maybe the best part is that any developer on your team can run and update the tests, which capture and self-document the intended configuration.
If you’ve been tracking what we do at Jemurai, you know we built a tool called JASP to help our clients identify potential security configuration issues. We stopped actively selling and marketing it, but we still use it on projects and it has a ton of value. You can check it out if you want at https://app.jasp.cloud.
But it struggles with things people often think should be easy, like knowing which specific resources in your environment should be public and which should be private.
I’ve been working more and more in Go (see S3S2 and GAA) for a few reasons. I like the typing. I like the speed of development. I like being able to build cross-platform native executables. I like the concurrency model. I like the robustness of the cloud SDKs. I see a lot of the Kubernetes community looking at it. I guess it’s also because I never really liked JavaScript on the server (don’t tell my team) and I was an early Ruby user before Python made a surge with both security and cloud tools. In any case, I’m enjoying Go.
Ginkgo provides a standard framework for describing expected behavior, which is the root of BDD (Behavior-Driven Development). I used to write tests in Cucumber or even RSpec with Ruby, and I liked that model because you are effectively declaring how you want something to work instead of worrying about checking the details of its implementation.
So I write a test that logically says something like: there should be a public ‘testing’ bucket and a private ‘internal’ bucket.
In Ginkgo, that logic would look like:
var _ = Describe("AWS-S3", func() {
	Context("US-EAST-1", func() {
		It("Should have a public 'testing' bucket", func() {
			bucket := GetBucket("testing") // Made up example
			Expect(bucket.Name).To(Equal("testing"))
			Expect(bucket.Visibility).To(Equal("public")) // oversimplification
		})
		It("Should have a private 'internal' bucket", func() {
			bucket := GetBucket("internal") // Made up example
			Expect(bucket.Name).To(Equal("internal"))
			Expect(bucket.Visibility).To(Equal("private")) // oversimplification
		})
	})
})
If there is no public ‘testing’ bucket, the test will fail. If there is an ‘internal’ bucket that is not private, the test will fail. The tests are self-documenting examples of what the configuration should be.
By building a small suite of these tests, we can verify anything that the AWS SDK lets us see… which is almost everything!
One cool thing about this is that once you have the tests built, any CI/CD system that can run and test Go programs can run the tests and fail the build.
Having built JASP and now getting deeper into these types of infrastructure tests, one thing I can say is that we don’t want to have to write every test ourselves, or change test code just because a new S3 bucket appears.
So we capture the known-state expectations in JSON, while the tests themselves still read like the example above. That lets us record the configuration and fail a build on any deviation from it: I can commit a file to GitHub that describes the intended configuration, and the tests actively check that the actual cloud matches what I say it should look like.
Again, I’m assuming people will run a tool like JASP to get basic and broad security configuration checks.
But then I am recommending that we build on our tests to make the pieces that differ from environment to environment testable.
Areas where we’ve written tests:
Ping us if you’d like to talk more about this.