JASP 1.0.3

Matt Konda
  Application Security Engineering Startup

Today we’re publicly releasing version 1.0.3 of JASP, our cloud security automation tool:  https://jasp.jemurai.com.

We think of it as “we’ll watch your back in the cloud”.  We currently work in AWS and check for common security issues across EC2, S3, RDS and many other services.  You can sign up on the website and get free checks right there.  To get monitoring or extended reporting, we offer several tiers of paid subscription plans.

We’re working hard to price it so that it is within reach of a general audience, not just hyper-secure environments with huge budgets.  Part of our mission is to democratize access to security tools.

JASP is the culmination of a ton of work by our team.  I want to pause and congratulate them on building a tool that can help a lot of companies build more secure environments.  We’ve been through a lot.  Thanks, Team!

It is just the beginning.  Our roadmap includes tackling new environments like Azure and GCP, and we are even considering AppSec automation.  We see the platform as a center of accessible security automation.

If you can, take a moment to check it out and give us feedback.  It’s for you!  Happy Friday!

Security in the SDLC (Reboot)

Matt Konda
  Application Security Engineering

Today I went looking for my blog posts about security in the SDLC from 2012-2016 and realized that I had never migrated them forward when we updated to the new website.  Whoops!  So … in this post I want to recap in some detail what I’ve learned about security in the SDLC.

Every SDLC Is Different

Nearly every company we work with has a different version of how they build software.  Some are more Waterfally, others are more Agile.  Some are even DevOpsy.  Even within these broad categories, there is always variation.  At most mid-sized and large companies, more than one variation of the SDLC is effectively in use.

For these reasons, when we build a model for integrating security into a company’s SDLC, one critical step is to stop and understand the SDLC itself.  I like to draw pictures at both a unit-of-work level and an overall process level.  We’ll present two, then circle back and dive into the details.

Here is a generalized high level example.  The project timeline goes from the top left to the bottom right and the phases are represented by the segments.  Then we overlay security activities for each phase.

Here is an example of how we might represent a unit of work, like a story, and overlay security activities.

It is very useful to have visual representations of the workflows to help communicate about what activities are expected when.

Every Company Has Security Strengths and Weaknesses

In theory, we could simply identify security best practices and everyone would follow them.  In reality, we are constrained by:

  • The skills of the security team (and developers)
  • The technologies in use
  • The history of the program and the tools that have already been purchased
  • The budget and ability to change

We usually try to understand what a team is capable of and where their strengths are.  For example, some organizations are very focused on penetration testing.  Others have established training programs.  Others have strong static analysis tools.  Again, variation is the rule.  Therefore, our typical advice is to find the activities that are realistic for your organization and overlay only those onto your processes.

Discussing Risk

One important part of handling security within an organization is coming to an agreement across teams about how to discuss and manage risk.  This inherently involves engineering, security, business stakeholders and potentially even executives up the chain.  Presenting a general model for handling risk is beyond the scope of this post, but a defined process, engagement of the parties and a way to triage risks are the important things to look for as you bring risk into the conversation.

Capturing Security Requirements

There are a variety of ways that requirements are captured.  Sometimes they are in PRDs (Product Requirements Documents).  Sometimes they are in stories.  The more explicitly we capture security requirements, the better job we’re going to do meeting them.

Examples of security requirements on a story might include:

  • The following role is required to access this function
  • Access to this data should be logged
  • Values for coupon $ should only be positive (see the sketch below)
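
To make the last item concrete, here is a minimal sketch of a guard plus a unit test for it.  The function names and amounts are illustrative, not from any particular codebase.

```python
# A minimal sketch of enforcing a story-level security requirement:
# coupon values must be positive (a negative coupon would credit money).
import pytest

def apply_coupon(order_total, coupon_amount):
    if coupon_amount <= 0:
        raise ValueError("coupon amount must be positive")
    return max(order_total - coupon_amount, 0)

def test_negative_coupon_rejected():
    with pytest.raises(ValueError):
        apply_coupon(100.0, -50.0)
```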

Again, we want to think about specific requirements in the context of each unit of work.  One way to help do that is to introduce a villain persona that we can refer to for inspiration as we imagine the abuse cases and how things can go wrong.

That being said, we also advocate for Baseline Security Requirements.  Baseline security requirements are requirements that exist across all of the stories for a given system.  Often they apply across different systems or services.  Examples of things covered in baseline security requirements might include:

  • Identity should be established from …
  • Security events should be logged to syslog
  • All connections should be over TLS
  • All user input should be validated and output encoded if shown in a web UI
  • All data should be encrypted at rest by X,Y,Z method based on data classification
  • How microservices should authenticate to each other

Baseline security requirements are handled like Non-Functional Requirements (NFRs), which describe the operational qualities of a system rather than features.  An example of an NFR might be that a system should be able to handle 100,000 concurrent users.  This is something we need to know early and design for.  We think of baseline security requirements in the same way.  If we present global expectations and constraints up front, we give developers the chance to choose appropriate frameworks, logging, identity, encryption, etc. as described above.
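
As a concrete illustration, here is a minimal sketch of turning two of the baseline requirements above into automated checks, using Python with pytest and requests.  The staging URL is a hypothetical placeholder.

```python
# A minimal sketch: two baseline security requirements expressed as tests.
# BASE_URL is a hypothetical placeholder for a staging environment.
import requests

BASE_URL = "staging.example.com"

def test_http_redirects_to_https():
    # Baseline: all connections should be over TLS.
    resp = requests.get(f"http://{BASE_URL}/", allow_redirects=False, timeout=10)
    assert resp.status_code in (301, 302, 307, 308)
    assert resp.headers["Location"].startswith("https://")

def test_hsts_header_present():
    # Baseline: clients should be instructed to always use TLS.
    resp = requests.get(f"https://{BASE_URL}/", timeout=10)
    assert "Strict-Transport-Security" in resp.headers
```

Because baseline requirements apply to every story, checks like these can run in CI for every service rather than being re-captured in each unit of work.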

For both story and baseline security requirements, it is important that we identify these up front so that estimation includes security.  When estimates include security, developers have a chance of meeting security expectations.

Generally speaking, we advocate for identifying security requirements at almost every organization we work with.

Design Review and Surface Area Change

Throughout the development process, developers are continually adjusting major components, choosing 3rd party libraries, adding API endpoints and changing which systems interact with each other.  When we can identify these types of changes and trigger a discussion, it allows us to identify the right times to talk about security.

For example, when we add a new organization API endpoint, we want to ask who we expect to call it and what data it should expose.

If we add a logging or metrics framework like NewRelic, we might want to think about what data gets shared with NewRelic.

If we start to build something with Struts, we may want to step back and note that the library has a security history.

Some organizations have existing architecture teams that essentially gate the design and development process with processes to ensure supportability, health checks, etc.  This is a good place to inject design review.  In some cases, however, this is too coarse-grained and we’d rather see weekly or monthly checkpoints on design.

Testing

Some organizations are test-forward.  At those organizations, we advocate for security unit tests.  Security unit tests can be written for all kinds of security conditions:

  • Testing login, registration, forgot password flows.
  • Verifying authorization is implemented.
  • Injection.
  • XSS.
  • Logging / Auditing.

Generally, we recommend security unit/integration tests when organizations are already testing and their developers are capable of adding security tests.  The more elaborate the business logic, and the greater the potential risk of fraud or other abuse through authorization scenarios, the more likely we are to push unit tests.
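
To make that concrete, here is a minimal pytest-style sketch of authorization tests.  The client fixture, routes and test accounts are hypothetical; the point is asserting that access control actually holds.

```python
# A minimal sketch of security unit tests for authorization.
# The `client` fixture, URLs and test accounts are hypothetical.

def test_regular_user_cannot_access_admin_report(client):
    client.login(username="regular_user", password="test-password")
    resp = client.get("/admin/reports/quarterly")
    assert resp.status_code in (401, 403)  # must be denied

def test_user_cannot_read_another_users_order(client):
    client.login(username="alice", password="test-password")
    # Order 42 belongs to bob in the (hypothetical) test fixtures.
    resp = client.get("/orders/42")
    assert resp.status_code in (403, 404)  # denied or not found
```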

On the other hand, many organizations will not be able to do this kind of testing, either because they don’t have a test culture or because their developers don’t take an interest in security.

Checklists

At most clients, I recommend the following checklist for every story:

  • Prevented XSS through library that does output encoding by default
  • Parameterized any SQL
  • Identified sensitive data and encrypted at rest
  • Implemented proper authorization

At one client, we built this custom checklist into TFS issues so that developers could literally check those four boxes as they confirmed that they had handled these scenarios.

Checklists will only be effective in organizations where developers value them and are trained.  That said, they are a low-tech way to build security habits.

Code Review

Code review is something we do in large projects with some clients, on a PR-by-PR basis in GitHub with others, and on our own internal projects.  For organizations that already use pull requests (PRs), this can be a really good place to inject security.  Developers need training and time to do this properly, but in this case their existing processes support a very low friction, incremental security activity.

Static Analysis

Static analysis is useful in some cases.  Its usefulness varies based on the language and scenario.

We typically recommend starting with open tools for languages/frameworks where they are well supported: for example, bandit with Python, brakeman with Ruby on Rails applications, SonarQube with Java apps.  Once you know your teams and processes can work with open tools, consider more advanced commercial tools.
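
As an illustration of building process around an open tool, here is a minimal sketch of a CI gate around bandit.  The src/ path and the fail-on-high-severity-only policy are assumptions you would tune to your own codebase.

```python
# A minimal sketch of gating a CI build on bandit findings.
# Assumes bandit is installed; fails only on HIGH severity so the
# check stays adoptable while teams work down the backlog.
import json
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json"],
    capture_output=True, text=True,
)
findings = json.loads(result.stdout).get("results", [])
high = [f for f in findings if f.get("issue_severity") == "HIGH"]

for f in high:
    print(f"{f['filename']}:{f['line_number']}  {f['issue_text']}")
if high:
    sys.exit(1)  # non-zero exit fails the build
```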

Some gotchas with static analysis include:

  • False positives / Lack of prioritization of results
  • Can take a long time to run, and therefore may not work well as an incremental check
  • Often very language specific
  • Rarely API friendly / Often expects devs to work in their interface

If your security team has a static analysis tool they like, consider integrating it gradually.

CI/CD (Continuous Integration/Continuous Delivery)

Continuous Integration is the process through which we automatically compile (build), test and prepare software as many developers work on it concurrently.  Throughout the day, the build server picks up each commit to continuously integrate changes and rebuild.

Jenkins, Travis, CircleCI and others help us make sure our software can be built portably and is tested regularly.  That helps us identify problems as close as possible to the moment we coded them.

Continuous Delivery takes CI a bit further and puts us in a position where we can deploy anytime – potentially many times a day.  CD is beneficial in that it usually means we can fix issues in production faster than an organization not practicing CD.  It also implies a level of automation that is conducive to additional security automation.  For example, we might check internal service registries and ensure all services are running over TLS as part of a post-deployment check.
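
Here is a minimal sketch of such a post-deployment check using Python’s standard ssl module; the hard-coded service list stands in for a real registry lookup.

```python
# A minimal sketch of a post-deployment TLS check. The SERVICES list
# is a placeholder for services pulled from an internal registry.
import socket
import ssl

SERVICES = ["api.internal.example.com", "billing.internal.example.com"]

def check_tls(host, port=443):
    context = ssl.create_default_context()  # verifies cert chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2"

for host in SERVICES:
    print(host, check_tls(host))  # raises if TLS is absent or the cert is invalid
```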

We always recommend integrating tools into CI; we contributed heavily to OWASP Glue for exactly this type of integration work.  That includes dependency checking, security unit tests and other, more custom automation.  A project where only one workstation can build the software is an anti-pattern.  We also recommend continuous delivery because it means we can fix faster.

Dependency Analysis

Most software is built upon a set of 3rd party libraries.  Whether these are commercial or open source, sometimes there are vulnerabilities in the libraries we use.  In fact, these libraries are particularly attractive targets for attackers, because a vulnerability in one library creates the ability to target every application that uses it.

Thus, as an engineering group, we need to make sure that our dependencies don’t have terrible vulnerabilities in them that we inherit by building on top of them.  We can do this by automating dependency checks.  We like npm audit, retire, bundler-audit and dependency-check for doing this.  We generally recommend using an open tool and building a process around it before adopting a commercial tool.
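
For example, here is a minimal sketch of a build gate on npm audit.  The fail-on-high-or-critical threshold is an assumption you would align with your own triage tiers.

```python
# A minimal sketch of failing a build on vulnerable npm dependencies.
# Parses `npm audit --json`; the severity threshold is illustrative.
import json
import subprocess
import sys

result = subprocess.run(["npm", "audit", "--json"], capture_output=True, text=True)
counts = json.loads(result.stdout)["metadata"]["vulnerabilities"]

print("npm audit severity counts:", counts)
if counts.get("high", 0) or counts.get("critical", 0):
    sys.exit(1)  # block the build until the findings are triaged
```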

We also recommend having an explicit way to triage these items into tiers.

Decommission

An often forgotten way to improve security is to take applications offline.  Sometimes the cost benefit analysis just doesn’t support maintaining an application any more.  Other times the application isn’t used at all anymore.  Surprisingly often, these applications are still available and running – and not being updated.  The easiest way to reduce the attack surface is to just turn off the application.

Conclusion

The real conclusion of all of this is that there are a lot of different ways to take security activities and integrate them into the SDLC.  There is no one right way to do it.  This post presented some of the processes we use to engage and the activities we recommend.

 

The Cost of AppSec

Matt Konda
  Application Security

Application security is a weird field.  On the one hand, most practitioners know that it plays a critically important part in any organization’s security posture.  We see real issues all the time, problems that cause our customers real losses.  On the other hand, as a wise security leader once said:

By the time the AppSec companies get to the table, 90% of the budget has been committed to AV and Firewalls.

Further, standards have a lot of trouble capturing application security.  I was recently diving into the NYDFS cybersecurity regulation, which mandates that companies have application security, and found this broad statement:

Each Covered Entity’s cybersecurity program shall include written procedures, guidelines and standards designed to ensure the use of secure development practices for in-house developed applications utilized by the Covered Entity, and procedures for evaluating, assessing or testing the security of externally developed applications utilized by the Covered Entity within the context of the Covered Entity’s technology environment.

I mean, can you say vague and open-ended?  How do we know what to do based on that?  How do we go to our stakeholders and tell them we need to do … something?

To answer this question, I looked to the Verizon DBIR and various standards like the NIST CSF, and I realized that many of the ways we think about those categories don’t fit neatly into one area or another.  Consider a few areas that get called out: MFA, stolen credentials, insider threats, malware.  Each and every one of these has an application security angle.  As a software developer, I touch networks, store data, think about identity and generally cross all of these concerns.

So how do we step back and think about the value of application security?  Maybe we start by thinking about our risk models.

Do we have fraud paths in our applications?  What can we gain by stopping fraud?

Are we subject to standards?  Do we have partners that gate software purchases based on our security posture?  In that case, security is a part of our value for those partners.

In practice, models like the OWASP ASVS, OpenSAMM and others help to give us structure to think about application security.  They don’t necessarily help us know how to think about costs or appropriate expenditure…

So I guess it’s hard to answer this question.  We can point to “annualized loss expectancy” and other models … we can look at a company like Equifax and see the significant changes to that company’s stock price.  Generally, most models I’ve seen are speculative at best.

Through many projects building appsec programs, we have developed a practice that we can use to:

  • Size a program based on stakeholders, standards and goals
  • Inventory applications
  • Project costs and timelines

But the assumptions we make and the framework we use to do that are reactive and based upon stakeholder feedback.

How can we learn to build a better model that provides better proactive guidance for spending on application security?

 

 

Product Security in Github Issues

Matt Konda
  Application Security Engineering Startup

As I’ve mentioned here in the past, we started working on a product in 2018, and we are getting really close to launching it more openly, where people other than our initial (friendly) alpha customers can use it.  The reaction has been awesome and we’re encouraged.

As I looked at our project, I realized we needed to step back and really confirm we had our basics covered.  Now mind you, we’re all security-oriented developers, so I’m not saying these things weren’t done by our team – but I will say that even I, as a leader and someone who gets on a soapbox a lot about pushing left, had not explicitly emphasized or required these things as first-class features yet.

So I thought I’d share a bit of detail about our thinking here.  We basically wanted to take the simplest possible approach that would work.  So I brainstormed some general items we always care about and added GitHub issues for them.

Note that this is not an exhaustive list.  We have more detailed requirements, about authorization for example, in each story.  We also have items to check for things like SQL and code injection.  But I wanted to start simple for this discussion.

  • The Twitter password exposure notification and the emails I received as I dutifully reset my password inspired #36 here.  Basically, if we notify people properly we can let them help us identify issues if they happen.
  • Implementing two-factor authentication is pretty basic now and should be required for most if not all public apps.  We will think about social logins as well, but we’re not sure if that is relevant yet.
  • Knowing about both failed password attempts and successful attempts after a lot of failed attempts is interesting and will allow us to see brute forcing.
  • Setting security headers is easy and should be automatic.  Let’s make sure we do it (see the sketch after this list).
  • Force SSL is self explanatory.
  • Verifying that we have secure and httpOnly set for session cookies is easy once we think of it.
  • We’re using Vue.js for the front end.  Let’s stop and make sure it is handling malicious script input the way we think, and ensure that our APIs are encoding data as needed.
  • If we’re storing sensitive information, let’s talk through and document key management and algorithms as a team.
  • Let’s build a threat model that we can use on an ongoing basis for each different app component.  We’re thinking about threat dragon but I generally just use a mind mapping tool.  I think I’ll write a whole post about these.
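
To make a few of the items above concrete (security headers, forced SSL behaviors, and session cookie flags), here is an illustrative sketch in Flask.  This isn’t necessarily our stack or the exact header set any given app needs, just the shape of the idea.

```python
# An illustrative Flask sketch: session cookie flags plus a handful of
# common security headers. Values shown are examples, not a standard.
from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,    # session cookie only sent over TLS
    SESSION_COOKIE_HTTPONLY=True,  # session cookie not readable from JS
)

@app.after_request
def set_security_headers(response):
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    return response
```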

Anyway, if you’re working on any kind of web application, most of these apply in one form or another.  Reach out to me if you have questions.  Happy building!!!

Learn From AWS Security Expert, Aaron Bedra

Keely Caldwell
  Uncategorized

Our Chief Scientist, Aaron Bedra, will be on the road over the next month speaking at a few conferences.  Most of his talks will revolve around AWS security and security for developers.

Aaron, a senior security developer, has worked as a CSO and CTO and is the co-author of Programming Clojure, 2nd and 3rd Editions.

You can catch Aaron at any of the conferences below:

Sunday, 4/22
Topic: AWS Security Essentials and Adaptive Threat Monitoring
https://www.nofluffjuststuff.com/conference/reston/2018/04/speakers/aaron_bedra

Tuesday, 4/24
Topic: AWS Security Essentials
https://gotochgo.com/2018/workshops/78

Thursday, 4/26
Topic: Security and Trust in a Microservice World
https://gotochgo.com/2018/sessions/453

Monday, 4/30 (Keynote Speaker)
Topic: Security Skills for Developers
https://glsec.softwaregr.org/2018-program-schedule/

Wednesday, 5/9 (Keynote Speaker)
Topic: The Cost of Complexity
https://www.charlotteissa.org/

Monday, 5/14
Topic: AWS Security
https://www.infosecsummit.com/ehome/2018cbusinfosec/agenda/

Friday, 5/18
Topic: AWS: Critical Security Solutions for Developers
https://www.safaribooksonline.com/live-training/courses/aws-critical-security-solutions-for-developers/0636920177661/

Announcing Jemurai Security Automation

Matt Konda
  Application Security Engineering

It’s been an exhilarating few weeks.  I had to remind myself to take a breath and blog today.

What’s new-ish is that we have a core team working on a new platform for security automation.  It extends the work we’ve done with Glue, along with ideas taken from a broad sample of client consulting engagements.  Across all of those engagements, it just never felt like any solution was complete or as intuitive as it should have been.  We were always bending rules to live in Jenkins, or writing custom scripts to check things that maybe aren’t that unique.

I’ve tried to always remind my team: get meta.  What’s the problem behind the problem?  That vuln isn’t the problem.  It’s the SDLC that allows the vuln to be introduced and not reviewed or noticed that is the problem.

Frankly, as a team we started to think that the tooling we’re using for DevOps and security automation is too opinionated.  None of it is there, at its root, to be the backbone of a flexible solution.  Usually the existing tooling is pushing you toward a particular vendor’s security tool.

What’s missing is a rich inventory, accessible through an API, combined with a flexible way to work with source code, containers, live systems and clouds in an abstracted way, so the pieces can be assembled into a better security automation solution.  The opportunity presented itself to start working on it, and as a team we couldn’t be more excited to be building the next generation of tooling.  We’ve got prototypes flying that look at this problem in a whole new way.

If you’re interested in hearing more, are struggling with security automation, or have ideas you want to share to help us, I would personally love to hear from you.  We’re also actively looking for beta testers.  Catch me at matt at the company domain dot com.

Impersonation Failures

Matt Konda
  Application Security Engineering

Several times in the last few weeks we have looked at applications that have significant issues with “impersonation” features.  What is impersonation?

an act of pretending to be another person for the purpose of entertainment or fraud.  (From Google)

In practice, impersonation is a feature that is commonly implemented to allow Customer Support to “see what the user sees”.  Typically, it involves the support user effectively logging in as the actual user.  As such, as a feature it can be invaluable.

It can also go quite wrong. 

We reviewed one application that allowed any user to impersonate any other user.  This is an issue of authorization: only certain users should be able to perform impersonation, and rarely do we want regular users to be able to impersonate internal users.

It is almost a surprise if we see an application that has proper auditing around impersonation and actions during the impersonated session.

Things that should be auditable include:

  • When the impersonation started and who initiated it.
  • What actions were taken on behalf of the user by the impersonating user.
  • When the impersonation stopped.

Extending auditing just a bit: it is generally advisable that logging facilities record both the acting user and the logged-in user.  Imagine that Joe wants to impersonate Matt’s account to help troubleshoot an issue.  All of the actions he takes should show changes to Matt’s data, but there should also be a record that Joe is the acting / impersonating user.
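
Here is a minimal sketch of what that can look like; the event names and logger wiring are illustrative assumptions.

```python
# A minimal sketch of impersonation audit logging: every record carries
# both the acting (impersonating) user and the effective user.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")

def audit_event(event, acting_user, effective_user, **details):
    audit.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event,                    # e.g. impersonation.start / .stop
        "acting_user": acting_user,        # Joe, the support engineer
        "effective_user": effective_user,  # Matt, whose account is in use
        "details": details,
    }))

# During an impersonation session:
audit_event("impersonation.start", acting_user="joe", effective_user="matt")
audit_event("order.refund", acting_user="joe", effective_user="matt", order_id=42)
audit_event("impersonation.stop", acting_user="joe", effective_user="matt")
```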

This is also a good reminder that we often don’t need to show sensitive data, even to users themselves.  If we avoid showing specific sensitive data without an extra event (a click, say), we can ensure that during impersonation sessions support personnel don’t see data they don’t need to.

Another failure mode for impersonation is to have a common account with a “bypass” password, kind of like a shared super user for impersonation.  We’ve recently seen systems that allow users to get a bypass password/session and then do whatever they want, without any audit trail of what those users were looking at.  Using a shared user for this means that all of the audit trail converges on a service account that we can no longer tie to a particular person.  It is always a good exercise to walk through negative abuse case scenarios and ensure that the information needed to make sense of them is being collected.

Let’s think about the risk model for a moment:

  • Malicious internal user wants to collect information from application users
  • Non-malicious internal user helps lots of people and ends up with their data on their system – then they become a victim of a phishing or other type of attack
  • Malicious external user can elevate privileges by assuming another identity

Since impersonation is a way to bypass authentication and authorization, it should receive significant scrutiny.  If you have impersonation code, it is worth reviewing it to ensure that only the users you intend can impersonate, and that when they do, their actions are tracked.

Dependency Management for Developers

Matt Konda
  Application Security Engineering

I recently got asked about best practices for dependency management by an old colleague.  In response, I wrote the following which I realized might be useful to a broader audience.


So … things like GitHub’s new vulnerable-dependency notifications have been automated for a long time via platforms like CodeClimate, or via Jenkins internally at dev shops, using tools such as:

  • Retire.js (JavaScript)
  • Bundler Audit (Ruby)
  • Safety Check (Python)
  • Dependency Check (Java)

Generally, we write these checks into a build so that the build fails if there are vulnerabilities in libraries.  We have contributed to an open source project called Glue that makes it easy to run tools such as these and then push the results into JIRA or a CSV for easier integration with normal workflows.

Note that there are also commercial tools that do this, ranging from Sonatype to Black Duck to Veracode’s Software Composition Analysis.  I generally recommend starting with an open source tool to prove you can run the process around the tool, and then improving the tool.

At a higher level, I pretty much always recommend that companies adopt a tiered response system such as the following:

  1. Critical – This needs to get fixed ASAP, interrupts current development and represents a new branch of whatever is in Prod.  Typical target turnaround is < 24 hours.  Examples of vulnerabilities in this category might be remote code execution in a library – especially if it is weaponized.
  2. Flow – This should get fixed in the next Sprint or a near term unit of work, within the flow of normal development.  These types of issues might need to get addressed within a week or two.  Typical examples are XSS (in non major sites, major sites treat XSS like an emergency).
  3. Hygiene – These are things that really aren’t severe issues, but if we don’t step back and handle them, bad things could happen.  If we don’t update libraries when minor updates come out, we get behind.  The problem with being far behind (e.g. a 3-year-old jQuery) is that if there is an issue and the best fix is to update to current, the actual remediation could involve API changes that require substantial development work.  So philosophically, I think of this as keeping ourselves in a position where we could realistically meet our Critical SLA (say 24 hours) on updating to address any given framework bug.

An important part of this is figuring out how a newly identified issue gets handled in the context of the tiering system.  In other words, when a new issue arises, we want to know who (a team?) determines which tier it gets handled as.  We definitely do not want to be defining the process and figuring this all out while an active major issue looms in the background.
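
One way to avoid that is to write the triage rules down, even as code, before you need them.  The thresholds below are illustrative assumptions, not a standard.

```python
# A minimal sketch of triaging a dependency vulnerability into the
# three tiers above. Thresholds are illustrative; agree on your own.
def triage(cvss_score, weaponized, direct_dependency):
    if cvss_score >= 9.0 or weaponized:
        return "critical"  # interrupt work, branch from prod, < 24 hours
    if cvss_score >= 6.0 and direct_dependency:
        return "flow"      # next sprint, within normal development flow
    return "hygiene"       # routine updates, keep the upgrade path short

print(triage(9.8, weaponized=True, direct_dependency=True))    # critical
print(triage(6.5, weaponized=False, direct_dependency=True))   # flow
print(triage(3.1, weaponized=False, direct_dependency=False))  # hygiene
```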

Of course, with all of this and particularly the hygiene part, we need to be pragmatic and have a way to negotiate with dev teams.  We can weigh the cost of major updates to a system against the cost of keeping it running.  Planned retirement of applications can be the right answer for reducing overall risk.

Ultimately, we want to translate the risk we see with this into terms stakeholders can understand so that they don’t balk at the hygiene work and they are prepared when we need to drop current work to accommodate critical updates.

Using the OWASP Top 10 Properly

Matt Konda
  Application Security OpenSource

I have gone to great lengths to strictly separate my OWASP activities from my Jemurai activities in an effort to honor the open and non-commercial aspects of OWASP to which I have committed so much volunteer time and energy.

Today I want to cross the streams for a very specific reason, not to promote Jemurai but to stop and think about how some of OWASP’s tools are used in the industry and hopefully prevent damage from misuse.

I want to address perspectives reflected in the following statements, which I’ve heard from a few folks:

I want a tool that is OWASP Compliant

And:

I need a tool to test for the OWASP Top 10

And:

I want an OWASP Report

Or alternatively:

Our tool tests for the OWASP Top 10

OWASP Is Generally Awesome

First of all, let’s ground ourselves.  OWASP provides terrific open resources for application security, ranging from the Top 10 to ZAP to ASVS to OpenSAMM to Juice Shop to Cheat Sheets and many more.  These resources are invaluable to all sorts of folks and, when used as intended, extremely effective.

The Top 10 Is Not Intended To Be Automated

Let me be very clear: there is no tool that can find the OWASP Top 10.  The following are items among the Top 10 that can rarely, if ever, be identified by tools.

  • #3:  Sensitive Data Exposure
    • Tools can only find some predefined categories of sensitive data exposure – in my experience, a small subset.  One reason is that sensitive data is contextual to a company and system.  Another is that exposure can mean anything from internal employees seeing data to data not being encrypted.
  • #5:  Broken Access Control
    • Tools generally can’t find this at all.  This is because authorization is custom to a business domain.  A tool can’t know which users are supposed to have access to which things, so it has nothing to check against.
  • #6:  Security Misconfiguration
    • Tools can find certain known misconfigurations that are always wrong (e.g. old SSL), but things like which subnets should talk to other subnets aren’t going to be identified by Burp or ZAP.  We build custom tools that can find them, but they are just that: custom.
  • #10:  Insufficient Logging & Monitoring
    • What does this even mean?  Our team has been delivering custom “security signal” work for a while, but this isn’t binary – it isn’t something you simply have or don’t.  No company I have ever seen has comprehensive coverage.  There’s no tool you can plug in to immediately “get it”.

Even among the things that can be identified, there is no one tool that can find all of them.

Stepping back, it’s actually a good thing that the Top 10 isn’t easily identified by a tool.  That reflects the thought and expert human opinion that went into it.  It wasn’t just a bunch of canned stuff that got put together.

What The Top 10 is Great For

The Top 10 is a great resource to help frame a conversation among developers, to serve as the basis for training content, and to be thought-provoking for people actively thinking about the issues at hand.  More than 10 items would be too many.  These approximate the best ideas we currently have.  Of course there is always room for improvement, but the Top 10 is a great resource when used well.

Please just understand them as a guide and not a strict compliance standard or something a tool can identify.

References

https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

The Importance of Your Inventory

Matt Konda
  Application Security

We work with companies building security programs a lot.  Across all aspects of the program, the word inventory is a term that seems to have a surprisingly high level of general awareness but a surprisingly low level of common definition.

in·ven·to·ry  ˈinvənˌtôrē/
noun  1.  a complete list of items such as property, goods in stock, or the contents of a building.  (Google:  https://www.google.com/search?q=define+inventory)
synonyms: list, listing, catalog, record, register, checklist, log, archive

In particular, it is always interesting to ask:

  1. Are all apps from all business units represented?
  2. Is the inventory updated automatically?

Unless the answer to 2 is “Yes”, the answer to 1 is almost always “No”.  And if the answer to “Are all apps from all business units represented?” is no, then how can the inventory serve as the cornerstone of your security program?  I’m curious what else you might be using as a cornerstone instead…

Let’s step back – what’s the point of the inventory?  The most important reason to have an inventory might be to make sure that we know what we need to do and what we’ve done.  If you have a list of all of your applications that is updated (ideally automatically) and tracks security activities against those, that is a great starting point.

Many people track this in Excel or some form of spreadsheet.  That isn’t inherently bad, unless the spreadsheet is scattered across various teams in different states.  Most security vendors want you to use their tool for your inventory.  The reason is that if they own the main place you go for your TODO list, you’ll be coming back to them a lot.  In what I’ve seen, all those inventories and dashboards leave a lot to be desired.  You might actually be better off in Excel, where you can keep your vendor blinders off.

Let’s focus a bit on getting an inventory.  We’ve done it by hand, by automatically pulling it from GitHub, from a service metadata tier, or from an artifact store – and in other ways.  In larger companies, it is often a combination of the above.  We would generally advise that the inventory be seen as a living dataset that can be referenced throughout purchasing decisions.
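
As one example, here is a minimal sketch of pulling a starting inventory from the GitHub REST API.  The org name and token are placeholders, and a real inventory would merge this with other sources.

```python
# A minimal sketch of building an application inventory from GitHub.
# ORG is a placeholder; the token comes from the environment.
import os
import requests

ORG = "your-org"
headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

repos = []
url = f"https://api.github.com/orgs/{ORG}/repos?per_page=100"
while url:
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    repos.extend(resp.json())
    url = resp.links.get("next", {}).get("url")  # follow pagination

for r in repos:
    # name, primary language and last push date make a decent seed inventory
    print(r["name"], r["language"], r["pushed_at"])
```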

In future posts we’ll talk about the conversations we need to have to triage an inventory and how we can turn an inventory into a budget and a roadmap.