Dependency management has become a critical part of ensuring the security of your applications. We’ve written about it many times in the past and even highlighted some horror stories from the Node community, where it’s not uncommon for a single project to have hundreds or thousands of dependencies developed and maintained by a variety of sources. We as a community have adopted tools (such as central repositories of security advisories) and processes like semantic versioning to help reduce the burden of maintaining an application with large numbers of external dependencies. In this post we describe a situation in which what should have been a harmless and routine update of external dependencies resulted in a horizontal privilege escalation.

Quick Aside: Semantic Versioning

When developing applications with large numbers of external dependencies, there is a strong need to reduce the effort required to stay up to date without breaking your application. Semantic versioning endeavors to remove some of the pain of keeping your dependencies current with the latest bug and security fixes (and possibly new features). The basic idea is to define the types of changes that are permitted in a library or application release given the standard version designation of MAJOR.MINOR.PATCH. Here is the 10,000-foot view (full details at semver.org):

  1. When breaking changes are introduced, increment the MAJOR version.
  2. When new functionality is added in a backwards-compatible manner, increment the MINOR version.
  3. When backwards-compatible bug fixes are made, increment the PATCH version.

Dependency Requirements Based on SemVer

When a library follows semver, the assumption is that you can easily and safely pull in bug fixes and new features without fear of changes to the API or existing functionality. This allows you to specify the required version of a dependency as a range (e.g. the common practice of caret and tilde version specifiers in package.json). Defining your dependency requirements this way makes updating to new bug-fix or feature releases as easy as running yarn upgrade. Great, right?
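For example, a dependencies block using these specifiers might look like the following (package names and version numbers are illustrative):

```json
{
  "dependencies": {
    "@nestjsx/crud": "^4.2.0",
    "rxjs": "~7.8.1"
  }
}
```

Here `^4.2.0` accepts any release `>=4.2.0 <5.0.0` (new minor and patch versions), while `~7.8.1` accepts only patch releases `>=7.8.1 <7.9.0`.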

The Nightmare Scenario

Think about how important dependencies are to your application. Are you using them for encryption? Authentication? Authorization? Probably all of the above. Here is a scenario: consider a TypeScript API written using NestJS and another handy library, @nestjsx/crud, which reduces the boilerplate for CRUD controllers. An example CRUD method that retrieves a Widget based on a user’s allowed permissions might look something like the following.

  @Override()
  async getOne(@ParsedRequest() req: CrudRequest, @User() user: User) {
    // Enforce access control: constrain the query to rows matching the
    // user's own remoteId (e.g. a user who may only access remoteId == 10)
    req.parsed.filter.push({
      field: 'remoteId',
      operator: '$eq',
      value: user.remoteId,
    });

    return this.base.getOneBase(req);
  }

This adds an additional constraint to the retrieved data based on the user’s remoteId. Everything works as expected. Great!
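To make the intent concrete, here is a simplified model (not the library’s actual implementation) of how request filters combine: every filter is ANDed together, so the filter pushed by the controller always constrains the result set, no matter what the client supplies in the query string.

```typescript
// Simplified model of AND-combined request filters; only $eq is modeled.
interface Filter {
  field: string;
  operator: '$eq';
  value: unknown;
}

type Row = Record<string, unknown>;

// A row is visible only if it satisfies EVERY filter, including the
// access-control filter pushed by the controller.
function applyFilters(rows: Row[], filters: Filter[]): Row[] {
  return rows.filter((row) => filters.every((f) => row[f.field] === f.value));
}

const widgets: Row[] = [
  { id: 1, remoteId: 10 },
  { id: 2, remoteId: 20 },
  { id: 3, remoteId: 30 },
  { id: 4, remoteId: 10 },
];

// The client asks for remoteId 20, but the controller enforced remoteId == 10:
const clientFilter: Filter = { field: 'remoteId', operator: '$eq', value: 20 };
const enforcedFilter: Filter = { field: 'remoteId', operator: '$eq', value: 10 };

const visible = applyFilters(widgets, [clientFilter, enforcedFilter]);
console.log(visible); // empty: the user cannot widen access beyond remoteId 10
```

This conjunction semantics is exactly what the upgrade silently broke: when the enforced filter stops being applied, the client’s input alone decides what comes back.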

The Issue and Investigation

Your app is working great and all is well. Then comes time to update dependencies, triggered either by routine procedure or an npm/yarn audit finding. You run yarn upgrade to pull in the latest patch and minor releases of all your requirements, run the test suite, spot-check with manual testing, and deploy.

The next week, while assisting customers in the production application, a colleague indicates that something doesn’t look quite right. A review of the code in question shows no changes, but the list endpoint is definitely not correctly filtering results based on the user’s access. Running git bisect reveals that the issue was introduced with the dependency update, which included an upgrade of @nestjsx/crud from 4.2.0 to 4.4.1.

This was only a minor version increase, which means there should be only bug fixes and new features, not breaking changes. Checking the release notes reveals no indication of a behavioral change. Back to our good friend git bisect, which, when paired with yarn link, lets us bisect the library itself while still using our application to decide whether the commit under test is good or bad. We eventually arrive at the commit at the root of the issue: a change to when request filters are converted to SQL conditions.

Testing for the Issue

After tracking down the root cause, the question that immediately springs to mind is: how can we build a test case for this? We have talked and blogged about security unit testing before. Would a unit test be sufficient here? A unit test for this method would likely mock the base method or the underlying service and check that the request contained the necessary filter. However, such a test would still pass even after the change in behavior. Short of a large amount of work mocking or replacing other libraries (e.g. TypeORM), the best way to test the expected behavior of this controller action is with an integration or end-to-end test. Below is an example end-to-end test which uses an in-memory SQLite database and the simple fixtures needed for the module under test.

import { INestApplication } from '@nestjs/common';
import { Test, TestingModule } from '@nestjs/testing';
import * as request from 'supertest';
import Widget from '../src/crud-filter-poc/widget.entity';
import { Repository } from 'typeorm';
import { TypeOrmModule } from '@nestjs/typeorm';
import { CrudFilterPocModule } from '../src/crud-filter-poc/crud-filter-poc.module';

describe('CrudFilterPocController (e2e)', () => {
  let app: INestApplication;
  const fixtures: Widget[] = [
    { id: 1, name: 'Entity 1', remoteId: 10 },
    { id: 2, name: 'Entity 2', remoteId: 20 },
    { id: 3, name: 'Entity 3', remoteId: 30 },
    { id: 4, name: 'Entity 4', remoteId: 10 },
  ];

  beforeAll(async () => {
    const moduleFixture: TestingModule = await Test.createTestingModule({
      imports: [
        TypeOrmModule.forRoot({
          type: 'sqlite',
          database: ':memory:',
          synchronize: true,
          dropSchema: true,
          entities: [
            './src/**/*.entity{.ts,.js}',
          ],
          logger: 'advanced-console',
          logging: 'all',
        }),
        CrudFilterPocModule,
      ],
    }).compile();

    app = moduleFixture.createNestApplication();
    await app.init();

    const repo: Repository<Widget> = app.get('WidgetRepository');
    await repo.insert(fixtures);
  });

  afterAll(async () => {
    await app.close();
  });

  it('/crud (GET) returns only widgets visible to user 1', async () => {
    return request(app.getHttpServer())
      .get('/crud')
      .auth('user1', 'password')
      .expect(200)
      .expect([ fixtures[0], fixtures[3] ]);
  });

  it('/crud/3 (GET) 404 for user 1', async () => {
    return request(app.getHttpServer())
      .get('/crud/3')
      .auth('user1', 'password')
      .expect(404);
  });

  it('/crud?filter=remoteId||$eq||10 returns only permitted widgets', async () => {
    return request(app.getHttpServer())
      .get('/crud?filter=remoteId||$eq||10')
      .auth('user1', 'password')
      .expect(200)
      .expect([ fixtures[0], fixtures[3] ]);
  });
});

Conclusion

This experience is a cautionary tale about what can go wrong when trying to do the right thing by staying up to date. Here are some takeaways:

  1. *Software is tough!* But you already knew that.
  2. No amount of care reviewing dependencies will catch all potential issues. Even if you have the resources to review every single commit to every dependency, a breaking change might not immediately be noticeable.
  3. Testing suites should always include security tests. We’ve written about this before and plan a follow-up to this post that explains and demonstrates which types of tests are best suited to each category of security issue.

Look for more blog entries in the near future discussing adjacent topics.

Joe Kerby

Joe is a developer, security software engineer, and pen tester.
