Today I read an interesting story about a team getting disqualified from an IndyCar race they had won because they had used a software enhancement in a way that went against the rules of the race. I had never really thought about the role of software in car racing, so this piqued my interest. When I got into the meat of the issue, I realized this story applies to everyone doing appsec anywhere! Pretty cool.
IndyCars apparently have "Push to Pass" technology.
Push-to-pass is a mechanism on a race car which provides the driver with the ability to increase the car's power for short periods, usually via a button on the steering wheel. The system is designed to make overtaking easier, and hence make the sport more exciting to watch.
Wikipedia - Push To Pass
Push to pass is the blue button in the top left. (Image: https://commons.wikimedia.org/wiki/File:2012_Italian_GP_-_Lotus_wheel.jpg)
The technology is supposed to be available to drivers only during certain periods of the race, particularly when the cars are going at full speed. This is controlled and enforced by software. The issue in this case was that the software in several Penske cars had been modified for another event (where the modification was supposedly allowed). The change was never removed, so it was available (and used, though supposedly without impact) at the IndyCar race in St. Petersburg, Florida. The drivers were penalized and the team was fined.
From the moment I first heard the story, I thought this was a really interesting example of what happens when software and configurations aren't actively managed and maintained. All sorts of organizations have this problem when they don't standardize or manage software properly. Almost every week I hear from companies that are kicking the can down the road on closing some hole we found in their systems. I'm sure some of these get lost and are never fixed. Every year, it isn't uncommon for us to find things that should have been fixed long ago!
The other perspective is that the race authority didn't have a standard way to control the software in the car. They only detected the offending software configuration while preparing for a later race, and had to go back and disqualify the winner several weeks after the fact! If software can win or lose races, you would think both the teams and the sanctioning body should have stronger controls over what the software can do!
“Beginning with this week’s race at Barber Motorsports Park, new technical inspection procedures will be in place to deter this violation.”
IndyCar President Jay Frye - via AP News
Many of us have had the experience of playing games against players using cracked software or mods that made their clients respond faster! If the teams realize they can gain an advantage by making the software better, I think you have to assume they might push the limits of the rules. I hope IndyCar is putting the right types of resources into analyzing the limits of software.
One interesting element of this is that, according to one of the drivers involved, IndyCar has the data about how these controls are used. So on the positive side, it should be possible for them to audit the use of the controls that gave an unfair advantage to several drivers. This is a good reminder that auditability and observability are key features of software systems.
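To make that concrete, here is a minimal sketch (in Python, with field names and units I made up for illustration, not IndyCar's actual telemetry format) of the kind of structured event logging that makes this sort of after-the-fact audit possible:

```python
import json
import time

def log_p2p_event(log_file, car_number, lap, speed_mph, duration_s, activated):
    """Append a structured push-to-pass event so usage can be audited after the race.
    Field names and units are illustrative placeholders, not a real telemetry schema."""
    event = {
        "timestamp": time.time(),
        "car": car_number,
        "lap": lap,
        "speed_mph": speed_mph,
        "boost_duration_s": duration_s,
        "activated": activated,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(event) + "\n")

# Example: record one activation on lap 12 at 182 mph
log_p2p_event("p2p_audit.jsonl", car_number=2, lap=12, speed_mph=182.0,
              duration_s=4.5, activated=True)
```

The specific fields matter less than the habit: if every activation is written somewhere append-only that the sanctioning body can read, disputes like this one become a query instead of a guessing game.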
OK, this is all interesting, but what if I wanted to advise IndyCar about how to handle the security analysis of these components? That would be really hard and really interesting.
Step 1 - I think there have to be clearly defined rules. This is a bit like how I generally think of business security requirements. What should the system do and not do? In a business system, we might not want to allow a transaction if the balance isn't high enough. In a race car, there are probably all sorts of controls I don't even know about. For example, there is this "Push to Pass" function. What it does and when it can be used is a key part of the security requirements. In this case, the "Push to Pass" function should only work at full speed, which might be measured in MPH. It may be that "Push to Pass" can only provide a certain amount of power, or only for a certain amount of time. It probably makes sense to look at all the buttons and, for each one, go through a mini abuse-case exercise to think about how it could be misused by a driver or team.
The artifact for Step 1, establishing requirements, would be documentation that clearly explains the rules.
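To illustrate, here is a rough sketch of how those written rules might be expressed as a single, checkable policy function. Every threshold, field name, and rule below is a placeholder I invented; the real limits would come from the documented rules:

```python
from dataclasses import dataclass

# All thresholds below are made-up placeholders; the real limits would come
# from the written rules produced in Step 1.
MIN_SPEED_MPH = 150        # hypothetical: only allowed "at full speed"
MAX_BOOST_SECONDS = 200    # hypothetical: total boost budget per race
RESTART_LOCKOUT = True     # hypothetical: disabled on starts/restarts

@dataclass
class CarState:
    speed_mph: float
    boost_used_s: float
    on_start_or_restart: bool

def push_to_pass_allowed(state: CarState) -> bool:
    """Return True only when every written rule permits activation."""
    if RESTART_LOCKOUT and state.on_start_or_restart:
        return False
    if state.speed_mph < MIN_SPEED_MPH:
        return False
    if state.boost_used_s >= MAX_BOOST_SECONDS:
        return False
    return True

# A press mid-straight at 185 mph with budget left: allowed
print(push_to_pass_allowed(CarState(185.0, 120.0, False)))   # True
# The same press during a restart: denied
print(push_to_pass_allowed(CarState(185.0, 120.0, True)))    # False
```

The point is less the specific checks and more that the rules live in one reviewable place instead of being scattered through the control code.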
Step 2 - There has to be a way to isolate and evaluate the software. This might look like a requirement to actually share the source code, plus a way to verify that the source code is the same as what runs in the car (e.g. a reproducible build process and a signature that can be compared to the one running live). In the likely event that teams don't want to share source code, this might look like a way to grab the software and run it through some tests. I'm imagining pulling the software off a flash drive and running it through a simulator on a laptop. The simulator would generate inputs just like the car sees in the real world, maybe even recordings of those inputs from every race ever run, mix in additional fictional inputs, and then evaluate how the software handles them, flagging anything it shouldn't have done, like triggering the "Push to Pass" function at a time when it shouldn't be legal.
The artifacts for Step 2 might be a test suite that runs the software through its paces, feeds it fake inputs, and detects whether it follows the rules from Step 1 (a small sketch follows below). The test battery might include hundreds or thousands of specific tests. Each control, and a variety of combinations, should be tested for any conceivable scenario where a driver could gain an advantage.
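As a sketch of what one of those tests could look like, here is a small replay-style check in Python. The CSV layout, the field names, and the single speed rule are all assumptions for illustration, not a real IndyCar data format:

```python
import csv
from dataclasses import dataclass

@dataclass
class Sample:
    speed_mph: float
    on_start_or_restart: bool
    boost_enabled: bool   # what the software under test actually did

MIN_SPEED_MPH = 150       # hypothetical threshold from the Step 1 rules

def is_legal(sample: Sample) -> bool:
    """The written rule re-expressed as a check on each recorded sample."""
    if sample.on_start_or_restart:
        return False
    return sample.speed_mph >= MIN_SPEED_MPH

def find_violations(telemetry_csv: str) -> list[Sample]:
    """Return every recorded sample where boost was on but the rules said no."""
    violations = []
    with open(telemetry_csv) as f:
        for row in csv.DictReader(f):
            s = Sample(
                speed_mph=float(row["speed_mph"]),
                on_start_or_restart=row["on_start_or_restart"] == "1",
                boost_enabled=row["boost_enabled"] == "1",
            )
            if s.boost_enabled and not is_legal(s):
                violations.append(s)
    return violations

# Usage: fail technical inspection if the recording contains illegal activations.
# assert not find_violations("st_pete_recording.csv")
```

In practice you would want a checker like this built and maintained by the sanctioning body, independent of any team's code, so it can be run the same way against every car.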
I can't say I was looking to IndyCar stories for examples to learn about application security, but I will say I was thrilled to find one!
The core references for this post are the following: