Here’s an interesting, slightly different spin on the otherwise tired debate over whether “Open Source” or “Closed Source” is more secure!
The topic is inspired by a conversation with a client that uses a whole slew of old open source libraries. They know they need to update those libraries, but it’s very difficult because the libraries ship as part of a distribution of a commercial product they are paying for. So, they buy XYZ product and pay good money for it. XYZ product brings in all of these problematic dependencies. The customer can’t update the libraries because doing so may break the commercial product, and because the commercial product is large and not open source, they can’t see and edit the code to fix issues that arise. Ultimately, it seems like that’s the vendor’s responsibility. But that doesn’t mean the vendor sees it that way.
The obvious avenue for recourse is to tie payment to keeping the libraries up to date. That isn’t standard contract language … yet.
Another way to hedge in this situation is, of course, to avoid commercial platforms that don’t have a strong track record of updating their dependencies. It would be interesting to see a scorecard or standardized assessment method for tracking which vendors and which products do well at keeping their dependencies current. It seems like it might be relatively easy to ascertain …
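As a rough illustration of how easy such a scorecard could be, here is a minimal sketch in Python. It assumes you already have, for each dependency a vendor bundles, the release date of the shipped version and of the latest upstream version (e.g. pulled from an SBOM plus a package registry); the data below is entirely hypothetical.

```python
from datetime import date

# Hypothetical data: for each dependency bundled in a vendor's product,
# the release date of the shipped version and of the latest upstream version.
BUNDLED = {
    "libfoo": {"shipped": date(2019, 3, 1), "latest": date(2024, 6, 1)},
    "libbar": {"shipped": date(2024, 6, 1), "latest": date(2024, 6, 1)},
    "libbaz": {"shipped": date(2021, 1, 10), "latest": date(2023, 11, 2)},
}

def staleness_score(deps):
    """Return (fraction of deps outdated, average lag in days)."""
    # Lag = how far the shipped release date trails the latest release date.
    lags = [(d["latest"] - d["shipped"]).days for d in deps.values()]
    frac_outdated = sum(1 for lag in lags if lag > 0) / len(lags)
    avg_lag = sum(lags) / len(lags)
    return frac_outdated, avg_lag

frac, avg = staleness_score(BUNDLED)
print(f"{frac:.0%} of dependencies outdated; average lag {avg:.0f} days")
# → 67% of dependencies outdated; average lag 982 days
```

Even a crude metric like this, published per product, would let buyers compare vendors before they lock in.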
In this case, we have an organization that went all in on a commercial platform and even built its own code on top of it. Now they wonder if they should have built from a lower-level base so that they wouldn’t be locked into a platform they really can’t update. Interesting design decision, right!?