Has anyone established any acceptable policies around 3rd party suppliers and patch management? For example, let’s say a 3rd party, “Systems Ltd”, develops a payroll application for you called “payapp” that runs on the Red Hat Linux platform, is driven by an Oracle database, and uses Apache as its web server. Your environment is hosted by another 3rd party, let’s call them “a support”, who are contractually responsible for patch management. Typically, though, that means Windows-related patching via WSUS and doesn’t extend to Apache, Linux or Oracle unless specified in the contract.
So we run a vulnerability scanner like Nessus or whatever, and it flags numerous missing security patches on the web and database servers: missing patches for everything, Red Hat, the Oracle database and Apache. And then it comes down to who is responsible.
Typically we get a response from the application developers saying they only support the application on a certain “patch set”. I take that to mean that if “a support” start patching Oracle to cover the latest security issues, and the app suddenly goes wappy and stops working, “Systems Ltd” aren’t to blame and “a support” are. But if “Systems Ltd” only support their app on a specific patch set, then when new vulnerabilities are found in, say, Oracle, Red Hat or Apache, those components go unpatched, and both they and the application they serve are vulnerable.
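One way to make the responsibility gap visible is to tag every scanner finding with whichever party the contract actually makes responsible for that component; anything that maps to nobody is the gap you have to negotiate. A minimal Python sketch of that triage (the ownership map, component keywords and example finding titles are all illustrative assumptions, not from any real contract or scan):

```python
# Who owns patching for each component, per the support contracts.
# "unassigned" marks components neither contract covers as written.
OWNERSHIP = {
    "windows": "a support",    # WSUS patching is in the hosting contract
    "red hat": "unassigned",
    "oracle": "unassigned",
    "apache": "unassigned",
    "payapp": "Systems Ltd",   # the application vendor's own code
}

def assign_owner(finding_title: str) -> str:
    """Return the responsible party for a scanner finding, based on
    which component keyword appears in its title."""
    title = finding_title.lower()
    for component, owner in OWNERSHIP.items():
        if component in title:
            return owner
    return "unassigned"

# Example findings, titled roughly the way a scanner might report them.
findings = [
    "Red Hat kernel security update missing",
    "Oracle Database Critical Patch Update missing",
    "Apache HTTP Server multiple vulnerabilities",
    "Windows SMB security update missing",
]

for f in findings:
    print(f"{assign_owner(f):12s} <- {f}")
```

In this hypothetical contract layout everything except Windows and the application itself falls into “unassigned”, which is exactly the situation described above: the scanner output gives you the evidence, but the mapping has to be agreed in the contracts.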
So what do you do in such a situation? Do you allow apps to run on unsupported/unpatched infrastructure, knowing they will operate fine? Or do you have to meet somewhere in the middle? Also, is this kind of scenario common? I assume all of you have apps developed by some 3rd party based on a specific technology stack. Are vulnerabilities found in database-level products like Oracle typically seen as “low risk” with a “low likelihood” of compromise, while missing Apache patches are seen as “higher risk” with a “higher likelihood” of compromise?