Patch management controls checklist

Can anyone assist with putting together a top level controls checklist of best practice “patch management controls” that should be followed for effective patch management of business critical (with high availability) database servers?

I.e. a list that our audit/risk teams can use to compare what should be being done against what is actually being done on these systems, to verify effective patch management.

Any sort of top 10 controls checklist would be useful. I can’t find much via Google.
If you have had experience in network and systems management before, especially from “availability perspective”, what would good patch management look like, what would you look for, what can typically be improved, what would poor patch management look like?
Which OS are the database servers running on?
Does a test environment exist?
Critical/security updates should be applied regularly in the test environment first, then applied to production.
SQL updates have to be applied to both nodes; the OS updates can be applied individually to each node.
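The test-first flow above can be sketched as a simple promotion gate. This is only an illustrative outline; the update identifier, field names, and soak period are assumptions, not part of any vendor process:

```python
# Illustrative promotion gate: an update reaches production only after it
# has been verified in the test environment and soaked there long enough.
# Field names and the soak period are assumptions for this sketch.

SOAK_DAYS = 7  # assumed minimum time an update must sit in test

def can_promote(update):
    """An update may go to production once tested OK and soaked long enough."""
    return (update["tested_ok"]
            and update["days_in_test"] >= SOAK_DAYS)

update = {"id": "KB1234567", "tested_ok": True, "days_in_test": 9}
print(can_promote(update))  # True
```

An audit check against this kind of gate is simple: for each production patch, ask for evidence that it passed the test environment first.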

Analyse the possible attack vectors, i.e. what issue an update is correcting and whether that type of access is possible in your environment. For example, an IE patch for a zero-day exploit: if the cluster in question makes no use of IE and has no access to any site, that update can be delayed. If, on the other hand, there is an SQL service zero-day where a certain type of access allows a remote user to trigger a buffer overflow, and you have unsecured systems that can reach these servers, that update should be placed higher on the to-be-installed list.
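The prioritisation logic above can be expressed as a small rule: vendor severity only matters if the vulnerable component is actually in use and the attack path exists. This is a hedged sketch with an invented scoring scheme, not a real risk model:

```python
# Hedged sketch: prioritise patches by whether the attack vector is
# actually exposed in the environment. The scoring scheme is illustrative.

def patch_priority(vendor_severity, component_in_use, reachable_by_untrusted):
    """Return a simple priority label for a patch."""
    if not component_in_use:
        return "defer"           # e.g. IE patch on a cluster that never uses IE
    if reachable_by_untrusted:
        return "urgent"          # e.g. SQL zero-day reachable from unsecured hosts
    return vendor_severity       # otherwise fall back to the vendor's own rating

print(patch_priority("critical", component_in_use=False, reachable_by_untrusted=False))  # defer
print(patch_priority("critical", component_in_use=True, reachable_by_untrusted=True))    # urgent
```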

Testing in a test environment, when available, is optimal.

btan (Exec Consultant) Commented:
Thought you should check out the NIST SP 800-40 draft rev. 3, which discusses the challenges of patch management technology, summarises the SCAP-based metrics, and recommends what to look out for. The process from identifying to installing to verifying provides the checks and balances needed, and most of the time the verification is ignored. Have you verified that all systems are patched, and to what level they are at?
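The verification gap mentioned above can be checked mechanically. A minimal sketch, assuming you can export an inventory of installed patch levels per server (the hostnames, patch-level format, and baseline here are made up for illustration):

```python
# Minimal verification sketch: compare each server's reported patch level
# against a target baseline. All inventory data here is illustrative.

target_level = "2013-05"  # assumed baseline every server should be at

inventory = {
    "db-prod-01": "2013-05",
    "db-prod-02": "2013-03",   # behind baseline
    "db-test-01": "2013-05",
}

def unverified_servers(inventory, target):
    """Return servers whose patch level does not match the baseline."""
    return sorted(h for h, level in inventory.items() if level != target)

print(unverified_servers(inventory, target_level))  # ['db-prod-02']
```

The point for an audit team is that "patched" is a claim until something like this comparison is actually run and its exceptions are followed up.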

The criticality of a patch depends on enterprise policy: how stringent and serious the organisation is about keeping in sync with the threat landscape and its risk exposure. How critical and emergency patches differ depends on the response time and service level committed to internal and external customers. There is no single bible that is the best dictator. But I do see a serious gap if patches never go through a process to validate their integrity and are not staged before being released to production. Patches are also not bug free; the check is to prove the patch's claims, confirm it was delivered over a trusted channel, and log that it has been received intact.

From patching practice, business goes on: even if there is mandated downtime, there is still an active and a passive/standby system to serve the user base. We first need that understanding, otherwise availability never comes into the picture. High availability is only achieved with a balanced system whereby, at any one time, the standby is patched and traffic is swung over at the application level one node at a time, until the active is the last to be patched; the load balancer folks will know this best. A really bad scheme is a big bang without authorised agreement and policy backing for the administrator to do it on a shared hosting infrastructure; notice needs to be given and agreed upon. A contingency plan is needed if the patch goes haywire or a rollback is required. These plans should be tested, just as BCPs need to be verified and not assumed.
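The standby-first sequence with a rollback contingency can be sketched as an ordered plan. The node names and step wording are placeholders; real orchestration would drive the load balancer and cluster tooling:

```python
# Sketch of a standby-first patching sequence with a rollback branch.
# All steps and node names are illustrative placeholders.

def rolling_patch(active, standby, patch_ok=True):
    """Patch the standby first, swing traffic, then patch the old active.
    If verification of the standby fails, roll back and abort instead."""
    plan = [f"patch {standby} (standby)"]
    if not patch_ok:
        plan.append(f"rollback {standby}, abort change window")
        return plan
    plan += [
        f"swing traffic from {active} to {standby}",
        f"patch {active} (now standby)",
        "verify both nodes, keep rollback plan ready",
    ]
    return plan

for step in rolling_patch("node-a", "node-b"):
    print(step)
```

Note the design choice: the rollback branch exists in the plan before the change window opens, which is exactly the tested contingency the comment above calls for.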
pma111 (Author) Commented:
Would you expect to see documented procedures per patch type, i.e. "this is the exact process we must follow when applying these patches to system type x"?
btan (Exec Consultant) Commented:
The patch method is best advised by the vendor supplying it, so follow the steps as recommended. But the user dictates the severity and risk of applying it on the actual system. Criticality from the vendor does not translate directly into user severity; it is used as a reference to gauge it, and the security team can better advise the sysadmin. Normally a patch is packaged rather than requiring many steps that can vary; go for officially released patches rather than engineering hotfixes, which really do need close step-by-step guidance from the vendor (or for cases where the packaged patch does not work and manual "patching" is needed).
madunix (Fadi SODAH) Commented:
When implementing updates, I prefer to plan ahead, test on a non-critical server, and create a change plan B. After an update I also prefer to restart the services or, in some cases, to reboot. Be sure to read the release notes; there may be special instructions related to some packages/patches.