Irregular connectivity issue from an HTTP server in the DMZ, through a PIX and a Cisco 5509 switch, to an application server

What kind of issue am I potentially dealing with here?  Cisco wizards please assist.

Here is the setup: a DMZ with a webserver application configured to access an internal application server/database, routed through a PIX firewall and a 5509 switch to reach the application on a specific port. That port is open on the firewall. The firewall only allows 443 and 80 inbound to the webserver.

This is the interesting part: access to the web apps via the webserver works great at certain times of the day for an hour or so, then it slows to a crawl. The webserver doesn't time out; it just takes a very long time to fetch the app-server data and eventually render the page, compared to being lightning fast at other times. Maybe some scheduled tasks kick off that eat up much of the network bandwidth at those particular times...
Is the bottleneck in the firewall or the switch?  How can I isolate the issue (as a newbie to Cisco)?

visioneer commented:
You can use an application like SolarWinds to get real-time and aggregate bandwidth gauge information on each interface in a Cisco device.  You could monitor the inbound/outbound ports on the PIX as well as the switch port that the server is using.
You should also look at the server(s) CPU utilization and monitor it during these "slow" times.
Monitoring the CPU utilization of the switch can help, too.
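Once you are polling interface counters (e.g. IF-MIB `ifInOctets` via SNMP, which is what tools like SolarWinds read under the hood), turning two raw samples into a bandwidth figure is simple arithmetic. Here is a minimal sketch; the function name and sample values are illustrative, not anything from this thread:

```python
# Sketch: average inbound utilization from two SNMP ifInOctets samples.
# ifInOctets is a 32-bit counter, so the delta must account for wraparound.

COUNTER_MAX = 2**32  # 32-bit SNMP Counter32 rolls over at this value


def utilization_mbps(octets_t0, octets_t1, interval_s):
    """Return average inbound Mbps between two counter samples taken
    interval_s seconds apart."""
    delta = (octets_t1 - octets_t0) % COUNTER_MAX  # handles counter wrap
    return (delta * 8) / (interval_s * 1_000_000)  # octets -> bits -> Mbps


# Example: samples 60 seconds apart showing 75 MB transferred
print(utilization_mbps(1_000_000, 76_000_000, 60))  # 10.0 (Mbps)
```

On a busy 10 Mbps or 100 Mbps link, a sustained utilization near line rate during the "slow" hours would point at a bandwidth bottleneck rather than the firewall or switch CPU.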
Can you pinpoint specific hours during the day?
Can you point to any specific environmental issues during that same time period?
Perhaps someone on the network has a worm-infested PC (Blaster, Welchia, et al.) that is only turned on during certain periods; the effects build up over a few hours and bog down the servers, then the PC gets shut off when its owner goes home and everything eventually settles down.
These types of issues do not typically point to a configuration problem on the network itself. With a configuration problem, either it works or it doesn't: the behavior is consistently good or consistently bad, but consistent.
Other environmental factors or worm/virus infections cause most of these intermittent variations. I have seen things like a fluorescent light fixture interfering with in-ceiling cabling runs, but that particular fixture was in a conference room that was rarely used. It took a long time to correlate the use of the conference room with the apparent slowdown of the network...
I've seen things like apparent network-down conditions that happened at almost exactly 4:55 a.m. every day, but nobody knew it until we started setting up syslogging on all the equipment. It turned out that was when the first people came in and got the coffee going. Something about the wattage of that particular coffee pot pulled too much juice and caused a brown-out on the circuit that a network switch in the next room was plugged into. Just because equipment is not in the same room doesn't mean the power circuit is not shared somewhere...
The bottom line is to think outside the box and not just look at the network equipment. Set up syslogging and SNMP on the equipment, use the 30-day eval of SolarWinds to monitor everything about the servers, the firewall, the switch, etc., and I'll bet you'll find something.
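As a rough sketch of that syslog/SNMP setup, the configuration looks something like the following. The monitoring-station IP and community string are placeholders, and exact syntax varies by IOS and PIX software version, so treat this as a starting point rather than copy-paste config:

```
! Catalyst/IOS side -- send syslog and SNMP to a monitoring station
logging 192.168.1.50
logging trap informational
snmp-server community MyRoCommunity RO
snmp-server host 192.168.1.50 MyRoCommunity

! PIX side -- roughly equivalent commands
logging host inside 192.168.1.50
logging trap informational
snmp-server host inside 192.168.1.50
snmp-server community MyRoCommunity
```

With timestamps flowing into a central syslog server, it becomes much easier to correlate the daily slowdown window with specific events on the firewall, the switch, or the servers.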
SolarWinds can monitor CPU with a real-time gauge as well, plus it includes a syslog server, so it can do all of the things that lrmoore is suggesting.
nsome (Author) commented:
Solarwinds helped isolate the problem but it turned out to be an application issue that required a just released patch...
