We are currently working with a client whose ISP shuts off their internet connection after detecting suspect traffic.
The connection is being cut at the ISP's Calix, and the ISP provided this description of the criteria for flagging the traffic:
Criteria for DOSDISABLE alarm:
Under which conditions the system generates the DOSDISABLE alarm, and how to find the affected VB port:
DOSDISABLE alarm - This alarm is raised when a DoS (denial of service) attack has been detected.
This prevents the device on the port from flooding the CPU with problematic traffic (an excessive rate of ARP, DHCP, IGMP, or packets with an unknown destination IP). Here, "excessive" means more than 70 frames per second sustained for several seconds.
Once this traffic has stopped, the port won't come back up instantly. You have to wait for a provisioning audit to recover it.
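To make the ISP's criterion concrete, here's a minimal sketch of the trip condition as described above: more than 70 frames per second for "several seconds". The ISP doesn't say how many seconds "several" is, so the 3-second window below is purely an assumption for illustration.

```python
# Sketch of the Calix DOSDISABLE criterion as the ISP described it:
# flag when control-plane frames exceed 70/sec for several consecutive
# seconds. The exact window is undocumented; 3 seconds is an assumption.

THRESHOLD_FPS = 70      # frames per second, per the ISP's description
SUSTAIN_SECONDS = 3     # assumption: "several seconds" is not specified

def would_trip_dosdisable(frames_per_second, threshold=THRESHOLD_FPS,
                          sustain=SUSTAIN_SECONDS):
    """Return True if any run of `sustain` consecutive one-second
    samples all exceed `threshold` frames/second."""
    run = 0
    for fps in frames_per_second:
        run = run + 1 if fps > threshold else 0
        if run >= sustain:
            return True
    return False

# A brief one-second spike does not trip it; a sustained burst does.
print(would_trip_dosdisable([10, 200, 15, 12]))      # False
print(would_trip_dosdisable([80, 95, 110, 72, 30]))  # True
```

The interesting implication is that a short burst (e.g. many machines waking and hitting DNS/Dropbox at once after lunch) could clear this bar without any machine being infected.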
On-premises hardware / key specs:
~ 20 clients
~ 3 servers (SBS 2011, 2x Server 2008 R2 Standard)
~ Dropbox running on 8 PCs (accounts for the traffic on port 17500)
~ OpenDNS configured under "DNS Forwarders" on the SBS 2011 server
WHEN THE DISCONNECTS HAPPEN:
- Seemingly at random, but typically from 12 PM onward.
- AFTER the disconnect occurs, it takes anywhere from 1-6 hours for the Calix (at the ISP) to clear and allow traffic to flow again. NOTE: When this happens, the Calix checks the current traffic and CONFIRMS that the suspect traffic has cleared. This means it is NOT a constant attack like most of the DoS/malware-infection attacks I've seen; it only happens at certain times.
I understand that typically this SHOULD be cut and dried: find the infected machine, clean it up, and the suspect traffic will stop.
What I find strange is that it is NOT constant, which makes it difficult to track down.
I believe it is POSSIBLE that this is an incorrect detection by the ISP's Calix.
However, I'm looking for additional opinions.
I'm going to ATTACH some logs of traffic that I have been capturing from the Cisco router. You will notice DIFFERENT "outside" interface names listed; this is because we have multiple ISPs and fail over to the backup when an outage occurs on the primary.
Log WINDSTREAM (BACKUP): http://cvrec.mpaftp.com/firewall_log_files/LOG-2013-08-28-181258.TXT
Log WINDSTREAM (BACKUP): http://cvrec.mpaftp.com/firewall_log_files/LOG-2013-08-28-171557.TXT
Log WINDSTREAM (BACKUP), multiple DNS requests: http://cvrec.mpaftp.com/firewall_log_files/LOG-2013-08-28-142141.TXT
Log WINDSTREAM (BACKUP), random traffic: http://cvrec.mpaftp.com/firewall_log_files/LOG-2013-08-28-143043.TXT
PRIMARY ISP LOGS (Before disconnect)
COMMENTS FROM LOGS:
### Here are some consecutive DNS requests to OpenDNS servers.
%ASA-6-302015: Built outbound UDP connection 39150 for CMTEL:22.214.171.124/53 (126.96.36.199/53) to inside:Server/59626 (188.8.131.52/15956)
%ASA-6-302015: Built outbound UDP connection 39151 for CMTEL:184.108.40.206/53 (220.127.116.11/53) to inside:Server/59110 (18.104.22.168/51318)
%ASA-6-302015: Built outbound UDP connection 39152 for CMTEL:22.214.171.124/53 (126.96.36.199/53) to inside:Server/60829 (188.8.131.52/49493)
%ASA-6-302015: Built outbound UDP connection 39153 for CMTEL:184.108.40.206/53 (220.127.116.11/53) to inside:Server/59302 (18.104.22.168/54021)
%ASA-6-302015: Built outbound UDP connection 39154 for CMTEL:22.214.171.124/53 (126.96.36.199/53) to inside:Server/60061 (188.8.131.52/44678)
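If it would help anyone looking at the attached logs, here's a rough sketch of how I'd tally these %ASA-6-302015 lines by destination port, to see whether DNS (53) or the Dropbox traffic (17500) dominates a given window. The regex and field names are my own assumptions based on the line format shown above, not an official parser.

```python
import re
from collections import Counter

# Tally ASA "Built outbound UDP connection" events by destination port.
# The pattern below is hand-written against the %ASA-6-302015 lines
# shown above; adjust it if your syslog format differs.
BUILT_RE = re.compile(
    r'%ASA-6-302015: Built outbound UDP connection \d+ '
    r'for \S+:(?P<dst_ip>[\d.]+)/(?P<dst_port>\d+)'
)

def count_by_dst_port(log_lines):
    """Count outbound UDP connection builds per destination port."""
    counts = Counter()
    for line in log_lines:
        m = BUILT_RE.search(line)
        if m:
            counts[m.group('dst_port')] += 1
    return counts

# Two of the sample lines from the log excerpt above:
sample = [
    "%ASA-6-302015: Built outbound UDP connection 39150 for "
    "CMTEL:22.214.171.124/53 (126.96.36.199/53) to inside:Server/59626",
    "%ASA-6-302015: Built outbound UDP connection 39151 for "
    "CMTEL:184.108.40.206/53 (220.127.116.11/53) to inside:Server/59110",
]
print(count_by_dst_port(sample))  # Counter({'53': 2})
```

Run against a full capture around a disconnect, a per-second version of this count could be compared directly against the 70 frames/second figure the ISP quoted.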
### This line in the log looks strange. Why would a DHCP client send a broadcast request to the OUTSIDE interface (labeled Backup)? Shouldn't that go to INSIDE?
%ASA-7-710005: UDP request discarded from 0.0.0.0/68 to Backup:255.255.255.255/67
### This log shows a bunch of UDP discards (the 17500s are Dropbox traffic):
### These log entries appear to be from around the time the connection was shut off:
I am really looking for a second or third set of eyes on these logs. Maybe you'll see something I'm missing that points back to an infected system on the internal network.
To me, everything looks clean.
I look forward to your thoughts!