• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 506

Ethernet Ports are off on core switch


Today I faced a very strange problem: all the Gigabit Ethernet ports on the core switch (a 6509) suddenly went off. No orange light, no green, just off. All the other lights were normal; for example, the fiber ports were OK and the supervisor engine was OK. The Ethernet module is used for the servers, so all connectivity to the servers was lost. Anyway, I restarted the core and everything went back to the way it was, which solved the problem. But I need to know what caused this. Has anybody faced the same case? How can I check what exactly happened from the logs?

3 Solutions
Do you see any messages for the ports or the line card that had the problems? What do you mean by "giga ethernet ports" vs. "fiber ports"? Our fiber ports are gigabit Ethernet. Do you mean 10/100/1000 copper ports vs. 1000 (or even 10,000) fiber ports?

What type of line card are the ports that went bad on?

What slot is it in?

Did you have any power problems?

IIRC, if the 6500 starts having power issues it will start removing power from the line cards from the bottom up. So if by some chance the total draw of all your line cards suddenly exceeded the capacity of your power supply(ies), the 6500 will start powering down the line cards from the bottom up. This is why Cisco used to recommend that your SUPs go into slots 1 and 2, so they would be the last to be powered down.
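If power exhaustion is the suspect, a couple of commands can help confirm it after the fact. This is a sketch; exact output and message facilities vary by supervisor and IOS release:

```
! Compare total supply capacity with per-module power allocation
show power

! Look for modules reported as powered down or power-denied
show module

! Check the local log buffer for power-related messages
show logging | include PWR
```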
Erik Bjers, Principal Systems Administrator, commented:
The first thing to try is reseating the card; if it still does not work, try the card in a different switch (if available). If it still fails, the card may have gone bad.

What probably happened is that the card shut down somehow and its configuration was removed from the running-config, because it was no longer considered part of the configuration. When the card was turned back on, the config was gone. That is why it worked after you power cycled the device: on reload the startup config was applied again.

Were you able to get any logs off the switch before restarting it?  From experience this can happen if:

- You are low on power in the unit, which can happen if you have two low-rated PSUs such that both are needed to provide the power and one fails, or if you are using PoE and simply draw too much. "sh mod" would tell you this.

- The ports go into an error state. This can be spanning tree blocking (if, for example, all your ports are EtherChanneled together, they would all block together), or port errors such as a unidirectional link (UDLD), spanning tree loops, or broadcast storms. "sh int" would tell you this.
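The abbreviated commands above expand roughly as follows (a sketch; exact syntax varies slightly between IOS releases, and GigabitEthernet3/1 is a placeholder — substitute one of the affected ports):

```
! Module status and allocated power ("sh mod")
show module

! Per-port status; err-disabled ports and the reason show up here ("sh int")
show interfaces status
show interfaces status err-disabled

! Spanning-tree and UDLD state for a suspect port (placeholder interface)
show spanning-tree interface GigabitEthernet3/1
show udld GigabitEthernet3/1
```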

However, some logs might give a clue, and if you don't have Cisco site access, I or someone else can look it up for you.

I didn't know about the bottom-up powering down. AFAIK the reason for the positioning of the sups comes down to the connector interface and, particularly with the Sup720, the fact that there are only 3 (I think) slots it can go in within a 6509 chassis.
I know I am the one that started it, but at one time Cisco recommended the sups go into slots 1 and 2; with the 720s and newer they recommend 5/6 (at least on the 6509, I am not sure about the other 6500 models). The power-down sequence is something I just stumbled upon one day last week while researching some other things about the 6500. It is possible that the power-down sequence has changed, especially with the 6509-E.
Saed80 (Author) commented:
After I restarted the 6509 everything is working fine, but I need a way to check what happened so I can prevent it in the future. Can you give me any helpful commands?

IMHO, the best approach is to set up a syslog server, enable the correct level of logging, and have the 6500 forward its log messages to the syslog server.

That way you should get the messages you need.

You may also want to verify your power requirements and what your power supplies can provide.
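A minimal IOS logging setup along those lines might look like this (a sketch; the server address 192.0.2.10 is a placeholder, and you may want a different trap level):

```
configure terminal
 service timestamps log datetime msec localtime
 logging buffered 64000             ! keep a larger local log buffer as well
 logging host 192.0.2.10            ! placeholder syslog server address
 logging trap informational         ! forward severity 0-6 to the server
end
```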
Totally agreed with giltjr. As for what caused it: unless you have logs you will not know, as it could be a number of things, and you would just be speculating. Usually a dodgy card would just stop working, which matches what you described, but the logs/SNMP traps would have alerted you to that. As for what to do to stop it: double-check that your IOS is current and not an ED or LD release (i.e. try for a general-deployment release), and if you get a chance, try to reseat the problem card. If it happens again, check the logs and try to reseat the card before restarting the chassis.
