IT_Group1 asked:

FortiGate 80 HA setup

Hi all,

I need to set up high availability between two FortiGate 80 units.
Can someone point me to a good article, preferably with screenshots?

Thanks
Garry Glendown:

Apart from the regular, detailed docs like http://docs.fortinet.com/uploaded/files/1088/fortigate-ha-50.pdf or the FortiGate Cookbook (always a helpful overview!), there are loads of short how-tos and videos around.
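For orientation, a minimal active-passive HA configuration on a FortiOS 5.x box would look roughly like this (the group name, password and heartbeat interface below are placeholders, not values from this thread, and the same settings have to be entered on both units):

config system ha
    set mode a-p
    set group-name "FG80-HA"
    set password "<cluster-password>"
    set hbdev "dmz" 50
    set override disable
end

The heartbeat interface should be a port you can dedicate to the cluster link; pick whichever one is free in your setup.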
ASKER CERTIFIED SOLUTION from myramu (solution text available to Experts Exchange members only)
IT_Group1 (Asker):

Thanks guys, great posts.

If the primary FG has the following interfaces:
WAN1, WAN2, DMZ and Internal - should I use 4 different switches - 1 for each interface?

Please see the screenshot for the detailed configuration.

Thanks
Interfaces-03-Nonames.jpg
You mean for the connection between the FGs towards the outside? Not necessary, just make sure you have VLANs set up to separate the different zones ...
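Purely as an illustration (the VLAN IDs here are made up, not from this thread), one shared switch could keep the zones separated like this:

VLAN 10 - WAN1     (wan1 ports of both FGs + the ISP1 uplink)
VLAN 20 - WAN2     (wan2 ports of both FGs + the ISP2 uplink)
VLAN 30 - DMZ      (dmz ports of both FGs + the DMZ hosts)
VLAN 40 - Internal (internal ports of both FGs + the LAN)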
OK, thanks.
Just to wrap it up:

- I'll connect all existing segments on both units to a VLAN or a physical switch that is connected to the correct source (WAN1, DMZ etc.).
- The configuration between the FG units should replicate automatically.
- The FG units should have the same firmware version? (see the quick check below)

Thanks
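A quick way to double-check those last two points from the CLI (a sketch using standard FortiOS commands; the exact output wording may differ between builds):

get system status       <- the "Version:" line must match on both units
get system ha status    <- once the cluster is formed, both members should be listed here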
SOLUTION (solution text available to Experts Exchange members only)
Garry-G,
Thanks - can you explain what you mean by the operational log disk?
Check your main status page in the System Resources widget - typically (on a 5.0 system) you should have 3 displays, for CPU usage, memory usage and disk usage ... if the log disk is not operational, the last one isn't displayed ...
Log disk is not available.
Is it safe to move forward w/o a log disk?
On both devices? If possible, I'd recommend using a maintenance window to re-format the logdisk on the operational unit (as well as the new one) ... you never know when you'll need it ... (logdisk is somewhat mis-named, as logging to the flash is disabled by default ... it's mainly used for things like the HTTP accelerator etc.) Either way, as I mentioned before, if one is operational and the other isn't, once HA comes up, one of your devices will most likely crash without coming back up (except for a power cycle)
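For reference, a quick CLI check and the (disruptive) fix Garry describes would be something like this; treat the exact wording of the get hardware status output as an assumption, it varies between models and builds:

get hardware status     <- look for the hard disk line
execute formatlogdisk   <- reformats the log disk; the unit reboots, so do this in a maintenance window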
Garry,

I'm about to connect the units and you're making me worry...
1. I'll verify whether the logdisks are present on both devices.
2. How do I enable them?
3. Are they supposed to be part of the basic FG 80C unit by default, or are they an add-on?
4. Does the format require downtime?

Your swift reply will be highly appreciated ;-)

Thanks
Just checked, the 80C device does not have any internal storage, so of course the logdisk will not be present ... (I knew the 110C doesn't have any either) As you only mentioned "80" earlier, I assumed you would have an 80D, which does have local storage ...
OK, checked it on both units;
The old (master) unit - Log hard disk: Not available
The new (slave) unit - Log hard disk: Available

I've performed the following on the slave unit:
config log disk setting
set status disable

In the log configuration section of the GUI, I've made sure that log writing is changed to: Display logs from Memory.

Is that enough to start with? Can I connect the units?

Many thanks
current status on the slave unit:

Backup_Fortigate # config log disk setting

Backup_Fortigate (setting) # set status disable

Backup_Fortigate (setting) # get
status              : disable
max-policy-packet-capture-size: 10
log-quota           : 0
dlp-archive-quota   : 0
maximum-log-age     : 7
full-first-warning-threshold: 75
full-second-warning-threshold: 90
full-final-warning-threshold: 95

Backup_Fortigate (setting) #
The new unit has the config from the old one already?
Judging from my experience, I'd expect one of the two devices to crash, probably the one with the inoperative logdisk ... which should lead to a failover to the new one, allowing you to take out the old one and do a format of the logdisk ...
Garry hi,

I've added both units to the cluster, but only the master shows in the HA pane.
What can be done?
After rebooting the Master unit, all traffic was OK through the slave unit, but in the web GUI, I can see only 1 unit (either primary unit before the restart, or the slave unit after restarting the primary).

What can we do?
Once the unit with the working log disk is online, disconnect the other from the cluster and run "execute formatlogdisk", then reconnect and see whether the cluster comes up ...
Even if the log disk is disabled??
After several reboots, the slave unit (the one with the log disk) is showing just the POWER LED and all other LEDs are off... Needless to say, it doesn't show as part of the cluster.

This unit was working fine a few minutes ago... What could have gone wrong?
Disabling logging to the storage does not mean the log disk isn't used ... formatting the log disk should fix the problem ...
Just done it.
After the unit reboots, should I disable or enable the log disk?
Should I change one of those settings (See screenshot)?

Many thanks!
Log.jpg
OK, the unit keeps crashing, even after formatting the disk and disabling it in both the CLI and the GUI.
Any other ideas?
Do both units show the log disk as operational?
No, only the slave unit has a log disk present (disabled).
The master unit does not have a log disk at all.
Have you tried setting up an HA cluster between two FG80C units when one has a log disk (enabled/disabled) and the other has no disk at all?
If you switch out the units and try "formatlogdisk" on the master unit, do you get an error? Or does it format and subsequently show the logdisk as available?
Also, are you sure both are 80C, not one 80C and one 80D? (should be clear from the serial# ... both should start with FGT80C)
Of course, the devices may be different hardware revisions - not sure if there are different versions of the 80C ... I know there are multiple hardware revisions of the 60C and 60D ...
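A quick way to check that (assuming standard FortiOS CLI) is to read the serial number and firmware straight off each box:

get system status       <- the Serial-Number line should start with FGT80C on both units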
Tried that again on the master unit:

Primary_Fortigate # execute formatlogdisk
Log disk is not available.

I've rechecked - both units are FGT80C; one is approx. 2 years old (master), and the slave is 2 weeks old.
I'm currently running Firmware Version v5.0,build0292 (GA Patch 9) on both units - maybe it has known issues and needs to be upgraded / downgraded?
After enabling the disk, the unit stayed up for less than a minute (see screenshot) and then crashed again.
I've re-created the HA cluster with a different cluster name; same problem.

What can be done?
log-01.jpg
OK, if the old device does not have a logdisk, I assume there are at least two HW-revisions, one with a logdisk/local storage, and one without ... I do not know whether there is any workaround to combine both into a cluster, you will have to open a ticket with Fortinet support (if you have a FortiCare service on it) ... they may ask you to do an RMA on the old device (or you might have to "convince" them to) in order to replace it with a newer revision 80C ... as far as I can find in tech sheets, the 80C should not have any internal storage, but I assume this is outdated and only covers the revision 1 ... I've come across a forum discussion that mentions a Rev2 device that does have local storage, so that's most likely what you have with the newer device ...
Thanks,
What about execute ha ignore-hardware-revision, in order to make the units ignore the HW differences? From the docs:

Troubleshooting HA clusters
This section describes some HA clustering troubleshooting techniques.
Ignoring hardware revisions
Some FortiGate platforms have gone through multiple hardware versions. In some cases the
hardware changes between versions have meant that by default you cannot form a cluster if the
FortiGate units in the cluster have different hardware versions. If you run into this problem you
can use the following command on each FortiGate unit to cause the cluster to ignore different
hardware versions:
execute ha ignore-hardware-revision {disable | enable | status}
This command is only available on FortiGate units that have had multiple hardware revisions. By
default the command is set to prevent FortiOS from forming clusters between FortiGate units
with different hardware revisions. You can enable this command to be able to create a cluster
consisting of FortiGate units with different hardware revisions. Use the status option to verify
whether ignoring hardware revisions is enabled or disabled.
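Going by that documentation excerpt, the command would be run on each unit along these lines:

execute ha ignore-hardware-revision enable
execute ha ignore-hardware-revision status    <- confirms the setting took effect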
My man - THE CLUSTER IS UP!
After running execute ha ignore-hardware-revision on both units, all seems OK (thank g-d).
Hopefully the unit with the disk won't crash on us...

Many Many (x10) thanks bro. Your kind of assistance is what EE is all about!
Still, if you have a service contract on the old unit, I'd open a ticket on the HA problem, with some luck you'll get a new device with the same hardware revision ... (we had that done on a customer's 60C, which after an initial hardware failure had received a new revision 60C ... due to some incompatibilities, we later were able to open another ticket on the remaining initial device in order to get a replacement with the same rev...)
Plus, at some point you may need the local storage for something like network accelerator etc. ...
Thanks for a brilliant support guys !