What I'm struggling to understand is how to configure the SAN and the servers. We may upgrade and reuse our existing DL360 G5s, but we may also look at new DL380 G7s... the main difference (a big one?) being more NICs in the 380s.
We are on a 172.19.0.0/16 address range.
How, and what settings, are required for the SAN?
It will have 2x controllers (8 iSCSI ports in total) with 1x management port per controller.
Can someone explain the basics of the iSCSI ports and addressing? Also, are the Ethernet management ports equivalent to, say, an iLO port on a server?
Thanks
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Go for the G7s if you can, as the extra NICs are very useful (especially if you're going down the virtualization route). Adding a dual/quad-port Gigabit card to a G5 is not difficult, however. It all depends on your future rollout.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
I would highly recommend READING ALL the HP P2000 G3 documentation available from HP before you implement your P2000.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
VLAN tags are used on trunks to switches.
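For example, if iSCSI traffic is tagged as VLAN 20 on the trunk (the VLAN ID, port group name and ESXi 5.x esxcli syntax here are all assumptions for illustration), the matching port group on the host would be tagged like so:

  # Tag a hypothetical iSCSI port group with VLAN 20 to match the switch trunk
  esxcli network vswitch standard portgroup set --portgroup-name=iSCSI1 --vlan-id=20

The switch port the host uplinks to then needs to be a trunk carrying that VLAN; an untagged access port needs no tagging at all.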
James Haywood
We always have switches between servers and storage, as the storage is shared. You could set up direct connections, but switches give far more flexibility.
If this is your first time with iSCSI, then I would agree with hancocka and suggest you have a good read-up on the basics of iSCSI, SANs, shared storage, etc.
hhaywood000:
So you have the SAN connected 'directly'? If so, then how do the ESXi hosts connect to the LAN?
I'm having difficulty understanding the differences between iSCSI and LAN connections/configs...
James Haywood
No. Our SAN is connected to switches reserved for storage traffic only. The ESX hosts then connect to the same switches using NICs also reserved specifically for storage. Only storage traffic moves across this LAN.
The ESX hosts also have other NICs: some are connected to the data LAN and some are connected to the management LAN, each of which uses its own switches.
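To make that concrete, here's a minimal sketch of the storage side on one host, assuming ESXi 5.x with the software iSCSI initiator; every name, vmnic number, the vmhba33 adapter and the 10.10.10.0/24 range below are assumptions for illustration, not your actual values:

  # Dedicated vSwitch for storage, using two NICs reserved for iSCSI
  esxcli network vswitch standard add --vswitch-name=vSwitchISCSI
  esxcli network vswitch standard uplink add --vswitch-name=vSwitchISCSI --uplink-name=vmnic2
  esxcli network vswitch standard uplink add --vswitch-name=vSwitchISCSI --uplink-name=vmnic3

  # A port group and vmkernel port on a range kept separate from the 172.19.0.0/16 LAN
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitchISCSI --portgroup-name=iSCSI1
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static

  # Enable the software iSCSI initiator and point it at one of the SAN's iSCSI host ports
  esxcli iscsi software set --enabled=true
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.1

The SAN's iSCSI host ports each get their own IP on that storage range, while the management ports sit on the normal LAN. The management ports are out-of-band management for the array, so yes, broadly the equivalent of iLO on a server.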
So each host uses a 4-port NIC connected directly to the SAN, or via a VLAN, or even separate switches for iSCSI traffic, on whatever range I like, i.e. 10.x or 192.168.x...
Then we use additional dual-port NICs to connect into the LAN switches using the current 172 ranges...
Is that right?
James Haywood
Yes.
If you have more than one host then definitely use switches, as this will allow the storage to be shared and you can use vMotion etc.
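For instance, once the hosts share the storage switches, vMotion only needs a vmkernel port on each host; a rough sketch in the ESXi shell (port group name, vmk number and IP are assumptions):

  # Create a vMotion port group and vmkernel port, then enable vMotion on it
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
  esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion
  esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=172.19.5.11 --netmask=255.255.0.0 --type=static
  vim-cmd hostsvc/vmotion/vnic_set vmk2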
Sure, but in terms of redundancy (and possibly even performance), would it be best to use 2x dedicated switches for iSCSI traffic on the controllers/servers?
I guess directly attaching hosts to the SAN simply reduces the spend and the redundancy required for iSCSI?
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Yes, 2x switches would be best.
James Haywood
Yes, it would be best for redundancy to have 2x switches dedicated to storage, but it comes down to cost.
A single switch will provide good performance but will be a single point of failure.
Don't directly connect unless you really have to.
It all comes down to money - you either have it or not!!
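If you do go with two storage switches and split each controller's ports across them, ESXi multipathing will then see redundant paths to every LUN. A hedged sketch of checking paths and setting round-robin path selection (ESXi 5.x syntax, and the naa device ID is a placeholder):

  # Confirm each LUN has paths through both switches
  esxcli storage nmp device list

  # Set round-robin path selection on a LUN (device ID below is a placeholder)
  esxcli storage nmp device set --device=naa.600c0ff000xxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR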
Which is why we were looking at simply load-balancing the guests over 2x hosts and using Veeam as backup (in the event of a failure we would of course lose some data), but with the ability to restore very quickly to, say, a day previous.
The other option was the VSA, which still seems quite expensive (with a possible performance hit) and also loses a lot of the available local storage.
A SAN does give us everything we require, i.e. HA, DRS, vMotion, and room to expand; it just costs so damn much.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Have you looked at other VSA products?
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Just to completely alarm you: we had 6 P2000s completely fail last year at clients' sites, with hardware failures.
Sure, however in value-for-money terms my solution is probably best (OK, no HA) but just as stable/reliable...
I think you have finally hit the nail on the head re the SAN.
James Haywood
We've got 10 and have not had any issues over the past 9 months or so. We've also got 10 of the previous version (the 2312i) and have only had a single controller go down.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
It's simple: if you put all your eggs in one basket, it can be a single point of failure.
If you go forward, you need to design in what happens if the SAN fails.
I suppose having large local storage as a SAN backup (using Veeam backups) would provide a good local BCP solution should the SAN fail?
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
Yes, that's what ALL our clients have: NAS or iSCSI SANs as a backup store, Iomega products.
That's how we were able to restore to the SAN once it was repaired.
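As an illustration of that recovery path, and assuming the NAS exposes an NFS export (the hostname and share names below are made up), the backup store can be mounted on a host as a datastore so restored VMs can be pulled straight from it:

  # Mount the NAS export as a datastore on the host (all names are assumptions)
  esxcli storage nfs add --host=nas01.local --share=/backups --volume-name=BackupStore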
CHI-LTD
ASKER
OK, so if we do go for the P2000, what sort of NAS is best for backing up the hosts using Veeam? I'm looking at the QNAP. I assume we'd just use it as a network share via Ethernet into the LAN, or can USB or eSATA be used?