The switch gives the network connections on the SAN some redundancy, so it does help, but I also noted that the switch is a single point of failure.
"i wasnt sure if the cluster ip should indeed be on the production network or separate"
The cluster IP is for talking to the cluster and would almost certainly be from the production network. The cluster IP won't matter much to you in THIS scenario because you generally won't talk to the cluster.
But imagine if you had two file servers (or web servers, or mail servers) in a cluster - you'd point people to the cluster IP because you simply want to talk to whoever owns the cluster - you don't care which specific server it is.
To illustrate what I'm talking about, here's an image I knocked together. I included (what I understand to be) Andy's suggestion, and why I disagree with it. Whether you use no switches, one switch, or several, you'll get some level of redundancy and fault tolerance no matter what you do. We went virtual several years ago (my implementation has three of everything), and it's one of the best decisions I've ever made.
"...can all these connections be on 172.16.0.0/24 or am i still using two subnets 172.16.0.0/24 and 172.16.1.0/24, to me it looks they can all be on the same?"
I use a flat class "C" (/24) for my SAN connections. There's no harm in using VLANs, but they're not necessary: the switch(es) will segregate the traffic, and only the two SANs and two hosts are on that network.
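To make the "one subnet is enough" point concrete, here's a minimal sketch using Python's `ipaddress` module. The addresses are hypothetical (not from the original question); the point is just that all four SAN endpoints fit in a single 172.16.0.0/24, so no routing between 172.16.0.0/24 and 172.16.1.0/24 is needed.

```python
import ipaddress

# Hypothetical flat SAN addressing plan: all four endpoints on one /24.
san_net = ipaddress.ip_network("172.16.0.0/24")

endpoints = {
    "host1-san-nic": ipaddress.ip_address("172.16.0.11"),
    "host2-san-nic": ipaddress.ip_address("172.16.0.12"),
    "san-ctrl1":     ipaddress.ip_address("172.16.0.21"),
    "san-ctrl2":     ipaddress.ip_address("172.16.0.22"),
}

# On a single subnet, every endpoint can talk to every other one
# directly at layer 2 - no router, no second subnet required.
for name, addr in endpoints.items():
    assert addr in san_net, f"{name} is outside the SAN subnet"

print(f"{len(endpoints)} endpoints, all inside {san_net}")
```

Run it and the assertions pass silently; if you later decide to split hosts and controllers onto two subnets, the same check makes the mismatch obvious.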
Andy:
"...you can see that if controller 1 fails then both hosts will get I/O via controller 2 instead."
Disconnect the cable from host 1 to controller 1, or lose that NIC in host 1, or lose that NIC in controller 1, and now host 1 cannot get to controller 1. The addition of one or more switches eliminates that problem. It may be that the HP SAN has an internal switch connecting the two controllers - my NetApp SAN is like that - and it may be that one controller can take over for the loss of the other in the HP SAN (again, my NetApp SAN is like that), but all you've done is take the external switch and move it inside the shelf. The switch still exists, you just don't see it. I expect that has a lot to do with how the HP SAN is configured. My design works regardless. You just have to imagine two SANs in one shelf.
" BTW, in the storage industry you don't connect the two switches together on the back end as no traffic goes over that link."
The crossover between switches ensures there's no single point of failure. You can disconnect any cable, port, NIC, host or switch and the system should continue to run. The SANs themselves are probably single points of failure, but good backups can be restored to the remaining SAN if a VM is mission critical.
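The "disconnect anything and it keeps running" claim can be checked mechanically. Here's a small sketch (component names are illustrative, not from the original post) that models the two-hosts, two-switches, two-controllers topology as a graph, fails each switch and controller in turn, and verifies every host can still reach at least one surviving controller.

```python
from collections import defaultdict, deque

# Hypothetical topology: each host cabled to both switches, each switch
# cabled to both controllers, plus the crossover link between switches.
edges = [
    ("host1", "switch1"), ("host1", "switch2"),
    ("host2", "switch1"), ("host2", "switch2"),
    ("switch1", "ctrl1"), ("switch1", "ctrl2"),
    ("switch2", "ctrl1"), ("switch2", "ctrl2"),
    ("switch1", "switch2"),  # the crossover
]

def reachable(start, dead, links):
    """Return the set of nodes reachable from start via BFS,
    treating every node in 'dead' as failed."""
    graph = defaultdict(set)
    for a, b in links:
        if a not in dead and b not in dead:
            graph[a].add(b)
            graph[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Fail each single switch or controller; every host must still see
# at least one of the surviving controllers.
for failed in ("switch1", "switch2", "ctrl1", "ctrl2"):
    for host in ("host1", "host2"):
        survivors = {"ctrl1", "ctrl2"} - {failed}
        assert reachable(host, {failed}, edges) & survivors, \
            f"{host} lost storage when {failed} died"

print("no single switch or controller failure cuts a host off from storage")
```

Delete the host-to-switch2 edges to simulate the single-switch design and the assertion fires when switch1 fails, which is exactly the single-point-of-failure objection above.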
You'll be better off no matter what you do, so good luck!
So if that switch fails, it all crashes? If you only have two hosts, then throw the switch away and connect point-to-point.