We're having trouble getting a Windows Server 2008 R2 failover cluster to fail over when components fail.
We have an HP P4000 SAN providing shared storage for our two-node cluster.
We have set up the cluster with Node and Disk Majority quorum, with the witness disk hosted on the SAN.
We've tried various network combinations; what we have currently is (see the sketch after this list):
iSCSI - configured on 192.168.80.0/24 with no gateway set (for the traffic to the SAN)
LAN - configured on 192.168.90.0/24, running as an HP teamed setup from the servers to the LAN.
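For reference, this is roughly how we check which networks the cluster is allowed to use for its own traffic. A PowerShell sketch, assuming the FailoverClusters module is available on a node; the network name in the commented line is a placeholder, not necessarily what Failover Cluster Manager calls ours:

```powershell
Import-Module FailoverClusters

# List each cluster network with its role:
#   0 = not used by the cluster
#   1 = internal cluster (heartbeat) traffic only
#   3 = cluster and client traffic
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask

# The iSCSI network would normally be Role 0 so cluster traffic
# never rides the storage path, e.g. (name is a placeholder):
# (Get-ClusterNetwork "iSCSI Network").Role = 0
```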
Via the cluster we can now live migrate VMs to each host and back with no issues.
We can stop the cluster service on one of the hosts and everything fails over OK.
We can shut down one of the servers and everything fails over OK.
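For completeness, the PowerShell equivalents of those tests look roughly like this (a sketch; "TestVM", NODE1 and NODE2 are placeholder names):

```powershell
Import-Module FailoverClusters

# Live migrate a clustered VM to the other node and back
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "NODE2" -MigrationType Live
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "NODE1" -MigrationType Live

# Stop the cluster service on one node; its roles should fail over,
# then bring the node back into the cluster
Stop-ClusterNode -Name "NODE2"
Start-ClusterNode -Name "NODE2"
```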
So to simulate a network failure we then removed the LAN network cables from the host that owns the VMs and the witness disk.
Nothing fails over and the VMs are not accessible. It basically drops the whole cluster, as it seems to think both a node and the witness disk have failed. (With Node and Disk Majority that shouldn't happen: the surviving node plus the witness disk should still hold two of the three quorum votes, provided the surviving node can arbitrate the witness over the iSCSI network.)
Therefore, after much research, we decided to introduce a heartbeat connection by patching both servers directly to each other on a new interface, and put the cards on a 192.168.100.0/24 range with no default gateway.
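To make sure the cluster actually uses the new link for internal traffic, we set its role along these lines (a sketch; "Heartbeat" stands in for whatever the new network is named in Failover Cluster Manager):

```powershell
Import-Module FailoverClusters

# Restrict the new crossover network to internal cluster traffic only
(Get-ClusterNetwork "Heartbeat").Role = 1

# Check which network the cluster prefers for heartbeats: the lowest
# Metric wins, and AutoMetric normally ranks a gateway-less private
# network ahead of the client-facing LAN
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric
```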
We tried the same cable-pull test again: nothing failed over, and errors appeared saying the LAN network was partitioned.
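To dig into those partition errors we pulled the cluster log (a sketch; the destination folder and time span are just examples):

```powershell
Import-Module FailoverClusters

# Generate cluster.log on each node and copy them to a local folder;
# look for quorum arbitration and network partition entries around
# the time of the cable pull
Get-ClusterLog -Destination "C:\Temp" -TimeSpan 15
```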
So my conclusion at this point is that if the host that actually holds the witness disk loses network connectivity, there is no automatic failover; you must manually move the witness disk to the working host and restart everything.
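For reference, the manual recovery we end up doing looks roughly like this (a sketch; NODE1 is a placeholder for the surviving host):

```powershell
Import-Module FailoverClusters

# Confirm the quorum model and which disk is the witness
Get-ClusterQuorum

# In 2008 R2 the witness disk sits in the core "Cluster Group";
# moving that group hands the witness to the surviving node
Move-ClusterGroup -Name "Cluster Group" -Node "NODE1"
```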
Or does anybody know what we have missed in the setup?