I created a 2-node cluster in data centre A; both nodes are VMs running SQL Server. Everything is 2012, both the OS and SQL Server. The quorum was set up between the 2 SQL Server VMs and a file share on a shared network location in a different data centre, B. I then added an Always On availability group (AO) to the cluster, configured with synchronous commit between the primary and secondary, and it was working just fine.
To this setup I wanted to add 2 new nodes, also VMs, but located in data centre B, in the same domain but on a different subnet. The intention was to have the databases in data centre A replicated asynchronously to the 2 new nodes in centre B through the AO group.
As soon as I added the 2 new nodes to the cluster (going through validation and so on), the cluster was effectively brought down: on both original nodes the E and F drives (which hold the SQL data and log files) were taken offline, which caused the SQL Server instances to fail and stop. Any attempt to start them was futile because of the missing drives.
I could not bring the E and F drives back online on either of the 2 boxes until I destroyed the cluster itself. Before that, every attempt produced an error saying the drives "cannot be brought online due to a policy set by an administrator which is controlled by the fail-over cluster" (the wording is not exact). After I destroyed the cluster, I was able to bring the E and F drives back online and start SQL Server on both VMs.
My question is: how and why was it possible to bring down a cluster simply by adding 2 new nodes to it?