Kissel-B

asked on

Delete cluster settings and network card config via PowerShell

I was creating a test environment for hyper-converged Storage Spaces Direct and messed up some of the network card and SET switch settings. Is there a way to remove the configuration I applied without reinstalling the OS or destroying the cluster?
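A rough sketch of the kind of cleanup this usually involves; the switch name "S2DSwitch" and adapter names "NIC1"/"NIC2" below are placeholders, so adjust them to your environment:

# Remove the SET virtual switch; this releases the bound physical NICs.
Remove-VMSwitch -Name "S2DSwitch" -Force

# Remove any host (management OS) vNICs that were created on top of it.
Get-VMNetworkAdapter -ManagementOS | Remove-VMNetworkAdapter -Confirm:$false

# Reset advanced properties (RDMA, VLAN, etc.) and clear IP config on the physical NICs.
Reset-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -DisplayName "*"
Remove-NetIPAddress -InterfaceAlias "NIC1","NIC2" -Confirm:$false
Set-NetIPInterface -InterfaceAlias "NIC1","NIC2" -Dhcp Enabled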
ASKER CERTIFIED SOLUTION
Daryl Bamforth

SOLUTION
Philip Elder

Did you get yourself straightened out?
Kissel-B

ASKER

For the most part I am building the final config now after all the testing. The thing that really angers me is that I wanted a two-tier system, with SSDs and regular disks for capacity. The SSD add-in cards I bought were NVMe; I planned to use them for cache and storage, so I got 8 of the same model. When S2D sees HDDs it automatically adds all the NVMe drives as cache drives, and there is no way I could get it to change that. So I am removing the HDDs and going all flash. It doesn't have the capacity I want, but the performance is needed.
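For an all-flash layout with a single drive type, the cache is normally just disabled when enabling S2D. A minimal sketch (the pool name is only an example):

# All-flash: no cache devices wanted, so disable the cache tier explicitly.
Enable-ClusterStorageSpacesDirect -CacheState Disabled -PoolFriendlyName "S2D Pool" -Confirm:$false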
Please clarify? What are you looking for specifically?

NVMe will always be chosen as a cache layer when SSDs and/or HDDs are present in the solution.

So, to get a two tier tenant facing solution:
NVMe: Cache only
SSD: SSD Tier in Pool
HDD: Capacity Tier in Pool

Or, as you have done all-flash. :)
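A quick way to confirm how S2D has bound the drives (a sketch; a Usage of "Journal" indicates a cache device, "Auto-Select" a capacity device):

# Show how each physical disk is classified and used.
Get-PhysicalDisk |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, Usage, @{ n = 'Size(GB)'; e = { [math]::Round($_.Size / 1GB) } }

# Tiers created when S2D was enabled (e.g. Performance / Capacity).
Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName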
I wanted a two-tier system: SSD for performance and HDD for capacity. I got 8 Intel DC P3600 PCIe cards and 8 4TB HDDs that are in an external SAS enclosure. According to the documentation you need NVMe for cache that can withstand 4 drive writes per day, and you can use other NVMe drives in the system; the cache will select the fastest automatically. I could have used other NVMe drives, like an Intel DC P3500, for the SSD tier, but I got a good deal on the P3600s, and nothing I saw said they had to be a different model drive or that they could not be add-in cards. S2D automatically takes all 8 P3600s and puts them in as cache for the HDDs, and performance isn't that great due to the HDDs. What I wanted in each node was 2 P3600s for cache, 2 for the SSD performance tier, and 4 4TB SAS drives for capacity. If I auto-configure they are all cache; if I manually configure the storage pool it says the pool is not set up correctly for S2D and to rerun Enable-ClusterS2D.
Using PowerShell I believe you can designate one or two of the NVMe drives as SSDs manually. Maybe that's the way to go.

Catch is, if there is a failure one would need to use an Intel utility with the node offline to figure it out.
Do you have any idea how to accomplish this? At one point I was able to get two out and the system recognized two tiers. The issue was that when you use the Enable-ClusterS2D command all the NVMe partitions are taken and show as being filled, so when I went to create the VHD there was no room left. You can't access them through Disk Management or diskpart, and the Reset-PhysicalDisk PowerShell command does not format or remove the partition.
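For reference, once S2D has been disabled on the pool (or the disks have been released from it), leftover partitions are usually wiped with something along these lines. It is destructive, so the boot/system checks matter:

# Destructive: wipes every non-boot, non-system disk that already has a partition style.
Get-Disk |
    Where-Object { $_.Number -ne $null -and -not $_.IsBoot -and -not $_.IsSystem -and $_.PartitionStyle -ne 'RAW' } |
    ForEach-Object {
        $_ | Set-Disk -IsOffline:$false
        $_ | Set-Disk -IsReadOnly:$false
        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
    }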
Set the NVMe drives as SSDs before running the Enable-ClusterS2D command.
They are already identified under media type as SSD.
There are two other options that I can think of for that setting:
HDD
Other

Perhaps set one of those, enable the cluster, then once it's in, flip them back to SSD.
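A sketch of the flip-back step, assuming the P3600s can be matched by friendly name (the filter is illustrative):

# After the cluster/pool exists, re-mark the NVMe devices meant for the capacity tier as SSD.
Get-PhysicalDisk -FriendlyName "*P3600*" |
    Where-Object MediaType -eq 'HDD' |
    Set-PhysicalDisk -MediaType SSD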
Philip, I am trying to run the cluster validation tests and everything comes back OK except the Storage Spaces Direct portion. This is the error; any ideas?

An error occurred while executing the test.
One or more errors occurred.

There was an error retrieving information about the Disks from node 'MIHCS2D02.MI.local'.

ERROR CODE : 0x80131500;
NATIVE ERROR CODE : 1.
 The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.
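The Storage Spaces Direct category can also be rerun on its own to narrow this down; a sketch (the second node name is a placeholder):

# Rerun only the Storage Spaces Direct validation category against both nodes.
Test-Cluster -Node "MIHCS2D02.MI.local", "<OtherNodeFQDN>" -Include "Storage Spaces Direct"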
Not sure what was going on. When I ran the cluster validation from one of the nodes it threw the error; when I ran it from my workstation it verified. Very strange.
Firewall exceptions on the local machine allowed the process to complete is my guess.
I am having an issue which just popped up with PowerShell remote management, the same issue I had with the cluster validation. I am getting the WinRM error that it could not connect to the other cluster node. I have checked the listener and the firewall, and even disabled the firewall; it's happening on both nodes. I also tried the GPO. The weird thing is that I just rebuilt these nodes, since I have been messing with them so much, and this didn't happen before when I was working on it.
I got it. I had to change the GPO to * from the IP range.
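For reference, the equivalent local (non-GPO) setting and a quick connectivity check look roughly like this; "*" trusts every host, which is fine in a lab but broad for production:

# Inspect / set the WinRM client TrustedHosts list (only applies when not enforced by GPO).
Get-Item WSMan:\localhost\Client\TrustedHosts
Set-Item WSMan:\localhost\Client\TrustedHosts -Value '*' -Force

# Verify WinRM connectivity to the other node.
Test-WSMan -ComputerName MIHCS2D02.MI.local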