VMware is virtual machine software that provides a virtualized set of hardware (a video adapter, a network adapter, and hard disk adapters) to the guest operating system. VMware virtual machines are highly portable between computers, because every host looks nearly identical to the guest. In practice, a system administrator can pause operations on a virtual machine guest, move or copy that guest to another physical computer, and there resume execution exactly at the point of suspension. VMware's enterprise hypervisors for servers, VMware ESX and VMware ESXi, are bare-metal hypervisors that run directly on server hardware without requiring an additional underlying operating system.


Hi - I am setting up a SAP HANA environment based on VMware 6.5 using RHEL 7.5 VMs.

One of my VMs has a number of drives (100GB OS drive / 32GB Swap / 50GB Binaries / 32GB Shared Data / 750GB DB / 750GB Logs), 48vCPU, and 768GB RAM and resides in a number of different Datastores.

My OS and Swap files share the same disk and datastore.

When I boot my RHEL VM, it tries to create swap space to match the amount of RAM (768GB), which instantly fills my storage.

I want 32GB configured for Swap space.

Am I missing the obvious? Is the sudden grabbing of disk space to match the amount of RAM expected behaviour?

Any help or guidance is much appreciated.
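If the 768GB swap is coming from the installer's automatic partitioning, one way to pin it at install time is an explicit kickstart partition line. A minimal sketch, assuming the 32GB swap disk appears as sdb (the disk name is an assumption for illustration):

```text
# RHEL 7 kickstart fragment: fix swap at 32 GiB (32768 MiB) instead of the automatic default
part swap --size=32768 --ondisk=sdb
```

With an explicit size the installer will not derive swap from installed RAM.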
I am looking for desktop imaging system recommendations. A little background, we are currently using Acronis but have outgrown it. I am looking into Quest Kace and SmartDeploy.

I like the features of SmartDeploy much more and would prefer a more modern imaging system like that, as opposed to Kace. I know there are other options out there, like VMware Workspace ONE, Microsoft System Center Configuration Manager, and CloneZilla.

Does anyone have any experience with any of these or would be able to recommend something better? We would prefer a cloud solution, if there is one.

We moved a cluster to a datacenter in another city and would like to rename the hosts in the cluster accordingly (the hosts are named by the city they're located in, so I need to change from OLDCITYvmhost01, 02, 03, 04 to NEWCITYvmhost01, 02, 03, 04). I believe the process to be:

Put the host into maintenance mode.
Disconnect the host from the vCenter server.
Connect to the host directly (I am using the stand-alone client).
Go to Configuration > DNS and Routing > Click Properties... then edit the name accordingly.
Click OK.
Go to vCenter web client.
Go to the cluster.
Right-click the cluster and choose Add Host.
Choose renamed host and configure through wizard.
Exit maintenance mode.

Will VMs migrate back to this host, now with a new name? I suspect they will, given that the host is part of the cluster and the cluster is configured for HA and DRS. However, I wasn't sure whether the VMs that were migrated off the renamed host when it was put into maintenance mode would have some issue moving back.

Please let me know if there are other locations where I need to update the host name, any caveats or errors I may encounter with the cluster, or if there is a reason besides personal preference to use the host's web client to do this.
Much appreciated.
How do I get alerts sent to email or SMS from VMware Horizon, version 6.2?

I can see icons turning red for a node, and I need to get this exported to someone who will actually resolve the issues. Currently it's totally random whether someone sees the red node or a user contacts the team.
A VMware host is complaining about a fault: lost connection to the NFS server. What's confusing to me is that nothing has been lost; everything is up and working fine. The only thing I did a while back was add an additional host as a test and then take it out of the farm. I've restarted everything too and am still getting the same error.

Not sure how to further troubleshoot this.

I added a VLAN to a vSwitch and see it with 0 active ports.
Under Physical adapters, all VLANs show except this one, so what am I missing in adding the VLAN?
I am rebuilding my Veeam server, moving from 4.x to 9.5, and want to be sure I'm doing the right thing.

A consultant put a few jobs in place but didn't seem sure of the configuration.
I want a full backup set taken off-site every Friday and full on-site backups every night (one to a NAS and one to a local USB drive).

My main job goes to a NAS.
It has 20 restore points.
"Configure secondary destinations for this job" is not checked.
It's incremental.
"Enable application-aware processing" is checked.
"Enable guest file indexing" is checked.
Runs daily.

There's a weekly job on the F: drive.
1 restore point.
"Configure secondary destinations for this job" is not checked.
"Active full backup - Create active full backups periodically on Thursday" is set.
This job's drive I remove every Friday and take home; I rotate it using 8 hard drives.
"Enable application-aware processing" is checked.
"Enable guest file indexing" is not checked.
Runs every day at 8pm.

And there's a backup copy job called "backup on external drive".
Target is E:, a removable HD.
31 restore points.
Runs continuously.

1- Are all these configs efficient?
2- And is there any way, on the "backup on external drive" job, that I could BitLocker that drive when it is not in use?
I activated BitLocker but couldn't create a batch job that would decrypt before the backup and re-encrypt afterwards.

Veeam is erroring on backups for a handful of VMs due to errors creating snapshots.  I have gone into vSphere and tried to manually create a snapshot and also receive an error.  VMware is saying "An error occurred while taking a snapshot: msg.fileio.generic." & "An error occurred while saving the snapshot: msg.fileio.generic."  We had some storage issues a few days ago where one of our datastores on an HP MSA ran out of space.  I added 200GB to the datastore to free up some room.

I guess my question is: when creating a snapshot, one of the VMs I checked has a 100GB vmdk, but the datastore is showing 259.89GB free. Is there a way to verify the free space? I have about 10 VMs failing backups with the same errors, and they all seem to reside on the same datastore. Not sure where else to look to troubleshoot.
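As a rough sanity check on the numbers in the post (a sketch, not a VMware formula): a snapshot delta file can grow up to roughly the size of its base disk, plus guest memory if the snapshot captures it, so comparing the base vmdk size against datastore free space gives a first approximation of headroom:

```shell
# figures from the post: base vmdk 100 GB, datastore free 259 GB (fractional part dropped)
vmdk_gb=100
free_gb=259
# worst-case growth of one snapshot delta is roughly the base disk size
headroom_gb=$(( free_gb - vmdk_gb ))
echo "worst-case headroom: ${headroom_gb} GB"
```

If ten VMs on the same datastore all snapshot at once, their combined worst-case growth is what matters, which can exhaust space even when one VM alone looks fine.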

I have 5 ESXi hosts. All hosts can see/have connections to 2 HP Fibre Channel SANs and a few iSCSI SANs.
Each ESXi host has an add-on HP 8Gb FC PCI Express HBA card.

One of the ESXi hosts had its system board replaced. When ESXi came back up, it had no problem seeing the iSCSI SANs, but it was unable to see the 2 HP Fibre Channel SANs. Looking at the storage adapters, the vSphere client shows "vmhba1 Fibre Channel unknown" while "vmhba2 Fibre Channel online".

I have rescanned for hardware changes. I see no option to reattach/mount the storage. How do I get vmhba1 to reconnect to the HP SAN? All the other ESXi hosts can see it.
When setting a VM's Latency Sensitivity to High, I know you set the CPU and memory reservations to 100%, but are there other changes this setting makes to the scheduler or anything else?
Hi Experts,

We upgraded our vCenter to 6.7 (build 9433894) on Windows

and our ESXi servers to VMware ESXi 6.7.0 (build 8169922).

My question: I heard that VMware will discontinue vCenter on Windows. Is this right, and as of which version?

Also, will the appliance really be good enough to cover all we need? Our vCenter on Windows with an MS SQL Standard Edition database works very well.

And can the new appliance really cover third-party plugins, vCenter linked mode, etc.?

Also, they said 6.7 will have a full HTML5 GUI and Flash will go away. In which version?

They also said that with the new version, VMs with GPU sharing can be vMotioned even while powered on.

Kindly advise.
I have a physical HP DL 380 G7 server 80 GB of memory, 900 GB storage, 1 CPU (Processor cores per socket 4 and total logical processors 8, with hyperthreading enabled).

Each client (5 guests in total) works via their own terminal server, connected via a Remote Desktop Gateway and an ASA 5510 firewall on their own VLAN. The OS is Windows 2008 R2; the VMware version is 6, with the latest HP service packs on it.

Memory used totals 26 GB.

Hard disk space used is 576 GB.

I gave the guests a total of (vSockets × cores per socket =) 14 vCPUs. But as far as I can see I have only 8 logical CPUs available, so am I overprovisioning?

My clients are complaining about performance and freezing of the remote desktops they run on. I can also see it sometimes: the screen stops, my ping times rise to 15-25 ms within 2 seconds, and then it disappears again and they can work. (I did this from a client's terminal server and pinged the firewall and the internet.)

Is this because of the over-provisioning (14 vCPUs instead of the 8 available)? Does anyone have a solution or experience with this kind of slowness in VMware?
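For what it's worth, the overcommit ratio in the post works out as simple arithmetic (this is only a ratio, not a VMware-defined threshold):

```shell
# 14 vCPUs handed out across the 5 guests vs 8 logical CPUs (4 cores x 2 hyperthreads)
vcpus=14
logical_cpus=8
# integer percent: 175 means a 1.75:1 vCPU-to-pCPU overcommit
overcommit_pct=$(( vcpus * 100 / logical_cpus ))
echo "overcommit: ${overcommit_pct}%"
```

A 1.75:1 ratio is not extreme on its own; checking CPU Ready time per VM would show whether scheduling contention is actually behind the freezes.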
We have Ubuntu 18.04.1 LTS running on a virtual machine in VMWare Workstation 12.5.7 build-5813279.

The network settings appear to be correct; however, every time we boot it we get the grey icon with the question mark. We can disable the network and re-enable it and everything is fixed. Can you tell me how to fix this issue, as it is rather annoying?
I'm looking for some feedback or ideas on how to handle multiple HTTPS servers sharing same public IP behind a SonicWALL.

For example:

https://exchange.mydomain.com/owa = flows to Exchange server
https://application.mydomain.com/app = flows to application server

My client has an on-prem Exchange server.  I know from experience that you cannot just use a redirected port with Exchange because the server is constantly rewriting the URL (minus the modifications) making it inaccessible.  Exchange must use 443.

However, my client also wants to publish another site, on another IIS server on the same LAN as the Exchange server, behind the same Public IP, and is not keen on using a custom port.  The desire is to keep the URL as simple as possible for all the staff in the field.  This new IIS server will see a lot of traffic and is considered Mission Critical.

I know with the SonicWALL I could split the Internet connection, get a second Public IP from the ISP, therefore providing a second WAN interface.  That would solve the problem by presenting a second Public IP, but it would also have a negative effect on the available bandwidth for the first WAN connection.

There is absolutely no interest in moving Exchange to another port.  I'm not even sure that's entirely possible, but even if it is, Exchange is too well established where it is, and since it's already working I'd rather not tip over that apple cart regardless.

SonicWALL has a feature that I've seen called …

I am looking for a flatbed scanner for VDI with VMware Horizon. Can anyone recommend one that works well?
Wondering how to combine output from get-vm and get-adcomputer into the same CSV.

I am doing a project where I need to update VMware Tools, but only on dev domain-joined Windows servers.

I am hoping to combine the outputs from Get-VM and Get-ADComputer into a master list, matching records based on "Name".

For example:

$ADServers = Get-ADComputer -Filter {(OperatingSystem -like "*windows*server*") -and (Enabled -eq $true)}
$VMs = Get-VM

The CSV would include $VMs.Name, $ADServers.Name, $ADServers.DistinguishedName, and $VMs.PowerState.

$ADServers.Name and $VMs.Name should match in theory; however, not all VMs will be joined to the domain, so $ADServers.Name and $ADServers.DistinguishedName may be empty on some records.

Is this possible?
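The matching described above is essentially a left join on the name column. A tiny sketch with hypothetical sample files (in PowerCLI the equivalent is a per-VM lookup against the Get-ADComputer results):

```shell
# hypothetical sample data: every VM appears on the left; only domain-joined ones have an AD row
printf 'vm1,PoweredOn\nvm2,PoweredOff\n' | sort > vms.csv
printf 'vm1,CN=vm1\n' | sort > ad.csv
# left join on field 1; VMs with no AD record get an empty AD column
join -t, -a1 -e '' -o 1.1,1.2,2.2 vms.csv ad.csv
```

A trailing empty column marks a VM with no matching AD record, which is exactly the "may be empty on some records" case.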

String :-)

I inherited several Windows 7 VMs that are wasting a huge amount of disk space. My thought was to use Converter to trim some fat. The conversion worked, but the VM won't boot properly. I'm in an infinite loop of Starting Windows, a BSOD visible for a split second, then Windows Error Recovery. All options in Windows Error Recovery send me back into the loop.

Any suggestions on where to start would be greatly appreciated.


I am running several VMware 5.5 hosts on OEM servers, each using a MegaRAID LSI 9260-4i SATA RAID controller card with an accompanying battery backup unit. Hosts are managed by a vCenter 5.5 virtual server. Drive configurations on each host are SATA drives paired in RAID-1. I recently got an alarm sounding on one of the servers that one of the SATA drives had failed. I installed the Avago StorCLI utility, and using a SHOW HEALTH command saw that the state of the virtual drive was degraded and that the failed drive was on the LSI controller's connector P1. I shut the system down, pulled out the failed SATA drive, and replaced it with a same-size SATA drive. On startup the controller card began to beep again, as it still detected the degraded virtual drive. I hit CTRL-H to enter the LSI controller's WebBIOS and went to add the replacement drive to the degraded virtual drive. Unfortunately, I could not remove the failed drive from the original RAID-1 Drive Group 0, as it still showed the failed drive in the original mirror setup as a MISSING PD. As such, I could not add the replacement drive into the original Drive Group 0. When I tried to add the replacement drive into the logical drive setups, the interface created a new Drive Group 1 and wanted to add the new drive to that. Ultimately I had to declare the new drive a Global Hot Spare, and I then saw the system start to rebuild Drive Group 0 using the new hot spare …
Hello, I have a VM that shows this message. I can confirm the C and D drives have enough space. This is VMware ESX 6.0.

VM warning
We need to make sure our standard vSwitches and port groups are doing what we want them to for vMotion.

We have 4 nics on each ESXi host.

On each Host 2 are 10g interfaces and 2 are 1g interfaces

Here is the first problem: we have vMotion and Management on the same VLAN/subnet. I know this is wrong, but it is what it is in this environment, and that's how it's been since before me.

If we need to change it, then we will.

For the vMotion port group we want the 1G interface to be on standby and the 10G interface to be active.

So we currently have the following:

This is vSwitch0
Management port group on vSwitch0

This is vSwitch1
vMotion port group on vSwitch1
Now I noticed that on vSwitch0 we have the NIC teaming set to vmnic2 (10G) active and vmnic0 (1G) standby. But on the Management port group we have an override set to vmnic2 (10G) standby and vmnic0 (1G) active. So we are not using the 10G interface as active on Management.

On vSwitch1 and the vMotion port group, they are set the same. So it looks as though vMotion will use vmnic3 (10G), right? But we see the traffic in SolarWinds and it caps at 1Gb, so something is wrong.

I think we may just have to change the NIC teaming on the Management port group on vSwitch0 to have vmnic0 (1G) as…
So I have no ability to boot from UEFI on a ProLiant ML350 Gen8, which makes my 8 2TB drives useless in a RAID 10... the 2TB limit comes into effect. I tried ESXi but could not get that to work either. Any ideas?

This is how the new environment I acquired last month has all its storage set up.

They are using all THICK LUNs at the array/block level, and THIN provisioning at the VMware level.

Is this a good idea? Best practice?

I would like to hear your thoughts

Here is our SAN and VMware environment details:

SAN = all Fibre Channel EMC Unity 550Fs, All-Flash SAS Flash 4

VMware = vCenter 6.5 10000 Build 6816762
ESXi hosts = 5.0, 5.5, 6.0, and 6.5

The EMC Unity 550Fs have capability profiles for VVols, but we are not using them.

I am having an issue with VMware Workstation 15.0.2 (build-10952284).

What happens is that when I run guests, sometimes the host freezes for 45 minutes; I can't use the mouse or anything.

However the host machine can be pinged - so at some level it is responding.

After about 45 minutes it comes back to life as if nothing happened.

My host is running the latest version of Windows 10 Pro and it has 64 GB of RAM.

I have checked that I am running the latest Windows 10 updates. By the way, the freeze only happens when I am running VMware with guests; if VMware is not running, I have no machine issues.

Any suggestions on how to resolve this?



I have 2 different forests; one has Exchange 2007 and the other has Exchange 201. Both are in VMware.

My question is: can I move the Exchange VM live to a different datastore, or is it always better to shut it down first?
I created VMs on an ESXi host, and I would like to use one of the VMs as the domain controller and join the other VMs to the domain.
I added the Active Directory, DHCP, and DNS roles to the domain controller VM.

The problem is that when I try to join the domain from another VM, I receive the error:
Note: This information is intended for a network administrator.  If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller (AD DC) for domain "test.local":

The error was: "This operation returned because the timeout period expired."
(error code 0x000005B4 ERROR_TIMEOUT)

The query was for the SRV record for _ldap._tcp.dc._msdcs.test.local

The DNS servers used by this computer for name resolution are not responding. This computer is configured to use DNS servers with the following IP addresses:

(no addresses found)
Verify that this computer is connected to the network, that these are the correct DNS server IP addresses, and that at least one of the DNS servers is running.

Also, the vSwitch topology is shown in the attached vswitch.png.
What is the problem?
