
Horizon View Pools stuck in Deleting state


Recently, I had an issue with two View desktop pools that were stuck in the Deleting state in Horizon View Manager. We are running Horizon View 7.2, but this issue has existed since View 4.x.
[Screenshot: Horizon View desktop pools stuck in the Deleting state]

Of the two pools, I was able to delete one simply by removing its machines from Resources --> Machines, filtered by pool name. But when I did the same for TEST-POOL, I got the following error:
"Machine","Desktop Pool","DNS Name","User","Host","Agent Version","Datastore","Status"
"TEST-POOL-046","TEST-POOL","TEST-POOL-046.TEST.LOCAL","","esx3.TEST.LOCAL","Unknown","[TEST-VCENTER1VSAN]","Status:Error Status Errors:Nov 30, 2017 10:38:20 PM PST: Failed to delete VM - null"
So I logged in to the Connection Server and found the following errors in the logs:
C:\programdata\vmware\vdm\logs\debug-2017-11-30-221023.txt
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2017-11-30T22:48:16.169-08:00 DEBUG (0B74-1010) <propagate-8512659f-1a31-4f16-aa4a-9ae9370566d1> [VirtualCenterDriver] Pool TEST-POOL recovery: waiting for result of recovery action for TEST-POOL-046
2017-11-30T22:49:03.349-08:00 DEBUG (0608-168C) <HAResourceManager> [PendingOperationSet] com.vmware.vdi.desktopcontroller.VirtualCenterDriver@2c5ce1d8 Received Prepare from TEST-CONSERVER1 for DeletingLC on /test-vcenter1/vm/TEST-POOL/TEST-POOL-046(/test-vcenter1/vm/TEST-POOL/TEST-POOL-046)
2017-11-30T22:49:03.349-08:00 DEBUG (0608-168C) <HAResourceManager> [VirtualCenterDriver] Accepting Prepare from TEST-CONSERVER1 for DeletingLC on /test-vcenter1/vm/TEST-POOL/TEST-POOL-046(/test-vcenter1/vm/TEST-POOL/TEST-POOL-046)
2017-11-30T22:49:03.357-08:00 DEBUG (0B74-1010) <propagate-8512659f-1a31-4f16-aa4a-9ae9370566d1> [VirtualCenterDriver] cn=TEST-POOL,ou=server groups,dc=vdi,dc=vmware,dc=int::determineDeletingChanges::Attempting deletion of ipHostNumber=TEST-POOL-046.TEST.LOCAL/-/192.168.10.2, ipHostNumberOverride not set (VM was marked for deletion)
BROKER_PROVISIONING_SVI_ERROR_REMOVING_VM 
Provisioning error occurred for Machine TEST-POOL-046: Unable to remove Machine from inventory 
Attributes: 
MachineId=29539968-1089-4240-bb00-043f1580559e 
MachineName=TEST-POOL-046 
Node=TEST-CONSERVER1.TEST.LOCAL 
Severity=ERROR 
Time=Thu Nov 30 22:49:05 PST 2017 
Module=Broker 
Source=com.vmware.vdi.desktopcontroller.PendingOperation 
Acknowledged=true 
2017-11-30T22:49:05.591-08:00 ERROR (0B74-8F18) <PendingOperation-/test-vcenter1/vm/TEST-POOL/TEST-POOL-046-DeletingLC> [PendingOperation] Pool TEST-POOL::Unable to remove from inventory VM /test-vcenter1/vm/TEST-POOL/TEST-POOL-046 - null
2017-11-30T22:49:05.591-08:00 DEBUG (0B74-8F18) <PendingOperation-/test-vcenter1/vm/TEST-POOL/TEST-POOL-046-DeletingLC> [EventLogger] Error_Event:[BROKER_PROVISIONING_SVI_ERROR_REMOVING_VM] "Provisioning error occurred for Machine TEST-POOL-046: Unable to remove Machine from inventory": MachineId=29539968-1089-4240-bb00-043f1580559e, MachineName=TEST-POOL-046, Node=TEST-CONSERVER1.TEST.LOCAL, Severity=ERROR, Time=Thu Nov 30 22:49:05 PST 2017, Module=Broker, Source=com.vmware.vdi.desktopcontroller.PendingOperation, Acknowledged=true

2017-11-30T22:49:05.591-08:00 DEBUG (0B74-8F18) <PendingOperation-/test-vcenter1/vm/TEST-POOL/TEST-POOL-046-DeletingLC> [VmInformation] ::Updating VM state /test-vcenter1/vm/TEST-POOL/TEST-POOL-046 ERROR ERROR: Failed to delete VM - null 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
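The entries above can be located quickly by searching the debug logs on the Connection Server for the machine name or the error text. A minimal sketch, assuming the default log location and using the file name from this case (adjust the search strings and file names to your environment):
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
REM Search all debug logs for the affected machine name (case-insensitive)
findstr /i /c:"TEST-POOL-046" "C:\ProgramData\VMware\VDM\logs\debug-*.txt"

REM Or search a specific log for the error text
findstr /i /c:"Failed to delete VM" "C:\ProgramData\VMware\VDM\logs\debug-2017-11-30-221023.txt"
+++++++++++++++++++++++++++++++++++++++++++++++++++++++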
Google searches kept pointing to KB https://kb.vmware.com/s/article/2015112, but the solutions there are based on editing or removing entries directly from the ADAM database or the Composer database.
So I wanted to try the powerful ViewDbChk tool (viewdbcheck.cmd), which can be found under C:\Program Files\VMware\VMware View\Server\tools\bin\ on the Connection Server.
The ViewDbChk tool allows administrators to scan for and fix provisioning errors that cannot be addressed using View Administrator. Provisioning errors can occur when there are inconsistencies between the LDAP, vCenter, and View Composer databases. These inconsistencies can be caused by (but are not limited to) direct editing of the vCenter inventory, restoring a backup, or a long-term network problem.

This tool allows VMware View administrators to scan for machines which cannot be provisioned and to remove all invalid database entries from the affected databases.
Read more about ViewDbChk in KB 2118050: https://kb.vmware.com/s/article/2118050.
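Beyond the interactive scan used below, the KB also describes options for targeting a single pool or machine. A sketch of the invocations, with option names taken from that KB (run viewdbcheck.cmd --help on your Connection Server to confirm the exact syntax; the machine and pool names below are the ones from this case):
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
REM Scan every pool for machines that cannot be provisioned,
REM allowing at most 10 removals in a single run
viewdbcheck.cmd --scanMachines --limit 10

REM Target a single broken machine directly
viewdbcheck.cmd --removeMachine --machineName TEST-POOL-046 --desktopName TEST-POOL
+++++++++++++++++++++++++++++++++++++++++++++++++++++++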
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
C:\Program Files\VMware\VMware View\Server\tools\bin>viewdbcheck.cmd --scanMachines --limit 10 
Processing desktop pool "TEST-POOL"
   Desktop Pool Name: TEST-POOL
   Desktop Pool Type: AUTO_LC_TYPE
   VM Folder: /TEST-VCENTER1/vm/TEST-POOL/
   Desktop Pool Disabled: true
   Desktop Pool Provisioning Enabled: false
Checking connectivity...
Machine "TEST-POOL-046" has errors
   VM Name: TEST-POOL-046
   Creation Date: 2/6/17 7:46:08 PM PST
   MOID: vm-13657
   Clone Id: 93e5fd0b-d51e-4a1f-bfa3-3c1f043c3226
   VM Folder: /TEST-VCENTER1/vm/TEST-POOL/TEST-POOL-046
   VM State: ERROR
   VM Clone Error: Failed to delete VM - null
   VM Clone Error Time: Nov 30, 2017 11:24:00 PM PST
   View Composer Error: Failed to delete VM - null
 Do you want to remove the desktop machine "TEST-POOL-046"? (yes/no):yes
Shutting down VM "/TEST-VCENTER1/vm/TEST-POOL/TEST-POOL-046"...
Archiving persistent disks...
Destroying View Composer clone "93e5fd0b-d51e-4a1f-bfa3-3c1f043c3226"...
Removing ThinApp entitlements for machine "/TEST-VCENTER1/vm/TEST-POOL/TEST-POOL-046"...
Removing machine "/TEST-VCENTER1/vm/TEST-POOL/TEST-POOL-046" from LDAP...
Running delete VM scripts for machine "/TEST-VCENTER1/vm/TEST-POOL/TEST-POOL-046"...
Provisioning has been disabled for the desktop pool "TEST-POOL". Do you want to enable it? (yes/no):no
Do you want to enable the desktop pool "TEST-POOL"? (yes/no):no
java.lang.Exception: ** ERROR: Failed to find desktop for "cn=TEST-POOL,ou=server groups,dc=vdi,dc=vmware,dc=int" **
        at com.vmware.vdi.viewdbchk.desktop.MiniPoolInformation.disableDesktopAndPool(MiniPoolInformation.java:245)
        at com.vmware.vdi.viewdbchk.command.PoolHelper.promptEnablePool(PoolHelper.java:214)
        at com.vmware.vdi.viewdbchk.command.ScanMachines.execute(ScanMachines.java:183)
        at com.vmware.vdi.viewdbchk.command.ViewDbCmd.execute(ViewDbCmd.java:409)
        at com.vmware.vdi.viewdbchk.ViewDbChk.go(ViewDbChk.java:129)
        at com.vmware.vdi.viewdbchk.ViewDbChk.main(ViewDbChk.java:62)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
When running viewdbcheck.cmd you can specify the desktop pool name and limit how many VMs the tool is allowed to remove. From the output above, the machine TEST-POOL-046 had a provisioning error, and that error was what prevented the desktop pool from being deleted in Horizon View Manager. After removing the VM, the tool asked whether to re-enable the pool; since the pool was already stuck in the Deleting state I answered no, and as soon as the machine entry was gone the pool itself was deleted immediately.
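To double-check that nothing else was left behind, the scan can be re-run after the cleanup. A quick sketch, again assuming the option names from KB 2118050 (the pool name is from this example; --findDesktop should simply report that the pool no longer exists):
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
REM Re-scan for any remaining machines with provisioning errors
viewdbcheck.cmd --scanMachines --limit 10

REM Confirm the stuck pool is gone from the LDAP (ADAM) database
viewdbcheck.cmd --findDesktop --desktopName TEST-POOL
+++++++++++++++++++++++++++++++++++++++++++++++++++++++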
