
Posts

vCloud vApp fails to start with error "Edge deployment failed on host as the message bus infra on the host is not green"

Recently, we faced an issue when starting a vApp in the org due to the error "Edge deployment failed on host host-33 as the message bus infra on the host is not green. Please call API to re-sync the message bus and after successful re-sync, try edge installation., error code 10921". This issue comes entirely from the ESXi / NSX end and there is no issue in vCloud Director itself, but it severely affects power-on operations for the tenants in vCloud Director.

Error stack:
++++++++++++++++++++++++++++++++++++++++++++++++++++
[ 3455c8b1-d86f-4ee1-b982-d99f2737a5a3 ] Internal Server Error
- java.util.concurrent.ExecutionException: com.vmware.ssdc.util.LMException: Unable to start vApp "Test-01".
- com.vmware.ssdc.util.LMException: Unable to start vApp "Test-01".
- Unable to start vApp "Test-01".
- Unable to deploy network "Fence(urn:uuid:577df80b-f352-45f3-9739-8819fae1a02a)". com.vmware.vcloud.common.network.VsmEx
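The error text asks you to call the API to re-sync the message bus for the affected host. Below is a minimal Python sketch of that call, assuming the NSX-v REST endpoint POST /api/2.0/nwfabric/configure?action=synchronize, a placeholder NSX Manager address and credentials, and the host MOID (host-33) taken from the error message; verify the endpoint against your NSX API guide before using it.

# Sketch: re-sync the NSX message bus infrastructure for one ESXi host.
# NSX Manager address and credentials are placeholders; host-33 comes from
# the error message above.
import requests

NSX_MANAGER = "nsx.example.local"        # placeholder NSX Manager
AUTH = ("admin", "password")             # placeholder credentials
HOST_MOID = "host-33"                    # host MOID from the error

body = f"""<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
  <resourceConfig>
    <resourceId>{HOST_MOID}</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>"""

resp = requests.post(
    f"https://{NSX_MANAGER}/api/2.0/nwfabric/configure?action=synchronize",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use proper certificate validation in production
)
print(resp.status_code, resp.text)

After a successful re-sync, retry the vApp power-on so vCloud Director can redeploy the edge.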

LogicalSwitch virtualwire-xxxx is marked as missing | NSX

NSX Manager detected a backing DV portgroup for an NSX logical switch is missing in Virtual Center. There was a request to create a port group for a vApp network when deploying a vApp from the vcloud director. But due to some reason the port group was created and removed in the dvsiwtch, and in my case, the vApp was deleted from the vCloud director so the network also removed from the VC and NSX. If  there is an error on the NSX UI follow the below solutions, +++++++++++++++++++++++++++++++++ Dec  6 09:42:00 manager 2017-12-06 09:42:00.597 UTC  INFO DCNPool-1 VirtualWireInFirewallRuleNotificationHandler:58 - Recieved VDN CREATE notification for context virtualwire-33172:VirtualWire Dec  6 09:42:00 manager 2017-12-06 09:42:00.600 UTC  INFO DCNPool-1 VirtualWireDCNHandler:43 - Recieved VDN CREATE notification for context virtualwire-33172:VirtualWire Dec  6 09:42:00 manager 2017-12-06 09:42:00.696 UTC  INFO http-nio-127.0.0.1-7441-exec-917 NetworkFeatureConfigUtil:188 - Added
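Before recreating anything, it helps to confirm which dvSwitch and portgroup the logical switch expects as its backing. A minimal Python sketch, assuming the NSX-v API GET /api/2.0/vdn/virtualwires/{id}, a placeholder NSX Manager address and credentials, and the virtualwire ID reported in the alarm/logs above:

# Sketch: fetch the logical switch (virtual wire) definition from NSX Manager
# to see which dvSwitch / portgroup is expected to back it.
# NSX Manager address and credentials are placeholders.
import requests

NSX_MANAGER = "nsx.example.local"
AUTH = ("admin", "password")
VWIRE_ID = "virtualwire-33172"   # ID from the NSX alarm / manager logs

resp = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/virtualwires/{VWIRE_ID}",
    auth=AUTH,
    verify=False,  # lab only
)
print(resp.status_code)
print(resp.text)  # XML includes the backing vdsContext / portgroup details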

vCloud Director vApp power on failure due to vim.fault.HAErrorsAtDest

When I was trying to power on a vApp with 12 VMs in vCloud Director, the power-on operation failed because one VM could not power on on its ESXi host, with the error "The host is reporting errors in its attempts to provide vSphere HA support".
++++++++++++++++
Underlying system error: com.vmware.vim.binding.vim.fault.HAErrorsAtDest
vCenter Server task (moref: task-689) failed in vCenter Server 'TEST-VC1' (73dc8fb7-28d6-41b3-86dd-09126c88aebe).
- The host is reporting errors in its attempts to provide vSphere HA support.
+++++++++++++++
I searched for the fault message vim.fault.HAErrorsAtDest and found the following on http://pubs.vmware.com/:
http://pubs.vmware.com/vsphere-6-5/index.jsp?topic=/com.vmware.wssdk.apiref.doc/vim.fault.HAErrorsAtDest.html
Fault Description: The destination compute resource is HA-enabled, and HA is not running properly. This will cause the following problems:
1) The VM will not have HA protection.
2
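The usual remediation for this fault is to run "Reconfigure for vSphere HA" on the affected host. A minimal pyVmomi sketch of that step, assuming a placeholder vCenter address, credentials, and host name; ReconfigureHostForDAS_Task is the vSphere API call behind the vSphere Client action:

# Sketch: trigger "Reconfigure for vSphere HA" on the host reporting HA errors.
# vCenter address, credentials, and host name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only
si = SmartConnect(host="test-vc1.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    if host.name == "esx1.example.local":         # the problem host
        task = host.ReconfigureHostForDAS_Task()  # reinstall/restart the HA (FDM) agent config
        print("Started HA reconfigure task:", task.info.key)
Disconnect(si)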

How to fix | vSAN CLOMD Liveness - Part II (Virtual machine creation failed)

The vSAN CLOMD daemon may fail when trying to repair objects with 0-byte components. While cloning a VM from a template through vRA, the clone and vCenter vMotion operations failed with the following errors, and the vApp deployment failed because the clomd service had failed on the host. Read about the importance of clomd here: https://virtuawisdom.blogspot.in/2017/11/how-to-fix-vsan-clomd-liveness-part-i.html
Task Details:
Name: clone
Status: Cannot complete file creation operation.
Start Time: May 30, 2017 5:34:13 AM
Completed Time: May 30, 2017 5:35:13 AM
State: error
Error Stack: A CLOM is not attached. This could indicate that the clomd daemon is not running. Failed to create the object.
Additional Task Details:
Error Type: CannotCreateFile
Task Id: Task
Cancelable: true
Canceled: false
Description Id: VirtualMachine.clone
Event Chain Id: 291778
/var/log/clomd.log
+++++++++++++++++++++++++++++++++++++++++++++
2017-05-25T18:20:12.755Z 26738391 (111018916128)(opID:0)main: Clomd is star
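When clomd has died on a host, checking and restarting the daemon is usually the first step. A minimal Python sketch over SSH using paramiko, assuming SSH is enabled on the host and placeholder hostname/credentials; /etc/init.d/clomd is the standard service script on ESXi:

# Sketch: check and restart the clomd daemon on the affected ESXi host.
# Hostname and credentials are placeholders; SSH must be enabled on the host.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("esx1.example.local", username="root", password="password")

for cmd in ("/etc/init.d/clomd status", "/etc/init.d/clomd restart"):
    stdin, stdout, stderr = ssh.exec_command(cmd)
    print(cmd, "->", stdout.read().decode().strip())

ssh.close()

Once clomd is running again, retry the clone / vApp deployment.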

Horizon View Pools stuck in Deleting state

Recently, I had an issue with two View desktop pools that were stuck in a Deleting state in Horizon View Manager. We are running Horizon View 7.2, and this kind of issue has been around since View 4.x. Of the two pools, I was able to delete one by simply removing its VMs from Resources --> Machines, filtered by pool name. But when doing the same thing for the TEST-POOL I got the error below:
"Machine","Desktop Pool","DNS Name","User","Host","Agent Version","Datastore","Status"
"TEST-POOL-046","TEST-POOL","TEST-POOL-046.TEST.LOCAL","","esx3.TEST.LOCAL","Unknown","[TEST-VCENTER1VSAN]","Status:Error Status Errors:Nov 30, 2017 10:38:20 PM PST: Failed to delete VM - null"
So I logged in to the connection server and found the following error logs in C:\programdata\vmware\vdm\logs\debug-2017-11-30-221023.txt
++++++++
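To see which machines the pool deletion is choking on, you can scan the connection server debug log for the failed-delete entries. A minimal Python sketch, assuming the log path shown above and using "Failed to delete VM" from the status message as the marker string:

# Sketch: pull the "Failed to delete VM" entries out of the View connection
# server debug log to identify the machines blocking the pool deletion.
# The path and marker string come from the excerpt above.
log_path = r"C:\programdata\vmware\vdm\logs\debug-2017-11-30-221023.txt"

with open(log_path, errors="ignore") as fh:
    for line in fh:
        if "Failed to delete VM" in line:
            print(line.rstrip())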

How to fix | vSAN CLOMD Liveness - Part I

You will see a CLOMD liveness issue on ESXi hosts in the following scenarios: if any ESXi host is disconnected, the CLOMD liveness state of the disconnected host is shown as unknown; if the Health service is not installed on a particular ESXi host, the CLOMD liveness state of all the ESXi hosts is also reported as unknown; and if the CLOMD service is not running on a particular ESXi host, the CLOMD liveness state of that host is abnormal. The Cluster Health - CLOMD liveness check in the vSAN Health Service provides details on why it might report an error. It checks whether the Cluster Level Object Manager daemon (CLOMD) is alive, first by checking that the service is running on all ESXi hosts, and then by contacting the service to retrieve run-time statistics to verify that CLOMD can respond to inquiries. CLOMD (Cluster Level Object Manager Daemon) plays a key role in the operation of a vSAN cluster. It runs on every ESXi host
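The first part of that health-check logic (confirm the service is running on every host) can also be verified manually. A minimal Python sketch using paramiko over SSH, with placeholder host names and credentials, that runs the standard /etc/init.d/clomd status command on each cluster member:

# Sketch: verify clomd is running on every ESXi host in the vSAN cluster,
# roughly mirroring the first step of the CLOMD liveness health check.
# Host names and credentials are placeholders; SSH must be enabled.
import paramiko

HOSTS = ["esx1.example.local", "esx2.example.local", "esx3.example.local"]

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username="root", password="password")
    _, stdout, _ = ssh.exec_command("/etc/init.d/clomd status")
    print(host, "->", stdout.read().decode().strip())
    ssh.close()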

How to fix | ESXi Virtual SAN Health service installation

I encountered an issue with the ESXi Virtual SAN Health service installation in one of the vSAN clusters.
Step 1: I checked whether all the ESXi hosts are running the same version:
VMware ESXi 6.0.0 build-5224934 VMware ESXi 6.0.0 Update 3 on ESX1
VMware ESXi 6.0.0 build-5224934 VMware ESXi 6.0.0 Update 3 on ESX2
VMware ESXi 6.0.0 build-5224934 VMware ESXi 6.0.0 Update 3 on ESX3
VMware ESXi 6.0.0 build-5224934 VMware ESXi 6.0.0 Update 3 on ESX5
VMware ESXi 6.0.0 build-5224934 VMware ESXi 6.0.0 Update 3 on ESX4
They are all on the same version, so we can go on to check whether the vSAN health VIB is installed. From the KB https://kb.vmware.com/s/article/2109874: on the vSphere 6.0 Update 2 release, none of the other health checks will be conducted until all the hosts are upgraded to 6.0 Update 2 (when running the latest version, vSAN 6.2), to avoid false alarms. But we have all the ESXi hosts on ESXi 6.0 Update 3. "Install the vSAN Health Service V
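Step 1 (confirming matching ESXi versions) can be done from vCenter rather than host by host. A minimal pyVmomi sketch, assuming a placeholder vCenter address and credentials; it prints each host's product name and build so any mismatch stands out:

# Sketch: list the ESXi version/build of every host so a version mismatch in
# the vSAN cluster is easy to spot. vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    about = host.config.product                  # vim.AboutInfo for the host
    print(host.name, "->", about.fullName, "build", about.build)
Disconnect(si)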