
vSAN Disk group is in "Unhealthy State"


If you are running VMware vSAN 6.0, 6.1, or 6.2, there is a high chance that you will see this issue with the following RAID controllers:

Cisco 12G SAS Modular Raid Controller
DELL FD332-PERC (Dual ROC)
DELL FD332-PERC (Single ROC)
DELL PERC H730 Adapter
DELL PERC H730 Mini ==> We use this RAID controller in our Dell R620/R630 servers
DELL PERC H730P Adapter 
DELL PERC H730P Mini
Huawei Technologies Co. Ltd. SR 430C
Lenovo ThinkServer RAID 720i AnyRAID Adapter
Lenovo ThinkServer RAID 720ix AnyRAID Adapter
Lenovo ServeRAID 5210e SAS/SATA Controller
Lenovo ServeRAID M5210 SAS/SATA Controller
LSI MegaRAID SAS 9361-8i
LSI MegaRAID SAS 9362-8i
Supermicro SMC3108
This issue is typically triggered by a physical disk drive failure, or by one of the RAID controllers listed above resetting the disk drives.
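To quickly check whether a host is exposed, you can match its controller model string (for example, as reported by `esxcli storage core adapter list` or your hardware inventory) against the list above. A minimal sketch; the helper name and the substring-matching approach are illustrative, not a VMware API:

```python
# Illustrative helper: check a controller model string against the
# affected-controller list from this post. Shortened entries cover the
# Adapter/Mini and H730/H730P variants via substring matching.
AFFECTED_CONTROLLERS = [
    "Cisco 12G SAS Modular Raid Controller",
    "DELL FD332-PERC",
    "DELL PERC H730",
    "Huawei Technologies Co. Ltd. SR 430C",
    "Lenovo ThinkServer RAID 720i",
    "Lenovo ServeRAID 5210e",
    "Lenovo ServeRAID M5210",
    "LSI MegaRAID SAS 9361-8i",
    "LSI MegaRAID SAS 9362-8i",
    "Supermicro SMC3108",
]

def is_affected(model: str) -> bool:
    """Return True if the controller model matches an affected entry."""
    return any(entry.lower() in model.lower() for entry in AFFECTED_CONTROLLERS)

print(is_affected("DELL PERC H730 Mini"))   # True
print(is_affected("HPE Smart Array P440"))  # False
```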

Depending on the failure, either a single disk group or all of the disk groups on an ESXi host in the vSAN cluster may go into an unhealthy state.
A disk group becomes unhealthy only when its cache disk fails, not when a capacity disk fails: when the flash cache device in a disk group is impacted by a failure, the whole disk group is impacted.
The disk group status in the vSphere Web Client then shows the overall disk group as “Unhealthy”, and the magnetic disks in the same disk group show “Flash disk down”.

vsan.disks_stats

This RVC command is very useful for determining the following information about each disk:
Number of components on a disk (for SSDs, this is always 0)
Total disk capacity
The percentage of disk that is being consumed
Health Status of the disk
The version of the on-disk format
+++++++++++++++++++++++++++++++++++++++++++++
vsan.disks_stats /test-vc-2.local.com/vRack-Datacenter/computers/vsancluster/hosts/192.168.1.10

+----------------------+---------------+-------+------+------------+---------+----------+-------------+
| DisplayName          | Host          | isSSD | Num Comp | Capacity | Used   | Reserved | Status     |
+----------------------+---------------+-------+------+------------+---------+----------+-------------+
| naa.50000396cc89a8c1 | 192.168.1.10 | SSD | 0 | 1490.41 GB | 1.69 % | 0.00 % | FAILED (v2) |
| naa.5000c5008fafefeb | 192.168.1.10 | MD | 22 | 1106.62 GB | 37.56 % | 37.24 % | FAILED (v2) |
| naa.5000c5008fb00f23 | 192.168.1.10 | MD | 23 | 1106.62 GB | 40.90 % | 40.67 % | OK (v2) |
| naa.5000c5008fb17f5f | 192.168.1.10 | MD | 21 | 1106.62 GB | 45.78 % | 45.55 % | OK (v2) |
| naa.5000c5008fb0d70f | 192.168.1.10 | MD | 21 | 1106.62 GB | 45.69 % | 45.55 % | FAILED (v2) |
+----------------------+---------------+-------+------+------------+---------+----------+-------------+
| naa.50000396cc89a8c5 | 192.168.1.10 | SSD | 0 | 1490.41 GB | 2.29 % | 0.00 % | FAILED (v2) |
| naa.5000c5008fb140f3 | 192.168.1.10 | MD | 23 | 1106.62 GB | 41.13 % | 40.49 % | OK (v2) |
| naa.5000c5008fafd21f | 192.168.1.10 | MD | 28 | 1106.62 GB | 35.67 % | 35.43 % | FAILED (v2) |
| naa.5000c5008fb0c10b | 192.168.1.10 | MD | 27 | 1106.62 GB | 35.77 % | 30.37 % | FAILED (v2) |
| naa.5000c5008fb168cb | 192.168.1.10 | MD | 21 | 1106.62 GB | 30.73 % | 30.37 % | FAILED (v2) |
+----------------------+---------------+-------+------+------------+---------+----------+-------------+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
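From output like the above, the disk groups to focus on are those whose cache (SSD) device reports FAILED. A small sketch that parses rows in this table format and flags failed cache devices; the column layout (name, host, type, components, capacity, used, reserved, status) is assumed from the sample output above:

```python
# Sketch: parse vsan.disks_stats table rows and flag cache (SSD) devices
# reported as FAILED. A failed cache device takes the whole disk group down.
SAMPLE = """
| naa.50000396cc89a8c1 | 192.168.1.10 | SSD | 0  | 1490.41 GB | 1.69 %  | 0.00 %  | FAILED (v2) |
| naa.5000c5008fafefeb | 192.168.1.10 | MD  | 22 | 1106.62 GB | 37.56 % | 37.24 % | FAILED (v2) |
| naa.5000c5008fb00f23 | 192.168.1.10 | MD  | 23 | 1106.62 GB | 40.90 % | 40.67 % | OK (v2)     |
"""

def failed_cache_disks(table: str) -> list:
    """Return NAA IDs of SSD (cache) devices whose status starts with FAILED."""
    failed = []
    for line in table.splitlines():
        # Drop the outer pipes, then split into the 8 expected columns.
        cells = [c.strip() for c in line.strip("| \n").split("|")]
        if len(cells) == 8 and cells[2] == "SSD" and cells[7].startswith("FAILED"):
            failed.append(cells[0])
    return failed

print(failed_cache_disks(SAMPLE))  # ['naa.50000396cc89a8c1']
```

Note that the MD (capacity) disks showing FAILED in the same group are a consequence of the cache failure, which is why the sketch only reports the SSDs.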
Possible Solutions:
1) If there is no failure of the physical disk drive used for cache, put the host into Maintenance Mode with Full Data Migration and reboot it. Then check whether the unhealthy disk group has come back healthy under:
Cluster ==> Manage ==> Settings ==> Virtual SAN ==> Disk Management
If all the disk groups are in a healthy state, take the host out of Maintenance Mode; the issue is resolved.
2) If the physical disk drive used for cache has failed, contact your hardware vendor for a disk replacement.
3) If the above steps do not fix the issue, log a support case with VMware GSS.
Whenever you fix any vSAN issue, always run the vSAN health check under:
vSANCluster ==> Monitor ==> Virtual SAN==> Health.
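The decision between solutions 1–3 above can be summarized as a small branch on two facts: did the cache disk physically fail, and did the disk group recover after the Maintenance Mode reboot. A rough sketch; the function name and return strings are illustrative, not a VMware API:

```python
# Illustrative remediation decision for an unhealthy disk group,
# following steps 1-3 above.
def remediation(cache_disk_failed: bool, healthy_after_reboot: bool = False) -> str:
    if cache_disk_failed:
        # Step 2: hardware fault on the cache device.
        return "Replace the cache disk (contact hardware vendor)"
    if healthy_after_reboot:
        # Step 1: reboot in Maintenance Mode cleared the state.
        return "Exit Maintenance Mode - issue resolved"
    # Step 3: neither a hardware fault nor fixed by reboot.
    return "Log a support case with GSS"

print(remediation(cache_disk_failed=False, healthy_after_reboot=True))
```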

Read:
https://kb.vmware.com/s/article/2144936
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vsan-troubleshooting-reference-manual.pdf
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/products/vsan/vmw-gdl-vsan-health-check.pdf

