- From the logs, we can see that the first snapshot consolidation was initiated at around 07:18:17:
2019-05-27T06:43:07.783Z| vmx| I125: GuestRpc: Got error for channel 0 connection 570003: Remote disconnected
2019-05-27T07:18:17.801Z| vmx| I125: VigorTransportProcessClientPayload: opID=4a8398f8-10-5b1a seq=3068294: Receiving Snapshot.Consolidate request.
2019-05-27T07:18:18.709Z| vmx| I125: SnapshotVMX_Consolidate: Starting online snapshot consolidate operation.
2019-05-27T07:18:18.709Z| vmx| I125: Vix: [11698131 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=3, newAppState=1880, success=1 additionalError=0
2019-05-27T07:18:18.709Z| vmx| I125: Turning on snapshot info cache. VM=abcsh401.vmx.
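Digging through a vmware.log by hand gets tedious, so a short script can pinpoint when the consolidation began. The sketch below is illustrative only (the sample lines are copied from the excerpt above; the helper name is mine, not a VMware API) and simply pulls the timestamp of the first SnapshotVMX_Consolidate entry:

```python
# Minimal sketch: find when snapshot consolidation started in a vmware.log.
# SAMPLE_LOG holds lines copied from the excerpt above; in practice you would
# read the real /vmfs/volumes/<datastore>/<vm>/vmware.log file instead.
SAMPLE_LOG = """\
2019-05-27T06:43:07.783Z| vmx| I125: GuestRpc: Got error for channel 0 connection 570003: Remote disconnected
2019-05-27T07:18:18.709Z| vmx| I125: SnapshotVMX_Consolidate: Starting online snapshot consolidate operation.
"""

def find_consolidation_start(log_text):
    """Return the timestamp of the first SnapshotVMX_Consolidate entry, or None."""
    for line in log_text.splitlines():
        if "SnapshotVMX_Consolidate" in line:
            # The timestamp is the first '|'-delimited field of each log line.
            return line.split("|", 1)[0]
    return None

print(find_consolidation_start(SAMPLE_LOG))  # 2019-05-27T07:18:18.709Z
```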
- The virtual disks in the snapshot chain were then listed:
2019-05-27T07:18:20.374Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401-flat.vmdk" : open successful (1114133) size = 0, hd = 0. Type 3
2019-05-27T07:18:20.374Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401-flat.vmdk" : closed.
2019-05-27T07:18:20.381Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_7-flat.vmdk" : open successful (1114133) size = 0, hd = 0. Type 3
2019-05-27T07:18:20.381Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_7-flat.vmdk" : closed.
2019-05-27T07:18:20.393Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8-000001-sesparse.vmdk" : open successful (1114133) size = 0, hd = 0. Type 19
2019-05-27T07:18:20.393Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8-000001-sesparse.vmdk" : closed.
2019-05-27T07:18:20.396Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_3-flat.vmdk" : open successful (1114133) size = 0, hd = 0. Type 3
2019-05-27T07:18:20.396Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_3-flat.vmdk" : closed.
2019-05-27T07:18:20.402Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_4-flat.vmdk" : open successful (1114133) size = 0, hd = 0. Type 3
2019-05-27T07:18:20.402Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_4-flat.vmdk" : closed.
2019-05-27T07:18:20.413Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_5-000001-sesparse.vmdk" : open successful (1114133) size = 0, hd = 0. Type 19
2019-05-27T07:18:20.413Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_5-000001-sesparse.vmdk" : closed.
2019-05-27T07:18:20.418Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_6-flat.vmdk" : open successful (1114133) size = 0, hd = 0. Type 3
2019-05-27T07:18:20.418Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_6-flat.vmdk" : closed.
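The DISKLIB-VMFS open/close pairs above can be reduced to a plain list of the disks in the chain. A rough sketch (the regex and function name are my assumptions about the message format, not a VMware tool):

```python
import re

# Two sample entries copied from the log excerpt above.
SAMPLE_LOG = """\
2019-05-27T07:18:20.374Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401-flat.vmdk" : open successful (1114133) size = 0, hd = 0. Type 3
2019-05-27T07:18:20.393Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8-000001-sesparse.vmdk" : open successful (1114133) size = 0, hd = 0. Type 19
"""

# Matches the quoted path in each "open successful" entry.
OPEN_RE = re.compile(r'DISKLIB-VMFS : "([^"]+)" : open successful')

def opened_disks(log_text):
    """Collect every disk path that DISKLIB-VMFS reported as opened."""
    return OPEN_RE.findall(log_text)

for path in opened_disks(SAMPLE_LOG):
    print(path)
```

A sesparse delta (Type 19) in the listing tells you which disks still carry snapshot data; flat base disks show up as Type 3.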
- The system tried to create a CTK (Changed Block Tracking) file, but this failed because the file already existed:
2019-05-27T07:18:30.526Z| vcpu-0| I125: OBJLIB-FILEBE : Error creating file '/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8-ctk.vmdk': 3 (The file already exists).
2019-05-27T07:18:31.447Z| vcpu-0| I125: DISKLIB-CBT : Initializing ESX kernel change tracking for fid 35124744455.
2019-05-27T07:18:31.447Z| vcpu-0| I125: DISKLIB-CBT : Successfully created cbt node 82d991107-cbt.
2019-05-27T07:18:37.946Z| vcpu-0| I125: ConsolidateDiskOpenCB: Opening disk node 'scsi0:2' for combining '/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8-000001.vmdk' -> '/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8.vmdk'.
2019-05-27T07:18:41.219Z| Worker#1| I125: DISKLIB-LIB : DiskLib_IsCombinePossible: Upward Combine 2 links at 0. Need 702342 MB of free space (2852754 MB available)
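The DiskLib_IsCombinePossible entry is the hypervisor's up-front free-space check: the combine is estimated to need 702342 MB and the datastore reports 2852754 MB free, so the operation is allowed to start. The same comparison can be reproduced from the log line (the regex is my assumption about the message format):

```python
import re

# The DiskLib_IsCombinePossible line from the excerpt above.
line = ("2019-05-27T07:18:41.219Z| Worker#1| I125: DISKLIB-LIB : "
        "DiskLib_IsCombinePossible: Upward Combine 2 links at 0. "
        "Need 702342 MB of free space (2852754 MB available)")

# Pull the "needed" and "available" figures out of the message.
m = re.search(r"Need (\d+) MB of free space \((\d+) MB available\)", line)
needed_mb, available_mb = (int(g) for g in m.groups())

# The combine is allowed to proceed when free space covers the estimated need.
print(available_mb >= needed_mb)  # True
```

Note that this check only looks at the datastore holding scsi0:2; the _5 delta that later runs out of space lives on a different volume, which is why the operation still fails.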
2019-05-27T07:18:41.220Z| vmx| I125: Mirror_DiskCopy: Starting disk copy.
2019-05-27T07:18:41.220Z| vmx| I125: ConsolidateMirrorCombineStart: Combine '/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8-000001.vmdk' -> '/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8.vmdk' (node = scsi0:2): 2 links starting from 0. usesCurrent = 1, isNative = 0
2019-05-27T08:02:01.791Z| vcpu-4| I125: PVSCSI: scsi3:0: aborting cmd 0xb7
2019-05-27T09:59:01.505Z| vmx| I125: Msg_Question:
2019-05-27T09:59:01.505Z| vmx| I125: [msg.hbacommon.outofspace] There is no more space for virtual disk 'abcsh401_5-000001.vmdk'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking _Retry. Click Cancel to terminate this session.
2019-05-27T09:59:01.505Z| vmx| I125: ----------------------------------------
2019-05-27T09:59:22.441Z| vcpu-0| I125: Tools: Tools heartbeat timeout.
2019-05-27T10:15:10.072Z| vmx| I125: [msg.hbacommon.outofspace] There is no more space for virtual disk 'abcsh401_5-000001.vmdk'. You might be able to continue this session by freeing disk space on the relevant volume, and clicking _Retry. Click Cancel to terminate this session.
2019-05-27T10:15:10.072Z| vmx| I125: ----------------------------------------
2019-05-27T10:19:10.095Z| vmx| I125: Timing out dialog 338325739
2019-05-27T10:19:10.095Z| vmx| I125: MsgQuestion: msg.hbacommon.outofspace reply=0
2019-05-27T10:19:10.095Z| vmx| W115: Mirror: scsi0:2: Failed to copy disk: Timeout
2019-05-27T10:19:10.095Z| vmx| W115: MirrorDiskCopyGetCopyProgress: Failed to get disk copy progress for source disk: '/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8-000001.vmdk' and destination disk: '/vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8.vmdk'
2019-05-27T10:19:10.095Z| vmx| I125: ConsolidateDiskCopyCB: Mirror Disk copy failed on src disk: /vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8.vmdk and destination disk: /vmfs/volumes/5c51ac02-accbdd26-af0d-a0369ff51c7c/abcsh401/abcsh401_8.vmdk.
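For context on how long the mirror copy ran before the out-of-space dialog timed it out, the copy started at 07:18:41.220 and failed at 10:19:10.095, roughly three hours later. A quick check of that interval:

```python
from datetime import datetime

def parse_ts(ts):
    """Parse a vmware.log timestamp such as 2019-05-27T07:18:41.220Z."""
    # fromisoformat does not accept a trailing 'Z' on older Pythons,
    # so rewrite it as an explicit UTC offset first.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

start = parse_ts("2019-05-27T07:18:41.220Z")  # Mirror_DiskCopy: Starting disk copy.
fail  = parse_ts("2019-05-27T10:19:10.095Z")  # Mirror: scsi0:2: Failed to copy disk: Timeout

print(fail - start)  # 3:00:28.875000
```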
- Later, we can see the consolidation failing with a space issue:
2019-05-27T10:55:39.622Z| Worker#3| I125: ConsolidateCheckSpaceNeeded: Not enough space to combine source disk: '/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_5-000001.vmdk' and destination disk: '/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_5.vmdk': There is not enough space on the file system for the selected operation (16397)
2019-05-27T10:59:13.989Z| vcpu-0| I125: ConsolidateDiskCloseCB: Closing disk node 'scsi2:0' after combining '/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_5-000001.vmdk' -> '/vmfs/volumes/5aad6644-f7ae56fc-fbf6-002590faaa08/abcsh401/abcsh401_5.vmdk'.
2019-05-27T10:59:14.080Z| vcpu-0| I125: ConsolidateItemComplete: Online consolidation failed for disk node 'scsi2:0': There is not enough space on the file system for the selected operation (5).
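Scanning a full vmware.log for these failure signatures can be automated. A minimal sketch (the sample line is from the excerpt above; the pattern is my assumption about the message text) that extracts each failed disk node and its error code:

```python
import re

# Sample entry copied from the log excerpt above.
SAMPLE_LOG = """\
2019-05-27T10:59:14.080Z| vcpu-0| I125: ConsolidateItemComplete: Online consolidation failed for disk node 'scsi2:0': There is not enough space on the file system for the selected operation (5).
"""

# Matches the disk node and the trailing numeric error code.
FAIL_RE = re.compile(
    r"Online consolidation failed for disk node '([^']+)': "
    r"There is not enough space on the file system for the selected operation \((\d+)\)"
)

for node, code in FAIL_RE.findall(SAMPLE_LOG):
    print(f"node={node} error_code={code}")  # node=scsi2:0 error_code=5
```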
Conclusion:
- Beyond the fact that the VM's snapshot consolidation failed because of insufficient free space on the datastore, the VMware logs do not tell us much more.