RCA 22: vMotion Failure Root Cause Analysis

VM Name: abcde6dveot

 

VMs with an IOPS limit:

abcde6dveot.vmx  sched.scsi0:0.throughputCap = "1250"
abcde6dveot.vmx  sched.scsi0:1.throughputCap = "1250"
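The throughputCap entries above cap each virtual disk's IOPS. As a quick way to audit which VMs carry such a limit, a small script can scan the .vmx files under a datastore mount; this is a sketch added for illustration, not part of the original RCA (the /vmfs/volumes root is the standard ESXi mount point):

```python
# Scan .vmx files for per-disk IOPS limits (sched.scsiX:Y.throughputCap).
import re
from pathlib import Path

CAP_RE = re.compile(r'^(sched\.scsi\d+:\d+\.throughputCap)\s*=\s*"(\d+)"')

def find_iops_caps(vmx_text):
    """Return [(setting, limit)] for every capped disk in one .vmx file."""
    caps = []
    for line in vmx_text.splitlines():
        m = CAP_RE.match(line.strip())
        if m:
            caps.append((m.group(1), int(m.group(2))))
    return caps

def scan_datastores(root="/vmfs/volumes"):
    """Print every VM whose .vmx carries a throughputCap entry."""
    for vmx in Path(root).rglob("*.vmx"):
        caps = find_iops_caps(vmx.read_text(errors="ignore"))
        if caps:
            print(vmx.name, caps)
```

Run on the host above, this would flag abcde6dveot with its two 1250-IOPS caps.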

15725 2019-09-26T08:25:42.722Z| vcpu-0| I125: SVMotion_DiskCloseCB: Closing disks in svmPhase 0. Not destroying mirror node.
15794 2019-09-27T08:17:08.889Z| vcpu-0| I125: SVMotion_DiskCloseCB: Closing disks in svmPhase 0. Not destroying mirror node.
15799 2019-09-27T08:17:08.903Z| vcpu-0| I125: SVMotion_DiskCloseCB: Closing disks in svmPhase 0. Not destroying mirror node.
15933 2019-09-27T08:17:57.332Z| vcpu-0| I125: SVMotion_DiskCloseCB: Closing disks in svmPhase 0. Not destroying mirror node.
15951 2019-09-27T08:17:57.469Z| vcpu-0| I125: SVMotion_DiskCloseCB: Closing disks in svmPhase 0. Not destroying mirror node.
16023 2019-09-27T08:17:57.858Z| vcpu-0| I125: SVMotion_DiskCloseCB: Closing disks in svmPhase 0. Not destroying mirror node.

 

VM Name: abcde4bqeot

 

  34 2019-09-28T04:05:40.194Z| vmx| I125: Hostname=abcnac05xsdi017.sdi.corp.abcdef.com
  47 2019-09-28T04:05:40.199Z| vmx| I125: /vmfs/volumes/56dd9bbe-f0431f8c-29df-3ca82a1367e8/abcde4bqeot/abcde4bqeot.vmx: Setup symlink /var/run/vmware/b9e0797ba5ad1d5ada93b930084ac323 -> /var/run/vmware/0/11253112199359_5925602
  50 2019-09-28T04:05:40.201Z| vmx| I125: Vix: [5925602 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0
  51 2019-09-28T04:05:40.201Z| vmx| I125: Transitioned vmx/execState/val to poweredOff
  52 2019-09-28T04:05:40.201Z| vmx| I125: Vix: [5925602 mainDispatch.c:4234]: VMAutomation_ReportPowerOpFinished: statevar=1, newAppState=1873, success=1 additionalError=0

 464 2019-09-28T04:05:40.270Z| vmx| I125: Received migrate 'from' request for mid id 6200144649006789145, src ip <30.240.113.171>.
 465 2019-09-28T04:05:40.270Z| vmx| I125: MigrateSetInfo: state=8 srcIp=<30.240.113.171> dstIp=<30.240.113.148> mid=6200144649006789145 uuid=30373637-3233-5355-4535-333858334842 priority=low
 466 2019-09-28T04:05:40.270Z| vmx| I125: MigrateSetState: Transitioning from state 0 to 8.

 797 2019-09-28T04:05:40.370Z| vmx| I125: MigrateSetState: Transitioning from state 8 to 9.
 804 2019-09-28T04:05:40.452Z| vmx| I125: MigrateSetState: Transitioning from state 9 to 10.
 808 2019-09-28T04:07:52.125Z| vmx| I125: MigrateSetState: Transitioning from state 10 to 11.
 831 2019-09-28T04:07:52.130Z| Worker#2| I125: DISKLIB-LINK  : Opened '/vmfs/volumes/56dd9bbe-f0431f8c-29df-3ca82a1367e8/abcde4bqeot/abcde4bqeot.vmdk' (0xa): vmfs, 67108864 sectors / 32 GB.
 837 2019-09-28T04:07:52.130Z| Worker#0| I125: DISKLIB-LINK  : Opened '/vmfs/volumes/56dd9bbe-f0431f8c-29df-3ca82a1367e8/abcde4bqeot/abcde4bqeot_1.vmdk' (0xa): vmfs, 524288000 sectors / 250 GB.

1775 2019-09-28T04:07:52.176Z| vmx| A100: ConfigDB: Setting cleanShutdown = "FALSE"
1856 2019-09-28T04:07:52.235Z| vmx| I125: MigrateSetState: Transitioning from state 11 to 12.
1871 2019-09-28T04:07:52.251Z| vcpu-0| I125: MigrateSetState: Transitioning from state 12 to 0.
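The MigrateSetState lines above walk the destination side of the vMotion state machine (0 → 8 → 9 → 10 → 11 → 12 → 0). A short sketch like the following can pull that timeline out of any vmware.log for quick review:

```python
import re

# Extract vMotion state-machine transitions from vmware.log lines.
STATE_RE = re.compile(r'MigrateSetState: Transitioning from state (\d+) to (\d+)')
TS_RE = re.compile(r'\d{4}-\d{2}-\d{2}T[\d:.]+Z')

def transitions(log_lines):
    """Return (timestamp, from_state, to_state) for each transition line."""
    out = []
    for line in log_lines:
        m = STATE_RE.search(line)
        if m:
            ts = TS_RE.search(line)
            out.append((ts.group(0) if ts else None,
                        int(m.group(1)), int(m.group(2))))
    return out
```

Feeding it the excerpt above reproduces the 0 → 8 → 9 → 10 → 11 → 12 → 0 sequence with timestamps, which makes the two-minute stall between states 9 and 10 (04:05:40 to 04:07:52) easy to spot.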

1872 2019-09-28T04:07:52.251Z| vmx| I125: VMXVmdb_SetToolsVersionStatus: status value set to 'unmanaged', 'unmanaged', install possible
1873 2019-09-28T04:07:52.268Z| vcpu-0| I125: MemSched (MB): min: 18 sizeLimit: 1272 swapInitialFileSize: 110 prealloc: FALSE.
1874 2019-09-28T04:07:52.269Z| vcpu-0| I125: MemSched (MB): min: 15 sizeLimit: 1272 swapInitialFileSize: 110 prealloc: FALSE.
1875 2019-09-28T04:07:52.269Z| vcpu-0| I125: MemSched (MB): min: 14 sizeLimit: 1272 swapInitialFileSize: 110 prealloc: FALSE.
1876 2019-09-28T04:07:52.279Z| vcpu-0| I125: CPT: vmstart
1877 2019-09-28T04:07:52.279Z| vcpu-5| I125: CPT: vmstart
1878 2019-09-28T04:07:52.279Z| vcpu-7| I125: CPT: vmstart
1879 2019-09-28T04:07:52.279Z| vcpu-4| I125: CPT: vmstart
1880 2019-09-28T04:07:52.279Z| vcpu-6| I125: CPT: vmstart
1881 2019-09-28T04:07:52.279Z| vcpu-2| I125: CPT: vmstart
1882 2019-09-28T04:07:52.279Z| vcpu-3| I125: CPT: vmstart
1883 2019-09-28T04:07:52.279Z| vcpu-1| I125: CPT: vmstart

1887 2019-09-28T04:07:52.344Z| vmx| I125: Vix: [5925602 mainDispatch.c:4151]: VMAutomationReportPowerStateChange: Reporting power state change (opcode=2, err=0).
1888 2019-09-28T04:07:52.344Z| vmx| I125: TOOLS Received tools.set.versiontype rpc call, version = 10282, type = 3
1889 2019-09-28T04:07:52.344Z| vmx| I125: TOOLS Setting toolsVersionStatus = TOOLS_STATUS_UNMANAGED
1890 2019-09-28T04:07:52.344Z| vmx| I125: Tools_SetVersionAndType did nothing; new tools version (10282) and type (3) match old Tools version and type

 

 

VMKernel:

 

 1010 2019-09-30T09:21:39.881Z cpu15:5657072)FSS: 6220: Conflict between buffered and unbuffered open (file 'abcde6dveot-flat.vmdk'):flags 0x4008, requested flags 0x1
 1011 2019-09-30T09:21:39.888Z cpu15:5657072)FSS: 6220: Conflict between buffered and unbuffered open (file 'abcde6dveot_1-flat.vmdk'):flags 0x4008, requested flags 0x1

 

 3559 2019-09-30T03:51:46.006Z cpu2:4167551)nhpsa: hpsa_vmkScsiCmdDone:6520: Sense data: error code: 0x70, key: 0x5, info:00 00 00 00 , cmdInfo:00 00 00 00 , CmdSN: 0x4a15, worldId: 0x10882, Cmd: 0x85, ASC: 0x20, ASCQ: 0x0
 3561 2019-09-30T03:51:46.006Z cpu2:4167551)nhpsa: hpsa_vmkScsiCmdDone:6520: Sense data: error code: 0x70, key: 0x5, info:00 00 00 00 , cmdInfo:00 00 00 00 , CmdSN: 0x4a16, worldId: 0x10882, Cmd: 0x4d, ASC: 0x20, ASCQ: 0x0

 5506 2019-09-30T07:59:08.820Z cpu5:65586)WARNING: ScsiDeviceIO: 1203: Device naa.60002ac000000000000000340001b81b performance has deteriorated. I/O latency increased from average value of 8085 microseconds to 162676 microseconds.
 5507 2019-09-30T07:59:08.824Z cpu1:4167551)lpfc: lpfc_handle_status:5050: 2:(0):3271: FCP cmd x8a failed <4/51> sid x1e1100, did x484700, oxid x73e iotag xa44 SCSI Queue Full -
 5508 2019-09-30T07:59:08.824Z cpu1:4167551)lpfc: lpfc_change_queue_depth:513: 2:(0):3295 lun queue depth changed [0:4:51] old=7, new=6
 5509 2019-09-30T07:59:08.824Z cpu1:4167551)lpfc: lpfc_scsi_cmd_iocb_cmpl:5261: 2:(0):0711 detected lun queue full adjust qdepth to 6

 6053 2019-09-30T08:57:56.235Z cpu14:65900)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x88 failed <7/42> sid x1d1100, did x474600, oxid x2c7 iotag x5cd SCSI Queue Full -
 6056 2019-09-30T08:57:56.235Z cpu17:4169999)lpfc: lpfc_handle_status:5050: 2:(0):3271: FCP cmd x88 failed <3/42> sid x1e1100, did x484600, oxid x798 iotag xa9e SCSI Queue Full -
 6059 2019-09-30T08:57:56.235Z cpu14:4150456)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x88 failed <7/42> sid x1d1100, did x474600, oxid x693 iotag x999 SCSI Queue Full -
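To put the ScsiDeviceIO warning in numbers, the reported averages imply roughly a twenty-fold latency increase on the device, which matches the SCSI Queue Full conditions and the queue-depth throttling seen alongside it:

```python
# Quantify the deterioration reported by the VMkernel warning above:
# average I/O latency went from 8085 us to 162676 us.
old_us, new_us = 8_085, 162_676
factor = new_us / old_us
print(f"I/O latency grew ~{factor:.0f}x ({old_us} us -> {new_us} us)")
```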

 

 

Start:         2019-08-15 22:41:25.151
Finish:        2019-08-15 22:41:29.513
From IP:       srcIp=<30.240.113.151>
To IP:         dstIp=<30.240.113.172>
VM Name:       abcde6dveot
HW Version:    10
vRAM:          4096 MB
Migration ID:  mid=560b56c346e79f58
Options:       encrypted=0  latencyAware=yes

 

 

 

 

 

Destination Hostname: abcnac05xsdi017.sdi.corp.abcdef.com
ESXi Version: ESXi 6.5 EP 13
vMotion vmknic IP: 30.240.113.148

 

 

Interface  Port Group/DVPort/Opaque Network  IP Family  IP Address                Netmask          Broadcast       MAC Address        MTU   TSO MSS  Enabled  Type               NetStack
vmk3       252                               IPv4       30.240.113.148            255.255.255.192  30.240.113.191  00:50:56:6c:37:48  9000  65535    TRUE     STATIC             defaultTcpipStack
vmk3       252                               IPv6       fe80::250:56ff:fe6c:3748  64                               00:50:56:6c:37:48  9000  65535    TRUE     STATIC, PREFERRED  defaultTcpipStack

 

 

 6547 2019-09-28T03:35:23.368Z cpu2:5913645)Migrate: vm 5913646: 4049: Setting VMOTION info: Source ts = 6200144647193165367, src ip = <30.240.113.148> dest ip = <30.240.113.151> Dest wid = 6053854 using SHARED swap
 6548 2019-09-28T03:35:23.370Z cpu2:5913645)Hbr: 3395: Migration start received (worldID=5913646) (migrateType=1) (event=0) (isSource=1) (sharedConfig=1)
 6549 2019-09-28T03:35:23.370Z cpu54:5891501)MigrateNet: 1627: 6200144647193165367 S: Successfully bound connection to vmknic '30.240.113.148'
 6553 2019-09-28T03:35:23.371Z cpu30:5891501)MigrateNet: 1627: 6200144647193165367 S: Successfully bound connection to vmknic '30.240.113.148'
 6570 2019-09-28T03:37:37.268Z cpu46:5913646)VMotion: 4993: 6200144647193165367 S: Another pre-copy iteration needed with 132569 pages left to send (prev2 33554432, prev 33554432, network bandwidth ~851.230 MB/s, 424425% t2d)
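As a back-of-the-envelope check on the pre-copy line above, the remaining dirty pages amount to well under a second of transfer at the reported bandwidth. The 4 KiB page size and decimal megabytes are assumptions for the estimate, not values stated in the log:

```python
# Rough estimate of what the VMotion pre-copy line implies.
pages_left = 132_569          # pages left to send, from the log line
page_bytes = 4096             # assumed 4 KiB guest pages
bw_mb_s = 851.230             # reported network bandwidth (MB/s)

remaining_mb = pages_left * page_bytes / 1e6
eta_s = remaining_mb / bw_mb_s
print(f"~{remaining_mb:.0f} MB left, ~{eta_s:.2f} s at observed bandwidth")
```

So the extra pre-copy iteration itself is cheap; it is the repeated dirtying of pages (prev/prev2 at 33554432) that keeps the migration iterating.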

 6626 2019-09-28T03:44:06.108Z cpu21:5585177)FSS: 6756: Failed to open file 'hpilo-d0ccb0'; Requested flags 0x5, world: 5585177 [sfcb-smx], (Existing flags 0x0, world: 0 [unknown]): Busy
 6628 2019-09-28T03:44:06.120Z cpu20:5585177)FSS: 6756: Failed to open file 'hpilo-d0ccb1'; Requested flags 0x5, world: 5585177 [sfcb-smx], (Existing flags 0x0, world: 0 [unknown]): Busy
 6630 2019-09-28T03:44:06.132Z cpu8:5585177)FSS: 6756: Failed to open file 'hpilo-d0ccb2'; Requested flags 0x5, world: 5585177 [sfcb-smx], (Existing flags 0x0, world: 0 [unknown]): Busy

 6938 2019-09-28T04:05:40.319Z cpu40:5925602)World: vm 5925603: 7379: Starting world vmm0:abcde4bqeot of type 8
 6942 2019-09-28T04:05:40.332Z cpu40:5925602)World: vm 5925607: 7379: Starting world vmm1:abcde4bqeot of type 8
 6943 2019-09-28T04:05:40.332Z cpu40:5925602)World: vm 5925608: 7379: Starting world vmm2:abcde4bqeot of type 8
 6944 2019-09-28T04:05:40.332Z cpu40:5925602)World: vm 5925609: 7379: Starting world vmm3:abcde4bqeot of type 8
 6945 2019-09-28T04:05:40.332Z cpu40:5925602)World: vm 5925610: 7379: Starting world vmm4:abcde4bqeot of type 8
 6946 2019-09-28T04:05:40.332Z cpu40:5925602)World: vm 5892843: 7379: Starting world vmm5:abcde4bqeot of type 8
 6947 2019-09-28T04:05:40.333Z cpu40:5925602)World: vm 5892844: 7379: Starting world vmm6:abcde4bqeot of type 8
 6948 2019-09-28T04:05:40.333Z cpu40:5925602)World: vm 5892845: 7379: Starting world vmm7:abcde4bqeot of type 8

 6949 2019-09-28T04:05:40.365Z cpu45:5925602)Migrate: vm 5925603: 4049: Setting VMOTION info: Dest ts = 6200144649006789145, src ip = <30.240.113.171> dest ip = <30.240.113.148> Dest wid = 0 using SHARED swap
 6950 2019-09-28T04:05:40.370Z cpu45:5925602)Hbr: 3395: Migration start received (worldID=5925603) (migrateType=1) (event=0) (isSource=0) (sharedConfig=1)

 6966 2019-09-28T04:07:46.698Z cpu50:5585180)FSS: 6756: Failed to open file 'hpilo-d0ccb14'; Requested flags 0x5, world: 5585180 [sfcb-smx], (Existing flags 0x5, world: 68478 [ams-ahs]): Busy
 6967 2019-09-28T04:07:46.698Z cpu50:5585180)FSS: 6756: Failed to open file 'hpilo-d0ccb15'; Requested flags 0x5, world: 5585180 [sfcb-smx], (Existing flags 0x5, world: 68477 [ams-main]): Busy
 7000 2019-09-28T04:08:17.035Z cpu45:5585179)FSS: 6756: Failed to open file 'hpilo-d0ccb14'; Requested flags 0x5, world: 5585179 [sfcb-smx], (Existing flags 0x5, world: 68478 [ams-ahs]): Busy

 7027 2019-09-28T04:14:11.784Z cpu21:5830687)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x89 failed <10/5> sid x1d0d00, did x570400, oxid x12fc iotag xa03 SCSI Queue Full -
 7030 2019-09-28T04:14:11.784Z cpu1:66522)NMP: nmp_ThrottleLogForDevice:3616: Cmd 0x89 (0x4395d4f94c40, 65599) to dev "naa.60002ac000000000000000070001293c" on path "vmhba1:C0:T10:L5" Failed: H:0x0 D:0x28 P:0x0 Invalid sense data: 0x0 0x0 0x0. Act:NONE
 7031 2019-09-28T04:14:11.784Z cpu1:66522)ScsiDeviceIO: 2980: Cmd(0x4395d4f94c40) 0x89, CmdSN 0x3174ec from world 65599 to dev "naa.60002ac000000000000000070001293c" failed H:0x0 D:0x28 P:0x0 Invalid sense data: 0x74 0x68 0x20.
 7032 2019-09-28T04:14:11.797Z cpu21:4932740)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x89 failed <2/5> sid x1d0d00, did x1d0200, oxid xfa1 iotag x6a8 SCSI Queue Full -
 7035 2019-09-28T04:14:11.809Z cpu0:4502984)lpfc: lpfc_handle_status:5050: 2:(0):3271: FCP cmd x89 failed <2/5> sid x1e0d00, did x1e0000, oxid x11a2 iotag x8a9 SCSI Queue Full -
 7038 2019-09-28T04:14:11.822Z cpu21:5602263)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x89 failed <10/5> sid x1d0d00, did x570400, oxid xdf3 iotag x4fa SCSI Queue Full -
 7043 2019-09-28T04:14:11.833Z cpu32:5862623)lpfc: lpfc_handle_status:5050: 2:(0):3271: FCP cmd x8a failed <2/5> sid x1e0d00, did x1e0000, oxid xd8d iotag x494 SCSI Queue Full -
 7046 2019-09-28T04:14:11.833Z cpu31:66530)ScsiDeviceIO: 2980: Cmd(0x439dc1298840) 0x8a, CmdSN 0x80000039 from world 5663396 to dev "naa.60002ac000000000000000070001293c" failed H:0x0 D:0x28 P:0x0 Invalid sense data: 0x0 0x0 0x0.
 7047 2019-09-28T04:14:11.845Z cpu28:5528798)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x8a failed <10/5> sid x1d0d00, did x570400, oxid x12b3 iotag x9ba SCSI Queue Full -

 7075 2019-09-28T04:28:22.726Z cpu11:4828036)lpfc: lpfc_handle_status:5050: 2:(0):3271: FCP cmd x2a failed <4/6> sid x1e0d00, did x1e4400, oxid x10b7 iotag x7be SCSI Queue Full -
 7078 2019-09-28T04:28:22.726Z cpu11:66527)NMP: nmp_ThrottleLogForDevice:3616: Cmd 0x2a (0x4395d522a080, 4956326) to dev "naa.60002ac000000000000000080001293d" on path "vmhba3:C0:T4:L6" Failed: H:0x0 D:0x28 P:0x0 Invalid sense data: 0x0 0x0 0x0. Act:NONE
 7079 2019-09-28T04:28:22.726Z cpu11:66527)ScsiDeviceIO: 2980: Cmd(0x4395d522a080) 0x2a, CmdSN 0x8000001f from world 4956326 to dev "naa.60002ac000000000000000080001293d" failed H:0x0 D:0x28 P:0x0 Invalid sense data: 0x20 0x70 0x61.
 7122 2019-09-28T04:28:23.192Z cpu28:66530)NMP: nmp_ThrottleLogForDevice:3545: last error status from device naa.60002ac000000000000000080001293d repeated 10 times
 7130 2019-09-28T04:29:17.750Z cpu32:66296)NMP: nmp_ResetDeviceLogThrottling:3458: last error status from device naa.60002ac000000000000000080001293d repeated 3 times
 7144 2019-09-28T04:29:51.808Z cpu10:5660986)WARNING: CBT: 1133: Unsupported ioctl 62
 7147 2019-09-28T04:29:52.103Z cpu1:5660986)WARNING: CBT: 1133: Unsupported ioctl 63

 7314 2019-09-28T04:42:12.735Z cpu45:5862622)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x2a failed <3/6> sid x1d0d00, did x1d4400, oxid x1221 iotag x928 SCSI Queue Full -
 7317 2019-09-28T04:42:12.741Z cpu45:4403532)lpfc: lpfc_handle_status:5050: 0:(0):3271: FCP cmd x2a failed <3/6> sid x1d0d00, did x1d4400, oxid x14ea iotag xbf1 SCSI Queue Full -

 9211 2019-09-28T09:41:43.672Z cpu10:5639228)WARNING: UserSocketInet: 2244: python: waiters list not empty!
 9212 2019-09-28T09:41:43.672Z cpu10:5639228)WARNING: UserSocketInet: 2244: python: waiters list not empty!
 9214 2019-09-28T09:41:44.052Z cpu22:5935930)WARNING: MemSched: 11696: Group vsanperfsvc: Requested memory limit 0 KB insufficient to support effective reservation 7616 KB

 

 

vCenter:

 

  • Reviewed the vCenter logs; they confirm that the vMotion task was initiated.

 

  • While reviewing the logs, we can also see several "DVS port not found" errors on the cluster.

 

2019-10-01T03:20:37.151Z info vpxd[7F261BB76700] [Originator@6876 sub=vpxLro opID=lro-1-29661e17-25b1b-12-01] [VpxLRO] -- BEGIN task-6459826 -- abcde6dveot -- Drm.ExecuteVMotionLRO --
2019-10-01T03:20:37.151Z info vpxd[7F261BB76700] [Originator@6876 sub=drmLogger opID=lro-1-29661e17-25b1b-12-01] Executing DRM recommended vMotion, migrating [vim.VirtualMachine:vm-618277,abcde6dveot] from vim.HostSystem:host-618231 to vim.HostSystem:host-80301

2019-10-01T03:20:37.151Z error vpxd[7F250B368700] [Originator@6876 sub=MoDVSwitch opID=lro-1-29661e17-25b1b-02-01-01] [MoDVSwitch::CheckInPortKeySetInt] port [479] not found in dvs [VA-MP5-Cluster-1].
2019-10-01T03:20:37.151Z error vpxd[7F250B368700] [Originator@6876 sub=MoDVSwitch opID=lro-1-29661e17-25b1b-02-01-01] [MoDVSwitch::CheckInPortKeySetInt] port [479] not found in dvs [VA-MP5-Cluster-1].

2019-10-01T03:20:37.154Z error vpxd[7F25422C5700] [Originator@6876 sub=MoDVSwitch opID=lro-1-29661e17-25b1b-07-01-01] [MoDVSwitch::CheckInPortKeySetInt] port [600] not found in dvs [VA-MP5-Cluster-1].
2019-10-01T03:20:37.154Z error vpxd[7F25422C5700] [Originator@6876 sub=MoDVSwitch opID=lro-1-29661e17-25b1b-07-01-01] [MoDVSwitch::CheckInPortKeySetInt] port [600] not found in dvs [VA-MP5-Cluster-1].

2019-10-01T03:20:37.156Z info vpxd[7F261BB76700] [Originator@6876 sub=vpxLro opID=lro-1-29661e17-25b1b-12-01-01] [VpxLRO] -- BEGIN lro-268432786 --  -- VmprovWorkflow --
2019-10-01T03:20:37.159Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Execute local action vpx.vmprov.InvokePrechecks (5%)
2019-10-01T03:20:37.159Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Invoking callback type 0 on AutoStartManager
2019-10-01T03:20:37.159Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Invoking callback type 0 on CbkServiceMgrMigrateCallback
2019-10-01T03:20:37.159Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Invoking callback type 0 on ContextValidator

2019-10-01T03:20:37.159Z error vpxd[7F2506F61700] [Originator@6876 sub=MoDVSwitch opID=lro-1-29661e17-25b1b-0c-01-01] [MoDVSwitch::CheckInPortKeySetInt] port [502] not found in dvs [VA-MP5-Cluster-1].
2019-10-01T03:20:37.159Z info vpxd[7F261BB76700] [Originator@6876 sub=pbm opID=lro-1-29661e17-25b1b-12-01-01] PBMCallback: ShouldSkipMigrateCallback: migrate callback is skipped - Same-VC migration with home and all disks and profiles the same

 

2019-10-01T03:20:37.161Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] LocalVC Migrate of abcde6dveot on vim.HostSystem:host-618231(30.240.114.44) with ds ds:///vmfs/volumes/56dd9bbe-f0431f8c-29df-3ca82a1367e8/ to vim.HostSystem:host-80301(30.240.114.45) with ds ds:///vmfs/volumes/56dd9bbe-f0431f8c-29df-3ca82a1367e8/ started
2019-10-01T03:20:37.161Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Execute local action vpx.vmprov.CheckCompatibility (1%)

2019-10-01T03:20:37.153Z error vpxd[7F268D2C6700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-05-01-01] Get exception while executing action vpx.vmprov.InvokeCallbacks: N7Vmacore9ExceptionE(vim.fault.NotFound)
--> [context]zKq7AVECAAAAAAMD2AAjdnB4ZAAA4LArbGlidm1hY29yZS5zbwAAwhcbAB5aGIFq2wABbGlidmltLXR5cGVzLnNvAIElmxgBgWYrGgECbwTVdnB4ZAAC3AbVAp4J1QL13NgCxVXZAqlZ2QLX99cC6B3OAv8nzgJTU9ACufXRAjQqzwLqTM8CN1LPAtMOdAJIBHQC/yh0Al7xcwIzxMoCqC/oAtMOdAJIBHQC/yh0AkB8dAAj/CMACv8jAJcJLAPUcwBsaWJwdGhyZWFkLnNvLjAABN2ODmxpYmMuc28uNgA=[/context]
2019-10-01T03:20:37.163Z info vpxd[7F268D2C6700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-05-01-01] Workflow context:

  • In the end, we can see the vMotion operation fail with the error "vim.fault.NotFound".
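When digging through vpxd.log for failures like this, it helps to correlate lines by their opID, since every operation's log lines carry the same tag; a minimal sketch:

```python
import re
from collections import defaultdict

# vpxd tags every operation with an opID; grouping error lines by opID
# separates the failing migration workflow from unrelated noise.
OPID_RE = re.compile(r'opID=([\w.-]+)')

def errors_by_opid(lines):
    """Map opID -> list of vpxd error lines carrying that opID."""
    groups = defaultdict(list)
    for line in lines:
        if ' error vpxd' in line and (m := OPID_RE.search(line)):
            groups[m.group(1)].append(line)
    return dict(groups)
```

Applied to the excerpt above, this isolates the MoDVSwitch "port not found" errors and the vim.fault.NotFound exception under their respective opIDs.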

 

2019-10-01T03:20:37.178Z info vpxd[7F2528102700] [Originator@6876 sub=Default opID=lro-1-29661e17-25b1b-08-01-01] [VpxLRO] -- ERROR lro-268432768 --  -- VmprovWorkflow: vmodl.fault.SystemError:
--> Result:
--> (vmodl.fault.SystemError) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = <unset>,
-->    reason = "vim.fault.NotFound"
-->    msg = ""
--> }
--> Args:
-->

2019-10-01T03:20:37.179Z info vpxd[7F250840A700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-0d-01-01] Execute local action vpx.vmprov.InvokeCallbacks (5%)
2019-10-01T03:20:37.179Z info vpxd[7F250840A700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-0d-01-01] Invoking callback type 1 on AutoStartManager
2019-10-01T03:20:37.179Z error vpxd[7F2528102700] [Originator@6876 sub=drmLogger opID=lro-1-29661e17-25b1b-08-01] Failed migrating VM [vim.VirtualMachine:vm-618272,wsdne4apeot] to host vim.HostSystem:host-618280
2019-10-01T03:20:37.175Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Invoking callback type 1 on ContextValidator
2019-10-01T03:20:37.179Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Invoking callback type 1 on CryptoManager
2019-10-01T03:20:37.179Z info vpxd[7F261BB76700] [Originator@6876 sub=VmProv opID=lro-1-29661e17-25b1b-12-01-01] Invoking callback type 1 on DAS

 

VM Name: abcde6dveot

 

  • Reviewed the VM, but could not find anything notable other than the IOPS limit applied to it.

 

abcde6dveot.vmx  sched.scsi0:0.throughputCap = "1250"
abcde6dveot.vmx  sched.scsi0:1.throughputCap = "1250"

 

 

Hostname: esx-abcnac05xsdi041.sdi.corp.abcdef.com
ESXi Version: ESXi 6.5 EP 13
vMotion vmknic IP: 30.240.113.172

 

Interface  Port Group/DVPort/Opaque Network  IP Family  IP Address                Netmask          Broadcast       MAC Address        MTU   TSO MSS  Enabled  Type               NetStack
vmk3       241                               IPv4       30.240.113.172            255.255.255.192  30.240.113.191  00:50:56:67:82:33  9000  65535    TRUE     STATIC             defaultTcpipStack
vmk3       241                               IPv6       fe80::250:56ff:fe67:8233  64                               00:50:56:67:82:33  9000  65535    TRUE     STATIC, PREFERRED  defaultTcpipStack

 

  • From the hostd logs we can see the following error: Unable to delete DVPort "" that is in use, use list: wsdne4upeot.eth0

 

 2019-09-30T09:24:57.770Z info hostd[15883B70] [Originator@6876 sub=Solo.Vmomi opID=5d088e41-f2f7 user=vpxuser] Throw vim.fault.PlatformConfigFault
 2019-09-30T09:24:57.770Z info hostd[15883B70] [Originator@6876 sub=Solo.Vmomi opID=5d088e41-f2f7 user=vpxuser] Result:
 --> (vim.fault.PlatformConfigFault) {
 -->    faultCause = (vmodl.MethodFault) null,

 -->    faultMessage = (vmodl.LocalizableMessage) [
 -->       (vmodl.LocalizableMessage) {
 -->          key = "com.vmware.esx.hostctl.default",
 -->          arg = (vmodl.KeyAnyValue) [
 -->             (vmodl.KeyAnyValue) {
 -->                key = "reason",
 -->                value = "Unable to delete DVPort "" that is in use, use list: wsdne4upeot.eth0"
 -->             }
 -->          ],
 -->          message = <unset>
 -->       }
 -->    ],
 -->    text = ""
 -->    msg = ""
 --> }

 

 

Conclusion:

The vMotion failed with the error "vim.fault.NotFound" because vCenter could not find the DVPort referenced by the VM's network adapter in the DVS (the "port not found in dvs VA-MP5-Cluster-1" errors), while hostd could not release the port because it was still marked as in use ("Unable to delete DVPort that is in use"). In short, the VM was holding a stale DVPort assignment. Re-assigning the vNIC to a free port and then back to the original one clears the stale assignment and allows the vMotion to complete, as described in the Action Plan below.

 

Action Plan:

 

  1. Open the Edit Settings dialog of the affected VM.
  2. Go to the network adapter that is connected to the DVS.
  3. Note down the existing Port ID.
  4. Change the Port ID to a different, free one.
  5. Save and apply the changes.
  6. Open Edit Settings again.
  7. Configure the previously noted, original Port ID from the list.
  8. Apply the settings and close the dialog. The vMotion should now work.
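The steps above amount to "bouncing" the vNIC between two DVPorts. The following is a schematic of that sequence only, not a tested vSphere API call: with pyVmomi the real change would go through the device's distributed-port backing (device.backing.port.portKey) plus a VM reconfigure task, so the Backing class and reconfigure callback here are illustrative stand-ins:

```python
# Schematic of the action plan: move the vNIC to a free DVPort, then
# restore the original port ID, forcing vCenter to rebuild the stale
# port entry. Backing and reconfigure are stand-ins, not pyVmomi types.

class Backing:
    def __init__(self, port_key):
        self.port_key = port_key

def bounce_dvport(backing, free_port_key, reconfigure):
    """Steps 3-8: note the port, move off it, then move back."""
    original = backing.port_key          # step 3: note the existing Port ID
    backing.port_key = free_port_key
    reconfigure(backing)                 # steps 4-5: apply the free port
    backing.port_key = original
    reconfigure(backing)                 # steps 6-8: restore the original
    return original
```

The two reconfigure calls mirror the two Edit Settings passes in the GUI procedure.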

 

 

Ashutosh Dixit

I am currently working as a Senior Technical Support Engineer with VMware Premier Services for Telco. Before this, I worked as a Technical Lead with Microsoft Enterprise Platform Support for Production and Premier Support. I am an expert in High-Availability, Deployments, and VMware Core technology along with Tanzu and Horizon.
