Understanding Logging Flow on Telco Cloud Automation

In this article we are going to walk through the basic logging flow of the VMware Telco Cloud Automation (TCA) appliance.

For this test, I created a Management Cluster from my TCA Manager on vCenter compute infrastructure.

Time of cluster creation: 11/2/2022, 10:03 PM PST
Time in UTC: 11/3/2022, 5:00 AM


I created a Management Cluster; because a previous attempt had failed, the operation first deleted the old, failed cluster and then deployed the new one.


To understand the flow, I captured a support bundle from both the TCA Manager and the TCA Control Plane.

TCA Manager Logs: 

  • The TCA version can be captured from: \common\configs\info
version: 2.0.1
build: 19402305
code: 148
md5sum: 50062082d331c68cb18bfdd93be42c90
blocks: 10695244
arch: vm
  • \common\logs\admin\app.log : use this file to capture the operations and their Job-IDs, which can later be used for tracking (see the grep sketch after the excerpt below).
  • In this case I retried the cluster creation after a failed attempt, so there is first a Delete Cluster operation and then a Create Cluster operation:
2022-11-03 04:57:18.350 UTC [InfraAutomationService_SvcThread-14, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.UndeployInfraJob- Starting to run Undeploy Infra Job Id: (89bcad00-d093-4e61-8370-d9d0590f2f2d): state : BEGIN
2022-11-03 05:02:54.100 UTC [InfraAutomationService_SvcThread-15, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.UndeployInfraJob- Starting to run Undeploy Infra Job Id: (89bcad00-d093-4e61-8370-d9d0590f2f2d): state : COLLECT_DELETE_CLUSTER_RESPONSE
2022-11-03 05:03:28.752 UTC [InfraAutomationService_SvcThread-16, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.s.i.k8s.DeployInfraJob- Starting to run Deploy Infra Job Id: (7c8318f5-8772-426d-881b-26c5828df9dc): state : BEGIN
2022-11-03 05:03:32.602 UTC [InfraAutomationService_SvcThread-17, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.s.i.k8s.DeployInfraJob- Starting to run Deploy Infra Job Id: (7c8318f5-8772-426d-881b-26c5828df9dc): state : DEPLOY_CLUSTER
2022-11-03 05:37:20.386 UTC [InfraAutomationService_SvcThread-20, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.s.i.k8s.DeployInfraJob- Starting to run Deploy Infra Job Id: (7c8318f5-8772-426d-881b-26c5828df9dc): state : VALIDATE_DEPLOY_CLUSTER
2022-11-03 05:39:43.873 UTC [InfraAutomationService_SvcThread-29, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.s.i.k8s.DeployInfraJob- Starting to run Deploy Infra Job Id: (7c8318f5-8772-426d-881b-26c5828df9dc): state : COMPLETE
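
Every app.log entry carries both a Job-ID and a TxId, so either identifier can be used to follow an operation end to end. A minimal shell sketch, assuming you are at the root of the extracted bundle (forward-slash paths used for the shell; plain grep only):

# Follow every state transition of the create job seen above
grep "7c8318f5-8772-426d-881b-26c5828df9dc" common/logs/admin/app.log

# Or pull everything belonging to the transaction; the TxId spans related jobs
grep "e0685878-f873-4b8c-a979-2c656e2edca7" common/logs/admin/app.log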


  • \common\logs\admin\web.log : lets us look at the cluster payload information passed through the TCA Manager (a pretty-printing sketch follows the excerpt):
2022-11-02 05:31:35.876 UTC [https-jsse-nio-127.0.0.1-8443-exec-9, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, TxId: 9b630d05-e3a1-45fa-80b5-650637cd170d] INFO  c.v.h.n.a.i.a.InfraAutomationRestController- Posted request to deploy cluster with payload: {"clusterPassword":"*****","clusterTemplateId":"9276d9fd-b251-4b98-8514-1777589b06dc","clusterType":"MANAGEMENT","hcxCloudUrl":"https:\/\/tca-cp.vmlabs.com","endpointIP":"192.168.0.60","masterNodes":[{"name":"master","networks":[{"label":"MANAGEMENT","networkName":"\/India-Datacenter\/network\/Management","nameservers":["192.168.0.2","192.168.30.10"],"isManagement":true}]}],"name":"core-mgmt","placementParams":[{"name":"TCA-Machines","type":"Folder"},{"name":"VM-Datastore","type":"Datastore"},{"name":"TCA-Management","type":"ResourcePool"},{"name":"Cluster","type":"ClusterComputeResource"}],"vmTemplate":"photon-3-kube-v1.21.8-vmware.1-tkg.2-49e70fcb8bdd006b8a1cf7823484f98f-19358802","workerNodes":[{"name":"mgmt-node-pool","networks":[{"label":"MANAGEMENT","networkName":"\/India-Datacenter\/network\/Management","nameservers":["192.168.0.2","192.168.30.10"],"isManagement":true}],"id":"3588773b-7640-4f73-aa56-da90dd465f06"}],"location":{"city":"New Delhi","country":"India","cityAscii":"New Delhi","latitude":28.6,"longitude":77.2}}
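
The payload is logged as a single JSON object, so once you locate the "Posted request to deploy cluster" line you can pretty-print it for easier reading. A sketch, assuming python3 is available on your workstation (any JSON formatter would do):

# Isolate the payload portion of the line and pretty-print it
grep "Posted request to deploy cluster" common/logs/admin/web.log \
  | sed 's/.*payload: //' \
  | python3 -m json.tool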


TCA Control Plane Logs:

  • The TCA version can again be captured from: \common\configs\info
  • \common\logs\admin\app.log : can be used to understand the tasks and their Job-IDs.

As on the TCA Manager, the retried creation shows up as a Delete Cluster operation followed by a Create Cluster operation. Note that the TxId (25f67051-…) matches the one logged on the TCA Manager, which makes it easy to correlate the two bundles (see the sketch after the delete excerpt).

Delete Cluster Operation:

2022-11-03 04:57:18.667 UTC [InfraAutomationService_SvcThread-185, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state BEGIN
2022-11-03 04:57:18.688 UTC [InfraAutomationService_SvcThread-185, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.b.adapter.BootStrapperAdapter- Deleting Management cluster. URL - http://127.0.0.1:8888/api/v1/managementcluster/8a366956-55ff-4c01-a6b9-d6f72570d920
2022-11-03 04:58:21.579 UTC [InfraAutomationService_SvcThread-186, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 04:58:41.637 UTC [InfraAutomationService_SvcThread-186, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 04:59:02.141 UTC [InfraAutomationService_SvcThread-188, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 04:59:22.209 UTC [InfraAutomationService_SvcThread-186, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 04:59:42.244 UTC [InfraAutomationService_SvcThread-188, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:00:02.282 UTC [InfraAutomationService_SvcThread-191, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:00:22.342 UTC [InfraAutomationService_SvcThread-192, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:00:42.372 UTC [InfraAutomationService_SvcThread-193, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:01:02.401 UTC [InfraAutomationService_SvcThread-194, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:01:22.435 UTC [InfraAutomationService_SvcThread-195, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:01:42.469 UTC [InfraAutomationService_SvcThread-196, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:02:02.505 UTC [InfraAutomationService_SvcThread-197, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:02:22.540 UTC [InfraAutomationService_SvcThread-198, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:02:42.605 UTC [InfraAutomationService_SvcThread-199, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId ce0069d3-6aa4-48ee-b331-c56afe2bf699 === state WAIT_FOR_DELETE
2022-11-03 05:02:42.625 UTC [InfraAutomationService_SvcThread-199, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.b.adapter.BootStrapperAdapter- Deleting Management cluster. URL - http://127.0.0.1:8888/api/v1/managementcluster/8a366956-55ff-4c01-a6b9-d6f72570d920?force=True
2022-11-03 05:02:44.656 UTC [InfraAutomationService_SvcThread-197, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] INFO  c.v.h.s.i.k8s.DeleteK8sClusterJob- Delete k8s cluster === JobId 4f1bc039-4af9-4ba0-84ab-67bdd6086970 === state REMOVE_APPLIANCE_CONFIG_SECTION
2022-11-03 05:02:51.361 UTC [InfraAutomationService_SvcThread-197, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 25f67051-aab4-4087-9bc5-c385dec7473c] WARN  c.v.h.s.i.k8s.DeleteK8sClusterJob- Could not find kubernetes Appliance Config with url:
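
Because the TCA Manager and the Control Plane stamp the same TxId on a given transaction, the TxId is the natural join key across the two bundles. A sketch, assuming both bundles are extracted into hypothetical tca-mgr/ and tca-cp/ directories:

# Merge one transaction's entries from both appliances in time order
grep -h "25f67051-aab4-4087-9bc5-c385dec7473c" \
  tca-mgr/common/logs/admin/app.log \
  tca-cp/common/logs/admin/app.log | sort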

Create Cluster Operation:

2022-11-03 05:03:29.835 UTC [InfraAutomationService_SvcThread-200, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.nfvm.k8.VmTemplatesUtils- vm template name photon-3-kube-v1.21.8-vmware.1-tkg.2-49e70fcb8bdd006b8a1cf7823484f98f-19358802, vm template path photon-3-kube-v1.21.8-vmware.1-tkg.2-49e70fcb8bdd006b8a1cf7823484f98f-19358802 for byoi info
2022-11-03 05:03:29.847 UTC [InfraAutomationService_SvcThread-200, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] ERROR c.v.h.nfvm.k8.VmTemplatesUtils- No vm template found to update byoi info
2022-11-03 05:03:37.491 UTC [InfraAutomationService_SvcThread-202, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.s.i.k8s.CreateK8sClusterJob- Creating management cluster
2022-11-03 05:03:37.498 UTC [InfraAutomationService_SvcThread-202, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.b.adapter.BootStrapperAdapter- Creation of management cluster. url: http://127.0.0.1:8888/api/v1/managementcluster
2022-11-03 05:37:58.339 UTC [InfraAutomationService_SvcThread-272, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: e0685878-f873-4b8c-a979-2c656e2edca7] INFO  c.v.h.s.i.k8s.UpdateAddonJob- ===== Posting for Update Addon for MANAGEMENT Cluster : c9d422f5-021f-48fc-bf94-5f39293b0fb9 Addon {"plugins":[{"type":"nodeconfig-operator","properties":{}},{"type":"vmconfig-operator","properties":{}}]} =====
2022-11-03 07:10:28.047 UTC [InfraAutomationService_SvcThread-284, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 05634709-419d-4f95-a0c6-a2f05201ea83] INFO  c.v.h.nfvm.k8.VmTemplatesUtils- vm template name photon-3-kube-v1.21.8-vmware.1-tkg.2-49e70fcb8bdd006b8a1cf7823484f98f-19358802, vm template path photon-3-kube-v1.21.8-vmware.1-tkg.2-49e70fcb8bdd006b8a1cf7823484f98f-19358802 for byoi info
2022-11-03 07:10:28.063 UTC [InfraAutomationService_SvcThread-284, Ent: HybridityAdmin, Usr: tca-admin@VMLABS.COM, , TxId: 05634709-419d-4f95-a0c6-a2f05201ea83] ERROR c.v.h.nfvm.k8.VmTemplatesUtils- No vm template found to update byoi info
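
The ERROR entries above ("No vm template found to update byoi info") appear to be non-fatal here, since the flow continues on to the addon update. To separate real failures from noise for a given transaction, filter by log level as well. A sketch:

# Show only warnings and errors for the create transaction
grep "e0685878-f873-4b8c-a979-2c656e2edca7" common/logs/admin/app.log \
  | grep -E " WARN | ERROR "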


\common\logs\k8s-bootstrapper\bootstrapperd.log : records the Tanzu CLI commands that the bootstrapper runs:

Delete Cluster Operation:

Nov  3 04:58:09 apiserverd[3012] : [Info-common] : run cmd [tanzu [management-cluster delete core-mgmt -y -v 9 --log-file=/common/logs/tkg/20221103T045809.log --timeout=30m]] with env [[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOME=/root LOGNAME=root USER=root SHELL=/bin/sh MachineHealthCheck=off]]
Nov  3 05:02:23 apiserverd[3012] : [Warning-Utils] : deleting file [/root/.config/tanzu/tkg/providers/infrastructure-vsphere/v0.7.12/cluster-template-definition-core-mgmt.yaml]
Nov  3 05:02:23 apiserverd[3012] : [Warning-Utils] : deleting file [/opt/vmware/k8s-bootstrapper/mgmt_core-mgmt.yaml]
Nov  3 05:02:43 apiserverd[3012] : [Info-ApiServer] : Force delete mgmt cluster [core-mgmt: 8a366956-55ff-4c01-a6b9-d6f72570d920] completes
Nov  3 05:03:38 apiserverd[3012] : [Warning-Utils] : mkdirConfigDir: path [/opt/vmware/k8s-bootstrapper] exist
Nov  3 05:03:38 apiserverd[3012] : [Warning-Utils] : deleting file [/root/.config/tanzu/tkg/providers/infrastructure-vsphere/v0.7.12/bootstrapper_customize/core-mgmt.yaml]


Create Cluster Operation:

Nov  3 05:03:38 apiserverd[3012] : [Info-customize_tkg] : Using configuration file: /opt/vmware/k8s-bootstrapper/mgmt_core-mgmt.yaml
Nov  3 05:03:45 apiserverd[3012] : [Info-controller] : create management cluster core-mgmt
Nov  3 05:03:45 apiserverd[3012] : [Info-adapter] : Create Management cluster cmd [ tanzu management-cluster create --infrastructure=vsphere --name core-mgmt --plan core-mgmt -v 9 --file /opt/vmware/k8s-bootstrapper/mgmt_core-mgmt.yaml --log-file /common/logs/tkg/20221103T050345.log --timeout=30m --vsphere-controlplane-endpoint 192.168.30.50 --deploy-tkg-on-vSphere7]
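
Each of these entries also names the per-run log file (here /common/logs/tkg/20221103T050345.log), which is where the detailed Tanzu output shown in the TKG section below lands. To list every Tanzu command the bootstrapper executed, a sketch:

# List all tanzu invocations recorded by the bootstrapper
grep -E "cmd \[ ?tanzu" common/logs/k8s-bootstrapper/bootstrapperd.log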


Workload Cluster Creation:

Nov  3 07:10:42 apiserverd[3012] : [Info-customize_tkg] : Using configuration file: /opt/vmware/k8s-bootstrapper/workload_core-amf-workload.yaml
Nov  3 07:10:46 apiserverd[3012] : [Info-controller] : Creating workload cluster [2dc9e18e-eb84-4ec8-b235-c1bb8a2bdc63:core-amf-workload]...
Nov  3 07:10:49 apiserverd[3012] : [Info-controller] : the ReadyReplicas of MachineDeployment [core-amf-workload-large-node-pool] is [0], non-termiated machine is 1, Replicas is 1
Nov  3 07:10:49 apiserverd[3012] : [Info-controller] : Cluster [2dc9e18e-eb84-4ec8-b235-c1bb8a2bdc63:core-amf-workload] still creating, status: [cluster control plane is still being initialized, cluster infrastructure is still being provisioned]
Nov  3 07:10:59 apiserverd[3012] : [Info-controller] : Cluster [2dc9e18e-eb84-4ec8-b235-c1bb8a2bdc63:core-amf-workload] still creating, status: [cluster control plane is still being initialized, cluster infrastructure is still being provisioned]
Nov  3 07:10:59 apiserverd[3012] : [Info-controller] : the ReadyReplicas of MachineDeployment [core-amf-workload-large-node-pool] is [0], non-termiated machine is 1, Replicas is 1
Nov  3 07:11:04 apiserverd[3012] : [Warning-ApiServer] : addon not exist [2dc9e18e-eb84-4ec8-b235-c1bb8a2bdc63] in cluster from store. err: db: key not found
Nov  3 07:11:09 apiserverd[3012] : [Info-controller] : the ReadyReplicas of MachineDeployment [core-amf-workload-large-node-pool] is [0], non-termiated machine is 1, Replicas is 1
Nov  3 07:11:09 apiserverd[3012] : [Info-controller] : Cluster [2dc9e18e-eb84-4ec8-b235-c1bb8a2bdc63:core-amf-workload] still creating, status: [cluster control plane is still being initialized, cluster infrastructure is still being provisioned]
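
The "still creating" lines are simply the bootstrapper polling the Cluster API objects every ten seconds or so, which is expected while nodes provision. If you have a kubeconfig for the management cluster, the same status can be checked live; a sketch, assuming kubectl is already pointed at core-mgmt:

# Inspect the Cluster API objects behind the workload cluster
kubectl get clusters,machinedeployments,machines -A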


TKG Cluster Logs:

\common\logs\kbs-techsupport\k8s-bootstrapper-20221103T070509\common\versions : gives the cluster version information for TKG and kubectl:

nfv_k8s_bootstrapper.properties
bootstrapper_vm: 2.0.1
tkg: v1.4.2
kubectl: 1.21.8
buildnumber: 19402304
k8s-bootstrapper: 74bf46ee1d935862b19e51af7e0f79411cadf1d5
nodeconfig-operator: 6fc04a84396bd466e9a504e62a2195b25d18362e
nfv-vm-operator: ad28cfa31eb2836c3161c3c5119ada7e22d60d39
nfv-ccli: 8fcbb65428281c50e55093d8006fb465f7cf0ce5
bootstrapper-helm-infra: bcc0a020452d5d73e654a11bc9196c459547889a

\common\logs\kbs-techsupport\k8s-bootstrapper-20221103T070509\crashd-cluster-core-mgmt\core_mgmt_master_control_plane_96zgk :
sudo_df__i.txt : filesystem inode usage (output of df -i); a quick check follows the excerpt:

Filesystem      Inodes IUsed   IFree IUse% Mounted on
devtmpfs       1018982   392 1018590    1% /dev
tmpfs          1020919     1 1020918    1% /dev/shm
tmpfs          1020919  2131 1018788    1% /run
tmpfs          1020919    17 1020902    1% /sys/fs/cgroup
/dev/sda3      2621440 67050 2554390    3% /
tmpfs          1020919    16 1020903    1% /tmp
/dev/sda2            0     0       0     - /boot/efi
/dev/sda4       655360    11  655349    1% /vmware-data
shm            1020919     1 1020918    1% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/1053eb3cf4957cced95bbe8dde76918dc4f5deea10b50b2c18afed6136a8cd2e/shm
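
Inode exhaustion on a node filesystem is a common root cause of pod and image-pull failures, so this file is worth a quick scan. A sketch that flags any filesystem at or above 90% inode usage:

# Print the mount point and IUse% of filesystems running low on inodes
awk 'NR>1 && $5+0 >= 90 {print $6, $5}' sudo_df__i.txt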
  • sudo_ifconfig__a.txt : IP address details (output of ifconfig -a).
  • \common\logs\tkg : TKG deployment logs:
vSphere 7.0 Environment Detected.
You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes
an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated
resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client.
Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may
deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0.
I1103 05:03:49.258296 init.go:198] Deploying TKG management cluster on vSphere 7.0 ...

Setting up management cluster...
I1103 05:03:51.263524 client.go:131] Creating kind cluster: tkg-kind-cdhkndtldcp41p1oj35g
I1103 05:03:52.263965 logger.go:115] Image: projects.registry.vmware.com/tkg/kind/node:v1.21.8_vmware.1 present locally
I1103 05:09:18.649678 logger.go:117] I1103 05:07:55.503455 34 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.21.8+vmware.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1103 05:08:02.999557 34 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1103 05:08:03.440361 34 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost tkg-kind-cdhkndtldcp41p1oj35g-control-plane] and IPs [100.64.0.1 172.20.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1103 05:08:04.365796 34 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1103 05:08:05.937885 34 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1103 05:08:06.267766 34 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1103 05:08:06.367212 34 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost tkg-kind-cdhkndtldcp41p1oj35g-control-plane] and IPs [172.20.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost tkg-kind-cdhkndtldcp41p1oj35g-control-plane] and IPs [172.20.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1103 05:08:08.494112 34 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using  in 0


I1103 05:14:48.133320 init.go:198] Start creating management cluster…
I1103 05:31:31.600218 upgrade_region.go:427] Successfully reconciled package: tanzu-addons-manager
I1103 05:35:21.638589 upgrade_region.go:427] Successfully reconciled package: metrics-server
I1103 05:35:31.595665 upgrade_region.go:427] Successfully reconciled package: vsphere-cpi
I1103 05:35:31.595666 upgrade_region.go:427] Successfully reconciled package: vsphere-csi
I1103 05:35:51.592776 upgrade_region.go:427] Successfully reconciled package: antrea
I1103 05:35:51.598259 client.go:161] Deleting kind cluster: tkg-kind-cdhkndtldcp41p1oj35g
Management cluster created!
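
Each deployment writes a fresh timestamped file under \common\logs\tkg (matching the --log-file path seen in bootstrapperd.log), so the newest file is usually the run you want. A sketch to pull the milestone lines out of the latest log:

# Grab the most recent TKG log and extract the key milestones
ls -t common/logs/tkg/*.log | head -1 \
  | xargs grep -E "kind cluster|Deploying TKG|reconciled package|cluster created"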

Ashutosh Dixit

I am currently working as a Senior Technical Support Engineer with VMware Premier Services for Telco. Before this, I worked as a Technical Lead with Microsoft Enterprise Platform Support for Production and Premier Support. I am an expert in High-Availability, Deployments, and VMware Core technology along with Tanzu and Horizon.
