VCF 9 - VKS with NSX VPCs
2025-06-27 23:00 +0200
Introduction
K8s has been a crucial part of the VMware ecosystem for many years, and the level of integration with other products like NSX and AVI has changed a lot over time. The same is true for the naming, from “vSphere with Tanzu” to “vSphere IaaS” to “VKS”, and perhaps there will be more changes in the future. In this blog post we will shine a spotlight on the integration of VKS with NSX VPCs, which is, from my point of view, a great enhancement from a tenancy perspective.
As preparation for this blog post it is highly recommended to read the previous blog article from my esteemed colleague Daniel Krieger aka SDN-Warrior about “VCF 9 and VPCs as part of NSX 9”.
LAB Environment
- VCF 9
- NSX 9
- ESX 9
- NSX Native Load Balancer (no AVI integration)
- VyOS Router
- pfSense
The following drawing shows an overview of the VPCs and the corresponding network connectivity used for the VKS integration in my lab.
Requirements
Most of the requirements from the past are still valid and are listed below.
- Management network for the supervisor Cluster
- DRS set to Fully Automated
- vSphere HA
- NSX Edges in large form factor (vm-type)
- Minimum 3 ESXi Hosts in the vSphere cluster
- Service CIDR
- NTP Server
- DNS Server
Besides the unchanged requirements, the following items are specific to the initial integration of the supervisor cluster with VPCs.
- Predefined NSX Project where the Supervisor should be deployed
- VPC Connectivity Profile
- External IP Block
- Private Transit Gateway IP Block
- Private VPC CIDRs
- T0 router deployed in Active/Standby HA mode
For the creation of the vSphere Namespaces, you can either create additional NSX projects or use the same project as for the supervisor cluster. If you want to use a dedicated NSX project, you need to create it manually, including the Transit Gateway, the external IP block and the private transit gateway IP block. The corresponding VPCs can either be auto-created by the vSphere Namespace creation process, or you can manually create VPCs and select them while creating a new vSphere Namespace. A manually created VPC can also be used for multiple vSphere Namespaces, but in most situations a vSphere Namespace is used for isolating services, and a dedicated VPC is more applicable.
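As a side note, whether a project already contains pre-created VPCs can also be checked quickly through the NSX Policy API instead of the UI. The following is just a minimal sketch from my lab, assuming the NSX Manager FQDN nsx00.vcf00.training and the project vks-dev that is used later in this post; adjust org, project and credentials to your environment.
# List all VPCs of the NSX project "vks-dev" (-k only because my lab uses a self-signed certificate)
curl -k -s -u admin 'https://nsx00.vcf00.training/policy/api/v1/orgs/default/projects/vks-dev/vpcs'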
Supervisor Deployment
The implementation is very simple if the NSX projects and VPCs are already predefined, since all the required networks are already defined in the chosen VPC.
As shown in the following screenshot, there is now a new option during supervisor activation called “VCF Networking with VPC”. For the integration of VKS with NSX projects and VPCs you have to select this option and proceed to the next step.
The second step looks familiar if you have already used VKS with vSphere 8 and are aware of the deployment option “vSphere Zone Deployment”. For the deployment of a single cluster you have to select the option “Cluster Deployment”, and the vSphere clusters within the used vCenter will be validated against the requirements mentioned above. As soon as the validation is completed, you can select your compatible vSphere cluster and proceed to the next step.
In the next step you define the management network, including the IP assignment mode, the predefined network for the management of the VKS supervisor nodes, as well as the IP and subnet parameters.
It is also important to choose valid NTP and DNS servers to make sure the communication between the supervisor nodes and other components like vCenter and the NSX Manager works.
The corresponding NSX Project and VPC depend on the selected management network, which is pubseg01 in this example.
After the definition of the management network, it is required to define the workload network parameters as shown in the screenshot below. This is the first section of the configuration that is really different from the past: an ingress or egress network no longer has to be defined. Instead, the external and private IP blocks of the predefined VPC are used. The service CIDR has the same function as in earlier VKS versions and is used for services within K8s, but not beyond the K8s nodes. DNS and NTP are also required in this stage of the deployment.
For the advanced settings you are able to choose the size of the supervisor control plane, which is small for my lab.
Optionally you are able to select an FQDN for the API of the supervisor, which is not required for my lab.
As a result you will see a summary of the most important settings from the last five steps. After a final validation of those settings the deployment can be started.
As a result of the deployment, the following components and NSX objects are created.
- VPC with the name kube-system_<some id>
- Subnet for the supervisor control plane workload network, assigned to the VPC kube-system_<some id> and based on the private VPC CIDR defined in the workload network definition
- VPC with the name vmware-system-supervisor-services-vpc_<some id>
- Supervisor control plane nodes
- NSX native LoadBalancer mapped to the VPC kube-system_<some id>, used for services like the K8s API, the CSI controller and some other VKS services (running on the VPC gateway)
- SNAT rule for egress traffic (running on the VPC gateway) with an IP assigned from the External IP Block of the corresponding NSX project
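If you prefer to cross-check these objects outside of the NSX UI, the VPC API can be queried as well. A small sketch, assuming the same NSX Manager as above; the project name and the exact VPC ID (the <some id> suffix) have to be replaced with the values from your environment.
# List the subnets created in the supervisor VPC (replace <project> and the VPC ID with your values)
curl -k -s -u admin 'https://nsx00.vcf00.training/policy/api/v1/orgs/default/projects/<project>/vpcs/kube-system_<some id>/subnets'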
Creating vSphere Namespaces
A vSphere Namespace can be created and assigned to an automatically created VPC as part of the vSphere Namespace creation process, or assigned to a predefined VPC. If the VPC is not predefined, it will be added within the default NSX project. A predefined VPC can be part of the default NSX Project or of a dedicated NSX project.
The following screenshot shows the creation of a vSphere Namespace with the option I would like to override the default network settings unchecked.
Based on this configuration a vSphere Namespace will be created as well as a new autocreated VPC.
The name of the autocreated VPC starts with the name of the vSphere Namespace followed by a random ID, and the following objects are created in NSX.
- VPC with the name ns-test01_482ea8ab-e57f-473b-a677-457ff5aa91b0
- SNAT rule for egress traffic (running on the VPC gateway) with an IP assigned from the External IP Block of the corresponding NSX project
- NSX native LoadBalancer mapped to the VPC ns-test01_482ea8ab-e57f-473b-a677-457ff5aa91b0, used for services like the K8s API of the K8s clusters that will be created within the vSphere Namespace
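The autocreated VPC can also be spotted via the API by filtering on the vSphere Namespace name prefix. Again just a sketch, assuming the autocreated VPC ends up in the default NSX project (API ID default) and jq is available on the machine running curl.
# List the VPC display names in the default project and filter for the autocreated namespace VPC
curl -k -s -u admin 'https://nsx00.vcf00.training/policy/api/v1/orgs/default/projects/default/vpcs' \
  | jq -r '.results[].display_name' | grep '^ns-test01_'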
In the next example I will show the creation of a vSphere Namespace with the assignment of a manually created VPC named vpc-vks-prod.
As shown in the following screenshot, it is required to select the checkbox for I would like to override the default network settings. After hitting Next you will see the Network Settings, where Consume existing VPC and the desired NSX Project are selected.
For some reason the desired VPC vpc-vks-prod is not in the list of available VPCs and cannot be selected.
Why is the desired VPC not in the list? It is already pre-created!
To identify the cause of this issue you should check the log file /var/log/vmware/wcp/wcpsvc.log on vCenter. There you will find more detailed information about why a VPC is listed as an available VPC or not.
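To narrow the log down to the relevant validation messages, a simple filter already helps; the search string is taken from the entries shown below.
grep -i "not compatible" /var/log/vmware/wcp/wcpsvc.log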
2025-06-18T20:59:28.926Z debug wcp [nsxt/validator.go:79] Processing vpc vpc01
2025-06-18T20:59:28.926Z debug wcp [nsxt/validator.go:79] Processing vpc vpc02
2025-06-18T20:59:28.927Z debug wcp [nsxt/validator.go:79] Processing vpc vks-dev
2025-06-18T20:59:28.927Z debug wcp [nsxt/validator.go:79] Processing vpc vpc-vks-prod
2025-06-18T20:59:28.992Z debug wcp [projects/utils.go:76] VPC '/orgs/default/projects/vks-dev/vpcs/vpc-vks-prod' is not compatible: VPC /orgs/default/projects/vks-dev/vpcs/vpc-vks-prod has no compatible Load balancer providers.
As mentioned in the logs above, the VPC vpc-vks-prod is not compatible because it has no compatible load balancer.
At this point it gets a bit tricky, since the NSX load balancing service cannot be enabled for manually created VPCs from the UI; there is just the option to enable AVI, but not the NSX native load balancer. The good news is that there is a way through the API, so I inspected the configuration of an autocreated VPC and checked the NSX API documentation to find the required API call.
To enable the NSX native load balancer, you can execute the following API call, shown here as a curl command.
curl --location --request PUT 'https://nsx00.vcf00.training/policy/api/v1/orgs/default/projects/vks-dev/vpcs/vpc-vks-prod/vpc-lbs/lb-vks-prod01' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46Vk13YXJlMSFWTXdhcmUxIQ==' \
--header 'Cookie: JSESSIONID=08429D3CDD35E7947EA39D8565C885E8' \
--data '{
"resource_type": "LBService",
"enabled": true,
"size":"SMALL"
}'
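To verify the result, a GET request against the same endpoint should return the load balancer service object that was just created (same NSX Manager and credentials as in the PUT above).
curl --location --request GET 'https://nsx00.vcf00.training/policy/api/v1/orgs/default/projects/vks-dev/vpcs/vpc-vks-prod/vpc-lbs/lb-vks-prod01' \
--header 'Authorization: Basic YWRtaW46Vk13YXJlMSFWTXdhcmUxIQ=='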
After the LB Service is enabled for the VPC, it is also visible in the list of VPCs within the vSphere Namespace creation process.
Within the NSX UI you can also validate whether the NSX load balancer is activated for a specific VPC.
NSX Load Balancer not activated:
NSX Load Balancer activated:
The NSX objects created here are the same as for a vSphere Namespace with an autocreated VPC; only the assigned VPC and NSX project may differ.
Creating K8s Clusters
The creation of a K8s cluster within VKS is unchanged; a cluster can be created as shown in the following example.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tkc03
  namespace: ns-test04
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["198.16.0.0/12"]
    pods:
      cidrBlocks: ["192.12.0.0/16"]
  topology:
    class: tanzukubernetescluster
    version: v1.29.4---vmware.3-fips.1-tkg.1
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: node-pool
          name: node-pool-1
          replicas: 2
    variables:
      - name: vmClass
        value: best-effort-small
      - name: storageClass
        value: tanzu
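The manifest above can be applied with kubectl against the supervisor while logged in to the corresponding vSphere Namespace context. A short usage sketch, assuming the manifest is saved as tkc03.yaml:
# Apply the cluster manifest and watch the Cluster API objects come up
kubectl apply -f tkc03.yaml
kubectl -n ns-test04 get cluster,machines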
After creating the K8s cluster, the following additional objects are created within NSX.
- Workload network for the K8s worker and master nodes, based on the private VPC CIDR
- LoadBalancer virtual services for the K8s API of the K8s cluster
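The load balancer VIP created for the K8s API can also be read directly from the Cluster object, which is handy when matching it later against the NSX objects. A small sketch using the standard Cluster API field for the control plane endpoint:
# Print the control plane endpoint (the LB VIP of the K8s API) of the cluster tkc03
kubectl -n ns-test04 get cluster tkc03 -o jsonpath='{.spec.controlPlaneEndpoint.host}'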
Isolation tests
To test the isolation I performed the following tests.
- Traceflow from the k8s master of tkc02 to the k8s worker of tkc02 (communication within the same vSphere Namespace and VPC)
- Traceflow from the k8s master of tkc02 to the k8s master of tkc04 (communication between different networks within the same VPC)
- Traceflow from the k8s master of tkc02 to the k8s master of tkc03 (communication between different vSphere Namespaces assigned to different VPCs)
Traceflow from k8s master of tkc02 to k8s worker of tkc02 (Layer 2 communication same VPC)
The following traceflow proved that the communication for multiple VMs within the same VPC and network is working as expected.
Traceflow from k8s master of tkc02 to k8s master of tkc04 (Layer 3 communication same VPC)
The following traceflow proved that the communication for multiple VMs within the same VPC connected to different networks is working as expected and routing is done over the VPC gateway only.
Traceflow from k8s master of tkc02 to k8s master of tkc03
The following traceflow proved that the communication from VPC vpc-vks-dev will not be routed to the VPC vpc-vks-prod.
Validating Routing, NAT and LB through NSX Edge CLI
In case you are using NSX projects, the likelihood of using multiple projects is very high. That also means you are using multiple transit gateways, which can be a bit confusing if they are running on the same edge cluster. While using the CLI for inspecting the status of the logical routers, you will see a DR and an SR with the exact same name for every created transit gateway. This gets worse the more NSX projects with transit gateways you have running on the same edge cluster. An example can be seen in the following output from my lab.
nsxe01> get gateways
Fri Jun 27 2025 UTC 20:01:30.530
Gateway
UUID VRF Gateway-ID Name Type Ports Neighbors
736a80e3-23f6-5a2d-81d6-bbefb2786666 0 0 TUNNEL 4 6/5000
0e1de47c-e3b6-4c38-a904-219a50a59c72 1 4 SR-t0-01 SERVICE_ROUTER_TIER0 7 2/50000
fecd5bf3-5cfc-4666-9a8b-2488f9384a4d 3 2 DR-t0-01 DISTRIBUTED_ROUTER_TIER0 5 2/50000
4bafe1b4-cdd6-4719-87bc-a989664c219e 4 5 SR-VRF-Default Transit Gateway VRF_SERVICE_ROUTER_TIER0 5 0/50000
59afa3ce-f036-46bc-a577-4cab48b22725 5 1 DR-VRF-Default Transit Gateway VRF_DISTRIBUTED_ROUTER_TIER0 9 0/50000
9b498501-e115-4eda-96f7-037b15b8f006 6 11 DR-t1-01 DISTRIBUTED_ROUTER_TIER1 4 2/50000
d405b694-d269-4db5-89ac-aae80eeb69fe 7 13 SR-vpc-mgmt01 SERVICE_ROUTER_TIER1 5 2/50000
d662073d-6063-4445-95e7-e9eb0019ca8b 8 12 DR-vpc-mgmt01 DISTRIBUTED_ROUTER_TIER1 6 0/50000
7f02acd5-b834-4d76-8dc9-52155ff1fcb3 9 15 SR-vpc02 SERVICE_ROUTER_TIER1 5 2/50000
b3b32677-f610-460a-a900-aa7700177600 12 14 DR-vpc02 DISTRIBUTED_ROUTER_TIER1 4 0/50000
075e7400-998a-4b90-8f9e-c9483a55b3f3 20 29 SR-kube-system_728cc606-4e34-462 SERVICE_ROUTER_TIER1 5 2/50000
4-bd4c-0bfdfa2792d5
db84e3e8-306e-46a4-bc72-840561b08dc1 21 28 DR-kube-system_728cc606-4e34-462 DISTRIBUTED_ROUTER_TIER1 4 0/50000
4-bd4c-0bfdfa2792d5
7e8aa7e3-1f09-4ad4-b155-4803d3335418 22 31 SR-vmware-system-supervisor-serv SERVICE_ROUTER_TIER1 5 2/50000
ices-vpc_23142704-a564-440f-ace9
-820a21b28f06
405be5e0-4268-498e-90fb-dbbe71c59f1c 23 33 SR-ns-test01_482ea8ab-e57f-473b- SERVICE_ROUTER_TIER1 5 2/50000
a677-457ff5aa91b0
0e8a4292-c9aa-49ef-9faa-2058c7b5043a 25 37 SR-VRF-Default Transit Gateway VRF_SERVICE_ROUTER_TIER0 5 0/50000
93d2e99b-4cb1-4f40-9cdd-0d9eb3376acb 26 36 DR-VRF-Default Transit Gateway VRF_DISTRIBUTED_ROUTER_TIER0 6 2/50000
f6be5a57-1a7a-483f-829d-32329ae2665a 27 40 SR-vpc-vks-dev SERVICE_ROUTER_TIER1 5 2/50000
6b4135fb-bd55-4d9b-a9be-30d2c1d4eb39 28 42 SR-vpc-vks-shared-services SERVICE_ROUTER_TIER1 5 2/50000
80bf7130-067f-40bf-a062-daf5ea2d17a3 30 41 DR-vpc-vks-shared-services DISTRIBUTED_ROUTER_TIER1 4 0/50000
1ef00d98-cef5-4db1-bfb3-994414cb3da4 31 39 DR-vpc-vks-dev DISTRIBUTED_ROUTER_TIER1 4 0/50000
11e9ad2f-74d8-4156-984a-5de036c0d9f8 32 44 SR-vpc-vks-prod SERVICE_ROUTER_TIER1 5 2/50000
b4b334d1-bd83-4bfa-98c9-8bfcc9d13124 33 43 DR-vpc-vks-prod DISTRIBUTED_ROUTER_TIER1 4 0/50000
1de383e3-f7c3-4b7f-be36-9b9f797a7bdf 34 46 SR-vpc-test SERVICE_ROUTER_TIER1 5 2/50000
You might be able to differentiate them by comparing the port count shown in the result of the previous command, or by executing the command get gateway <UUID> interfaces.
As soon as you know which transit gateway belongs to the NSX project you want to inspect, you can proceed with the next steps.
As an example, we can try to discover the source router of a SNAT IP and of the LoadBalancer VIP of a K8s cluster API within the VPC vpc-vks-prod. The IPs we are looking for are highlighted in the following two screenshots.
NAT rule of VPC vpc-vks-prod:
LoadBalancer VIP of the K8s cluster API in VPC vpc-vks-prod:
Based on the vSphere Namespace and the assigned NSX project and VPC, we already have the following information.
- NSX project: tenant01
- VPC name: vpc-vks-prod
- T0 router the corresponding transit gateway is connected to: t0-01
- Default transit gateway of the NSX project: Default Transit Gateway
- VPC gateway name: vpc-vks-prod
Based on what we have learned, we already know the IPs should be located on the VPC gateway, but sometimes it is still required to validate that this is also what the T0 router has learned and that routing works as expected.
For the validation we should first connect to the edge node running the active T0 router. Which edge node runs the active T0 can be checked in the UI under the Default NSX project (Networking --> Tier-0 Gateways) by clicking on the configured HA mode, as shown in the following screenshot.
Once you are connected to the edge node, you should switch to the VRF of the T0 service router, which is VRF 1 in my lab, as shown in the output of the command get gateways. Afterwards you should enter get route to validate the next hop.
nsxe01> vrf 1
nsxe01(tier0_sr[1])> get route
Fri Jun 27 2025 UTC 20:47:24.494
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, o - OSPF
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, isrs: Inter-SR-static
ivs: Inter-VRF-Static, tgws: Transit-Gateway-Static, > - selected route, * - FIB route
Total number of routes: 25
b > * 0.0.0.0/0 [20/0] via 192.168.19.254, uplink-281, 01w1d14h
b > * 0.0.0.0/0 [20/0] via 192.168.20.254, uplink-288, 01w1d14h
tgws> * 10.200.0.0/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.1/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.2/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.3/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.4/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.5/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.6/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.7/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.8/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.9/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.200.0.10/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d03h
tgws> * 10.200.0.11/32 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d02h
tgws> * 10.200.0.64/26 [5/0] via 169.254.2.2, inter-vrf-280, 01w2d04h
tgws> * 10.201.0.0/32 [5/0] via 169.254.2.3, inter-vrf-280, 01w2d04h
tgws> * 10.201.0.1/32 [5/0] via 169.254.2.3, inter-vrf-280, 01w2d01h
tgws> * 10.201.0.2/32 [5/0] via 169.254.2.3, inter-vrf-280, 01w2d00h
tgws> * 10.201.0.3/32 [5/0] via 169.254.2.3, inter-vrf-280, 01w1d23h
t0c> * 100.64.0.0/31 is directly connected, linked-305, 06w6d10h
t0c> * 169.254.2.0/23 is directly connected, inter-vrf-280, 06w6d10h
t0c> * 192.168.19.0/24 is directly connected, uplink-281, 06w6d10h
t0c> * 192.168.20.0/24 is directly connected, uplink-288, 06w6d10h
t0c> * fc0d:3a64:2334:3000::/64 is directly connected, linked-305, 06w6d10h
t0c> * fc0d:3a64:2334:fde8::/64 is directly connected, inter-vrf-280, 06w6d10h
t0c> * fe80::/64 is directly connected, inter-vrf-280, 06w6d10h
nsxe01(tier0_sr[1])>
Based on the prefix tgws you can see that the next hop for the LB VIP and the SNAT IP is behind the corresponding transit gateway.
As soon as you have this information, the next step is to switch to the VRF of the transit gateway service router, which is VRF 25 in my lab. There you execute the same command, which delivers the following output.
nsxe01> vrf 25
nsxe01(tier0_vrf_sr[25])> get route
Fri Jun 27 2025 UTC 21:09:33.265
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, o - OSPF
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR, isrs: Inter-SR-static
ivs: Inter-VRF-Static, tgws: Transit-Gateway-Static, > - selected route, * - FIB route
Total number of routes: 14
ivs> * 0.0.0.0/0 [1/0] via 169.254.2.1, inter-vrf-588, 01w2d04h
t1n> * 10.201.0.0/32 [3/0] via 100.64.0.1, downlink-595, 01w2d04h
t1l> * 10.201.0.1/32 [3/0] via 100.64.0.1, downlink-595, 01w2d02h
t1n> * 10.201.0.2/32 [3/0] via 100.64.0.3, downlink-639, 01w2d00h
t1l> * 10.201.0.3/32 [3/0] via 100.64.0.3, downlink-639, 01w1d23h
t0c> * 100.64.0.0/31 is directly connected, downlink-595, 01w2d04h
t0c> * 100.64.0.2/31 is directly connected, downlink-639, 01w2d00h
t0c> * 100.64.0.4/31 is directly connected, downlink-671, 01w0d10h
t0c> * 169.254.2.0/23 is directly connected, inter-vrf-588, 01w2d04h
t0c> * fc0d:3a64:2334:fde8::/64 is directly connected, inter-vrf-588, 01w2d04h
t0c> * fcc4:2e20:cb1f:800::/64 is directly connected, downlink-639, 01w2d00h
t0c> * fcc4:2e20:cb1f:1400::/64 is directly connected, downlink-595, 01w2d04h
t0c> * fcc4:2e20:cb1f:7c00::/64 is directly connected, downlink-671, 01w0d10h
t0c> * fe80::/64 is directly connected, inter-vrf-588, 01w2d04h
Based on the prefixes t1n and t1l you can see that the routes are pointing to a NAT IP and an LB VIP, but in this case the next-hop router is a VPC gateway instead of a T1 router.
To find the next-hop router you can execute the command get interfaces within the VRF of the transit gateway, as shown in the following output. The output is truncated and shows only the affected interface of the transit gateway, which has the second IP of the /31 transit network used between the transit gateway and the VPC gateway.
Interface : 77c9406c-7d8c-5a78-8806-58c450a258af
Ifuid : 639
Name : default-vpc-vks-prod-t0_lrp
Fwd-mode : IPV4_ONLY
Internal name : downlink-639
Mode : lif
Port-type : downlink
IP/Mask : 100.64.0.2/31;fcc4:2e20:cb1f:800::1/64(NA);fe80::50:56ff:fe56:4452/64(NA)
MAC : 02:50:56:56:44:52
VNI : 74756
Access-VLAN : untagged
LS port : e88ce6d8-e06e-4f71-8eb5-74b66d3a0866
Urpf-mode : STRICT_MODE
DAD-mode : LOOSE
RA-mode : SLAAC_DNS_THROUGH_RA(M=0, O=0)
Admin : up
Op_state : up
Enable-mcast : False
MTU : 1500
arp_proxy
Based on the previous output we validated that the traffic towards the IPs 10.201.0.2 and 10.201.0.3 is routed to the VPC gateway with the name vpc-vks-prod.
After switching to the VRF of the VPC gateway, which is VRF 32 in my lab, we can validate the existence of the LB VIP and the SNAT IP with the following commands.
nsxe01> vrf 32
nsxe01(tier1_sr[32])> get firewall interfaces
...
Interface : 352e9a1b-9b23-43c7-be85-dec752255429
Type : UPLINK
Sync enabled : true
Name : default-vpc-vks-prod-t1_lrp
VRF ID : 32
Context entity : 11e9ad2f-74d8-4156-984a-5de036c0d9f8
Context name : SR-vpc-vks-prod
Interface : 50f8bae3-8ea8-4795-a404-f1b40dcdc389
Type : BACKPLANE
Sync enabled : true
Name : bp-sr0-port
VRF ID : 32
Context entity : 11e9ad2f-74d8-4156-984a-5de036c0d9f8
Context name : SR-vpc-vks-prod
...
From the previous output we need to copy the UUID of the interface of type uplink and execute the following command to validate that the SNAT IP is configured on the VPC gateway.
nsxe01> get firewall <interface uuid> ruleset rules
Fri Jun 27 2025 UTC 21:28:15.304
DNAT rule count: 2
Rule ID : 1
Rule : in protocol tcp natpass from any to ip 10.201.0.3 port 6443 lb lbtype L4 lbidletimeout 1800 lbclosetimeout 8 lboptions 8ffc3a7d-8423-4bb5-b572-8406be91e55f 7c0b6e7c-21ea-4fd3-a0df-f7563bf3ba46 tag 'loadbalancer'
Rule ID : 2
Rule : in protocol tcp natpass from any to ip 172.18.1.3 port 6443 lb lbtype L4 lboptions 8ffc3a7d-8423-4bb5-b572-8406be91e55f 6e12f884-577f-4c14-9060-85024de60790 with lbrule tag 'loadbalancer'
SNAT rule count: 2
Rule ID : 3
Rule : out protocol tcp natpass from any to ip 172.18.1.3 port 6443 lb lbtype L7 lboptions 8ffc3a7d-8423-4bb5-b572-8406be91e55f 6e12f884-577f-4c14-9060-85024de60790 tag 'loadbalancer'
Rule ID : 536870932
Rule : out protocol any prenat from ip 172.18.1.0/24 to any snat ip 10.201.0.2 port 37001-65535
Firewall rule count: 0
For the load balancer you can use the command get load-balancers outside of any VRF, search for the name of the load balancer shown under the Network Services of the VPC, and validate the service router ID shown in the following output. The output is truncated and just shows the affected load balancer.
nsxe01> get load-balancers
...
Load Balancer
Applied To :
Logical Router Id : b4b334d1-bd83-4bfa-98c9-8bfcc9d13124
Service Router Id : 11e9ad2f-74d8-4156-984a-5de036c0d9f8
Display Name : lb-vks-prod01
Enabled : True
UUID : 8ffc3a7d-8423-4bb5-b572-8406be91e55f
Log Level : LB_LOG_LEVEL_INFO
Relax Scale Validation : True
Size : SMALL
Tenant Context :
Org : default
Proj : vks-dev
Vpc : 4h1WsqhV
Virtual Server Id : 7c0b6e7c-21ea-4fd3-a0df-f7563bf3ba46
...
Summary
The deployment is very simple and the requirements are only slightly changed compared to the integration without VPCs. From my point of view this type of integration is a very important milestone in the journey towards a multi-tenant platform that supports dynamic K8s workloads while still being secure and able to isolate the different K8s workloads. Isolation was already possible before the integration of VKS and VPCs, but it was much more complex and based on specific vDefend rules.
I did not cover all the possibilities with VKS and VPCs in this blog article, since vDefend can still be used within the VPCs, as well as the Antrea and AVI integrations. Those topics will be discussed in future blog articles.