VCF 9 - VKS with NSX VPCs and AVI
Tags: VCF, nested lab, vmware, network, nsx, VKS, NSX 9, Tanzu, K8s, AVI
4767 Words // ReadTime 21 Minutes, 40 Seconds
2026-02-11 21:00 +0100
Introduction
In my last blog post I wrote about the integration of VKS with NSX and VPCs. There I used the NSX native LoadBalancer, and the AVI integration was not covered. In addition to that post, I will now describe how VPCs can be used for VKS in combination with AVI. If you have already used AVI for VKS in the past, you might be aware of some limitations such as the vNIC limit of the Service Engines. The impact of this scalability limitation is reduced with the VPC-based AVI integration. Furthermore, the AVI tenancy feature is combined with NSX projects to ensure that workload isolation is implemented across all products: for VKS it is implemented with vSphere Namespaces, for NSX the tenancy is implemented by using NSX projects supplemented by VPCs, and for AVI an AVI tenant is automatically created for each NSX project. Sure, there are still some limitations and problems in the new integration, but without them it would be boring.
If you are not aware of the vNIC limitations regarding AVI and VKS integrations without using VPCs, I can recommend the following blog post “Scaling with NSX-T and ALB beyond vNIC limits”.
Furthermore, it is crucial to understand the general integration of VKS with VPCs, available in my previous blog post “VCF 9 - VKS with NSX VPCs”.
Architectural overview of the AVI integration
The general AVI architecture is unchanged and the NSX-Cloud for AVI is required for the VPC integration.
AVI still requires a data network and is limited to a one-arm load balancing setup. From the VPC point of view, however, the consumed data networks are VPC-backed networks of type private, and it is not required to manually create specific T1 routers or segments for the AVI integration.
Besides the data networks of type private, the virtual LoadBalancer IPs for the AVI virtual servers are allocated from the network pools assigned to the specific VPC.
By default the AVI integration can only take advantage of the private and public network types. The Private Transit Gateway network type cannot be used by default; this limitation is currently based on the VPC integration. For VKS, the automated workflow for creating LoadBalancer or Ingress deployments based on the AVI Kubernetes Operator (AKO) is currently limited to the public network type.
The following drawing shows the architecture in a high-level overview and focuses on the general AVI and VPC integration, not just on VKS. Therefore I mentioned the network types private, public and Private Transit Gateway used as IPs for AVI virtual services. Later in this blog post I will show a small hack how to use the Private Transit Gateway network type by manual intervention.
Within the drawing the previous data network for the AVI integration is now called Service Network and will be created once per VPC.
This network is now used to connect the Service Engines to the data network, but the management network of the Service Engine still needs to be created manually.
Furthermore, the connectivity requirement between AVI controller nodes, Service Engines, vCenter and NSX Manager still exists.
Requirements
The deployment of AVI with VKS and VPCs has the following requirements.
- vCenter 9.0 and above
- NSX 9.0 and above
- AVI 31.1.1 and above
- AVI enterprise or enterprise cloud services license
- VKS and VPC related networks, no additional networks needed for the AVI integration
- The VKS guest cluster must be deployed in a custom NSX project, otherwise the mapping between AVI tenant and NSX project is not working
- Replace the AVI controller certificate. The certificate content must match the FQDN, hostname and IPs of the controller with the corresponding CN and SAN contents
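A quick way to verify the replaced certificate is to inspect what the controller actually presents. A minimal sketch, assuming the lab controller FQDN avic09.lab.home used later in this post and that openssl is available on your workstation:

```shell
# Print the subject and SAN entries of the certificate presented by the
# AVI controller; FQDN, hostname and IPs must be covered by CN/SAN.
echo | openssl s_client -connect avic09.lab.home:443 -servername avic09.lab.home 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```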
LAB specific components
For the integration of VKS with NSX VPCs and AVI the following NSX objects are relevant.
- Default NSX Project:
  - T0 Router: t0-vks-01
  - VPC Connectivity Profile: Default VPC Connectivity Profile
  - External IP-Block: 10.100.55.0/24
  - Private Transit Gateway IP Block: 172.16.0.0/20
  - VPC: vpc-mgmt01 (for VKS supervisor nodes)
  - VPC network type public: pub-vks-infra (for VKS supervisor nodes)
  - T1 Router: T1-AVI-MGMT (for AVI Service Engine management network)
  - Segment: seg-avi-mgmt (for AVI Service Engine management network)
- Custom NSX Project: nsx-tenant1
  - VPC Connectivity Profile: Default VPC Connectivity Profile
  - External IP-Block: 10.100.56.0/24
  - Private Transit Gateway IP Block: 172.17.0.0/20
  - VPC: vpc-tenant01-prod (VPC for VKS guest cluster)
The NSX components are used for the VKS deployment and are mapped as shown below.
- VKS Supervisor Cluster:
  - Management Network: pub-vks-infra
  - NSX Project: Default
  - VPC Connectivity Profile: Default VPC Connectivity Profile
  - Private CIDR (VPC): 172.26.0.0/16
  - Service CIDR: 10.96.0.0/24
- vSphere Namespace: ns-test02
  - NSX Project: nsx-tenant1
  - VPC: vpc-tenant01-prod
  - VPC Connectivity Profile: Default VPC Connectivity Profile
  - Private CIDR (VPC): 172.19.0.0/20
Preparation for the deployment
In the blog post I assume the basic setup of VCF including NSX with VPCs and the AVI controllers is already done.
This blog post starts with the integration of AVI with NSX VPCs and vCenter based on the NSX Cloud.
The NSX Cloud for VPCs requires DHCP to be enabled for the management network; furthermore, IPv4 is sufficient and IPv6 can be disabled.
For the NSX integration it is important to select the desired Transport Zone, T1 router and path of the overlay segment for the management network. In addition it is required to explicitly enable the VPC Mode and select the Transport Zone which should be used for the data networks.

As a next step it is required to enter the vCenter credentials.

Besides the vCenter credentials it is mandatory to select a content library where AVI can store the OVA files of the service engines, which will be deployed automatically in vCenter on demand.

After the initial setup of the NSX Cloud it is required to explicitly onboard AVI in NSX by using the NSX API.
In earlier versions it was required to keep the default controller certificate of AVI in place until the onboarding process was completed.
Just after the successful onboarding the certificate could be changed. This limitation is no longer valid and the controller certificate can be changed independently from the onboarding workflow.
To complete this onboarding process, the following PUT request must be executed.
curl --location --request PUT 'https://10.0.1.62/policy/api/v1/infra/alb-onboarding-workflow' \
--header 'X-Allow-Overwrite: True' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46Vk13YXJlMSFWTXdhcmUxIQ==' \
--data '{
  "owned_by": "LCM",
  "cluster_ip": "10.0.1.64",
  "infra_admin_username": "admin",
  "infra_admin_password": "VMware1!VMware1!",
  "ntp_servers": [
    "de.pool.ntp.org"
  ],
  "dns_servers": [
    "10.0.1.2"
  ]
}'
After the onboarding process is successfully completed, the onboarding state can be validated by using the following GET request.
curl --location 'https://10.255.51.84/policy/api/v1/infra/sites/default/enforcement-points/alb-endpoint' \
--header 'X-Allow-Overwrite: True' \
--header 'Authorization: Basic YWRtaW46Vk13YXJlMSFWTXdhcmUxIQ==' \
--header 'Cookie: JSESSIONID=FDA0832265D393BB21C3A9AA5B1D457D'
This will generate output similar to what is shown below, where a valid certificate should be included. If no valid certificate is shown, this might be a hint that the onboarding process did not work as expected.
{
  "connection_info": {
    "username": "****",
    "tenant": "admin",
    "expires_at": "2026-02-11T02:00:06.178547+00:00",
    "managed_by": "LCM",
    "status": "DEACTIVATE_PROVIDER",
    "certificate": "-----BEGIN CERTIFICATE-----\nMIIDqTCCApGgAwIBAgIUXu/15z4GfaZvagy0+Ca0kECm/WkwDQYJKoZIhvcNAQEL\nBQAwYDELMAkGA1UEBhMCREUxDjAMBgNVBAcMBU1haW56MRUwEwYDVQQKDAxTRE4t\nVGVjaHRhbGsxEDAOBgNVBAsMB2hvbWVsYWIxGDAWBgNVBAMMD2F2aWMwOS5sYWIu\naG9tZTAeFw0yNTA3MjIxOTMzMzhaFw0yNzA3MjIxOTMzMzhaMGAxCzAJBgNVBAYT\nAkRFMQ4wDAYDVQQHDAVNYWluejEVMBMGA1UECgwMU0ROLVRlY2h0YWxrMRAwDgYD\nVQQLDAdob21lbGFiMRgwFgYDVQQDDA9hdmljMDkubGFiLmhvbWUwggEiMA0GCSqG\nSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCoEIVIJbKzG4k2REnbRaYzFtxq+IFOKXXp\nYg8O+5Z/KFoHAeqVU0c/TiWtnNrMrOoO9y9Vo8JkHHDAXvwCIijAsHD0fjNYWevW\nt4zf7HdjEv/YLxP4j5aAZe0GR6VlQwHCvPrH6Ke2P8C+0Tys4g0wM+Z1stkjWG11\niYy0u7BgWy+J9jTe1BouhVnHKNvXOzFpUwMRIu9MGJKCGLIncTf9R6Qi6tvCxDHW\nroqMPLpwrBl/yIS5JNOl9MWGEfBcFDMRMu6F67ZbUowbIqojRgEnonPGiuxhjYwF\n8yYpVNQqbOLTHTv9MMgdg1g8vecijhCW8DIjCaVDpwzmw9lbQXg7AgMBAAGjWzBZ\nMDgGA1UdEQQxMC+CEGF2aWMwOWEubGFiLmhvbWWCBmF2aWMwOYIHYXZpYzA5YYcE\nCgABQIcECgABQTAdBgNVHQ4EFgQUrCNFoRsPUtO97zybmb9JtrJVSRMwDQYJKoZI\nhvcNAQELBQADggEBACbHF50FBWNp+Sr4I0u2AjQKv8ln0GkvVxfzyl9xN0ZMqj85\nlZTQHPYdEcaQEOrBqAQT/26PgOn/Y61/Cnay6GHBaPRFmyEWWC0DTEYrCLhTyn9g\nulIZPVvS6269RjzWCwv9B7zHSWo7dRPTwcxIMIlu/uwN3NBM/lKPWdAuDwQ5pALY\n5RuUUKjPUqueYguC7ilvNBWjDJVU0eTo3GUHCmftOGf7TREFcN3rwXIQiwQG8XA5\n6qftDEKvpzEh/iTX4ktBeapUhESjJ5JkjVM7H/FA0iY16w1bMCwC6nTFj3L9WoD8\n4JRy/54OJ5nssosL+7USjhICNQVzlsiK/31zTiQ=\n-----END CERTIFICATE-----\n",
    "is_default_cert": true,
    "enforcement_point_address": "10.0.1.64",
    "resource_type": "AviConnectionInfo"
  },
  "auto_enforce": true,
  "resource_type": "EnforcementPoint",
  "id": "alb-endpoint",
  "display_name": "alb-endpoint",
  "path": "/infra/sites/default/enforcement-points/alb-endpoint",
  "relative_path": "alb-endpoint",
  "parent_path": "/infra/sites/default",
  "remote_path": "",
  "unique_id": "441ecc87-a916-49b8-a003-05e8be5c0eb3",
  "realization_id": "441ecc87-a916-49b8-a003-05e8be5c0eb3",
  "owner_id": "1b281e5c-ff6a-4f3e-a4c2-0b8fc2d6d339",
  "marked_for_delete": false,
  "overridden": false,
  "_create_time": 1770570270597,
  "_create_user": "admin",
  "_last_modified_time": 1770753606426,
  "_last_modified_user": "system",
  "_system_owned": false,
  "_protection": "NOT_PROTECTED",
  "_revision": 11
}
As a next step it is required to configure the desired service engine group for each tenant. For the VPC integration of AVI, VKS always uses the service engine group Default-Group and the deployed service engines are not shared between the tenants. Each tenant will deploy its own service engines.
For the service engine group the parameters for vCenter Server, Cluster and Datastore should be configured.
Based on the global tenant settings it would be expected that the service engines are shared between the tenants, but those settings are overwritten by the integrated tenant creation process of NSX with VPCs for AVI. This integration automatically creates a new AVI tenant as soon as a NSX Project is created.
The detailed tenant settings of AVI can be validated with the CLI command show tenant <tenant name>, which produces the following kind of output.
Important here is the parameter se_in_provider_context, which is set to False and prevents the service engines from being shared between multiple tenants.
[admin:10-0-1-65]: > show tenant nsx-tenant1
+--------------------------------+---------------------------------------------+
| Field | Value |
+--------------------------------+---------------------------------------------+
| uuid | tenant-da2850ef-33aa-4e3a-9d76-3ef9a31f6edb |
| name | nsx-tenant1 |
| local | False |
| config_settings | |
| tenant_vrf | True |
| se_in_provider_context | False |
| tenant_access_to_provider_se | False |
| created_by | CloudConnector |
| enforce_label_group | False |
| attrs[1] | |
| key | path |
| value | /orgs/default/projects/nsx-tenant1 |
+--------------------------------+---------------------------------------------+
[admin:10-0-1-65]: >
After the service engine groups are prepared, the last requirement is to prepare the management network settings globally in the admin tenant. Here it is recommended to assign the IPs by enabling DHCP, otherwise DHCP gets disabled on all other interfaces of the service engine. This would lead to functionality issues, since the IPs of the service network (the AVI data network) shown in the architecture drawing are assigned by DHCP. If DHCP for the management network is disabled, this state is inherited by all other service engine interfaces and prevents IP assignment for the service networks. The following screenshot shows the expected configuration of the management network.
Whether DHCP is enabled for the different service engine interfaces can be validated under the service engine settings as shown in the screenshot below. Also highlighted in the screenshot is the assigned IP for a service network and the corresponding VPC, which is mapped as a VRF within AVI.
VKS Deployment
The VKS deployment itself is unchanged and identical to what I already described in my previous blog post “VCF 9 - VKS with NSX VPCs”. As soon as the AVI integration with NSX is done, AVI is prepared and the onboarding workflow of AVI within NSX is successfully completed, the VKS deployment automatically selects AVI as LoadBalancer.
The detailed selection process can be validated in the vCenter log /var/log/vmware/wcp/wcpsvc.log.
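To follow that selection you can filter the log directly on the vCenter appliance. A small sketch; the exact log strings vary between builds, so treat the grep pattern as an assumption:

```shell
# Show the most recent WCP log lines mentioning AVI or the load balancer selection.
grep -iE 'avi|loadbalancer' /var/log/vmware/wcp/wcpsvc.log | tail -n 20
```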
After the deployment of VKS is completed, you should see that there are only Distributed LoadBalancer services in NSX and all other LoadBalancer Services should be visible in AVI.
The following screenshot shows the distributed LoadBalancer services in NSX.
All other LoadBalancer services are now created in AVI. For the Supervisor cluster the LoadBalancer services are created in the default admin tenant of AVI, since the deployment was done in the NSX default project.
VKS Guest Cluster deployment
Each VKS guest cluster must be deployed in a vSphere Namespace and this namespace can be mapped to a specific VPC, as already mentioned in my previous blog post “VCF 9 - VKS with NSX VPCs”.
I will not describe here how the creation of a vSphere Namespace for a custom VPC will be done, but you can read the process in the mentioned blog post.
The difference for the AVI integration is that the LoadBalancer services are not created in NSX. Instead they are created in the AVI tenant for the corresponding NSX project with the exact same name as the NSX project. In my example this is nsx-tenant1.
By default the creation of a VKS guest cluster creates exactly one AVI Virtual Service for the K8s API, as shown in the screenshot below.
AKO Integration for AVI with NSX VPC integration
For the implementation of K8s ingress services with AVI, it is required to deploy AKO in the specific VKS guest cluster.
In combination with AVI and NSX VPCs this is supported since AKO version 1.13.3 as shown in the corresponding “AKO 1.13.1 Release Notes”.
In my lab I used version 1.13.4.
For the AKO deployment it is recommended to create a dedicated namespace in the VKS guest cluster. Keep in mind you need to be logged in to the VKS guest cluster.
I created a namespace with the name avi-system.
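The namespace can be created with a single kubectl command:

```shell
# Create the dedicated namespace for the AKO deployment.
kubectl create namespace avi-system
```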
After the namespace is created, the permissions for the namespace must be adjusted to allow the deployment of the AKO pods. Since I am just creating a test installation in my lab, I used very generic security settings, which should be evaluated in more detail for production environments.
kubectl label --overwrite ns avi-system \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/warn=baseline \
pod-security.kubernetes.io/audit=baseline
As a next step it is required to download the values.yaml for the helm chart deployment of AKO.
The detailed description including the required URL for the helm chart is available under “Avi Kubernetes Operator Guide 1.13”.
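Assuming the chart lives in the same OCI registry as the AKO images used in this post, the default values.yaml can be pulled with helm as a starting point, for example:

```shell
# Fetch the default values of the AKO helm chart into a local values.yaml.
helm show values oci://projects.packages.broadcom.com/ako/helm-charts/ako --version 1.13.4 > values.yaml
```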
As an example for the values.yaml you can check my lab configuration.
# Default values for ako.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

### FeatureGates is to enable or disable experimental features.
featureGates:
  GatewayAPI: false # Enables/disables processing of Kubernetes Gateway API CRDs.
  EnablePrometheus: false # Enable/Disable prometheus scraping for AKO container
  EnableEndpointSlice: true # Enable/Disable endpoint slices in AKO (kubernetes version GA >= 1.21)

replicaCount: 1

image:
  repository: projects.packages.broadcom.com/ako/ako
  pullPolicy: IfNotPresent
  pullSecrets: [] # Setting this will add pull secrets to the statefulset for AKO. Required if using secure private container image registry for AKO image.
  #pullSecrets:
  #  - name: regcred

GatewayAPI:
  image:
    repository: projects.packages.broadcom.com/ako/ako-gateway-api
    pullPolicy: IfNotPresent

### This section outlines the generic AKO settings
AKOSettings:
  primaryInstance: true # Defines AKO instance is primary or not. Value `true` indicates that AKO instance is primary. In a multiple AKO deployment in a cluster, only one AKO instance should be primary. Default value: true.
  enableEvents: 'true' # Enables/disables Event broadcasting via AKO
  logLevel: WARN # enum: INFO|DEBUG|WARN|ERROR
  fullSyncFrequency: '1800' # This frequency controls how often AKO polls the Avi controller to update itself with cloud configurations.
  apiServerPort: 8080 # Internal port for AKO's API server for the liveness probe of the AKO pod default=8080
  deleteConfig: 'false' # Has to be set to true in configmap if user wants to delete AKO created objects from AVI
  disableStaticRouteSync: 'false' # If the POD networks are reachable from the Avi SE, set this knob to true.
  clusterName: tkc01 # A unique identifier for the kubernetes cluster, that helps distinguish the objects for this cluster in the avi controller. // MUST-EDIT
  cniPlugin: 'antrea' # Set the string if your CNI is calico or openshift or ovn-kubernetes. For Cilium CNI, set the string as cilium only when using Cluster Scope mode for IPAM and leave it empty if using Kubernetes Host Scope mode for IPAM. enum: calico|canal|flannel|openshift|antrea|ncp|ovn-kubernetes|cilium
  enableEVH: false # This enables the Enhanced Virtual Hosting Model in Avi Controller for the Virtual Services
  layer7Only: false # If this flag is switched on, then AKO will only do layer 7 loadbalancing.
  # NamespaceSelector contains label key and value used for namespacemigration
  # Same label has to be present on namespace/s which needs migration/sync to AKO
  namespaceSelector:
    labelKey: ''
    labelValue: ''
  servicesAPI: false # Flag that enables AKO in services API mode: https://kubernetes-sigs.github.io/service-apis/. Currently implemented only for L4. This flag uses the upstream GA APIs which are not backward compatible
  # with the advancedL4 APIs which uses a fork and a version of v1alpha1pre1
  vipPerNamespace: 'false' # Enabling this flag would tell AKO to create Parent VS per Namespace in EVH mode
  istioEnabled: false # This flag needs to be enabled when AKO is be to brought up in an Istio environment
  # This is the list of system namespaces from which AKO will not listen any Kubernetes or Openshift object event.
  blockedNamespaceList: []
  # blockedNamespaceList:
  #   - kube-system
  #   - kube-public
  ipFamily: '' # This flag can take values V4 or V6 (default V4). This is for the backend pools to use ipv6 or ipv4. For frontside VS, use v6cidr
  useDefaultSecretsOnly: 'false' # If this flag is set to true, AKO will only handle default secrets from the namespace where AKO is installed.
  # This flag is applicable only to Openshift clusters.
  vpcMode: true # VPCMode enables AKO to operate in VPC mode. This flag is only applicable to NSX-T.

### This section outlines the network settings for virtualservices.
NetworkSettings:
  ## This list of network and cidrs are used in pool placement network for vcenter cloud.
  ## Node Network details are not needed when static routes are disabled / for non vcenter clouds.
  ## Either networkName or networkUUID should be specified.
  ## If duplicate networks are present for the network name, networkUUID should be used for appropriate network.
  nodeNetworkList: []
  # nodeNetworkList:
  #   - networkName: "network-name"
  #     networkUUID: "net-4567"
  #     cidrs:
  #       - 10.0.0.1/24
  #       - 11.0.0.1/24
  enableRHI: false # This is a cluster wide setting for BGP peering.
  nsxtT1LR: '/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod' # Unique ID (note: not display name) of the T1 Logical Router for Service Engine connectivity. Only applies to NSX-T cloud.
  # nsxtT1LR: "/infra/tier-1s/avi-t1"
  bgpPeerLabels: [] # Select BGP peers using bgpPeerLabels, for selective VsVip advertisement.
  # bgpPeerLabels:
  #   - peer1
  #   - peer2
  # Network information of the VIP network. Multiple networks allowed only for AWS Cloud.
  # Either networkName or networkUUID should be specified.
  # If duplicate networks are present for the network name, networkUUID should be used for appropriate network.
  vipNetworkList: []
  # vipNetworkList:
  #   - networkName: net1
  #     networkUUID: net-1234
  #     cidr: 100.1.1.0/24
  #     v6cidr: 2002::1234:abcd:ffff:c0a8:101/64 # Setting this will enable the VS networks to use ipv6
  # The defaultDomain flag has two use cases.
  # For L4 VSes, if multiple sub-domains are configured in the cloud, this flag can be used to set the default sub-domain to use for the VS. This flag should be used instead of L4Settings.defaultDomain, as it will be deprecated in a future release.
  # If both NetworkSettings.defaultDomain and L4Settings.defaultDomain are set, then NetworkSettings.defaultDomain will be used.
  # For L7 VSes(created from OpenShift Routes), if spec.subdomain field is specified instead of spec.host field for an OpenShift route, then the default domain specified is appended to the spec.subdomain to form the FQDN for the VS.
  # The defaultDomain should be configured as a sub-domain in Avi cloud.
  defaultDomain: ''
  # defaultDomain: "avi.internal"

### This section outlines all the knobs used to control Layer 7 loadbalancing settings in AKO.
L7Settings:
  defaultIngController: 'true'
  noPGForSNI: false # Switching this knob to true, will get rid of poolgroups from SNI VSes. Do not use this flag, if you don't want http caching. This will be deprecated once the controller support caching on PGs.
  serviceType: NodePortLocal # enum NodePort|ClusterIP|NodePortLocal
  shardVSSize: SMALL # Use this to control the layer 7 VS numbers. This applies to both secure/insecure VSes but does not apply for passthrough. ENUMs: LARGE, MEDIUM, SMALL, DEDICATED
  passthroughShardSize: SMALL # Control the passthrough virtualservice numbers using this ENUM. ENUMs: LARGE, MEDIUM, SMALL
  enableMCI: 'false' # Enabling this flag would tell AKO to start processing multi-cluster ingress objects.
  fqdnReusePolicy: InterNamespaceAllowed # Use this to control whether AKO allows cross-namespace usage of FQDNs. enum Strict|InterNamespaceAllowed

### This section outlines all the knobs used to control Layer 4 loadbalancing settings in AKO.
L4Settings:
  defaultLBController: 'true'
  defaultDomain: '' # If multiple sub-domains are configured in the cloud, use this knob to set the default sub-domain to use for L4 VSes. This flag will be deprecated in a future release; use NetworkSettings.defaultDomain instead.
  # If both NetworkSettings.defaultDomain and L4Settings.defaultDomain are set, then NetworkSettings.defaultDomain will be used.
  autoFQDN: default # ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled.

### This section outlines settings on the Avi controller that affects AKO's functionality.
ControllerSettings:
  serviceEngineGroupName: Default-Group # Name of the ServiceEngine Group.
  controllerVersion: '' # The controller API version
  cloudName: nsx09 # The configured cloud name on the Avi controller.
  controllerHost: '10.0.1.64' # IP address or Hostname of Avi Controller
  tenantName: nsx-tenant1 # Name of the tenant where all the AKO objects will be created in AVI.
  vrfName: '' # Name of the VRFContext. All Avi objects will be under this VRF. Applicable only in Vcenter Cloud.

nodePortSelector: # Only applicable if serviceType is NodePort
  key: ''
  value: ''

resources:
  limits:
    cpu: 350m
    memory: 400Mi
  requests:
    cpu: 200m
    memory: 300Mi

securityContext: {}
podSecurityContext: {}

rbac:
  # Creates the pod security policy if set to true
  pspEnable: false

# If username and either password or authtoken are not specified, avi-secret will not be created. AKO will assume that the avi-secret already exists and will reference it. The Avi Controller credentials, including certificateAuthorityData, will be read from the existing avi-secret.
avicredentials:
  username: 'admin'
  password: 'VMware1!VMware1!'
  authtoken:
  certificateAuthorityData:

persistentVolumeClaim: ''
mountPath: /log
logFile: avi.log
akoGatewayLogFile: avi-gw.log
After making all required changes in the values.yaml you can start the deployment with the following command.
helm install --generate-name oci://projects.packages.broadcom.com/ako/helm-charts/ako --version 1.13.4 --namespace=avi-system -f ako/vcf09/values.yaml
The helm chart must be deployed while being logged in to the VKS guest cluster as mentioned earlier.
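To confirm the chart was installed, you can list the helm releases in the namespace:

```shell
# List helm releases in the AKO namespace; the ako release should show as deployed.
helm list -n avi-system
```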
After AKO is successfully deployed you should be able to see the corresponding K8s pod with the kubectl command below.
kubectl get pods -n avi-system
For more information on the deployment and integration status between AKO and AVI, you can check the logs with the following commands.
kubectl get pods -n avi-system
NAME READY STATUS RESTARTS AGE
ako-0 1/1 Running 2 (34h ago) 35h
kubectl logs -n avi-system ako-0
After AKO is running you can deploy your application in K8s and create an ingress to publish it; the syntax for this process is unchanged.
In my lab I deployed the hipster shop K8s deployment available on GitHub “Link to Hipstershop deployment”.
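As a sketch of that deployment (the namespace name and the manifest path are assumptions from my lab, check the Hipster Shop repository for the current manifests):

```shell
# Create a namespace for the demo application and apply its manifests.
kubectl create namespace hipster
kubectl apply -n hipster -f release/kubernetes-manifests.yaml
```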
For the ingress I used the following example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hipster-tls
  labels:
    app: hipster
  annotations:
    #ako.vmware.com/enable-tls: "true"
spec:
  tls:
  - hosts:
    - hipster.lab.home
    secretName: hipster-secret
  rules:
  - host: hipster.lab.home
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
After deploying the Ingress, two new Virtual Services are created in the corresponding AVI tenant. In my case it is under the tenant nsx-tenant1.
The assigned IP for the ingress can be validated by executing the following command against the K8s API.
kubectl get ingress -n hipster
NAME CLASS HOSTS ADDRESS PORTS AGE
hipster-tls avi-lb hipster.lab.home 10.100.56.4 80, 443 33h
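With the address from the output above, the ingress can be tested without touching DNS, for example (-k because my lab uses a self-signed certificate):

```shell
# Pin the ingress FQDN to the AVI-assigned VIP and fetch the frontend over TLS.
curl -k --resolve hipster.lab.home:443:10.100.56.4 https://hipster.lab.home/ -o /dev/null -w '%{http_code}\n'
```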
A more detailed description of how to deploy the test application hipster shop and the corresponding ingress is available in one of my older blog posts named “vSphere with Tanzu AKO integration for Ingress”.
Manual intervention to deploy private transit gateway backed virtual services
As a first step it is important to understand how AVI uses the NSX VPC integration to allocate IPs within the VPC for a specific network type.
AVI allocates an IP for virtual services by maintaining VSVIP objects. Those objects can get static or auto-allocated IP assignments.
The auto-allocated IP assignments can be managed by AVI's internal IPAM or by external IPAM solutions. For the VPC integration AVI uses the NSX managed IPAM solution.
Therefore the integration of AVI and NSX VPCs is capable of auto-allocating IPs of type private or public, but not of type private transit gateway.
If you understand this procedure, it should also be possible to find the corresponding API calls in the NSX log files, right? In the logs you will find the following API calls.
2026-02-11T20:31:18.998Z - "GET /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/ip-address-allocations/vsvip-e02095f8-1ef8-4369-a757-0b0eb2e37f9b HTTP/2.0" 404 320 18 18 +
2026-02-11T20:31:20.288Z - "PATCH /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/ip-address-allocations/vsvip-e02095f8-1ef8-4369-a757-0b0eb2e37f9b HTTP/2.0" 200 - 1250 1250 +
2026-02-11T20:31:20.332Z - "GET /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/ip-address-allocations/vsvip-e02095f8-1ef8-4369-a757-0b0eb2e37f9b HTTP/2.0" 200 1068 36 36 +
2026-02-11T20:32:14.745Z - "GET /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/static-routes/avicloudnsx09-vsvip-e02095f8-1ef8-4369-a757-0b0eb2e37f9b-1-32 HTTP/2.0" 404 330 12 11 +
2026-02-11T20:32:14.774Z - "GET /nsxapi/api/v1/search?query=resource_type:staticroutes%20AND%20parent_path:%22/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod%22%20AND%20network:%2210.100.56.10/32%22 HTTP/2.0" 200 43 19 18 +
2026-02-11T20:32:14.822Z - "PATCH /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/static-routes/avicloudnsx09-vsvip-e02095f8-1ef8-4369-a757-0b0eb2e37f9b-1-32 HTTP/2.0" 200 - 32 32 +
2026-02-11T20:32:14.867Z - "GET /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/static-routes/avicloudnsx09-vsvip-e02095f8-1ef8-4369-a757-0b0eb2e37f9b-1-32 HTTP/2.0" 200 1603 33 31 +
Based on those API calls you can see that AVI sends a PATCH request to allocate the IP via the NSX IPAM and names the IP assignment after the UUID of the VSVIP object created in AVI.
As soon as a corresponding virtual service in AVI comes up, a static route is mapped to the previously made IP allocation based on the exact name of the VSVIP UUID. After realizing this procedure, I made the IP assignment for a manually created VSVIP backed by a static IP of a private transit gateway type network.
Therefore I used the following API call in my example.
curl --location --request PATCH 'https://nsx09a.lab.home/policy/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/ip-address-allocations/vsvip-2e0b687f-47d0-4785-8b1f-9661d921696b' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46Vk13YXJlMSFWTXdhcmUxIQ==' \
--header 'Cookie: JSESSIONID=857D22342BECFB3E6EA3302DA30A53CD' \
--data '{
  "ip_address_block_visibility": "PRIVATE_TGW",
  "allocation_ip": "172.17.15.255"
}'
To get the VSVIP UUID, you can log in to the AVI controller and execute the following commands.
[admin:10-0-1-65]: > switchto tenant nsx-tenant1
Switching to tenant nsx-tenant1
[nsx-tenant1:10-0-1-65]: > show vsvip private-tgw-test01
+-----------------------------+-----------------------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------------------+
| uuid | vsvip-2e0b687f-47d0-4785-8b1f-9661d921696b |
| name | private-tgw-test01 |
| vip[1] | |
| vip_id | 1 |
| ip_address | 172.17.15.255 |
| enabled | True |
| subnet | 0.0.0.0/32 |
| auto_allocate_ip | False |
| auto_allocate_floating_ip | False |
| avi_allocated_vip | True |
| avi_allocated_fip | False |
| ipam_network_subnet | |
| subnet | 0.0.0.0/32 |
| auto_allocate_ip_type | V4_ONLY |
| prefix_length | 32 |
| vrf_context_ref | orgs:default:projects:nsx-tenant1:vpcs:vpc-tenant01-prod |
| east_west_placement | False |
| tier1_lr | /orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod |
| tenant_ref | nsx-tenant1 |
| cloud_ref | nsx09 |
+-----------------------------+-----------------------------------------------------------+
As a next step it is just required to use the VSVIP for a virtual service, and the AVI to NSX VPC integration sends the following API calls.
2026-02-11T20:13:03.807Z - "GET /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/static-routes/avicloudnsx09-vsvip-2e0b687f-47d0-4785-8b1f-9661d921696b-1-32 HTTP/2.0" 404 330 16 15 +
2026-02-11T20:13:03.834Z - "GET /nsxapi/api/v1/search?query=resource_type:staticroutes%20AND%20parent_path:%22/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod%22%20AND%20network:%22172.17.15.255/32%22 HTTP/2.0" 200 43 17 17 +
2026-02-11T20:13:04.013Z - "PATCH /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/static-routes/avicloudnsx09-vsvip-2e0b687f-47d0-4785-8b1f-9661d921696b-1-32 HTTP/2.0" 200 - 158 157 +
2026-02-11T20:13:04.091Z - "GET /nsxapi/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/static-routes/avicloudnsx09-vsvip-2e0b687f-47d0-4785-8b1f-9661d921696b-1-32 HTTP/2.0" 200 1603 70 69 +
As a result the static route is established at the VPC gateway. If the IP allocation has not been done beforehand, the API calls to set the static route fail!
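The realized routes can also be queried directly over the NSX policy API. A sketch, assuming the static-routes collection of the VPC can be listed the same way the single objects appear in the log lines above:

```shell
# List the static routes realized at the VPC gateway, including the AVI VIP routes.
curl -k -u admin 'https://nsx09a.lab.home/policy/api/v1/orgs/default/projects/nsx-tenant1/vpcs/vpc-tenant01-prod/static-routes'
```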
The following screenshot shows the created static route for the IP from type private transit gateway.

Summary
I’ve been testing the new integration between NSX VPCs and AVI for a while now, and it’s a significant enhancement. One of the biggest improvements is scalability. By reducing the number of virtual interfaces required on a Service Engine to implement vSphere Namespaces within VKS, the overall design becomes leaner and more efficient. In addition, the integration combines the multi-tenancy capabilities of both AVI and NSX, resulting in improved workload isolation. Tenant boundaries defined on the NSX side are now reflected more consistently within the load balancing layer, which simplifies design and operations. Another strong benefit is the simplified frontend IP address assignment for Virtual Services. By integrating AVI with NSX IPAM, IP management becomes more streamlined and operationally consistent. Of course, there are still challenges to address. For example, the DHCP dependency between the management and data interfaces of the Service Engine can introduce operational constraints. Also, the current limitations around selecting and customizing different VPC network types leave room for improvement. That said, this integration is clearly a step in the right direction. I’m looking forward to upcoming releases and seeing how these capabilities evolve and what additional enhancements will be introduced.







