AKO runs as a POD inside the Kubernetes cluster.
To run AKO you need the following pre-requisites:
- Step 1: Configure an Avi Controller with a vCenter cloud. The Avi Controller version should be 18.2.10 / 20.1.2 or later.
- Step 2:
  - Make sure a PG network is part of the NS IPAM configured in the vCenter cloud.
- Step 3: If your POD CIDRs are not routable:
  - Create a Service Engine Group dedicated to the Kubernetes cluster.
- Step 3.1: If your POD CIDRs are routable then you can skip step 2. Ensure that you skip static route syncing in this case using the `disableStaticRouteSync` flag in the `values.yaml` of your helm chart.
- Step 4: Kubernetes 1.16+.
- Step 5: `helm` cli pointing to your Kubernetes cluster. NOTE: We only support `helm 3`.

The data path flow is as described below; the markers in the drawing correspond to the following steps:
- The client requests a specified hostname/path.
- The DNS VS returns an IP address corresponding to the hostname.
- The request is forwarded to the resolved IP address, which corresponds to a Virtual IP hosted on an Avi Service Engine. The destination IP in the packet is set as the POD IP address on which the application runs.
- Service Engines use the static route information to reach the POD IP via the next-hop address of the host on which the pod is running.
- The pod responds and the response is sent back to the client.

For instructions on installing AKO using helm please use this link.
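As a sketch of the flags mentioned above, a minimal `values.yaml` override for the helm install might look like the following. Field paths follow the AKO helm chart layout but may differ across chart versions, and the host/name values are illustrative, so verify them against your chart before use:

```yaml
# Sketch of a helm values override for AKO.
# Field paths are based on the AKO chart layout and may vary by version;
# clusterName and controllerHost below are hypothetical placeholders.
AKOSettings:
  clusterName: my-cluster            # name identifying this Kubernetes cluster
  disableStaticRouteSync: "false"    # set to "true" when POD CIDRs are routable (Step 3.1)
ControllerSettings:
  controllerVersion: "20.1.2"        # Avi Controller version (Step 1)
  controllerHost: "10.10.10.10"      # hypothetical Avi Controller IP/hostname
```

This would then be passed to helm 3, e.g. `helm install ako/ako -f values.yaml --namespace avi-system` (repository alias and namespace are illustrative).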
Read more about AKO CRDs
AKO can be used in an OpenShift cluster to configure Routes and Services of type LoadBalancer. For details about how to use AKO in an OpenShift cluster and features specific to OpenShift, refer here.
Starting with release 1.5.1, AKO supports the NSX-T write access cloud for both NCP and non-NCP CNIs. In the case of the NCP CNI, the pods are assumed to be routable from the SE's backend data network segments. Because of this, AKO disables static route configuration when the CNI is specified as `ncp` in the `values.yaml`. However, if a non-NCP CNI is used, AKO assumes that static routes can be configured on the SEs to reach the pod networks. For this scenario to be valid, the SEs' backend data network must be configured on the same logical segment on which the Kubernetes/OpenShift cluster runs.
In addition to this, AKO supports both overlay and VLAN backed NSX-T cloud configurations. AKO automatically detects whether a cloud is configured with overlay segments or with VLAN networks. A VLAN backed NSX-T setup behaves the same as a vCenter write access cloud, requiring no inputs from the user. However, overlay based NSX-T setups require the user to configure a logical segment as the backend data network and to provide the corresponding T1 router's information during AKO bootup via a helm values parameter.
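For the overlay NSX-T case described above, the relevant `values.yaml` fields might look roughly like the following. The field names are based on the AKO helm chart and the T1 path is a hypothetical placeholder, so check both against your chart version:

```yaml
# Sketch for an NSX-T overlay cloud with the NCP CNI.
# Field names follow the AKO helm chart but may vary by release.
AKOSettings:
  cniPlugin: "ncp"                          # with ncp, AKO disables static route configuration
NetworkSettings:
  nsxtT1LR: "/infra/tier-1s/t1-example"     # hypothetical T1 router path for the backend data network
```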
Service of type `NodePort` can be used to send traffic to the pods. This feature supports Ingresses/Routes attached to a Service of type `NodePort`. Service of type `LoadBalancer` is also supported, since Kubernetes populates a `NodePort` for it by default. AKO functions either in `NodePort` mode or in `ClusterIP` mode.

A new parameter `serviceType` has been introduced as a config option in AKO's `values.yaml`. To use this feature, set the value of the parameter to `NodePort`.
| Parameter | Description | Default |
|---|---|---|
| `configs.serviceType` | Type of Service to be used as backend for Routes/Ingresses | `ClusterIP` |
| `nodeSelectorLabels.key` | Key used as a label based selection for the nodes in `NodePort` mode. | empty |
| `nodeSelectorLabels.value` | Value used as a label based selection for the nodes in `NodePort` mode. | empty |
Kubernetes populates `NodePort` by default for a Service of type `LoadBalancer`. If `configs.serviceType` is set to `NodePort`, AKO uses the `NodePort` as the backend for Services of type `LoadBalancer` instead of using Endpoints, which is the default behaviour when `configs.serviceType` is set to `ClusterIP`.
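Using the parameters from the table above, a `values.yaml` fragment enabling NodePort mode might look like this sketch (the node label key/value are illustrative, not required values):

```yaml
# Sketch: enable NodePort mode in AKO's values.yaml.
configs:
  serviceType: NodePort                     # use NodePort instead of ClusterIP as the backend
nodeSelectorLabels:
  key: "node-role.kubernetes.io/worker"     # hypothetical label key selecting backend nodes
  value: ""                                 # hypothetical label value; empty matches label presence
```

When both `key` and `value` are left empty, no label-based node selection applies and all nodes are eligible backends.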
Please refer to this page for details on support for ClusterIP mode for GCP and Azure IaaS cloud in Avi Controller.
Please refer to this page for support in AKO to map each Kubernetes/OpenShift cluster uniquely to a tenant in Avi.
Please refer to this page for details on how AKO supports and implements networking/v1 Ingress and IngressClass.
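For illustration, a networking/v1 `IngressClass` pointing at AKO might look like the manifest below; `ako.vmware.com/avi-lb` is the controller string commonly used by AKO, but verify it and the class name against the linked page:

```yaml
# Sketch of an IngressClass handled by AKO (networking/v1).
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: avi-lb                    # hypothetical class name
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # optional: make this the default class
spec:
  controller: ako.vmware.com/avi-lb
```

Ingresses then reference it via `spec.ingressClassName: avi-lb`.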
Please refer to this page for details on how AKO interprets the Kubernetes objects and translates them to Avi objects.
Please refer to this page for details on how to migrate workloads from cloud connector based Avi controller to AKO based Avi controller.
AKO version 1.14.1 supports the following Kubernetes, OpenShift, and Avi Controller versions:
| Orchestrator / Controller | Versions Supported |
|---|---|
| Kubernetes | 1.27 - 1.32 |
| OpenShift | 4.14 - 4.17 |
| Avi Controller | 22.1.3 - 31.1.1 |
For some frequently asked questions, refer here.