William B

Calico Networking with Tanzu Community Edition Part 1

Updated: Jan 18, 2022

When you are first starting out with Kubernetes, networking is probably one of the most difficult concepts to master. There are many choices and decisions to be made, and understanding the pros and cons of each can be bewildering.


Pods and containers need to be able to communicate with each other inside the cluster as well as with the outside world. Kubernetes does a good job of obscuring many of these complexities by making things more "public cloud" like, i.e. by hiding many of the details from the admin so that you can focus more on your apps and less on the platform itself.


But modern applications demand enterprise-grade, advanced networking and security services. To manage networking at scale and offer granular control, a standard was introduced: the Container Network Interface (CNI).


From the official CNCF documentation:


"CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins."


There are a number of CNI plugins available, each with its own features and use cases.
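To make the idea concrete, here is a minimal sketch of a CNI network configuration of the kind a container runtime reads from /etc/cni/net.d on each node. The network name, plugin type and subnet below are illustrative values for the generic bridge plugin, not something Calico or TCE generates verbatim:

{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}

Every CNI plugin, Calico included, implements this same specification, which is what makes them interchangeable at cluster creation time.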


In 2020, ITNEXT published an article detailing the most commonly used CNIs, with performance ratings and feature descriptions for each.

CNIs provide a degree of networking standardization across the ecosystem.


In this article I want to explain my understanding of the Calico plugin available from Tigera, and show an example of what it looks like when deployed in a TCE (with vSphere) environment.


Lab Setup


I am currently running TCE on a vSphere 6.7U3 lab environment with vSAN and NSX-T networking. Calico will work with almost any underlying networking infrastructure, as long as there is network reachability between the nodes.


1.) Install Calico Package


Add the TCE package repository:

tanzu package repository add tce-repo \
  --url projects.registry.vmware.com/tce/main:0.9.1 \
  --namespace tanzu-package-repo-global
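If you want to confirm the repository has reconciled before continuing, the tanzu CLI can list repositories and their status (a quick sanity check, assuming the same namespace as above):

tanzu package repository list --namespace tanzu-package-repo-global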

Use the following command to install the Calico package:

tanzu package install calico-pkg \
  --package-name calico.tanzu.vmware.com \
  --namespace tkg-system \
  --version 3.11.3+vmware.1-tkg.1 \
  --wait=false

Verify:

tanzu package available get calico.tanzu.vmware.com/3.11.3+vmware.1-tkg.1 --namespace tkg-system
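You can also check the reconciliation status of the installed package itself:

tanzu package installed list --namespace tkg-system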


2.) Create cluster configuration file


Create a workload cluster configuration file, and save it in the following directory. This is the default cluster config file location for TCE:

~/.config/tanzu/tkg/clusterconfigs

An example config file is shown below:

AVI_CA_DATA_B64: ""
AVI_CLOUD_NAME: ""
AVI_CONTROL_PLANE_HA_PROVIDER: ""
AVI_CONTROLLER: ""
AVI_DATA_NETWORK: ""
AVI_DATA_NETWORK_CIDR: ""
AVI_ENABLE: "false"
AVI_LABELS: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: ""
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: ""
AVI_PASSWORD: ""
AVI_SERVICE_ENGINE_GROUP: ""
AVI_USERNAME: ""
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: tce-prod-01
CLUSTER_PLAN: prod
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
OS_ARCH: amd64
OS_NAME: ubuntu
OS_VERSION: "20.04"
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
VSPHERE_CONTROL_PLANE_ENDPOINT: 
VSPHERE_CONTROL_PLANE_MEM_MIB: "4096"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: 
VSPHERE_DATASTORE: 
VSPHERE_FOLDER: 
VSPHERE_NETWORK:
VSPHERE_PASSWORD: 
VSPHERE_RESOURCE_POOL: 
VSPHERE_SERVER: 
VSPHERE_SSH_AUTHORIZED_KEY: 
VSPHERE_TLS_THUMBPRINT: 
VSPHERE_USERNAME: 
VSPHERE_WORKER_DISK_GIB: "20"
VSPHERE_WORKER_MEM_MIB: "4096"
VSPHERE_WORKER_NUM_CPUS: "2"
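A quick note on the two CIDR values: CLUSTER_CIDR is the pod network that the CNI (Calico, in our case) will allocate pod IPs from, while SERVICE_CIDR is the range used for ClusterIP services. The two ranges must not overlap with each other or with your node network.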

3.) Deploy cluster with Calico CNI:


A CNI cannot be replaced once a cluster is deployed, because it provides the essential networking services for the cluster. Antrea is the default CNI when another is not specified.


The CNI must be selected when creating the cluster.

tanzu cluster create --file tce-prod-test.yaml --cni=calico
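As an alternative to the --cni flag, the CNI can also be selected in the cluster configuration file itself by adding a CNI variable (valid values are antrea, calico and none):

CNI: calico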

Once the cluster has been created, use the following commands to retrieve its kubeconfig and set the kubectl context:

tanzu cluster kubeconfig get tce-prod-test --admin
kubectl config use-context tce-prod-test-admin@tce-prod-test
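At this point it is worth confirming that the Calico components came up. Assuming the standard labels used by the Calico package, the calico-node DaemonSet pods live in the kube-system namespace:

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes -o wide

All nodes should report Ready once calico-node is running on them.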

4.) Install calicoctl


Most calicoctl commands require access to the Calico datastore. By default, calicoctl will attempt to read from the Kubernetes API based on the default kubeconfig. This allows the Calico control plane to have visibility into the Kubernetes environment.


There are two ways of running calicoctl: installing it as a standalone binary or installing it as a kubectl plugin. I have chosen the kubectl plugin version. The only caveat is that you use "kubectl calico" instead of the calicoctl syntax.


Note: If the location of calicoctl is not already in your PATH, move the file to one that is or add its location to your PATH. This will allow you to invoke it without having to prepend its location.


Change to a directory that is already on your PATH (here /usr/local/bin):

cd /usr/local/bin

Download calicoctl:

curl -L "https://github.com/projectcalico/calicoctl/releases/download/v3.11.3/calicoctl" -o kubectl-calico

Make the file executable:

chmod +x kubectl-calico

Verify install:

kubectl calico -h

5.) Locate your kubeconfig file


Run the following command to locate your kubeconfig file:

[[ ! -z "$KUBECONFIG" ]] && echo "$KUBECONFIG" || echo "$HOME/.kube/config" 

This will return the kubeconfig location.


6.) Create configuration file


Specify the kubeconfig file location and datastore type:

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/ice/.kube/config"

Save the file as /etc/calico/calicoctl.cfg, the default location the tool checks for its configuration, and exit.


7.) Export configurations

export DATASTORE_TYPE=kubernetes
export KUBECONFIG=~/.kube/config
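These environment variables provide the same datastore connection details as the configuration file from step 6, so either approach works on its own; the exports are handy when working from a fresh shell.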

8.) Run test command


Run a kubectl calico command to test:

kubectl calico ipam show

You can now run any calicoctl subcommands through kubectl calico.
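For example, a couple of other useful read-only queries against standard calicoctl resources, shown here as a sketch:

kubectl calico get ippools -o wide
kubectl calico get nodes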



