VMware Telco Cloud Automation Airgap Server Deployment
VMware's own Telco Cloud Platform solution is ready to host modern 5G telecommunications workloads while giving end users freedom of choice when it comes to deploying their CNF / VNF workloads. More information can be found here: https://telco.vmware.com/products/telco-cloud-platform.html
The Telco Cloud Automation (TCA) solution is the automation layer that allows users to onboard their own network functions onto VMware's Tanzu Kubernetes platform VMs.
Most TCA systems will be installed within an airgapped environment (i.e. an environment with no access to the outside internet). An airgap server repository should be deployed before setting up TCA. As quoted from the official VMware documentation: "The airgap server repository is used to hold the container images for the VMware Telco Cloud Automation Containers as a Service (CaaS) system and the packages for Kubernetes cluster node customization".
Airgap server components:
Nginx daemon – Dispatches the file requests for fetching resources from the local datastore or the Harbor server. It also provides a single HTTPS registry and Photon OS repository service to the local VMware Telco Cloud Automation system.
Harbor – Holds the required container images for the VMware Telco Cloud Automation system to run. Harbor is an open source project that provides container image registry service. It maintains all the dependent container images that are pulled from the Internet and serves the local Kubernetes cluster container image pulling process.
Reposync – A tool to synchronize the Photon OS packages from the Internet.
BOM Files – Describe the container images that the VMware Telco Cloud Automation system depends on (its bill of materials).
Scripts – Help to set up the internal components of the airgap server. These scripts start the services, load the BOM files, pull images from public registries, and publish them to the local Harbor repository on the airgap server.
There are two airgap deployment options: No Internet Deployment & Restricted Internet Deployment. We will be covering the Restricted Internet Deployment option.
1.) Deploy Photon OS 3 VM
a.) Download the OVA package from here.
b.) From the vCenter UI, right click the compute cluster and then Deploy OVF Template
c.) Select the photon-hw13_uefi-3.0-a383732.ova OS image and click NEXT
d.) Enter the VM name and select the appropriate folder, click NEXT
e.) Accept license agreement, click NEXT
f.) Select an appropriate datastore and Management Network, click NEXT, then click FINISH
g.) Right click the VM and edit its settings: change the Hard Disk 1 size to 200GB, select Add New Device, add a new hard disk, and set its size to 200GB. Change CPUs to 4 and memory to 8GB. Remove the CD/DVD drive.
h.) Click OK. Power on the VM.
i.) Open the VMRC console and log in with username root and the default password changeme. Set a new password when prompted
j.) To create a network configuration file that systemd-networkd uses to establish a static IP address for the eth0 network interface, execute the following commands as root:
cat > /etc/systemd/network/10-static-en.network << "EOF"
[Match]
Name=eth0

[Network]
Address=YOUR_VM_IP/24
Gateway=YOUR_GATEWAY_IP
DNS=YOUR_DNS_IP
EOF
Replace YOUR_VM_IP, YOUR_GATEWAY_IP, and YOUR_DNS_IP with your environment's values, and adjust the /24 prefix length to match your subnet. The [Match] and [Network] sections shown are the minimal settings systemd-networkd needs for a static address.
Change permissions by running: chmod 644 /etc/systemd/network/10-static-en.network
Apply the configuration by running the following command: systemctl restart systemd-networkd
SSH to the VM and verify connectivity
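Once connected, the .network file written in the previous step can be sanity-checked. A minimal helper sketch (hypothetical, not part of Photon OS) that confirms the file declares the three settings a static configuration needs:

```shell
# network_file_complete: succeeds if the given systemd-networkd .network
# file declares Address, Gateway, and DNS entries.
network_file_complete() {
    grep -q '^Address=' "$1" && grep -q '^Gateway=' "$1" && grep -q '^DNS=' "$1"
}

# Usage on the VM:
# network_file_complete /etc/systemd/network/10-static-en.network && echo "config looks complete"
```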
k.) Change the VM hostname: vi /etc/hostname. Delete the existing entry and add the new name tca-airgap01. Then reboot and verify that the hostname has been updated
l.) Change network proxy with vi /etc/sysconfig/proxy, add proxy values, exit the editor
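As a sketch, the proxy file uses a simple KEY="value" format. The variable names below follow the convention this file typically uses (PROXY_ENABLED, HTTP_PROXY, HTTPS_PROXY, NO_PROXY); confirm them against your Photon build, and replace YOUR_PROXY_IP with your proxy's address:

```
PROXY_ENABLED="yes"
HTTP_PROXY="http://YOUR_PROXY_IP:3128"
HTTPS_PROXY="http://YOUR_PROXY_IP:3128"
NO_PROXY="localhost, 127.0.0.1"
```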
m.) Test connectivity towards the internet with: curl --proxy "http://YOUR_VM_IP:3128" https://google.com
##### Snapshot the VM before proceeding to the next steps #####
2.) Configure Airgap VM
a.) The Photon OS repositories require about 50 GB of disk space; considering future expansion, allocate about 100 GB for them. The Harbor repository requires about 15 GB of disk space and Docker requires about 30 GB. In total, add a 200 GB disk to the virtual machine. Then perform the following steps:
b.) Update the Photon OS packages and install Ansible
root@airgap01 [ ~ ]#tdnf update
root@airgap01 [ ~ ]#tdnf install ansible.noarch -y
c.) Configure proxy connectivity with airgap bundle
# airgap/scripts/bin/setup-proxy.sh http://YOUR_VM_IP:3128 YOUR_VM_FQDN
d.) Set the airgap server up as a template and then deploy a customized airgap server using this template. The setup YAML files are available at airgap/scripts/vars/. The airgap/scripts/vars/user-inputs.yml file contains user-defined variables that specify the parameters for setting up the airgap server. Two examples are provided in the vars folder for this purpose. Use the setup-user-inputs.yml as a template and add your variables:
# root@airgap01 [ ~/airgap/scripts/vars ]# ls
# deploy-user-inputs.yml setup-user-inputs.yml
# root@airgap01 [ ~/airgap/scripts/vars ]# cp setup-user-inputs.yml user-inputs.yml
# root@airgap01 [ ~/airgap/scripts/vars ]# vi user-inputs.yml
e.) Run the setup.yml Ansible Playbook
# root@airgap01 [ ~/airgap ]# ansible-playbook scripts/setup.yml > ansible.log 2>&1 &
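Because the playbook runs in the background with its output redirected to ansible.log, progress has to be checked from the log. A small helper sketch (our own, not part of the airgap bundle) that classifies the run from the PLAY RECAP summary Ansible prints when it finishes:

```shell
# check_playbook_log: reports whether the Ansible run in the given log file
# is still running, finished cleanly, or finished with failed tasks.
check_playbook_log() {
    if ! grep -q 'PLAY RECAP' "$1"; then
        echo "still running"
    elif grep 'failed=' "$1" | grep -qv 'failed=0'; then
        echo "failed"
    else
        echo "ok"
    fi
}

# Usage: check_playbook_log ansible.log
```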
f.) Verify that the airgap server is functioning properly by running the following health checks:
g.) Ensure that you have web access to Harbor: in Chrome, go to https://YOUR_VM_IP/harbor/projects and log in with admin / Harbor12345 (change this default password after deployment).
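Beyond the UI check, Harbor's v2 REST API exposes a health endpoint at /api/v2.0/health that can be scripted against. A minimal sketch (the helper name is ours; YOUR_VM_IP is a placeholder as above):

```shell
# harbor_healthy: succeeds if the health JSON reports overall status "healthy".
harbor_healthy() {
    echo "$1" | grep -q '"status":[[:space:]]*"healthy"'
}

# Live usage from a machine that can reach the airgap server:
# harbor_healthy "$(curl -ks https://YOUR_VM_IP/api/v2.0/health)" && echo "Harbor OK"
```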
h.) Log in to the airgap server, navigate to /etc/docker/certs.d/ (cd /etc/docker/certs.d/), locate the .cert file, and copy the certificate contents to a text editor
i.) Navigate to the TCA UI and go to Partner Systems > Register. Enter the name and FQDN of the airgap server, then paste the certificate copied from earlier into the CA Certificate section
This concludes how to configure an airgap server for TCA and how to add it to the system. The next step is to configure the TCA VIM (Virtual Infrastructure Manager) infrastructure, which will be covered in future posts.