How to Install Portworx Enterprise on TCE
Updated: Feb 22
I mentioned Portworx in previous posts. In short, Portworx is an enterprise-grade storage solution for Kubernetes that can run on bare-metal, virtualized, and public cloud platforms.
Some of the features that Portworx provides:
- Container-optimized volumes with elastic scaling for no application downtime
- High availability across nodes/racks/AZs so you can fail over in seconds
- Multi-writer shared volumes across multiple containers
- Storage-aware class-of-service (COS) and application-aware I/O tuning
Portworx is certified to run on Tanzu. And I wanted to test to see if we could get it to run specifically with TCE.
Portworx Enterprise comes with a free trial license that can be used for proof-of-concept environments.
I am using vSphere 6.7 U3 with vSAN storage in my lab environment.
1.) Deploy Workload Cluster
Install the test environment on its own dedicated cluster. Create a workload cluster, and in the cluster config file (located in ~/.config/tanzu/tkg/clusterconfigs) set the control plane and worker node disk size to 150 GB and add 2 additional vCPUs:
VSPHERE_CONTROL_PLANE_DISK_GIB: "150"
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "4"
VSPHERE_WORKER_DISK_GIB: "150"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "4"
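With those values in place, the cluster can be created with the tanzu CLI. A minimal sketch, assuming a cluster named px-cluster1 and a matching config file name (both are placeholders, adjust to your environment):

```shell
# Create the workload cluster from the edited config file
# (px-cluster1 and the config file name are example values)
tanzu cluster create px-cluster1 --file ~/.config/tanzu/tkg/clusterconfigs/px-cluster1.yaml

# Retrieve the admin kubeconfig and switch to the new cluster's context
tanzu cluster kubeconfig get px-cluster1 --admin
kubectl config use-context px-cluster1-admin@px-cluster1
```

These commands run against a live vSphere/TCE environment, so treat them as an illustrative CLI fragment rather than something to paste verbatim.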
2.) Generate Workload Spec
Navigate to Portworx Central and sign up for an account:
Select Portworx Enterprise and hit next:
Select Use Portworx Operator, Portworx version 2.9, and the Built-in ETCD mode:
Select Cloud = VMware Tanzu, and enter default for the KVDB and storage class:
I am using the default storage class in my example here. Portworx will create different storage classes that you can use later on. You can view the detailed storage class output by running:
kubectl get sc default -o yaml
Leave the default network settings and hit Next:
Under the Customize section, select None and leave the rest as default, then select Next:
The spec is now generated:
3.) Apply Workload Spec
Install the Portworx Operator and the workload spec by copy-pasting the generated kubectl commands into the Tanzu jump server. Ensure that you are in the correct context for the cluster. You can monitor the deployment with watch kubectl get po -A:
You will see several objects created in the output.
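The commands generated by Portworx Central follow this general pattern. The query strings below are illustrative placeholders; the spec generator produces the exact URLs for your cluster, so always copy them from the Portworx Central page:

```shell
# Install the Portworx Operator (the URL and its parameters are generated
# by Portworx Central per spec; these values are placeholders)
kubectl apply -f 'https://install.portworx.com/2.9?comp=pxoperator'

# Apply the generated StorageCluster spec (again, use the exact URL from
# Portworx Central for your environment)
kubectl apply -f 'https://install.portworx.com/2.9?operator=true&mc=false&kbver=...'
```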
4.) Inspect Deployment
When the workload spec is applied, Portworx will create several containers and will also provision a new Portworx cluster across the TCE nodes. Note: this process might take a few minutes to complete.
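One way to watch for readiness from the jump server, assuming the operator's default install namespace and labels (kube-system and name=portworx; adjust if your generated spec differs):

```shell
# Watch the Portworx pods come up (the operator installs into kube-system by default)
kubectl -n kube-system get pods -l name=portworx -o wide

# The operator also reports overall health through the StorageCluster resource
kubectl -n kube-system get storagecluster
```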
SSH into one of the worker nodes to view the cluster details. You can use the following commands to view the cluster node status and interact with the PX control plane:
sudo /opt/pwx/bin/pxctl cluster --help
sudo pxctl status
sudo pxctl --help
sudo pxctl cluster list
Run sudo pxctl status to view the details of the PX system:
From the output above, we can see that we now have 450GB of capacity in the Portworx cluster:
Portworx essentially builds a virtual volume across the TCE nodes and presents it as a storage pool that your containers can consume through a storage class, using persistent volume claims.
When running lsblk on one of the TCE nodes, we can see that some additional mount points have been added:
5.) Inspect Portworx Storage Classes
The Portworx installation creates 8 new storage classes that can be used by the containers you create. These manifests can be copied and modified to meet your requirements:
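To list them and export one as a starting point for your own class (px-db below is an example name; the exact class names vary by Portworx version, so confirm with the first command):

```shell
# List all storage classes, including the ones Portworx created
kubectl get sc

# Export one of the Portworx-provided classes as a template to modify
# (replace px-db with a name from the previous command's output)
kubectl get sc px-db -o yaml > my-portworx-sc.yaml
```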
6.) Create Portworx Storage Class & Test Pod
We can test our Portworx cluster by creating a storage class, a persistent volume claim, and a test pod.
Create storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
Create persistent volume claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc001
  annotations:
    volume.beta.kubernetes.io/storage-class: portworx-sc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Create test pod:
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/test-webserver
      volumeMounts:
        - name: test-volume
          mountPath: /test-portworx-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: pvcsc001
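Assuming the three manifests above were saved as separate files (the file names here are placeholders), they can be applied and verified like this:

```shell
# Apply the storage class, the PVC, and the test pod
kubectl apply -f portworx-sc.yaml
kubectl apply -f pvcsc001.yaml
kubectl apply -f pvpod.yaml

# Verify that the claim was bound and the pod is running
kubectl get pvc pvcsc001
kubectl get pod pvpod

# On a worker node, the backing Portworx volume should also be visible
sudo pxctl volume list
```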