Deploy Consul on Red Hat OpenShift
Red Hat OpenShift is a distribution of the Kubernetes platform that provides a number of usability and security enhancements.
In this tutorial you will:
- Deploy an OpenShift cluster
- Deploy a Consul datacenter
- Access the Consul UI
- Use the Consul CLI to inspect your environment
- Decommission the OpenShift environment
Security Warning
This tutorial is not for production use. The chart was installed with an insecure configuration of Consul. Refer to the Secure Consul and Registered Services on Kubernetes tutorial to learn how you can secure Consul on Kubernetes in production.
Prerequisites
To complete this tutorial you will need:
- Access to a Kubernetes cluster deployed with OpenShift
- A text editor
- Basic command line access
- A Red Hat account
- The Consul CLI
- CodeReady Containers v2.33.0-4.14.12+
- Helm v3.6+ or consul-k8s v1.4.2+
Deploy OpenShift
OpenShift can be deployed on multiple platforms, and there are several installation options available for either production or development environments. This tutorial requires a running OpenShift cluster to deploy Consul on Kubernetes. If you already have an OpenShift cluster provisioned in a production or development environment, skip ahead to Deploy Consul. This tutorial uses CodeReady Containers (CRC) to provide a pre-configured development OpenShift environment on your local machine. CRC is bundled as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10. CRC is the quickest way to get started building OpenShift clusters: it is designed to run on a local computer, simplifying setup and emulating a cloud development environment locally with all of the tools needed to develop container-based apps. While this tutorial uses CRC, the Consul Helm deployment process works on any OpenShift cluster and is production ready.
If you prefer not to deploy CRC, a managed OpenShift cluster can be provisioned in less than an hour using Azure Red Hat OpenShift. Azure Red Hat OpenShift requires an Azure subscription, but it provides the simplest installation flow for a production-ready OpenShift cluster that you can use for this tutorial.
CRC Setup
After installing CodeReady Containers, issue the following command to set up your environment.
$ crc setup
INFO Using bundle path /Users/hashicorp/.crc/cache/crc_vfkit_4.11.3_arm64.crcbundle
INFO Checking if running as non-root
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Creating symlink for crc executable
INFO Checking if running emulated on a M1 CPU
INFO Checking if vfkit is installed
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
INFO Checking if /Users/hashicorp/.crc/cache/crc_vfkit_4.11.3_arm64.crcbundle exists
INFO Checking if old launchd config for tray and/or daemon exists
INFO Checking if crc daemon plist file is present and loaded
INFO Adding crc daemon plist file and loading it
Your system is correctly setup for using CRC. Use 'crc start' to start the instance
CRC start
Once the setup is complete, you can start the CRC service with the following command. The command will perform a few system checks to ensure your system meets the minimum requirements and will then ask you to provide an image pull secret. You should have your Red Hat account open so that you can easily copy your image pull secret when prompted.
$ crc start
INFO Checking if running as non-root
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Checking if running emulated on a M1 CPU
INFO Checking if vfkit is installed
INFO Checking if old launchd config for tray and/or daemon exists
INFO Checking if crc daemon plist file is present and loaded
INFO Loading bundle: crc_vfkit_4.11.3_arm64...
CRC requires a pull secret to download content from Red Hat.
You can copy it from the Pull Secret section of https://console.redhat.com/openshift/create/local.
? Please enter the pull secret
Next, paste the image pull secret into the terminal and press enter.
Example output:
INFO Creating CRC VM for openshift 4.11.3...
INFO Generating new SSH key pair...
INFO Generating new password for the kubeadmin user...
...TRUNCATED...
INFO Adding crc-admin and crc-developer contexts to kubeconfig...
Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: <redacted>

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443
Notice that the output instructs you to configure your oc-env, and also includes a login command and secret password. The secret is specific to your installation. Make note of this command, as you will use it to log in to CRC on your development host later.
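If you need to retrieve these credentials again later, CRC can print them on demand. As a convenience, the following command should work with recent CRC releases:

$ crc console --credentials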
Configure CRC environment
Next, configure the environment as instructed by CRC using the following command.
$ eval $(crc oc-env)
Log in to the OpenShift cluster
Next, use the login command you made note of before to authenticate with the OpenShift cluster.
Note: You will have to replace the secret password below with the value output by CRC.
$ oc login -u kubeadmin -p <redacted> https://api.crc.testing:6443
Login successful.

You have access to 66 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".
Verify configuration
Validate that your CRC setup was successful with the following command.
$ kubectl cluster-info
Kubernetes control plane is running at https://api.crc.testing:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Create a new project
First, create an OpenShift project for the Consul installation. Creating an OpenShift project also creates the corresponding Kubernetes namespace where the Consul resources will be deployed.
$ oc new-project consul
Now using project "consul" on server "https://api.crc.testing:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname
Create an image pull secret for a Red Hat Registry service account
Before OpenShift can authenticate to the Red Hat Registry and pull images from it, you must create an image pull secret. First, create a registry service account on the Red Hat Customer Portal, then apply the OpenShift secret associated with that registry service account as shown below:
$ kubectl create -f openshift-secret.yml --namespace=consul
secret/15490118-openshift-secret-secret created
In the Helm chart values file, update the imagePullSecrets stanza with the name of the secret created by the previous command.
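For example, if your secret was named 15490118-openshift-secret-secret as in the sample output above, the relevant portion of the values file would look like the following. Substitute the name of the secret created in your own cluster.

global:
  imagePullSecrets:
    - name: 15490118-openshift-secret-secret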
Deploy Consul
Helm chart configuration
To customize your deployment, you can pass a YAML configuration file to be used during the deployment.
Any values specified in the values file will override the Helm chart's default settings.
The following example file sets the global.openshift.enabled entry to true, which is required to operate Consul on OpenShift. Generate a file named values.yaml that you will reference in the helm install command later.
values.yaml
global:
  name: consul
  datacenter: dc1
  image: registry.connect.redhat.com/hashicorp/consul:1.18.2-ubi
  imageK8S: registry.connect.redhat.com/hashicorp/consul-k8s-control-plane:1.4.2-ubi
  imageConsulDataplane: registry.connect.redhat.com/hashicorp/consul-dataplane:1.4.2-ubi
  imagePullSecrets:
    - name: <Insert image pull secret name for RedHat Registry Service Account>
  openshift:
    enabled: true
server:
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
ui:
  enabled: true
connectInject:
  enabled: true
  default: true
  cni:
    enabled: true
    logLevel: info
    multus: true
    cniBinDir: /var/lib/cni/bin
    cniNetDir: /etc/kubernetes/cni/net.d
Install Consul
Helm chart preparation
Consul on Kubernetes provides a Helm chart to deploy a Consul datacenter on Kubernetes in a highly customizable configuration. Review the docs on Helm chart configuration to learn more about the available options.
Verify chart version
To ensure you have version 1.4.2 of the Helm chart, search your local repo.
$ helm search repo hashicorp/consul
NAME              CHART VERSION  APP VERSION  DESCRIPTION
hashicorp/consul  1.4.2          1.18.2       Official HashiCorp Consul Chart
If the correct version is not displayed in the output, try updating your Helm repo.
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "hashicorp" chart repository
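If the hashicorp repository is not present on your machine at all, add it before searching again.

$ helm repo add hashicorp https://helm.releases.hashicorp.com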
Import images from the Red Hat Catalog
Instead of pulling images directly from the Red Hat Registry, the Consul and Consul on Kubernetes images can also be pre-loaded into the internal OpenShift registry using the oc import-image command. Read more about importing images into the internal OpenShift registry in the Red Hat OpenShift cookbook.
$ oc import-image hashicorp/consul:1.18.2-ubi --from=registry.connect.redhat.com/hashicorp/consul:1.18.2-ubi --confirm
$ oc import-image hashicorp/consul-k8s-control-plane:1.4.2-ubi --from=registry.connect.redhat.com/hashicorp/consul-k8s-control-plane:1.4.2-ubi --confirm
$ oc import-image hashicorp/consul-dataplane:1.4.2-ubi --from=registry.connect.redhat.com/hashicorp/consul-dataplane:1.4.2-ubi --confirm
Install Consul in your cluster
You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI.
Now, issue the helm install command. The following command specifies that the installation should:
- Use the custom values file you created earlier
- Use the hashicorp/consul chart you downloaded in the last step
- Set your Consul installation name to consul
- Create Consul resources in the consul namespace
- Use consul-helm chart version 1.4.2
$ helm install consul hashicorp/consul --values values.yaml --create-namespace --namespace consul --version "1.4.2" --wait
The output will be similar to the following.
NAME: consul
LAST DEPLOYED: Wed Sep 28 11:00:16 2022
NAMESPACE: consul
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!

Your release is named consul.

To learn more about the release, run:

  $ helm status consul
  $ helm get all consul

Consul on Kubernetes Documentation:
https://www.consul.io/docs/platform/k8s

Consul on Kubernetes CLI Reference:
https://www.consul.io/docs/k8s/k8s-cli
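Alternatively, if you prefer the Consul K8s CLI mentioned above over Helm, an equivalent installation can be performed with a command similar to the following, assuming consul-k8s v1.4.2 is installed locally.

$ consul-k8s install -config-file=values.yaml -namespace consul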
Verify installation
Use kubectl get pods to verify your installation.
$ watch kubectl get pods --namespace consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-cni-45fgb                               1/1     Running   0          3m
consul-connect-injector-574799b944-n6jf6       1/1     Running   0          3m
consul-connect-injector-574799b944-xvksv       1/1     Running   0          3m
consul-server-0                                1/1     Running   0          3m
consul-webhook-cert-manager-74467cdd8d-88m6j   1/1     Running   0          3m
Once all pods have a status of Running, enter CTRL-C to stop the watch.
Accessing the Consul UI
Now that Consul has been deployed, you can access the Consul UI to verify that the Consul installation was successful, and that the environment is healthy.
Expose the UI service to the host
Since the application is running on your local development host, you can expose the Consul UI to the development host using kubectl port-forward. The UI and the HTTP API server run on the consul-server-0 pod. Issue the following command to expose the server endpoint at port 8500 to your local development host.
$ kubectl port-forward consul-server-0 --namespace consul 8500:8500
Forwarding from 127.0.0.1:8500 -> 8500
Forwarding from [::1]:8500 -> 8500
Open http://localhost:8500 in a new browser tab to view the Consul UI.
Accessing Consul with the CLI and HTTP API
To access Consul with the CLI, set the CONSUL_HTTP_ADDR environment variable on the development host so that the Consul CLI knows which Consul server to interact with.
$ export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
You should now be able to issue the consul members command to view all available Consul datacenter members.
$ consul members
Node                Address            Status  Type    Build   Protocol  DC   Partition  Segment
consul-server-0     10.217.0.106:8301  alive   server  1.18.2  2         dc1  default    <all>
crc-dzk9v-master-0  10.217.0.104:8301  alive   client  1.18.2  2         dc1  default    <default>
You can use the same URL to make HTTP API requests with your custom code.
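For example, a quick way to confirm that the HTTP API is reachable is to query the catalog with curl. The /v1/catalog/services endpoint lists the services registered in the datacenter.

$ curl http://127.0.0.1:8500/v1/catalog/services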
Deploy example services
Create a 'demo' project
To simulate an active environment, you will deploy a client and an upstream backend service. First, create a new project to deploy the client and server to:
$ oc new-project demo
Now using project "demo" on server "https://api.crc.testing:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname
Create security context constraints for the application sidecars
The consul-dataplane sidecar injected into each application pod runs with user ID 100, which is not allowed by default in OpenShift. Run the following commands to allow workloads in the target namespace to run with this user ID.
First, export the target namespace as an environment variable.
$ export TARGET_NAMESPACE=demo
Grant the service accounts in the target namespace access to the anyuid security context constraint (SCC).
$ oc adm policy add-scc-to-group anyuid system:serviceaccounts:$TARGET_NAMESPACE
When removing your application, remove the permissions as follows.
$ oc adm policy remove-scc-from-group anyuid system:serviceaccounts:$TARGET_NAMESPACE
Create a network attachment definition
By default, OpenShift uses Multus for managed CNI, and thus requires a NetworkAttachmentDefinition in the application namespace to invoke the consul-cni plugin. Read about the Network Attachment Definition Custom Resource for more details. Issue the following command to create a file named networkattachmentdefinition.yaml that will be used to create a Network Attachment Definition in the demo namespace:
Note: If your OpenShift cluster has network isolation enabled, a Network Attachment Definition will be needed in each application namespace. If network isolation is disabled, it is possible to use the Network Attachment Definition created in the namespace where Consul is installed.
$ cat > networkattachmentdefinition.yaml <<EOF
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: consul-cni
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "consul-cni",
      "cni_bin_dir": "/var/lib/cni/bin",
      "cni_net_dir": "/etc/kubernetes/cni/net.d",
      "kubeconfig": "ZZZ-consul-cni-kubeconfig",
      "log_level": "info",
      "multus": true,
      "name": "consul-cni",
      "type": "consul-cni"
    }'
EOF
Next, deploy the NetworkAttachmentDefinition.
$ oc create -f networkattachmentdefinition.yaml -n demo
networkattachmentdefinition.k8s.cni.cncf.io/consul-cni created
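Optionally, confirm that the definition exists in the demo namespace before deploying the example services.

$ oc get network-attachment-definitions -n demo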
When removing your application, remove the NetworkAttachmentDefinition as follows.
$ oc delete network-attachment-definition consul-cni -n demo
Deploy the server service
Issue the following command to create a file named server.yaml that will be used to create an HTTP echo server on Kubernetes:
$ cat > server.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: static-server
spec:
  selector:
    app: static-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      name: static-server
      labels:
        app: static-server
      annotations:
        'k8s.v1.cni.cncf.io/networks': '[{ "name":"consul-cni" }]'
    spec:
      containers:
        - name: static-server
          image: hashicorp/http-echo:latest
          args:
            - -text="hello world"
            - -listen=:8080
          ports:
            - containerPort: 8080
              name: http
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: static-server
EOF
Next, deploy the sample backend service.
$ kubectl apply -f server.yaml -n demo
serviceaccount/static-server created
service/static-server created
deployment.apps/static-server created
Deploy the client service
Next, create a file named client.yaml that defines the sample client service.
$ cat > client.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        'k8s.v1.cni.cncf.io/networks': '[{ "name":"consul-cni" }]'
    spec:
      containers:
        - name: static-client
          image: curlimages/curl:latest
          # Just spin & wait forever, we'll use `kubectl exec` to demo
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: static-client
EOF
Next, deploy the sample client.
$ kubectl apply -f client.yaml -n demo
serviceaccount/static-client created
service/static-client created
deployment.apps/static-client created
Finally, ensure all pods and containers have a status of Running before proceeding to the next section.
$ watch kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
static-client-755f485c45-dzg47   2/2     Running   0          14m
static-server-6d5fb5f5d5-cz7sz   2/2     Running   0          14m
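As an optional check, you can verify that the client can reach the upstream service through the mesh. The following command assumes the default transparent proxy behavior, in which the server is addressable by its Kubernetes service name; the expected response is "hello world".

$ kubectl exec deploy/static-client -n demo -c static-client -- curl -s http://static-server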
Decommission the environment
Now that you have completed the tutorial, you should decommission the CRC environment.
Enter CTRL-C in the terminal to stop the port forwarding process.
Stop CRC
First, stop the running cluster.
$ crc stop
Example output:
INFO Stopping the OpenShift cluster, this may take a few minutes...
Stopped the OpenShift cluster
Delete CRC
Next, issue the following command to delete the cluster.
$ crc delete
The CRC CLI will ask you to confirm that you want to delete the cluster.
Example prompt:
Do you want to delete the OpenShift cluster? [y/N]:
Enter y to confirm.
Example output:
Deleted the OpenShift cluster
Next steps
In this tutorial you created a Red Hat OpenShift cluster and installed Consul on it.
Specifically, you:
- Deployed an OpenShift cluster
- Deployed a Consul datacenter
- Accessed the Consul UI
- Used the Consul CLI to inspect your environment
- Decommissioned the environment
It is highly recommended that you properly secure your Kubernetes cluster and that you understand and enable the recommended security features of Consul. Refer to the Secure Consul and Registered Services on Kubernetes tutorial to learn how you can deploy an example workload, and secure Consul on Kubernetes for production.
For more information on the Consul Helm chart configuration options, review the consul-helm chart documentation.