If you’ve ever used OpenShift, you’re probably familiar with its feature-rich web console. It’s a central hub for managing workloads, projects, security policies, and more. While the console is easy to access in a typical cloud environment, the mechanics behind exposing it on bare metal are equally interesting. In this article, we’ll explore how OpenShift 4.x (including 4.16) serves and secures the console in a bare-metal setting.
1. The Basics: Console vs. API Server
In OpenShift 4.x, there are two main entry points for cluster interactions:
- API server: Runs on port `6443`, usually exposed by external load balancers or keepalived/HAProxy in bare-metal environments.
- Web console: Typically accessed at port `443` via an OpenShift “route,” backed by the cluster’s router/ingress infrastructure.
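If you’re already logged in with `oc`, you can print both endpoints directly; a quick check:

```
# Print the API server URL (the endpoint on port 6443)
oc whoami --show-server

# Print the web console URL (served through the router on port 443)
oc whoami --show-console
```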
The API server uses a special out-of-band mechanism (static pods on master nodes). By contrast, the console takes a path much more familiar to standard Kubernetes applications: it’s served by a Deployment, a Service, and ultimately a Route object in the `openshift-console` namespace. Let’s focus on that Route-based exposure.
2. How the Console Is Deployed
Console Operator
The console itself is managed by the OpenShift Console Operator, which:
- Deploys the console pods into the `openshift-console` namespace.
- Ensures they remain healthy and up-to-date.
- Creates the relevant Kubernetes resources (Deployment, Service, and Route) that expose the console to external users.
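If you want to see these pieces on a live cluster, the operator’s status and the resources it creates are easy to list (resource names as created on a typical 4.x cluster):

```
# Check the console cluster operator's health (Available/Progressing/Degraded)
oc get clusteroperator console

# List the Deployment, Service, and Route managed by the operator
oc get deployment,service,route -n openshift-console
```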
Where the Pods Run
By default, the console pods run on worker nodes (though in some topologies, you might have dedicated infrastructure nodes). The important point is that these pods are scheduled like normal Kubernetes workloads.
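You can confirm the placement yourself; the wide output includes the node each pod landed on:

```
# Show the console pods and the nodes they were scheduled to
oc get pods -n openshift-console -o wide
```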
3. How the Console Is Exposed
The OpenShift Router (Ingress Controller)
OpenShift comes with a built-in Ingress Controller—often referred to as the “router.” It’s usually an HAProxy-based router deployed on worker (or infra) nodes. By default, it will listen on:
- HTTP port `80`
- HTTPS port `443`
When you create a Route, the router matches the host name in the incoming request and forwards traffic to the corresponding service. In the console’s case, that route is typically named `console` in the `openshift-console` namespace.
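To see the router side of this, look in the `openshift-ingress` namespace. On most bare-metal installs the router pods bind ports 80 and 443 directly on the nodes they run on via host networking, which you can confirm from the IngressController status:

```
# Router pods and the nodes whose ports 80/443 they bind
oc get pods -n openshift-ingress -o wide

# How the default ingress controller publishes its endpoints
# (typically HostNetwork on bare metal)
oc get ingresscontroller default -n openshift-ingress-operator \
  -o jsonpath='{.status.endpointPublishingStrategy.type}{"\n"}'
```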
Typical Hostname
During installation, OpenShift configures the default “apps” domain. For instance:
console-openshift-console.apps.<cluster-domain>
So when you browse to, say, `https://console-openshift-console.apps.mycluster.example.com`, your request hits the router, which looks for the matching route and then forwards you to the console service.
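The wildcard domain itself is recorded in the cluster Ingress configuration, so you can check which `apps` domain your routes will use:

```
# Print the base domain used for default *.apps route host names
oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}{"\n"}'
```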
The Route Object
OpenShift 4.x uses the Route resource to direct external traffic to an internal service. You can find the console route by running:
oc get route console -n openshift-console
You’ll usually see something like:
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
console console-openshift-console.apps.mycluster.example.com console https reencrypt/Redirect None
- Service: The route points to the `console` service in the `openshift-console` namespace.
- TLS Termination: The console route uses reencrypt termination, so the router terminates the client’s TLS connection and re-encrypts traffic to the console pods.
- Host: The domain you’ll use to access the console externally.
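For the full details, including the TLS settings, you can dump the route as YAML. Trimmed to its spec, it typically looks roughly like this (exact fields can vary by release):

```
# oc get route console -n openshift-console -o yaml   (spec only, trimmed)
spec:
  host: console-openshift-console.apps.mycluster.example.com
  to:
    kind: Service
    name: console
  port:
    targetPort: https
  tls:
    termination: reencrypt
    insecureEdgeTerminationPolicy: Redirect
```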
4. Traffic Flow on Bare Metal
External Access
On bare metal, you typically have one of the following configurations:
- Direct Node Access: If each worker node has a publicly (or at least internally routable) IP, you create a wildcard DNS record (or direct DNS records) that point to those node IPs (or to a load balancer fronting them).
- External Load Balancer: You can place an external L4 or L7 load balancer in front of the worker nodes’ port 443, distributing traffic across the router pods. This mirrors the cloud load-balancer pattern but uses an on-prem solution (F5, NetScaler, etc.); a minimal HAProxy sketch follows below.
Either way, the router pods listen on port 443 on the nodes where they run (on bare metal, the ingress controller typically publishes its endpoints via host networking). By default, the Ingress Operator ensures that all router pods serve a common DNS domain like `*.apps.<cluster-domain>`. This means that any Route you create automatically becomes externally accessible, assuming your DNS points to the router nodes’ IPs or the load balancer VIP.
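As a concrete illustration of the external load balancer option, here is a minimal HAProxy sketch that passes TCP 443 straight through to the router nodes (host names and IPs are placeholders; TLS is still terminated by the OpenShift router, not by this LB):

```
# Minimal L4 pass-through for *.apps traffic; adjust server names/IPs to your environment
frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-nodes

backend ingress-https-nodes
    mode tcp
    balance source
    server worker0 192.168.10.20:443 check
    server worker1 192.168.10.21:443 check
    server worker2 192.168.10.22:443 check
```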
TLS Certificates
By default, the console route has a certificate created and managed by the cluster. You can optionally configure a custom TLS certificate for the router if you want to serve the console (and all other routes) with your own wildcard certificate.
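If you do bring your own wildcard certificate, the usual pattern is to store it as a TLS secret in `openshift-ingress` and point the default IngressController at it. A sketch (the secret name and file paths are placeholders):

```
# Create a TLS secret from your wildcard certificate and key
oc create secret tls custom-apps-cert \
  --cert=wildcard-apps.crt --key=wildcard-apps.key \
  -n openshift-ingress

# Tell the default ingress controller (router) to serve routes with that certificate
oc patch ingresscontroller default -n openshift-ingress-operator \
  --type=merge -p '{"spec":{"defaultCertificate":{"name":"custom-apps-cert"}}}'
```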
5. Customizing the Console Domain or Certificate
You might want to customize how users access the console—maybe you don’t like the default subdomain or you want to serve it at a corporate domain. There are a few ways:
- Change the `apps` domain: During installation, you can specify a custom base domain, which determines the default `*.apps.<cluster-domain>` wildcard.
- Edit the Console Route: You can change the route’s host name, but you must ensure DNS for that host name points to your router’s public IP (a sketch follows after this list).
- Configure a Custom Cert: If you have a wildcard certificate for `mycompany.com`, you can apply it at the router level, so the console route and all other routes are served with the same certificate.
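On recent 4.x releases, one way to change the console host name (and optionally its certificate) is the `componentRoutes` stanza of the cluster Ingress config. A sketch, assuming DNS for the new name already points at your router and a TLS secret named `custom-console-cert` exists in `openshift-config`:

```
# Excerpt of ingresses.config.openshift.io/cluster
# (edit with: oc edit ingress.config.openshift.io cluster)
spec:
  componentRoutes:
    - name: console
      namespace: openshift-console
      hostname: console.internal.mycompany.com
      servingCertKeyPairSecret:
        name: custom-console-cert   # assumed TLS secret in openshift-config
```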
6. Scaling and Availability
Since the console runs as a standard Deployment, you can scale it up (e.g., set `replicas: 3`) if you anticipate heavy usage, though keep in mind that the Console Operator manages the Deployment and may reconcile manual changes. The router itself is typically deployed on multiple nodes for high availability—ensuring that even if one node goes down, the router remains functional, and your console remains accessible. A sketch for scaling the router follows below.
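For the router, the replica count lives on the IngressController resource rather than on the deployment itself; for example, to run three router pods:

```
# Run three replicas of the default router for higher availability
oc patch ingresscontroller default -n openshift-ingress-operator \
  --type=merge -p '{"spec":{"replicas":3}}'
```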
7. How This Differs From the API Server
One point of confusion is that both the API server and the console run in the cluster—so why is the API server not also behind a Route?
- API Server: Runs as static pods with `hostNetwork: true` on each master node, typically exposed on port `6443`. It’s not a normal deployment and doesn’t rely on the cluster’s router. Instead, it usually sits behind a separate load balancer (external or keepalived/HAProxy).
- Console: A normal deployment plus a Route, served by the ingress router on port `443`.
So while the console takes advantage of standard Kubernetes networking patterns, the API server intentionally bypasses them for isolation, reliability, and the ability to run even if cluster networking is partially down.
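You can see the contrast directly on a running cluster: the API server shows up as static pods pinned to the control-plane (master) nodes, while the console is an ordinary Deployment:

```
# API server static pods (one kube-apiserver pod per master node; helper pods also appear here)
oc get pods -n openshift-kube-apiserver -o wide

# Console: a regular Deployment scheduled like any other workload
oc get deployment console -n openshift-console
```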
8. Frequently Asked Questions
Q: Can I use MetalLB to expose the console on a LoadBalancer-type service?
A: You technically could set up a LoadBalancer service if you had MetalLB. However, the standard approach in OpenShift is to rely on the built-in router for console traffic. The console route is automatically configured, and the router takes care of HTTPS termination and routing.
Q: Do I need a separate load balancer for the console traffic?
A: If your bare-metal nodes themselves are routable (for example, each worker node has a valid IP and your DNS points `console-openshift-console.apps.mycluster.example.com` to those nodes), then you may not need an additional LB. However, some organizations prefer to place a load balancer in front of all worker nodes for consistency, health checks, and easier SSL management.
Q: How do I get a custom domain to work with the console?
A: You can edit the route’s hostname or specify a custom domain in your Ingress configuration. Then, point DNS for that new domain (e.g. `console.internal.mycompany.com`) to the external IP(s) of your router or your load balancer. Make sure TLS certificates match if you’re providing your own certificate.
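A quick way to sanity-check the DNS side is to resolve the new name and compare it with the host the cluster expects (`console.internal.mycompany.com` is the example name from above):

```
# Confirm the custom host name resolves to your router nodes or LB VIP
dig +short console.internal.mycompany.com

# Check the host name currently set on the console route
oc get route console -n openshift-console -o jsonpath='{.spec.host}{"\n"}'
```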
Conclusion
In OpenShift 4.x, the web console is exposed via a standard Kubernetes Route and served by the built-in router on port 443. The Console Operator takes care of deploying and managing the console pods, while the Ingress Operator ensures a default router is up and running. On bare metal, the key to making the console accessible is to ensure your DNS points at the router’s external interface—whether that’s a dedicated IP on each worker node or an external load balancer VIP.
By understanding these mechanics, you can customize the console domain, certificate, and scaling strategy to best fit your environment. And once your console is online, you’ll have the full power of the OpenShift UI at your fingertips—no matter where your cluster happens to be running!