Understanding How the OpenShift Console Is Exposed on Bare Metal

If you’ve ever used OpenShift, you’re probably familiar with its feature-rich web console. It’s a central hub for managing workloads, projects, security policies, and more. While the console is easy to access in a typical cloud environment, the mechanics behind exposing it on bare metal are equally interesting. In this article, we’ll explore how OpenShift 4.x (including 4.16) serves and secures the console in a bare-metal setting.

1. The Basics: Console vs. API Server

In OpenShift 4.x, there are two main entry points for cluster interactions:

  1. API server: Runs on port 6443, usually exposed by external load balancers or keepalived/HAProxy in bare-metal environments.
  2. Web console: Typically accessed at port 443 via an OpenShift “route,” backed by the cluster’s router/ingress infrastructure.

The API server uses a special out-of-band mechanism (static pods on master nodes). By contrast, the console takes a path much more familiar to standard Kubernetes applications: it’s served by a deployment, a service, and ultimately a Route object in the openshift-console namespace. Let’s focus on that Route-based exposure.
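
If you want to see those pieces for yourself, a quick read-only check with the oc CLI lists what the operator creates in that namespace (resource names below reflect a default installation):

oc get deployment,service,route -n openshift-console

You should see a console Deployment and Service plus the console route (and usually a downloads route alongside it).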

2. How the Console Is Deployed

Console Operator

The console itself is managed by the OpenShift Console Operator, which:

  • Deploys the console pods into the openshift-console namespace.
  • Ensures they remain healthy and up-to-date.
  • Creates the relevant Kubernetes resources (Deployment, Service, and Route) that expose the console to external users.

Where the Pods Run

By default, the console pods run on worker nodes (though in some topologies, you might have dedicated infrastructure nodes). The important point is that these pods are scheduled like normal Kubernetes workloads.
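
A simple way to confirm where they landed is to list the pods with their node assignments; the NODE column shows which worker (or infra) nodes are serving the console:

oc get pods -n openshift-console -o wide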

3. How the Console Is Exposed

The OpenShift Router (Ingress Controller)

OpenShift comes with a built-in Ingress Controller—often referred to as the “router.” It’s usually an HAProxy-based router deployed on worker (or infra) nodes. By default, it will listen on:

  • HTTP port 80
  • HTTPS port 443

When you create a Route, the router matches the host name in the incoming request and forwards traffic to the corresponding service. In the console’s case, that route is typically named console in the openshift-console namespace.

Typical Hostname
During installation, OpenShift configures the default “apps” domain. For instance:

console-openshift-console.apps.<cluster-domain>

So when you browse to, say, https://console-openshift-console.apps.mycluster.example.com, your request hits the router, which looks for the matching route and then forwards you to the console service.
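
You can trace that same path from a shell; the hostname below is hypothetical, so substitute your own cluster domain:

# The name should resolve to the router nodes' IPs or your load balancer VIP:
dig +short console-openshift-console.apps.mycluster.example.com

# Fetch only the response headers; -k skips certificate verification, which is
# handy while you are still on the default router certificate:
curl -kI https://console-openshift-console.apps.mycluster.example.com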

The Route Object

OpenShift 4.x uses the Route resource to direct external traffic to an internal service. You can find the console route by running:

oc get route console -n openshift-console

You’ll usually see something like:

NAME      HOST/PORT                                               PATH   SERVICES   PORT    TERMINATION          WILDCARD
console   console-openshift-console.apps.mycluster.example.com          console    https   reencrypt/Redirect   None

  • Service: The route points to the console service in the openshift-console namespace.
  • Termination: By default the console route uses reencrypt termination, so the router terminates the external TLS connection and re-encrypts traffic to the console pods, which present their own serving certificate (see the trimmed route spec after this list).
  • Host: The domain you’ll use to access the console externally.
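
For reference, here is a trimmed sketch of what that Route object typically looks like on a default installation (the hostname is hypothetical, and exact fields can vary slightly between 4.x releases):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: console
  namespace: openshift-console
spec:
  host: console-openshift-console.apps.mycluster.example.com
  to:
    kind: Service
    name: console
  port:
    targetPort: https
  tls:
    termination: reencrypt
    insecureEdgeTerminationPolicy: Redirect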

4. Traffic Flow on Bare Metal

External Access

On bare metal, you typically have one of the following configurations:

  1. Direct Node Access: If each worker node has a publicly routable (or at least internally routable) IP, you create a wildcard DNS record (or individual DNS records) that points to those node IPs (or to a load balancer fronting them).
  2. External Load Balancer: You can place an external L4 or L7 load balancer in front of the worker nodes’ port 443, distributing traffic across the router pods. This mirrors the cloud load-balancer pattern but uses an on-prem solution (F5, NetScaler, etc.).

Either way, the router pods listen on ports 80 and 443 on the nodes where they run (on bare metal, the default Ingress Controller typically publishes its endpoints via host networking rather than a cloud LoadBalancer service). The Ingress Operator also ensures that all routes share a common DNS domain like *.apps.<cluster-domain>. This means that any Route you create automatically becomes externally accessible, assuming your DNS points to the router nodes’ IPs or the load balancer VIP.
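
In DNS terms, that usually boils down to a single wildcard record; here is a BIND-style sketch with TEST-NET addresses standing in for your router nodes or VIP:

; Zone file fragment for mycluster.example.com (hypothetical IPs)
*.apps.mycluster.example.com.   IN  A   192.0.2.20   ; router node or LB VIP

; Without a wildcard, you would instead add one record per exposed route, e.g.:
console-openshift-console.apps.mycluster.example.com.   IN  A   192.0.2.20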

TLS Certificates

By default, the console route has a certificate created and managed by the cluster. You can optionally configure a custom TLS certificate for the router if you want to serve the console (and all other routes) with your own wildcard certificate.
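
If you do bring your own wildcard certificate, the usual pattern is to load it as a TLS secret and point the default IngressController at it; a sketch, with placeholder file and secret names:

# Store the wildcard cert/key in the openshift-ingress namespace:
oc create secret tls custom-apps-default \
  --cert=wildcard-apps.crt \
  --key=wildcard-apps.key \
  -n openshift-ingress

# Tell the default IngressController to serve it; the router pods roll out and
# *.apps routes (the console included) pick up the new certificate:
oc patch ingresscontroller default -n openshift-ingress-operator \
  --type=merge \
  -p '{"spec":{"defaultCertificate":{"name":"custom-apps-default"}}}'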

5. Customizing the Console Domain or Certificate

You might want to customize how users access the console—maybe you don’t like the default subdomain or you want to serve it at a corporate domain. There are a couple of ways:

  1. Change the apps domain: During installation, you can specify a custom domain.
  2. Change the Console Route hostname: On recent 4.x releases the supported way is to declare the new hostname under componentRoutes in the cluster Ingress configuration (a sketch follows this list) rather than editing the Route object directly, and you must ensure DNS for that host name points to your router’s public IP.
  3. Configure a Custom Cert: If you have a wildcard certificate for mycompany.com, you can apply it at the router level, so the console route and all other routes share the same certificate authority.
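
Here is a sketch of that componentRoutes approach (hostname and secret name are hypothetical; the optional serving-cert secret lives in the openshift-config namespace):

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
    - name: console
      namespace: openshift-console
      hostname: console.internal.mycompany.com
      servingCertKeyPairSecret:
        name: console-custom-tls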

6. Scaling and Availability

Since the console runs as a standard Deployment, it can run multiple replicas to absorb heavy usage; keep in mind that the Console Operator owns that Deployment (on highly available clusters it runs more than one replica by default), so ad-hoc replica changes may simply be reconciled back. The router itself is typically deployed on multiple nodes for high availability, ensuring that even if one node goes down, the router remains functional, and your console remains accessible.

7. How This Differs From the API Server

One point of confusion is that both the API server and the console run in the cluster—so why is the API server not also behind a Route?

  • API Server: Runs as static pods with hostNetwork: true on each master node, typically exposed on port 6443. It’s not a normal deployment and doesn’t rely on the cluster’s router. Instead, it usually sits behind a separate load balancer (external or keepalived/HAProxy).
  • Console: A normal deployment plus a Route, served by the ingress router on port 443.

So while the console takes advantage of standard Kubernetes networking patterns, the API server intentionally bypasses them for isolation, reliability, and the ability to run even if cluster networking is partially down.

8. Frequently Asked Questions

Q: Can I use MetalLB to expose the console on a LoadBalancer-type service?
A: You technically could set up a LoadBalancer service if you had MetalLB. However, the standard approach in OpenShift is to rely on the built-in router for console traffic. The console route is automatically configured, and the router takes care of HTTPS termination and routing.

Q: Do I need a separate load balancer for the console traffic?
A: If your bare-metal nodes themselves are routable (for example, each worker node has a valid IP and your DNS points console-openshift-console.apps.mycluster.example.com to those nodes), then you may not need an additional LB. However, some organizations prefer to place a load balancer in front of all worker nodes for consistency, health checks, and easier SSL management.

Q: How do I get a custom domain to work with the console?
A: You can edit the route’s hostname or specify a custom domain in your Ingress configuration. Then, point DNS for that new domain (e.g. console.internal.mycompany.com) to the external IP(s) of your router or your load balancer. Make sure TLS certificates match if you’re providing your own certificate.

Conclusion

In OpenShift 4.x, the web console is exposed via a standard Kubernetes Route and served by the built-in router on port 443. The Console Operator takes care of deploying and managing the console pods, while the Ingress Operator ensures a default router is up and running. On bare metal, the key to making the console accessible is to ensure your DNS points at the router’s external interface—whether that’s a dedicated IP on each worker node or an external load balancer VIP.

By understanding these mechanics, you can customize the console domain, certificate, and scaling strategy to best fit your environment. And once your console is online, you’ll have the full power of the OpenShift UI at your fingertips—no matter where your cluster happens to be running!

Understanding OpenShift 4.x API Server Exposure on Bare Metal

Running OpenShift 4.x on bare metal has a number of advantages: you get to maintain control of your own environment without being beholden to a cloud provider’s networking or load-balancing solution. But with that control comes a bit of extra work, especially around how the OpenShift API server is exposed.

In this post, we’ll discuss:

  • How the OpenShift API server is bound on each control-plane (master) node.
  • Load-balancing options for the API server in a bare-metal environment.
  • The difference between external load balancers, keepalived/HAProxy, and MetalLB.

1. How OpenShift 4.x Binds the API Server

Static Pods with Host Networking

In Kubernetes, control-plane components like the API server can run as static pods on each control-plane node. In OpenShift 4.x, the kube-apiserver pods use hostNetwork: true, which means they bind directly to the host’s network interface—specifically on port 6443 by default.

  • Location of static pod manifests: The manifests live in /etc/kubernetes/manifests on each master node; they are rendered and rolled out by the cluster’s kube-apiserver operator (through its installer pods) rather than being something you edit by hand.
  • Direct binding: Because these pods use host networking, port 6443 is opened on the master node’s own interfaces. There is no standard Kubernetes Service or NodePort in the data path; the port is bound at the OS level, as the quick check below shows.
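
A quick way to verify this on a live cluster (the node name is hypothetical; the ss output should show a kube-apiserver process listening on 6443):

oc debug node/master-0.mycluster.example.com -- \
  chroot /host sh -c 'ss -tlnp | grep 6443'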

Implications

  • There is no Route or Ingress object exposing the control-plane API externally; in-cluster clients reach it through the built-in kubernetes Service, but external traffic goes straight to port 6443 on the masters.
  • The typical Service/Route-based exposure flow doesn’t apply to these system components; they live outside the usual Kubernetes networking model to ensure reliability and isolation.

2. Load-Balancing the API Server

In a production environment, you typically want the API server to be highly available. You accomplish that by putting a load balancer in front of the master nodes, each of which listens on port 6443. This helps ensure that if one node goes down, the others can still respond to API requests.
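
Whichever option you land on, a quick sanity check is to hit one of the API server’s health endpoints through the balanced address (this assumes the default RBAC, which leaves those endpoints readable without authentication; the DNS name is hypothetical):

curl -k https://api.mycluster.example.com:6443/readyz
# A healthy endpoint replies with: ok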

Below are three common ways to achieve this on bare metal.

Option A: External Hardware/Virtual Load Balancer (F5, etc.)

Overview
Many on-prem or private datacenter environments already have a load-balancing solution in place, e.g., F5, A10, or Citrix NetScaler appliances. If that’s the case, you can simply:

  1. Configure a virtual server that listens on api.<cluster-domain>:6443.
  2. Point it to the IP addresses of your OpenShift master nodes on port 6443 (a generic software equivalent of this configuration is sketched below).
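
To make those two steps concrete without tying them to a particular appliance, here is a minimal HAProxy TCP pass-through sketch that a dedicated load-balancer host could use (IPs and hostnames are hypothetical; no TLS is terminated at the LB, so clients still see the API server’s own certificate):

# /etc/haproxy/haproxy.cfg fragment on an external load-balancer host
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api-masters

backend openshift-api-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master0 192.0.2.11:6443 check
    server master1 192.0.2.12:6443 check
    server master2 192.0.2.13:6443 check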

Pros

  • Extremely common in enterprise scenarios.
  • Well-supported by OpenShift documentation and typical best practices.
  • Often includes advanced features (SSL offloading, health checks, etc.).

Cons

  • Requires specialized hardware or a VM/appliance with a license in some cases.

Option B: Keepalived + HAProxy on the Master Nodes

Overview
If you lack a dedicated external load balancer, you can run a keepalived/HAProxy setup within your cluster’s control-plane nodes themselves. Typically:

  • Keepalived manages a floating Virtual IP (VIP).
  • HAProxy receives the traffic arriving at the VIP on port 6443 and forwards it to the API servers on the masters. In practice HAProxy typically has to listen on a different local port when it runs on the masters themselves, since kube-apiserver already binds port 6443 on every master node; a keepalived sketch follows this list.
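
A minimal keepalived sketch for the floating VIP (interface name, router ID, password, and addresses are all hypothetical; the HAProxy side looks much like the fragment shown earlier, adjusted so it does not clash with the API server’s own listener):

# /etc/keepalived/keepalived.conf fragment on each master
vrrp_instance api_vip {
    state BACKUP              # let priority decide which node holds the VIP
    interface ens3            # hypothetical NIC name
    virtual_router_id 51
    priority 100              # raise this on the preferred node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.0.2.100/24        # the VIP that api.<cluster-domain> resolves to
    }
}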

Pros

  • No extra hardware or external appliances needed.
  • Still provides a single endpoint (api.<cluster-domain>:6443) that floats among the masters.

Cons

  • More complex to configure and maintain.
  • You’re hosting the load-balancing solution on the same nodes as your control-plane, so it’s critical to ensure these components remain stable.

Option C: MetalLB for LoadBalancer Services

Overview
MetalLB is an open-source solution that brings “cloud-style” LoadBalancer services to bare-metal Kubernetes clusters. It typically works in Layer 2 (ARP) or BGP mode to announce addresses, allowing you to create a Service of type: LoadBalancer that obtains a routable IP.
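
To make the contrast with the API server concrete, here is what MetalLB usage looks like for an ordinary application, sketched against a recent MetalLB release that uses CRD-based configuration (the address range and all names outside metallb-system are hypothetical):

# An address pool MetalLB can allocate from, announced in Layer 2 mode:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: app-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.200-192.0.2.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: app-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - app-pool
---
# Any ordinary Service of type LoadBalancer can then receive an IP from the pool:
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  namespace: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-frontend
  ports:
    - port: 443
      targetPort: 8443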

Should You Use It for the API Server?

  • While MetalLB is great for application workloads requiring a LoadBalancer IP, it is generally not the recommended approach for the cluster’s control-plane traffic in OpenShift 4.x.
  • The API server is not declared as a standard “service” in the cluster; instead, it’s a static pod using host networking.
  • You would need additional customizations to treat the API endpoint like a load-balancer service. This is a non-standard pattern in OpenShift 4.x, and official documentation typically recommends either an external LB or keepalived/HAProxy.

Pros (for application workloads)

  • Provides a simple way to assign external IP addresses to your apps without external hardware.
  • Lightweight solution that integrates neatly with typical Kubernetes workflows.

Cons

  • Not officially supported for the API server’s main endpoint.
  • Missing advanced features you might find in dedicated appliances (SSL termination, advanced health checks, etc.).

3. Recommended Approaches

  1. If You Have an Existing Load Balancer
    • Point it at your master nodes’ IP addresses, forwarding :6443 to each node’s :6443.
    • You’ll typically have a DNS entry like api.yourcluster.example.com that resolves to the load balancer’s VIP or IP.
  2. If You Don’t Have One
    • Consider deploying keepalived + HAProxy on the master nodes. You can designate one floating IP that is managed by keepalived. HAProxy on each node can route requests to local or other masters’ API endpoints.
  3. Use MetalLB for App Workloads, Not the Control Plane
    • If you are on bare metal and need load-balancing for normal application services (i.e., front-end web apps), then MetalLB is a great choice.
    • However, for the control-plane API, it’s best to stick to the official recommended approach of an external LB or keepalived/HAProxy.

Conclusion

The API server in OpenShift 4.x is bound at the host network level (port 6443) on each control-plane node via static pods, which is different from how typical workloads are exposed. To achieve high availability on bare metal, you need some form of load balancer—commonly an external appliance or keepalived + HAProxy. MetalLB is excellent for exposing standard application workloads via type: LoadBalancer, but it isn’t the typical path for the OpenShift control-plane traffic.

By understanding these different paths, you can tailor your OpenShift 4.x deployment strategy to match your on-prem infrastructure, making sure your cluster’s API remains accessible, robust, and highly available.