
How to Create a Red Hat Single Node OpenShift Cluster for a Home Lab

 



In this blog post, we'll walk through the steps to create a Red Hat Single Node OpenShift (SNO) cluster using the Assisted Installer, even if you don't have access to a hypervisor. The post demonstrates how to create a blank virtual machine (VM) on VMware vCloud Director and then provision the SNO cluster on top of that VM. While I used VMware vCloud Director (VCD) as my IaaS lab environment, this process can be applied to any blank VM.

For simplicity, this is a completely GUI-based walkthrough.

Step 1: Create a Virtual Machine

The first step is to create a virtual machine that will host the SNO cluster. I'm using VMware vCloud Director for this, but you can use any virtualization platform of your choice.


As per Red Hat's documentation, the minimum resource requirements for a SNO cluster are:

  • 8 vCPU cores
  • 16 GB of RAM
  • 120 GB of storage

However, to leave headroom for additional applications, I'm allocating 12 vCPUs and 20 GB of RAM to the VM.

Note: If you're planning to use OpenShift Virtualization to run virtual machines on the cluster, you'll need to enable the "Expose hardware-assisted CPU virtualization to guest OS" option and allocate more resources accordingly.

For storage, I'll create two virtual disks: one for the operating system and one for persistent storage for containers.

Note: I used bus type IDE because I received a UUID disk error during the cluster pre-checks; the check requires the VM disks to have the following option set (this may be specific to VMware and vCloud Director):

Name: disk.enableUUID

Value: TRUE

Without hypervisor access, I can't modify this option myself, so I simply used IDE disks, which appear to have it enabled already (I didn't look further into it).
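For reference, if you do have hypervisor-level access, this is the standard VMware advanced VM parameter; it appears in the VM's .vmx configuration as a single line, sketched below (exactly where you set it varies by platform and permissions).

  disk.enableUUID = "TRUE"    # expose consistent disk UUIDs to the guest OS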


Optional: Another approach is to use VCD named disks, which can be shared with VMs (this is specific to VMware vCloud Director).


Step 2: Register with Red Hat Hybrid Cloud Console

Once the VM is created, register for a free Red Hat account to access the Red Hat Hybrid Cloud Console at https://console.redhat.com/.


Step 3: Create a New Cluster

In the Hybrid Cloud Console, navigate to "Clusters" and click "Create New Cluster". Select "Datacenter" and "Assisted Installer", then provide a cluster name, base domain, and choose the "Single Node OpenShift (SNO)" option.





Optional: If you want to use OpenShift Virtualization to run one or more VMs on the cluster, you can enable it at this point. Remember that this requires more resources, plus the "Expose hardware-assisted CPU virtualization to guest OS" option if you are running on a VM.

If you enable the "Install Local Volume Manager Storage" option in this step, you can skip Step 6 (Configure Storage) below, but for the sake of learning I left it off.


Step 4: Add a Host

Next, add the VM you created earlier as a host. You'll need to download the minimal ISO image and attach it to the VM to boot from it.

We will add a Host 


We download the minimal ISO; it's roughly 100 MB.



Step 5: Bootstrap the Cluster

Once the VM has booted from the ISO, return to the Hybrid Cloud Console and bootstrap the OpenShift cluster.

On VMware vCloud Director, I upload the ISO into a catalog so that I can attach it to the VM for boot.


I attach the ISO to the VM and boot from it.


Once the VM has booted to the login prompt, we can return to the Hybrid Cloud Console:


Back in the Hybrid Cloud Console, we can see that the host inventory has picked up the VM booted from the ISO.




I had an NTP error, but I was able to ignore it and continue with the bootstrap of the OCP cluster.




On completion, we can see the node is ready.


Finally, we need to add some DNS entries to access the cluster console. I just added these to my local hosts file (a sketch of the required entries follows).
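As a rough example, assuming a cluster name of sno, a base domain of example.lab, and a node IP of 192.168.1.50 (substitute your own values), the hosts file entries look something like the sketch below. Wildcards don't work in a hosts file, so each *.apps hostname you need (console, OAuth, and later any application routes) has to be listed explicitly.

  192.168.1.50  api.sno.example.lab
  192.168.1.50  console-openshift-console.apps.sno.example.lab
  192.168.1.50  oauth-openshift.apps.sno.example.lab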


And I can then resolve the Cluster console.



Step 6: Configure Storage

After the cluster is ready, you'll need to configure storage for persistent data. I'll demonstrate how to use the Local Storage Operator and the LVM Operator to provision storage from the second virtual disk you created earlier.

If you enabled the "Install Local Volume Manager Storage" option in the previous step, you can skip this step, but for the sake of learning I left it off.

We need to configure storage on the cluster; the easiest way I found was to use an Operator.

Before we get there, let's look at the cluster node:


Here we can see the cluster has used my first disk for the OS; I need to configure the second disk as persistent storage for the apps I want to deploy.

If we select the node name and open a terminal on the node, we can see which local disks are visible to the OS (for example, with lsblk):


We can see we have an additional 120 GB volume that we can use for containers' persistent data.

To do so, let's jump to OperatorHub and install the Local Storage Operator.

Optional: This step is a nice-to-have that displays a node's discovered disks in the UI; you can skip ahead to the LVM cluster creation.

Install the Local Storage Operator; I just left everything at the defaults.


Once the operator is deployed, we navigate to Operators > Installed Operators > Local Storage > Local Volume Discovery.


Let's configure a Local Volume Discovery to find the available disks on the node.


I left all settings at their defaults (the equivalent resource is sketched below).
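For reference, the console wizard is creating a LocalVolumeDiscovery resource behind the scenes. A minimal sketch, assuming the wizard's usual default name of auto-discover-devices, is:

  apiVersion: local.storage.openshift.io/v1alpha1
  kind: LocalVolumeDiscovery
  metadata:
    name: auto-discover-devices        # default name suggested by the console wizard
    namespace: openshift-local-storage
  spec: {}                             # no node selector: discover disks on every node (fine for SNO)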

The local disk discovery daemon has started and is in the discovery phase.


If we navigate to Compute > Nodes, select the single node, and look under Disks, we can see the discovered volumes, including the 120 GB disk I want to allocate as persistent storage for my containers.


I want to use the sdb device for my containers' persistent storage.
Let's go back to OperatorHub and add the LVM Storage Operator.


Access the LVM Storage Operator under Installed Operators.

We then create an LVMCluster and follow the wizard.



We give the cluster the name "Basic-lvmcluster".
Then, under Storage > Device Classes > Thin pool config, I name the thin pool "thin-pool-1" and set the overprovisioning ratio to 10x or lower (I used 2x) with 90% of the storage.



Then select Device Selector.

Enable Force Wipe (note: this runs wipefs to format the device).

We also configure Device Paths with /dev/sdb (based on what we found earlier).

Then remove the node selector terms: we aren't filtering cluster nodes, as this is an SNO setup. The equivalent LVMCluster resource is sketched below.
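For reference, the wizard generates an LVMCluster resource along these lines. This is a sketch only: the namespace and device-class name (vg1 here) depend on your operator version and wizard defaults, and the device path assumes the second disk really does show up as /dev/sdb on your node.

  apiVersion: lvm.topolvm.io/v1alpha1
  kind: LVMCluster
  metadata:
    name: basic-lvmcluster
    namespace: openshift-storage           # LVM Storage operator namespace (may differ by version)
  spec:
    storage:
      deviceClasses:
        - name: vg1                        # device class; the StorageClass is named after it (lvms-vg1)
          default: true
          deviceSelector:
            paths:
              - /dev/sdb                   # the 120 GB data disk found earlier
            forceWipeDevicesAndDestroyAllData: true   # the "Force Wipe" option (runs wipefs)
          thinPoolConfig:
            name: thin-pool-1
            sizePercent: 90                # use 90% of the volume group for the thin pool
            overprovisionRatio: 2          # 2x overprovisioning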


Once created, we can navigate to Storage > StorageClasses.



We can see the LVM storage class has been created.
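To put the new class to use, a workload simply requests it in a PersistentVolumeClaim. A minimal sketch is below, assuming the class is named lvms-vg1 (it will be lvms-<device class name>, so check Storage > StorageClasses for the actual name):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: demo-data
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: lvms-vg1   # substitute the class name shown in the console
    resources:
      requests:
        storage: 5Gi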




Step 7: Deploy an Application

To test the cluster, I'll deploy the Veeam Kasten K10 application using the Operator Hub.

In these final steps, we will deploy an app, in this case Veeam Kasten.

So, we return to the Operator Hub.

Search for Kasten and select Kasten K10 (Free).


I'll leave everything at the defaults and install the operator.


Next, we select the instance we want to deploy.


When creating the instance, I provided the storage class that was created by the LVM Storage operator (a sketch of the equivalent resource follows).
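For reference, the operator form builds a K10 custom resource whose spec mirrors the Kasten Helm chart values. The sketch below is an assumption based on that pattern (the API group, field names, and kasten-io namespace may differ in your operator version); the key point is pointing the persistence storage class at the LVM-backed class.

  apiVersion: apik10.kasten.io/v1alpha1    # Kasten K10 operator API group (check your installed CRD)
  kind: K10
  metadata:
    name: k10
    namespace: kasten-io                   # project where the operator was installed (assumption)
  spec:
    global:
      persistence:
        storageClass: lvms-vg1             # the LVM-backed storage class created earlier (name is an assumption)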


We can see the K10 instance is initialized and deployed.



We can see the pods in the Kasten project being deployed.



Under Storage > PersistentVolumeClaims, we can also see that Kasten is using some of the LVM local storage for a few of its containers.



Step 8: Access the Application

Finally, I'll create a route to access the Kasten K10 dashboard and add a DNS entry to my local hosts file to resolve the URL.

Next, we will create a route so we can access the Kasten dashboard (the equivalent Route resource is sketched below).

We give the route the name "kasten-route".

We add the URL path "/k10/".

We select the service we want to route to: "gateway".

And last, we select the target port for forwarding: "80 → 8000".
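For reference, the equivalent Route resource looks roughly like this; the kasten-io namespace is an assumption based on the default Kasten project name.

  apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: kasten-route
    namespace: kasten-io         # project where Kasten is installed (assumption)
  spec:
    path: /k10/                  # Kasten dashboard path
    to:
      kind: Service
      name: gateway              # Kasten's gateway service
    port:
      targetPort: 8000           # the service's target port (shown as 80 → 8000 in the console)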


On completion, I have a URL that I can access the dashboard with, but I still need to add a DNS entry to the hosts file on my laptop, alongside the other OCP DNS entries, so that the URL resolves to the cluster IP.


Hosts file on my laptop (this could also be a DNS server):



Once saved, the Kasten Dashboard is accessible.

Optional: You can also open the dashboard by clicking the location URL under the route, or via Home > Projects > Kasten > Workloads and selecting the gateway service's URL icon.


Kasten Dashboard



By following these steps, you can easily create a Red Hat Single Node OpenShift cluster for a home lab environment, even without access to a hypervisor. Remember to delete the cluster and the VM when you're done with your testing or lab work.

To remove the cluster after your lab work, follow this process to delete the cluster:

To delete an SNO cluster installed by using the Assisted Installer, perform the following steps:

1. Delete the SNO cluster from the Assisted Installer clusters list.

2. Archive the cluster from the All Clusters list.

3. Delete the VM manually.


Thank you for reading this far; please share and comment.
