
How to set up a K3s cluster

Admins have choices when it comes to selecting a Kubernetes distribution. One such option is K3s, which can be used for development or production. Here's how to get started.

K3s is a lightweight Kubernetes distribution developed by Rancher. As a lightweight version of Kubernetes, K3s consumes fewer resources than traditional distributions, which enables it to run well on small, individual machines such as laptops or desktop PCs. K3s is also easier than other Kubernetes distributions to set up and manage in many ways.

Both characteristics -- the ability to run on virtually any device, and a deployment process simple enough for even a Raspberry Pi experiment -- make K3s convenient for IT admins who are new to Kubernetes and want to test it.

That said, K3s isn't just for testing and experimentation. It can also serve as a production-ready Kubernetes distribution that can scale to operate across large networks of devices. Rancher promotes K3s as a Kubernetes option for IoT and edge infrastructures due to its low resource requirements, as well as its support for ARM64 and ARMv7 devices.

Whatever your goal -- whether to use K3s to run Kubernetes locally for testing or to deploy it across a large cluster in production -- this tutorial will help you get started. We'll walk step by step through the process to create a K3s cluster.

Why -- and why not -- to use K3s

Before jumping into the installation, let's consider the main drawbacks of K3s.

  1. K3s is an "opinionated" Kubernetes distribution: It comes preconfigured to work in a certain way. It has, for example, preset networking integrations, storage management and ingress control, which can be difficult to modify. This is not inherently a bad thing -- its default settings and integrations are part of what makes K3s easy to use. However, if you want a lot of flexibility regarding how your Kubernetes environment is configured, it might take a fair amount of work to make your K3s cluster fit your vision.
  2. While modern releases support high-availability mode -- meaning clusters with multiple server nodes -- setting up a high-availability cluster requires either an external data store or an embedded etcd cluster with at least three server nodes. High-availability mode is often simpler to configure in other Kubernetes distributions.
  3. It's difficult to find providers that offer K3s as a managed service. Civo is the first vendor to provide K3s as a managed service, but all the mainstream managed Kubernetes services -- such as Amazon Elastic Kubernetes Service, Microsoft Azure Kubernetes Service and Google Kubernetes Engine -- are based on other Kubernetes distributions. In general, to run K3s, you must set up and manage it yourself. This isn't a limitation of K3s itself, but it does place limitations on a K3s deployment's flexibility.
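To make the high-availability point concrete, here's a sketch of how an embedded etcd setup looks in newer K3s releases. It assumes the k3s binary is already installed on each machine; the hostname first-server and the SERVER-TOKEN value are placeholders you must replace with your own.

```shell
# On the first server node: initialize a new cluster with embedded etcd.
sudo k3s server --cluster-init

# On the second and third server nodes: join the existing cluster.
# Replace first-server and SERVER-TOKEN with your own values.
sudo k3s server --server https://first-server:6443 --token SERVER-TOKEN
```

An embedded etcd cluster needs an odd number of server nodes -- at least three -- to maintain quorum.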

Set up a K3s cluster

Now, let's look at how to set up a K3s cluster.

Step 1. Get a Linux machine

First, you'll need a device running Linux. K3s is expected to work on most modern Linux distributions, so you can choose whichever one you prefer, as well as where you run it -- in a VM or on bare metal.

Step 2. Download the K3s binary

On your chosen Linux machine, download the K3s binary -- named k3s -- from its GitHub repository via your web browser. Alternatively, use a wget or curl command.

wget https://github.com/k3s-io/k3s/releases/download/v1.23.5%2Bk3s1/k3s

Now make the binary executable.

chmod +x k3s
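As an alternative to managing the binary by hand, the K3s project also provides an install script that downloads the binary and registers K3s as a system service, so the server starts automatically. A minimal sketch; if you go this route, you can skip Step 3 below because the service starts the K3s server for you.

```shell
# Download and run the official K3s install script. It places the k3s
# binary in /usr/local/bin and sets up a systemd (or openrc) service.
curl -sfL https://get.k3s.io | sh -

# With a script-based install, kubectl is available as a k3s subcommand:
sudo k3s kubectl get nodes
```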

Step 3. Start the K3s server

With the executable binary in place, run this command to start K3s on your device.

sudo ./k3s server

Step 4. Check your cluster

At this point, your cluster should be up and running as a single-node cluster. To confirm that the node is operating, run the following command.

sudo ./k3s kubectl get nodes

The output should look similar to the following.

NAME             STATUS    ROLES                   AGE    VERSION
your-hostname    Ready     control-plane,master    65s    v1.23.5+k3s1
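You can also peek at the system workloads K3s runs out of the box -- such as its bundled CoreDNS and Traefik components -- with a command like the following.

```shell
# List every pod in all namespaces. A fresh K3s cluster runs its bundled
# components (CoreDNS, Traefik, metrics-server, etc.) in kube-system.
sudo ./k3s kubectl get pods --all-namespaces
```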

Step 5. Manage your cluster

K3s provides a built-in kubectl utility. You can run most kubectl commands through the K3s binary.

To drain the node you created, for example, use the following command.

sudo ./k3s kubectl drain your-hostname

Or to cordon a node, use the command below.

sudo ./k3s kubectl cordon your-hostname
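Draining and cordoning both mark the node unschedulable, so to return it to service -- and to confirm it can schedule workloads again -- you might run something like the following. The nginx-test deployment name here is just an illustrative example.

```shell
# Make the node schedulable again after a drain or cordon.
sudo ./k3s kubectl uncordon your-hostname

# Deploy a throwaway nginx workload to confirm scheduling works,
# then check that its pod reaches the Running state.
sudo ./k3s kubectl create deployment nginx-test --image=nginx
sudo ./k3s kubectl get pods -l app=nginx-test

# Clean up the test deployment when done.
sudo ./k3s kubectl delete deployment nginx-test
```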

Add nodes to a K3s cluster

In the steps above, we created a cluster with just one node. If you want to add a node to your cluster, first determine the node token value of your K3s server. You can get this by running the following command on the server.

cat /var/lib/rancher/k3s/server/node-token

The value should be a string of numbers and letters. For example, here's mine.

K109577d4b65c338968b8349fd29cfef99280a1b14a4ff3464c4251d556a4fa668a::server:e43aee47b7f81c5b783b8a1d4e1739ce

Then, on the machine that will serve as the additional node, download the K3s binary, make it executable as before, and run the following.

sudo ./k3s agent --server https://myserver:6443 --token NODE-TOKEN

Be sure to substitute your K3s server's hostname for myserver and the node token from your K3s server for the NODE-TOKEN value in this command.

Repeat this process to add as many nodes as you want to your cluster.
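After an agent connects, you can confirm that the new node registered by listing the nodes again from the server. The joined machine should appear alongside the original node -- agent nodes typically show their ROLES as <none>.

```shell
# Run on the server node: the joined agent should appear alongside
# the original control-plane node.
sudo ./k3s kubectl get nodes
```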

Go further

There are many more things you can do with K3s, such as customize networking or logging, change the container runtime, and set up certificates. Check out the K3s documentation for details on everything you can do with an active K3s cluster.
