
Installing Ascender on RKE2

Overview

RKE2 runs on a single node or a multi-node HA cluster and supports RLC-Hardened for environments with stricter security requirements.

Unlike K3s, the installer does not create the RKE2 cluster. You set up RKE2 first, then run the Ascender installer against it.

For air-gapped environments, see the RKE2 Offline Install guide instead.

Prerequisites

All RKE2 nodes require:

  • 4 CPUs
  • 8 GB RAM
  • 40 GB disk
  • Rocky Linux 8 or 9
  • A unique hostname (cannot be changed after installation)

For HA deployments, you need three nodes that meet these requirements, plus an unused IP address in the same subnet to serve as the Virtual IP (VIP). DNS entries for Ascender should point to this address.
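As an illustration only (the hostname and VIP are placeholders, and your DNS tooling will differ), the entry you need is a simple A record pointing the Ascender hostname at the VIP:

ascender.example.com.    IN    A    <VIP_ADDRESS>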

Preparing the Nodes

Run these steps on each node before installing RKE2.

Disable SELinux

RKE2 does support SELinux, but disabling it simplifies the installation and avoids potential policy conflicts:

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

The first command switches SELinux to permissive mode immediately. The second disables it persistently across reboots.
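To confirm the change, check the current mode. It should report Permissive immediately after setenforce 0, and Disabled after the next reboot:

getenforce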

Disable firewalld

firewalld conflicts with the default Canal (Calico + Flannel) CNI plugin that RKE2 uses. See the RKE2 known issues for details.

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Enable time synchronization

Ensure chronyd is running on all nodes. Time skew between nodes can cause etcd failures in HA deployments:

sudo dnf install -y chrony
sudo systemctl enable --now chronyd
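To verify that time is actually synchronized on each node, chrony's tracking report shows the reference source and current offset:

chronyc tracking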

Installing RKE2

Single Node

Create the RKE2 config directory and write the config file:

sudo mkdir -p /etc/rancher/rke2
sudo vim /etc/rancher/rke2/config.yaml

tls-san:
  - <NODE_IP>

Install RKE2 and start the service:

curl -sfL https://get.rke2.io | sudo sh -
sudo systemctl enable --now rke2-server

Add the RKE2 binaries to your PATH and set the kubeconfig system-wide:

sudo tee /etc/profile.d/rke2.sh << 'EOF'
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
EOF
source /etc/profile.d/rke2.sh

For other access options, see the RKE2 cluster access docs.

Verify the node is ready:

kubectl get nodes
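The output should look roughly like the following (the node name, age, and exact RKE2 version will differ); the node must report Ready before continuing:

NAME          STATUS   ROLES                       AGE   VERSION
rke2-node-1   Ready    control-plane,etcd,master   2m    v1.30.x+rke2r1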

Storage

RKE2 does not include a default storage provisioner. Ascender requires a StorageClass for its PostgreSQL persistent volume.

Single Node

local-path-provisioner is lightweight and sufficient for single-node deployments. It stores data at /opt/local-path-provisioner on the node:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.35/deploy/local-path-storage.yaml

Once applied, mark it as the cluster default:

kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
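To confirm the annotation was applied, list the storage classes; local-path should show (default) next to its name:

kubectl get storageclass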

Multi-Node HA

Longhorn provides replicated block storage across cluster nodes, keeping storage highly available to match the rest of the cluster.

If your infrastructure already has a storage appliance (NetApp, Pure Storage, vSphere, Ceph, VAST Data, etc.), use its official Kubernetes CSI driver instead.

Install the prerequisites on each RKE2 node:

sudo dnf install -y iscsi-initiator-utils nfs-utils
sudo systemctl enable --now iscsid

Install Longhorn from node 1 (Kubernetes deploys it across all nodes automatically):

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.11.1/deploy/longhorn.yaml

Longhorn creates a longhorn StorageClass and sets it as the cluster default automatically. By default it stores volume data at /var/lib/longhorn on each node. If you want storage on a dedicated disk, mount it to /var/lib/longhorn before installing Longhorn. See the Longhorn installation docs for full details.
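If you do dedicate a disk to Longhorn, a minimal sketch of preparing it looks like the following. The device name /dev/sdb is only an assumption, so substitute your actual disk, and run this on every node before installing Longhorn:

# Assumes /dev/sdb is an empty disk reserved for Longhorn data
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /var/lib/longhorn
sudo mount /dev/sdb /var/lib/longhorn
# Persist the mount across reboots
echo '/dev/sdb /var/lib/longhorn xfs defaults 0 0' | sudo tee -a /etc/fstab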

Installing Ascender

Clone the ascender-install repository if you don't have it:

git clone https://github.com/ctrliq/ascender-install.git
cd ascender-install

Generate a TLS certificate for the Ascender web UI. Run this from inside the ascender-install directory. The certificate files will be created there so the installer can locate them automatically:

openssl req -x509 -newkey rsa:4096 -keyout ascender.key -out ascender.crt -days 365 -nodes \
  -subj "/CN=<ascender.example.com>" \
  -addext "subjectAltName=DNS:<ascender.example.com>"
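To double-check that the certificate carries the SAN that browsers (and the installer) will validate against, inspect it with openssl:

openssl x509 -in ascender.crt -noout -text | grep -A1 "Subject Alternative Name"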

Create custom.config.yml:

Single Node
k8s_platform: rke2
k8s_lb_protocol: https
kube_install: false
download_kubeconfig: true
kubeapi_server_ip: "<NODE_IP>"
use_etc_hosts: false
tls_crt_path: "{{ playbook_dir }}/../ascender.crt"
tls_key_path: "{{ playbook_dir }}/../ascender.key"
ASCENDER_HOSTNAME: <ascender.example.com>
ASCENDER_NAMESPACE: ascender
ASCENDER_ADMIN_USER: admin
ASCENDER_ADMIN_PASSWORD: "<change-me>"
ASCENDER_VERSION: 25.3.5
ASCENDER_OPERATOR_VERSION: 2.19.4
ascender_garbage_collect_secrets: true
ascender_setup_playbooks: true

To include Ascender Pro, add the following to custom.config.yml:

LEDGER_INSTALL: true
LEDGER_HOSTNAME: <ledger.example.com>
LEDGER_NAMESPACE: ledger
LEDGER_REGISTRY:
  BASE: depot.ciq.com
  USERNAME: <your-depot-username>
  PASSWORD: <your-depot-token>
LEDGER_ADMIN_PASSWORD: "<change-me>"
LEDGER_DB_PASSWORD: "<change-me>"
LEDGER_VERSION: latest
LEDGER_WEB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-web
LEDGER_PARSER_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-parser
LEDGER_DB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-db

See the Ascender Pro Installation Guide for the full variable reference and post-install steps.

Multi-Node HA

Set kubeapi_server_ip to the VIP you configured in the keepalived section above.

k8s_platform: rke2
k8s_lb_protocol: https
kube_install: false
download_kubeconfig: true
kubeapi_server_ip: "<VIP_ADDRESS>"
use_etc_hosts: false
tls_crt_path: "{{ playbook_dir }}/../ascender.crt"
tls_key_path: "{{ playbook_dir }}/../ascender.key"
ASCENDER_HOSTNAME: <ascender.example.com>
ASCENDER_NAMESPACE: ascender
ASCENDER_ADMIN_USER: admin
ASCENDER_ADMIN_PASSWORD: "<change-me>"
ASCENDER_VERSION: 25.3.5
ASCENDER_OPERATOR_VERSION: 2.19.4
ascender_garbage_collect_secrets: true
ascender_setup_playbooks: true

To include Ascender Pro, add the following to custom.config.yml:

LEDGER_INSTALL: true
LEDGER_HOSTNAME: <ledger.example.com>
LEDGER_NAMESPACE: ledger
LEDGER_REGISTRY:
  BASE: depot.ciq.com
  USERNAME: <your-depot-username>
  PASSWORD: <your-depot-token>
LEDGER_ADMIN_PASSWORD: "<change-me>"
LEDGER_DB_PASSWORD: "<change-me>"
LEDGER_VERSION: latest
LEDGER_WEB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-web
LEDGER_PARSER_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-parser
LEDGER_DB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-db

With a multi-node cluster, running multiple replicas takes advantage of available capacity. Add the following for both Ascender and Ascender Pro:

ascender_replicas: 2
ledger_web_replicas: 2
ledger_parser_replicas: 2

Each additional replica adds roughly 1 GB of memory overhead. Adjust counts based on your available node resources.
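To check how much headroom the nodes have before raising the counts, kubectl top works out of the box, since RKE2 deploys metrics-server by default:

kubectl top nodes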

See the Ascender Pro Installation Guide for the full variable reference and post-install steps.

For all available configuration options, refer to default.config.yml in the repository. See the main install guide for the full variable reference.

Run the installer:

sudo ./setup.sh

Verifying the Installation

kubectl get pods -n ascender

All pods should reach Running or Completed state. Access the Ascender web UI at https://ASCENDER_HOSTNAME and log in with the ASCENDER_ADMIN_USER and ASCENDER_ADMIN_PASSWORD values from your custom.config.yml.
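If pods are still starting, watching the namespace is easier than re-running the command:

kubectl get pods -n ascender -w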

If Ascender Pro was installed:

kubectl get pods -n ledger

Log into the Ascender Pro web UI, navigate to Settings > License, and upload the .json license file provided by CIQ to activate it.

Troubleshooting

fail2ban blocking inter-node communication

If fail2ban is running on your cluster nodes, its default iptables rules can block inter-node communication. This can cause metrics-server to fail and the Ascender operator to stall during deployment, often surfacing as a failure on "Create imagePullSecret" or pods stuck in CrashLoopBackOff.

The simplest fix is to whitelist all cluster node IPs in fail2ban's ignoreip list so they are never subject to banning:

# /etc/fail2ban/jail.local
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 <NODE1_IP> <NODE2_IP> <NODE3_IP>

Restart fail2ban after making the change:

sudo systemctl restart fail2ban
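To confirm the node IPs are no longer subject to banning, check jail status and unban any node IP that was already caught. The jail name sshd below is only an example; substitute the jails you actually run:

sudo fail2ban-client status sshd
sudo fail2ban-client set sshd unbanip <NODE1_IP>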