
Installing Ascender on RKE2 (Air-Gapped)

Overview

This guide covers installing Ascender on an air-gapped RKE2 cluster with no internet access. It uses RKE2's built-in embedded registry mirror (Spegel) to distribute container images across cluster nodes without requiring an external registry. Images are saved on a staging host, transferred to the air-gapped node, and imported into containerd.

Unlike K3s, the installer does not create the RKE2 cluster. You set it up first, then run the Ascender installer against it.

Run the staging steps on a staging host. This can be any machine with internet access: a laptop, a VM, a spare server. It does not need to be one of your cluster nodes. It does need to run the same OS version as your air-gapped nodes (Rocky Linux 8 or 9, x86_64) so the RPMs downloaded here are compatible.

No external registry required

Images are stored directly in containerd on the air-gapped node and served to other nodes by RKE2's embedded mirror (Spegel). If you already have a private registry in your environment, see the RKE2 private registry docs instead.

Ascender Pro

Ascender Pro can be included in this workflow. See the Ascender Pro notes in Configuring the Installer and Running the Bundle Creator.

Private Registry Variant

If your environment already has a private registry, you can use it instead of the embedded RKE2 mirror. Set k8s_container_registry in custom.config.yml, push the Ascender and storage images to that registry, and configure RKE2 authentication in registries.yaml as described in the RKE2 private registry docs.

With that approach, skip the sections that exist only to load tarballs into containerd, such as Importing Ascender Images and the image-import steps under Configuring Storage.

Prerequisites

Staging Host

The staging host needs the following tools installed before you begin:

  • git
  • ansible-core
  • docker (installed by the bundle creator, but verify your system can run it)
  • curl

The staging host must run the same OS version as your air-gapped nodes (Rocky Linux 8 or 9, x86_64) so that RPMs downloaded here are compatible.

Air-Gapped Nodes

All RKE2 nodes require:

  • 4 CPUs
  • 8 GB RAM
  • 40 GB disk
  • Rocky Linux 8 or 9 (x86_64)
  • A unique hostname (cannot be changed after installation)

For HA deployments, you will also need three nodes and an unused IP address in the same subnet to serve as the Virtual IP (VIP). DNS entries for Ascender point to this address.


Preparing on the Staging Host

Downloading RKE2 Artifacts

Download the following files from the RKE2 releases page for your target version:

  • rke2.linux-amd64.tar.gz (RKE2 binaries)
  • rke2-images.linux-amd64.tar.zst (RKE2 system container images, includes Canal CNI)
  • sha256sum-amd64.txt (checksum file)
  • install.sh (offline installer script)
RKE2 version requirement

The embedded registry mirror (Spegel) requires RKE2 v1.31.4 or later. This guide uses v1.33 as the example. Check the RKE2 releases page for the latest stable patch in the v1.33 series and use that instead.

mkdir -p ~/ascender-airgap/rke2-artifacts
cd ~/ascender-airgap/rke2-artifacts
RKE2_VERSION=v1.33.1+rke2r1
curl -LO "https://github.com/rancher/rke2/releases/download/${RKE2_VERSION}/rke2.linux-amd64.tar.gz"
curl -LO "https://github.com/rancher/rke2/releases/download/${RKE2_VERSION}/rke2-images.linux-amd64.tar.zst"
curl -LO "https://github.com/rancher/rke2/releases/download/${RKE2_VERSION}/sha256sum-amd64.txt"
curl -Lo install.sh https://get.rke2.io
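Optionally, verify the downloads against the checksum file before going further. The checksum file covers every release artifact, so --ignore-missing skips entries for files you did not download:

sha256sum -c --ignore-missing sha256sum-amd64.txt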

Configuring the Installer

sudo dnf install -y git dnf-plugins-core ansible-core
mkdir -p ~/ascender-airgap
git clone https://github.com/ctrliq/ascender-install.git ~/ascender-airgap/ascender-install
cd ~/ascender-airgap/ascender-install

Create custom.config.yml with the settings for your deployment. The example configuration below shows the minimum required settings. For all available options, refer to default.config.yml in the repository.

The installer requires a TLS certificate for the Ascender web UI. Generate it now inside the ascender-install directory. It will travel with the bundle to the air-gapped node and the installer will locate it automatically. Replace ascender.example.com with your ASCENDER_HOSTNAME value:

openssl req -x509 -newkey rsa:4096 -keyout ascender.key -out ascender.crt -days 365 -nodes \
  -subj "/CN=ascender.example.com" \
  -addext "subjectAltName=DNS:ascender.example.com"

If you have a CA-signed certificate instead, copy the .crt and .key files into the ascender-install directory and update the paths in custom.config.yml accordingly.
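Either way, you can confirm the certificate's subject and SAN match your ASCENDER_HOSTNAME before moving on (requires OpenSSL 1.1.1 or later, which Rocky Linux 8 and 9 both ship):

openssl x509 -in ascender.crt -noout -subject -ext subjectAltName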

vim custom.config.yml

Because images are loaded directly into containerd with their original names, k8s_container_registry is not set. The installer uses the default image references (ghcr.io/ctrliq/ascender, etc.), which match the images the bundle creator saves.

k8s_platform: rke2
k8s_lb_protocol: https
k8s_offline: true
kube_install: false
download_kubeconfig: true
kubeapi_server_ip: "<RKE2-NODE-IP>"
use_etc_hosts: false
tls_crt_path: "{{ playbook_dir }}/../ascender.crt"
tls_key_path: "{{ playbook_dir }}/../ascender.key"
ASCENDER_HOSTNAME: <ascender.example.com>
ASCENDER_NAMESPACE: ascender
ASCENDER_ADMIN_USER: admin
ASCENDER_ADMIN_PASSWORD: "<change-me>"
ASCENDER_VERSION: 25.3.5
ASCENDER_OPERATOR_VERSION: 2.19.4
ascender_image_pull_policy: IfNotPresent
ascender_garbage_collect_secrets: true
ascender_setup_playbooks: true
Warning

ascender_image_pull_policy: IfNotPresent is required. Unlike K3s offline installs, the RKE2 installer does not set this automatically. Without it, pods will attempt to pull images from the internet and fail.

To include Ascender Pro, add the following to the end of your custom.config.yml. The LEDGER_REGISTRY block is required here so the bundle creator can authenticate against depot.ciq.com to pull the Ledger images.

LEDGER_INSTALL: true
LEDGER_HOSTNAME: <ledger.example.com>
LEDGER_ADMIN_PASSWORD: "<change-me>"
LEDGER_DB_PASSWORD: "<change-me>"
LEDGER_VERSION: latest
LEDGER_WEB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-web
LEDGER_PARSER_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-parser
LEDGER_DB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-db
LEDGER_REGISTRY:
  BASE: depot.ciq.com
  USERNAME: <your-depot-username>
  PASSWORD: <your-depot-token>
Remove `LEDGER_REGISTRY` before transferring

Remove or comment out the LEDGER_REGISTRY block from custom.config.yml before copying the directory to the transfer device. If it remains set, the installer will attempt to authenticate against depot.ciq.com from the air-gapped node at install time and fail.

Running the Bundle Creator

The bundle creator installs Docker, pulls all required container images, and saves them as tarballs in offline/images/. It also downloads RPM packages, Ansible collections, and operator manifests into offline/.

If you are installing Ascender Pro, install the community.docker Ansible collection first, which is required for the bundle creator to authenticate against the Depot registry:

ansible-galaxy collection install community.docker

Run the bundle creator, passing k8s_platform=k3s to trigger the image save workflow. The bundle creator uses the K3s image-save path for all platforms. This is intentional shared infrastructure, not a mistake:

ansible-playbook playbooks/create_bundle.yml -e k8s_platform=k3s

When complete, offline/images/ will contain tarballs for all required images named <image>-<tag>.tar, for example ascender-25.3.5.tar and postgres-latest.tar.
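A quick sanity check that the tarballs were written (file names will vary with the versions in your custom.config.yml):

ls -lh offline/images/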

Also download busybox to the same directory. It is required by local-path-provisioner:

docker pull busybox
docker save busybox -o ~/ascender-airgap/ascender-install/offline/images/busybox-latest.tar

The air-gapped node also needs ansible-core to run the installer. Download it now so it ships with the bundle:

sudo dnf download --resolve --destdir ~/ascender-airgap/ascender-install/offline/packages ansible-core

Downloading Storage Manifests

Download the storage manifests and images:

Single Node

local-path-provisioner is lightweight and sufficient for single-node deployments. It stores data at /opt/local-path-provisioner on the node:

mkdir -p ~/ascender-airgap/storage
curl -Lo ~/ascender-airgap/storage/local-path-storage.yaml \
  https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.35/deploy/local-path-storage.yaml
docker pull rancher/local-path-provisioner:v0.0.35
docker save rancher/local-path-provisioner:v0.0.35 \
  -o ~/ascender-airgap/storage/local-path-provisioner.tar

Multi-Node HA

Longhorn provides replicated block storage across nodes. Download the manifest, image list, and RPM prerequisites:

mkdir -p ~/ascender-airgap/storage/rpms
sudo dnf download --resolve --destdir ~/ascender-airgap/storage/rpms iscsi-initiator-utils nfs-utils

LONGHORN_VERSION=v1.11.1
mkdir -p ~/ascender-airgap/storage/longhorn-images
cd ~/ascender-airgap/storage
curl -Lo longhorn.yaml \
  https://raw.githubusercontent.com/longhorn/longhorn/${LONGHORN_VERSION}/deploy/longhorn.yaml
curl -Lo longhorn-images.txt \
  https://raw.githubusercontent.com/longhorn/longhorn/${LONGHORN_VERSION}/deploy/longhorn-images.txt
while IFS= read -r image; do
  name=$(echo "$image" | tr '/:' '-')
  docker pull "$image"
  docker save "$image" -o "longhorn-images/${name}.tar"
done < longhorn-images.txt

See the Longhorn air-gap documentation for full details.

Organizing for Transfer

Copy the entire ascender-airgap folder to your transfer device:

~/ascender-airgap/
├── rke2-artifacts/            # RKE2 binaries, system images, and install script
├── storage/                   # Storage provisioner manifests and images
└── ascender-install/
    ├── custom.config.yml
    ├── ascender.crt
    ├── ascender.key
    └── offline/
        ├── images/            # Ascender (and Ledger) image tarballs
        ├── packages/          # RPMs
        ├── collections/       # Ansible collections
        └── ascender-operator-<version>/
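If a single file is easier to move onto your transfer device, one option is to archive the directory first (the archive name here is just an example):

tar -czf ascender-airgap.tar.gz -C ~ ascender-airgap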

Preparing the First Node

Copy the ascender-airgap folder from your transfer device to the server under /root.
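If you created the optional archive on the staging host, extract it in place instead:

sudo tar -xzf ascender-airgap.tar.gz -C /root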

Disable SELinux

RKE2 does support SELinux, but disabling it simplifies the installation and avoids potential policy conflicts:

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

The first command disables SELinux immediately. The second makes it persistent across reboots.

Disable firewalld

firewalld conflicts with the default Canal (Calico + Flannel) CNI plugin that RKE2 uses. See the RKE2 known issues for details.

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Enable time synchronization

Ensure chronyd is running on all nodes. Time skew between nodes can cause etcd failures in HA deployments:

sudo dnf install -y chrony
sudo systemctl enable --now chronyd
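chronyc tracking reports whether the node is synchronized. In a fully isolated network this assumes an internal NTP source is configured in /etc/chrony.conf:

chronyc tracking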

Installing RKE2

Place the RKE2 system images where RKE2 expects them, then run the offline installer:

sudo mkdir -p /var/lib/rancher/rke2/agent/images/
sudo cp /root/ascender-airgap/rke2-artifacts/rke2-images.linux-amd64.tar.zst /var/lib/rancher/rke2/agent/images/
sudo INSTALL_RKE2_ARTIFACT_PATH=/root/ascender-airgap/rke2-artifacts sh /root/ascender-airgap/rke2-artifacts/install.sh

Enabling the Embedded Registry Mirror

Create the RKE2 config directory:

sudo mkdir -p /etc/rancher/rke2

Write config.yaml:

Single Node
sudo vim /etc/rancher/rke2/config.yaml
embedded-registry: true
tls-san:
  - <NODE1_IP>

Multi-Node HA

Generate a cluster token and write the config file:

openssl rand -hex 16
sudo vim /etc/rancher/rke2/config.yaml
token: <YOUR-GENERATED-TOKEN>
embedded-registry: true
tls-san:
  - <VIP_IP>
  - <VIP_DNS>   # optional, include if connecting via DNS name
Note

You can omit the token and let RKE2 generate one automatically. If you do, retrieve it from /var/lib/rancher/rke2/server/node-token before configuring the other nodes. Setting it explicitly here is simpler.

Also create registries.yaml. This tells containerd to route image pulls through the embedded mirror. Without it, the mirror is running but bypassed and pods will attempt to pull from the internet:

sudo vim /etc/rancher/rke2/registries.yaml
mirrors:
  "*":

Starting RKE2

sudo systemctl enable rke2-server
sudo systemctl start rke2-server

RKE2 startup takes several minutes. It needs to initialize etcd, start the API server, and import the system images from rke2-images.linux-amd64.tar.zst into containerd before the node becomes available. Wait for it to complete before proceeding.

Verify the cluster is up:

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
/var/lib/rancher/rke2/bin/kubectl get nodes

The node should show Ready status. You can follow startup progress with journalctl -u rke2-server -f.

Add the RKE2 binaries to your PATH and set the kubeconfig system-wide:

sudo tee /etc/profile.d/rke2.sh << 'EOF'
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
EOF
source /etc/profile.d/rke2.sh

For other access options, see the RKE2 cluster access docs.

Importing Ascender Images

Once RKE2 is running, import the image tarballs into containerd using the ctr tool that ships with RKE2:

cd /root/ascender-airgap/ascender-install/offline/images
for image in *.tar; do
  sudo /var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock -n k8s.io image import "$image"
done

Importing images takes several minutes. Each image is decompressed and written to containerd's local store. The Ascender EE image in particular is large and can take 3-4 minutes on its own. Let the loop complete before proceeding.

Verify the images are present in containerd:

sudo /var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock -n k8s.io image list | grep ctrliq

Expected output will look similar to:

ghcr.io/ctrliq/ascender:25.3.5             application/vnd.oci.image.manifest.v1+json ...
ghcr.io/ctrliq/ascender-ee:latest          application/vnd.oci.image.manifest.v1+json ...
ghcr.io/ctrliq/ascender-operator:2.19.4    application/vnd.oci.image.manifest.v1+json ...

If the list is empty, the import loop may not have completed or the tarballs were not placed in the correct directory. Rerun the import loop and check for errors.


Complete the Deployment

Single Node

Configuring Storage

Import the local-path-provisioner image and apply the manifest:

sudo /var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock \
  -n k8s.io image import /root/ascender-airgap/storage/local-path-provisioner.tar
kubectl apply -f /root/ascender-airgap/storage/local-path-storage.yaml

Set POSTGRES_STORAGE_CLASS: local-path in your custom.config.yml, or mark it as the cluster default:

sudo kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
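Confirm which StorageClass is marked as the default:

kubectl get storageclass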

Multi-Node HA

Adding Additional Nodes

Note

Nodes must be able to reach each other on TCP ports 5001 and 9345 for the embedded mirror to work. Verify your firewall rules allow this between all cluster nodes.
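One way to spot-check connectivity between nodes without installing extra tools is bash's built-in /dev/tcp redirection. Run this from another node, substituting the first node's IP:

timeout 3 bash -c '</dev/tcp/<NODE1_IP>/9345' && echo "9345 open"
timeout 3 bash -c '</dev/tcp/<NODE1_IP>/5001' && echo "5001 open"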

On each additional node, copy the full ascender-airgap/ directory from the transfer device to /root, then run through the following steps.

Place the RKE2 system images and run the offline installer:

sudo mkdir -p /var/lib/rancher/rke2/agent/images/
sudo cp /root/ascender-airgap/rke2-artifacts/rke2-images.linux-amd64.tar.zst /var/lib/rancher/rke2/agent/images/
sudo INSTALL_RKE2_ARTIFACT_PATH=/root/ascender-airgap/rke2-artifacts sh /root/ascender-airgap/rke2-artifacts/install.sh

Create the RKE2 config directory and write the config file, then create registries.yaml:

sudo mkdir -p /etc/rancher/rke2
sudo vim /etc/rancher/rke2/config.yaml
server: https://<NODE1_IP>:9345
token: <YOUR-GENERATED-TOKEN>
embedded-registry: true
tls-san:
  - <VIP_IP>
  - <VIP_DNS>   # optional, include if connecting via DNS name
sudo vim /etc/rancher/rke2/registries.yaml
mirrors:
  "*":

Start the service:

sudo systemctl enable rke2-server
sudo systemctl start rke2-server

All server nodes in an HA cluster run rke2-server. Once a node starts and joins, Spegel will distribute images from node 1's containerd store automatically. From node 1, verify all nodes join successfully:

kubectl get nodes

All nodes should show Ready status.

Setting Up a VIP with keepalived

A VIP (Virtual IP) allows the cluster to be accessed from a single IP address. DNS entries for Ascender should point to this VIP. keepalived uses a primary/standby model. One node holds the VIP under normal operation, the others take over automatically if it goes down.

Primary node: install keepalived and write the config:

sudo dnf install -y keepalived
sudo cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
sudo truncate -s 0 /etc/keepalived/keepalived.conf
sudo vim /etc/keepalived/keepalived.conf

Replace the placeholder values:

  • <INTERFACE>: the primary network interface (check with ip addr).
  • <AUTH_STRING>: a shared secret for all keepalived nodes. Generate with openssl rand -hex 4.
  • <VIP_ADDRESS>/<CIDR>: the Virtual IP and subnet (e.g., 10.4.12.99/22).
vrrp_instance VI_1 {
    state MASTER
    interface <INTERFACE>
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <AUTH_STRING>
    }
    virtual_ipaddress {
        <VIP_ADDRESS>/<CIDR>
    }
}
sudo systemctl enable --now keepalived

Standby nodes: install keepalived and use the same config as the primary, with two changes: set state to BACKUP and set priority to a lower value (150 for the second node, 100 for the third). The auth_pass, virtual_router_id, interface, and virtual_ipaddress must match the primary.

vrrp_instance VI_1 {
    state BACKUP
    interface <INTERFACE>
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <AUTH_STRING>
    }
    virtual_ipaddress {
        <VIP_ADDRESS>/<CIDR>
    }
}
sudo systemctl enable --now keepalived

Confirm the VIP responds from any machine on the same network:

ping <VIP_ADDRESS>
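To see which node currently holds the VIP, check the configured interface on each node; the address only appears on the active one:

ip addr show <INTERFACE> | grep <VIP_ADDRESS>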

Configuring Storage

Install prerequisites on every node from the downloaded RPMs:

sudo dnf install -y --disablerepo='*' /root/ascender-airgap/storage/rpms/*.rpm
sudo systemctl enable --now iscsid

Import the Longhorn images on each node:

for image in /root/ascender-airgap/storage/longhorn-images/*.tar; do
  sudo /var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock \
    -n k8s.io image import "$image"
done

Apply the manifest from node 1:

sudo kubectl apply -f /root/ascender-airgap/storage/longhorn.yaml

Wait for Longhorn to come up:

kubectl get pods -n longhorn-system --watch

All pods should reach Running status. Longhorn creates a longhorn StorageClass and sets it as the cluster default automatically.

See the Longhorn air-gap documentation for full details.

If your environment already has a storage appliance such as NetApp, vSphere, Pure Storage, Ceph, or VAST Data, you can use its CSI driver instead of Longhorn. Either mark its StorageClass as the cluster default or set POSTGRES_STORAGE_CLASS in your custom.config.yml to the StorageClass name.


Installing Ascender

Run the installer from the transferred directory:

cd /root/ascender-airgap/ascender-install
sudo ./setup.sh

The installer uses the offline/ directory for packages, collections, and operator manifests. All image references use the default ghcr.io/ctrliq/... paths, which match the images now in containerd. With ascender_image_pull_policy: IfNotPresent, Kubernetes finds them locally and does not attempt to pull from the internet.

A successful install ends with:

PLAY RECAP *********************************************************************
ascender_host              : ok=14   changed=6    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
localhost                  : ok=72   changed=27   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0

ASCENDER SUCCESSFULLY SETUP

Verifying the Installation

kubectl get pods -n ascender

All pods should reach Running or Completed state. Access the Ascender web UI at https://ASCENDER_HOSTNAME. The default admin username is admin and the password is the value you set for ASCENDER_ADMIN_PASSWORD.
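From any machine that can resolve ASCENDER_HOSTNAME, you can also confirm the web UI answers over TLS; -k is needed if you generated the self-signed certificate earlier:

curl -k -I https://ascender.example.com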

If Ascender Pro was installed:

kubectl get pods -n ledger

Log into the Ascender Pro web UI, navigate to Settings > License, and upload the .json license file provided by CIQ to activate it.

Replica Counts

For HA deployments, take advantage of the multi-node cluster by running multiple replicas of the Ascender and Ascender Pro components. Add the following to your custom.config.yml before running the installer:

ascender_replicas: 2

If you are also installing Ascender Pro:

ledger_web_replicas: 2
ledger_parser_replicas: 2

Each additional replica adds roughly 1 GB of memory overhead. Adjust counts based on your available node resources.

Troubleshooting

fail2ban blocking inter-node communication

If fail2ban is running on your cluster nodes, its default iptables rules can block inter-node communication. This can cause metrics-server to fail and the Ascender operator to stall during deployment, often surfacing as a failure on "Create imagePullSecret" or pods stuck in CrashLoopBackOff.

The simplest fix is to whitelist all cluster node IPs in fail2ban's ignoreip list so they are never subject to banning:

# /etc/fail2ban/jail.local
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 <NODE1_IP> <NODE2_IP> <NODE3_IP>

Restart fail2ban after making the change:

sudo systemctl restart fail2ban
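You can confirm fail2ban came back up and list the active jails with:

sudo fail2ban-client status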

Configuration Reference

Variable                      Description
k8s_platform                  Set to rke2
k8s_offline                   Set to true
kube_install                  Set to false (the installer does not create the RKE2 cluster)
download_kubeconfig           Set to true to have the installer copy /etc/rancher/rke2/rke2.yaml to ~/.kube/config
ascender_image_pull_policy    Must be set to IfNotPresent. Not set automatically for RKE2 offline installs
k8s_container_registry        Leave unset. Images are imported with their original names and the installer uses those names directly
LEDGER_REGISTRY               Required in custom.config.yml when running the bundle creator with Ascender Pro, so it can authenticate against depot.ciq.com. Remove it before running setup.sh on the air-gapped node