Installing Ascender on AKS
Overview
This guide covers the prerequisites and configuration specific to deploying Ascender on Azure Kubernetes Service. For common configuration variables (Ascender application, Ascender Pro, PostgreSQL, TLS), see the main install guide.
Prerequisites
Azure CLI credentials and a service principal must be configured before running the installer. Complete the Authenticating with Azure CLI section below before editing any configuration.
In addition to the general prerequisites, AKS installations require:
- Rocky Linux version 9
- A Microsoft Azure subscription with Owner or User Access Administrator privileges. Service principal creation involves role assignment at the subscription scope, which requires one of these roles.
- The Azure CLI installed on the machine running the installer (a quick version check is shown below; authentication is covered in the next section)
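To confirm the CLI is present before you begin, one quick check is to print its version:
az --version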
Authenticating with Azure CLI
Log in to Azure:
az login
This opens a browser for authentication. If no browser is available, use device code flow:
az login --use-device-code
The Ascender installer expects Azure service principal credentials at ~/.azure/credentials. Create them as follows.
List your subscriptions to get the subscription ID:
az account list --output table
Create a service principal:
az ad sp create-for-rbac --name ascender-installer --role Contributor --scopes /subscriptions/<your-subscription-id>
Record the appId (client ID), password (client secret), and tenant from the output. The client secret is only shown once.
Create the credentials file:
mkdir -p ~/.azure
cat <<EOF > ~/.azure/credentials
[default]
subscription_id=<your-subscription-id>
client_id=<your-client-id>
secret=<your-client-secret>
tenant=<your-tenant-id>
EOF
chmod 600 ~/.azure/credentials
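With the credentials file in place, one quick sanity check (using the ascender-installer name chosen above) is to confirm the service principal exists and that the file permissions are restricted to your user:
az ad sp list --display-name ascender-installer --output table
ls -l ~/.azure/credentials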
Example Configuration
If you do not have the ascender-install directory, clone it:
git clone https://github.com/ctrliq/ascender-install.git
If you already have it, pull the latest changes:
cd ascender-install
git pull
Generate a TLS certificate for Ascender:
openssl req -x509 -newkey rsa:4096 -keyout ascender.key -out ascender.crt -days 365 -nodes \
-subj "/CN=<ascender.example.com>" \
-addext "subjectAltName=DNS:<ascender.example.com>"
Replace <ascender.example.com> with your ASCENDER_HOSTNAME value.
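If you want to confirm the certificate matches your hostname before continuing, one way to inspect it (with OpenSSL 1.1.1 or later) is:
openssl x509 -in ascender.crt -noout -subject -dates -ext subjectAltName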
Create the configuration file with the following contents, adjusting the placeholder values for your environment:
vim custom.config.yml
k8s_platform: aks
k8s_lb_protocol: https
AKS_CLUSTER_NAME: <ascender-prod>
AKS_CLUSTER_STATUS: provision
AKS_CLUSTER_REGION: <eastus>
AKS_K8S_VERSION: "<1.29>"
AKS_INSTANCE_TYPE: Standard_D2_v2
AKS_NUM_WORKER_NODES: 3
AKS_WORKER_VOLUME_SIZE: 100
USE_AZURE_DNS: true
tls_crt_path: "{{ playbook_dir }}/../ascender.crt"
tls_key_path: "{{ playbook_dir }}/../ascender.key"
ASCENDER_HOSTNAME: <ascender.example.com>
ASCENDER_DOMAIN: <example.com>
ASCENDER_NAMESPACE: ascender
ASCENDER_ADMIN_USER: admin
ASCENDER_ADMIN_PASSWORD: "<change-me>"
ASCENDER_VERSION: 25.3.5
ASCENDER_OPERATOR_VERSION: 2.19.4
ascender_garbage_collect_secrets: true
LEDGER_INSTALL: true
LEDGER_HOSTNAME: <ledger.example.com>
LEDGER_NAMESPACE: ledger
LEDGER_REGISTRY:
  BASE: depot.ciq.com
  USERNAME: <DEPOT USERNAME>
  PASSWORD: <DEPOT TOKEN>
LEDGER_ADMIN_PASSWORD: "<change-me>"
LEDGER_DB_PASSWORD: "<change-me>"
LEDGER_VERSION: latest
LEDGER_WEB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-web
LEDGER_PARSER_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-parser
LEDGER_DB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-db
Running the Installer
From the ascender-install directory:
./setup.sh
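Provisioning a new AKS cluster can take a while. If you are running the installer over SSH, one option is to run it under nohup (or a terminal multiplexer such as tmux) so it survives a dropped connection:
nohup ./setup.sh > setup.log 2>&1 &
tail -f setup.log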
Verifying the Installation
Confirm all pods are running:
kubectl get pods -n ascender
All pods should reach Running or Completed status. If Ascender Pro was installed, also check the ledger namespace:
kubectl get pods -n ledger
Check that the load balancer was created and has an external IP:
kubectl get ingress -n ascender
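If pods are still starting, one way to block until everything is ready is kubectl wait; on recent kubectl versions you can exclude pods left behind by completed jobs (which never report Ready) with a field selector:
kubectl wait --for=condition=Ready pod --all -n ascender --timeout=600s --field-selector=status.phase!=Succeeded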
Connecting to the Web UI
After installation, access Ascender at https://ASCENDER_HOSTNAME (the value you configured). Log in with ASCENDER_ADMIN_USER and ASCENDER_ADMIN_PASSWORD.
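If the UI does not load, you can check name resolution and the TLS endpoint from the installer machine, for example:
getent hosts <ascender.example.com>
curl -kI https://<ascender.example.com>
The -k flag skips certificate verification, which is useful when testing with the self-signed certificate generated above.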
Cleanup
If you provisioned the AKS cluster through the installer and want to tear it down, the Terraform state is saved in ascender_install_artifacts/. To destroy the cluster:
terraform -chdir=ascender_install_artifacts/aks_deploy/ destroy --auto-approve
This destroys the entire AKS cluster and all workloads running on it. Verify that you have backups before proceeding.
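Once the destroy completes, you can confirm that no AKS clusters remain in the subscription, for example:
az aks list --output table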
AKS Configuration Reference
Add these variables to your custom.config.yml alongside the common configuration.
Cluster Settings
| Variable | Default | Description |
|---|---|---|
| AKS_CLUSTER_NAME | ascender-aks-cluster | Name of the AKS cluster |
| AKS_CLUSTER_STATUS | provision | Cluster lifecycle action. See below |
| AKS_CLUSTER_REGION | southcentralus | Azure region for the cluster |
AKS_CLUSTER_STATUS controls what the installer does with the cluster:
- provision: Create a new AKS cluster, then install Ascender
- configure: Use an existing cluster by name, but apply required configuration
- no_action: Use an existing cluster as-is with no changes before installing Ascender
Node Pool (required when provisioning)
These variables are used when AKS_CLUSTER_STATUS is provision:
| Variable | Default | Description |
|---|---|---|
| AKS_K8S_VERSION | 1.29 | Kubernetes version. See AKS supported versions |
| AKS_INSTANCE_TYPE | Standard_D2_v2 | Worker node VM size |
| AKS_NUM_WORKER_NODES | 3 | Number of worker nodes |
| AKS_WORKER_VOLUME_SIZE | 100 | OS disk size per worker node in GB |
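Before setting these values, you can ask Azure which Kubernetes versions and VM sizes are available in your chosen region, for example:
az aks get-versions --location <eastus> --output table
az vm list-sizes --location <eastus> --output table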
DNS
| Variable | Default | Description |
|---|---|---|
| USE_AZURE_DNS | true | Use Azure DNS for automated DNS management |
If USE_AZURE_DNS is true, the installer automatically creates DNS records for ASCENDER_HOSTNAME (and LEDGER_HOSTNAME if Ascender Pro is installed).
If set to false, you must manually create A records with your DNS provider pointing those hostnames to the Azure load balancers created by the installer.
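One way to find the external address to point those A records at, once the ingress has been assigned one, is a jsonpath query against the ingress status (repeat with -n ledger if Ascender Pro is installed):
kubectl get ingress -n ascender -o jsonpath='{.items[*].status.loadBalancer.ingress[*].ip}'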