Installing Ascender on EKS
Overview
This guide covers the prerequisites and configuration specific to deploying Ascender on Amazon EKS. For common configuration variables (Ascender application, Ascender Pro, PostgreSQL, TLS), see the main install guide.
Prerequisites
AWS CLI credentials must be configured as root before running the installer. Complete the Authenticating with AWS CLI section below before editing any configuration.
In addition to the general prerequisites, EKS installations require:
- Rocky Linux version 9
- The AWS CLI installed at /usr/local/bin/aws on the machine running the installer
- An AWS IAM user or role with permissions to create and manage EKS clusters, VPCs, IAM roles, and load balancers
- An SSL certificate in AWS Certificate Manager (ACM). The certificate must cover all hostnames you plan to use: ASCENDER_HOSTNAME, LEDGER_HOSTNAME if installing Ascender Pro, and ASCENDER_MESH_HOSTNAME if using mesh ingress. A wildcard certificate like *.example.com is recommended (see the example after this list).
eksctl is only required for private clusters (EKS_PUBLIC: false). The installer will set up eksctl automatically if needed.
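If you do not yet have a certificate in ACM, you can request a wildcard certificate with DNS validation. This is a sketch; the domain and region are placeholders for your environment:

```bash
# Request a wildcard certificate; ACM returns the certificate ARN
aws acm request-certificate \
  --domain-name "*.example.com" \
  --validation-method DNS \
  --region us-east-1
```

DNS validation requires creating the CNAME record that ACM returns; the certificate cannot be attached to the load balancer until validation completes.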
Authenticating with AWS CLI
The installer runs with sudo, so credentials must be available to root. Configure AWS CLI as root:
sudo aws configure
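The command prompts interactively. A typical session looks like this; the values are placeholders:

```
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json
```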
Verify identity and region before running the installer:
sudo aws sts get-caller-identity
sudo aws configure get region
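If the credentials are valid, get-caller-identity returns the account and caller ARN. Output along these lines (values are illustrative) confirms root is using the identity you intend the installer to run as:

```json
{
    "UserId": "AIDAEXAMPLEID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/ascenderuser"
}
```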
What the Installer Deploys
When EKS_CLUSTER_STATUS is provision, the installer creates:
- VPC, subnets, internet gateway, and security groups
- IAM roles for the cluster and node group
- EKS cluster and managed node group
- Cert Manager
- AWS Load Balancer Controller (provisions ALBs for ingress)
- EBS CSI Driver with associated IAM service account
- Storage class (gp2, gp3, or io2) set as the cluster default
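After provisioning, you can spot-check these components. A minimal sketch, assuming the controllers run in their conventional namespaces (kube-system for the AWS Load Balancer Controller and EBS CSI driver, cert-manager for Cert Manager):

```bash
# Controllers managed by the installer
kubectl get pods -n kube-system | grep -E 'aws-load-balancer|ebs-csi'
kubectl get pods -n cert-manager

# The EKS_DEFAULT_STORAGE_CLASS entry should be marked "(default)"
kubectl get storageclass
```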
Example Configuration
The configuration below is a starting point. Cluster name, region, hostnames, and credentials are all specific to your environment.
If you do not have the ascender-install directory, clone it:
git clone https://github.com/ctrliq/ascender-install.git
If you already have it, pull the latest changes:
cd ascender-install
git pull
Then create or edit the configuration file:
vim custom.config.yml
```yaml
k8s_platform: eks
k8s_lb_protocol: https
USE_ROUTE_53: yes
EKS_CLUSTER_NAME: <ascender-prod>
EKS_CLUSTER_STATUS: provision
EKS_CLUSTER_REGION: <us-east-1>
EKS_K8S_VERSION: "1.35"
EKS_PUBLIC: false
EKS_INSTANCE_TYPE: t3.large
EKS_NUM_WORKER_NODES: 3
EKS_MIN_WORKER_NODES: 2
EKS_MAX_WORKER_NODES: 6
EKS_WORKER_VOLUME_SIZE: 100
EKS_CLUSTER_CIDR: "10.10.0.0/16"
EKS_NUM_SUBNETS: 3
EKS_SUBNET_SIZE: 26
EKS_INTERNET_GATEWAY: true
EKS_ALB_INBOUND_CIDRS: ["0.0.0.0/0"]
EKS_SSL_POLICY: "ELBSecurityPolicy-TLS13-1-2-Res-2021-06"
EKS_DEFAULT_STORAGE_CLASS: gp3
EKS_SSL_CERT: "<arn:aws:acm:us-east-1:123456789012:certificate/abc-def-123>"
ASCENDER_HOSTNAME: <ascender.example.com>
ASCENDER_DOMAIN: <example.com>
ASCENDER_NAMESPACE: ascender
ASCENDER_ADMIN_USER: admin
ASCENDER_ADMIN_PASSWORD: "<change-me>"
ASCENDER_VERSION: 25.3.5
ASCENDER_OPERATOR_VERSION: 2.19.4
ascender_garbage_collect_secrets: true
LEDGER_INSTALL: true
LEDGER_HOSTNAME: <ledger.example.com>
LEDGER_NAMESPACE: ledger
LEDGER_REGISTRY:
  BASE: depot.ciq.com
  USERNAME: <your-depot-username>
  PASSWORD: <your-depot-token>
LEDGER_ADMIN_PASSWORD: "<change-me>"
LEDGER_DB_PASSWORD: "<change-me>"
LEDGER_VERSION: latest
LEDGER_WEB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-web
LEDGER_PARSER_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-parser
LEDGER_DB_IMAGE: depot.ciq.com/ascender-ledger-pro/ascender-ledger-pro-images/ledger-db
```
For all available configuration options, refer to default.config.yml in the repository.
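Before running the installer, a quick syntax check can catch YAML mistakes early. A minimal sketch, assuming Python 3 with PyYAML is available (this is not part of the installer):

```bash
python3 -c "import yaml; yaml.safe_load(open('custom.config.yml')); print('OK')"
```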
Running the Installer
From the ascender-install directory:
sudo ./setup.sh
If you need to generate the IAM policy documents for your AWS user first:
sudo ./setup.sh -p
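The -p flag only writes the policy JSON files; attaching them to your IAM user is a separate step. A sketch, assuming a generated file named eks-policy.json (take the actual filenames from the installer output) and the default EKS_USER value:

```bash
# Attach a generated policy document to the IAM user as an inline policy
aws iam put-user-policy \
  --user-name ascenderuser \
  --policy-name ascender-eks-policy \
  --policy-document file://eks-policy.json
```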
Verifying the Installation
Confirm all pods are running:
kubectl get pods -n ascender
All pods should reach Running or Completed status. If Ascender Pro was installed:
kubectl get pods -n ledger
Check the load balancer was created and has a DNS name:
kubectl get ingress -n ascender
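The ADDRESS column should show the ALB DNS name; output along these lines (names are illustrative) indicates the load balancer is up:

```
NAME       CLASS   HOSTS                  ADDRESS                                           PORTS   AGE
ascender   alb     ascender.example.com   k8s-ascender-xxxxxx.us-east-1.elb.amazonaws.com   80      5m
```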
Connecting to the Web UI
If USE_ROUTE_53: yes, DNS is configured automatically and Ascender is accessible at https://ASCENDER_HOSTNAME once DNS propagates.
If USE_ROUTE_53: no, create DNS records pointing your hostnames to the ALB DNS names shown in the ingress output above. For subdomains like ascender.example.com, create a CNAME record. For zone apex domains like example.com, use your DNS provider's ALIAS or ANAME record type. CNAME is not valid at the zone apex.
USE_ROUTE_53: yes creates CNAME records and does not support root/apex domains. If you need to resolve at the zone apex, set USE_ROUTE_53: no and create a Route 53 alias A record manually.
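For the apex case, a sketch of the Route 53 call; the hosted zone ID, domain, and ALB DNS name are placeholders for your environment. Note that AliasTarget.HostedZoneId is the ALB's regional zone ID, not your own hosted zone's:

```bash
aws route53 change-resource-record-sets \
  --hosted-zone-id <your-hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<alb-regional-zone-id>",
          "DNSName": "<alb-dns-name>",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```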
Log in with ASCENDER_ADMIN_USER and ASCENDER_ADMIN_PASSWORD.
EKS Configuration Reference
Add these variables to your custom.config.yml alongside the common configuration.
Cluster Settings
| Variable | Default | Description |
|---|---|---|
| EKS_CLUSTER_NAME | ascender-eks-cluster | Name of the EKS cluster |
| EKS_CLUSTER_STATUS | no_action | Cluster lifecycle action. See below. |
| EKS_CLUSTER_REGION | us-east-1 | AWS region for the cluster |
| EKS_PUBLIC | false | Controls cluster network exposure. When false, nodes are in private subnets and the load balancer is internal (only reachable within the VPC). When true, nodes are assigned public IPs and the load balancer is internet-facing. |
| EKS_USER | ascenderuser | Name used to label the IAM policy artifacts generated by ./setup.sh -p. Running -p creates the policy JSON files; it does not create the IAM user itself. |
EKS_CLUSTER_STATUS controls what the installer does with the cluster:
- provision: Create a new EKS cluster, then install Ascender
- configure: Use an existing cluster by name and configure it with required policies
- no_action: Use an existing cluster as-is, with no changes before installing Ascender
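For example, to run the installer against a cluster that already exists, a minimal snippet (the cluster name is a placeholder):

```yaml
k8s_platform: eks
EKS_CLUSTER_STATUS: configure
EKS_CLUSTER_NAME: <existing-cluster>
EKS_CLUSTER_REGION: us-east-1
```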
Networking (required when provisioning)
| Variable | Default | Description |
|---|---|---|
| EKS_CLUSTER_CIDR | 10.10.0.0/16 | VPC CIDR block |
| EKS_NUM_SUBNETS | 3 | Number of subnets to create |
| EKS_SUBNET_SIZE | 26 | CIDR prefix length for each subnet |
| EKS_INTERNET_GATEWAY | true | Whether to create an internet gateway for the VPC. Required for outbound traffic from private nodes and for the ALB. Only set to false for fully private deployments with existing network egress. |
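For sizing reference: a /26 subnet contains 64 addresses, of which AWS reserves 5 per subnet, leaving 59 usable; the default 10.10.0.0/16 VPC (65,536 addresses) has room for 1,024 such subnets, so the three-subnet default uses only a small fraction of the block.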
Node Pool (required when provisioning)
| Variable | Default | Description |
|---|---|---|
| EKS_K8S_VERSION | 1.35 | Kubernetes version. Check currently supported versions before deploying. |
| EKS_INSTANCE_TYPE | t3.large | Worker node instance type |
| EKS_NUM_WORKER_NODES | 3 | Desired number of worker nodes |
| EKS_MIN_WORKER_NODES | 2 | Minimum worker nodes (auto-scaling) |
| EKS_MAX_WORKER_NODES | 6 | Maximum worker nodes (auto-scaling) |
| EKS_WORKER_VOLUME_SIZE | 100 | EBS volume size per node in GB |
Load Balancer
| Variable | Default | Description |
|---|---|---|
| EKS_SSL_CERT | (required) | ACM certificate ARN for HTTPS. Must cover both ASCENDER_HOSTNAME and LEDGER_HOSTNAME if installing Ascender Pro. A wildcard certificate like *.example.com is recommended. |
| EKS_SSL_POLICY | ELBSecurityPolicy-TLS13-1-2-Res-2021-06 | ALB TLS security policy |
| EKS_ALB_INBOUND_CIDRS | ["0.0.0.0/0"] | CIDR blocks allowed to access the load balancer |
| EKS_ALB_SECURITY_GROUPS | (unset) | Custom security groups for the ALB. If unset, a security group is created automatically. |
Storage
| Variable | Default | Description |
|---|---|---|
| EKS_DEFAULT_STORAGE_CLASS | gp3 | Default storage class. Options: gp2, gp3, io2. If unset, no default storage class is created. |
DNS
| Variable | Default | Description |
|---|---|---|
| USE_ROUTE_53 | yes | When yes, the installer automatically creates DNS records in Route 53. When no, DNS must be managed manually. |