Restore

Overview

Version must match

The Ascender version running on the cluster must match the version the backup was taken from. Restoring across versions is not supported.

Ascender can be restored from a backup previously created with ./setup.sh -b. The restore process replaces the current PostgreSQL database with the contents of a local backup, rolling the deployment back to the state captured in that backup.

Restore works on any running Ascender deployment of the matching version, including a fresh install. You do not need to restore onto the same instance the backup was taken from.

Restore is only supported for deployments using the built-in (containerized) PostgreSQL database. If ASCENDER_PGSQL_HOST is set in your configuration, the restore will print a message and exit without making any changes. For external PostgreSQL, restore the database using your own tooling such as pg_restore.

Prerequisites

  • A running Ascender deployment with the operator, task, web, and PostgreSQL pods present. This can be the original instance or a fresh install of the same Ascender version.
  • Rocky Linux 8 or 9
  • A local backup in ascender_install_artifacts/backups/current/ (created by ./setup.sh -b)
  • kubectl access to the cluster with a valid kubeconfig at ~/.kube/config
  • The ascender-install directory with your custom.config.yml from the original install

If you no longer have the ascender-install directory, clone it:

git clone https://github.com/ctrliq/ascender-install.git

If you already have it, pull the latest changes before running:

cd ascender-install
git pull

Copy your original custom.config.yml into the directory before running the restore.
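Before running the restore, you can sanity-check that all three backup files are in place. A minimal sketch — the check_backup_files helper is hypothetical, not part of ascender-install:

```shell
# check_backup_files DIR -- hypothetical helper; verifies the three files
# that ./setup.sh -b writes (tower.db, secrets.yml, awx_object) are present
# in DIR before attempting a restore.
check_backup_files() {
  local dir="$1" missing=0 f
  for f in tower.db secrets.yml awx_object; do
    if [ ! -e "$dir/$f" ]; then
      echo "missing: $dir/$f"
      missing=1
    fi
  done
  return "$missing"
}

# Example:
# check_backup_files ascender_install_artifacts/backups/current && ./setup.sh -r
```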

Danger

The restore process deletes the existing PostgreSQL PVC and replaces the database contents entirely. Any data on the running deployment will be lost. Verify that you have the correct backup before proceeding.

Running a Restore

From the ascender-install directory, run:

./setup.sh -r

The process takes several minutes. Ascender will be unavailable during the restore.

What Happens During a Restore

  1. Cleans up any previous restore resources. Any existing AWXRestore or AWXBackup resources and the temp-copy-pod are removed to ensure a clean starting state.

  2. Provisions a backup PVC. The installer creates an AWXBackup resource against the running deployment. This provisions the PVC that the operator uses as a staging area for the restore. The backup data captured here is not used. This step exists only to create the PVC.

  3. Copies local backup files into the cluster. A temporary pod mounts the backup PVC. The three backup files (tower.db, secrets.yml, awx_object) are copied from ascender_install_artifacts/backups/current/ into the PVC with correct ownership and permissions.

  4. Restores secrets. The PostgreSQL configuration secret from the backup is applied to the cluster before the database is rebuilt.

  5. Rebuilds the database. The operator, task, and web deployments are scaled to zero, then the PostgreSQL StatefulSet is scaled down. The existing PostgreSQL PVC is deleted. PostgreSQL is then scaled back up with a fresh empty database.

  6. Restores from backup. An AWXRestore resource is applied pointing the operator at the backup files on the PVC. The operator restores the database dump, secrets, and deployment configuration.

  7. Brings services back up. The operator is scaled back up and waits for the PostgreSQL pod and web pods to become ready. The AWXRestore resource is then removed. Ascender is accessible again with the restored data.
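The AWXRestore resource applied in step 6 has roughly the following shape. This manifest is illustrative only — the installer generates and removes it for you, the metadata and PVC names here are made up, and the exact spec fields follow the AWX operator's AWXRestore CRD and may vary by operator version:

```yaml
# Illustrative only -- the installer creates and removes this resource itself.
apiVersion: awx.ansible.com/v1beta1
kind: AWXRestore
metadata:
  name: restore-from-local-backup      # hypothetical name
  namespace: ascender
spec:
  deployment_name: ascender-app        # name of the Ascender custom resource (assumed)
  backup_source: PVC                   # restore from an existing backup PVC
  backup_pvc: ascender-backup-claim    # the PVC staged in steps 2-3 (assumed name)
  backup_dir: /backups                 # directory holding tower.db, secrets.yml, awx_object
```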

Verifying a Restore

After the restore completes, confirm that all pods are running:

kubectl get pods -n ascender

Log in to the Ascender web UI and verify that your data (organizations, credentials, job templates, inventories) matches the state at the time of the backup.
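If you want to script the pod check, a small sketch that parses the kubectl output can help. The all_pods_running helper below is hypothetical, not part of ascender-install; it reads `kubectl get pods` output on stdin and fails if any pod is not Running or Completed:

```shell
# all_pods_running -- hypothetical helper; reads `kubectl get pods` output on
# stdin and exits non-zero if any pod's STATUS column is not Running/Completed.
all_pods_running() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { bad = 1; print "not ready: " $1 " (" $3 ")" }
       END { exit bad }'
}

# Example:
# kubectl get pods -n ascender | all_pods_running && echo "all pods ready"
```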

Common Restore Scenarios

Restoring After Data Loss

If Ascender is still running but data needs to be rolled back:

  1. Confirm the backup you want to restore is in ascender_install_artifacts/backups/current/
  2. Run ./setup.sh -r

Restoring onto a Fresh Install

If the original cluster is lost or you are moving to new infrastructure:

  1. Run a fresh install of the same Ascender version the backup was taken from
  2. Copy your backup directory to ascender_install_artifacts/backups/current/ on the new machine
  3. Run ./setup.sh -r

Restoring from a Timestamped Backup

The backups/current/ directory is a symlink to the most recent backup. To restore from an older timestamped backup, update the symlink:

cd ascender_install_artifacts/backups
rm -f current
ln -s backup-20260309T214902 current

Then run ./setup.sh -r.
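If you switch backups often, the symlink steps can be wrapped in a small helper that refuses to repoint current at a backup that does not exist. The use_backup function is a hypothetical convenience wrapper, not part of ascender-install:

```shell
# use_backup DIR NAME -- hypothetical wrapper around the manual symlink steps
# above: points DIR/current at DIR/NAME, refusing if NAME does not exist.
use_backup() {
  local dir="$1" name="$2"
  if [ ! -d "$dir/$name" ]; then
    echo "no such backup: $dir/$name" >&2
    return 1
  fi
  ln -sfn "$name" "$dir/current"   # -f -n: replace the existing symlink atomically
  echo "current -> $name"
}

# Example:
# use_backup ascender_install_artifacts/backups backup-20260309T214902
# ./setup.sh -r
```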

Limitations

  • External PostgreSQL is not supported. If ASCENDER_PGSQL_HOST is set in your configuration, the restore will exit without making any changes. Restore the database using your own tooling such as pg_restore.
  • Versions must match. The running Ascender version must match the version the backup was taken from.
  • Downtime is required. Ascender is fully unavailable during the restore. Plan for a maintenance window.
  • Backup and restore must use the same namespace. The restore expects the deployment to be in the namespace specified by ASCENDER_NAMESPACE in your configuration.