@Boilerplate4u
Last active April 22, 2026 13:33
How to set up a self-hosted OpenWrt Attended Sysupgrade (ASU) server

OpenWrt ASU – Installation Guide (Podman)

Step-by-step guide for setting up a self-hosted OpenWrt Attended Sysupgrade (ASU) server on Linux using Podman and podman-compose. Podman runs on Linux, macOS and Windows, so you can host the ASU server on whatever machine you have available, even your own laptop.

Tested on: Alpine Linux 3.22, rootful Podman.


How it works

ASU does not compile anything. Instead it downloads pre-built packages from the official OpenWrt CDN and uses the ImageBuilder tool to assemble a ready-to-flash firmware image for your specific router model. Builds typically complete in under a minute, making the server lightweight enough to run on a laptop or a small home server.

It is a drop-in replacement for the official sysupgrade.openwrt.org server. The only change needed on the router is pointing LuCI to your own server URL instead of the default one.
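
For reference, a build request to the API has roughly this shape. The field names below follow the public sysupgrade.openwrt.org API; the version, target, profile and package list are placeholders for your own device (look them up under /api/v1/overview):

```shell
# Hypothetical build request -- substitute values for your own router.
cat > /tmp/asu-request.json << 'EOF'
{
  "version": "24.10.0",
  "target": "ath79/generic",
  "profile": "tplink_archer-c7-v2",
  "packages": ["luci", "luci-app-attendedsysupgrade"]
}
EOF

# Sanity-check the JSON before submitting it:
python3 -m json.tool /tmp/asu-request.json

# Submit it to your own server (the reply contains a request hash to poll):
# curl -s -X POST -H "Content-Type: application/json" \
#      -d @/tmp/asu-request.json http://<server-ip>:8080/api/v1/build
```

LuCI builds this request for you; crafting it by hand is only needed for scripting or debugging.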


Prerequisites

  • Linux server with root access
  • Internet access (the server downloads ImageBuilder containers and packages from the OpenWrt CDN)
  • At least 10 GB free disk space (ImageBuilder containers are ~500 MB each and are cached per target/version)
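
A quick way to check the disk requirement up front. Rootful Podman stores images under /var/lib/containers by default; the fallback to / covers systems where that directory does not exist yet:

```shell
# Show free space on the filesystem that will hold container images.
df -h /var/lib/containers 2>/dev/null || df -h /
```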

Installing dependencies

  • podman – container runtime replacing Docker, supports rootful and rootless modes
  • podman-compose – manages multi-container stacks via compose files
  • git – required to clone the ASU source code
  • python3 – required internally by the ASU server
  • curl – used to test and verify API responses
apk update
apk add podman podman-compose git python3 curl

Enable and start required services:

rc-update add cgroups default
rc-service cgroups start
rc-update add podman default
rc-service podman start

Verify the installation

podman --version
podman-compose --version
git --version

All three commands should return a version number without errors.


1. Configure Podman networking (nftables)

Podman defaults to iptables for network management. If the system already runs nftables (e.g. alongside Docker or an existing firewall stack), the two will conflict. Switch Podman's firewall driver:

mkdir -p /etc/containers
cat >> /etc/containers/containers.conf << 'EOF'

[network]
firewall_driver = "nftables"
EOF

2. Fix the Podman OpenRC service (Alpine-specific)

The Podman OpenRC service attempts to start containers before the socket is ready, causing conflicts with podman-compose. Override start_post:

mkdir -p /etc/init.d/podman.d
cat > /etc/init.d/podman.d/override.sh << 'EOF'
start_post() {
    return 0
}
EOF

Enable and start the Podman service:

rc-update add podman default
rc-service podman start

3. Redis: vm.overcommit_memory

With the default vm.overcommit_memory = 0, Linux uses a heuristic and refuses allocations it considers excessive. Redis forks for background saves, and the child momentarily appears to need as much memory as the parent, even though copy-on-write keeps actual usage far lower. Without this setting Redis logs a warning at startup and background saves risk failing.

Add to sysctl and apply immediately:

echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -w vm.overcommit_memory=1
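
To confirm the change took effect, the value can be read back directly from /proc:

```shell
# Prints 1 once the sysctl is applied (0 on an untouched system).
cat /proc/sys/vm/overcommit_memory
```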

4. Clone the ASU source code

git clone https://github.com/openwrt/asu.git /opt/asu
cd /opt/asu

5. Change network mode from pasta to bridge

ASU defaults to network_mode="pasta", which only works in rootless mode. Switch to bridge for rootful Podman:

sed -i 's/network_mode="pasta"/network_mode="bridge"/' /opt/asu/asu/build.py

Verify:

grep -n "network_mode" /opt/asu/asu/build.py
# Expected: 201:        network_mode="bridge",

6. Check that port 8080 is available for ASU

ASU uses Caddy as a reverse proxy listening on port 8080, and the compose file below also binds the API server to 127.0.0.1:8000. Before configuring the stack, verify that neither port is already in use:

ss -tlnp | grep -E ':(8000|8080) '

Empty output means the ports are free. If something shows up, either stop the conflicting service or change the port in Caddyfile and podman-compose.yml to something else, e.g. 8088.


7. Configure podman-compose.yml

The default configuration binds the ASU server to 127.0.0.1:8000 and lacks a reverse proxy. Replace podman-compose.yml with the following:

cat > /opt/asu/podman-compose.yml << 'EOF'
version: "3"

services:
  redis:
    image: redis/redis-stack-server
    ports:
      - "127.0.0.1:6379:6379"

  server:
    image: docker.io/openwrt/asu:latest
    command: uv run uvicorn --host 0.0.0.0 'asu.main:app'
    ports:
      - "127.0.0.1:8000:8000"
    depends_on:
      - redis

  worker:
    image: docker.io/openwrt/asu:latest
    command: uv run rqworker --url redis://redis default
    depends_on:
      - redis

  caddy:
    image: caddy:latest
    ports:
      - "0.0.0.0:8080:8080"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    depends_on:
      - server
EOF

8. Create the Caddyfile

Caddy acts as a reverse proxy and handles:

  • CORS headers (required by LuCI)
  • OPTIONS preflight responses
  • Redirect from /api/overview to /api/v1/overview
cat > /opt/asu/Caddyfile << 'EOF'
:8080 {
    @options method OPTIONS
    handle @options {
        header Access-Control-Allow-Origin "*"
        header Access-Control-Allow-Methods "GET, POST, OPTIONS"
        header Access-Control-Allow-Headers "Content-Type"
        respond 204
    }

    handle /api/overview {
        redir /api/v1/overview 301
    }

    reverse_proxy server:8000 {
        header_up Host {host}
    }

    header Access-Control-Allow-Origin "*"
    header Access-Control-Allow-Methods "GET, POST, OPTIONS"
    header Access-Control-Allow-Headers "Content-Type"
}
EOF
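
Once the stack is running (step 10), the CORS preflight handling can be spot-checked with curl; if the containers are not up yet, the fallback message is printed instead:

```shell
# Expect an HTTP 204 response with Access-Control-Allow-Origin: *
# in the headers once the stack is up.
curl -s -i --max-time 5 -X OPTIONS http://127.0.0.1:8080/api/v1/overview \
  || echo "server not reachable"
```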

9. Build the ASU image

Always build with --no-cache to ensure the patch in asu/build.py is included in the image:

cd /opt/asu
podman build --no-cache -t docker.io/openwrt/asu:latest -f Containerfile .

The build process takes a few minutes and downloads Python packages from PyPI.


10. Start the stack

cd /opt/asu
podman-compose up -d

Verify that all containers are running:

podman ps

Expected result: four containers with status Up: asu_redis_1, asu_server_1, asu_worker_1, asu_caddy_1.


11. Verify the installation

curl -sL http://127.0.0.1:8000/api/v1/overview | python3 -m json.tool | head -20

The response should contain latest, branches and a list of OpenWrt versions.

Also test via the Caddy proxy:

curl -s http://<server-ip>:8080/ | grep "Sysupgrade Server"

12. Configure LuCI on the router

In LuCI: System → Attended Sysupgrade → Server URL:

http://<server-ip>:8080

Click Search for upgrades and verify that search and build work correctly. The worker automatically downloads the correct ImageBuilder container on the first build for a new target/version combination.
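
The same setting can also be applied from the router's shell instead of LuCI. The config path below is an assumption based on the luci-app-attendedsysupgrade package; verify it with `uci show attendedsysupgrade` on your router before committing:

```shell
# On the router: point attended sysupgrade at your own server.
uci set attendedsysupgrade.server.url='http://<server-ip>:8080'
uci commit attendedsysupgrade
```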


Managing the service

Action                            Command
Start                             podman-compose start
Stop (keeps containers)           podman-compose stop
Restart                           podman-compose restart
Tear down and remove containers   podman-compose down
Show logs                         podman-compose logs -f
Show logs for one container       podman logs -f asu_worker_1

Auto-start on boot

rc-update add podman default

The Podman service will automatically start containers that have restart: unless-stopped on system boot. Add to podman-compose.yml as needed:

    restart: unless-stopped

Troubleshooting

Pasta error during build:

pasta networking is only supported for rootless mode

Verify that the patch in step 5 is correct and that the image was rebuilt with --no-cache.

CORS error in LuCI: Verify that the Caddy proxy is running and that the LuCI URL points to port 8080 (Caddy) and not 8000 (the ASU server directly).

nftables conflicts on startup: Check firewall_driver = "nftables" in /etc/containers/containers.conf.
