Aussom-Server ships as a Docker image for production deployments. This guide walks through two lifecycle stages: installing it on a target host, and upgrading an existing install to a new version.
If you are doing local development, the plain JAR install
(INSTALL-debian.md or INSTALL-other.md) is simpler and the
container path adds nothing for you. Reach for the container when
you want a clean upgrade story, an isolated runtime, and a
predictable layout on a real server.
| Container path | What lives there |
|---|---|
| /opt/aussom-server/aussom-server.jar | The fat JAR. Replaced on every upgrade. |
| /opt/aussom-server/entrypoint.sh | Dispatches start (default) or cli for one-shot ops. |
| /opt/aussom-server/cli.sh | Carries the same env, config path, and lib classpath as the running server. |
| /var/aussom-server/config | Mount point for config.yaml. Read-only is fine. |
| /var/aussom-server/apps | Mount point for applications.yaml, app dirs, and the webhook script. |
| /var/aussom-server/lib | Mount point for runtime JARs the image cannot ship (JDBC drivers, etc.). |
| /var/log/aussom-server | Mount point for log files. Standard Linux log path. |
The container runs as the host user that ran install.sh. That UID
and GID are captured at install time and written into /srv/aussom/.env
as AUSSOM_UID and AUSSOM_GID. The compose file picks them up via
user: "${AUSSOM_UID}:${AUSSOM_GID}".
The practical effect: every file the server reads or writes - your apps
under /srv/aussom/apps/, log files under /var/log/aussom-server/,
applications.yaml, JDBC drivers in /srv/aussom/lib/ - stays owned by
the same user on the host and in the container. You can edit them with
your own login shell, no chown dance, and webhook scripts that do
git pull against an app's working tree run as the same user that owns
the tree.
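For orientation, here is a minimal sketch of the compose stanza this guide describes. The service name, port mappings, and env_file line are assumptions on my part; your generated /srv/aussom/docker-compose.yml is the source of truth:

```yaml
# docker-compose.yml (sketch, not the shipped file)
services:
  aussom-server:
    image: aussom-server:0.11.2
    user: "${AUSSOM_UID}:${AUSSOM_GID}"   # substituted from .env next to this file
    env_file: .env                        # assumed: carries AUSSOMSERVER_KEY into the container
    ports:
      - "8081:8081"                       # app endpoints (assumed mapping)
      - "8091:8091"                       # admin API / health (assumed mapping)
    volumes:
      - /srv/aussom/config:/var/aussom-server/config:ro
      - /srv/aussom/apps:/var/aussom-server/apps
      - /srv/aussom/lib:/var/aussom-server/lib:ro
      - /var/log/aussom-server:/var/log/aussom-server
```

Compose reads .env from the compose file's directory automatically for the ${AUSSOM_UID}/${AUSSOM_GID} substitution, which is why the install script keeps both files side by side.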
If install.sh is run without sudo (or as root directly), it falls back
to the calling UID. You can also override the values explicitly by
passing AUSSOM_UID=... and AUSSOM_GID=... to install.sh.
The image still ships an aussom user (UID 1000) and falls back to it
if AUSSOM_UID/AUSSOM_GID are unset, so a bare docker run against
the image without compose still launches non-root.
HOME is set to /tmp inside the image, so tools like git that look
for a home directory work even when the runtime UID has no entry in
/etc/passwd (which is normal once compose overrides user:).
The launch command uses java -cp aussom-server.jar:/var/aussom-server/lib/* com.lehman.aussomserver.Main
(not -jar), so any *.jar you drop into the host's lib
directory is on the classpath at startup.
You need docker installed and running. Confirm with:
docker version
You also need the docker compose v2 plugin (note the space, not the
hyphenated docker-compose v1). Confirm with:
docker compose version
If you get 'compose' is not a docker command (or, later, unknown shorthand flag: 'd' in -d when install.sh runs), the
compose plugin is missing. On Debian / Ubuntu install one of:
sudo apt install docker-compose-plugin # Docker's official APT repo
# or
sudo apt install docker-compose-v2 # newer Ubuntu universe
Do not install the plain docker-compose package - that's
Compose v1, which is end-of-life and uses a different command name.
If neither package is in your repos, drop the plugin binary in by
hand:
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
-o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
Your login user must be in the docker group so the
aussom-server host wrapper (which calls docker exec) can reach
the daemon socket. Without this, every CLI op fails with
permission denied while trying to connect to the docker API at unix:///var/run/docker.sock. Add yourself before running
install.sh:
sudo usermod -aG docker "$USER"
newgrp docker # or log out and back in
docker version # should now succeed without sudo
Finally, you need sudo access to run install.sh itself (it writes
under /srv/aussom and /var/log/aussom-server).

After install, the host layout looks like this:

    /srv/aussom/
        config/
            config.yaml           (sample, with placeholder API key)
        apps/
            applications.yaml     (empty list at first)
            onCodeChange.sh       (sample webhook stub)
        lib/                      (empty - drop your JDBC drivers here)
        docker-compose.yml        (the running compose file)
        .env                      (mode 600, holds AUSSOMSERVER_KEY plus AUSSOM_UID and AUSSOM_GID)

    /var/log/aussom-server/
        aussom-server.system.log
        aussom-server.admin.log
        <appName>.log             (one per running app)
Everything under /srv/aussom/ and /var/log/aussom-server/ is owned
by the user that invoked install.sh (captured via SUDO_UID/
SUDO_GID). The container runs as that same user, so you never have to
chown files you copy in.
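A quick way to audit that ownership and the .env permissions, using only stat (the paths are the ones this guide creates):

```sh
# Print octal permissions and numeric owner UID of a path.
check_mode()  { stat -c '%a' "$1"; }
check_owner() { stat -c '%u' "$1"; }

# After install you'd expect:
#   check_mode  /srv/aussom/.env    -> 600
#   check_owner /srv/aussom/apps    -> the AUSSOM_UID recorded in /srv/aussom/.env
```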
sudo mkdir -p /opt/aussom-server-install
sudo unzip aussom-server-0.11.2-docker.zip -d /opt/aussom-server-install
cd /opt/aussom-server-install
sudo ./install.sh
This is idempotent. Re-running on an existing install never
overwrites your config, applications, or .env file - it only
creates what is missing.
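The "never overwrite" behavior boils down to a copy-if-missing pattern. A sketch (not the actual install.sh source):

```sh
# Copy a sample into place only when the destination doesn't exist yet.
install_if_missing() {
  src=$1; dst=$2
  [ -e "$dst" ] || cp "$src" "$dst"
}

# e.g. install_if_missing config.yaml /srv/aussom/config/config.yaml
```

Re-running is therefore safe: an existing config.yaml or .env is left untouched.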
What it does in order:
1. Load the bundled image tarball with docker load.
2. Capture the invoking user's UID/GID. Under sudo (the
   normal case) this comes from SUDO_UID/SUDO_GID, so you get the
   real user, not root. Override with AUSSOM_UID=... AUSSOM_GID=...
   in front of install.sh if you want a different identity.
3. Create /srv/aussom/{config,apps,lib} and /var/log/aussom-server,
   all owned by the captured UID/GID.
4. Copy the samples into place, skipping any file that already exists:
   - config.yaml -> /srv/aussom/config/config.yaml
   - applications.yaml -> /srv/aussom/apps/applications.yaml
   - sample-webhook.sh -> /srv/aussom/apps/onCodeChange.sh
   - docker-compose.yml.sample -> /srv/aussom/docker-compose.yml
5. If /srv/aussom/.env does not exist, generate a fresh encryption
   key and write it there with mode 600 along with AUSSOM_UID and
   AUSSOM_GID. If AUSSOMSERVER_KEY is already in the calling
   environment, that value is used instead. If .env already exists
   from an older install, missing AUSSOM_UID/AUSSOM_GID lines are
   appended.
6. Run docker compose up -d. The compose file's
   user: "${AUSSOM_UID}:${AUSSOM_GID}" directive picks up the values
   from .env so the container runs as you.
7. Poll http://127.0.0.1:8091/Admin/health for up to 30
   seconds. On a 200 with ok: true, install is complete.

Two thin shell wrappers ship in the unzipped directory:

- aussom-server - calls docker exec against the running container
  for CLI ops (-k, -e, -v, etc.).
- restart-aussom-server - calls docker restart against the
  running container.

Drop both on PATH:
sudo cp aussom-server /usr/local/bin/aussom-server
sudo cp restart-aussom-server /usr/local/bin/restart-aussom-server
sudo chmod +x /usr/local/bin/aussom-server /usr/local/bin/restart-aussom-server
aussom-server -v # confirm it works
If you run multiple Aussom-Server containers on the same host,
override the target with AUSSOM_CONTAINER in your environment.
Both wrappers honor it:
AUSSOM_CONTAINER=aussom-staging aussom-server -v
AUSSOM_CONTAINER=aussom-staging restart-aussom-server
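The wrapper itself is just a thin shim over docker exec. A hypothetical sketch (the shipped script may differ in detail; the entrypoint's cli dispatch is described earlier in this guide):

```sh
# aussom-server wrapper (sketch): forward all flags to the container's CLI mode.
aussom_server() {
  # Resolve the target container: env override, else the default name.
  container="${AUSSOM_CONTAINER:-aussom-server}"
  docker exec "$container" /opt/aussom-server/entrypoint.sh cli "$@"
}

# e.g. AUSSOM_CONTAINER=aussom-staging aussom_server -v
```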
The sample config ships with apiKey: "REPLACE_ME_WITH_A_GENERATED_KEY".
Generate a real one and paste it in:
aussom-server -k # prints a fresh key
sudo nano /srv/aussom/config/config.yaml # paste into admin.server.clients[0].apiKey
restart-aussom-server
Verify the new key:
curl -H "X-API-KEY: <the key you pasted>" http://127.0.0.1:8091/Admin/applications
The response is a JSON array of running apps (empty at first). A 401 means the key didn't match.
Each app is a directory under /srv/aussom/apps/ with a <name>.aus
source file. Copy your app over, then add an entry to
applications.yaml:
cp -r /path/to/myapp /srv/aussom/apps/myapp
nano /srv/aussom/apps/applications.yaml
If the app came from another user (a tarball you extracted as root, a
scp from a different account), make sure the tree is owned by the
same user install.sh recorded as AUSSOM_UID in /srv/aussom/.env.
Most of the time the simple cp above is enough because you copy the
files as the same login user that ran the install.
Add:
applications:
  - name: myapp
    appDirectory: myapp
    enabled: true
    hostApiEndpoint: true
    hostDocEndpoint: true
    hostResources: true
    publicHttpDirectory: public
    reloadOnFileChange: true
    logLevel: info
Restart:
restart-aussom-server
The new app is reachable at http://<host>:8081/myapp/.
Drop the JAR into /srv/aussom/lib/ and restart:
cp postgresql-42.7.0.jar /srv/aussom/lib/
restart-aussom-server
The container's launch command puts /var/aussom-server/lib/* on
the classpath at startup, so the driver is now visible to any app
that does include jdbc;. The directory is mounted read-only
inside the container.
The driver only needs to be readable by the runtime UID. As long as you
copy it in as the same user that ran install.sh (the user the
container runs as), no chown is needed.
The same path applies to anything else you'd put on the JVM classpath - third-party libraries, custom connector JARs, etc.
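If an app later can't see a JAR you dropped in, a readability check narrows it down fast. A small sketch (run it as your login user, which after a standard install is the same identity the container runs as):

```sh
# Report whether a classpath JAR exists and is readable.
check_jar() {
  if [ -r "$1" ]; then
    echo "ok: $1"
  else
    echo "missing/unreadable: $1"
    return 1
  fi
}

# e.g. check_jar /srv/aussom/lib/postgresql-42.7.0.jar
```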
/srv/aussom/apps/onCodeChange.sh was populated from the sample
during install. As shipped it just prints what it received. To make
it do something useful, edit it in place:
sudo nano /srv/aussom/apps/onCodeChange.sh
The typical pattern: git pull against an app's working tree and
then curl POST to the admin reload endpoint. The shipped sample
header documents every AUSSOM_WEBHOOK_* env var the server sets
before invoking the script. See also design/webhook-design-doc.md.
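That pattern, as a hedged sketch. The variable name AUSSOM_WEBHOOK_APP, the reload endpoint path, and the API-key plumbing below are illustrative guesses, not confirmed names - the shipped sample's header and the design doc have the real ones:

```sh
# onCodeChange.sh sketch: pull the app's working tree, then ask the admin
# API to reload the app. All names below are assumptions - verify against
# the shipped sample header before using.
on_code_change() {
  app="${AUSSOM_WEBHOOK_APP:?}"                     # hypothetical env var name
  git -C "/var/aussom-server/apps/$app" pull --ff-only
  curl -fsS -X POST -H "X-API-KEY: $ADMIN_API_KEY" \
    "http://127.0.0.1:8091/Admin/reload?app=$app"   # hypothetical endpoint
}
```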
Aussom-Server releases are hosted on the Aussom website as the same kind of zip you used for the initial install:
aussom-server-0.12.0-docker.zip
Download the new zip from the website and unzip it into the same
/opt/aussom-server-install directory you used for the initial
install (or a fresh one - the upgrade script doesn't care):
sudo unzip aussom-server-0.12.0-docker.zip -d /opt/aussom-server-install
cd /opt/aussom-server-install
sudo ./upgrade.sh 0.12.0
What the script does in order:
1. Checks for aussom-server-0.12.0.tar.gz next to itself. If the
   tarball is missing it errors out clearly and points at the
   Aussom website.
2. Runs docker load on the new tarball so the new tag exists locally.
3. Rewrites the image: line in /srv/aussom/docker-compose.yml
   in place, with a .bak backup. The edit is idempotent and only
   touches lines matching ^[[:space:]]*image:[[:space:]]*aussom-server:<tag>.
4. Runs docker compose up -d to recreate the container against the
   same volumes and env.
5. Polls http://127.0.0.1:8091/Admin/health for up to 30
   seconds. On success, prints "Upgrade complete." and exits 0.
   On timeout, prints the last response body and exits non-zero
   while leaving the container running so you can investigate.

All host-volume content stays put across the swap, so the following all survive without re-setup:

- the encryption key (/srv/aussom/.env)
- config.yaml and the admin client list
- your apps under /srv/aussom/apps/ and the entries in applications.yaml
- JDBC drivers in /srv/aussom/lib/
- logs in /var/log/aussom-server/

The only thing that moves on upgrade is the JAR inside the image.
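The compose-file edit can be reproduced by hand with sed if you ever need to; this sketch uses the same anchor regex the upgrade script matches, and -i.bak leaves the same backup:

```sh
# Point the compose file at a new image tag, keeping a .bak backup.
retag() {
  file=$1; tag=$2
  sed -i.bak -E "s|^([[:space:]]*image:[[:space:]]*aussom-server:).*|\1$tag|" "$file"
}

# e.g. retag /srv/aussom/docker-compose.yml 0.12.0
```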
The upgrade is a one-liner; the rollback is the same one-liner with the previous version:
sudo ./upgrade.sh 0.11.2
Provided you still have the old tarball (aussom-server-0.11.2.tar.gz)
in the unzipped install directory, docker load finds the image
already locally and the rollback completes in seconds.
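Both install.sh and upgrade.sh gate success on the same 30-second health poll. The wait loop amounts to something like this sketch:

```sh
# Poll a health URL until it reports ok:true, for up to $2 seconds.
wait_healthy() {
  url=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" 2>/dev/null | grep -q '"ok":true'; then
      return 0                              # healthy
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1                                  # timed out
}

# e.g. wait_healthy http://127.0.0.1:8091/Admin/health 30
```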
aussom-server -v # prints the new version banner
curl http://127.0.0.1:8091/Admin/health # returns version + uptime
aussom-server -v # version banner
aussom-server -k # generate a fresh encryption key
aussom-server -e "secret string" # encrypt a value with the running key
aussom-server -h # full flag list
The wrapper docker execs into the running container, so all flags
that need the live config and key (like -e) just work.
The host-side wrapper covers the common case:
restart-aussom-server # restart
For everything else, run compose from the install directory
/srv/aussom:
( cd /srv/aussom && sudo docker compose ps ) # status
( cd /srv/aussom && sudo docker compose down ) # stop
( cd /srv/aussom && sudo docker compose up -d ) # bring it back up
( cd /srv/aussom && sudo docker compose logs -f ) # tail container stdout/stderr
The server writes log files to the host log volume:
sudo tail -f /var/log/aussom-server/aussom-server.system.log
sudo tail -f /var/log/aussom-server/aussom-server.admin.log
sudo tail -f /var/log/aussom-server/myapp.log
Webhook deliveries appear in the admin log under a [webhook <id>]
prefix.
curl http://127.0.0.1:8091/Admin/health
Returns:
{"ok":true,"version":"0.11.2","startTime":"2026-04-26 04:32:52","uptimeMs":20618}
Public endpoint, no API key required. Used by the docker-compose healthcheck and external uptime monitors.
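If a script needs just one field from that JSON and the host has no jq, a sed extraction works. This sketch is keyed to the flat response shape shown above:

```sh
# Pull a quoted string field out of the flat health JSON (reads stdin).
health_field() {
  field=$1
  sed -n "s/.*\"$field\":\"\([^\"]*\)\".*/\1/p"
}

# e.g. curl -fsS http://127.0.0.1:8091/Admin/health | health_field version
```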
The defaults assume you can use /srv/aussom and
/var/log/aussom-server on the host. To override, set environment
variables before running install.sh:
sudo AUSSOM_HOST_ROOT=/data/aussom \
AUSSOM_LOG_DIR=/data/aussom-logs \
AUSSOM_COMPOSE_DIR=/data/aussom \
./install.sh
After install, edit the resulting docker-compose.yml so the
left-hand side of each volume mapping matches your host paths.
The right-hand side (the in-container path) must stay as
/var/aussom-server/... and /var/log/aussom-server.
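After that edit, the volume block would look something like this hypothetical fragment, matching the /data paths in the example above; the :ro flags follow the earlier read-only notes for config and lib:

```yaml
services:
  aussom-server:
    volumes:
      - /data/aussom/config:/var/aussom-server/config:ro
      - /data/aussom/apps:/var/aussom-server/apps
      - /data/aussom/lib:/var/aussom-server/lib:ro
      - /data/aussom-logs:/var/log/aussom-server
```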
Container won't start

Check docker logs aussom-server. The most common cause is a
missing or empty AUSSOMSERVER_KEY. The entrypoint exits non-zero
with a clear message in that case. Confirm /srv/aussom/.env
exists and contains AUSSOMSERVER_KEY=... (mode should be 600).
/Admin/health returns nothing

- docker port aussom-server - confirm port 8091 is mapped on the host.
- cat /srv/aussom/config/config.yaml - confirm admin.server.enabled: true.
- docker logs aussom-server - look for stack traces during admin init.

401 from the admin API

The admin API requires X-API-KEY for every endpoint except
/Admin/api (the OAS doc) and /Admin/health (the liveness
probe). Make sure you replaced the placeholder key in
config.yaml and that you're sending the new value in the request
header.
ClassNotFoundException for a JDBC driver

- Confirm the JAR exists at /srv/aussom/lib/<driver>.jar and is
  readable by the runtime UID. Cross-check with the values in
  /srv/aussom/.env (AUSSOM_UID / AUSSOM_GID).
- Restart after adding anything to lib/. The classpath
  is read at JVM start.
- docker exec aussom-server ls /var/aussom-server/lib should
  list the driver from inside the container.

Webhook script doesn't run

Tail /var/log/aussom-server/aussom-server.admin.log. Each
delivery is logged under a [webhook <id>] prefix; the script's
stdout (with stderr merged) appears there too.
Confirm the script file exists at the path in
admin.server.webhookScript and is executable.
Confirm the container has bash, git, curl, and ca-certificates
available - all four ship in the image by default. If your script
needs anything else, build a derived image:
FROM aussom-server:0.11.2
USER root
RUN apk add --no-cache jq python3
USER aussom
mvn not on PATH

You ran ./build-docker-image.sh with sudo. Don't
use sudo for the build. Add yourself to the docker group instead:
sudo usermod -aG docker "$USER"
newgrp docker
./build-docker-image.sh
docker compose pull fails on upgrade

Self-hosted releases live on the Aussom website, not on Docker Hub
(yet). upgrade.sh does NOT call docker compose pull. It calls
docker load on the local tarball, then docker compose up -d.
If you see a pull error, you're probably running docker compose up directly without first loading the new image - run
upgrade.sh <version> instead, or docker load -i aussom-server-<version>.tar.gz manually.