WebSocket Connector - Usage Guide

Aussom-Server has built-in WebSocket support. You write a function on your app class, mark it with @Websocket, and the server hands it a WsConn object every time a client connects to that route. From there your function sends frames, registers callbacks for incoming messages, and stays alive until the client closes (or you do).


What WebSockets are good for

WebSockets are a long-lived, bidirectional channel between a client (usually a browser) and your server. Once the connection is open, either side can send a message at any time without the overhead of opening a new HTTP request. Compared to plain HTTP polling:

Workload                                  HTTP polling               WebSocket
Push a notification to a client           client polls every N s     server pushes when ready
Stream live data (prices, telemetry)      many requests, lag         one connection, low lag
Multi-client chat or collaboration        each client polls          broadcast over open channels
Long-running computation status updates   client polls or SSE        server pushes progress

Typical use cases inside an Aussom-Server app:

  • Live dashboards and metric streams.
  • Chat or collaborative editing.
  • Notification fan-out (one event, many connected clients).
  • Game or simulation state updates.
  • Build/deploy status streams.

WebSockets are not a good fit when the client only ever asks once and waits for one response - that's plain HTTP. They also do not replace HTTP for cacheable read endpoints.


How it fits in an Aussom-Server app

A WebSocket route is just a function on your AppBase-derived class with the @Websocket annotation. The function takes one argument - a WsConn object - and is called once per new connection.

  • The route URL is /<appName>/<functionName>. So @Websocket public chat(ws) on app myapp is reachable at ws://host:port/myapp/chat.
  • The HTTP listener and the WebSocket listener share the same port. Aussom-Server sniffs the Upgrade: websocket header on every request and routes accordingly. You can mix HTTP routes and @Websocket routes on the same app.
  • A function marked @Websocket is not callable via HTTP. Aussom-Server returns 400 if a regular GET hits a WebSocket function. Likewise, an HTTP-only function (no @Websocket) returns close code 1008 if a client tries to upgrade.
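The route convention is mechanical enough to capture in a tiny client-side helper. This is an illustrative sketch, not part of Aussom-Server; wsRouteUrl and its parameters are names invented here:

```javascript
// Build the connect URL for an Aussom-Server WebSocket route from the
// /<appName>/<functionName> convention described above. Illustrative only.
function wsRouteUrl(host, port, appName, fnName) {
    return `ws://${host}:${port}/${appName}/${fnName}`;
}

console.log(wsRouteUrl('127.0.0.1', 8081, 'myapp', 'chat'));
// ws://127.0.0.1:8081/myapp/chat
```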

Quick start: an echo route

The smallest useful WebSocket route. It echoes every text frame back to the same client.

@Api(version = "1.0.0")
class echoapp : AppBase {
    @Websocket
    public echo(ws) {
        ctx = new echoCtx();
        ctx.attach(ws);
    }
}

/**
 * Per-connection helper. Stores the WsConn and registers a message
 * callback that replies on it.
 */
class echoCtx {
    public ws = null;

    public attach(ws) {
        this.ws = ws;
        ws.onMessage(::handleMsg);
    }

    public handleMsg(text) {
        this.ws.send("echo: " + text);
    }
}

Connect from a browser console:

const sock = new WebSocket('ws://127.0.0.1:8081/echoapp/echo');
sock.onopen    = () => sock.send('hello');
sock.onmessage = e  => console.log(e.data);   // "echo: hello"
sock.onclose   = () => console.log('closed');

That's the whole pattern. The route function is called once when the connection opens; everything after that is event callbacks invoked on the message queue.

Why the helper class?

Aussom's ::handleMsg callback syntax binds to the current this. If you registered ::handleMsg directly inside echo(ws), this would be the echoapp singleton - not your connection - and all sockets would share state. Creating a fresh per-connection ctx object means each connection's callback runs against its own this.ws and any other state you keep on the ctx. This is the recommended pattern for any non-trivial route.
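The same idea can be sketched in plain JavaScript for contrast. Ctx, conn, and onMessage here are invented stand-ins, not the Aussom API; the point is that each connection gets a fresh object, so callbacks close over per-connection state instead of a shared singleton:

```javascript
// Per-connection context object: the arrow function registered in
// attach() captures this Ctx instance, so each connection logs to
// its own array rather than shared app state.
class Ctx {
    constructor(id) { this.id = id; this.log = []; }
    attach(conn) { conn.onMessage = t => this.log.push(this.id + ':' + t); }
}

const connA = {};
const connB = {};
const a = new Ctx('A');
const b = new Ctx('B');
a.attach(connA);
b.attach(connB);

connA.onMessage('hi');
connB.onMessage('yo');
console.log(a.log); // ['A:hi']
console.log(b.log); // ['B:yo'] -- no state leaked between connections
```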


The WsConn API

Every @Websocket function receives one argument: a WsConn object that represents this single connection. The object's full API:

Method                                       Purpose
send(string Text)                            Send a text frame.
sendBytes(object Buf)                        Send a binary frame. Buf is an Aussom Buffer.
onMessage(callback Cb)                       Register a callback for incoming text frames. Signature: (text).
onBinary(callback Cb)                        Register a callback for incoming binary frames. Signature: (buffer).
onClose(callback Cb)                         Register a close callback. Signature: (code, reason).
onError(callback Cb)                         Register an error callback. Signature: (message).
close(int Code = 1000, string Reason = "")   Server-initiated close.
getReqPath()                                 Path from the original handshake (e.g. "/echoapp/echo").
getReqHeaders()                              Map of handshake headers (lower-cased keys).
getQueryString()                             Raw query string, or "".
getSrcAddress()                              Client IP address.

Each onX setter replaces any previously registered callback - if you call ws.onMessage(::a) and then ws.onMessage(::b), only b will fire after that. There's no built-in subscriber list.
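The replace-on-set behavior can be mimicked in a few lines of plain JavaScript. FakeConn is an invented stand-in, not the real WsConn; it exists only to show the semantics:

```javascript
// Each setter stores exactly one callback; calling it again overwrites
// the previous one. There is no subscriber list.
class FakeConn {
    onMessage(cb) { this.msgCb = cb; }
    deliver(text) { if (this.msgCb) this.msgCb(text); }
}

const conn = new FakeConn();
const seen = [];
conn.onMessage(t => seen.push('a:' + t));
conn.onMessage(t => seen.push('b:' + t)); // replaces the first callback
conn.deliver('hi');
console.log(seen); // ['b:hi'] -- only the last-registered callback fires
```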

Threading model

Receive callbacks (onMessage, onBinary, onClose, onError) for a single connection always run in order, one at a time, on Aussom-Server's worker thread pool. Two concurrent text frames on the same connection won't race against each other. Different connections run in parallel - so a slow handler on connection A doesn't block messages on connection B.

You don't need to add locks or synchronization inside your callback methods for state stored on the per-connection ctx. State shared across connections (a peer list, a chat room, a shared counter) does need protection - see the broadcast example below.


Pattern: per-connection state

A counter that ticks up for every text frame the client sends, replies with the running total, and emits a final summary on close.

@Api(version = "1.0.0")
class tickerapp : AppBase {
    @Websocket
    public ticker(ws) {
        ctx = new tickerCtx();
        ctx.attach(ws);
    }
}

class tickerCtx {
    public ws = null;
    public count = 0;

    public attach(ws) {
        this.ws = ws;
        ws.onMessage(::handleMsg);
        ws.onClose(::handleClose);
        ws.send("ready");
    }

    public handleMsg(text) {
        this.count += 1;
        this.ws.send("count:" + this.count);
    }

    public handleClose(code, reason) {
        // Best-effort summary frame. The channel may already be
        // closing; this can no-op silently if it does.
        this.ws.send("final:" + this.count);
    }
}

Each new connection gets its own tickerCtx, so the counter is scoped to the connection. No locking is needed because the worker pool serializes callbacks for a given connection.


Pattern: broadcasting to all connected clients

A simple chat room. The app keeps a list of connected WsConn objects and rebroadcasts every incoming message to all of them.

include thread;        // for the lock; comes from aussom-base

@Api(version = "1.0.0")
class chatapp : AppBase {
    private peers = [];
    private lock = new lock();

    @Websocket
    public room(ws) {
        ctx = new chatCtx();
        ctx.app = this;
        ctx.attach(ws);
    }

    /**
     * Add a connection to the room. Holds the lock while mutating
     * the peer list, since multiple connections could open at the
     * same time.
     */
    public addPeer(ws) {
        this.lock.acquire();
        try {
            this.peers @= ws;
        } finally {
            this.lock.release();
        }
    }

    /**
     * Remove a connection from the room.
     */
    public removePeer(ws) {
        this.lock.acquire();
        try {
            keep = [];
            for (p : this.peers) {
                if (p != ws) {
                    keep @= p;
                }
            }
            this.peers = keep;
        } finally {
            this.lock.release();
        }
    }

    /**
     * Send a frame to every currently-connected peer. Snapshots the
     * peer list under the lock, then sends outside it, so the lock is
     * held only for a quick copy.
     */
    public broadcast(text) {
        snapshot = [];
        this.lock.acquire();
        try {
            for (p : this.peers) {
                snapshot @= p;
            }
        } finally {
            this.lock.release();
        }
        for (p : snapshot) {
            try {
                p.send(text);
            } catch (e) {
                // Peer may have disconnected; ignore.
            }
        }
    }
}

/**
 * Per-connection state. Holds the peer's WsConn and a back-reference
 * to the app so the message handler can call broadcast.
 */
class chatCtx {
    public ws = null;
    public app = null;
    public name = "anon";

    public attach(ws) {
        this.ws = ws;
        ws.onMessage(::handleMsg);
        ws.onClose(::handleClose);
        this.app.addPeer(ws);
        this.app.broadcast("[*] someone joined");
    }

    public handleMsg(text) {
        // Treat the first message as the user's name.
        if (this.name == "anon" && text.startsWith("name:")) {
            this.name = text.substr(5, #text);
            this.app.broadcast("[*] " + this.name + " joined");
            return;
        }
        this.app.broadcast(this.name + ": " + text);
    }

    public handleClose(code, reason) {
        this.app.removePeer(this.ws);
        this.app.broadcast("[*] " + this.name + " left");
    }
}

Connect three browser tabs to ws://127.0.0.1:8081/chatapp/room, send name:alice from one, name:bob from another, then any message from any tab fans out to all three.

Why the lock

The peer list is shared across all connections, and connections run in parallel. Without the lock, two clients connecting at the same time could clobber each other's append. Aussom-Server's per-connection serialization only protects state stored on a single connection's ctx; cross-connection state (the peer list) is the app's responsibility.

The broadcast snapshots the list under the lock, then iterates and sends outside the lock. Even though send() queues frames without blocking, holding the lock across the whole loop would make every connect and disconnect wait on the longest broadcast; snapshotting keeps the critical section to a quick copy.
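The snapshot-then-send shape is easy to see in miniature. Here is a plain-JavaScript sketch with no real sockets or locks (peers, broadcast, and sendFn are invented names): mutating the live peer list mid-broadcast doesn't affect the copy already being iterated.

```javascript
// Copy first, then send against the copy. In the Aussom version the
// copy happens under the lock; the sends happen outside it.
const peers = ['a', 'b', 'c'];

function broadcast(text, sendFn) {
    const snapshot = peers.slice();          // the "under the lock" copy
    for (const p of snapshot) sendFn(p, text); // sends outside the lock
}

const delivered = [];
broadcast('hello', (peer, text) => {
    if (peer === 'a') peers.pop();           // simulate 'c' leaving mid-broadcast
    delivered.push(peer + '<-' + text);
});

console.log(delivered); // ['a<-hello', 'b<-hello', 'c<-hello']
console.log(peers);     // ['a', 'b'] -- live list changed, snapshot did not
```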


Pattern: binary frames

onBinary gives you an Aussom Buffer. Use the buffer API (getBuffer(), size(), getString()) to read; build a Buffer of your own to send back.

@Websocket
public bridge(ws) {
    ctx = new bridgeCtx();
    ctx.attach(ws);
}

class bridgeCtx {
    public ws = null;

    public attach(ws) {
        this.ws = ws;
        ws.onBinary(::handleBytes);
    }

    public handleBytes(buf) {
        // buf is an Aussom Buffer. Inspect or transform as needed.
        c.log("got " + buf.size() + " bytes");

        // Send a buffer back. For text-shaped binary, addString()
        // is the easiest construction path.
        out = new Buffer();
        out.newBuffer(64);
        out.addString("ack:");
        this.ws.sendBytes(out);
    }
}

A common pattern is to use a binary frame for one direction (say, file uploads or telemetry) and a text frame for the response. Both work on the same channel; they just trigger different callbacks.
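On the browser side, binary frames pair with TextEncoder/TextDecoder and sock.binaryType = 'arraybuffer'. A minimal sketch of just the encode and decode halves, with no live socket (the payload contents are invented):

```javascript
// Encode an upload payload; the resulting Uint8Array can be passed
// straight to sock.send(...) to produce a binary frame.
const payload = new TextEncoder().encode('telemetry:42');
console.log(payload.byteLength); // 12

// Decode a binary reply received with sock.binaryType = 'arraybuffer'.
// The incoming ArrayBuffer is faked here rather than read off a socket.
const reply = new TextEncoder().encode('ack:').buffer;
console.log(new TextDecoder().decode(reply)); // ack:
```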


Pattern: server-initiated close

Close the connection from the server side, with a code and an optional reason:

@Websocket
public timeboxed(ws) {
    ctx = new timeboxCtx();
    ctx.attach(ws);
}

class timeboxCtx {
    public ws = null;
    public start = 0;

    public attach(ws) {
        this.ws = ws;
        this.start = (new date()).getTime();
        ws.onMessage(::handleMsg);
    }

    public handleMsg(text) {
        elapsed = (new date()).getTime() - this.start;
        if (elapsed > 60000) {
            this.ws.send("session expired");
            this.ws.close(4001, "session timeout");
            return;
        }
        this.ws.send("you said: " + text);
    }
}

close() defaults to code 1000 (normal closure). For application-specific close reasons, pass a code in the 3000-4999 range; per RFC 6455, 4000-4999 is the private-use band, while 3000-3999 is intended for codes registered by libraries and protocols.
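A tiny validity check for application close codes, as an illustrative sketch (isAppCloseCode is a name invented here, not part of any API):

```javascript
// Accept the application band (3000-4999). Per RFC 6455, prefer
// 4000-4999 for purely private codes; 3000-3999 is for registered use.
function isAppCloseCode(code) {
    return Number.isInteger(code) && code >= 3000 && code <= 4999;
}

console.log(isAppCloseCode(4001)); // true  (the timeout example above)
console.log(isAppCloseCode(1000)); // false (reserved "normal closure" code)
```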


Reading the handshake

If you need the original headers, query string, or source IP from the connecting client, read them off the WsConn. Useful for auth tokens passed as headers or query params.

@Websocket
public secure(ws) {
    headers = ws.getReqHeaders();
    if (!headers.containsKey("x-api-key")) {
        ws.close(1008, "missing x-api-key");
        return;
    }
    if (headers["x-api-key"] != "expected-secret") {
        ws.close(1008, "bad x-api-key");
        return;
    }

    ctx = new secureCtx();
    ctx.attach(ws);
}

Note that browsers cannot set arbitrary request headers on a new WebSocket(...) call. Custom headers work fine for non-browser clients (Node, Python, mobile SDKs, etc.). For browser-based auth, pass tokens through the URL query string and read with getQueryString(), or use cookies.
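Putting that together for a browser client: build the URL with the token in the query string, and on the server parse what getQueryString() hands back. The token name and value are invented, and URLSearchParams stands in for whatever parsing you would actually write in Aussom:

```javascript
// Client side: embed the token in the connect URL, since custom
// headers aren't available on new WebSocket(...).
const token = 'abc 123';
const url = 'ws://127.0.0.1:8081/myapp/secure?token=' + encodeURIComponent(token);
console.log(url); // ws://127.0.0.1:8081/myapp/secure?token=abc%20123

// Server side, ws.getQueryString() would return "token=abc%20123".
// Parsing it, shown here with the WHATWG URLSearchParams API:
const params = new URLSearchParams('token=abc%20123');
console.log(params.get('token')); // abc 123
```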


Connecting from a client

Browser

const sock = new WebSocket('ws://127.0.0.1:8081/myapp/myroute');
sock.onopen    = () => sock.send('hi');
sock.onmessage = e  => console.log(e.data);
sock.onclose   = e  => console.log('closed', e.code, e.reason);
sock.onerror   = e  => console.error(e);

Node.js (22+)

Node has a built-in WebSocket client global; no library is needed:

const ws = new WebSocket('ws://127.0.0.1:8081/myapp/myroute');
ws.addEventListener('open', () => ws.send('hi'));
ws.addEventListener('message', e => console.log(e.data));

Command line (websocat, optional)

For quick poking, install websocat from your package manager:

websocat ws://127.0.0.1:8081/myapp/myroute

Then type messages and read replies in the same terminal.


Routing recap

Request                                                    Result
GET /myapp/somefn (no upgrade header)                      Routed as HTTP. @Websocket functions return 400.
GET /myapp/somefn with Upgrade: websocket                  Handshake. @Websocket functions are dispatched.
Upgrade against a path with no matching app                Close code 1008 ("policy violation").
Upgrade against a function with no @Websocket annotation   Close code 1008.
Upgrade against a private or constructor function          Close code 1008.
Server error during dispatch                               Close code 1011 ("internal error").

Common pitfalls

  • ::method binds to current this. If you call ws.onMessage(::handleMsg) from inside the route function on your AppBase-derived class, every connection shares the same this. Use a per-connection ctx class so each connection's callbacks run against their own state.
  • Don't share mutable state across connections without a lock. Aussom-Server serializes callbacks per connection, not across connections. Peer lists, room maps, and shared counters need explicit synchronization.
  • send() is non-blocking. It queues the frame and returns immediately, so you can call it many times in a row without awaiting. Frames go out on the wire in the order you called send() on the same connection.
  • close() is also non-blocking. It sends a close frame; the channel actually closes after the client acknowledges. Don't expect onClose to fire synchronously.
  • onClose fires for both client- and server-initiated closes. Use it for per-connection cleanup (removing from a peer list, cancelling timers, flushing pending state). It is the only guaranteed lifecycle hook for "this connection is going away."
  • Browsers can't set custom headers on new WebSocket(...). If you need auth from a browser client, use cookies or pass tokens via the URL query string and read with getQueryString().
  • Avoid method, HttpReq, WsConn, AppBase, props, api, cache as local variable names. They collide with built-in classes/enums in aussomserver.aus.

When to reach for something else

  • Static file delivery - regular HTTP routes serve files. Don't open a WebSocket just to send one document.
  • Server-Sent Events (SSE) - if you only need server-to-client push (not bidirectional) and want plain HTTP semantics, SSE may be simpler. Aussom-Server doesn't have a built-in SSE helper today; if you need one, the regular HttpReq API is enough to implement it manually.
  • Long-running batch jobs - the webhook endpoint (/Admin/webhook) is designed to fire-and-forget a script. If the client doesn't need to stay connected for streamed updates, prefer that.