Part 5: Running the Service Locally

At the end of the last post we had a tested API layer: an OpenAPI spec, a generated APIProtocol, a controller that implemented it, and a suite of unit tests driven through a mocked repository. What we did not have was a way to actually run the thing. No @main, no HTTP server listening on a port, no way for a client on the other side of a socket to ask for a task.

In this post we’ll close that gap. There are three pieces: wiring up the executable entry point so swift run produces a live server, packaging the service into a Docker image that’s structurally identical between local development and production, and verifying the whole stack end-to-end with a handful of curl commands.

Along the way we’ll make a few decisions that matter: reusing DynamoDBTables’ built-in in-memory table rather than writing our own dev fake, cross-compiling to a fully static Linux binary using Apple’s Static Linux SDK, and leaning on BuildKit cache mounts to keep rebuilds fast without polluting the host filesystem with a .build/ directory.

Wiring Up the Executable

The executable target already exists — it’s the task-cluster target in Package.swift (although we are going to slightly change its name for convention) — but up to now it’s just been a placeholder. We need to replace that with code that builds a repository, passes it to buildApplication, and starts the server.

Move Sources/task-cluster/task_cluster.swift to Sources/TaskCluster/TaskCluster.swift (updating the target in the package manifest) and replace the contents with a @main struct that constructs an InMemoryDynamoDBCompositePrimaryKeyTable from the dynamo-db-tables package, wraps it in a DynamoDBTaskRepository, reads the HTTP port from the environment using swift-configuration, and runs the application returned by buildApplication (binding to 0.0.0.0 so it’s reachable from inside a container). Add the matching product dependencies (DynamoDBTables, Configuration, Hummingbird) to the executable target.

Using DynamoDBTables’ in-memory table

Eventually we will want to set up a proper DynamoDB table to read and write against, but at this point that requires more setup than we need just to run the service and verify that its APIs work. For now, an in-memory repository is enough.

The obvious move here would be to write a parallel InMemoryTaskRepository that conforms to TaskRepository and stores tasks in a dictionary. That’s what the previous post hinted at. But there’s a nicer option.

DynamoDBTaskRepository from part two is generic over its table:

package struct DynamoDBTaskRepository<Table: DynamoDBCompositePrimaryKeyTable & Sendable>: TaskRepository {
    // ...
}

And dynamo-db-tables ships with an InMemoryDynamoDBCompositePrimaryKeyTable that conforms to the same protocol. The library already maintains a testing-grade in-memory implementation with the same semantics as the real DynamoDB client — conditional writes, transactions, TTL, the lot. So rather than writing (and testing) a parallel repository for dev, we can reuse DynamoDBTaskRepository unchanged and swap the table it wraps.

The entry point

With that decision made, the @main is small:

import Configuration
import DynamoDBTables
import Hummingbird
import Logging
import TaskClusterApp
import TaskClusterDynamoDBModel

@main
struct TaskCluster {
    static func main() async throws {
        let config = ConfigReader(provider: EnvironmentVariablesProvider())
        let port = config.int(forKey: "HTTP_PORT", default: 8080)
        let logger = Logger(label: "TaskCluster")

        let table = InMemoryDynamoDBCompositePrimaryKeyTable()
        let repository = DynamoDBTaskRepository(table: table)

        let configuration = ApplicationConfiguration(
            address: .hostname("0.0.0.0", port: port)
        )
        let application = try buildApplication(
            repository: repository,
            configuration: configuration,
            logger: logger
        )
        try await application.runService()
    }
}

A couple of things worth noting.

Binding to 0.0.0.0. For local development outside a container, binding to 127.0.0.1 is the right default — the server is only reachable from the host machine. But we’re about to package this service into a container, and inside a container the loopback interface is the container’s loopback, not the host’s. Binding to 0.0.0.0 tells Hummingbird to listen on every interface, which is what we need for docker run -p 8080:8080 to be able to forward traffic to the service. In production that still makes sense: the container network itself is the isolation boundary, not the bind address.
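
If we later want the loopback default back for host-only runs, the bind address can come from configuration as well. A small sketch, assuming ConfigReader exposes a string accessor analogous to the int one; HTTP_HOST is a variable name of our own invention, not something the service currently defines:

```swift
// Hypothetical refinement: read the bind address from the environment,
// defaulting to 0.0.0.0 for container use.
let host = config.string(forKey: "HTTP_HOST", default: "0.0.0.0")
let configuration = ApplicationConfiguration(
    address: .hostname(host, port: port)
)
```

Running HTTP_HOST=127.0.0.1 swift run TaskCluster would then restrict the server to the host machine.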

No explicit shutdown. Hummingbird’s application.runService() handles its own graceful shutdown on SIGINT and SIGTERM. When the in-memory table goes out of scope it’s reclaimed by ARC like any other struct. If we were using the Soto-backed table, we’d need an await client.shutdown() after runService() returns to flush the AWS client’s connection pool; we’ll come back to that when we wire up the real thing.
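
For reference, the Soto-backed variant will end up shaped roughly like the sketch below. Treat it as a preview under assumptions: AWSClient and its shutdown() are Soto’s real API, but the table initialiser and the wiring are guesses until we do this properly in a later post.

```swift
// Hypothetical sketch of the future Soto-backed entry point (not wired up yet).
// The AWS client must be shut down after the server stops so its connection
// pool is flushed before the process exits.
let client = AWSClient()
let table = AWSDynamoDBCompositePrimaryKeyTable(tableName: "tasks", client: client)  // initialiser name assumed
let repository = DynamoDBTaskRepository(table: table)
// ... build the application exactly as above ...
try await application.runService()
try await client.shutdown()
```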

The executable’s dependencies

With the entry point in place, the executable target needs its new dependencies:

.executableTarget(
    name: "TaskCluster",
    dependencies: [
        "TaskClusterApp",
        "TaskClusterDynamoDBModel",
        .product(name: "DynamoDBTables", package: "dynamo-db-tables"),
        .product(name: "Configuration", package: "swift-configuration"),
        .product(name: "Hummingbird", package: "hummingbird"),
    ]
),

TaskClusterApp provides buildApplication, TaskClusterDynamoDBModel provides DynamoDBTaskRepository, DynamoDBTables provides the in-memory table, and Hummingbird gives us ApplicationConfiguration. The executable is the only place in the dependency graph that knows about all of these at once — the controller only sees TaskRepository, the repository only sees the DynamoDB table protocol, and the application builder only sees TaskRepository and Hummingbird. This is the payoff of keeping the layers cleanly separated.

A quick swift run TaskCluster should now produce a server listening on localhost:8080. That’s a useful sanity check, but running against the host toolchain isn’t where we want to spend our iteration time. Local host builds will work fine for individual developers, but the machine that compiles the service isn’t the machine that runs it in production. Let’s fix that now rather than later.
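
That sanity check, spelled out (this assumes the package builds cleanly on your host toolchain):

```shell
# Start the server from the host toolchain (leave it running)...
swift run TaskCluster

# ...and from a second terminal, confirm it answers.
curl -i http://localhost:8080/health
```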

Why Package in a Container at All?

In production, this service will run as a container image on a container orchestrator — long gone are the days where we’re copying a binary onto an EC2 instance and hoping the dynamic library paths line up. And once production is a container, the argument for making development a container becomes hard to resist.

The usual pitch for dev-prod parity is that subtle differences between environments cause the kind of bug that only shows up at 2am on launch day. Build reproducibility, however, is perhaps an even stronger argument. If, in two years, we need to revisit this version of the code (say, to track down the source of a long-standing bug), we want to understand how today’s code compiled with today’s toolchain, not with whatever development environment exists two years from now.

This means specifying everything that determines how the code is built – such as the Swift compiler version – rather than relying on what happens to be installed on the development machine at the time.

The cleanest way to do this is to sidestep the host toolchain entirely. If every build — local or CI — happens inside the same Swift container image, then everyone is compiling with the same compiler, against the same stdlib, producing the same output. Developers don’t need to install Swift at all, let alone worry about which Swift.

There’s an efficiency cost: cold container builds are slower than host builds, because every build starts from a pristine container filesystem. We’ll come back to this — it’s what the BuildKit cache mounts section is about — but the baseline we’re optimising from is “every build is fully reproducible from a fresh checkout”, not “every build is fast on my machine specifically”.

Choosing the Build Flavour: the Static Linux SDK

Once we’ve decided to build in a container, we still need to choose what kind of binary to produce. Two reasonable options:

  1. Build a dynamically linked binary against glibc on an Ubuntu base image, and ship a runtime image based on the same distribution.
  2. Cross-compile a fully static binary against musl using the Static Linux SDK, and ship it in a FROM scratch image with nothing else at all.

Option 1 is the path of least resistance. The official swift:6.3.1-noble image gives you a Swift toolchain on Ubuntu 24.04; build there, copy the binary into a smaller runtime image, done. But the runtime image needs something under it — at minimum glibc, the ICU libraries Foundation depends on, and the Swift stdlib. In practice this means a runtime image with a full OS and a non-trivial surface area of shared libraries.
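
For concreteness, option 1 would look something like the sketch below. The -slim tag is an assumption about the official image variants (they ship the Swift runtime libraries without the compiler); the point is that the runtime stage still carries a full distribution underneath.

```dockerfile
# Sketch of option 1 (not what we ship): dynamic linking against glibc.
FROM swift:6.3.1-noble AS build
WORKDIR /workspace
COPY . .
RUN swift build -c release

# The runtime image still needs glibc, ICU, and the Swift stdlib;
# a runtime-only variant of the same distribution provides them.
FROM swift:6.3.1-noble-slim AS runtime
COPY --from=build /workspace/.build/release/TaskCluster /TaskCluster
EXPOSE 8080
ENTRYPOINT ["/TaskCluster"]
```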

Option 2 is more interesting. The Static Linux SDK produces a single executable with everything statically linked — Foundation, the Swift runtime, even the C library (musl replaces glibc). The resulting binary has no runtime dependencies at all, so the runtime image can be FROM scratch: no base OS, no package manager, no shell. Just the binary. Additionally, the attack surface collapses to whatever’s in the Swift runtime itself.

For a service that’s going to be re-deployed frequently and scaled horizontally, those are compelling properties. There’s also a lovely secondary benefit: cross-compilation. The SDK installs as an artifactbundle on any host where the Swift toolchain runs, so you can target aarch64-swift-linux-musl from either an x86_64 or an ARM64 build machine. For a development loop on Apple Silicon targeting ARM production, that maps perfectly.
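
If you want to try the cross-compilation loop on the host rather than in a container, the flow is roughly the following (the URL and checksum placeholders come from swift.org’s SDK bundle listing for your toolchain version):

```shell
# One-time: install the Static Linux SDK into the host toolchain.
swift sdk install <static-linux-sdk-url> --checksum <sdk-checksum>

# Confirm the SDK is registered...
swift sdk list

# ...then cross-compile an ARM64 Linux binary from either macOS architecture.
swift build --swift-sdk aarch64-swift-linux-musl
```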

There are tradeoffs worth being honest about. Musl’s implementation of a few edge cases (DNS resolution, locale handling) differs subtly from glibc, and some third-party C libraries don’t build cleanly against it. Swift Foundation is known to work well, and the pure-Swift dependencies in this project don’t touch any of the problematic areas. But if the service later grows a dependency on, say, a Swift package that wraps a C library that only builds against glibc, we’d have to revisit this decision. For now, the cost is a one-line SDK install in the Dockerfile; the upside is a binary we can ship in an empty image.

A Three-Stage Dockerfile

With the approach chosen, we can build the Dockerfile. It has three stages: a toolchain stage that installs the Static Linux SDK, a build stage that compiles the service, and a runtime stage that holds only the compiled binary.

Note: you can update the SDK path below (and its checksum) with the latest one at https://www.swift.org/install/macos/#swift-sdk-bundles

Create a Dockerfile at the project root with three stages. The toolchain stage extends swift:6.3.1-noble and installs the Static Linux SDK (for example using swift sdk install https://download.swift.org/swift-6.3.1-release/static-sdk/swift-6.3.1-RELEASE/swift-6.3.1-RELEASE_static-linux-0.1.0.artifactbundle.tar.gz --checksum fac05271c1f7d060bd203240ce5251d5ca902d30ac899f553765dbb3a88b97ad). The build stage copies the source tree with COPY . . (which preserves mtimes), mounts a BuildKit cache at the build directory (--mount=type=cache,target=/workspace/.build), and runs swift build -c release --swift-sdk aarch64-swift-linux-musl. The runtime stage is FROM scratch and contains only the compiled binary, an EXPOSE 8080 directive, and the entrypoint.

# syntax=docker/dockerfile:1

FROM swift:6.3.1-noble AS toolchain
RUN swift sdk install \
    https://download.swift.org/swift-6.3.1-release/static-sdk/swift-6.3.1-RELEASE/swift-6.3.1-RELEASE_static-linux-0.1.0.artifactbundle.tar.gz \
    --checksum fac05271c1f7d060bd203240ce5251d5ca902d30ac899f553765dbb3a88b97ad

FROM toolchain AS build
WORKDIR /workspace
COPY . .
RUN --mount=type=cache,target=/workspace/.build \
    swift build -c release --swift-sdk aarch64-swift-linux-musl && \
    cp .build/aarch64-swift-linux-musl/release/TaskCluster /TaskCluster

FROM scratch AS runtime
COPY --from=build /TaskCluster /TaskCluster
EXPOSE 8080
ENTRYPOINT ["/TaskCluster"]

The toolchain stage

The first stage pins the Swift toolchain to swift:6.3.1-noble and installs the Static Linux SDK. This stage’s output depends only on the base image and the SDK URL/checksum — none of our source code — so Docker caches it between builds. It only rebuilds when we bump the Swift version or the SDK URL; the layer that runs swift sdk install is keyed on the RUN instruction itself, not on the source tree, so every build on the same machine reuses it.

Pinning the exact SDK tarball and its checksum is important: the SDK must match the toolchain version that consumes it, and the checksum verifies we downloaded exactly the bytes we expect.

The build stage

The build stage is where it gets interesting. The naive version would be:

FROM toolchain AS build
WORKDIR /workspace
COPY . .
RUN swift build -c release --swift-sdk aarch64-swift-linux-musl

That works — the resulting image would be correct — but it’s painful to iterate against. Every source change invalidates the COPY . . layer, which invalidates the layer that ran swift build, which means every edit triggers a from-scratch recompilation of the entire project. For a pure-Swift service with a handful of dependencies, that’s a couple of minutes of rebuild time per change. Enough friction that people will start reaching for swift build on the host, which is exactly what we’re trying to avoid.

The fix is BuildKit’s --mount=type=cache. A cache mount is a directory that BuildKit preserves across build invocations, independent of the image layer graph. It’s not part of any image — it lives in BuildKit’s cache, attached to the Dockerfile by the mount’s target path — and it survives across docker build runs of the same Dockerfile.

By mounting the cache at /workspace/.build during the swift build step, we let Swift’s incremental compiler see its previous outputs. On the first build, .build/ is empty, and swift build runs a full compilation. On subsequent builds, .build/ still contains the modules, object files, and build manifests from the previous run. Swift’s compiler checks the timestamps of the source files against the cached artifacts and only recompiles what changed. A change to a single Swift file in a leaf module goes from a two-minute full rebuild to a few-second incremental one.

Three things make this work cleanly:

The COPY . . preserves mtimes. Docker’s COPY copies files with their original modification times intact. Swift’s incremental compiler uses mtimes to determine what’s stale. If COPY normalised timestamps to the current time, every file would look newer than every cached artifact and nothing would cache-hit. Fortunately, COPY does the right thing by default.

The cp extracts the binary out of the cache. The cache mount only exists during the RUN step; after the step completes, BuildKit detaches it and the directory doesn’t appear in the resulting image layer. That’s fine for the cache’s intended use — we don’t want a multi-gigabyte .build/ baked into every image — but it means we need to extract the binary before the step ends. The cp at the end of the RUN copies the compiled executable from the cache-mounted .build/ to a fixed location (/TaskCluster) that does become part of the image layer. The runtime stage then copies it with COPY --from=build /TaskCluster /TaskCluster.

The host filesystem stays clean. Because the cache lives inside BuildKit’s storage rather than a host bind-mount, there’s no .build/ directory appearing in the developer’s checkout. No artifacts leak out of the container, nothing needs to be added to .gitignore for this workflow, and resetting the cache is just docker builder prune.

The same Dockerfile is used in CI. In CI, there’s no persistent BuildKit cache by default, so each build starts cold — which is exactly what we want for a release build. Locally, the cache persists and iteration is fast. One command (docker build -t task-cluster .) does the right thing in both contexts.

The runtime stage

The runtime stage is almost trivial. FROM scratch produces an image with nothing in it — no filesystem, no shell, no ls. We copy the statically linked binary in, expose port 8080 for documentation, and set the entrypoint. That’s the entire runtime environment.

A consequence worth being aware of: you cannot docker exec into a scratch image to poke around, because there’s no shell for it to exec. For debugging, the workflow is to shell into the build stage instead: docker build --target build -t task-cluster-debug . produces an image with the full Swift toolchain and a working shell. That’s the right division of labour — production images shouldn’t have a shell, debugging ones should.
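
In practice that looks like the following (the task-cluster-debug tag is just a local name):

```shell
# Build only up to the build stage, which still has a full OS and toolchain.
docker build --target build -t task-cluster-debug .

# Open an interactive shell inside it to poke around.
docker run --rm -it task-cluster-debug /bin/bash
```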

The .dockerignore

There’s one more file that’s easy to forget and extremely painful to forget. Without a .dockerignore, docker build sends the entire project directory to the BuildKit daemon as the build context. For a Swift project with an existing host .build/ directory (for example from running swift build above), that’s potentially gigabytes of object files, compiled modules, and SDK caches being copied into the daemon before the build even starts. We saw the build context hit nearly 10GB before we added the ignore file.

# Build artifacts — never what we want in a Docker image.
.build/
.swiftpm/
# IDE / editor scratch
.DS_Store
*.xcodeproj
.idea/
.vscode/
# Version control
.git/
.gitignore
# Docker
Dockerfile
.dockerignore

The key entries are .build/ and .swiftpm/, which will balloon the context. The rest is hygiene. .git/ can also get large on older repos, and there’s no reason the compiler needs it. Excluding the Dockerfile and .dockerignore themselves is convention — they’re read by the builder before the context is sent, so there’s no need to ship them into the build.

Running It

With the Dockerfile and .dockerignore in place, a single command produces a runnable image:

docker build -t task-cluster .

The first build is slow — it downloads the Swift base image (a few hundred megabytes), installs the SDK (another few hundred), and then compiles the project from scratch. Subsequent builds — even after source changes — reuse the toolchain layer from Docker’s layer cache and the .build/ directory from BuildKit’s cache mount, so they take seconds rather than minutes.

Starting the container:

docker run --rm -p 8080:8080 task-cluster

The -p 8080:8080 maps the container’s exposed port to the same port on the host. The --rm deletes the container when it exits, which is what we want for ephemeral runs — there’s no state to preserve because the in-memory table goes away with the process anyway. Hummingbird should log a line telling us it’s listening, and the container is ready. Keep this terminal running; you’ll need a separate one for the upcoming curl commands.

Exercising the API

The fastest way to confirm the whole stack works is to hit every endpoint with curl. This isn’t a replacement for the unit tests we wrote in the previous post — those test the controller in isolation with mocked repositories — but it verifies something the unit tests can’t: that the packaged binary, inside the runtime image, with no base OS, actually runs and responds correctly.

Health check

$ curl -i http://localhost:8080/health
HTTP/1.1 200 OK
Content-Length: 0
Date: Mon, 20 Apr 2026 07:53:18 GMT

A 200 tells us the process is alive, the router is wired, and the scratch image has everything it needs to serve a request. If any of the runtime dependencies were missing — say, the binary wasn’t actually fully static and was looking for a shared library that doesn’t exist in a scratch image — we’d find out here, not in production.
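
One way to check the “fully static” claim directly is to inspect the binary in the build stage, where there’s still an OS to run tools in. A sketch, assuming the file utility is available in the Ubuntu-based image:

```shell
# Rebuild up to the build stage and ask file(1) about the compiled binary.
docker build --target build -t task-cluster-debug .
docker run --rm task-cluster-debug file /TaskCluster

# A fully static binary reports "statically linked"; a dynamic one would
# name an interpreter such as /lib/ld-linux-aarch64.so.1.
```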

Create a task

$ curl -i -X POST http://localhost:8080/task \
    -H 'Content-Type: application/json' \
    -d '{"title":"Test task","priority":1}'
HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
Content-Length: 202

{
  "createdAt" : "2026-04-20T07:53:43Z",
  "priority" : 1,
  "status" : "pending",
  "taskId" : "A841C633-66B6-4A80-9A35-4C1F24390209",
  "title" : "Test task",
  "updatedAt" : "2026-04-20T07:53:43Z"
}

A lot of moving parts had to agree for that response to come back. Hummingbird routed POST /task to the controller’s createTask method. The OpenAPI runtime deserialised the JSON body into a Components.Schemas.CreateTaskRequest. The controller validated the priority, constructed a domain TaskItem, and called repository.create(task:). The DynamoDBTaskRepository translated the domain item into a composite-keyed row and called insertItem on the table. The InMemoryDynamoDBCompositePrimaryKeyTable stored it. On the way back, the controller converted the returned TaskItem into a TaskResponse schema, and the OpenAPI runtime serialised it back as JSON. Every layer we’ve built in this series just ran, end-to-end.

Read it back

$ curl -i http://localhost:8080/task/<the task id from the previous step>
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 202

{
  "createdAt" : "2026-04-20T07:53:43Z",
  "priority" : 1,
  "status" : "pending",
  "taskId" : "A841C633-66B6-4A80-9A35-4C1F24390209",
  "title" : "Test task",
  "updatedAt" : "2026-04-20T07:53:43Z"
}

The read path. Same taskId as the create returned, same createdAt, same updatedAt (we haven’t modified the task yet). This confirms the repository really is persisting across requests within a single process — the task didn’t just round-trip through the controller and get thrown away.

Update the priority

$ curl -i -X PATCH http://localhost:8080/task/<the task id from the previous step>/priority \
    -H 'Content-Type: application/json' \
    -d '{"priority":5}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "createdAt" : "2026-04-20T07:53:43Z",
  "priority" : 5,
  "status" : "pending",
  "taskId" : "A841C633-66B6-4A80-9A35-4C1F24390209",
  "title" : "Test task",
  "updatedAt" : "2026-04-20T07:55:18Z"
}

The priority bumped from 1 to 5, the updatedAt advanced to reflect the mutation, and the createdAt stayed pinned to the original creation time. That’s a nice check that the controller’s update logic — fetch the existing task, mutate the priority, refresh the timestamp, persist — is doing the right thing. It also confirms that the path-parameter extraction works: the UUID in the URL was parsed, used to look up the task, and the task came back.

Cancel it

$ curl -i -X POST http://localhost:8080/task/<the task id from the previous step>/cancel
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "createdAt" : "2026-04-20T07:53:43Z",
  "priority" : 5,
  "status" : "cancelled",
  "taskId" : "A841C633-66B6-4A80-9A35-4C1F24390209",
  "title" : "Test task",
  "updatedAt" : "2026-04-20T07:55:57Z"
}

The status transitioned from pending to cancelled, and updatedAt advanced again. The priority of 5 was preserved — cancellation doesn’t reset other fields, which is the behaviour we want. Sending a second cancel to the same task should now return a 409 because the controller’s state-transition guard will reject a cancel on an already-cancelled task; that’s left as an exercise for the curious reader.

What We Built

The service runs. Not on a development toolchain that happens to be installed on one person’s laptop, but in a container image built the same way on any developer’s machine, in CI, and in production. The runtime image has no base OS, and has no runtime dependencies other than the kernel’s syscall interface.

The executable entry point is fewer than 30 lines. The Dockerfile is fewer than 30 lines. The configuration is a single environment variable with a sensible default. None of these things are doing more than they need to — the complexity is either in the Swift code (where it’s tested) or in the Dockerfile (where it’s structural and fixed). That’s where we want it.

The development loop is a single command: docker build -t task-cluster . && docker run --rm -p 8080:8080 task-cluster. The same command in CI builds a release image. BuildKit’s cache mount makes the development iteration of that command fast (a few seconds on a warm cache), and the fresh start in CI makes the release iteration hermetic (every build starts from an empty cache).

What’s Next

We’ve skipped a deliberate step: the service has no real persistence. The InMemoryDynamoDBCompositePrimaryKeyTable is great for exercising the code path, but the moment the container exits, every task vanishes. In the next post, we’ll wire up the real DynamoDB path — swap the in-memory table for a Soto-backed one and point it at LocalStack for local development. With persistence in place, we’ll also be in a position to think about integration tests that exercise the full stack against a real (if local) DynamoDB.
