**Tags:** "Hello World" Examples · Node.js · Nest.js

# Nest.js showcase

A NestJS showcase exercising the full managed-service set on Zerops — PostgreSQL, Valkey cache, NATS broker, object storage, and Meilisearch — wired across a NestJS API, a static SPA frontend, and a standalone background worker. It ships with everything a porter needs to clone, deploy, and adapt the stack to their own product.

### Available Environments

- [AI Agent](https://app.zerops.io/recipes/nestjs-showcase.md?environment=ai-agent)
- [Remote (CDE)](https://app.zerops.io/recipes/nestjs-showcase.md?environment=remote-cde)
- [Local](https://app.zerops.io/recipes/nestjs-showcase.md?environment=local)
- [Stage](https://app.zerops.io/recipes/nestjs-showcase.md?environment=stage)
- **Small Production** ← current
- [Highly-available Production](https://app.zerops.io/recipes/nestjs-showcase.md?environment=highly-available-production)

### Services in this Environment

**Services:**

- **core** (core@1)
  - Containers: 1 × Shared Core, 0.00 GB RAM, 0 GB Disk
- **api** (nodejs@22) :3000
  - Containers: 2 × Shared Core, 0.75 GB RAM, 1 GB Disk
  - Repository: [zerops-recipe-apps/nestjs-showcase-api](https://github.com/zerops-recipe-apps/nestjs-showcase-api)
- **app** (static)
  - Containers: 2 × Shared Core, 0.75 GB RAM, 1 GB Disk
  - Repository: [zerops-recipe-apps/nestjs-showcase-app](https://github.com/zerops-recipe-apps/nestjs-showcase-app)
- **worker** (nodejs@22)
  - Containers: 2 × Shared Core, 0.75 GB RAM, 1 GB Disk
  - Repository: [zerops-recipe-apps/nestjs-showcase-worker](https://github.com/zerops-recipe-apps/nestjs-showcase-worker)
- **db** (postgresql@18) :5432, :6432
  - Containers: 1 × Shared Core, 0.50 GB RAM, 1 GB Disk
- **cache** (valkey@7.2) :6379, :6380
  - Containers: 1 × Shared Core, 0.50 GB RAM, 1 GB Disk
- **broker** (nats@2.12) :4222, :8222
  - Containers: 1 × Shared Core, 0.50 GB RAM, 1 GB Disk
- **storage** (object-storage)
  - Containers: 1 × Shared Core, 0.00 GB RAM, 0 GB Disk
- **search** (meilisearch@1.20) :7700
  - Containers: 1 × Shared Core, 0.50 GB RAM, 1 GB Disk

**Total Resources:** 12 containers, 6.50 GB RAM, 10 GB Disk

### One-Click Deploy (Import YAML)

Use this YAML with `zcli project import` to deploy this environment:

```yaml
#zeropsPreprocessor=on

# Small production environment — two-container runtimes on shared
# CPU for moderate throughput. APP_SECRET is the production
# encryption key shared across all api + worker containers,
# critical for JWT validity when the L7 balancer distributes
# requests across replicas.
project:
  name: nestjs-showcase-small-prod
  envVariables:
    APP_SECRET: <@generateRandomString(<32>)>
    API_URL: https://api-${zeropsSubdomainHost}-3000.prg1.zerops.app
    FRONTEND_URL: https://app-${zeropsSubdomainHost}.prg1.zerops.app

services:
  # Run two NestJS API replicas on shared CPU — minContainers: 2
  # keeps rolling deploys zero-downtime (one container serves traffic
  # while the other rebuilds). The L7 balancer distributes requests
  # across replicas. Bump verticalAutoscaling.maxRam when monitoring
  # shows containers approaching the current ceiling.
  - hostname: api
    type: nodejs@22
    priority: 5
    zeropsSetup: prod
    buildFromGit: https://github.com/zerops-recipe-apps/nestjs-showcase-api
    enableSubdomainAccess: true
    minContainers: 2
    verticalAutoscaling:
      minRam: 0.5
      minFreeRamGB: 0.25

  # Run two SPA replicas on `base: static` — Nginx serves the
  # compiled bundle from two containers, and minContainers: 2 keeps
  # rolling deploys zero-downtime. Bump verticalAutoscaling.maxRam if
  # fetch-burst concurrency outgrows the current ceiling at peak
  # production traffic.
  - hostname: app
    type: static
    zeropsSetup: prod
    buildFromGit: https://github.com/zerops-recipe-apps/nestjs-showcase-app
    enableSubdomainAccess: true
    minContainers: 2
    verticalAutoscaling:
      minRam: 0.5
      minFreeRamGB: 0.25

  # Run 2 worker replicas — both subscribe to the `showcase/jobs`
  # subject hierarchy with the workers queue group, so NATS
  # load-balances job delivery between them and the second container
  # keeps the queue moving during rolling deploys. Bump
  # verticalAutoscaling.maxRam if heavy jobs push container memory
  # near the ceiling.
  - hostname: worker
    type: nodejs@22
    zeropsSetup: prod
    buildFromGit: https://github.com/zerops-recipe-apps/nestjs-showcase-worker
    minContainers: 2
    verticalAutoscaling:
      minRam: 0.5
      minFreeRamGB: 0.25

  # Set higher priority for databases and storages,
  # because the app depends on those services.
  # Single-instance NON_HA Postgres — used by the api codebase to
  # persist items + upload metadata at small-prod scale. The 0.25 GB
  # minFreeRamGB headroom absorbs production load spikes; bump
  # verticalAutoscaling.minRam when monitoring shows query latency
  # creeping up under steady-state working set.
  - hostname: db
    type: postgresql@18
    priority: 10
    mode: NON_HA
    verticalAutoscaling:
      minRam: 0.25
      minFreeRamGB: 0.25

  # Single-node Valkey at small-prod scale — the cache demo serves
  # real production cache traffic from a single replica. Bump
  # verticalAutoscaling.minRam when working-set growth pushes
  # eviction rates past acceptable hit-ratio thresholds.
  - hostname: cache
    type: valkey@7.2
    priority: 10
    mode: NON_HA
    verticalAutoscaling:
      minRam: 0.25
      minFreeRamGB: 0.25

  # Single-node NATS for the `showcase/jobs` fan-out — the workers
  # queue group on 2 worker replicas spreads delivery across both.
  # Node loss interrupts pub/sub liveness until the broker restarts;
  # bump verticalAutoscaling.minRam if publish-burst spikes saturate
  # the broker's working set under production traffic.
  - hostname: broker
    type: nats@2.12
    priority: 10
    mode: NON_HA
    verticalAutoscaling:
      minRam: 0.25
      minFreeRamGB: 0.25

  # Private object-storage bucket for production uploads — durability
  # and availability come from the managed S3-compatible backend. Bump
  # objectStorageSize when production upload volume outgrows the
  # current quota.
  - hostname: storage
    type: object-storage
    priority: 10
    objectStorageSize: 1
    objectStoragePolicy: private

  # Single-node Meilisearch — bump verticalAutoscaling.minRam if
  # production search latency correlates with index growth past the
  # 0.25 GB ceiling.
  - hostname: search
    type: meilisearch@1.20
    priority: 10
    mode: NON_HA
    verticalAutoscaling:
      minRam: 0.25
      minFreeRamGB: 0.25


```

---

## Next Steps

After deploying one of the environments and getting to know Zerops, you have two paths to choose from:

1. **Template Flow** — Clone our GitHub repositories and use the whole recipe as a template
2. **Integrate Flow** — If you already have an existing application on a similar stack, integrate the recipe setup with your application

Select a flow: [Template Flow](https://app.zerops.io/recipes/nestjs-showcase.md?environment=small-production&guideFlow=template) or [Integrate Flow](https://app.zerops.io/recipes/nestjs-showcase.md?environment=small-production&guideFlow=integrate)

Both flows are shown below:

## How to take over the Small Production environment

### 📦 Clone the template repositories

Fork or clone the following repositories to your local machine or GitHub account:

- [zerops-recipe-apps/nestjs-showcase-api](https://github.com/zerops-recipe-apps/nestjs-showcase-api)
- [zerops-recipe-apps/nestjs-showcase-app](https://github.com/zerops-recipe-apps/nestjs-showcase-app)
- [zerops-recipe-apps/nestjs-showcase-worker](https://github.com/zerops-recipe-apps/nestjs-showcase-worker)

### 1. Find your service name

Many commands and configurations need the exact name of your service. You can find it in the Zerops Dashboard.

- Open your project in the Zerops Dashboard.
- In the project overview, find the service you want to manage.
- Use this exact name whenever a command or pipeline configuration asks for `<service-name>`.

<img src="https://storage-prg1.zerops.io/4gfos-storage/copy1_cd2a6044c8.jpg" style="display: block; margin: 0 auto;" alt="Zerops GUI: Locating the Service Name" width="500" />

### 2. Configure deployment pipeline

Go to Service Settings > Pipelines & CI/CD Settings in the Zerops Dashboard and connect your repository.

For production, use a trigger on new tags. This keeps deployments intentional and tied to a specific version. You can also add a regex filter, such as `^v[0-9]+\.[0-9]+\.[0-9]+$`, if you want to allow only semantic version tags.

<img src="https://storage-prg1.zerops.io/4gfos-storage/triggerborder_b865860a89.jpg" style="display: block; margin: 0 auto;" alt="Zerops GUI: Triggers" width="500" />

Alternatively, add `zcli push` to your existing CI/CD pipeline if you want full control over when deployments happen.

Learn more about pipeline triggers: https://docs.zerops.io/features/pipeline

### 3. Deploy to production

Create and push a new Git tag to deploy a specific version of your app:

```bash
git tag -a v1.0.0 -m "Release version 1.0.0"
git push origin v1.0.0
```

> [!TIP]
> Open the pipeline detail in the Zerops Dashboard to check the build progress and verify that all steps finish successfully.

### 4. Configure autoscaling

Review the autoscaling settings for your runtime services and databases in Service Settings > Automatic Scaling Configuration in the Zerops Dashboard.

<img src="https://storage-prg1.zerops.io/4gfos-storage/scaling_ac0880aef5.png" style="display: block; margin: 0 auto;" alt="Zerops GUI: Autoscaling configuration" width="500" />

The most important settings are:

```yaml
verticalAutoscaling:
  minRam: 1
  minFreeRamGB: 0.5
  minFreeRamPercent: 20
```

> [!CAUTION]
> Pay attention to `minFreeRamGB`. This value tells Zerops when to scale RAM vertically. Adjust it based on your app’s real memory needs. RAM scales up immediately, while CPU scales after two consecutive measurements below the threshold.

> [!TIP]
> Run a quick stress test with a tool like `hey` before real users arrive. This helps you see how your app behaves under load and tune the autoscaling settings.

### 5. Set up your domain

To send real traffic to your app, configure public HTTP access in Service Settings > Public Access & Internal Ports in the Zerops Dashboard.

Add your custom domain and point your DNS records to the Zerops IPs shown in the dashboard:

<img src="https://storage-prg1.zerops.io/4gfos-storage/subdomain_8cafd801e8.jpg" style="display: block; margin: 0 auto;" alt="Zerops GUI: Public access and custom domain" width="500" />

```text
Type   Name          Content          TTL
A      example.com   <zerops-ipv4>    Auto
AAAA   example.com   <project-ipv6>   Auto
```

For wildcard domains, add a CNAME record for SSL validation.

Check the public access documentation: https://docs.zerops.io/features/access

> [!TIP]
> When changing DNS records for production, start with a low TTL value. Make sure SSL certificates are active before you disable the fallback Zerops subdomain.

Once everything works, you can disable the Zerops subdomain so all traffic goes through your custom domain.

---

### 🎉 You are good to go!

Your application is live in production and the core setup is complete.

The following sections are optional. They cover extra production features such as log forwarding, backups, and diagnostic access. You can stop here and come back later when you need them.

---

### 6. Set up log forwarding (Optional)

To send logs to an external service, go to Project Settings > Log Forwarding & Logs Overview in the Zerops Dashboard.

You can forward logs to services like Better Stack, Papertrail, or your own self-hosted solution.

Learn more about log forwarding: https://docs.zerops.io/references/logging

### 7. Configure database backups (Optional)

Manage automated encrypted backups in Service Settings > Backups in the Zerops Dashboard.

By default, backups run daily between 00:00 and 01:00 UTC.

Before a major deployment, create a manual protected backup:

```bash
zcli backup create <db-service> --tags pre-deploy,protected
```

Read the backup documentation for more options: https://docs.zerops.io/features/backup

### 8. Set up diagnostic access (Optional)

Use zCLI and VPN access when you need to inspect or maintain services directly.

For runtime services:

```bash
zcli vpn up
ssh <service-name>.zerops
```

For databases, connect through the VPN to reach the project’s private network, or set up secure direct IP access for your database admin tools.

Check the VPN documentation: https://docs.zerops.io/references/cli/commands#vpn-up

## How to integrate api with Zerops

### 1. Adding `zerops.yaml`

The main configuration file, placed at the repository root — it tells Zerops how to build, deploy, and run your app. This one declares two setups (`prod`, `dev`), runs `initCommands` at boot (migrations, seed), and ships readiness and health checks.

```yaml
# Two setups for the api codebase:
# - prod: lean production runtime — compiled JS only, npm prune --omit=dev,
#   readiness gate, rolling-deploy headroom.
# - dev: same wiring on a writable Ubuntu container; the porter SSHs in and
#   runs `npm run start:dev` against the SSHFS-mounted source tree.
zerops:
  - setup: prod
    build:
      # nodejs@22 matches run.base so the compiled dist/ is emitted
      # against the same Node major it runs against.
      base: nodejs@22
      buildCommands:
        # `npm ci` for reproducible, lockfile-pinned installs; `nest build`
        # emits the compiled bundle to `dist/`; `npm prune --omit=dev` strips
        # devDependencies so the runtime container ships only what `node
        # dist/main.js` needs.
        - npm ci
        - npm run build
        - npm prune --omit=dev
      # Narrow deploy set — source TypeScript, tests, and dev tooling
      # don't ship to prod. Anything not listed here is dropped from the
      # runtime filesystem.
      deployFiles:
        - ./dist
        - ./node_modules
        - ./package.json
      # node_modules survives between builds — subsequent deploys skip
      # re-downloading packages that the lockfile hasn't moved.
      cache:
        - node_modules
    deploy:
      # Holds the L7 balancer from routing to the new container until it
      # answers HTTP 200 — prevents 502s during the bootstrap window when
      # Nest is still wiring modules and connecting to managed services.
      # `/api/health` only checks process responsiveness; it does NOT fan
      # out to db/cache/broker, so a managed-service blip doesn't cascade
      # into health-driven restarts.
      readinessCheck:
        httpGet:
          port: 3000
          path: /api/health
    run:
      base: nodejs@22
      # Migrate and seed each gate on `zsc execOnce` with a per-deploy key
      # (`${appVersionId}` resolves to a fresh string every deploy), so
      # each script runs exactly once per deploy across all replicas.
      # `--retryUntilSuccessful` rides out the brief window where Postgres
      # isn't yet accepting connections. Splitting migrate and seed into
      # two keys means a failed seed doesn't burn the migrate key — the
      # next deploy retries the seed but already-applied schema is skipped.
      initCommands:
        - zsc execOnce ${appVersionId}-migrate --retryUntilSuccessful -- node dist/scripts/migrate.js
        - zsc execOnce ${appVersionId}-seed --retryUntilSuccessful -- node dist/scripts/seed.js
      # Port 3000 matches Nest's default and matches `PORT` below.
      # `httpSupport: true` publishes the port to the L7 balancer so the
      # platform mints the zerops.app subdomain on first deploy — without
      # it the api is only reachable on the project network with no public
      # URL and no automatic HTTPS.
      ports:
        - port: 3000
          httpSupport: true
      # Cross-service aliases renamed under your own stable keys — the
      # application code reads `DB_HOST`, `CACHE_HOST`, `NATS_HOST`, etc.
      # rather than the platform-side `${db_hostname}` names. Swapping a
      # managed service later is a one-line yaml edit, no app rebuild.
      # Pick own-key names DIFFERENT from the platform side; declaring
      # `db_hostname: ${db_hostname}` would self-shadow — the literal
      # token wins and `process.env.db_hostname` becomes the string
      # "${db_hostname}".
      # `APP_SECRET`, `FRONTEND_URL`, and `DEV_FRONTEND_URL` are project-
      # level envs and auto-propagate to every container, so they aren't
      # repeated here. Same pattern across the dev setup below.
      envVariables:
        PORT: 3000
        DB_HOST: ${db_hostname}
        DB_PORT: ${db_port}
        DB_USER: ${db_user}
        DB_PASSWORD: ${db_password}
        DB_NAME: ${db_dbName}
        # Valkey on Zerops is unauthenticated; only host + port are
        # injected. Referencing `${cache_user}` or `${cache_password}`
        # would resolve to literal token strings — ioredis would then
        # send garbage `AUTH` on every command.
        CACHE_HOST: ${cache_hostname}
        CACHE_PORT: ${cache_port}
        # NATS Pattern A — host, port, user, pass as separate alias keys.
        # The connection-string alternative double-authenticates (URL
        # credentials + SASL) and the broker rejects the first CONNECT
        # frame with `Authorization Violation`.
        NATS_HOST: ${broker_hostname}
        NATS_PORT: ${broker_port}
        NATS_USER: ${broker_user}
        NATS_PASSWORD: ${broker_password}
        # `${storage_apiUrl}` already carries the `https://` scheme; do
        # NOT compose `http://${storage_apiHost}` — the gateway 301-
        # redirects to https and S3 SDKs don't follow that redirect.
        # `S3_REGION` is inert on the MinIO backend but every S3 SDK
        # demands the field; `us-east-1` is the conventional placeholder.
        S3_ENDPOINT: ${storage_apiUrl}
        S3_BUCKET: ${storage_bucketName}
        S3_REGION: us-east-1
        S3_ACCESS_KEY_ID: ${storage_accessKeyId}
        S3_SECRET_ACCESS_KEY: ${storage_secretAccessKey}
        # `SEARCH_MASTER_KEY` administers indexes — create, delete,
        # document upsert. Never expose it to a browser bundle; any
        # client-side search UI should read `${search_defaultSearchKey}`
        # instead (search-only, safe to ship).
        SEARCH_URL: ${search_connectionString}
        SEARCH_MASTER_KEY: ${search_masterKey}
      # Plain Node executing the compiled bootstrap — works with the
      # pruned production node_modules. Anything heavier (ts-node, nest
      # CLI) is dev-only because it requires devDependencies the build's
      # `npm prune --omit=dev` stripped.
      start: node dist/main.js
      # Long-lived liveness probe pointed at the same endpoint as
      # readiness. A hung process triggers a restart instead of serving
      # errors indefinitely. Same shallow check — no fan-out to managed
      # services, so a downstream blip doesn't restart the api.
      healthCheck:
        httpGet:
          port: 3000
          path: /api/health

  - setup: dev
    build:
      base: nodejs@22
      buildCommands:
        # `npm install` (not `npm ci`) because the dev workflow tolerates
        # lockfile drift while iterating on dependencies.
        - npm install
      # Self-deploy the entire working tree — narrowing this to a
      # `[dist, package.json]` list would wipe the source on the next dev
      # redeploy. Zerops replaces the deployed filesystem with only the
      # listed paths, and the porter's edits live in the rest of the repo.
      deployFiles: ./
      cache:
        - node_modules
    run:
      base: nodejs@22
      # Ubuntu base — gives the porter the standard CLI toolset (apt,
      # git, etc.) when they SSH in to iterate on the source tree.
      os: ubuntu
      ports:
        - port: 3000
          httpSupport: true
      # Same service wiring as prod — only the runtime style differs.
      # If you swap a managed service later, the prod block above needs
      # the matching edit.
      envVariables:
        PORT: 3000
        DB_HOST: ${db_hostname}
        DB_PORT: ${db_port}
        DB_USER: ${db_user}
        DB_PASSWORD: ${db_password}
        DB_NAME: ${db_dbName}
        CACHE_HOST: ${cache_hostname}
        CACHE_PORT: ${cache_port}
        NATS_HOST: ${broker_hostname}
        NATS_PORT: ${broker_port}
        NATS_USER: ${broker_user}
        NATS_PASSWORD: ${broker_password}
        S3_ENDPOINT: ${storage_apiUrl}
        S3_BUCKET: ${storage_bucketName}
        S3_REGION: us-east-1
        S3_ACCESS_KEY_ID: ${storage_accessKeyId}
        S3_SECRET_ACCESS_KEY: ${storage_secretAccessKey}
        SEARCH_URL: ${search_connectionString}
        SEARCH_MASTER_KEY: ${search_masterKey}
      # Dev runs migrations/seeds through `ts-node` against the source
      # tree so the porter doesn't have to compile before the first
      # deploy. Same `execOnce` semantics apply — per-deploy keys,
      # idempotent retries.
      initCommands:
        - zsc execOnce ${appVersionId}-migrate --retryUntilSuccessful -- npx ts-node src/scripts/migrate.ts
        - zsc execOnce ${appVersionId}-seed --retryUntilSuccessful -- npx ts-node src/scripts/seed.ts
      # `zsc noop --silent` keeps the container alive without binding
      # the runtime to a foreground process — the dev container is a
      # remote-development workspace. SSH in and run `npm run start:dev`
      # (Nest's watcher) by hand; source edits over SSHFS rebuild in
      # place, no redeploy.
      start: zsc noop --silent
```
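
For orientation, a minimal sketch of what an idempotent migrate script run by the `initCommands` above could look like — the `pg` client, the `items` table, and its columns are illustrative assumptions, not the recipe's actual script:

```ts
// Hypothetical src/scripts/migrate.ts — compiled to dist/scripts/migrate.js.
// Reads the DB_* own-key aliases declared in run.envVariables.
import { Client } from 'pg';

async function migrate(): Promise<void> {
  const client = new Client({
    host: process.env.DB_HOST,
    port: parseInt(process.env.DB_PORT ?? '5432', 10),
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
  });
  await client.connect();
  // Idempotent DDL — safe to re-run on every deploy.
  await client.query(`
    CREATE TABLE IF NOT EXISTS items (
      id SERIAL PRIMARY KEY,
      title TEXT NOT NULL,
      created_at TIMESTAMPTZ NOT NULL DEFAULT now()
    )
  `);
  await client.end();
}

migrate().catch((err) => {
  console.error(err);
  process.exit(1); // non-zero exit lets --retryUntilSuccessful retry
});
```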

### 2. Bind `0.0.0.0` so the L7 balancer can reach the listener

NestJS's `app.listen(port)` binds `127.0.0.1` by default — fine on a laptop, unreachable on Zerops because the L7 balancer routes from the public subdomain into the container's VXLAN IP. Without an explicit host the platform returns 502 even when [`zerops.yaml`](zerops.yaml) exposes the port with `httpSupport: true`. Pass `'0.0.0.0'` as the second argument and read `PORT` from env so the listener stays in sync with `run.ports[].port`.

```ts
const port = parseInt(process.env.PORT ?? '3000', 10);
await app.listen(port, '0.0.0.0');
```

### 3. Trust the reverse proxy

Zerops terminates TLS at the L7 balancer and forwards traffic via reverse proxy with `X-Forwarded-*` headers. Without telling Express to trust those headers, `req.ip` reports the balancer's IP (breaking rate-limiting and audit logging) and `req.protocol` reports `http` (breaking any redirect that composes its own absolute URL).

NestJS uses Express under the hood, so the canonical config reaches the underlying instance and flips `trust proxy` at bootstrap.

```ts
const expressApp = app.getHttpAdapter().getInstance();
expressApp.set('trust proxy', true);
```

### 4. Drain on `SIGTERM` for rolling deploys

Zerops's rolling deploy stops the old container by sending `SIGTERM`. Without an explicit handler the Node process exits immediately, in-flight HTTP requests get a TCP RST, and pending DB / NATS work aborts mid-call. Wire `SIGTERM` (and `SIGINT` for parity) to `app.close()` so Nest drains the HTTP server, lifecycle hooks shut down providers (pg pool, NATS connection), then the process exits cleanly.

```ts
const shutdown = async (signal: string) => {
  console.log(`${signal} received — closing Nest application`);
  try { await app.close(); } catch (err) { console.error(err); }
  process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
```

Pair with `deploy.readinessCheck` in [`zerops.yaml`](zerops.yaml) so the L7 balancer routes traffic to the new container only after it answers HTTP 200 — together they unlock [zero-downtime deploys with multi-container setups](https://docs.zerops.io/features/scaling-ha).

### 5. Alias platform env vars under your own keys

Zerops auto-injects cross-service references as `${db_hostname}`, `${broker_port}`, `${storage_apiUrl}`, etc. — platform-specific names you don't want hard-coded in application code. Re-export each one under your own stable key in [`zerops.yaml`](zerops.yaml) `run.envVariables`, and have the app read only the own-key names. Swapping a managed service later becomes a yaml-only edit.

```yaml
envVariables:
  DB_HOST: ${db_hostname}
  DB_PORT: ${db_port}
  NATS_HOST: ${broker_hostname}
  NATS_PASSWORD: ${broker_password}
  S3_ENDPOINT: ${storage_apiUrl}
  SEARCH_URL: ${search_connectionString}
```

Pick own-key names DIFFERENT from the platform side. Declaring `db_hostname: ${db_hostname}` self-shadows — the per-service `envVariables` write runs after the auto-inject, the literal `${db_hostname}` token wins, and the OS env var becomes the string `"${db_hostname}"`. The same trap fires for project-level secrets (`APP_SECRET: ${APP_SECRET}`) — those already auto-propagate to every container, so re-declaring them under the same name is never necessary. The [per-key env shape and cross-service aliases](https://docs.zerops.io/zerops-yaml/specification#envvariables-) reference covers the full model.
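
On the application side, the code then reads only the own-key names. A minimal sketch of a config module under that assumption (the grouping and fallback defaults are illustrative):

```ts
// Sketch of the application-side half of the pattern — only the own-key
// names declared in run.envVariables appear here; no platform-side ${...}
// tokens leak into application code.
export const config = {
  db: {
    host: process.env.DB_HOST,
    port: parseInt(process.env.DB_PORT ?? '5432', 10),
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
  },
  cache: {
    // Valkey on Zerops is unauthenticated — host + port only.
    host: process.env.CACHE_HOST,
    port: parseInt(process.env.CACHE_PORT ?? '6379', 10),
  },
  nats: {
    servers: `${process.env.NATS_HOST}:${process.env.NATS_PORT}`,
    user: process.env.NATS_USER,
    pass: process.env.NATS_PASSWORD,
  },
};
```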

## How to integrate app with Zerops

### 1. Adding `zerops.yaml`

The main configuration file, placed at the repository root — it tells Zerops how to build, deploy, and run your app. This one declares two setups (`dev`, `prod`).

```yaml
# Two setups for the SPA — `dev` is an SSH workspace running the
# Vite dev server with HMR over an SSHFS mount; `prod` builds the
# bundle once and ships it to Nginx-backed static hosting.
zerops:
  - setup: dev
    build:
      base: nodejs@22
      buildCommands:
        # `npm install` (not `npm ci`) so devDependencies land in
        # node_modules — Vite's HMR server is a devDependency and
        # the porter expects it present after deploy.
        - npm install
      # Ship the full source tree to the runtime mount so every
      # file the porter would edit is there — including config
      # they may need to tweak (`vite.config.ts`, `tailwind.config.js`).
      deployFiles: ./
      cache:
        - node_modules
    run:
      base: nodejs@22
      ports:
        # 5173 is Vite's default dev port; httpSupport publishes it
        # through the L7 router so the dev subdomain reaches the
        # bundler. The same port appears in the workspace's
        # `DEV_FRONTEND_URL` constant the api uses for CORS.
        - port: 5173
          httpSupport: true
      # Idle the container so Vite can be started by hand over SSH —
      # the porter runs `npm run dev` from `/var/www` and edits flow
      # through the SSHFS mount with HMR picking them up live. Tying
      # `start:` to `npm run dev` would mean every edit goes through
      # a redeploy cycle, defeating the watch loop.
      start: zsc noop --silent

  - setup: prod
    build:
      base: nodejs@22
      buildCommands:
        # `npm ci` for reproducible builds — fails fast on lockfile
        # drift, which is the right gate for production.
        - npm ci
        # `npm run build` runs Vite's Rollup pipeline so TypeScript +
        # React compile into a static `dist/` tree of HTML, hashed
        # JS, and CSS bundles — the only artifacts the static runtime
        # needs at request time.
        - npm run build
      deployFiles:
        # `dist/~` strips the leading `dist/` so the build output
        # becomes the document root directly — `index.html` lands
        # at `/index.html`. Without the trailing `~`, Nginx serves
        # from `/dist/` and `/` returns 404.
        - dist/~
      cache:
        - node_modules
      envVariables:
        # Bake the API origin into the JS bundle at build time —
        # Vite inlines `VITE_*` as string literals before deploy and
        # the static runtime has no process to read OS env later.
        # `${API_URL}` is the workspace's project-scope constant,
        # composed from `${zeropsSubdomainHost}` at provision time
        # so it resolves before any peer service first-deploys. Set
        # your own production origin here once you swap apistage for
        # a custom domain.
        VITE_API_URL: ${API_URL}
    run:
      # Nginx-backed static runtime — no Node process at request
      # time, SPA fallback for unmatched routes is built in, ~2 MB
      # RAM per replica. A dynamic `start:` directive is silently
      # ignored on this base; if you add server-rendered routes
      # later, switch to `base: nodejs@22` with an explicit `start:`.
      base: static
```

### 2. Bake the API origin into the SPA at build time

Vite inlines `import.meta.env.VITE_*` constants into the JS bundle at build time. The runtime container serving a `base: static` build is Nginx — there is no Node process to read env vars at request time, so the API origin must be present BEFORE `npm run build` fires.

Reach for the API service's URL via the project-scope `API_URL` constant the workspace exposes, then re-publish it under `VITE_API_URL` inside the `prod` setup's `build.envVariables`:

```yaml
build:
  envVariables:
    VITE_API_URL: ${API_URL}
```

`API_URL` is composed once from `${zeropsSubdomainHost}` at project-provision time, so it resolves before any peer service first-deploys — no deploy-ordering dance. Reading `${api_zeropsSubdomain}` directly works too, but only after the API service has minted its URL, otherwise the literal token ships into the bundle. The project-scope constant skips that ordering window.

For the dev workspace (long-running Vite process, not a static build), set `VITE_API_URL` on the dev service's env via the Zerops UI and restart the dev process — Vite re-reads it on respawn.
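
On the SPA side the baked-in constant is read straight from `import.meta.env` — a minimal sketch, assuming a hypothetical `/api/items` endpoint:

```ts
// Vite replaces import.meta.env.VITE_API_URL with the baked-in string
// literal at build time; there is no runtime env lookup on `base: static`.
const API_URL = import.meta.env.VITE_API_URL;

export async function fetchItems(): Promise<unknown[]> {
  const res = await fetch(`${API_URL}/api/items`);
  if (!res.ok) throw new Error(`API responded ${res.status}`);
  return res.json();
}
```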

### 3. Bind Vite to every interface and accept the platform's subdomain hosts

Vite's dev server defaults to `host: localhost` and rejects any request whose `Host` header is not in its allowlist. Zerops's L7 balancer routes to the container's VXLAN IP — a `127.0.0.1`-bound listener is unreachable, returns 502, and a request that does reach Vite from the project's dev subdomain hits Vite's host check first (`Blocked request. This host is not allowed.`). The [Zerops L7 balancer + subdomain access](https://docs.zerops.io/features/access) reference covers how `httpSupport: true` ports are published through the balancer.

Open [`vite.config.ts`](vite.config.ts) and pin both bindings on both `server` (dev) and `preview` (cross-deploy preview builds):

```ts
export default defineConfig({
  plugins: [react()],
  server: {
    host: '0.0.0.0',
    port: 5173,
    allowedHosts: true,
  },
  preview: {
    host: '0.0.0.0',
    port: 5173,
    allowedHosts: true,
  },
});
```

`allowedHosts: true` is the bundler's intended extension point for hosted dev environments — it accepts every Host header so the dynamic `<host>-${zeropsSubdomainHost}` URLs the project mints for dev and preview both work without re-listing each hostname.

### 4. Strip the build-output prefix and ship to the static runtime

Vite compiles the SPA into a `dist/` tree of HTML + assets — no Node process runs at request time. The right runtime is `base: static`, which is Nginx-backed: ~2 MB RAM per replica versus ~80 MB for an `npx serve` Node process, and SPA fallback (unmatched routes serve `/index.html`) is built in.

Nginx's document root is fixed at the deploy-files root, so `dist/index.html` would land at `/dist/index.html` and `/` would 404. The `~` suffix on a `deployFiles` entry tells Zerops to strip the leading directory before publishing — `dist/~` lands `index.html` at the document root directly:

```yaml
build:
  deployFiles:
    - dist/~
run:
  base: static
```

The `dist/~` shape is the canonical pairing with `base: static`; without the trailing `~`, every static-deploy returns 404 on `/`. The [deploy-files tilde syntax + static runtime](https://docs.zerops.io/zerops-yaml/specification#deployfiles-) reference covers the full strip-prefix semantics + every supported `deployFiles` shape.

## How to integrate worker with Zerops

### 1. Adding `zerops.yaml`

The main configuration file, placed at the repository root — it tells Zerops how to build, deploy, and run your app. This one declares two setups (`prod`, `dev`) and runs `initCommands` at boot (migrations).

```yaml
# Two setups: prod runs the compiled worker as a long-lived NATS
# subscriber; dev ships the source tree under SSHFS so the porter
# SSHes in and runs `npm run start:dev` (nest --watch) by hand.
# Both setups are no-HTTP — the worker has no ports, no
# healthCheck, no readinessCheck.
zerops:
  - setup: prod
    build:
      base: nodejs@22
      buildCommands:
        # Compile TypeScript, then strip devDependencies before
        # the deployFiles step copies node_modules to the runtime
        # container — keeps the deployed bundle lean and avoids
        # shipping ts-node, types, lint tooling.
        - npm ci
        - npm run build
        - npm prune --omit=dev
      # Build container compiles into ./dist; the runtime needs
      # the compiled JS plus production node_modules so node can
      # require modules at startup. package.json ships too because
      # NestJS reads it at boot for metadata.
      deployFiles:
        - ./dist
        - ./node_modules
        - ./package.json
      cache:
        - node_modules
    run:
      base: nodejs@22
      # zsc execOnce keys the migration to the current deploy
      # version: ${appVersionId} changes every deploy so the
      # migrator re-fires per deploy (right for idempotent
      # CREATE TABLE IF NOT EXISTS DDL). The -worker-migrate
      # suffix scopes the lock to this codebase — the api
      # codebase runs its own migrator on the same database
      # with its own -api-migrate suffix, so neither migrator
      # blocks the other. --retryUntilSuccessful absorbs the
      # first-deploy window where Postgres has provisioned but
      # isn't yet accepting connections.
      initCommands:
        - zsc execOnce ${appVersionId}-worker-migrate --retryUntilSuccessful -- node dist/migrate.js
      # Cross-service references renamed under stable own-keys —
      # DB_HOST, NATS_HOST, S3_*, SEARCH_* — so the application
      # code reads platform-neutral names. Swapping a managed
      # service later is a one-line yaml edit, no app rebuild.
      # Same-name aliasing (DB_HOST: ${DB_HOST}) would self-shadow
      # — the literal token wins and the OS env var becomes the
      # string "${...}".
      #
      # NATS is wired as four separate fields (host/port/user/
      # password) instead of ${broker_connectionString} because
      # the nats@2.29 client mis-detects IPv6 by colon-count and
      # rejects auto-generated passwords containing multiple
      # colons. Separate fields side-step the parser entirely.
      #
      # Project-scope envs (APP_SECRET, FRONTEND_URL, API_URL)
      # are NOT redeclared here — they auto-propagate to every
      # container, and redeclaring under the same name would
      # self-shadow into a literal "${APP_SECRET}" string.
      envVariables:
        DB_HOST: ${db_hostname}
        DB_PORT: ${db_port}
        DB_NAME: ${db_dbName}
        DB_USER: ${db_user}
        DB_PASSWORD: ${db_password}
        # Valkey on Zerops runs unauthenticated — no ${cache_user}
        # or ${cache_password} aliases exist, and referencing
        # them would resolve to literal "${cache_password}" and
        # crash ioredis with AUTH errors. Host + port only.
        CACHE_HOST: ${cache_hostname}
        CACHE_PORT: ${cache_port}
        NATS_HOST: ${broker_hostname}
        NATS_PORT: ${broker_port}
        NATS_USER: ${broker_user}
        NATS_PASSWORD: ${broker_password}
        # S3_ENDPOINT reads ${storage_apiUrl} (the full https://
        # form) — composing from ${storage_apiHost} would hit
        # the gateway's plaintext-http 301 redirect that S3 SDKs
        # don't follow, producing UnknownError on the first
        # bucket call. S3_REGION is required by the AWS SDK
        # contract but MinIO ignores its value; us-east-1 is the
        # conventional inert pick.
        S3_ENDPOINT: ${storage_apiUrl}
        S3_ACCESS_KEY_ID: ${storage_accessKeyId}
        S3_SECRET_ACCESS_KEY: ${storage_secretAccessKey}
        S3_BUCKET: ${storage_bucketName}
        S3_REGION: us-east-1
        # Meilisearch internal traffic is plain http on the
        # project network. The worker ingests documents, so it
        # needs the master key — never alias this on a frontend
        # codebase that builds for the browser (use
        # ${search_defaultSearchKey} there instead).
        SEARCH_URL: http://${search_hostname}:${search_port}
        SEARCH_MASTER_KEY: ${search_masterKey}
      # Runs the compiled NestJS standalone application context
      # — boots the DI container, opens NATS / Postgres / Valkey
      # / object-storage / Meilisearch clients, then parks on
      # the NATS subscription iterator. No HTTP server, no port
      # binding, no foreground http listener. The platform sends
      # SIGTERM on rolling deploys; the bootstrap forwards that
      # to NestJS shutdown hooks and the subscription drains
      # before the connection closes.
      start: node dist/main.js

  - setup: dev
    build:
      base: nodejs@22
      buildCommands:
        # npm install (not npm ci) — the dev workflow tolerates
        # lockfile drift while the porter iterates locally; the
        # prod setup above pins to package-lock.json.
        - npm install
      # Full source tree shipped under SSHFS so the porter can
      # edit code in place and nest --watch picks up the changes
      # without a redeploy.
      deployFiles: ./
      cache:
        - node_modules
    run:
      base: nodejs@22
      # Ubuntu provides richer interactive tooling (apt, vim,
      # curl, git) over the default minimal image — useful when
      # the porter SSHes in to inspect or debug the worker.
      os: ubuntu
      # Dev runs the migrator straight from TypeScript source
      # (npx ts-node) so the porter can edit src/migrate.ts and
      # the next deploy picks up the change without a build
      # step. The -workerdev-migrate suffix keeps the dev slot
      # independent of the prod slot's lock.
      initCommands:
        - zsc execOnce ${appVersionId}-workerdev-migrate --retryUntilSuccessful -- npx ts-node src/migrate.ts
      # Same wiring as prod — only the run.start command differs
      # between setups. See the prod block above for the
      # rationale on each alias.
      envVariables:
        DB_HOST: ${db_hostname}
        DB_PORT: ${db_port}
        DB_NAME: ${db_dbName}
        DB_USER: ${db_user}
        DB_PASSWORD: ${db_password}
        CACHE_HOST: ${cache_hostname}
        CACHE_PORT: ${cache_port}
        NATS_HOST: ${broker_hostname}
        NATS_PORT: ${broker_port}
        NATS_USER: ${broker_user}
        NATS_PASSWORD: ${broker_password}
        S3_ENDPOINT: ${storage_apiUrl}
        S3_ACCESS_KEY_ID: ${storage_accessKeyId}
        S3_SECRET_ACCESS_KEY: ${storage_secretAccessKey}
        S3_BUCKET: ${storage_bucketName}
        S3_REGION: us-east-1
        SEARCH_URL: http://${search_hostname}:${search_port}
        SEARCH_MASTER_KEY: ${search_masterKey}
      # `zsc noop --silent` keeps the dev container alive without
      # binding the runtime to a foreground process — the porter
      # SSHes in and runs `npm run start:dev` (nest --watch) by
      # hand. Source edits flow through the SSHFS mount and the
      # watcher rebuilds in place; no redeploy required to see
      # code changes on the dev slot.
      start: zsc noop --silent
```

### 2. Bootstrap as a NestJS standalone application context

A NestJS worker has no HTTP server — there's nothing to serve. Swap `NestFactory.create` for `NestFactory.createApplicationContext` so the process runs the dependency-injection container without binding a port. Pair it with `enableShutdownHooks()` so `OnModuleDestroy` fires when the platform sends `SIGTERM` on a rolling deploy.

```typescript
const app = await NestFactory.createApplicationContext(AppModule, {
  bufferLogs: false,
});
app.enableShutdownHooks();
```

The matching `zerops.yaml` shape for a worker omits `ports:`, `healthCheck:`, and `readinessCheck:` from every setup block — those fields gate on HTTP responses the worker never produces. The platform observes liveness from logs instead, so emit a startup line at boot and a periodic heartbeat for visibility in the runtime log viewer.
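
A minimal sketch of that log-based signal, assuming NestJS's built-in `Logger` — the 60-second interval and the message text are illustrative:

```typescript
// Hypothetical helper called from the worker bootstrap after the NATS
// subscription is up — gives the runtime log viewer something to show.
import { Logger } from '@nestjs/common';

const logger = new Logger('Worker');

export function startHeartbeat(): void {
  logger.log('worker booted — subscribed to showcase.jobs.* (queue group: showcase-workers)');
  setInterval(() => logger.log('heartbeat: worker alive'), 60_000);
}
```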

### 3. Connect to NATS with separate credential fields

Pass the broker's host, port, user, and password as four separate env-var aliases — never compose `nats://user:pass@host:port` by hand. The `nats@2.x` client parses any URL-embedded credentials AND separately attempts SASL with the same values, producing a double-auth attempt the broker rejects with `Authorization Violation` on the first CONNECT frame. The credential-free `servers` string plus `user` / `pass` connect options avoids the double-auth path entirely.

```typescript
import { connect } from 'nats';

const nc = await connect({
  servers: `${process.env.NATS_HOST}:${process.env.NATS_PORT}`,
  user: process.env.NATS_USER,
  pass: process.env.NATS_PASSWORD,
  maxReconnectAttempts: -1,
  reconnectTimeWait: 2_000,
});
```

The shipped `zerops.yaml` aliases the four platform-side keys under `NATS_HOST`, `NATS_PORT`, `NATS_USER`, `NATS_PASSWORD` so the code reads its own names. `${broker_connectionString}` is also offered by the platform, but `nats@2.29` has an IPv6 detection bug that mis-parses auto-generated passwords containing multiple colons — stick with the separate-fields shape. The [managed NATS broker](https://docs.zerops.io/services/nats) reference covers the full set of platform-injected env keys for the service.

### 4. Subscribe in a queue group and drain on SIGTERM

The worker runs `minContainers: 2` on showcase and higher production setups so a rolling deploy keeps the subscription alive while a fresh replica boots. Two replicas plus a plain `nc.subscribe(subject)` fans every message out to BOTH containers — every job gets processed twice. The fix is two-part: pass a stable `queue` group name so the broker delivers each message to exactly one replica in the group, AND on `SIGTERM` call `subscription.drain()` (NOT `unsubscribe()`) so in-flight handlers finish before the connection closes. `unsubscribe()` drops the in-flight message and the next deploy loses one event per replacement.

```typescript
// Subscribe with a stable queue group so the broker delivers each job to
// exactly one replica in the group.
const sub = nc.subscribe('showcase.jobs.*', { queue: 'showcase-workers' });

// In the provider that owns the subscription (requires enableShutdownHooks()):
async onApplicationShutdown(): Promise<void> {
  await sub.drain(); // let in-flight handlers finish before the connection closes
}
```

NestJS calls `OnApplicationShutdown` when `enableShutdownHooks()` is on and the process receives `SIGTERM`. The platform sends `SIGTERM` before pulling the old container during a [zero-downtime deploys with multi-container setups](https://docs.zerops.io/features/scaling-ha); pair `drain()` with the standalone-context bootstrap above and rolling deploys lose zero events.

### 5. Configure the S3 client with path-style addressing

Zerops object-storage is a MinIO backend. The AWS S3 SDK defaults to virtual-hosted addressing (bucket name as a subdomain of the endpoint), which MinIO doesn't support — every bucket call fails with `UnknownError` because the virtual-hosted hostname has no DNS entry. Set `forcePathStyle: true` on the client, and read the endpoint from `${storage_apiUrl}` (a full `https://...` URL) rather than composing it from `${storage_apiHost}`: the gateway returns a 301 from `http://` to `https://` that S3 SDKs don't follow.

```typescript
import { S3Client } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT,
  region: process.env.S3_REGION ?? 'us-east-1',
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID,
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
  },
  forcePathStyle: true,
});
```

`S3_REGION` is required by the SDK constructor (every AWS SDK refuses to build without it) but MinIO ignores the value — `us-east-1` is the conventional inert choice. The shipped `zerops.yaml` aliases `${storage_apiUrl}`, `${storage_accessKeyId}`, `${storage_secretAccessKey}`, and `${storage_bucketName}` under `S3_*` own-key names so the application reads platform-neutral env vars. The [S3-compatible storage on the MinIO backend](https://docs.zerops.io/services/object-storage) reference covers the gateway URL shape + the per-key env keys the managed service emits.
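
A quick usage sketch with the client above — the object key, body, and content type are illustrative; the bucket comes from the `${storage_bucketName}` alias:

```typescript
import { PutObjectCommand } from '@aws-sdk/client-s3';

// Path-style addressing means this resolves as <endpoint>/<bucket>/<key>,
// which is what the MinIO backend expects.
await s3.send(new PutObjectCommand({
  Bucket: process.env.S3_BUCKET,
  Key: 'uploads/example.txt',
  Body: Buffer.from('hello from the worker'),
  ContentType: 'text/plain',
}));
```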

### 🎯 What's next?

**Deploy other environments** — Ready to scale? Deploy additional environments for different stages of your workflow:

- [AI Agent](https://app.zerops.io/recipes/nestjs-showcase.md?environment=ai-agent)
- [Remote (CDE)](https://app.zerops.io/recipes/nestjs-showcase.md?environment=remote-cde)
- [Local](https://app.zerops.io/recipes/nestjs-showcase.md?environment=local)
- [Stage](https://app.zerops.io/recipes/nestjs-showcase.md?environment=stage)
- [Highly-available Production](https://app.zerops.io/recipes/nestjs-showcase.md?environment=highly-available-production)

## Knowledge Base

### Platform Reference

- [Routing & Domains](https://docs.zerops.io/features/access)
- [Scaling](https://docs.zerops.io/features/scaling)
- [Environment Variables](https://docs.zerops.io/features/env-variables)
- [CLI (zcli)](https://docs.zerops.io/references/cli)

### Service Type Reference

**Node.js**

- [Build & Deploy](https://docs.zerops.io/nodejs/how-to/build-pipeline)
- [Customize Runtime](https://docs.zerops.io/nodejs/how-to/customize-runtime)

**Static**

- [Configuration](https://docs.zerops.io/static/overview#routing-configuration)
- [SEO Setup](https://docs.zerops.io/static/overview#seo-with-prerender)
- [Frameworks](https://docs.zerops.io/static/overview#framework-integration)

**PostgreSQL**

- [Connect](https://docs.zerops.io/postgresql/how-to/connect)
- [Backup & Restore](https://docs.zerops.io/postgresql/how-to/backup)
- [Manage](https://docs.zerops.io/postgresql/how-to/manage)
- [Scale](https://docs.zerops.io/postgresql/how-to/scale)

**Valkey**

- [Configuration & Access](https://docs.zerops.io/valkey/overview#service-configuration)

**NATS**

- [Configuration](https://docs.zerops.io/nats/overview#service-configuration)
- [Monitoring](https://docs.zerops.io/nats/overview#health-monitoring)
- [Backup & Restore](https://docs.zerops.io/nats/overview#backup-and-recovery)

**Meilisearch**

- [Configuration](https://docs.zerops.io/meilisearch/overview#service-configuration)
- [Access](https://docs.zerops.io/typesense/overview#access-methods)
- [Backup & Restore](https://docs.zerops.io/typesense/overview#backup)

### Application Reference

#### api Knowledge Base

##### Object-storage runs on a MinIO backend

The data plane (PutObject, GetObject, ListObjectsV2, multipart uploads, pre-signed URLs) is S3-compatible end-to-end. Control-plane and AWS-only features aren't on the managed backend: no archival tiers (Glacier, Deep Archive), no Object Lock / WORM, no lifecycle rules, no S3 Select or Inventory, no Transfer Acceleration, no CloudFront integration. Adapting this recipe to a product that needs any of those means planning a separate AWS S3 tier alongside; the rest of the recipe stays intact.
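
As an illustration of that data-plane compatibility, a pre-signed download URL works the same as against AWS S3 — a sketch assuming the path-style `S3Client` from the worker integration section, with an illustrative object key:

```typescript
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Time-limited download link served straight from the managed bucket.
const downloadUrl = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: process.env.S3_BUCKET, Key: 'uploads/report.pdf' }),
  { expiresIn: 3600 }, // one hour
);
```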

#### app Knowledge Base

##### `${API_URL}` resolves at project creation, not on subdomain rotation

The project-scope `API_URL` / `FRONTEND_URL` constants compose from `${zeropsSubdomainHost}` once at provision time and don't auto-track if you later swap the `api` subdomain for a custom domain. The SPA bakes `VITE_API_URL: ${API_URL}` at build time, so swapping the api origin means updating `API_URL` in the Zerops UI's project envs AND redeploying `prod` so a fresh build picks up the new value. Dev is exempt — Vite is long-lived, so a `VITE_API_URL` change plus a Vite restart is enough.

##### `base: static` is Nginx — no Node at request time

The app ships a compiled Vite bundle to an Nginx-backed runtime: ~2 MB RAM per replica, SPA fallback built in. Anything that needs request-time code — server-rendered routes (Next.js / Nuxt server components), dynamic redirects, edge functions, BFF endpoints — requires switching to `base: nodejs@22` with an explicit `start:` and the runtime cost balloons to ~80 MB per replica. If your product is hybrid SSR/SPA, plan the runtime model up front rather than as an afterthought.

#### worker Knowledge Base

##### Meilisearch is single-node across every tier

The recipe ships `mode: NON_HA` for the `search` service on every tier, including HA Production — Zerops's managed Meilisearch is single-node with vertical autoscaling. Production scaling is bounded by the per-instance heap; horizontal sharding isn't a one-yaml-edit upgrade. If your index will grow past ~10M documents or query QPS spikes past single-node throughput, plan a vertical bump (`verticalAutoscaling.minRam` in [zerops.yaml](zerops.yaml)) before the ceiling, or budget for an external search service.

##### Each DDL-owning codebase needs its own `execOnce` migrator key

The recipe scopes api and worker migrations under separate `${appVersionId}-api-migrate` and `${appVersionId}-worker-migrate` execOnce keys so they don't contend on a shared lock. If you add a third codebase that issues DDL on the same database, give it its own role-suffixed key (`${appVersionId}-<codebase>-migrate`) — sharing the api or worker key burns the per-deploy gate for whichever migrator loses the race.

---

## Related Recipes

- [Nest.js minimal](https://app.zerops.io/recipes/nestjs-minimal.md)
- [Node.js Hello World](https://app.zerops.io/recipes/node-js-hello-world.md)
- [Bun Hello World](https://app.zerops.io/recipes/bun-hello-world.md)
- [Go Hello World](https://app.zerops.io/recipes/go-hello-world.md)

