Initial commit: _deploy_app skill

Deploy new apps or push updates to existing deployments via Docker Compose + Caddy + Gitea webhooks. Multi-server profiles, auto-detection of deployment status, full infrastructure provisioning.

- SKILL.md: 715-line workflow documentation
- scripts/detect_deployment.py: deployment status detection
- scripts/validate_compose.py: compose file validation
- references/: infrastructure, compose patterns, Caddy patterns
- assets/: Makefile and compose templates
- config.json: mew server profile

---

**references/caddy-patterns.md** (new file, 252 lines)
# Caddyfile Patterns Reference

Reusable Caddyfile site block patterns for the mew server. All blocks go in `/data/docker/caddy/Caddyfile`. After editing, reload or restart Caddy (see infrastructure.md for details).

---
## 1. Standard Reverse Proxy

The most common pattern. Terminate TLS, compress responses, and forward to a container.

```
# === My App ===
myapp.lavender-daydream.com {
    encode zstd gzip
    reverse_proxy myapp:3000
}
```

### Breakdown

- **Domain line**: Caddy automatically provisions a Let's Encrypt certificate for this domain.
- **`encode zstd gzip`**: Compress responses with zstd (preferred) or gzip (fallback). Include this in every site block.
- **`reverse_proxy myapp:3000`**: Forward requests to the container named `myapp` on port 3000. Caddy resolves the container name via the shared `proxy` Docker network.

### Prerequisites

- DNS A record pointing the domain to `155.94.170.136`.
- The target container is running and joined to the `proxy` network.
- The container name and port match what is specified in the `reverse_proxy` directive.

---
## 2. WebSocket Support

For applications that use WebSocket connections (chat apps, real-time dashboards, collaborative editors, etc.).

```
# === Real-time App ===
realtime.lavender-daydream.com {
    encode zstd gzip
    reverse_proxy realtime-app:3000 {
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
    }
}
```

### Notes

- Caddy 2 handles WebSocket upgrades transparently. There is no special `websocket` directive needed — `reverse_proxy` detects the `Upgrade: websocket` header and handles the protocol switch automatically.
- The `header_up` directives forward the real client IP and protocol to the backend, which is important for applications that log connections or enforce security based on client IP.
- If the application uses a non-standard WebSocket path (e.g., `/ws` or `/socket.io`), this pattern still works without changes — Caddy proxies all paths by default.

---
## 3. Multiple Domains

Serve the same application from multiple domains (e.g., bare domain and `www` subdomain, or a vanity domain alongside the primary).

```
# === My App (multi-domain) ===
myapp.lavender-daydream.com, www.myapp.lavender-daydream.com {
    encode zstd gzip
    reverse_proxy myapp:3000
}
```

### With Redirect

Redirect one domain to the canonical domain instead of serving from both:

```
# === My App (canonical redirect) ===
www.myapp.lavender-daydream.com {
    redir https://myapp.lavender-daydream.com{uri} permanent
}

myapp.lavender-daydream.com {
    encode zstd gzip
    reverse_proxy myapp:3000
}
```

### Notes

- Caddy provisions separate TLS certificates for each domain listed.
- Ensure DNS A records exist for every domain in the site block.
- Use `permanent` (301) redirects for SEO-friendly canonical domain enforcement.
- The `{uri}` placeholder preserves the request path and query string during the redirect.

---
## 4. HTTPS Upstream

For services that speak HTTPS internally (e.g., Cockpit, some management UIs). Caddy must be told to connect to the upstream over TLS.

```
# === Cockpit ===
cockpit.lavender-daydream.com {
    encode zstd gzip
    reverse_proxy https://cockpit:9090 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}
```

### Notes

- Prefix the upstream address with `https://` to instruct Caddy to connect over TLS.
- `tls_insecure_skip_verify` disables certificate verification for the upstream connection. Use it when the upstream uses a self-signed certificate, which is common for management interfaces like Cockpit.
- Do NOT use `tls_insecure_skip_verify` if the upstream has a valid, trusted certificate — remove the entire `transport` block in that case.
- This pattern is uncommon. Most containers speak plain HTTP internally, and Caddy handles TLS termination on the frontend only.

---
## 5. Rate Limiting

Protect sensitive endpoints (login forms, APIs, webhooks) from abuse with rate limiting.

```
# === Rate-Limited App ===
myapp.lavender-daydream.com {
    encode zstd gzip

    # Rate limit login endpoint: 10 requests per minute per IP
    @login {
        path /api/auth/login
    }
    rate_limit @login {
        zone login_zone {
            key {remote_host}
            events 10
            window 1m
        }
    }

    # Rate limit API endpoints: 60 requests per minute per IP
    @api {
        path /api/*
    }
    rate_limit @api {
        zone api_zone {
            key {remote_host}
            events 60
            window 1m
        }
    }

    reverse_proxy myapp:3000
}
```

### Notes

- Rate limiting requires the `caddy-ratelimit` plugin. Verify it is included in the Caddy build before using these directives. If it is not available, implement rate limiting at the application level instead.
- The `@name` syntax defines a named matcher that scopes the rate limit to specific paths.
- `key {remote_host}` rate-limits per client IP address.
- `events` is the maximum number of requests allowed within the `window` period.
- Clients that exceed the limit receive a `429 Too Many Requests` response.
- Apply stricter limits to authentication endpoints and more generous limits to general API usage.

### Alternative: Application-Level Rate Limiting

If the Caddy rate-limit plugin is not installed, skip the `rate_limit` directives and use the standard reverse proxy pattern. Configure rate limiting within the application instead (e.g., `express-rate-limit` for Node.js, `slowapi` for FastAPI).

---
## 6. Path-Based Routing

Route different URL paths to different backend services. Common for monorepo deployments where `/api` goes to a backend service and `/` goes to a frontend.

```
# === Full-Stack App (path-based) ===
myapp.lavender-daydream.com {
    encode zstd gzip

    # API requests → backend container
    handle /api/* {
        reverse_proxy myapp-api:8000
    }

    # WebSocket endpoint → backend container
    handle /ws/* {
        reverse_proxy myapp-api:8000
    }

    # Everything else → frontend container
    handle {
        reverse_proxy myapp-frontend:80
    }
}
```

### Notes

- `handle` blocks are evaluated in the order they appear. More specific paths must come before the catch-all.
- The final `handle` (with no path argument) is the catch-all — it matches everything not matched above.
- Use `handle_path` instead of `handle` if you need to strip the path prefix before forwarding. For example:

  ```
  handle_path /api/* {
      reverse_proxy myapp-api:8000
  }
  ```

  This strips `/api` from the request path, so `/api/users` becomes `/users` when it reaches the backend. Only use this if the backend does not expect the `/api` prefix.
- Ensure all referenced containers (`myapp-api`, `myapp-frontend`) are on the `proxy` network.

### Variation: Static Files + API

Serve static files directly from Caddy for the frontend, with API requests proxied to a backend:

```
# === Static Frontend + API Backend ===
myapp.lavender-daydream.com {
    encode zstd gzip

    handle /api/* {
        reverse_proxy myapp-api:8000
    }

    handle {
        root * /srv/myapp/dist
        try_files {path} /index.html
        file_server
    }
}
```

This requires the static files to be accessible from within the Caddy container (via a volume mount).

---
## Universal Conventions

Apply these conventions to every site block:

1. **Comment header**: Place `# === App Name ===` above each site block.
2. **Compression**: Always include `encode zstd gzip` as the first directive.
3. **Container names**: Use container names, not IP addresses, in `reverse_proxy`.
4. **One domain per block** unless intentionally serving multiple domains (pattern 3).
5. **Order matters**: Place more specific `handle` blocks before less specific ones.
6. **Test after changes**: After modifying the Caddyfile, reload Caddy and verify the site responds:

   ```bash
   docker exec caddy caddy reload --config /etc/caddy/Caddyfile
   curl -I https://myapp.lavender-daydream.com
   ```

   If reload fails, check Caddy logs:

   ```bash
   docker logs caddy --tail 50
   ```

---

**references/compose-patterns.md** (new file, 351 lines)
# Docker Compose Patterns Reference

Reusable `docker-compose.yaml` templates for common application types deployed on mew. Every template includes the external `proxy` network required for Caddy reverse proxying.

---
## 1. Node.js / Express with Dockerfile Build

Build a Node.js app from a local Dockerfile. The container exposes an internal port that Caddy proxies to.

```yaml
version: "3.8"

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: myapp
    restart: unless-stopped
    expose:
      - "3000"
    environment:
      - NODE_ENV=production
      - PORT=3000
    env_file:
      - .env
    networks:
      - proxy

networks:
  proxy:
    name: proxy
    external: true
```

### Companion Dockerfile

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

### Notes

- Use `expose` (not `ports`) to keep the port internal to Docker networks only.
- Set `container_name` to a unique, descriptive name — Caddy uses this name in its `reverse_proxy` directive.
- The app listens on port 3000 inside the container. Caddy reaches it via `myapp:3000`.

---
## 2. Python / FastAPI with Dockerfile Build

Build a Python FastAPI app from a local Dockerfile. Uses Uvicorn as the ASGI server.

```yaml
version: "3.8"

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: myapi
    restart: unless-stopped
    expose:
      - "8000"
    environment:
      - PYTHONUNBUFFERED=1
    env_file:
      - .env
    networks:
      - proxy

networks:
  proxy:
    name: proxy
    external: true
```

### Companion Dockerfile

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

### Notes

- `PYTHONUNBUFFERED=1` ensures log output appears immediately in `docker compose logs`.
- For production, consider adding `--workers 4` to the Uvicorn command or switching to Gunicorn with Uvicorn workers.
- Caddy reaches this via `myapi:8000`.

---
## 3. Static Site (nginx)

Serve pre-built static files (HTML, CSS, JS) via nginx.

```yaml
version: "3.8"

services:
  app:
    image: nginx:alpine
    container_name: mysite
    restart: unless-stopped
    expose:
      - "80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - proxy

networks:
  proxy:
    name: proxy
    external: true
```

### Companion nginx.conf

```nginx
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}
```

### Notes

- Mount the build output directory (e.g., `./dist`) into the nginx html root.
- The `try_files` fallback to `/index.html` supports client-side routing (React Router, Vue Router, etc.).
- Mount the nginx config as read-only (`:ro`).
- Caddy reaches this via `mysite:80`.

---
## 4. Pre-built Image Only

Pull and run a published Docker image with no local build. Suitable for off-the-shelf applications like wikis, dashboards, and link pages.

```yaml
version: "3.8"

services:
  app:
    image: lscr.io/linuxserver/bookstack:latest
    container_name: bookstack
    restart: unless-stopped
    expose:
      - "6875"
    env_file:
      - .env
    volumes:
      - ./data:/config
    networks:
      - proxy

networks:
  proxy:
    name: proxy
    external: true
```

### Notes

- Replace the `image` and `expose` port with whatever the application requires.
- Check the image documentation for required environment variables and volume mount paths.
- Persist application data by mounting a local `./data` directory.
- Caddy reaches this via `bookstack:6875`.

---
## 5. App with PostgreSQL Database

A two-service stack with an application and a PostgreSQL database. The database is on an internal-only network. The app joins both the internal and proxy networks.

```yaml
version: "3.8"

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: myapp
    restart: unless-stopped
    expose:
      - "3000"
    env_file:
      - .env
    depends_on:
      db:
        condition: service_healthy
    networks:
      - proxy
      - internal

  db:
    image: postgres:16-alpine
    container_name: myapp-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-myapp}
      POSTGRES_USER: ${POSTGRES_USER:-myapp}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?Set POSTGRES_PASSWORD in .env}
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-myapp}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal

volumes:
  pgdata:

networks:
  proxy:
    name: proxy
    external: true
  internal:
    driver: bridge
```

### Notes

- The database is **only** on the `internal` network — it is not reachable from Caddy or any other container outside this stack.
- The app is on **both** `proxy` (so Caddy can reach it) and `internal` (so it can reach the database).
- `depends_on` with `condition: service_healthy` ensures the app waits for PostgreSQL to be ready before starting.
- The `${POSTGRES_PASSWORD:?...}` syntax causes compose to fail with an error if the variable is not set, preventing accidental deploys with no database password.
- Use a named volume (`pgdata`) for database persistence.
- In the app's `.env`, set the database URL:

  ```
  DATABASE_URL=postgresql://myapp:secretpassword@myapp-db:5432/myapp
  ```

  Note the hostname is the database container name (`myapp-db`), not `localhost`.
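To see exactly what the app receives from that URL, it can be decomposed with the standard library — a quick illustrative check (not part of the stack itself):

```python
from urllib.parse import urlparse

# The DATABASE_URL from .env, exactly as the app sees it inside the container
url = urlparse("postgresql://myapp:secretpassword@myapp-db:5432/myapp")

print(url.hostname)           # myapp-db — the container name, resolved on the internal network
print(url.port)               # 5432
print(url.path.lstrip("/"))   # myapp — the database name
```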

---
## 6. App with Environment File

Pattern for managing configuration through `.env` files with a `.env.example` template checked into version control.

```yaml
version: "3.8"

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: myapp
    restart: unless-stopped
    expose:
      - "3000"
    env_file:
      - .env
    networks:
      - proxy

networks:
  proxy:
    name: proxy
    external: true
```

### Companion .env.example

Check this file into version control as a template. The actual `.env` file contains secrets and is listed in `.gitignore` on public repos only (on private Gitea repos, `.env` is committed per project conventions).

```env
# Application
NODE_ENV=production
PORT=3000
APP_URL=https://myapp.lavender-daydream.com

# Database (if applicable)
DATABASE_URL=postgresql://user:password@myapp-db:5432/myapp

# Secrets
SESSION_SECRET=generate-a-random-string-here
API_KEY=your-api-key-here

# Email (Mailgun)
MAILGUN_API_KEY=
MAILGUN_DOMAIN=
MAILGUN_FROM=noreply@lavender-daydream.com

# Deploy listener webhook secret (must match /etc/deploy-listener/deploy-listener.env)
WEBHOOK_SECRET=must-match-deploy-listener
```

### Notes

- The `env_file` directive in compose loads all variables from `.env` into the container environment.
- Variables from `.env` are also usable for compose interpolation (`${VAR}` syntax in the compose file), but only because Compose automatically loads a file named `.env` from the project directory — the `env_file` directive itself does not feed interpolation.
- Always provide a `.env.example` with placeholder values and comments explaining each variable.
- For the deploy listener to work, the repo's webhook secret must match the value in `/etc/deploy-listener/deploy-listener.env`.

---
## Universal Compose Conventions

These conventions apply to ALL stacks on mew:

1. **Always include the proxy network** if Caddy needs to reach the container:

   ```yaml
   networks:
     proxy:
       name: proxy
       external: true
   ```

2. **Use `expose`, not `ports`**: Keep ports internal to Docker networks. Never bind to the host unless absolutely necessary.

3. **Set `container_name` explicitly**: Caddy resolves containers by name. Avoid auto-generated names.

4. **Set `restart: unless-stopped`**: Containers restart automatically after crashes or server reboots, but stay stopped if manually stopped.

5. **Use `env_file` for secrets**: Do not hardcode secrets in the compose file.

6. **Use health checks** for databases and critical dependencies to ensure proper startup ordering.

7. **Persist data with named volumes or bind mounts**: Never rely on container-internal storage for important data.

---

**references/infrastructure.md** (new file, 431 lines)
# Infrastructure Reference — mew Server (155.94.170.136)

This document describes every infrastructure component on the mew server relevant to deploying Docker Compose applications behind Caddy with automated Gitea-triggered deployments.

---
## 1. Deploy Listener

### Overview

A Python webhook listener that receives push events from Gitea/Forgejo and automatically deploys the corresponding Docker Compose stack.

### Filesystem Locations

| Item | Path |
|------|------|
| Script | `/usr/local/bin/deploy-listener.py` |
| Systemd unit | `deploy-listener.service` |
| Deploy map | `/etc/deploy-listener/deploy-map.json` |
| Environment file | `/etc/deploy-listener/deploy-listener.env` |
| Service user home | `/var/lib/deploy` |

### Service User

- **User**: `deploy`
- **Groups**: `docker`, `git`
- **Home directory**: `/var/lib/deploy`

The `deploy` user has Docker socket access through the `docker` group and repository access through the `git` group.

### Network Binding

- **Port**: 50500
- **Bind address**: 0.0.0.0
- **Firewall**: UFW blocks external access to port 50500. Only Docker's internal 10.0.0.0/8 range is allowed. Caddy reaches the listener at `10.0.12.1:50500` (the proxy network gateway).

### Deploy Map

Location: `/etc/deploy-listener/deploy-map.json`

Format — a JSON object mapping `owner/repo` to the absolute path of the compose directory:

```json
{
  "darren/compose-bookstack": "/srv/git/compose-bookstack",
  "darren/compose-linkstack": "/srv/git/compose-linkstack",
  "darren/my-app": "/srv/git/my-app"
}
```

Add a new entry to this file for every application that should be auto-deployed on push.
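Updating the map can be scripted as read, modify, write back atomically. A small sketch — the helper name and the atomic-rename approach are illustrative, not part of the listener itself:

```python
import json
import os
import tempfile

def add_deploy_map_entry(map_path: str, repo: str, compose_dir: str) -> dict:
    """Add an owner/repo -> compose-dir entry to the deploy map, writing atomically."""
    try:
        with open(map_path) as f:
            deploy_map = json.load(f)
    except FileNotFoundError:
        deploy_map = {}
    deploy_map[repo] = compose_dir
    # Write to a temp file in the same directory, then rename over the original,
    # so a crash mid-write never leaves a truncated deploy-map.json.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(map_path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(deploy_map, f, indent=2)
        f.write("\n")
    os.replace(tmp, map_path)
    return deploy_map
```

Run as root, e.g. `add_deploy_map_entry("/etc/deploy-listener/deploy-map.json", "darren/my-app", "/srv/git/my-app")`.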

### Environment File

Location: `/etc/deploy-listener/deploy-listener.env`

```env
WEBHOOK_SECRET=<the-shared-secret>
LISTEN_PORT=50500
```

The `WEBHOOK_SECRET` value must match the secret configured in each Gitea/Forgejo webhook.

### Request Validation & Behavior

1. **HMAC-SHA256 validation**: The listener reads the `X-Gitea-Signature` or `X-Forgejo-Signature` header and validates the request body against the `WEBHOOK_SECRET` using HMAC-SHA256. Requests that fail validation are rejected.
2. **Branch filter**: Only pushes to `main` or `master` (checked via the `ref` field) trigger a deploy. All other branches are ignored.
3. **Deploy map lookup**: The `repository.full_name` field (e.g., `darren/my-app`) is looked up in the deploy map. If not found, the request is ignored.
4. **Deploy sequence**: On a valid push, the listener executes:

   ```bash
   cd /srv/git/my-app
   git pull
   docker compose pull
   docker compose up -d
   ```

5. **Concurrency control**: A file lock prevents concurrent deploys. If a deploy is already running, the incoming request is queued or rejected.
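The validation in step 1 amounts to a constant-time comparison of hex HMAC digests — Gitea and Forgejo both send the hex SHA-256 HMAC of the raw request body. A sketch of the check (the function name is illustrative):

```python
import hashlib
import hmac

def signature_is_valid(secret: str, body: bytes, signature_header: str) -> bool:
    """Check an X-Gitea-Signature / X-Forgejo-Signature header against the shared secret."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the expected digest through timing differences
    return hmac.compare_digest(expected, signature_header)
```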

### Health Check

Verify the listener is running:

```bash
curl https://deploy.lavender.spl.tech/health
```

A successful response confirms the listener is reachable through Caddy and functioning.

### Systemd Management

```bash
# Check status
sudo systemctl status deploy-listener

# Restart
sudo systemctl restart deploy-listener

# View logs
sudo journalctl -u deploy-listener -f
```

---
## 2. Caddy Reverse Proxy

### Overview

Caddy serves as the TLS-terminating reverse proxy for all applications on mew. It automatically provisions and renews certificates via Let's Encrypt.

### Filesystem Locations

| Item | Path |
|------|------|
| Caddyfile | `/data/docker/caddy/Caddyfile` |
| Compose file | `/data/docker/caddy/docker-compose.yaml` |
| Container name | `caddy` |
| Image | `caddy:2-alpine` |

### Network

- **Network name**: `proxy`
- **Type**: external Docker network
- **Subnet**: 10.0.12.0/24
- **Gateway**: 10.0.12.1
- All application containers MUST join the `proxy` network for Caddy to reach them by container name.

### TLS

- **Method**: Automatic via Let's Encrypt
- **Email**: `postmaster@lavender-daydream.com`
- No manual certificate management required. Caddy handles provisioning, renewal, and OCSP stapling automatically.

### Deploy Endpoint

The deploy listener is exposed externally through Caddy:

```
deploy.lavender.spl.tech → 10.0.12.1:50500
```

This routes through the proxy network gateway to the host-bound deploy listener.

### Reloading the Caddyfile

**Standard reload** (when the Caddyfile content changed but its inode is the same):

```bash
docker exec caddy caddy reload --config /etc/caddy/Caddyfile
```

**Full restart** (required when the Caddyfile inode changed, e.g., after replacing the file rather than editing in place):

```bash
cd /data/docker/caddy && docker compose restart caddy
```

Always check whether the file was edited in place or replaced. If replaced, you MUST restart rather than reload.
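The inode distinction is easy to verify: appending to a file keeps its inode, while renaming a new file over the old path changes it (which is what invalidates the file bind mount inside the container). A quick stdlib demonstration:

```python
import os
import tempfile

d = tempfile.mkdtemp()
caddyfile = os.path.join(d, "Caddyfile")

with open(caddyfile, "w") as f:
    f.write("# v1\n")
inode_before = os.stat(caddyfile).st_ino

# In-place edit: same file on disk, same inode -> `caddy reload` works
with open(caddyfile, "a") as f:
    f.write("# v2\n")
assert os.stat(caddyfile).st_ino == inode_before

# Replace: a new file renamed over the old path -> new inode, restart required
replacement = os.path.join(d, "Caddyfile.new")
with open(replacement, "w") as f:
    f.write("# v3\n")
os.replace(replacement, caddyfile)
assert os.stat(caddyfile).st_ino != inode_before
```

Editors and tools that write a temp file and rename it (including `sed -i`) fall in the second category.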

### Site Block Format

Follow this exact format when adding new site blocks to the Caddyfile:

```
# === App Name ===
domain.example.com {
    encode zstd gzip
    reverse_proxy container_name:port
}
```

- Place the comment header (`# === App Name ===`) above each block for readability.
- Always include `encode zstd gzip` for compression.
- Use the container name (not IP) in the `reverse_proxy` directive — Caddy resolves container names on the proxy network.

---
## 3. Gitea API

### Connection Details

| Item | Value |
|------|-------|
| Internal URL (from mew host) | `http://10.0.12.5:3000` |
| External URL | `https://git.lavender-daydream.com` |
| API base path | `/api/v1` |
| Token location | `~/.claude/secrets/gitea.json` |

### Authentication

Include the token as a header on every API request:

```
Authorization: token {GITEA_TOKEN}
```

### Key Endpoints

#### Check if a repo exists

```
GET /api/v1/repos/{owner}/{repo}
```

- **200**: Repo exists (response includes repo details).
- **404**: Repo does not exist.

#### Create a new repo

```
POST /api/v1/user/repos
Content-Type: application/json

{
  "name": "my-app",
  "private": false,
  "auto_init": false
}
```

Set `auto_init` to `false` when pushing an existing local repo. Set it to `true` if you want Gitea to create an initial commit.

#### Add a webhook

```
POST /api/v1/repos/{owner}/{repo}/hooks
Content-Type: application/json

{
  "type": "gitea",
  "active": true,
  "branch_filter": "main master",
  "config": {
    "url": "https://deploy.lavender.spl.tech/webhook",
    "content_type": "json",
    "secret": "<WEBHOOK_SECRET>"
  },
  "events": ["push"]
}
```

The `secret` in the webhook config MUST match the `WEBHOOK_SECRET` in `/etc/deploy-listener/deploy-listener.env`.
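Scripted, the two pieces needed for any of these calls are the auth headers and the JSON body. A minimal payload-builder sketch (function names are illustrative; POST the body with the headers to the endpoint above using any HTTP client):

```python
import json

def gitea_headers(token: str) -> dict:
    """Auth and content-type headers for Gitea/Forgejo API calls."""
    return {
        "Authorization": f"token {token}",
        "Content-Type": "application/json",
    }

def push_webhook_payload(deploy_url: str, secret: str) -> dict:
    """Body for POST /api/v1/repos/{owner}/{repo}/hooks."""
    return {
        "type": "gitea",
        "active": True,
        "branch_filter": "main master",
        "config": {
            "url": deploy_url,
            "content_type": "json",
            "secret": secret,
        },
        "events": ["push"],
    }

body = json.dumps(push_webhook_payload(
    "https://deploy.lavender.spl.tech/webhook", "<WEBHOOK_SECRET>"))
```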

#### List repos

```
GET /api/v1/repos/search?limit=50
```

Returns up to 50 repositories. Use the `page` parameter for pagination.

---
## 4. Forgejo API

### Connection Details

| Item | Value |
|------|-------|
| Container name | `forgejo` |
| Internal port | 3000 |
| External URL | `https://forgejo.lavender-daydream.com` |
| SSH port | 2223 |

### API Compatibility

Forgejo is a fork of Gitea. The API format, endpoints, authentication, and request/response structures are identical to those documented in the Gitea section above. Use the same patterns — just substitute the Forgejo base URL.

### SSH Access

```bash
git remote add forgejo ssh://git@forgejo.lavender-daydream.com:2223/owner/repo.git
```

---
## 5. Cloudflare DNS

### Token & Zone Configuration

Location: `~/.claude/secrets/cloudflare.json`

Format:

```json
{
  "CLOUDFLARE_API_TOKEN": "your-api-token-here",
  "zones": {
    "lavender-daydream.com": "zone_id_for_lavender_daydream",
    "spl.tech": "zone_id_for_spl_tech"
  }
}
```

### Authentication

Include the token as a Bearer header:

```
Authorization: Bearer {CLOUDFLARE_API_TOKEN}
```

### Create an A Record

```
POST https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records
Content-Type: application/json

{
  "type": "A",
  "name": "{subdomain}",
  "content": "155.94.170.136",
  "ttl": 1,
  "proxied": false
}
```

- **`name`**: The subdomain portion (e.g., `myapp` for `myapp.lavender-daydream.com`), or the full FQDN.
- **`content`**: Always `155.94.170.136` (mew's public IP).
- **`ttl`**: `1` means automatic TTL.
- **`proxied`**: Set to `false` so Caddy handles TLS directly. Setting it to `true` would route through Cloudflare's proxy and interfere with Let's Encrypt.

### Choosing the Zone

Pick the zone based on the desired domain suffix:

- `*.lavender-daydream.com` → use the `lavender-daydream.com` zone ID
- `*.spl.tech` → use the `spl.tech` zone ID
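Zone selection and the record body can both be derived mechanically from the FQDN. A sketch (function names are illustrative; the `zones` dict mirrors the structure of cloudflare.json):

```python
MEW_IP = "155.94.170.136"

def pick_zone(fqdn: str, zones: dict) -> tuple[str, str]:
    """Return (zone_name, zone_id) for the longest domain suffix matching the FQDN."""
    for zone_name in sorted(zones, key=len, reverse=True):
        if fqdn == zone_name or fqdn.endswith("." + zone_name):
            return zone_name, zones[zone_name]
    raise ValueError(f"no configured zone matches {fqdn}")

def a_record_body(fqdn: str) -> dict:
    """Body for POST /zones/{zone_id}/dns_records, per the conventions above."""
    return {"type": "A", "name": fqdn, "content": MEW_IP, "ttl": 1, "proxied": False}
```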

---
## 6. Docker Networking

### The `proxy` Network

| Property | Value |
|----------|-------|
| Name | `proxy` |
| Subnet | 10.0.12.0/24 |
| Gateway | 10.0.12.1 |
| Type | External (created once, referenced by all stacks) |

### Requirements

- **Every application container** that Caddy must reach MUST join the `proxy` network.
- Caddy resolves container names to IPs on this network — use container names (not IPs) in `reverse_proxy` directives.
- The network is created externally (not by any single compose file). If it does not exist, create it:

```bash
docker network create --subnet=10.0.12.0/24 --gateway=10.0.12.1 proxy
```

### Compose Configuration

Every compose file that needs Caddy access must include:

```yaml
networks:
  proxy:
    name: proxy
    external: true
```

And each service that Caddy proxies to must list `proxy` in its `networks` key:

```yaml
services:
  app:
    # ...
    networks:
      - proxy
```

If the stack also has internal-only services (e.g., a database), create an additional internal network:

```yaml
networks:
  proxy:
    name: proxy
    external: true
  internal:
    driver: bridge
```

---
## 7. Compose Stack Locations

### Core Infrastructure Stacks

Location: `/data/docker/`

These are foundational services that support the entire server:

| Directory | Service |
|-----------|---------|
| `/data/docker/caddy/` | Caddy reverse proxy |
| `/data/docker/gitea/` | Gitea git forge |
| `/data/docker/forgejo/` | Forgejo git forge |
| `/data/docker/email/` | Email services |
| `/data/docker/website/` | Main website |
| `/data/docker/linkstack-berlyn/` | Berlyn's linkstack |

### Application Stacks

Location: `/srv/git/`

These are deployed applications managed by the deploy listener:

| Directory | Application |
|-----------|-------------|
| `/srv/git/compose-bookstack/` | BookStack wiki |
| `/srv/git/compose-linkstack/` | LinkStack |
| `/srv/git/compose-portainer/` | Portainer |
| `/srv/git/compose-wishthis/` | WishThis |
| `/srv/git/compose-anythingllm/` | AnythingLLM |

### Ownership & Permissions

- **Owner**: `root:git`
- **Permissions**: `2775` (setgid)
- The setgid bit ensures new files and directories inherit the `git` group, so both `root` and members of the `git` group (including `deploy` and `darren`) can read/write.

### Standard Stack Contents

Each compose stack directory should contain:

| File | Purpose |
|------|---------|
| `docker-compose.yaml` | Service definitions |
| `.env` | Environment variables (secrets, config) |
| `Makefile` | Convenience targets (`make up`, `make down`, `make logs`) |
| `README.md` | Stack documentation |
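A minimal sketch of the convenience targets named above (exact contents vary per stack; the assets/ template in this commit is authoritative):

```makefile
.PHONY: up down restart logs

up:
	docker compose up -d

down:
	docker compose down

restart:
	docker compose down && docker compose up -d

logs:
	docker compose logs -f --tail=100
```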