porthole

Porthole: timeline media library (Next.js web + worker), backed by Postgres/Redis/MinIO.

How to try it

  • Create a values file (minimal example; a full starter appears under "Example values (Pi cluster)" below):

    • set secrets.postgres.password
    • set secrets.minio.accessKeyId + secrets.minio.secretAccessKey
    • set images.web.repository/tag and images.worker.repository/tag
    • set global.tailscale.tailnetFQDN (recommended), or set app.minio.publicEndpointTs (must be https://minio.<tailnet-fqdn>)
  • Render locally: helm template porthole helm/porthole -f your-values.yaml --namespace porthole

  • Install (to porthole namespace): helm upgrade --install porthole helm/porthole -f your-values.yaml --namespace porthole
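
  • Verify pods came up (assumes your kubectl context points at the target cluster): kubectl get pods --namespace porthole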

ArgoCD

A ready-to-apply ArgoCD Application manifest is included at argocd/porthole-application.yaml (it installs the chart as the Helm release porthole).

Reference example (deploys into the porthole namespace; the Helm chart itself does not hardcode a namespace):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: porthole
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@gitea-gitea-ssh.taildb3494.ts.net:will/porthole.git
    targetRevision: main
    path: helm/porthole
    helm:
      releaseName: porthole
      valueFiles:
        - values.yaml
        # - values-porthole.yaml
      # Alternative to valueFiles: set a few values inline.
      # parameters:
      #   - name: global.tailscale.tailnetFQDN
      #     value: tailxyz.ts.net
      #   - name: images.web.repository
      #     value: gitea-gitea-http.taildb3494.ts.net/will/porthole-web
      #   - name: images.web.tag
      #     value: dev
  destination:
    server: https://kubernetes.default.svc
    namespace: porthole

  # Optional: if you use image pull secrets, you can set them via values files
  # or inline Helm parameters.
  # source:
  #   helm:
  #     parameters:
  #       - name: imagePullSecrets[0]
  #         value: my-registry-secret
  #       - name: registrySecret.create
  #         value: "true"
  #       - name: registrySecret.server
  #         value: registry.lan:5000
  #       - name: registrySecret.username
  #         value: your-user
  #       - name: registrySecret.password
  #         value: your-pass
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
      - ApplyOutOfSyncOnly=true
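
Assuming ArgoCD lives in the argocd namespace, registering the app is two commands:

kubectl apply -f argocd/porthole-application.yaml
kubectl --namespace argocd get application porthole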

Notes

  • MinIO bucket creation: the app does not auto-create buckets. You can either create the bucket yourself, or enable the Helm hook job:
    • jobs.ensureBucket.enabled=true
  • Staging cleanup: disabled by default; enable with:
    • cronjobs.cleanupStaging.enabled=true
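
Both toggles expressed as values (these keys mirror the example values file later in this README):

jobs:
  ensureBucket:
    enabled: true

cronjobs:
  cleanupStaging:
    enabled: true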

Build + push images (multi-arch)

This repo is a Bun monorepo, but container builds use Docker Buildx.

  • Assumptions:

    • You have an in-cluster registry reachable over insecure HTTP (example: registry.lan:5000).
    • Your Docker daemon is configured to allow that registry as an insecure registry.
  • Create/use a buildx builder (one-time):

    • docker buildx create --name porthole --use
  • Build + push web (Next standalone):

    • REGISTRY=gitea-gitea-http.taildb3494.ts.net TAG=dev
    • docker buildx build --platform linux/amd64,linux/arm64 -f apps/web/Dockerfile -t "$REGISTRY/will/porthole-web:$TAG" --push .
    • Notes:
      • The Dockerfile uses bun install --frozen-lockfile and copies all workspace package.json files first to keep Bun from mutating bun.lock.
      • Runtime entrypoint comes from Next standalone output (the image runs node app/apps/web/server.js).
  • Build + push worker (includes ffmpeg + exiftool):

    • REGISTRY=gitea-gitea-http.taildb3494.ts.net TAG=dev
    • docker buildx build --platform linux/amd64,linux/arm64 -f apps/worker/Dockerfile -t "$REGISTRY/will/porthole-worker:$TAG" --push .
    • Notes:
      • The Dockerfile uses bun install --frozen-lockfile --production and also copies all workspace package.json files first for stable workspace:* resolution.
  • Then set Helm values:

    • images.web.repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-web
    • images.web.tag: dev
    • images.worker.repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-worker
    • images.worker.tag: dev
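
To confirm both architectures actually landed in the registry, and that the worker image has its media tools, a few quick checks help (imagetools ships with Buildx; assumes ffmpeg is on the worker image's PATH):

docker buildx imagetools inspect "$REGISTRY/will/porthole-web:$TAG"
docker buildx imagetools inspect "$REGISTRY/will/porthole-worker:$TAG"
docker run --rm --entrypoint ffmpeg "$REGISTRY/will/porthole-worker:$TAG" -version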

Private registry auth (optional)

If your registry requires auth, you can either:

  • reference an existing Secret via imagePullSecrets, or
  • have the chart create a kubernetes.io/dockerconfigjson Secret via registrySecret.

Example values:

# Option A: reference an existing secret
imagePullSecrets:
  - my-registry-secret

# Option B: create a secret from values (stores creds in values)
registrySecret:
  create: true
  server: "registry.lan:5000"
  username: "your-user"
  password: "your-pass"
  email: "you@example.com"

MinIO exposure (Tailscale)

MinIO S3 URLs must be signed against https://minio.<tailnet-fqdn>.

You can expose MinIO over tailnet either via:

  • Tailscale Ingress (default), or
  • Tailscale LoadBalancer Service (often more reliable for streaming/Range)

Example values (LoadBalancer for S3 + console):

global:
  tailscale:
    tailnetFQDN: "tailxyz.ts.net"

minio:
  tailscaleServiceS3:
    enabled: true
    hostnameLabel: minio
  tailscaleServiceConsole:
    enabled: true
    hostnameLabel: minio-console

# Optional: if you prefer explicit override instead of deriving from tailnetFQDN
# app:
#   minio:
#     publicEndpointTs: "https://minio.tailxyz.ts.net"
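
Once exposed, MinIO's unauthenticated liveness endpoint gives a quick reachability check (substitute your tailnet FQDN):

curl -sS -D- https://minio.tailxyz.ts.net/minio/health/live -o /dev/null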

Example values (Pi cluster)

This chart assumes you label nodes like:

  • Pi 5 nodes: node-class=compute
  • Pi 3 node: node-class=tiny

The default scheduling in helm/porthole/values.yaml pins heavy pods to node-class=compute.
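
Labels can be applied with kubectl (node names here are illustrative):

kubectl label node pi5-a node-class=compute
kubectl label node pi3-a node-class=tiny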

Example values.yaml you can start from:

secrets:
  postgres:
    password: "change-me"
  minio:
    accessKeyId: "minioadmin"
    secretAccessKey: "minioadmin"

images:
  web:
    repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-web
    tag: dev
  worker:
    repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-worker
    tag: dev

global:
  tailscale:
    tailnetFQDN: "tailxyz.ts.net"

# Optional, but common for Pi clusters (Longhorn default shown as example)
# global:
#   storageClass: longhorn

minio:
  # Prefer LB Services for streaming/Range reliability
  tailscaleServiceS3:
    enabled: true
    hostnameLabel: minio
  tailscaleServiceConsole:
    enabled: true
    hostnameLabel: minio-console

jobs:
  ensureBucket:
    enabled: true

# Optional staging cleanup (never touches originals/**)
# cronjobs:
#   cleanupStaging:
#     enabled: true
#     olderThanDays: 7

Quick checks

  • Range support through ingress (expect 206):
    • curl -sS -D- -H 'Range: bytes=0-1023' "$(curl -sS 'https://app.<tailnet-fqdn>/api/assets/<assetId>/url?variant=original' | jq -r .url)" -o /dev/null
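  • Presigned URL host (a sketch; per the MinIO exposure section, the returned URL should start with https://minio.<tailnet-fqdn>/):
    • curl -sS 'https://app.<tailnet-fqdn>/api/assets/<assetId>/url?variant=original' | jq -r .url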