# porthole

Porthole: timeline media library (Next.js web + worker), backed by Postgres/Redis/MinIO.
## How to try it

- Create a values file (example minimal):
  - set `secrets.postgres.password`
  - set `secrets.minio.accessKeyId` + `secrets.minio.secretAccessKey`
  - set `images.web.repository`/`tag` and `images.worker.repository`/`tag`
  - set `global.tailscale.tailnetFQDN` (recommended), or set `app.minio.publicEndpointTs` (must be `https://minio.<tailnet-fqdn>`)
- Render locally:

  ```bash
  helm template porthole helm/porthole -f your-values.yaml --namespace porthole
  ```

- Install (to the `porthole` namespace):

  ```bash
  helm upgrade --install porthole helm/porthole -f your-values.yaml --namespace porthole
  ```
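After the install, a quick way to confirm the release came up (a sketch assuming the `porthole` namespace from the commands above):

```bash
# Pods should settle into Running/Completed.
kubectl get pods -n porthole -w

# Services and ingress resources the chart rendered.
kubectl get svc,ingress -n porthole
```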
## ArgoCD

A ready-to-apply ArgoCD Application manifest is included at `argocd/porthole-application.yaml` (it deploys the Helm release name `porthole`).

Reference example (deploys into the `porthole` namespace; the Helm chart itself does not hardcode a namespace):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: porthole
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@gitea-gitea-ssh.taildb3494.ts.net:will/porthole.git
    targetRevision: main
    path: helm/porthole
    helm:
      releaseName: porthole
      valueFiles:
        - values.yaml
        # - values-porthole.yaml
      # Alternative to valueFiles: set a few values inline.
      # parameters:
      #   - name: global.tailscale.tailnetFQDN
      #     value: tailxyz.ts.net
      #   - name: images.web.repository
      #     value: gitea-gitea-http.taildb3494.ts.net/will/porthole-web
      #   - name: images.web.tag
      #     value: dev
  destination:
    server: https://kubernetes.default.svc
    namespace: porthole

  # Optional: if you use image pull secrets, you can set them via values files
  # or inline Helm parameters.
  # source:
  #   helm:
  #     parameters:
  #       - name: imagePullSecrets[0]
  #         value: my-registry-secret
  #       - name: registrySecret.create
  #         value: "true"
  #       - name: registrySecret.server
  #         value: registry.lan:5000
  #       - name: registrySecret.username
  #         value: your-user
  #       - name: registrySecret.password
  #         value: your-pass

  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
      - ApplyOutOfSyncOnly=true
```
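A minimal sketch for applying the bundled manifest and watching the sync (assumes ArgoCD runs in the `argocd` namespace, as in the example above):

```bash
# Register the Application and watch its sync/health status.
kubectl apply -f argocd/porthole-application.yaml
kubectl -n argocd get application porthole -w

# Or with the ArgoCD CLI, if you use it:
# argocd app get porthole
```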
## Notes

- MinIO bucket creation: the app does not auto-create buckets. You can either create the bucket yourself, or enable the Helm hook job: `jobs.ensureBucket.enabled=true`.
- Staging cleanup: disabled by default; enable with `cronjobs.cleanupStaging.enabled=true`.
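If you create the bucket yourself, one option is the MinIO client; a sketch assuming the tailnet S3 endpoint and a bucket named `porthole` (the alias, credentials, and bucket name are placeholders and must match whatever your values configure):

```bash
# Alias the tailnet S3 endpoint, then create and verify the bucket.
mc alias set porthole-s3 "https://minio.<tailnet-fqdn>" "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
mc mb porthole-s3/porthole
mc ls porthole-s3
```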
## Build + push images (multi-arch)

This repo is a Bun monorepo, but container builds use Docker Buildx.

Assumptions:

- You have an in-cluster registry reachable over insecure HTTP (example: `registry.lan:5000`).
- Your Docker daemon is configured to allow that registry as an insecure registry (see the sketch below).
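For the insecure-registry assumption, a minimal sketch of the Docker daemon config (the registry host and config path are examples; merge into any existing `daemon.json` rather than overwriting it):

```bash
# Allow pushes to the in-cluster registry over plain HTTP.
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "insecure-registries": ["registry.lan:5000"]
}
EOF
sudo systemctl restart docker
```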
Create/use a buildx builder (one-time):

```bash
docker buildx create --name porthole --use
```
Build + push web (Next standalone):

```bash
REGISTRY=gitea-gitea-http.taildb3494.ts.net TAG=dev
docker buildx build --platform linux/amd64,linux/arm64 \
  -f apps/web/Dockerfile \
  -t "$REGISTRY/will/porthole-web:$TAG" \
  --push .
```

Notes:

- The Dockerfile uses `bun install --frozen-lockfile` and copies all workspace `package.json` files first to keep Bun from mutating `bun.lock`.
- Runtime entrypoint comes from Next standalone output (the image runs `node app/apps/web/server.js`).
Build + push worker (includes `ffmpeg` + `exiftool`):

```bash
REGISTRY=gitea-gitea-http.taildb3494.ts.net TAG=dev
docker buildx build --platform linux/amd64,linux/arm64 \
  -f apps/worker/Dockerfile \
  -t "$REGISTRY/will/porthole-worker:$TAG" \
  --push .
```

Notes:

- The Dockerfile uses `bun install --frozen-lockfile --production` and also copies all workspace `package.json` files first for stable `workspace:*` resolution.
Then set Helm values:

- `images.web.repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-web`
- `images.web.tag: dev`
- `images.worker.repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-worker`
- `images.worker.tag: dev`
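Before pointing the chart at the new tags, it can be worth confirming both architectures actually landed in the registry; a quick sketch using the same registry/tag variables as above:

```bash
# Expect both linux/amd64 and linux/arm64 entries in each manifest list.
REGISTRY=gitea-gitea-http.taildb3494.ts.net TAG=dev
docker buildx imagetools inspect "$REGISTRY/will/porthole-web:$TAG"
docker buildx imagetools inspect "$REGISTRY/will/porthole-worker:$TAG"
```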
## Private registry auth (optional)

If your registry requires auth, you can either:

- reference an existing Secret via `imagePullSecrets`, or
- have the chart create a `kubernetes.io/dockerconfigjson` Secret via `registrySecret`.

Example values:

```yaml
# Option A: reference an existing secret
imagePullSecrets:
  - my-registry-secret

# Option B: create a secret from values (stores creds in values)
registrySecret:
  create: true
  server: "registry.lan:5000"
  username: "your-user"
  password: "your-pass"
  email: "you@example.com"
```
## MinIO exposure (Tailscale)

MinIO S3 URLs must be signed against `https://minio.<tailnet-fqdn>`.

You can expose MinIO over the tailnet either via:

- Tailscale Ingress (default), or
- Tailscale LoadBalancer Service (often more reliable for streaming/Range)

Example values (LoadBalancer for S3 + console):

```yaml
global:
  tailscale:
    tailnetFQDN: "tailxyz.ts.net"

minio:
  tailscaleServiceS3:
    enabled: true
    hostnameLabel: minio
  tailscaleServiceConsole:
    enabled: true
    hostnameLabel: minio-console

# Optional: if you prefer explicit override instead of deriving from tailnetFQDN
# app:
#   minio:
#     publicEndpointTs: "https://minio.tailxyz.ts.net"
```
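Once the Tailscale Service is up, a quick reachability check against MinIO's standard health endpoint (hostname follows the example values above):

```bash
# Expect HTTP 200 from MinIO's liveness endpoint over the tailnet.
curl -sS -o /dev/null -w '%{http_code}\n' https://minio.tailxyz.ts.net/minio/health/live
```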
## Example values (Pi cluster)

This chart assumes you label nodes like:

- Pi 5 nodes: `node-class=compute`
- Pi 3 node: `node-class=tiny`

The default scheduling in `helm/porthole/values.yaml` pins heavy pods to `node-class=compute`.
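A labeling sketch (node names are placeholders; use the names from `kubectl get nodes`):

```bash
# Label Pi 5 nodes as compute and the Pi 3 node as tiny.
kubectl label node pi5-a node-class=compute
kubectl label node pi5-b node-class=compute
kubectl label node pi3-a node-class=tiny

# Confirm the labels landed.
kubectl get nodes -L node-class
```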
Example `values.yaml` you can start from:

```yaml
secrets:
  postgres:
    password: "change-me"
  minio:
    accessKeyId: "minioadmin"
    secretAccessKey: "minioadmin"

images:
  web:
    repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-web
    tag: dev
  worker:
    repository: gitea-gitea-http.taildb3494.ts.net/will/porthole-worker
    tag: dev

global:
  tailscale:
    tailnetFQDN: "tailxyz.ts.net"

# Optional, but common for Pi clusters (Longhorn default shown as example)
# global:
#   storageClass: longhorn

minio:
  # Prefer LB Services for streaming/Range reliability
  tailscaleServiceS3:
    enabled: true
    hostnameLabel: minio
  tailscaleServiceConsole:
    enabled: true
    hostnameLabel: minio-console

jobs:
  ensureBucket:
    enabled: true

# Optional staging cleanup (never touches originals/**)
# cronjobs:
#   cleanupStaging:
#     enabled: true
#     olderThanDays: 7
```
## Quick checks

- Range support through ingress (expect `206`):

  ```bash
  curl -sS -D- -H 'Range: bytes=0-1023' \
    "$(curl -sS https://app.<tailnet-fqdn>/api/assets/<assetId>/url?variant=original | jq -r .url)" \
    -o /dev/null
  ```
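A related sanity check, using the same placeholders: the presigned URL the app returns should point at the tailnet MinIO host, otherwise Range requests are signed against the wrong endpoint:

```bash
# The host of the returned URL should be minio.<tailnet-fqdn>.
curl -sS "https://app.<tailnet-fqdn>/api/assets/<assetId>/url?variant=original" | jq -r .url
```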