diff --git a/ARGOCD_DEPLOY.md b/ARGOCD_DEPLOY.md
new file mode 100644
index 0000000..a6eb764
--- /dev/null
+++ b/ARGOCD_DEPLOY.md
@@ -0,0 +1,231 @@
+# ArgoCD Deployment Guide
+
+## Prerequisites
+
+1. **Label Pi 5 nodes** (once per Pi 5 node):
+
+   ```bash
+   kubectl label node <pi5-node-1> node-class=compute
+   kubectl label node <pi5-node-2> node-class=compute
+   ```
+
+2. **Ensure Pi 3 nodes are tainted**:
+
+   ```bash
+   kubectl taint node <pi3-node> capacity=low:NoExecute
+   ```
+
+3. **ArgoCD installed** and accessible
+
+## Quick Deploy
+
+### 1. Create cluster-specific values file
+
+```bash
+cd argocd
+cp values-porthole.yaml.example values-porthole.yaml
+```
+
+### 2. Edit values-porthole.yaml
+
+```bash
+vim values-porthole.yaml
+```
+
+**Required settings:**
+
+```yaml
+global:
+  tailscale:
+    tailnetFQDN: "your-tailnet.ts.net" # REQUIRED
+
+secrets:
+  postgres:
+    password: "your-postgres-password" # REQUIRED
+  minio:
+    accessKeyId: "your-minio-access-key" # REQUIRED
+    secretAccessKey: "your-minio-secret-key" # REQUIRED
+```
+
+### 3. Commit and push the values file
+
+```bash
+git add argocd/values-porthole.yaml
+git commit -m "deploy: configure cluster values for porthole"
+git push
+```
+
+### 4. Apply the ArgoCD Application
+
+```bash
+kubectl apply -f argocd/porthole-application.yaml
+```
+
+### 5. Monitor the sync in ArgoCD
+
+- Open the ArgoCD UI
+- Navigate to the porthole application
+- Click "Sync" if needed
+
+## Manual Deploy (without ArgoCD)
+
+If you prefer to deploy directly with Helm:
+
+```bash
+helm install porthole helm/porthole \
+  -f helm/porthole/values.yaml \
+  -f argocd/values-porthole.yaml \
+  --namespace porthole \
+  --create-namespace
+```
+
+## Upgrade Deployment
+
+```bash
+helm upgrade porthole helm/porthole \
+  -f helm/porthole/values.yaml \
+  -f argocd/values-porthole.yaml \
+  --namespace porthole
+```
+
+## Rollback
+
+```bash
+helm rollback porthole -n porthole
+```
+
+## Verify Deployment
+
+### Check pod status
+
+```bash
+kubectl get pods -n porthole -o wide
+```
+
+All pods should be:
+
+- in the Running state
+- scheduled on Pi 5 nodes (not Pi 3)
+- free of restart loops
+
+### Check PVCs
+
+```bash
+kubectl get pvc -n porthole
+```
+
+Should show:
+
+- `porthole-minio` (200Gi)
+- `porthole-postgres` (20Gi)
+
+### Check services
+
+```bash
+kubectl get svc -n porthole
+```
+
+### Check Tailscale ingress
+
+```bash
+kubectl get ingress -n porthole
+```
+
+Should show:
+
+- `porthole-web` → `app.<tailnet-fqdn>`
+- `porthole-minio` → `minio.<tailnet-fqdn>`
+- `porthole-minio-console` → `minio-console.<tailnet-fqdn>`
+
+### Check Tailscale LoadBalancer services (if enabled)
+
+```bash
+kubectl get svc -n porthole -l app.kubernetes.io/component=minio
+```
+
+Should show:
+
+- `porthole-minio-ts-s3` (LoadBalancer)
+
+## Access Services
+
+Once the deployment is healthy:
+
+- **App UI**: `https://app.<tailnet-fqdn>`
+- **MinIO S3 API**: `https://minio.<tailnet-fqdn>`
+- **MinIO Console**: `https://minio-console.<tailnet-fqdn>`
+
+## Troubleshooting
+
+### Pods not starting
+
+```bash
+kubectl describe pod <pod-name> -n porthole
+kubectl logs <pod-name> -n porthole
+```
+
+### Sync errors in ArgoCD
+
+1. Check the values file syntax:
+
+   ```bash
+   helm template porthole helm/porthole \
+     -f helm/porthole/values.yaml \
+     -f argocd/values-porthole.yaml
+   ```
+
+2. Verify node labels:
+
+   ```bash
+   kubectl get nodes --show-labels
+   ```
+
+3. Check taints:
+
+   ```bash
+   kubectl describe node | grep Taint
+   ```
+
+### PVC stuck in Pending
+
+```bash
+kubectl get pvc -n porthole
+kubectl describe pvc <pvc-name> -n porthole
+```
+
+Check the Longhorn storage class:
+
+```bash
+kubectl get storageclass
+```
+
+### MinIO or Postgres failing
+
+Check that the PVC is bound and Longhorn is healthy:
+
+```bash
+kubectl get pv
+kubectl get storageclass longhorn -o yaml
+```
+
+### Tailscale endpoints not accessible
+
+1. Verify the Tailscale operator is installed
+2. Check the ingress configuration:
+
+   ```bash
+   kubectl get ingress -n porthole -o yaml
+   ```
+
+3. Verify the tailnet FQDN in values-porthole.yaml
+
+## Cleanup
+
+```bash
+kubectl delete -f argocd/porthole-application.yaml
+helm uninstall porthole -n porthole
+kubectl delete namespace porthole
+```
+
+Delete the ArgoCD Application first so ArgoCD does not re-sync the release while you remove it.
+
+Note: PVCs will persist unless you delete them manually:
+
+```bash
+kubectl delete pvc -n porthole --all
+```
diff --git a/argocd/porthole-application.yaml b/argocd/porthole-application.yaml
index 79d025f..4e5d3d7 100644
--- a/argocd/porthole-application.yaml
+++ b/argocd/porthole-application.yaml
@@ -13,7 +13,7 @@ spec:
     releaseName: porthole
     valueFiles:
       - values.yaml
-      # - values-porthole.yaml
+      - ../../argocd/values-porthole.yaml
   destination:
     server: https://kubernetes.default.svc
     namespace: porthole
diff --git a/argocd/values-porthole.yaml.example b/argocd/values-porthole.yaml.example
new file mode 100644
index 0000000..e11b1cc
--- /dev/null
+++ b/argocd/values-porthole.yaml.example
@@ -0,0 +1,50 @@
+# Cluster-specific values for the porthole deployment
+# Copy this file to values-porthole.yaml and fill in the required values
+
+global:
+  tailscale:
+    tailnetFQDN: "your-tailnet.ts.net" # REQUIRED: Your tailnet FQDN
+    # ingressClassName: tailscale # Default: tailscale
+
+# secrets:
+#   existingSecret: "porthole-secrets" # Optional: Use an existing secret
+
+# If not using existingSecret, fill these in:
+secrets:
+  postgres:
+    password: "your-postgres-password" # REQUIRED
+  minio:
+    accessKeyId: "your-minio-access-key" # REQUIRED
+    secretAccessKey: "your-minio-secret-key" # REQUIRED
+
+# Optional: Override images if using a different registry
+# images:
+#   web:
+#     repository: your-registry/porthole-web
+#     tag: dev
+#   worker:
+#     repository: your-registry/porthole-worker
+#     tag: dev
+
+# Optional: Override database/redis/minio if bringing your own services
+# app:
+#   databaseUrl: "postgres://user:pass@host:port/db"
+#   redisUrl: "redis://host:port"
+#   minio:
+#     internalEndpoint: "http://minio-host:9000"
+#     publicEndpointTs: "https://minio.your-tailnet.ts.net"
+
+# Optional: Override the storage class
+# global:
+#   storageClass: "longhorn"
+
+# Optional: Enable the MinIO Tailscale LoadBalancer service (default: true)
+# minio:
+#   tailscaleServiceS3:
+#     enabled: true
+
+# Optional: Enable staging cleanup (default: true)
+# cronjobs:
+#   cleanupStaging:
+#     enabled: true
+#     olderThanDays: 14
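
The `existingSecret` option commented out in values-porthole.yaml.example implies a pre-created Kubernetes Secret in the `porthole` namespace. A minimal sketch of such a manifest follows; the key names (`postgres-password`, `minio-access-key-id`, `minio-secret-access-key`) are assumptions and should be checked against the chart's templates before use:

```yaml
# Hypothetical Secret backing `secrets.existingSecret: "porthole-secrets"`.
# Key names are assumptions — verify them against the porthole chart's
# secret lookups before applying.
apiVersion: v1
kind: Secret
metadata:
  name: porthole-secrets
  namespace: porthole
type: Opaque
stringData:
  postgres-password: "your-postgres-password"
  minio-access-key-id: "your-minio-access-key"
  minio-secret-access-key: "your-minio-secret-key"
```

Using `stringData` lets the values be written in plain text; the API server base64-encodes them into `data` on admission.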