# ArgoCD Deployment Guide

## Prerequisites

1. **Label Pi 5 nodes** (run once per Pi 5 node, substituting the real node names):

   ```bash
   kubectl label node <pi5-node-1> node-class=compute
   kubectl label node <pi5-node-2> node-class=compute
   ```

2. **Verify Pi 3 taint** (the second command applies it if missing):

   ```bash
   kubectl describe node <pi3-node-name> | grep Taint
   kubectl taint node <pi3-node-name> capacity=low:NoExecute
   ```

3. **ArgoCD installed** and accessible

## Quick Deploy

### 1. Create cluster-specific values file

```bash
cd argocd
cp values-porthole.yaml.example values-porthole.yaml
```

### 2. Edit values-porthole.yaml

```bash
vim values-porthole.yaml
```

**Required settings:**

```yaml
global:
  tailscale:
    tailnetFQDN: "your-tailnet.ts.net"  # REQUIRED

secrets:
  postgres:
    password: "your-postgres-password"  # REQUIRED
  minio:
    accessKeyId: "your-minio-access-key"  # REQUIRED
    secretAccessKey: "your-minio-secret"  # REQUIRED
```

### 3. Commit and push values file

```bash
git add argocd/values-porthole.yaml
git commit -m "deploy: configure cluster values for porthole"
git push
```

### 4. Apply ArgoCD Application

```bash
kubectl apply -f argocd/porthole-application.yaml
```

### 5. Monitor sync in ArgoCD

- Open the ArgoCD UI
- Navigate to the porthole application
- Click "Sync" if needed
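The same checks work from the terminal. A minimal sketch, assuming the `argocd` CLI is installed and logged in to your ArgoCD server (the application name `porthole` matches the Application applied above):

```bash
# Show sync and health status of the application
argocd app get porthole

# Trigger a sync if the application is OutOfSync
argocd app sync porthole
```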
## Manual Deploy (without ArgoCD)

If you prefer a direct Helm deployment:

```bash
helm install porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole \
  --create-namespace
```

## Upgrade Deployment

```bash
helm upgrade porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole
```

## Rollback

List the release history, then roll back (without a revision number, Helm targets the previous release):

```bash
helm history porthole -n porthole
helm rollback porthole -n porthole
```

## Verify Deployment

### Check pod status

```bash
kubectl get pods -n porthole -o wide
```

All pods should be:

- Running
- Scheduled on Pi 5 nodes (not Pi 3)
- Free of restart loops

### Check PVCs

```bash
kubectl get pvc -n porthole
```

Should show:

- `porthole-minio` (200Gi)
- `porthole-postgres` (20Gi)

### Check services

```bash
kubectl get svc -n porthole
```

### Check Tailscale ingress

```bash
kubectl get ingress -n porthole
```

Should show (where `<tailnet-fqdn>` is the `tailnetFQDN` set in `values-porthole.yaml`):

- `porthole-web` → `app.<tailnet-fqdn>`
- `porthole-minio` → `minio.<tailnet-fqdn>`
- `porthole-minio-console` → `minio-console.<tailnet-fqdn>`

### Check Tailscale LoadBalancer services (if enabled)

```bash
kubectl get svc -n porthole -l app.kubernetes.io/component=minio
```

Should show:

- `porthole-minio-ts-s3` (LoadBalancer)

## Access Services

Once the deployment is healthy:

- **App UI**: `https://app.<tailnet-fqdn>`
- **MinIO S3 API**: `https://minio.<tailnet-fqdn>`
- **MinIO Console**: `https://minio-console.<tailnet-fqdn>`

## Troubleshooting

### Pods not starting

```bash
kubectl describe pod <pod-name> -n porthole
kubectl logs <pod-name> -n porthole
```

### Sync errors in ArgoCD

1. Check values file syntax:

   ```bash
   helm template porthole helm/porthole \
     -f helm/porthole/values.yaml \
     -f argocd/values-porthole.yaml
   ```

2. Verify node labels:

   ```bash
   kubectl get nodes --show-labels
   ```

3. Check taints:

   ```bash
   kubectl describe node | grep Taint
   ```

### PVC stuck in Pending

```bash
kubectl get pvc -n porthole
kubectl describe pvc -n porthole
```

Check that the Longhorn storage class exists:

```bash
kubectl get storageclass
```

### MinIO or Postgres failing

Check that the PVCs are bound and Longhorn is healthy:

```bash
kubectl get pv
kubectl get storageclass longhorn -o yaml
```

### Tailscale endpoints not accessible

1. Verify the Tailscale operator is installed (its pods typically run in the `tailscale` namespace)
2. Check the ingress configuration:

   ```bash
   kubectl get ingress -n porthole -o yaml
   ```

3. Verify the tailnet FQDN in `values-porthole.yaml`

## Cleanup

```bash
helm uninstall porthole -n porthole
kubectl delete namespace porthole
kubectl delete -f argocd/porthole-application.yaml
```

Note: `helm uninstall` leaves PVCs behind; delete them manually if you also want the data gone:

```bash
kubectl delete pvc -n porthole --all
```
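To confirm the cleanup finished, both of the following should report `NotFound` (a quick sketch; assumes ArgoCD runs in the default `argocd` namespace):

```bash
# Both resources should be gone after the cleanup steps above
kubectl get application porthole -n argocd
kubectl get namespace porthole
```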