# ArgoCD Deployment Guide
## Prerequisites
- Label the Pi 5 nodes (run once per Pi 5 node):

  ```bash
  kubectl label node <pi5-node-1> node-class=compute
  kubectl label node <pi5-node-2> node-class=compute
  ```

- Taint the Pi 3 node (if not already tainted):

  ```bash
  kubectl taint node <pi3-node> capacity=low:NoExecute
  ```

- ArgoCD installed and accessible
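To confirm the prerequisites took effect, both can be checked with standard `kubectl` queries (a quick sketch; `node-class` and `capacity` are the label and taint keys used above):

```bash
# Pi 5 nodes should show node-class=compute in the extra column
kubectl get nodes -L node-class

# The Pi 3 node should list the capacity=low:NoExecute taint
kubectl describe node <pi3-node> | grep Taint
```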
## Quick Deploy
### 1. Create the cluster-specific values file

```bash
cd argocd
cp values-porthole.yaml.example values-porthole.yaml
```
### 2. Edit `values-porthole.yaml`

```bash
vim values-porthole.yaml
```
Required settings:
```yaml
global:
  tailscale:
    tailnetFQDN: "your-tailnet.ts.net"        # REQUIRED

secrets:
  postgres:
    password: "your-postgres-password"        # REQUIRED
  minio:
    accessKeyId: "your-minio-access-key"      # REQUIRED
    secretAccessKey: "your-minio-secret"      # REQUIRED
```
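Before committing, it can be worth rendering the chart locally to catch YAML or template errors early. This assumes Helm is available on your workstation and you run it from the repository root:

```bash
# Render the chart with both values files; template or syntax errors
# surface here before ArgoCD ever sees the commit.
helm template porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml > /dev/null && echo "values OK"
```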
### 3. Commit and push the values file

```bash
git add argocd/values-porthole.yaml
git commit -m "deploy: configure cluster values for porthole"
git push
```
### 4. Apply the ArgoCD Application

```bash
kubectl apply -f argocd/porthole-application.yaml
```
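To confirm the Application resource was created (assuming ArgoCD is installed in the usual `argocd` namespace):

```bash
kubectl get application porthole -n argocd
```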
### 5. Monitor the sync in ArgoCD

- Open the ArgoCD UI
- Navigate to the `porthole` application
- Click "Sync" if needed
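If you have the `argocd` CLI installed and logged in, the same steps can be done from a terminal:

```bash
argocd app get porthole              # show sync and health status
argocd app sync porthole             # trigger a sync manually
argocd app wait porthole --health    # block until the app reports healthy
```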
## Manual Deploy (without ArgoCD)

If you prefer a direct Helm deployment:
```bash
helm install porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole \
  --create-namespace
```
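Afterwards, the release can be inspected with the usual Helm commands:

```bash
helm list -n porthole              # the release should show STATUS deployed
helm status porthole -n porthole   # summary of the deployed release
```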
## Upgrade Deployment
```bash
helm upgrade porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole
```
## Rollback
```bash
helm rollback porthole -n porthole
```
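To roll back to a specific revision rather than just the previous one, list the release history first:

```bash
helm history porthole -n porthole               # list revisions
helm rollback porthole <revision> -n porthole   # roll back to a chosen revision
```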
## Verify Deployment
### Check pod status

```bash
kubectl get pods -n porthole -o wide
```
All pods should be:

- `Running`
- Scheduled on Pi 5 nodes (not the Pi 3)
- Free of restart loops
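A compact way to verify placement and restarts uses plain `kubectl` custom columns (nothing project-specific):

```bash
kubectl get pods -n porthole \
  -o custom-columns='POD:.metadata.name,NODE:.spec.nodeName,STATUS:.status.phase,RESTARTS:.status.containerStatuses[0].restartCount'
```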
### Check PVCs

```bash
kubectl get pvc -n porthole
```
Should show:

- `porthole-minio` (200Gi)
- `porthole-postgres` (20Gi)
### Check services

```bash
kubectl get svc -n porthole
```
### Check Tailscale ingress

```bash
kubectl get ingress -n porthole
```
Should show:

- `porthole-web` → `app.<your-tailnet-fqdn>`
- `porthole-minio` → `minio.<your-tailnet-fqdn>`
- `porthole-minio-console` → `minio-console.<your-tailnet-fqdn>`
### Check Tailscale LoadBalancer services (if enabled)

```bash
kubectl get svc -n porthole -l app.kubernetes.io/component=minio
```
Should show:

- `porthole-minio-ts-s3` (type `LoadBalancer`)
## Access Services
Once the deployment is healthy:

- App UI: `https://app.<your-tailnet-fqdn>`
- MinIO S3 API: `https://minio.<your-tailnet-fqdn>`
- MinIO Console: `https://minio-console.<your-tailnet-fqdn>`
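From any machine on the tailnet, a quick reachability check with plain `curl` (replace the placeholder with the FQDN configured in `values-porthole.yaml`):

```bash
curl -I https://app.<your-tailnet-fqdn>             # expect an HTTP response, not a DNS error
curl -I https://minio-console.<your-tailnet-fqdn>
```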
## Troubleshooting

### Pods not starting

```bash
kubectl describe pod <pod-name> -n porthole
kubectl logs <pod-name> -n porthole
```
### Sync errors in ArgoCD

- Check the values file syntax:

  ```bash
  helm template porthole helm/porthole \
    -f helm/porthole/values.yaml \
    -f argocd/values-porthole.yaml
  ```

- Verify node labels:

  ```bash
  kubectl get nodes --show-labels
  ```

- Check taints:

  ```bash
  kubectl describe node <pi3-node> | grep Taint
  ```
### PVC stuck in Pending

```bash
kubectl get pvc -n porthole
kubectl describe pvc <pvc-name> -n porthole
```
Check the Longhorn storage class:

```bash
kubectl get storageclass
```
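If the storage class exists but claims still do not bind, Longhorn itself may be unhealthy. A quick sketch, assuming Longhorn runs in its default `longhorn-system` namespace:

```bash
kubectl get pods -n longhorn-system                   # all Longhorn components should be Running
kubectl get volumes.longhorn.io -n longhorn-system    # per-volume state and robustness
```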
### MinIO or Postgres failing

Check that the PVCs are bound and Longhorn is healthy:
```bash
kubectl get pv
kubectl get storageclass longhorn -o yaml
```
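Container logs usually point at the root cause. The selector below reuses the `app.kubernetes.io/component=minio` label shown earlier; the `postgres` value is an assumption and may differ in the chart:

```bash
kubectl logs -n porthole -l app.kubernetes.io/component=minio --tail=50
# "postgres" is an assumed component label; adjust to match the chart's labels
kubectl logs -n porthole -l app.kubernetes.io/component=postgres --tail=50
```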
### Tailscale endpoints not accessible

- Verify the Tailscale operator is installed
- Check the ingress configuration:

  ```bash
  kubectl get ingress -n porthole -o yaml
  ```

- Verify the tailnet FQDN in `values-porthole.yaml`
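The operator's own pods are also worth checking. By default the Tailscale Kubernetes operator installs into a namespace named `tailscale` with a deployment named `operator`; adjust both if your installation differs:

```bash
kubectl get pods -n tailscale
kubectl logs -n tailscale deploy/operator --tail=50   # assumed default deployment name
```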
## Cleanup

```bash
helm uninstall porthole -n porthole
kubectl delete namespace porthole
kubectl delete -f argocd/porthole-application.yaml
```
Note: PVCs will persist unless you delete them manually:
```bash
kubectl delete pvc -n porthole --all
```