# ArgoCD Deployment Guide
## Prerequisites
1. **Label the Pi 5 nodes** (one command per node, from any machine with cluster access):
   ```bash
   kubectl label node <pi5-node-1> node-class=compute
   kubectl label node <pi5-node-2> node-class=compute
   ```
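   Listing nodes by the label confirms it took effect:

   ```bash
   kubectl get nodes -l node-class=compute
   ```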
2. **Taint the Pi 3 node** so the low-capacity hardware repels workloads (skip if already tainted):
   ```bash
   kubectl taint node <pi3-node> capacity=low:NoExecute
   ```
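   To verify the taint is present:

   ```bash
   kubectl describe node <pi3-node> | grep Taints
   ```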
3. **ArgoCD installed** and accessible.
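
   As a quick health check, assuming ArgoCD runs in its conventional `argocd` namespace, list its pods:

   ```bash
   kubectl get pods -n argocd
   ```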
## Quick Deploy
### 1. Create cluster-specific values file
```bash
cd argocd
cp values-porthole.yaml.example values-porthole.yaml
```
### 2. Edit values-porthole.yaml
```bash
vim values-porthole.yaml
```
**Required settings:**
```yaml
global:
  tailscale:
    tailnetFQDN: "your-tailnet.ts.net"      # REQUIRED

secrets:
  postgres:
    password: "your-postgres-password"      # REQUIRED
  minio:
    accessKeyId: "your-minio-access-key"    # REQUIRED
    secretAccessKey: "your-minio-secret"    # REQUIRED
```
### 3. Commit and push values file
```bash
git add argocd/values-porthole.yaml
git commit -m "deploy: configure cluster values for porthole"
git push
```
### 4. Apply ArgoCD Application
```bash
kubectl apply -f argocd/porthole-application.yaml
```
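To confirm the Application resource was created (assuming ArgoCD's default `argocd` namespace):

```bash
kubectl get applications -n argocd
```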
### 5. Monitor sync in ArgoCD
- Open the ArgoCD UI
- Navigate to the `porthole` application
- Click "Sync" if needed (CLI equivalent below)
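
If the `argocd` CLI is installed and logged in, the same checks work from the terminal (the application name `porthole` comes from `porthole-application.yaml`):

```bash
argocd app get porthole     # status, sync state, health
argocd app sync porthole    # trigger a manual sync if needed
```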
## Manual Deploy (without ArgoCD)
If you prefer a direct Helm deployment:
```bash
helm install porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole \
  --create-namespace
```
## Upgrade Deployment
```bash
helm upgrade porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole
```
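Helm's `--dry-run` flag previews an upgrade without applying it:

```bash
helm upgrade porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole \
  --dry-run
```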
## Rollback
```bash
helm rollback porthole -n porthole
```
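With no revision argument this rolls back to the previous release. To target a specific revision, list the history first:

```bash
helm history porthole -n porthole
helm rollback porthole <revision> -n porthole
```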
## Verify Deployment
### Check pod status
```bash
kubectl get pods -n porthole -o wide
```
All pods should be:

- in the `Running` state
- scheduled on Pi 5 nodes (not Pi 3)
- free of restart loops (see the one-liner below)
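
To see placement and restart counts side by side, one option (standard `kubectl` custom columns; the JSONPath assumes single-container pods):

```bash
kubectl get pods -n porthole \
  -o custom-columns='NAME:.metadata.name,NODE:.spec.nodeName,RESTARTS:.status.containerStatuses[0].restartCount'
```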
### Check PVCs
```bash
kubectl get pvc -n porthole
```
Should show:

- `porthole-minio` (200Gi)
- `porthole-postgres` (20Gi)
### Check services
```bash
kubectl get svc -n porthole
```
### Check Tailscale ingress
```bash
kubectl get ingress -n porthole
```
Should show:

- `porthole-web` → app.<tailnet-fqdn>
- `porthole-minio` → minio.<tailnet-fqdn>
- `porthole-minio-console` → minio-console.<tailnet-fqdn>
### Check Tailscale LoadBalancer services (if enabled)
```bash
kubectl get svc -n porthole -l app.kubernetes.io/component=minio
```
Should show:

- `porthole-minio-ts-s3` (LoadBalancer)
## Access Services
Once the deployment is healthy:
- **App UI**: `https://app.<your-tailnet-fqdn>`
- **MinIO S3 API**: `https://minio.<your-tailnet-fqdn>`
- **MinIO Console**: `https://minio-console.<your-tailnet-fqdn>`
## Troubleshooting
### Pods not starting
```bash
kubectl describe pod <pod-name> -n porthole
kubectl logs <pod-name> -n porthole
```
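Namespace events often surface scheduling and image-pull failures that pod logs miss:

```bash
kubectl get events -n porthole --sort-by=.lastTimestamp
```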
### Sync errors in ArgoCD
1. Check values file syntax:
   ```bash
   helm template porthole helm/porthole \
     -f helm/porthole/values.yaml \
     -f argocd/values-porthole.yaml
   ```
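   For a quicker syntax-only pass, `helm lint` accepts the same value files:

   ```bash
   helm lint helm/porthole \
     -f helm/porthole/values.yaml \
     -f argocd/values-porthole.yaml
   ```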
2. Verify node labels:
   ```bash
   kubectl get nodes --show-labels
   ```
3. Check taints:
   ```bash
   kubectl describe node <pi3-node> | grep Taint
   ```
### PVC stuck in Pending
```bash
kubectl get pvc -n porthole
kubectl describe pvc <pvc-name> -n porthole
```
Check Longhorn storage class:
```bash
kubectl get storageclass
```
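If the storage class exists but volumes still don't bind, check the Longhorn components themselves (assuming Longhorn's default `longhorn-system` namespace):

```bash
kubectl get pods -n longhorn-system
```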
### MinIO or Postgres failing
Check that the PVC is bound and Longhorn is healthy:
```bash
kubectl get pv
kubectl get storageclass longhorn -o yaml
```
### Tailscale endpoints not accessible
1. Verify the Tailscale operator is installed
2. Check the ingress configuration:

   ```bash
   kubectl get ingress -n porthole -o yaml
   ```

3. Verify the tailnet FQDN in `values-porthole.yaml`, then run the operator check below
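
To check that the operator itself is running (assuming its default `tailscale` namespace):

```bash
kubectl get pods -n tailscale
```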
## Cleanup
```bash
# Delete the ArgoCD Application first so it stops re-syncing the chart
kubectl delete -f argocd/porthole-application.yaml
helm uninstall porthole -n porthole
kubectl delete namespace porthole
```
Note: PVCs persist after `helm uninstall`; delete them manually (before removing the namespace) if you also want the data gone:
```bash
kubectl delete pvc -n porthole --all
```