ArgoCD Deployment Guide

Prerequisites

  1. Label the Pi 5 nodes (run once for each Pi 5 node):

    kubectl label node <pi5-node-1> node-class=compute
    kubectl label node <pi5-node-2> node-class=compute
    
  2. Ensure the Pi 3 node is tainted (apply the taint if it is not already set):

    kubectl taint node <pi3-node> capacity=low:NoExecute
    
  3. ArgoCD is installed and accessible (a few quick checks are sketched below)
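
To confirm the prerequisites before deploying, here is a minimal sketch of read-only checks, assuming ArgoCD was installed into its usual argocd namespace:

# Show the node-class label on every node
kubectl get nodes -L node-class

# Show taints on all nodes (the Pi 3 node should list capacity=low:NoExecute)
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# Confirm ArgoCD pods are running (assumes the default argocd namespace)
kubectl get pods -n argocd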

Quick Deploy

1. Create cluster-specific values file

cd argocd
cp values-porthole.yaml.example values-porthole.yaml

2. Edit values-porthole.yaml

vim values-porthole.yaml

Required settings:

global:
  tailscale:
    tailnetFQDN: "your-tailnet.ts.net" # REQUIRED

secrets:
  postgres:
    password: "your-postgres-password" # REQUIRED
  minio:
    accessKeyId: "your-minio-access-key" # REQUIRED
    secretAccessKey: "your-minio-secret" # REQUIRED
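
Before committing, you can lint the chart against your values to catch YAML mistakes early. This is a local check only, run from the repository root; nothing is applied to the cluster:

# Lint the chart with both values files; failures usually point at the offending key
helm lint helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml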

3. Commit and push values file

git add argocd/values-porthole.yaml
git commit -m "deploy: configure cluster values for porthole"
git push

4. Apply ArgoCD Application

kubectl apply -f argocd/porthole-application.yaml
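
To confirm the Application resource was created (again assuming ArgoCD runs in the argocd namespace):

# The porthole Application should appear with a sync and health status
kubectl get applications.argoproj.io porthole -n argocd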

5. Monitor sync in ArgoCD

  • Open ArgoCD UI
  • Navigate to porthole application
  • Click "Sync" if needed
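
If you prefer the argocd CLI over the UI, a sketch assuming the CLI is installed and logged in to your ArgoCD server:

# Show sync and health status
argocd app get porthole

# Trigger a sync manually
argocd app sync porthole

# Block until the app reports healthy
argocd app wait porthole --health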

Manual Deploy (without ArgoCD)

If you prefer a direct Helm deployment:

helm install porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole \
  --create-namespace

Upgrade Deployment

helm upgrade porthole helm/porthole \
  -f helm/porthole/values.yaml \
  -f argocd/values-porthole.yaml \
  --namespace porthole

Rollback

helm rollback porthole -n porthole
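
Without a revision argument, helm rollback goes back one release. To target a specific revision, list the history first (this applies to manual Helm deploys; ArgoCD-managed apps are normally rolled back from ArgoCD's history instead):

# List release revisions
helm history porthole -n porthole

# Roll back to a specific revision number
helm rollback porthole <revision> -n porthole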

Verify Deployment

Check pod status

kubectl get pods -n porthole -o wide

All pods should be:

  • Running
  • On Pi 5 nodes, not Pi 3 (a placement check is sketched below)
  • Not stuck in restart loops
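
A quick way to confirm placement is to print each pod next to the node it landed on; every NODE entry should be a Pi 5 node:

kubectl get pods -n porthole \
  -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName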

Check PVCs

kubectl get pvc -n porthole

Should show:

  • porthole-minio (200Gi)
  • porthole-postgres (20Gi)
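
Both PVCs should report Bound. You can block until they are, using the names from the list above (jsonpath waits need kubectl 1.23 or newer; adjust the timeout to taste):

kubectl wait pvc/porthole-postgres pvc/porthole-minio -n porthole \
  --for=jsonpath='{.status.phase}'=Bound --timeout=180s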

Check services

kubectl get svc -n porthole

Check Tailscale ingress

kubectl get ingress -n porthole

Should show:

  • porthole-web → app.<your-tailnet-fqdn>
  • porthole-minio → minio.<your-tailnet-fqdn>
  • porthole-minio-console → minio-console.<your-tailnet-fqdn>

Check Tailscale LoadBalancer services (if enabled)

kubectl get svc -n porthole -l app.kubernetes.io/component=minio

Should show:

  • porthole-minio-ts-s3 (LoadBalancer)

Access Services

Once deployment is healthy:

  • App UI: https://app.<your-tailnet-fqdn>
  • MinIO S3 API: https://minio.<your-tailnet-fqdn>
  • MinIO Console: https://minio-console.<your-tailnet-fqdn>
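
From a device joined to your tailnet, a quick reachability check (just the HTTP status line; the hostname placeholder matches the list above):

# Expect an HTTP 200 or redirect status line from the web app
curl -sI https://app.<your-tailnet-fqdn> | head -n 1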

Troubleshooting

Pods not starting

kubectl describe pod <pod-name> -n porthole
kubectl logs <pod-name> -n porthole
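
If describe and the current logs don't make the cause obvious, recent namespace events and the previous container's logs often do:

# Recent events in the namespace, oldest first
kubectl get events -n porthole --sort-by=.lastTimestamp

# Logs from the last crashed container (useful for restart loops)
kubectl logs <pod-name> -n porthole --previous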

Sync errors in ArgoCD

  1. Check values file syntax:

    helm template porthole helm/porthole \
      -f helm/porthole/values.yaml \
      -f argocd/values-porthole.yaml
    
  2. Verify node labels:

    kubectl get nodes --show-labels
    
  3. Check taints:

    kubectl describe node <pi3-node> | grep Taint
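
The Application resource itself also records the sync error (assuming ArgoCD runs in the argocd namespace):

# Conditions and sync status include the rendered error message
kubectl describe applications.argoproj.io porthole -n argocd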
    

PVC stuck in Pending

kubectl get pvc -n porthole
kubectl describe pvc <pvc-name> -n porthole

Check Longhorn storage class:

kubectl get storageclass

MinIO or Postgres failing

Check PVC is bound and Longhorn is healthy:

kubectl get pv
kubectl get storageclass longhorn -o yaml
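
Two more things worth checking: the Longhorn system pods (Longhorn installs into the longhorn-system namespace by default) and logs from the failing component, using the same component label the MinIO service check above relies on:

# Longhorn manager and control-plane pods should all be Running
kubectl get pods -n longhorn-system

# Recent logs from the MinIO pods (swap the component label for the Postgres pods as needed)
kubectl logs -n porthole -l app.kubernetes.io/component=minio --tail=100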

Tailscale endpoints not accessible

  1. Verify the Tailscale operator is installed (a quick check is sketched below)
  2. Check the ingress configuration:

    kubectl get ingress -n porthole -o yaml
    
  3. Verify the tailnetFQDN value in values-porthole.yaml matches your tailnet
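
A minimal check for item 1, assuming the operator was installed into its default tailscale namespace:

# The operator and its per-service proxy pods should be Running
kubectl get pods -n tailscale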

Cleanup

Delete the ArgoCD Application first so ArgoCD does not re-create resources while you tear them down, then remove the Helm release (manual Helm deploys only) and the namespace:

kubectl delete -f argocd/porthole-application.yaml
helm uninstall porthole -n porthole
kubectl delete namespace porthole

Note: PVCs will persist unless you delete them manually:

kubectl delete pvc -n porthole --all