Adopt-a-Street Deployment
This directory contains deployment configurations for the Adopt-a-Street application on Kubernetes (Raspberry Pi cluster).
Directory Structure
deploy/
├── k8s/ # Kubernetes manifests
│ ├── namespace.yaml # Namespace definition
│ ├── configmap.yaml # Environment configuration
│ ├── secrets.yaml.example # Secret template (COPY TO secrets.yaml)
│ ├── couchdb-statefulset.yaml # CouchDB StatefulSet with PVC
│ ├── couchdb-configmap.yaml # CouchDB configuration
│ ├── backend-deployment.yaml # Backend Deployment + Service
│ ├── frontend-deployment.yaml # Frontend Deployment + Service
│ └── ingress.yaml # Ingress for routing
├── README.md # This file
└── scripts/ # Deployment helper scripts
Prerequisites
Cluster Requirements
- Kubernetes cluster with 3 nodes:
- 2x Raspberry Pi 5 (8GB RAM) - ARM64
- 1x Raspberry Pi 3B+ (1GB RAM) - ARMv7
- kubectl configured to access your cluster
- Container registry accessible from cluster
- Ingress controller installed (Traefik or NGINX Ingress)
- Persistent storage provisioner (local-path, NFS, or Longhorn)
Local Requirements
- Docker with buildx for multi-arch builds
- kubectl CLI tool
- Access to container registry (Docker Hub, GitHub Container Registry, or private registry)
- Bun runtime for local development and testing
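Before building, you can sanity-check that the local toolchain is in place; each command should print a version string:
# Verify local tooling
docker buildx version
kubectl version --client
bun --version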
Quick Start
1. Build Multi-Arch Docker Images
Build images for both ARM64 (Pi 5) and ARMv7 (Pi 3B+):
# From project root
cd /home/will/Code/adopt-a-street
# Create buildx builder (one-time setup)
docker buildx create --use --name multiarch-builder
# Build and push backend
docker buildx build --platform linux/arm64,linux/arm/v7 \
-t your-registry/adopt-a-street-backend:latest \
--push ./backend
# Build and push frontend
docker buildx build --platform linux/arm64,linux/arm/v7 \
-t your-registry/adopt-a-street-frontend:latest \
--push ./frontend
Note: Replace your-registry with your actual registry (e.g., docker.io/username or ghcr.io/username)
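To confirm that both architectures actually landed in the pushed manifest, inspect it with buildx (same your-registry placeholder as above):
# Show the platforms included in the pushed image
docker buildx imagetools inspect your-registry/adopt-a-street-backend:latest
# Expect linux/arm64 and linux/arm/v7 among the listed manifests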
2. Configure Secrets
# Copy secrets template
cp deploy/k8s/secrets.yaml.example deploy/k8s/secrets.yaml
# Edit secrets with your actual values
nano deploy/k8s/secrets.yaml
# IMPORTANT: Add secrets.yaml to .gitignore if not already there
echo "deploy/k8s/secrets.yaml" >> .gitignore
Required Secrets:
- JWT_SECRET - Strong random string for JWT signing
- CLOUDINARY_CLOUD_NAME - Your Cloudinary cloud name
- CLOUDINARY_API_KEY - Your Cloudinary API key
- CLOUDINARY_API_SECRET - Your Cloudinary API secret
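If you prefer generating the manifest instead of editing the template by hand, kubectl can produce it for you. This is only a sketch: the secret name adopt-a-street-secrets and the key names are assumptions, so align them with whatever secrets.yaml.example actually defines.
# Generate secrets.yaml from literal values (adjust the name and keys to match the template)
kubectl create secret generic adopt-a-street-secrets \
  --namespace adopt-a-street \
  --from-literal=JWT_SECRET="$(openssl rand -base64 32)" \
  --from-literal=CLOUDINARY_CLOUD_NAME="your-cloud-name" \
  --from-literal=CLOUDINARY_API_KEY="your-api-key" \
  --from-literal=CLOUDINARY_API_SECRET="your-api-secret" \
  --dry-run=client -o yaml > deploy/k8s/secrets.yaml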
3. Update Image References
Update the image references in deployment files:
# Update backend image reference
nano deploy/k8s/backend-deployment.yaml
# Change: image: your-registry/adopt-a-street-backend:latest
# Update frontend image reference
nano deploy/k8s/frontend-deployment.yaml
# Change: image: your-registry/adopt-a-street-frontend:latest
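If the your-registry placeholder appears literally in both manifests, a sed one-liner saves the manual editing (ghcr.io/username below is just an example value):
# Replace the registry placeholder in both deployment manifests (GNU sed; on macOS use sed -i '')
sed -i 's#your-registry#ghcr.io/username#g' \
  deploy/k8s/backend-deployment.yaml \
  deploy/k8s/frontend-deployment.yaml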
4. Configure CouchDB
Note: The CouchDB resources live in the adopt-a-street namespace, so create it first if you haven't already: kubectl apply -f deploy/k8s/namespace.yaml
# Apply CouchDB configuration
kubectl apply -f deploy/k8s/couchdb-configmap.yaml
# Deploy CouchDB
kubectl apply -f deploy/k8s/couchdb-statefulset.yaml
# Wait for CouchDB to be ready
kubectl wait --for=condition=ready pod -l app=couchdb -n adopt-a-street --timeout=120s
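A quick way to confirm CouchDB is answering inside the cluster is a throwaway curl pod; the service name adopt-a-street-couchdb is the one used throughout this guide:
# Hit CouchDB's /_up endpoint from inside the cluster
kubectl run couchdb-check --rm -it --restart=Never \
  --image=curlimages/curl -n adopt-a-street \
  -- curl -sf http://adopt-a-street-couchdb:5984/_up
# Expect a JSON response containing "status":"ok"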
5. Update Domain Name
Update the ingress host:
nano deploy/k8s/ingress.yaml
# Change: host: adopt-a-street.local
# To your actual domain or IP
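If you keep the default adopt-a-street.local host for testing, map it to one of your node IPs on the machine you browse from (192.168.1.240 below is a placeholder):
# Point the ingress host at a cluster node for local testing
echo "192.168.1.240 adopt-a-street.local" | sudo tee -a /etc/hosts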
6. Deploy to Kubernetes
# Create namespace
kubectl apply -f deploy/k8s/namespace.yaml
# Create secrets (IMPORTANT: Make sure you've edited secrets.yaml!)
kubectl apply -f deploy/k8s/secrets.yaml
# Create ConfigMap
kubectl apply -f deploy/k8s/configmap.yaml
# Deploy CouchDB (already done in step 4)
# Wait for CouchDB to be ready (this may take 1-2 minutes)
kubectl wait --for=condition=ready pod -l app=couchdb -n adopt-a-street --timeout=120s
# Deploy backend
kubectl apply -f deploy/k8s/backend-deployment.yaml
# Wait for backend to be ready
kubectl wait --for=condition=ready pod -l app=backend -n adopt-a-street --timeout=120s
# Deploy frontend
kubectl apply -f deploy/k8s/frontend-deployment.yaml
# Deploy ingress
kubectl apply -f deploy/k8s/ingress.yaml
# Check deployment status
kubectl get all -n adopt-a-street
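Once secrets.yaml exists, the sequence above can be collapsed into two commands: kubectl only picks up .yaml/.yml/.json files, so secrets.yaml.example is ignored, and applying the namespace first avoids ordering errors on the first pass.
# Shortcut: apply the namespace, then everything else in the directory
kubectl apply -f deploy/k8s/namespace.yaml
kubectl apply -f deploy/k8s/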
Verification
Check Pod Status
# View all resources
kubectl get all -n adopt-a-street
# Check pod status
kubectl get pods -n adopt-a-street
# Expected output:
# NAME READY STATUS RESTARTS AGE
# adopt-a-street-backend-xxxxxxxxxx-xxxxx 1/1 Running 0 5m
# adopt-a-street-backend-xxxxxxxxxx-xxxxx 1/1 Running 0 5m
# adopt-a-street-frontend-xxxxxxxxx-xxxxx 1/1 Running 0 5m
# adopt-a-street-frontend-xxxxxxxxx-xxxxx 1/1 Running 0 5m
# adopt-a-street-couchdb-0 1/1 Running 0 10m
Check Logs
# Backend logs
kubectl logs -f deployment/adopt-a-street-backend -n adopt-a-street
# Frontend logs
kubectl logs -f deployment/adopt-a-street-frontend -n adopt-a-street
# CouchDB logs
kubectl logs -f adopt-a-street-couchdb-0 -n adopt-a-street
Check Services
kubectl get svc -n adopt-a-street
# Expected output:
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# adopt-a-street-backend ClusterIP 10.43.x.x <none> 5000/TCP 5m
# adopt-a-street-frontend ClusterIP 10.43.x.x <none> 80/TCP 5m
# adopt-a-street-couchdb ClusterIP None <none> 5984/TCP 10m
Check Ingress
kubectl get ingress -n adopt-a-street
# Get ingress details
kubectl describe ingress adopt-a-street-ingress -n adopt-a-street
Access the Application
# Port forward for testing (if ingress not working)
kubectl port-forward svc/adopt-a-street-frontend 3000:80 -n adopt-a-street
# Then open http://localhost:3000 in your browser
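You can also port-forward the backend directly and probe its health endpoint; the /api/health path below is an assumption, so substitute whatever route the backend's health check actually exposes.
# Forward the backend service and check it locally
kubectl port-forward svc/adopt-a-street-backend 5000:5000 -n adopt-a-street &
curl -f http://localhost:5000/api/health
# /api/health is assumed; adjust to the backend's actual health route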
Resource Allocation
The deployment is optimized for Raspberry Pi hardware:
CouchDB (Pi 5 nodes only)
- Requests: 512Mi RAM, 250m CPU
- Limits: 2Gi RAM, 1000m CPU
- Storage: 10Gi persistent volume
- Additional: 64Mi RAM, 50m CPU for metrics exporter
Backend (prefers Pi 5 nodes)
- Requests: 256Mi RAM, 100m CPU
- Limits: 512Mi RAM, 500m CPU
- Replicas: 2 pods
Frontend (any node)
- Requests: 64Mi RAM, 50m CPU
- Limits: 128Mi RAM, 200m CPU
- Replicas: 2 pods
Total Cluster Requirements
- Minimum RAM: ~3.5 GB (1.5GB CouchDB + 1GB backend + 200MB frontend + 800MB system)
- Recommended: 2x Pi 5 (8GB each) handles this comfortably
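To see how these requests and limits fit onto your nodes, compare them with what each node already has allocated:
# Show per-node allocated CPU/memory versus capacity
kubectl describe nodes | grep -A 8 "Allocated resources"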
Scaling
Scale Deployments
# Scale backend
kubectl scale deployment adopt-a-street-backend --replicas=3 -n adopt-a-street
# Scale frontend
kubectl scale deployment adopt-a-street-frontend --replicas=3 -n adopt-a-street
Note: CouchDB is a StatefulSet with 1 replica. Scaling CouchDB requires configuring clustering.
Updating
Update Images
# Build and push new version
docker buildx build --platform linux/arm64,linux/arm/v7 \
-t your-registry/adopt-a-street-backend:v2.0 \
--push ./backend
# Update deployment
kubectl set image deployment/adopt-a-street-backend \
backend=your-registry/adopt-a-street-backend:v2.0 \
-n adopt-a-street
# Check rollout status
kubectl rollout status deployment/adopt-a-street-backend -n adopt-a-street
Rollback
# Rollback to previous version
kubectl rollout undo deployment/adopt-a-street-backend -n adopt-a-street
# Rollback to specific revision
kubectl rollout undo deployment/adopt-a-street-backend --to-revision=2 -n adopt-a-street
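To find the revision number to pass to --to-revision, list the rollout history first:
# List recorded revisions for the backend deployment
kubectl rollout history deployment/adopt-a-street-backend -n adopt-a-street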
Monitoring
Resource Usage
# Node resource usage
kubectl top nodes
# Pod resource usage
kubectl top pods -n adopt-a-street
Events
# View recent events
kubectl get events -n adopt-a-street --sort-by='.lastTimestamp'
Describe Resources
# Describe pod (useful for troubleshooting)
kubectl describe pod <pod-name> -n adopt-a-street
# Describe deployment
kubectl describe deployment adopt-a-street-backend -n adopt-a-street
Troubleshooting
Pod Not Starting
# Check pod events
kubectl describe pod <pod-name> -n adopt-a-street
# Check logs
kubectl logs <pod-name> -n adopt-a-street
# Check previous logs (if pod crashed)
kubectl logs <pod-name> -n adopt-a-street --previous
Image Pull Errors
- Verify image exists in registry
- Check image name and tag in deployment
- Verify cluster can access registry
- Check if imagePullSecrets are needed
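If the registry is private, a missing image pull secret is the usual cause. A sketch with placeholder values (regcred, ghcr.io, <username>, <token-or-password>); the deployments' pod specs must also reference the secret under imagePullSecrets.
# Create a registry credential in the application namespace
kubectl create secret docker-registry regcred \
  --docker-server=ghcr.io \
  --docker-username=<username> \
  --docker-password=<token-or-password> \
  -n adopt-a-street
# Then reference it in the pod spec of each deployment:
#   imagePullSecrets:
#     - name: regcred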
CouchDB Connection Issues
# Shell into backend pod
kubectl exec -it <backend-pod-name> -n adopt-a-street -- sh
# Test CouchDB connection
curl -f http://adopt-a-street-couchdb:5984/_up
# Test authentication
curl -u $COUCHDB_USER:$COUCHDB_PASSWORD http://adopt-a-street-couchdb:5984/_session
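Still inside the backend pod, you can list the databases visible with the configured credentials; which databases should exist depends on what the backend creates, so treat the output as application-specific.
# List all databases reachable with the configured credentials
curl -u $COUCHDB_USER:$COUCHDB_PASSWORD http://adopt-a-street-couchdb:5984/_all_dbs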
Persistent Volume Issues
# Check PVCs
kubectl get pvc -n adopt-a-street
# Check PVs
kubectl get pv
# Describe the CouchDB PVC (use the exact name shown by kubectl get pvc above)
kubectl describe pvc <couchdb-pvc-name> -n adopt-a-street
Cleanup
Delete Everything
# Delete all resources in namespace
kubectl delete namespace adopt-a-street
# Or delete resources individually
kubectl delete -f deploy/k8s/ingress.yaml
kubectl delete -f deploy/k8s/frontend-deployment.yaml
kubectl delete -f deploy/k8s/backend-deployment.yaml
kubectl delete -f deploy/k8s/couchdb-statefulset.yaml
kubectl delete -f deploy/k8s/couchdb-configmap.yaml
kubectl delete -f deploy/k8s/configmap.yaml
kubectl delete -f deploy/k8s/secrets.yaml
kubectl delete -f deploy/k8s/namespace.yaml
# Note: Deleting the namespace also removes the CouchDB PVC; whether the underlying
# data is erased depends on the storage class's reclaim policy. Deleting only the
# StatefulSet leaves the PVC and its data in place; delete the PVC explicitly for a
# clean wipe (see below).
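To wipe the CouchDB data when cleaning up resource by resource, delete the persistent volume claim explicitly:
# List and then delete the CouchDB PVC (this destroys the data)
kubectl get pvc -n adopt-a-street
kubectl delete pvc <couchdb-pvc-name> -n adopt-a-street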
Security Best Practices
- Never commit secrets.yaml - Always use secrets.yaml.example
- Use strong JWT_SECRET - Generate with: openssl rand -base64 32
- Use strong CouchDB passwords - Generate with: openssl rand -base64 32
- Enable TLS/HTTPS - Uncomment TLS section in ingress.yaml and use cert-manager
- Restrict ingress - Use network policies to limit pod communication
- Use image digests - Pin images to specific SHA256 digests for production
- Enable RBAC - Create service accounts with minimal permissions
- Scan images - Use tools like Trivy to scan for vulnerabilities
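For the image-scanning item above, a minimal Trivy invocation looks like this (assuming Trivy is installed locally; the image reference is the same placeholder used earlier):
# Scan the backend image for HIGH and CRITICAL vulnerabilities
trivy image --severity HIGH,CRITICAL your-registry/adopt-a-street-backend:latest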
Performance Optimization
- Use imagePullPolicy: IfNotPresent - After initial deployment to save bandwidth
- Implement HPA - Horizontal Pod Autoscaler for dynamic scaling (see the example after this list)
- Add Redis - For caching to reduce CouchDB load
- Use CDN - For frontend static assets
- Enable compression - Nginx already configured with gzip
- Monitor resources - Use Prometheus + Grafana for metrics (CouchDB exporter included)
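For the HPA item above, a minimal sketch using kubectl autoscale; it assumes metrics-server is running in the cluster, and the thresholds are illustrative:
# Autoscale the backend between 2 and 4 replicas at ~70% CPU
kubectl autoscale deployment adopt-a-street-backend \
  --min=2 --max=4 --cpu-percent=70 -n adopt-a-street
# Verify the autoscaler
kubectl get hpa -n adopt-a-street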
Additional Resources
- Kubernetes Documentation
- Raspberry Pi Kubernetes Guide
- Helm Charts - Consider migrating to Helm for easier management
- ArgoCD - GitOps continuous delivery for Kubernetes
Support
For issues or questions:
- Check pod logs: kubectl logs <pod-name> -n adopt-a-street
- Check events: kubectl get events -n adopt-a-street
- Describe resources: kubectl describe <resource> -n adopt-a-street
- Review application logs in the backend