Kubernetes Manifests for RxMinder

This directory contains Kubernetes manifests and templates for deploying RxMinder on a Kubernetes cluster.

RxMinder uses template files with environment variable substitution for secure, user-friendly deployment.

Template Files

  • couchdb-secret.yaml.template - Database credentials (uses stringData, so no manual base64 encoding is needed; see the sketch after this list)
  • ingress.yaml.template - Ingress configuration with customizable hostname
  • configmap.yaml.template - Application configuration
  • frontend-deployment.yaml.template - Frontend deployment
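
For reference, here is a minimal sketch of what a stringData-based secret template can look like (the ${...} placeholders are filled in by envsubst; the metadata name and keys are illustrative and may differ from the actual file in this directory):

apiVersion: v1
kind: Secret
metadata:
  name: couchdb-secret              # illustrative name
type: Opaque
stringData:                         # plain-text values; Kubernetes base64-encodes them on creation
  COUCHDB_USER: ${COUCHDB_USER}
  COUCHDB_PASSWORD: ${COUCHDB_PASSWORD}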

Static Files

  • couchdb-statefulset.yaml - StatefulSet for CouchDB database
  • couchdb-service.yaml - Service to expose CouchDB
  • couchdb-pvc.yaml - PersistentVolumeClaim for CouchDB storage
  • db-seed-job.yaml - Job to seed initial database data
  • frontend-service.yaml - Service to expose frontend
  • hpa.yaml - Horizontal Pod Autoscaler
  • network-policy.yaml - Network security policies (see the sketch after this list)
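
As an illustration of what a network policy for this setup typically enforces, here is a minimal sketch that only lets frontend pods reach CouchDB on port 5984; the label values are assumptions, not necessarily the ones used in network-policy.yaml:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: couchdb-allow-frontend      # illustrative name
spec:
  podSelector:
    matchLabels:
      app: couchdb                  # assumed CouchDB pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rxminder-frontend   # assumed frontend pod label
      ports:
        - protocol: TCP
          port: 5984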

🚀 Deployment Instructions

Option 1: Template Deployment (Recommended)

# 1. Copy and configure environment
cp .env.example .env

# 2. Edit .env with your settings
nano .env
# Set: APP_NAME, COUCHDB_PASSWORD, INGRESS_HOST, etc.

# 3. Deploy with templates
./scripts/k8s-deploy-template.sh deploy

# 4. Check status
./scripts/k8s-deploy-template.sh status

# 5. Cleanup (if needed)
./scripts/k8s-deploy-template.sh delete

Benefits of the template approach:

  • No manual base64 encoding required
  • Secure credential management via .env
  • Automatic dependency ordering
  • Built-in validation and status checking
  • Easy customization of app name and configuration

Option 2: Manual Deployment

For advanced users who want manual control:

# Manual template processing (requires envsubst)
# Export the variables from .env so envsubst can substitute them
set -a; source .env; set +a

envsubst < couchdb-secret.yaml.template > /tmp/couchdb-secret.yaml
envsubst < configmap.yaml.template > /tmp/configmap.yaml
envsubst < frontend-deployment.yaml.template > /tmp/frontend-deployment.yaml
envsubst < ingress.yaml.template > /tmp/ingress.yaml

# Apply resources in order
kubectl apply -f /tmp/couchdb-secret.yaml
kubectl apply -f couchdb-pvc.yaml
kubectl apply -f couchdb-service.yaml
kubectl apply -f couchdb-statefulset.yaml
kubectl apply -f /tmp/configmap.yaml
kubectl apply -f /tmp/frontend-deployment.yaml
kubectl apply -f frontend-service.yaml
kubectl apply -f /tmp/ingress.yaml
kubectl apply -f network-policy.yaml
kubectl apply -f hpa.yaml
kubectl apply -f db-seed-job.yaml

Environment Configuration

Create a .env file with the following variables (the advanced ones at the end are optional):

# Application Configuration
APP_NAME=rxminder                    # Customize your app name
INGRESS_HOST=rxminder.yourdomain.com # Your external hostname

# Docker Image Configuration
DOCKER_IMAGE=myregistry.com/rxminder:v1.0.0  # Your container image

# Database Credentials
COUCHDB_USER=admin
COUCHDB_PASSWORD=super-secure-password-123

# Storage Configuration
STORAGE_CLASS=longhorn               # Your cluster's storage class
STORAGE_SIZE=20Gi                   # Database storage allocation

# Optional: Advanced Configuration
VITE_COUCHDB_URL=http://localhost:5984
APP_BASE_URL=https://rxminder.yourdomain.com

Docker Image Options

Configure the container image based on your registry:

| Registry Type | Example Image | Use Case |
| --- | --- | --- |
| Docker Hub | rxminder/rxminder:v1.0.0 | Public releases |
| GitHub Container Registry | ghcr.io/username/rxminder:latest | GitHub integration |
| AWS ECR | 123456789012.dkr.ecr.us-west-2.amazonaws.com/rxminder:v1.0.0 | AWS deployments |
| Google GCR | gcr.io/project-id/rxminder:stable | Google Cloud |
| Private Registry | registry.company.com/rxminder:production | Enterprise |
| Local Registry | localhost:5000/rxminder:dev | Development |
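
Whichever registry you use, DOCKER_IMAGE is what ends up in the container spec of the frontend deployment template. A rough sketch, where the resource names, replica count, and port are assumptions rather than the exact contents of frontend-deployment.yaml.template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ${APP_NAME}-frontend
  template:
    metadata:
      labels:
        app: ${APP_NAME}-frontend
    spec:
      containers:
        - name: frontend
          image: ${DOCKER_IMAGE}    # substituted from .env by envsubst
          ports:
            - containerPort: 80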

Storage Class Options

Choose the appropriate storage class for your environment:

| Platform | Recommended Storage Class | Notes |
| --- | --- | --- |
| Raspberry Pi + Longhorn | longhorn | Distributed storage across nodes |
| k3s | local-path | Single-node local storage |
| AWS EKS | gp3 or gp2 | General Purpose SSD |
| Google GKE | pd-ssd | SSD Persistent Disk |
| Azure AKS | managed-premium | Premium SSD |
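
For context, this is roughly where the storage class and size end up in a PersistentVolumeClaim. A generic sketch with the values hard-coded; the actual couchdb-pvc.yaml may name or parameterize these fields differently:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: couchdb-data                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn        # swap for local-path, gp3, pd-ssd, etc.
  resources:
    requests:
      storage: 20Gi                 # matches STORAGE_SIZE above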

Check available storage classes:

kubectl get storageclass

Ingress Configuration

Set the external hostname for the ingress in .env:

# Local/homelab clusters (nip.io hostnames resolve to your cluster IP)
INGRESS_HOST=app.meds.192.168.1.100.nip.io

# Production with a custom domain
INGRESS_HOST=meds.yourdomain.com
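
A minimal sketch of how ingress.yaml.template can consume these variables; the ingress class, TLS settings, and annotations in the real template may differ:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${APP_NAME}-ingress         # illustrative name
spec:
  rules:
    - host: ${INGRESS_HOST}         # substituted from .env by envsubst
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ${APP_NAME}-frontend   # assumed frontend Service name
                port:
                  number: 80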

Credentials

The CouchDB credentials are stored in a Kubernetes Secret rendered from couchdb-secret.yaml.template. IMPORTANT: Set your own secure values for COUCHDB_USER and COUCHDB_PASSWORD in .env (or in the rendered secret, if deploying manually) before deploying to production.

Architecture

┌─────────────────────────────────────┐
│           Frontend Pod              │
│  ┌─────────────────────────────────┐│
│  │      React Application          ││
│  │   • Authentication Service      ││  ← Embedded in frontend
│  │   • UI Components               ││
│  │   • Medication Management       ││
│  │   • Email Integration           ││
│  └─────────────────────────────────┘│
└─────────────────────────────────────┘
                  ↓ HTTP API
┌─────────────────────────────────────┐
│          CouchDB StatefulSet        │
│   • User Data & Authentication      │
│   • Medication Records              │
│   • Persistent Storage              │
└─────────────────────────────────────┘

Key Features:

  • Monolithic Frontend: Single container with all functionality
  • Database: CouchDB running as a StatefulSet with persistent storage
  • Storage: Longhorn for persistent volume management
  • Networking: Services configured for proper communication between components (see the Service sketch below)
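
To make the "HTTP API" arrow above concrete: inside the cluster, the frontend reaches CouchDB through the CouchDB Service's DNS name. A minimal sketch of such a ClusterIP Service, where the name and selector label are assumptions (see couchdb-service.yaml for the real definition):

apiVersion: v1
kind: Service
metadata:
  name: couchdb                     # assumed Service name
spec:
  type: ClusterIP
  selector:
    app: couchdb                    # assumed CouchDB pod label
  ports:
    - port: 5984
      targetPort: 5984
# Pods in the same namespace can then reach CouchDB at http://couchdb:5984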

Raspberry Pi Compatibility

All manifests use multi-architecture images and are optimized for the ARM architecture commonly used in Raspberry Pi clusters.

Important Notes

  • The PVC defaults to the Longhorn storage class; set STORAGE_CLASS to match your cluster (see Storage Class Options above)
  • CouchDB runs as a StatefulSet for stable network identifiers
  • The frontend is exposed via a LoadBalancer service
  • CouchDB is exposed via a ClusterIP service (internal access only)