Deploy Astrology API Apps with Docker & Kubernetes

Published: March 13, 2026 | By Vedika Intelligence | Reading time: 14 minutes

Running an astrology app in production means handling spikes during Mercury retrograde, solar eclipses, and New Year — when every astrology site in the world gets traffic simultaneously. Docker and Kubernetes give you the containerization and horizontal scaling infrastructure to handle these spikes gracefully, with zero-downtime rolling deploys and self-healing pods.

This guide walks through the complete DevOps pipeline for a Node.js astrology app powered by Vedika API — the only B2B astrology API with AI-powered conversational capabilities, 140+ endpoints, and Swiss Ephemeris precision. You'll build a production-ready Dockerfile, docker-compose for local development, Kubernetes manifests, secrets management, health checks, and a GitHub Actions CI/CD pipeline.

What You'll Build: A containerized Node.js astrology app with a multi-stage Dockerfile (production image under 150MB), docker-compose for local dev, a Kubernetes Deployment with HPA scaling, a Service and Ingress, Vedika API keys in K8s Secrets, and a GitHub Actions workflow that builds, tests, and deploys on every push.
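One possible repository layout for the pieces this guide builds (illustrative; your structure may differ):

```text
.
├── Dockerfile
├── docker-compose.yml
├── .dockerignore
├── .github/
│   └── workflows/
│       └── deploy.yml
├── k8s/
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   └── hpa.yaml
├── package.json
├── tsconfig.json
└── src/
    ├── index.ts
    └── routes/
        └── health.ts
```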

Prerequisites

To follow along you'll need:

- Docker (with Docker Compose) installed locally
- Access to a Kubernetes cluster, with kubectl configured against it
- Node.js 20+ for local development
- A Vedika API key (the free Sandbox works for testing)
- A GitHub repository, if you want the CI/CD pipeline from Step 7

Step 1: Multi-Stage Dockerfile

A multi-stage build produces a minimal production image by separating the build environment from the runtime environment. This is critical for production astrology apps — the builder stage has TypeScript compiler and dev tools; the production stage has only what's needed to run:

```dockerfile
# Dockerfile

# ── Stage 1: Builder ────────────────────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app

# Copy package files first for layer caching
COPY package*.json ./
RUN npm ci --include=dev

# Copy source and compile TypeScript
COPY tsconfig.json ./
COPY src/ ./src/
RUN npm run build

# ── Stage 2: Production ─────────────────────────────────────────
FROM node:20-alpine AS production
WORKDIR /app

# Create non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Copy only production dependencies
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

# Copy compiled output from builder stage
COPY --from=builder /app/dist ./dist

# Switch to non-root user
USER appuser

# Health check for plain `docker run` and Compose. Note that
# Kubernetes ignores Docker HEALTHCHECK and uses its own
# liveness/readiness probes against the same /health endpoint.
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

EXPOSE 3000
CMD ["node", "dist/index.js"]
```
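Although the builder stage copies src/ explicitly, a .dockerignore still shrinks the build context that gets uploaded to the daemon and keeps stray files out of the image. A minimal example (adjust to your repo):

```text
node_modules
dist
.git
.env
*.md
```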

Node.js Health Check Endpoint

Add a lightweight /health endpoint for the Kubernetes probes to hit. It must respond fast (within the probe's timeoutSeconds, which defaults to 1 second) and should not call external services such as the Vedika API:

```typescript
// src/routes/health.ts
import { Router } from 'express';

const router = Router();

router.get('/health', (req, res) => {
  res.json({
    status: 'ok',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    version: process.env.npm_package_version
  });
});

router.get('/ping', (req, res) => res.send('pong'));

export default router;
```

Test the Build

```bash
# Build the image
docker build -t astrology-app:latest .

# Check image size (should be under 200MB)
docker images astrology-app

# Run locally to verify
docker run -p 3000:3000 \
  -e VEDIKA_API_KEY=vk_live_your_key \
  -e NODE_ENV=production \
  astrology-app:latest

# Test health check
curl http://localhost:3000/health
```

Step 2: Docker Compose for Local Development

Docker Compose gives your team a one-command local environment with hot reload and all services running together:

```yaml
# docker-compose.yml
# (the top-level `version` key is obsolete in Compose v2 and omitted here)
services:
  app:
    build:
      context: .
      target: builder          # Use builder stage for hot reload
    command: npm run dev
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src:ro      # Mount source for hot reload
      - /app/node_modules      # Preserve container node_modules
    env_file:
      - .env                   # API key from .env file
    environment:
      - NODE_ENV=development
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data

volumes:
  redis-data:
```
```bash
# Start local environment
docker compose up

# Start in background
docker compose up -d

# View logs
docker compose logs -f app

# Stop everything
docker compose down
```
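The compose file reads the API key from a local .env file. A hypothetical .env.example you can commit (the real .env stays gitignored):

```text
# Copy to .env and fill in real values; never commit .env itself
VEDIKA_API_KEY=vk_live_your_key_here
NODE_ENV=development
```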

Step 3: Kubernetes Secrets for API Key Management

Never put API keys in Kubernetes Deployment YAML; they end up in git history. Use Kubernetes Secrets instead. Note that Secrets are only base64-encoded by default, not encrypted; for production clusters, enable encryption at rest for etcd and restrict access with RBAC:

```bash
# Create the secret from the command line
kubectl create secret generic vedika-secret \
  --from-literal=VEDIKA_API_KEY=vk_live_your_key_here \
  --from-literal=NODE_ENV=production

# Verify (values are base64-encoded, not plaintext)
kubectl get secret vedika-secret -o yaml

# For CI/CD pipelines, create from environment variables:
kubectl create secret generic vedika-secret \
  --from-literal=VEDIKA_API_KEY=$VEDIKA_API_KEY \
  --dry-run=client -o yaml | kubectl apply -f -
```

Step 4: Kubernetes Deployment Manifest

The Deployment manages your app pods with rolling updates, health probes, and resource limits:

```yaml
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: astrology-app
  labels:
    app: astrology-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: astrology-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # Zero downtime during deploys
      maxSurge: 1
  template:
    metadata:
      labels:
        app: astrology-app
    spec:
      containers:
        - name: astrology-app
          image: ghcr.io/your-org/astrology-app:latest
          ports:
            - containerPort: 3000
          # Inject secrets as environment variables
          envFrom:
            - secretRef:
                name: vedika-secret
          # Resource limits prevent noisy-neighbour issues
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          # Liveness: restart pod if app stops responding
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
          # Readiness: remove pod from LB until it's ready
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 2
```

Step 5: Service and Ingress

The Service exposes pods internally; the Ingress handles external HTTPS traffic:

```yaml
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: astrology-app-svc
spec:
  selector:
    app: astrology-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
```
```yaml
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: astrology-app-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  # ingressClassName replaces the deprecated
  # kubernetes.io/ingress.class annotation
  ingressClassName: nginx
  tls:
    - hosts:
        - astrology.yourdomain.com
      secretName: astrology-tls
  rules:
    - host: astrology.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: astrology-app-svc
                port:
                  number: 80
```

Step 6: Horizontal Pod Autoscaler

Traffic spikes during eclipses and Mercury retrograde are predictable but intense. HPA scales your pods automatically:

```yaml
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: astrology-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: astrology-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
```bash
# Deploy all manifests
kubectl apply -f k8s/

# Verify pods are running
kubectl get pods -l app=astrology-app

# Check HPA status
kubectl get hpa astrology-app-hpa

# Watch scaling in real time
kubectl get hpa --watch
```

Ready to Deploy Your Astrology App?

Get a Vedika API key and start with our FREE Sandbox — 65 mock endpoints, no credit card required. Production plans from $12/month.

Get Your API Key

Step 7: GitHub Actions CI/CD Pipeline

Automate building, testing, and deploying your containerized astrology app on every push to main:

```yaml
# .github/workflows/deploy.yml
name: Build and Deploy Astrology App

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}/astrology-app

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test
      - run: npx tsc --noEmit

  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract Docker metadata (tags)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          # format=long makes the sha tag match the full github.sha
          # used by the deploy job below (the default is the short SHA)
          tags: |
            type=sha,prefix=sha-,format=long
            type=raw,value=latest
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure kubectl
        uses: azure/k8s-set-context@v4
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBECONFIG }}
      - name: Update image tag in Deployment
        run: |
          kubectl set image deployment/astrology-app \
            astrology-app=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:sha-${{ github.sha }}
      - name: Wait for rollout
        run: |
          kubectl rollout status deployment/astrology-app --timeout=300s
      - name: Smoke test
        run: |
          curl -f https://astrology.yourdomain.com/health
```

Step 8: Add Secrets to GitHub Actions

Store sensitive values in GitHub repository secrets — never in YAML files:

```text
# Required GitHub Actions secrets to set:
# Settings → Secrets and variables → Actions → New repository secret

KUBECONFIG        # Your kubectl config (base64 encoded)
VEDIKA_API_KEY    # Used when creating/updating K8s secret in pipeline
```

```yaml
# Optional: update K8s secret during deploy.
# Add this step to the deploy job:
- name: Sync Vedika API key to K8s secret
  run: |
    kubectl create secret generic vedika-secret \
      --from-literal=VEDIKA_API_KEY=${{ secrets.VEDIKA_API_KEY }} \
      --from-literal=NODE_ENV=production \
      --dry-run=client -o yaml | kubectl apply -f -
```

Step 9: Ingress Rate Limiting

Protect your Vedika API budget from abuse with nginx-ingress rate limiting at the Kubernetes layer:

```yaml
# Add to ingress.yaml annotations:
annotations:
  nginx.ingress.kubernetes.io/limit-rps: "10"
  nginx.ingress.kubernetes.io/limit-connections: "20"
  nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
  nginx.ingress.kubernetes.io/proxy-body-size: "1m"
```
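Ingress rate limiting caps inbound traffic, but you can also cut upstream Vedika API spend: birth-chart output is deterministic for a given input, so responses cache well. A minimal in-memory TTL cache sketch (illustrative only, not part of any Vedika SDK; in a multi-pod deployment you would back this with the Redis service from the compose file instead):

```typescript
// A tiny TTL cache keyed by the serialized request payload.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired — drop it
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// Cache chart responses for an hour (illustrative TTL);
// the key is the serialized request body.
const chartCache = new TtlCache<unknown>(60 * 60 * 1000);

function cacheKey(body: object): string {
  return JSON.stringify(body);
}

export { TtlCache, chartCache, cacheKey };
```

In the /birth-chart handler you would check `chartCache.get(cacheKey(req.body))` before calling upstream and `set` the response afterwards.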

Step 10: Node.js App — Calling Vedika API

A complete example of the Node.js app that your container runs, showing Vedika API integration with environment-based key injection:

```typescript
// src/index.ts
import express from 'express';

const app = express();
app.use(express.json());

const VEDIKA_API_KEY = process.env.VEDIKA_API_KEY;
const VEDIKA_BASE = 'https://api.vedika.io';

if (!VEDIKA_API_KEY) {
  console.error('VEDIKA_API_KEY not set — exiting');
  process.exit(1);
}

// Health check endpoint (required for K8s probes)
app.get('/health', (req, res) => {
  res.json({ status: 'ok', uptime: process.uptime() });
});

app.get('/ping', (req, res) => res.send('pong'));

// Birth chart endpoint
app.post('/birth-chart', async (req, res) => {
  const { datetime, latitude, longitude, timezone = '+05:30' } = req.body;
  try {
    const response = await fetch(`${VEDIKA_BASE}/v2/astrology/birth-chart`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': VEDIKA_API_KEY
      },
      body: JSON.stringify({ datetime, latitude, longitude, timezone })
    });
    if (!response.ok) {
      const err = await response.json();
      return res.status(response.status).json(err);
    }
    const data = await response.json();
    res.json(data);
  } catch (e) {
    res.status(500).json({ error: 'Internal server error' });
  }
});

// AI chat endpoint
app.post('/chat', async (req, res) => {
  const { question, birthDetails, language = 'en' } = req.body;
  try {
    const response = await fetch(`${VEDIKA_BASE}/api/v1/astrology/query`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': VEDIKA_API_KEY
      },
      body: JSON.stringify({ question, language, birthDetails })
    });
    const data = await response.json();
    res.status(response.status).json(data);
  } catch (e) {
    res.status(500).json({ error: 'Upstream request failed' });
  }
});

app.listen(3000, () => console.log('Astrology app running on port 3000'));
```
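Transient 429s and network blips are common while pods scale, so wrapping the upstream fetch calls in a small retry helper with exponential backoff makes the app more resilient. A sketch (illustrative; the attempt count and delays are assumptions, and you should only retry requests that are safe to repeat):

```typescript
// Retry an async operation with exponential backoff.
// attempts=3 and baseDelayMs=200 are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait 200ms, 400ms, 800ms, ... between tries
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

export { withRetry };
```

To use it, wrap the upstream call, e.g. `withRetry(() => fetch(url, opts))`, and throw on 429/5xx responses inside the callback so they count as retryable failures.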

Production Checklist

Before going live with your containerized astrology app, confirm each piece from the steps above:

  - API keys live in Kubernetes Secrets (and GitHub Actions secrets), never in YAML or git
  - Liveness and readiness probes point at /health and pass
  - Resource requests and limits are set on every container
  - The HPA is configured with sensible min/max replicas
  - TLS is issued via cert-manager and the Ingress serves HTTPS
  - Rate limiting annotations are applied at the Ingress
  - CI/CD deploys roll out with maxUnavailable: 0 and end with a smoke test

Why Choose Vedika API?

Production-Ready

99.9% uptime SLA, rate limiting, secure authentication, and comprehensive error responses for containerized apps.

140+ Endpoints

Birth charts, yogas, dashas, transits, Kundali matching, numerology, panchang, and more — one API for everything.

AI-Powered

The only astrology API with a built-in AI engine. One endpoint answers natural language questions about any chart.

Swiss Ephemeris

Astronomical-grade planetary calculations. The same precision as NASA ephemeris data.

MCP Server

World's first astrology MCP server. Your AI agents and automation pipelines can call Vedika natively.

30 Languages

Containerized apps serving global users can pass language codes and get responses in Hindi, Tamil, Arabic, and 27 more.

Pricing

Vedika API pricing scales with your containerized app's usage:

All plans include 140+ endpoints, AI chatbot, and 30-language support. View detailed pricing

Conclusion

Docker and Kubernetes give your astrology app the production infrastructure it needs to handle traffic spikes, deploy without downtime, and scale automatically. Combined with Vedika API's 140+ endpoints and AI-powered predictions, you can build a production-grade astrology platform that competes with the biggest names in the space.

Next steps:

  1. Get your Vedika API key at vedika.io/signup
  2. Clone the starter Node.js app from our documentation
  3. Build the Docker image and deploy to your cluster
  4. Set up GitHub Actions for automated CI/CD
  5. Configure HPA and watch your app scale automatically

About Vedika Intelligence: Vedika is the only B2B astrology API with AI-powered chatbot capabilities, serving production apps worldwide. Our API runs on containerized infrastructure with 99.9% uptime — the same architecture principles described in this guide.

Try the #1 Vedic Astrology API

140+ endpoints, 30 languages, Swiss Ephemeris precision. Free sandbox included — no credit card required.

Get Free API Key