n8n is an extendable workflow automation tool that empowers teams to connect APIs, services, and data pipelines with ease. While it's a breeze to get started locally, deploying n8n on Kubernetes unlocks a new level of scalability, resilience, and automation — especially when using Helm to manage the lifecycle.
In this guide, we walk through deploying n8n to a Kubernetes cluster using a custom Helm-based setup, backed by official OCI charts and automation scripts that make the experience fast and production-ready.
Project Structure
n8n/
├── Chart.yaml # Helm chart metadata
├── values.yaml # Configuration for n8n (DB, persistence, scaling)
├── templates/ # Kubernetes resource templates (Deployment, Service, PVC, Ingress)
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── pvc.yaml
│ └── service.yaml
├── scripts/ # Shell scripts for operational tasks
│ ├── deploy.sh
│ ├── uninstall.sh
│ └── cleanup.sh
└── README.md # Documentation and usage guide
Prerequisites
- A working Kubernetes cluster (minikube, k3s, EKS, GKE, etc.)
- Helm 3.8+
- kubectl configured to point at the desired cluster
- Optional: An external PostgreSQL or MySQL database
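Before deploying, it helps to confirm the tooling is in place. The following commands are a quick sanity check, not part of the repo's scripts:

helm version --short            # expect v3.8 or newer
kubectl config current-context  # confirm you're pointing at the right cluster
kubectl cluster-info            # confirm the cluster is reachable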
Deploying n8n
Run the deployment script:
./scripts/deploy.sh
This will:
- Create the namespace n8n-system if it doesn't exist
- Deploy n8n from the official OCI Helm registry: oci://8gears.container-registry.com/library/n8n
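The exact contents of deploy.sh live in the repo, but a minimal sketch of the equivalent Helm command looks like this (assuming the release name n8n-stack, which matches the resource names used later in this guide):

# Install or upgrade the chart from the OCI registry, creating the namespace if needed
helm upgrade --install n8n-stack \
  oci://8gears.container-registry.com/library/n8n \
  --namespace n8n-system --create-namespace \
  -f values.yaml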
Verify Deployment
Check the pods:
kubectl get pods -n n8n-system
Sample output:
NAME READY STATUS RESTARTS AGE
n8n-6c6fd9d6d4-8q9d8 1/1 Running 0 2m
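You can also wait for the rollout to finish and confirm the service exists (the deployment name below matches the one used in the scaling commands later in this guide):

kubectl rollout status deployment/n8n-stack-n8n-stack -n n8n-system
kubectl get svc -n n8n-system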
Access n8n
If ingress is not configured, use port-forwarding:
kubectl port-forward svc/n8n-stack-n8n-stack 5678:80 -n n8n-system &
Then open http://localhost:5678 in your browser.
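With the port-forward running, a quick check from the terminal against n8n's health endpoint confirms the instance is responding:

curl -s http://localhost:5678/healthz
# expect a small JSON status payload, e.g. {"status":"ok"}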
Configuration
Edit values.yaml to update key settings:
config:
  database:
    type: postgresdb
    postgresdb:
      host: postgres.n8n-system.svc.cluster.local
      database: n8n
      user: n8n_user
secret:
  database:
    postgresdb:
      password: "your_postgres_password"
persistence:
  enabled: true
  size: 5Gi
- config holds non-sensitive values
- secret contains secure credentials (used as Kubernetes Secrets)
- persistence enables durable volume storage
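After editing values.yaml, apply the changes by re-running the deploy script, or directly with Helm (same release name assumed as above):

helm upgrade n8n-stack \
  oci://8gears.container-registry.com/library/n8n \
  -n n8n-system -f values.yaml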
Scaling with Queue Mode
Enable queue mode and add Redis to scale horizontally:
scaling:
  enabled: true
  worker:
    count: 2
  redis:
    host: "redis-host"
    password: "redis-password"
In queue mode:
- The main pod handles the UI and triggers
- Worker pods run workflows in parallel
- Redis acts as the shared queue backend
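After upgrading the release with scaling enabled, the worker pods should appear alongside the main pod. A rough check (assuming the chart names worker pods with a "worker" suffix):

kubectl get pods -n n8n-system
# expect the main pod plus two worker pods, e.g. n8n-stack-n8n-stack-worker-...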
Operational Commands
Uninstall n8n
./scripts/uninstall.sh
Cleanup Persistent Data
./scripts/cleanup.sh
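Both scripts wrap standard Helm and kubectl calls. A rough equivalent of the two steps above, assuming the same release name and namespace, would be:

# Remove the Helm release
helm uninstall n8n-stack -n n8n-system

# Delete the persistent volume claims left behind by the release
kubectl delete pvc -n n8n-system --all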
Scale Down / Up
kubectl scale deployment n8n-stack-n8n-stack -n n8n-system --replicas=0
kubectl scale deployment n8n-stack-n8n-stack -n n8n-system --replicas=1
Included AI Workflow
This repo includes an AI-powered n8n workflow:
- Workflow file: ai-workflow.json
- Metadata: ai-metadata.yml
To use:
- Import the .json into n8n via the UI
- Connect your API keys (e.g., OpenAI)
- Execute and customize as needed
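If you prefer the command line over the UI, n8n also ships an import CLI that can be run inside the pod. A sketch, with the pod name left as a placeholder:

# Copy the workflow into the running pod, then import it with the n8n CLI
kubectl cp ai-workflow.json n8n-system/<n8n-pod-name>:/tmp/ai-workflow.json
kubectl exec -n n8n-system <n8n-pod-name> -- \
  n8n import:workflow --input=/tmp/ai-workflow.json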
Conclusion
With this Helm-based setup, n8n can be deployed in a secure, scalable, and GitOps-friendly way. Whether you’re building simple integrations or advanced AI workflows, this approach gives you full control over automation infrastructure on Kubernetes.