Kubernetes Deployment

Deploy Noxys on Kubernetes using our Helm chart for enterprise-scale deployments with high availability.

Prerequisites

  • Kubernetes: v1.24+ cluster
  • Helm: v3.10+
  • kubectl: configured and authenticated
  • Storage: Persistent storage provisioner (standard for cloud providers)
  • DNS/TLS: Ingress controller (ingress-nginx recommended) and cert-manager for TLS certificates

Install Helm Chart

Add Noxys Helm Repository

helm repo add noxys https://helm.noxys.cloud/
helm repo update

Create Namespace

kubectl create namespace noxys

Create Values File

Create values.yaml for your deployment:

# values.yaml
noxys:
  env: production
  domain: noxys.company.com

  # JWT Secret (generate with: openssl rand -base64 32)
  jwtSecret: "your-secret-key-min-32-bytes-long-here"

api:
  replicas: 3
  image:
    repository: noxys/proxy
    tag: latest
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2
      memory: 4Gi

console:
  replicas: 2
  image:
    repository: noxys/console
    tag: latest
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

postgresql:
  enabled: true
  auth:
    username: noxys
    password: "secure-password-here"
    database: noxys
  primary:
    resources:
      requests:
        cpu: 500m
        memory: 2Gi
      limits:
        cpu: 2
        memory: 4Gi
    persistence:
      size: 100Gi
      storageClassName: "fast" # Use your high-performance storage class

redis:
  enabled: true
  auth:
    enabled: false
  master:
    resources:
      requests:
        cpu: 250m
        memory: 512Mi
      limits:
        cpu: 1
        memory: 1Gi
    persistence:
      size: 20Gi

nats:
  enabled: true
  jetstream:
    enabled: true
  natsBox:
    enabled: false
  persistence:
    enabled: true
    size: 50Gi

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: noxys.company.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: noxys-tls
      hosts:
        - noxys.company.com

monitoring:
  enabled: true
  prometheus:
    enabled: true
  grafana:
    enabled: true

Install Chart

helm install noxys noxys/noxys \
  --namespace noxys \
  --values values.yaml
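Rather than committing the JWT secret to values.yaml, you can generate it at install time and pass it with --set (the noxys.jwtSecret path follows the values layout above). A sketch:

```shell
# Generate a 32-byte, base64-encoded secret, the format values.yaml asks for
JWT_SECRET=$(openssl rand -base64 32)

# base64 of 32 bytes is always 44 characters
echo "${#JWT_SECRET}"   # prints 44
```

Then append --set noxys.jwtSecret="$JWT_SECRET" to the helm install command above.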

Helm Chart Structure

The Noxys Helm chart includes:

noxys/
├── Chart.yaml
├── values.yaml (default values)
├── templates/
│   ├── api-deployment.yaml
│   ├── api-service.yaml
│   ├── console-deployment.yaml
│   ├── console-service.yaml
│   ├── postgres-statefulset.yaml
│   ├── redis-deployment.yaml
│   ├── nats-statefulset.yaml
│   ├── ingress.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── pvc.yaml
│   └── rbac.yaml
└── charts/
    ├── postgresql
    ├── redis
    └── nats

Manual Kubernetes Configuration

If you prefer not to use Helm, here are the base Kubernetes manifests:

Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: noxys

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: noxys-config
  namespace: noxys
data:
  NOXYS_ENV: production
  NOXYS_LOG_LEVEL: info
  NOXYS_ALLOWED_DOMAINS: company.com,subsidiary.com

Secrets

# Create secret with sensitive data
kubectl create secret generic noxys-secrets \
  --from-literal=jwt-secret=$(openssl rand -base64 32) \
  --from-literal=postgres-password=secure-password \
  -n noxys
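The same Secret can also be managed declaratively. A sketch using stringData, which lets the API server do the base64 encoding for you (the values shown are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: noxys-secrets
  namespace: noxys
type: Opaque
stringData:                 # plain text; Kubernetes base64-encodes it on write
  jwt-secret: "replace-with-openssl-rand-output"
  postgres-password: "secure-password"
```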

PostgreSQL StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: noxys
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: noxys
            - name: POSTGRES_USER
              value: noxys
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: noxys-secrets
                  key: postgres-password
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              cpu: 500m
              memory: 2Gi
            limits:
              cpu: 2
              memory: 4Gi
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - pg_isready -U noxys
            initialDelaySeconds: 30
            periodSeconds: 10
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 100Gi
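Note that serviceName: postgres in the StatefulSet (and the postgres hostname used in the API's database URL below) assumes a Service named postgres, which the manifests here don't include. A minimal headless Service sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: noxys
spec:
  clusterIP: None      # headless; matches the StatefulSet's serviceName
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```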

API Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: noxys-api
  namespace: noxys
spec:
  replicas: 3
  selector:
    matchLabels:
      app: noxys-api
  template:
    metadata:
      labels:
        app: noxys-api
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - noxys-api
                topologyKey: kubernetes.io/hostname
      containers:
        - name: api
          image: noxys/proxy:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: NOXYS_ENV
              valueFrom:
                configMapKeyRef:
                  name: noxys-config
                  key: NOXYS_ENV
            - name: NOXYS_JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: noxys-secrets
                  key: jwt-secret
            # POSTGRES_PASSWORD must be declared before NOXYS_DB_URL, or the
            # $(POSTGRES_PASSWORD) reference below will not be expanded
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: noxys-secrets
                  key: postgres-password
            - name: NOXYS_DB_URL
              value: "postgres://noxys:$(POSTGRES_PASSWORD)@postgres:5432/noxys"
            - name: NOXYS_REDIS_URL
              value: "redis://redis:6379/0"
            - name: NOXYS_NATS_URL
              value: "nats://nats:4222"
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 2
              memory: 4Gi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 2

---
apiVersion: v1
kind: Service
metadata:
  name: noxys-api
  namespace: noxys
  labels:
    app: noxys-api   # required for the ServiceMonitor selector below
spec:
  selector:
    app: noxys-api
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP

Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: noxys-ingress
  namespace: noxys
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rps: "100"   # requests per second per client IP
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - noxys.company.com
      secretName: noxys-tls
  rules:
    - host: noxys.company.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: noxys-api
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: noxys-console
                port:
                  number: 3000
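The Ingress routes / to a noxys-console Service on port 3000 that isn't among the manual manifests above. A minimal sketch (the selector label and port are assumptions based on the Helm values; the console Deployment itself mirrors the API Deployment with the noxys/console image):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: noxys-console
  namespace: noxys
spec:
  selector:
    app: noxys-console
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
  type: ClusterIP
```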

Scaling

Horizontal Pod Autoscaling

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: noxys-api-hpa
  namespace: noxys
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: noxys-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
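Resource-based HPAs require the metrics-server (or another metrics API provider) in the cluster. If replica counts flap under bursty traffic, autoscaling/v2 also supports a behavior section; for example, appended to the spec above:

```yaml
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
```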

Pod Disruption Budget

For high availability during upgrades:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: noxys-api-pdb
  namespace: noxys
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: noxys-api

Multi-Region HA

For disaster recovery across regions:

# Primary cluster (us-east-1)
helm install noxys-primary noxys/noxys \
  --namespace noxys \
  --values values-primary.yaml

# Replica cluster (eu-west-1)
helm install noxys-replica noxys/noxys \
  --namespace noxys \
  --values values-replica.yaml \
  --set postgresql.replication.enabled=true \
  --set postgresql.primary.host=postgres.primary.noxys.svc.cluster.local

Monitoring & Logging

Prometheus ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: noxys
  namespace: noxys
spec:
  selector:
    matchLabels:
      app: noxys-api
  endpoints:
    - port: metrics
      interval: 30s
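A ServiceMonitor selects Services (not Pods), so the noxys-api Service must carry the app: noxys-api label and expose a port named metrics. A sketch of the extra ports entry (9090 is an assumption; use whatever port Noxys actually serves metrics on):

```yaml
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090
```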

ELK Stack Integration

Configure Fluent Bit to send logs to Elasticsearch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: noxys
data:
  fluent-bit.conf: |
    [INPUT]
        Name              tail
        Path              /var/log/containers/noxys_*.log
        Parser            docker
        Tag               kube.*
        Refresh_Interval  5

    [OUTPUT]
        Name             es
        Match            *
        Host             elasticsearch.logging.svc.cluster.local
        Port             9200
        Logstash_Format  On
        Logstash_Prefix  noxys    # daily indices: noxys-YYYY.MM.DD

Upgrades

Zero-Downtime Upgrade

# Update values
helm upgrade noxys noxys/noxys \
  --namespace noxys \
  --values values.yaml \
  --wait \
  --timeout 10m

Rollback if Needed

helm rollback noxys 1 -n noxys

Backup Strategy

Automated PostgreSQL Backups

Use a CronJob to backup database daily:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: noxys
spec:
  schedule: "0 2 * * *" # 2 AM daily
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: noxys
          containers:
            - name: backup
              image: postgres:16
              env:
                - name: PGPASSWORD   # pg_dump needs the password non-interactively
                  valueFrom:
                    secretKeyRef:
                      name: noxys-secrets
                      key: postgres-password
              command:
                - /bin/sh
                - -c
                - |
                  pg_dump -h postgres -U noxys noxys | gzip > /backups/noxys_$(date +%Y-%m-%d_%H-%M-%S).sql.gz
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: backups-pvc
          restartPolicy: OnFailure
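The CronJob above accumulates dumps indefinitely. A retention sketch that could run as a second scheduled job (the 14-day window is an arbitrary choice; in the cluster, the scratch directory would be the CronJob's /backups mount):

```shell
# Demonstrate the retention rule on a scratch directory
BACKUP_DIR=$(mktemp -d)                 # stand-in for the CronJob's /backups mount
touch -d '20 days ago' "$BACKUP_DIR/noxys_2024-01-01_02-00-00.sql.gz"
touch "$BACKUP_DIR/noxys_new.sql.gz"

# Delete gzipped dumps older than 14 days; keep everything newer
find "$BACKUP_DIR" -name 'noxys_*.sql.gz' -mtime +14 -delete

ls "$BACKUP_DIR"   # only the recent dump remains
```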

S3 Backup

Use Velero for full cluster backup to S3:

velero backup create noxys-backup --include-namespaces noxys

Troubleshooting

Check Pod Status

kubectl get pods -n noxys
kubectl describe pod <pod-name> -n noxys
kubectl logs deploy/noxys-api -n noxys

Port Forwarding for Debugging

# Access API directly
kubectl port-forward -n noxys svc/noxys-api 8080:8080

# Access database
kubectl port-forward -n noxys svc/postgres 5432:5432

View Events

kubectl get events -n noxys --sort-by='.lastTimestamp'

Questions? Email support@noxys.eu or visit doc.noxys.cloud