Upgrade Hyperswitch
GitOps-Based Deployment using ArgoCD (App-of-Apps Pattern)
Using a GitOps orchestration platform such as ArgoCD allows Hyperswitch deployments to be managed declaratively via Git.
Benefits include:
Version-controlled deployments
Automated reconciliation and drift detection
Reproducible environments
Simplified rollbacks
Centralized multi-cluster management
Instead of manually executing Helm commands such as:

```bash
helm upgrade --install hyperswitch ...
```

...the desired deployment state is defined in Git and continuously reconciled by ArgoCD.
This guide uses ArgoCD, but similar GitOps tools can be used.
Target Architecture
A blue/green cluster upgrade model is recommended.
This strategy involves provisioning a parallel environment (green) where the upgraded version of Hyperswitch is deployed and validated before production traffic is switched over.
The existing environment (blue) continues serving live traffic during this process, allowing controlled cutover and providing a straightforward rollback mechanism if any issues arise after the upgrade.
During upgrade:
hyperswitch-v1: current production cluster
hyperswitch-v2: upgraded cluster
ArgoCD: GitOps deployment controller
Git repository: system source of truth
external services: shared Postgres, Redis, secrets, observability
Stateful infrastructure such as databases should not be recreated during cluster upgrades.
ArgoCD App-of-Apps Pattern
The diagram illustrates how Hyperswitch deployments are managed using the App-of-Apps pattern in ArgoCD.

In this model, a single “Root Application” manages multiple child applications, allowing complex systems to be deployed and maintained in a structured and scalable way.
Git Repository: All deployment configurations are stored in a Git repository. This includes the ArgoCD application definitions and Helm chart references that describe the desired state of the system.
ArgoCD: ArgoCD continuously monitors the Git repository and ensures that the Kubernetes cluster matches the configuration defined in Git. Any changes committed to the repository are automatically synchronized to the cluster.
Root Application (App-of-Apps): The Root Application acts as the entry point for the deployment. Instead of directly deploying services, it references multiple child applications. This structure is known as the App-of-Apps pattern, where one parent application manages a collection of related applications.
Environment Applications: The Root Application deploys environment-level applications (for example: Dev, Staging, Production). Each environment application contains configuration specific to that environment.
Helm Applications: Each environment application then deploys individual Helm-based applications, which package and deploy specific components of the system.
Hyperswitch + Platform Components: These Helm applications ultimately deploy Hyperswitch services and supporting platform components into the Kubernetes cluster.
Using the App-of-Apps pattern allows teams to manage large deployments in a modular, hierarchical structure, making it easier to organize environments, promote changes across stages, and maintain consistency across clusters.
1. Prepare the GitOps Repository Structure
The App-of-Apps pattern organizes deployments hierarchically.
Example repository layout:
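One possible layout, matching the structure overview in this section (directory and file names are illustrative):

```
hyperswitch-gitops/                # repository name is an example
├── applications/
│   └── root-app.yaml              # root ArgoCD application
├── environments/
│   ├── production/
│   │   ├── hyperswitch-v1.yaml
│   │   └── hyperswitch-v2.yaml
│   └── staging/
└── apps/
    └── hyperswitch/
        ├── application.yaml
        └── values.yaml
```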
Structure overview:
applications/: root ArgoCD application
environments/: cluster-specific deployments
apps/: application Helm configurations
This structure enables environment isolation and reusable application definitions.
2. Provision the New Kubernetes Cluster
Provision a new cluster for the upgraded deployment.
Supported platforms include:
managed Kubernetes services
self-managed Kubernetes clusters
on-prem Kubernetes environments
Verify cluster connectivity:
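A quick connectivity check from your workstation (the context name is an example; use the context for your new cluster):

```bash
kubectl config use-context hyperswitch-v2
kubectl cluster-info
kubectl get nodes
```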
Recommended cluster baseline:
autoscaling: enabled
node pools: separate system and application pools
ingress controller: installed
certificate management: enabled
RBAC: enabled
network policies: recommended
3. Install ArgoCD
Install ArgoCD on a management cluster or a designated platform cluster.
Installation example:
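A typical installation using the standard ArgoCD manifests:

```bash
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```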
Verify installation:
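Confirm that all ArgoCD components reach a ready state:

```bash
kubectl get pods -n argocd
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s
```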
Access the UI:
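For initial access, port-forward the ArgoCD API server and retrieve the generated admin password:

```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
argocd admin initial-password -n argocd
```

The UI is then available at https://localhost:8080.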
Production deployments should expose ArgoCD through:
secure ingress
TLS certificates
SSO authentication
4. Register Clusters in ArgoCD
Register target clusters so ArgoCD can deploy applications.
Example clusters:
hyperswitch-v1: current production
hyperswitch-v2: upgrade deployment
ArgoCD can then deploy applications to multiple clusters.
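Registration can be done with the ArgoCD CLI, using the kubeconfig context name of each cluster (context names below are examples):

```bash
argocd cluster add hyperswitch-v1-context
argocd cluster add hyperswitch-v2-context
argocd cluster list
```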
5. Deploy the Root Application (App-of-Apps)
The root application manages all other applications.
Example:
applications/root-app.yaml
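A minimal root application sketch, assuming the repository layout from step 1 (the repository URL and revision are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/hyperswitch-gitops.git  # placeholder
    targetRevision: main
    path: environments/production    # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```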
Apply:
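The root application is the only manifest applied by hand:

```bash
kubectl apply -n argocd -f applications/root-app.yaml
```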
ArgoCD will now automatically deploy all applications defined in the repository.
6. Define Cluster Applications
Environment files define which applications are deployed to each cluster.
Example:
environments/production/hyperswitch-v2.yaml
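A sketch of an environment-level application targeting the new cluster registered in step 4 (repository URL and cluster name are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hyperswitch-v2
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/hyperswitch-gitops.git  # placeholder
    targetRevision: main
    path: apps/hyperswitch
  destination:
    name: hyperswitch-v2        # cluster name registered in ArgoCD
    namespace: hyperswitch
  syncPolicy:
    automated:
      selfHeal: true
```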
Each environment can deploy different versions or configurations.
7. Deploy Hyperswitch via Helm
The Hyperswitch application definition references the Helm chart.
Example:
apps/hyperswitch/application.yaml
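A sketch assuming the public Hyperswitch Helm chart repository; verify the repository URL, chart name, and chart version against the Hyperswitch documentation for your target release:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hyperswitch
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://juspay.github.io/hyperswitch-helm  # verify for your release
    chart: hyperswitch-stack                            # verify chart name
    targetRevision: x.y.z                               # pin the upgrade version
    helm:
      releaseName: hyperswitch
  destination:
    name: hyperswitch-v2
    namespace: hyperswitch
  syncPolicy:
    automated:
      selfHeal: true
```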
ArgoCD will automatically:
pull the Helm chart
apply configuration
maintain cluster state
8. Plan Database Migration
Database schema changes must be handled carefully.
Disable automatic migrations during upgrade:
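The exact toggle depends on the chart version; the key below is hypothetical, so check the chart's values.yaml for the actual migration setting:

```yaml
# Helm values override (hypothetical key — confirm against your chart version)
hyperswitch:
  runMigrations: false
```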
Recommended process:
1. Deploy the new Hyperswitch version.
2. Ensure application pods are not receiving production traffic.
3. Create a database backup or snapshot.
4. Run the migration job.
5. Validate schema changes.
6. Scale application replicas back up.
Example:
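A sketch of a one-shot migration Job. The image tag and command are assumptions (Hyperswitch uses Diesel migrations; consult the release notes for the exact migration image and command for your version):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hyperswitch-migrate
  namespace: hyperswitch
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: juspay/hyperswitch-router:<new-version>  # placeholder tag
          command: ["diesel", "migration", "run"]          # assumed command
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: hyperswitch-db   # assumed secret name
                  key: url
```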
9. Gradually Shift Traffic
Traffic should be migrated progressively to the new cluster.
Traffic management options:
DNS weighted routing: gradual traffic shift
load balancer weighting: layer-7 traffic routing
service mesh: advanced routing policies
Example rollout plan (old cluster / new cluster):
initial: 95% / 5%
validation: 75% / 25%
ramp-up: 50% / 50%
final: 0% / 100%
10. Validate System Health
Monitor the deployment during rollout.
Key metrics include:
payment success rate: transaction reliability
API error rate: service failures
database connections: connection pool usage
Redis latency: cache performance
CPU and memory: resource utilization
webhook delivery: downstream system processing
Observability dashboards should be available before rollout.
11. Decommission the Previous Cluster
After the rollout stabilizes:
Confirm that all traffic flows to the new cluster.
Monitor system stability.
Retain the old cluster for a rollback window.
Decommission the cluster once the upgrade is confirmed.
Typical retention window: 24–72 hours.
Rollback Strategy
Traffic rollback: redirect traffic back to the previous cluster.
Application rollback: update the Git configuration to the previous version:
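For example, pin the application back to the previous known-good chart version (excerpt; the version value is a placeholder):

```yaml
# apps/hyperswitch/application.yaml (excerpt)
source:
  chart: hyperswitch-stack   # assumed chart name
  targetRevision: x.y.z-1    # previous known-good version
```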
Commit the change and ArgoCD will automatically synchronize.
Enterprise Best Practices
Secrets Management: use centralized secret management.
Deployment Approvals: require pull request review.
Observability: integrate logs, metrics, and tracing.
Environment Promotion: Dev → Staging → Production pipeline.
Disaster Recovery: maintain a database backup strategy.