Available for opportunities

Building Resilient Cloud Systems

Multi-Cloud DevOps Engineer crafting scalable infrastructure, CI/CD pipelines, and container orchestration across AWS · Azure · GCP

3+ Cloud Platforms
8+ Projects Built
8.88 CGPA
ranjit@devops ~ $
$ kubectl get pods --all-namespaces
✓ 12/12 pods running
$ terraform apply -auto-approve
✓ 8 resources added
$ docker build -t app:v2.1 .
✓ Successfully built a8f3b21c
$ ansible-playbook deploy.yml
✓ PLAY RECAP — ok=14 changed=3

Engineering reliability into every layer

I'm a multi-cloud DevOps engineer with hands-on experience across AWS, Azure and GCP. My work sits at the intersection of infrastructure automation and developer experience — making deployments faster, systems more resilient, and operations invisible.

Skilled in Linux administration, CI/CD automation, Docker & Kubernetes orchestration, Infrastructure as Code with Terraform, and observability stacks built on Prometheus & Grafana. I believe infrastructure should be code — versioned, tested, and reviewed like any other software.

AWS Certified · Kubernetes · GitOps · SRE Practices

Technical Arsenal

Cloud Platforms

  • AWS — EC2, S3, IAM, VPC, RDS, ELB, ASG, CloudWatch, ECR
  • Microsoft Azure
  • Google Cloud Platform (GCP)

Containers & IaC

  • Docker & Docker Compose
  • Kubernetes (EKS, self-managed)
  • Terraform — modular design
  • Ansible — configuration mgmt

CI/CD & Monitoring

  • Jenkins & GitHub Actions
  • GitLab CI/CD pipelines
  • SonarQube — code quality
  • Prometheus & Grafana
Linux · Bash · Git · Maven · Nginx · MySQL · Python · YAML · Helm · AWS Lambda · NAT Gateway · IAM

Featured Work

01

Cloud-Native CI/CD Pipeline on AWS EKS

Jenkins · ECR · EKS · Terraform · SonarQube · Docker

Built an end-to-end CI/CD pipeline integrating GitHub webhooks with Jenkins to trigger automated Maven builds, SonarQube static analysis and Docker image creation. Images were versioned and pushed to Amazon ECR, then deployed to Amazon EKS using rolling update strategies with Horizontal Pod Autoscaling tied to CPU and memory metrics. All AWS infrastructure — VPC, node groups, IAM roles and ECR — was provisioned via modular Terraform, enabling consistent environment replication across dev, staging and production.

Zero-downtime rolling deployments on EKS
HPA scaling on CPU & memory metrics
SonarQube quality gates block bad builds
Modular Terraform for multi-env parity
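A pipeline like this can be sketched as a declarative Jenkinsfile — a minimal illustration only; the stage names, registry address, deployment name and SonarQube server ID below are assumptions, not the project's actual code:

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical ECR repository — substitute your account ID and region
        ECR_REPO = '123456789012.dkr.ecr.us-east-1.amazonaws.com/app'
    }
    stages {
        stage('Build') {
            steps { sh 'mvn clean package' }
        }
        stage('Static Analysis') {
            steps {
                // Assumes a SonarQube server registered in Jenkins as 'sonar'
                withSonarQubeEnv('sonar') { sh 'mvn sonar:sonar' }
            }
        }
        stage('Docker Build & Push') {
            steps {
                sh """
                  docker build -t ${ECR_REPO}:${BUILD_NUMBER} .
                  aws ecr get-login-password | docker login --username AWS --password-stdin ${ECR_REPO}
                  docker push ${ECR_REPO}:${BUILD_NUMBER}
                """
            }
        }
        stage('Deploy to EKS') {
            steps {
                // Rolling update: bump the image tag, then wait for the rollout
                sh "kubectl set image deployment/app app=${ECR_REPO}:${BUILD_NUMBER}"
                sh 'kubectl rollout status deployment/app'
            }
        }
    }
}
```

The GitHub webhook triggers this pipeline on push; quality-gate failures in the analysis stage stop the build before any image is published.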
02

Secure Three-Tier AWS Architecture

VPC · EC2 · RDS · S3 · ALB · IAM · NAT Gateway

Designed a production-grade three-tier AWS architecture with strict network segmentation. Public subnets host the Application Load Balancer and NAT Gateway; private subnets contain EC2 application servers and Multi-AZ RDS MySQL instances, with no direct internet exposure. S3 was used for static assets and backups, secured via IAM bucket policies and server-side encryption. Security groups enforced least-privilege rules between each tier, and ALB handled SSL termination with ACM certificates, routing traffic based on path-based rules.

Multi-AZ RDS with automated failover
ALB with ACM SSL termination
S3 server-side encryption & lifecycle rules
Zero public exposure for app & DB tiers
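The least-privilege chain between tiers can be sketched in Terraform — resource names, ports and the VPC reference are illustrative, not the actual project code:

```hcl
# ALB tier: the only component reachable from the internet
resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # public HTTPS terminates at the ALB
  }
}

# App tier: accepts traffic only from the ALB security group
resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }
}

# DB tier: accepts MySQL connections only from the app security group
resource "aws_security_group" "db" {
  name   = "db-sg"
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]
  }
}
```

Referencing security groups rather than CIDR ranges means the rules keep working as instances scale in and out.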
03

Auto Scaling & Observability Platform

EC2 ASG · CloudWatch · Lambda · Prometheus · Grafana · Ansible

Implemented event-driven Auto Scaling for EC2 fleets using Auto Scaling Groups with custom CloudWatch alarms on CPU utilization, request latency and memory metrics. AWS Lambda functions subscribed to CloudWatch Events to automate instance lifecycle tasks — tagging new instances, running Ansible playbooks for server bootstrapping and triggering Slack alerts. Prometheus scraped metrics from all EC2 nodes and EKS pods, feeding Grafana dashboards with real-time visibility into infrastructure health and application performance.

ASG scales in/out on CloudWatch alarms
Lambda automates instance bootstrapping
Grafana dashboards for full-stack metrics
Ansible idempotent playbooks for config
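The instance-lifecycle Lambda can be sketched as below — a minimal illustration assuming an EC2 state-change event from CloudWatch Events; the tag keys are hypothetical, not the project's actual values:

```python
def tags_for_launch_event(event):
    """Derive the tag set for a newly launched instance from an EC2
    instance state-change event (CloudWatch Events / EventBridge shape).
    Pure function, so it can be unit-tested without AWS access."""
    instance_id = event["detail"]["instance-id"]
    return instance_id, [
        {"Key": "ManagedBy", "Value": "autoscaling"},
        {"Key": "Bootstrap", "Value": "pending"},  # flipped once the Ansible playbook has run
    ]

def handler(event, context):
    # boto3 ships with the Lambda runtime; imported lazily so the pure
    # logic above stays testable without AWS credentials.
    import boto3
    instance_id, tags = tags_for_launch_event(event)
    boto3.client("ec2").create_tags(Resources=[instance_id], Tags=tags)
    return {"tagged": instance_id}
```

Keeping the event parsing separate from the boto3 call is what makes the bootstrapping step easy to test and extend (e.g. adding the Slack notification).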
04

Serverless Data Processing Pipeline

AWS Lambda · S3 · CloudWatch · IAM · Python

Architected a fully serverless ETL pipeline using S3 event notifications to trigger AWS Lambda functions upon file uploads. Lambda functions — written in Python — processed, validated and transformed incoming data, then wrote results to a separate S3 output bucket. CloudWatch Logs captured all Lambda execution output with structured logging, and CloudWatch Alarms notified on error rate spikes or duration thresholds. IAM execution roles followed the least-privilege principle with resource-level S3 permissions and no wildcard policies.

S3 event-driven Lambda invocations
Structured CloudWatch logging & alarms
Zero-server ETL — no EC2 required
Least-privilege IAM per-resource policies
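The shape of such a handler can be sketched as follows — the validation rule (keep records carrying an `id` field, lower-case the keys) and the output bucket name are illustrative assumptions, not the project's actual logic:

```python
import json
import urllib.parse

def transform(raw: bytes) -> bytes:
    """Validate and transform one uploaded JSON file: drop records that
    lack an 'id' field and normalise all keys to lower case."""
    records = json.loads(raw)
    cleaned = [
        {k.lower(): v for k, v in rec.items()}
        for rec in records
        if "id" in {k.lower() for k in rec}
    ]
    return json.dumps(cleaned).encode()

def handler(event, context):
    import boto3  # available in the Lambda runtime
    s3 = boto3.client("s3")
    for rec in event["Records"]:  # one entry per S3 event notification
        bucket = rec["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(rec["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # "output-bucket" is a placeholder for the separate output bucket
        s3.put_object(Bucket="output-bucket", Key=key, Body=transform(body))
```

Because `transform` is pure, the validation logic can be unit-tested locally while the handler itself only wires up S3 I/O.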
05

Multi-Environment Infrastructure with Terraform & GitLab CI

Terraform · GitLab CI/CD · VPC · EC2 · IAM · S3 Backend

Developed reusable Terraform modules for VPC, EC2, IAM and RDS to provision isolated dev, staging and production environments from a single codebase. Remote state was stored in S3 with DynamoDB state locking to prevent concurrent apply conflicts across teams. GitLab CI/CD ran terraform plan on every merge request for automated drift detection, and terraform apply on merge to main behind a manual approval gate. IAM roles scoped per environment ensured complete blast-radius isolation.

S3 + DynamoDB remote state locking
GitLab CI plan on MR, apply on merge
Reusable modules across 3 environments
Per-env IAM roles for blast-radius control
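The plan/apply split can be sketched in GitLab CI — job names and the approval wiring below are illustrative, not the project's actual pipeline:

```yaml
stages: [plan, apply]

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan   # saved plan = what apply will execute
  artifacts:
    paths: [plan.tfplan]

apply:
  stage: apply
  needs: [plan]
  script:
    - terraform init
    - terraform apply plan.tfplan       # applies exactly the reviewed plan
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual                      # the manual approval gate
```

Applying the saved `plan.tfplan` artifact, rather than re-planning, guarantees the approved changes are exactly the changes executed.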
06

High-Availability Web App with ELB & RDS Multi-AZ

ELB · EC2 · ASG · RDS Multi-AZ · CloudWatch · VPC

Deployed a highly available web application across two Availability Zones using an Elastic Load Balancer distributing traffic to an Auto Scaling Group of EC2 instances. The ASG maintained minimum healthy capacity and replaced unhealthy instances automatically using EC2 health checks and ELB health endpoints. The database tier used Amazon RDS Multi-AZ with automated standby failover under 60 seconds. CloudWatch dashboards tracked ELB request counts, 5xx error rates and RDS replication lag, with SNS notifications for threshold breaches.

ELB across 2 AZs with health checks
ASG auto-replaces unhealthy EC2 nodes
RDS Multi-AZ failover under 60 seconds
SNS alerts on 5xx spike & replication lag
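One of the alarm-to-SNS wirings can be sketched in Terraform — the names, threshold and load-balancer reference are illustrative assumptions:

```hcl
resource "aws_sns_topic" "alerts" {
  name = "web-app-alerts"
}

# Fire when the classic ELB returns a sustained burst of 5xx responses
resource "aws_cloudwatch_metric_alarm" "elb_5xx" {
  alarm_name          = "elb-5xx-spike"
  namespace           = "AWS/ELB"
  metric_name         = "HTTPCode_ELB_5XX"
  statistic           = "Sum"
  period              = 60
  evaluation_periods  = 3
  threshold           = 25   # >25 5xx responses/min, 3 minutes running
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.alerts.arn]
  dimensions = {
    LoadBalancerName = aws_elb.web.name
  }
}
```

A parallel alarm on the RDS `ReplicaLag` metric covers the replication-lag notification on the database tier.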
07

Microservices Containerisation with Docker & Kubernetes

Docker · Docker Compose · Kubernetes · Helm · Nginx · Prometheus · Grafana

Containerised a multi-service application — frontend (Nginx), backend API and MySQL database — using Docker and Docker Compose for local development parity. Each service was packaged as a versioned Docker image and pushed to a private registry. The stack was then migrated to Kubernetes using hand-crafted manifests: Deployments, Services, ConfigMaps, Secrets and PersistentVolumeClaims. Helm charts were authored to template the entire release, enabling parameterised deploys across namespaces. Nginx acted as the ingress controller handling path-based routing and SSL passthrough. Prometheus scraped pod-level metrics via ServiceMonitor CRDs and Grafana dashboards visualised container CPU, memory, request rate and error budgets in real time.

Docker Compose for dev / Kubernetes for prod
Helm charts with per-env value overrides
Nginx ingress with path-based routing & SSL
Prometheus + Grafana full container observability
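A Helm-templated Deployment from such a chart might look like this — chart structure and value names are illustrative, not the project's actual manifests:

```yaml
# templates/api-deployment.yaml — image tag and replica count come from
# per-environment values files (values-dev.yaml, values-prod.yaml, ...)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-api
spec:
  replicas: {{ .Values.api.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-api
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-api
    spec:
      containers:
        - name: api
          image: "{{ .Values.registry }}/api:{{ .Values.api.tag }}"
          envFrom:
            - configMapRef:
                name: {{ .Release.Name }}-config
            - secretRef:
                name: {{ .Release.Name }}-secrets
```

`helm install app ./chart -f values-prod.yaml -n production` then renders the same templates with environment-specific overrides per namespace.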
08

GitOps Delivery Pipeline with ArgoCD & Kubernetes

ArgoCD · Kubernetes · Helm · GitHub Actions · Docker · SonarQube · Ansible

Established a fully GitOps-driven delivery workflow where the Git repository is the single source of truth for all Kubernetes state. GitHub Actions handled the CI phase — running Maven builds, SonarQube quality gates and Docker image builds — then committed updated Helm chart image tags back to the config repository. ArgoCD watched the config repo and automatically synced desired state to the Kubernetes cluster using Application CRDs with self-healing enabled, meaning any out-of-band cluster changes were reverted instantly. Ansible playbooks bootstrapped fresh Kubernetes nodes, installing required system packages, container runtime and joining nodes to the cluster. Rollbacks were one-command Git reverts, making recovery from bad releases deterministic and auditable.

ArgoCD self-healing — drift auto-corrected
GitHub Actions CI pushes new image tags to Git
SonarQube gates integrated in CI before deploy
Ansible bootstraps nodes — zero manual steps
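The self-healing sync boils down to an ArgoCD Application CRD like the following — the repo URL, chart path and namespaces are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/config-repo.git  # the config repo CI commits to
    targetRevision: main
    path: charts/app
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert any out-of-band cluster changes
```

With `selfHeal` enabled, a `git revert` on the config repo is all a rollback takes: ArgoCD converges the cluster back to the previous desired state.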

Academic Background

🎓
Master of Computer Applications (MCA) — ABIT, Cuttack — CGPA 8.88 / 10
📘
Bachelor's Degree — Utkal University — Graduated

Let's Build Together

Open to full-time roles, freelance infrastructure projects, and consulting. Drop me a message — I respond within 24 hours.