Multi-Cloud DevOps Engineer crafting scalable infrastructure, CI/CD pipelines, and container orchestration across AWS · Azure · GCP
I'm a motivated multi-cloud DevOps engineer with hands-on experience across AWS, Azure, and GCP. My work sits at the intersection of infrastructure automation and developer experience — making deployments faster, systems more resilient, and operations invisible.
Skilled in Linux administration, CI/CD automation, Docker & Kubernetes orchestration, Infrastructure as Code with Terraform, and observability with Prometheus & Grafana. I believe infrastructure should be code — versioned, tested, and reviewed like any software.
Built an end-to-end CI/CD pipeline integrating GitHub webhooks with Jenkins to trigger automated Maven builds, SonarQube static analysis and Docker image creation. Images were versioned and pushed to Amazon ECR, then deployed to Amazon EKS using rolling update strategies with Horizontal Pod Autoscaling tied to CPU and memory metrics. All AWS infrastructure — VPC, node groups, IAM roles and ECR — was provisioned via modular Terraform, enabling consistent environment replication across dev, staging and production.
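A pipeline like this typically pairs the rolling-update Deployment with a HorizontalPodAutoscaler targeting both CPU and memory. A minimal sketch — the name `api` and the utilization targets are illustrative, not the project's actual values:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75
```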
Designed a production-grade three-tier AWS architecture with strict network segmentation. Public subnets host the Application Load Balancer and NAT Gateway; private subnets contain EC2 application servers and Multi-AZ RDS MySQL instances, with no direct internet exposure. S3 was used for static assets and backups, secured via IAM bucket policies and server-side encryption. Security groups enforced least-privilege rules between each tier, and the ALB handled TLS termination with ACM certificates, routing traffic via path-based rules.
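The least-privilege rules between tiers can be sketched in Terraform by chaining security groups, so that only the ALB can reach the app tier and only the app tier can reach MySQL. Resource names and the port 8080 here are hypothetical:

```hcl
# Hypothetical snippet: traffic is allowed only tier-to-tier, never from 0.0.0.0/0.
resource "aws_security_group_rule" "alb_to_app" {
  type                     = "ingress"
  from_port                = 8080
  to_port                  = 8080
  protocol                 = "tcp"
  security_group_id        = aws_security_group.app.id
  source_security_group_id = aws_security_group.alb.id
}

resource "aws_security_group_rule" "app_to_db" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.db.id
  source_security_group_id = aws_security_group.app.id
}
```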
Implemented event-driven Auto Scaling for EC2 fleets using Auto Scaling Groups with custom CloudWatch alarms on CPU utilization, request latency and memory metrics. AWS Lambda functions subscribed to CloudWatch Events to automate instance lifecycle tasks — tagging new instances, running Ansible playbooks for server bootstrapping and triggering Slack alerts. Prometheus scraped metrics from all EC2 nodes and EKS pods, feeding Grafana dashboards with real-time visibility into infrastructure health and application performance.
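A lifecycle Lambda of this kind receives an EC2 state-change event and tags the new instance. A minimal sketch — the tag keys and the `dev` environment value are illustrative, not the project's exact scheme:

```python
def build_tags(instance_id: str, environment: str) -> list[dict]:
    """Pure helper: the tag set applied to every freshly launched instance.
    Tag keys here are illustrative."""
    return [
        {"Key": "Name", "Value": f"app-{instance_id}"},
        {"Key": "Environment", "Value": environment},
        {"Key": "ManagedBy", "Value": "lifecycle-lambda"},
    ]

def handler(event, context):
    """Triggered by an EC2 'running' state-change event from CloudWatch Events."""
    import boto3  # bundled with the AWS Lambda Python runtime
    instance_id = event["detail"]["instance-id"]
    ec2 = boto3.client("ec2")
    ec2.create_tags(Resources=[instance_id], Tags=build_tags(instance_id, "dev"))
```

Keeping the tag construction in a pure function makes the tagging policy unit-testable without mocking AWS.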
Architected a fully serverless ETL pipeline using S3 event notifications to trigger AWS Lambda functions upon file uploads. Lambda functions — written in Python — processed, validated and transformed incoming data, then wrote results to a separate S3 output bucket. CloudWatch Logs captured all Lambda execution output with structured logging, and CloudWatch Alarms notified on error rate spikes or duration thresholds. IAM execution roles followed the least-privilege principle with resource-level S3 permissions and no wildcard policies.
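The shape of such a handler: the S3 event carries bucket and key, and the transform step is a pure function that can be tested without AWS. The CSV-to-JSON-lines rule and the `etl-output` bucket name below are illustrative assumptions, not the real pipeline's logic:

```python
import csv
import io
import json

def transform(raw: str) -> str:
    """Illustrative validate-and-transform step: parse CSV text, drop rows
    with missing fields, emit JSON lines."""
    reader = csv.DictReader(io.StringIO(raw))
    rows = [row for row in reader if all(v not in (None, "") for v in row.values())]
    return "\n".join(json.dumps(row) for row in rows)

def handler(event, context):
    """S3 'ObjectCreated' notifications deliver bucket and key in the payload."""
    import boto3  # bundled with the AWS Lambda Python runtime
    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
        # Hypothetical output bucket; the real one is environment-specific.
        s3.put_object(Bucket="etl-output", Key=key + ".jsonl", Body=transform(body))
```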
Developed reusable Terraform modules for VPC, EC2, IAM and RDS to provision isolated dev, staging and production environments from a single codebase. Remote state was stored in S3 with DynamoDB state locking to prevent concurrent apply conflicts across teams. GitLab CI/CD ran terraform plan on every merge request for automated drift detection, and terraform apply on merge to main behind a manual approval gate. IAM roles scoped per environment ensured complete blast-radius isolation.
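The remote-state setup described above boils down to a backend block like this — bucket, key path, table name and region are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "org-terraform-state"          # hypothetical bucket name
    key            = "envs/staging/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"              # one lock row per state key
    encrypt        = true
  }
}
```

With the DynamoDB table in place, a second concurrent apply fails fast instead of corrupting shared state.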
Deployed a highly available web application across two Availability Zones using an Elastic Load Balancer distributing traffic to an Auto Scaling Group of EC2 instances. The ASG maintained minimum healthy capacity and replaced unhealthy instances automatically using EC2 health checks and ELB health endpoints. The database tier used Amazon RDS Multi-AZ with automated failover to a standby in under 60 seconds. CloudWatch dashboards tracked ELB request counts, 5xx error rates and RDS replication lag, with SNS notifications for threshold breaches.
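A 5xx alarm of the kind feeding those SNS notifications can be expressed as the keyword arguments for CloudWatch's put_metric_alarm call. The thresholds and alarm name are illustrative, and the SNS topic ARN is deliberately left elided:

```python
def elb_5xx_alarm(load_balancer_name: str, threshold: float) -> dict:
    """Build kwargs for cloudwatch.put_metric_alarm(). Metric names follow
    the classic ELB namespace; the threshold and periods are illustrative."""
    return {
        "AlarmName": f"{load_balancer_name}-5xx-rate",
        "Namespace": "AWS/ELB",
        "MetricName": "HTTPCode_Backend_5XX",
        "Dimensions": [{"Name": "LoadBalancerName", "Value": load_balancer_name}],
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 3,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": ["arn:aws:sns:..."],  # SNS topic ARN, elided
    }

# Usage (needs AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**elb_5xx_alarm("web-elb", 50))
```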
Containerised a multi-service application — frontend (Nginx), backend API and MySQL database — using Docker and Docker Compose for local development parity. Each service was packaged as a versioned Docker image and pushed to a private registry. The stack was then migrated to Kubernetes using hand-crafted manifests: Deployments, Services, ConfigMaps, Secrets and PersistentVolumeClaims. Helm charts were authored to template the entire release, enabling parameterised deploys across namespaces. Nginx acted as the ingress controller handling path-based routing and SSL passthrough. Prometheus scraped pod-level metrics via ServiceMonitor CRDs and Grafana dashboards visualised container CPU, memory, request rate and error budgets in real time.
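The local-development side of that stack compresses into a short Compose file; registry URL, image tags and the plaintext password below are placeholders for illustration only:

```yaml
# Hypothetical compose file mirroring the stack described above.
services:
  frontend:
    image: registry.example.com/frontend:1.4.2   # versioned, private registry
    ports: ["80:80"]
    depends_on: [api]
  api:
    image: registry.example.com/api:1.4.2
    environment:
      DB_HOST: db
    depends_on: [db]
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # use secrets for anything beyond local dev
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```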
Established a fully GitOps-driven delivery workflow where the Git repository is the single source of truth for all Kubernetes state. GitHub Actions handled the CI phase — running Maven builds, SonarQube quality gates and Docker image builds — then committed updated Helm chart image tags back to the config repository. ArgoCD watched the config repo and automatically synced desired state to the Kubernetes cluster using Application CRDs with self-healing enabled, meaning any out-of-band cluster changes were reverted instantly. Ansible playbooks bootstrapped fresh Kubernetes nodes, installing required system packages, container runtime and joining nodes to the cluster. Rollbacks were one-command Git reverts, making recovery from bad releases deterministic and auditable.
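The self-healing behaviour lives in the ArgoCD Application's syncPolicy. A sketch with placeholder repo URL, chart path and namespaces:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app                    # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/config-repo.git   # placeholder
    targetRevision: main
    path: charts/web-app
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true                  # delete resources removed from Git
      selfHeal: true               # revert out-of-band cluster changes
```

With selfHeal enabled, a kubectl edit made directly against the cluster is detected as drift and reconciled back to the Git-declared state.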
Open to full-time roles, freelance infrastructure projects, and consulting. Drop me a message — I respond within 24 hours.