
Anshul Garg

Development
New Delhi, India

Skills

DevOps

About

Anshul Garg's skills align with Programmers (Information and Communication Technology). Anshul also has skills associated with Consultants and Specialists (Information and Communication Technology). Anshul Garg has 10 years of work experience, including 3 years of management experience.

Work Experience

Lead DevOps Engineer

Lululemon
January 2021 - Present
  • Working in a cloud-native environment where automation and Infrastructure as Code are guiding principles for developing high-quality, resilient software systems.
  • Building, managing, and maintaining consistent automated CI/CD pipeline standards using GitLab and Jenkins, reusable across 45+ microservices.
  • Collaborating to evolve Lululemon's existing architecture, working with teams to improve operations and implement new features and functionality.
  • Assisting the DevOps team with architecture and guidance, managing ticket queues/backlogs, and prioritizing work for other team members.
  • Led technical conversations with the Architects to identify necessary resources and choose technologies.
  • Worked closely with internal and external product teams to build, deploy, and manage scalable services in EKS.
  • Providing system analysis and recommending solutions for environment performance and availability needs.
  • Monitoring and escalating engineering issues to Product Management/R&D teams when required.
  • Managing a continuous integration/deployment methodology for our server-based technologies.
  • Building and implementing Bash and YAML scripts for streamlined deployments and system updates.
  • Delivered 150+ Knowledge Transfer (KT) sessions to audiences ranging from 1 to 127 people.
  • Writing and improving Packer scripts in Bash and Python to automate spinning up microservices in EC2.
  • Built reusable GitLab pipeline templates for all our 30+ microservices, managed by different product teams.
  • Baked a versioning strategy, separate build and release artifacts, separate build and deployment pipelines, Sonar scanning, code scanning, container scanning, centralized Helm repository management, and approvals into one GitLab pipeline template.
  • Led the technical decision discussion to migrate from Jenkins to GitLab for continuous integration, deployment, and improvements.
  • Built 20+ prototypes during the migration activity, covering all Jenkins & EC2 use cases in GitLab & EKS.
  • Documented almost everything: new features, accesses, functionality, directions for what to use and what not to use, navigating the tools, M&A, logging, troubleshooting, etc.
  • Instrumented metrics for Kafka lag-based autoscaling using KEDA and the Prometheus Adapter.
  • Instrumented Application Performance Monitoring (APM) for all containerized microservices using customized pod volumes and Docker images via the AppDynamics Cluster Agent.
  • Collaborated continuously with various teams (Enterprise Kubernetes, Monitoring, Vault, CI/CD, etc.) to improve our infrastructure architecture, practices, and security posture.
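As an illustrative aside on the Kafka lag-based autoscaling mentioned above, the scaling decision KEDA's Kafka scaler makes can be approximated in a few lines: per-partition lag is the gap between the log end offset and the committed consumer offset, and the target replica count is the total lag divided by a lag threshold. This is a hedged sketch with made-up offsets and thresholds, not values from any real Lululemon system:

```python
import math

# Sketch of KEDA-style Kafka lag scaling math; all numbers are hypothetical.

def partition_lag(log_end_offset: int, committed_offset: int) -> int:
    """Lag for one partition: messages produced but not yet consumed."""
    return max(0, log_end_offset - committed_offset)

def desired_replicas(lags, lag_threshold: int,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Replica count a lag-based scaler would target for the given lags."""
    total = sum(lags)
    wanted = math.ceil(total / lag_threshold) if total else min_replicas
    return min(max(wanted, min_replicas), max_replicas)

# Example: three partitions, threshold of 1000 messages per replica.
lags = [partition_lag(10_500, 10_000),   # lag 500
        partition_lag(8_200, 7_000),     # lag 1200
        partition_lag(5_300, 5_000)]     # lag 300
print(desired_replicas(lags, lag_threshold=1000))  # 2
```

In KEDA itself this threshold corresponds to the scaler's `lagThreshold` setting, and Prometheus Adapter can expose the same lag metric for dashboards.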

Sr. DevSecOps Consultant

Protiviti
August 2021 - November 2021
  • Worked as a DevOps consultant, designing the architecture for AppSec microservices.
  • Configured virtual machines, storage accounts, resource groups, and IPs, and created availability sets for the hybrid environment.
  • Consulted on implementing security methodologies such as shift-left, risk scoring, and threat modelling.
  • Created virtual machines via the pipeline using Terraform and handled remote login to virtual machines to troubleshoot, monitor, and deploy applications.
  • Designed and implemented Kubernetes applications, migrated microservices from virtual machines to Docker containers, and managed the clustered containers using Kubernetes.
  • Set up Artifactory to store and version the Docker images built by the build pipeline job.
  • Collaborated with the Vault team to understand requirements and created new roles and policies to integrate the new Kubernetes cluster with Vault.
  • Gathered requirements from teams such as Kubernetes, Vault, and Cloud Collaborations to understand the existing modules and starter scripts.
  • Used PowerShell to clone GitHub repositories and created a Dockerfile with instructions telling the container engine how to build the container image.
  • Built Docker images using the Jib Maven plugin for multi-module Java Spring Boot microservices.
  • Created scripts in Python to automate rotation of multiple logs from web servers; worked on bootstrapping instances using Chef and integrating with auto-scaling.
  • With the Dockerfile written, pointed Docker at the Dockerfile to build and then run the image.
  • Published our containerized Spring Boot microservice to private registries using the JFrog Container Registry, the GitLab registry, and ECR.
  • Deployed the app to a Kubernetes cluster created with EKS; designed Windows Server containers on EKS using the AWS CLI.
  • Worked with Docker images and JARs to build artifact feeds and integrated them into the pipeline flow.
  • Created custom templates to minimize the time and effort of rebuilding regular tasks.
  • Deployed Docker containers on Kubernetes and administered the creation of pods, ConfigMaps, and deployments.
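The Python log-rotation automation mentioned above can be sketched with the standard numbered-suffix scheme (`app.log` → `app.log.1` → `app.log.2` …). This is a minimal, hypothetical version; the file names and keep-count are assumptions, not the actual scripts:

```python
import os
import tempfile

def rotate(log_path: str, keep: int = 3) -> None:
    """Shift app.log -> app.log.1 -> app.log.2 ..., dropping the oldest copy."""
    oldest = f"{log_path}.{keep}"
    if os.path.exists(oldest):
        os.remove(oldest)                      # oldest generation falls off
    for i in range(keep - 1, 0, -1):
        src = f"{log_path}.{i}"
        if os.path.exists(src):
            os.rename(src, f"{log_path}.{i + 1}")
    if os.path.exists(log_path):
        os.rename(log_path, f"{log_path}.1")   # live log becomes generation 1

# Example: rotate a throwaway log twice in a temp directory.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "app.log")
    for n in range(2):
        with open(path, "w") as f:
            f.write(f"run {n}\n")
        rotate(path)
    print(sorted(os.listdir(d)))  # ['app.log.1', 'app.log.2']
```

In practice this kind of script runs from cron; on most Linux fleets `logrotate` does the same job declaratively.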

Sr. DevOps Engineer

Starbucks
January 2019 - October 2021
  • Designed and documented roles and groups using AWS Identity and Access Management (IAM) and managed hardware, software, and networking for a large-scale cluster on EC2.
  • Set up databases in AWS using RDS, storage using S3 buckets, and configured instances.
  • Secured data stored in Vault (by HashiCorp) via tightly controlled access tokens and passwords, and used AWS Secrets Manager for services such as AWS IAM, DynamoDB, SNS, and RDS.
  • Set up Jenkins to integrate the Java project and maintained Jenkins with continuous integration and deployment.
  • Performed a Proof of Concept (POC) on Synopsys's SAST solution, the Polaris Coverity tool, for our environment.
  • Tested different use cases provided by the Coverity tool for security, the OWASP Top 10, the CWE Top 25 vulnerabilities, code quality, code review, and more.
  • Collaborated with stakeholders and the AppSec, InfoSec, and SOC teams to design a future-state, serverless, secure design for our API-driven architecture.
  • Performed cloud development and automation using Node.js, Python (Boto3), AWS Lambda, the AWS CDK (Cloud Development Kit), and AWS SAM (Serverless Application Model).
  • Created AWS AMIs, used HashiCorp Packer to develop and manage them, and contributed a Packer-based method to test new AWS AMIs before promoting them into production.
  • Leveraged Terraform modules/scripts to create custom-sized VPC solutions in AWS with network ACLs, security groups, and public and private network configurations to ensure successful deployments of web applications.
  • Used Kubernetes to automate deployment, scaling, and operation of application containers across clusters of hosts, providing container-centric infrastructure.
  • Integrated the Docker container orchestration framework with Kubernetes by creating pods, ConfigMaps, deployments, ReplicaSets, nodes, etc.
  • Worked extensively on Terraform modules with version conflicts to enable more control or missing capabilities during deployments; managed different infrastructure resources, such as physical machines, VMs, and Docker containers, using Terraform and the CDK.
  • Worked with Python scripting, defined tasks in YAML format, and ran Ansible scripts to provision servers.
  • Managed artifacts generated by Jenkins and created deployment and build scripts and automated solutions using Python and the AWS CDK.
  • Built scalable Docker infrastructure for microservices on ECS Fargate (AWS Elastic Container Service) by creating task-definition JSON files; worked on Docker container snapshots, attaching to running containers, removing images, managing directory structures, and managing containers in AWS ECS.
  • Wrote Ansible playbooks to manage configurations of AWS nodes and tested the playbooks on AWS instances using Python.
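To illustrate the IAM role/group design work above, the following sketch builds a least-privilege IAM policy document as plain JSON. The bucket name is a made-up placeholder, and this is only one common read-only pattern, not the actual policies used at Starbucks:

```python
import json

def s3_read_policy(bucket: str) -> dict:
    """IAM policy document granting read-only access to one S3 bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Listing needs the bucket ARN itself...
                "Sid": "ListBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {   # ...while object reads need the /* object ARN.
                "Sid": "ReadObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }

print(json.dumps(s3_read_policy("example-artifacts"), indent=2))
```

A document like this can be attached to a role or group via the console, Boto3's `iam` client, or Terraform's `aws_iam_policy` resource.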

DevOps/Cloud Engineer

Kaiser Permanente
January 2017 - October 2019
  • Worked on a hybrid cloud involving AWS and Microsoft Azure, configuring virtual machines, storage accounts, and resource groups; logged in to virtual machines remotely to troubleshoot, monitor, and deploy apps.
  • Set up the automation environment for application teams as needed and helped them through the build and release automation process.
  • Used Maven as a build tool on Java projects to produce build artifacts from the source code.
  • Created virtual machines via the pipeline using Terraform and handled remote login to virtual machines to troubleshoot, monitor, and deploy applications.
  • Designed and implemented virtual networks with network security groups and deployed them using Terraform and CloudFormation.
  • Scaled applications and created high availability by spreading incoming requests across VMs, deploying virtual machines and instances (the cloud services) into secured VNets behind Azure Load Balancers.
  • Migrated virtual machines to Azure Virtual Machines for multiple global business units and implemented API Management modules for public-facing, subscription-based authentication.
  • Created and maintained Continuous Integration (CI) using Azure DevOps across multiple environments to enable an automated, repeatable, agile development process in which teams can safely deploy code many times a day, while supporting Azure Kubernetes Service (AKS).
  • Maintained the CI/CD architecture using Azure DevOps/VSTS; created Azure Automation assets, graphical runbooks, and PowerShell runbooks to automate specific tasks; deployed Azure AD Connect and configured the Active Directory Federation Services (ADFS) authentication flow, installing ADFS via Azure AD Connect.
  • Performed ELK configuration using Terraform and monitored ELK and other alerts on the portal, designing the ELK infrastructure and deployments via our CI/CD pipelines.
  • Worked extensively on Terraform modules with version conflicts to enable more control or missing capabilities during deployments; managed different infrastructure resources, such as physical machines, VMs, and Docker containers, using Terraform.
  • Created clusters using Kubernetes and worked on creating pods, replication controllers, services, deployments, labels, health checks, and ingress by writing YAML files.
  • Wrote Ansible playbooks, using Python SSH as a wrapper, to manage configurations, and tested the playbooks.
  • Deployed Docker containers on Kubernetes and administered the creation of pods, ConfigMaps, and deployments.
  • Configured and managed source code using Git, resolved merge conflicts in collaboration with application developers, and provided a consistent environment; implemented Continuous Integration using VSTS.
  • Engineered Splunk to build, configure, and maintain heterogeneous environments; in-depth knowledge of analyzing logs generated by various systems, including security products.
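The Kubernetes YAML work described above (deployments, labels, health checks) follows a fixed manifest shape. As a hedged sketch, the structure can be generated programmatically; the app name, image, port, and probe path below are placeholder assumptions (emitted as JSON, which Kubernetes also accepts, to stay stdlib-only):

```python
import json

def deployment(name: str, image: str, replicas: int = 2, port: int = 8080) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # Selector labels must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                        # Health check the kubelet uses to restart unhealthy pods.
                        "livenessProbe": {"httpGet": {"path": "/healthz", "port": port}},
                    }]
                },
            },
        },
    }

print(json.dumps(deployment("orders", "registry.example.com/orders:1.0"), indent=2))
```

Piping this output to `kubectl apply -f -` would create the Deployment; in practice the same shape is usually hand-written as YAML or templated with Helm.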

AWS DevOps Engineer

TD Bank
May 2015 - July 2017
  • Designed, deployed, and maintained the Puppet system.
  • Wrote, tested, and deployed the necessary Puppet modules.
  • Designed, deployed, and maintained the Check_MK monitoring system (Nagios) and a Puppet module for agent deployment.
  • Worked extensively with AWS EC2, Redshift, Route 53, VPC, etc.
  • Built and deployed Docker containers.
  • Worked with Docker Swarm to ensure redundancy in our product.
  • Built a dev network.
  • Assisted the team, wherever possible, in developing a news CMS.
  • Built a DNS system.
  • Built out an in-house VMware system.
  • Built a Docker registry for in-house use.
  • Worked with GitHub.
  • Built an ELK stack and a Puppet module for deployment of the agent.
  • Documented all deployed systems in Jira Confluence.
  • Worked with Jira.
  • Worked with MySQL and Tableau.
  • Worked on moving the analytics team onto Amazon RDS (MySQL) and Redshift.
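The Check_MK/Nagios monitoring work above relies on a simple plugin contract: a check emits one status line and exits with 0 (OK), 1 (WARNING), or 2 (CRITICAL). A minimal, hypothetical disk-usage check (thresholds are made up) looks like:

```python
# Nagios/Check_MK plugin exit-code convention: 0=OK, 1=WARNING, 2=CRITICAL.
OK, WARNING, CRITICAL = 0, 1, 2

def check_disk_usage(percent_used: float, warn: float = 80.0, crit: float = 90.0):
    """Return (exit_code, status_line) in the Nagios plugin style."""
    if percent_used >= crit:
        return CRITICAL, f"CRITICAL - disk {percent_used:.0f}% used"
    if percent_used >= warn:
        return WARNING, f"WARNING - disk {percent_used:.0f}% used"
    return OK, f"OK - disk {percent_used:.0f}% used"

print(check_disk_usage(85))  # (1, 'WARNING - disk 85% used')
```

A real plugin would measure the filesystem (e.g. `shutil.disk_usage`), print the status line, and call `sys.exit(code)` so the monitoring server can interpret the result.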

DevOps Engineer

May 2014 - April 2015
  • Assisted in migrating existing data centers into AWS instances and was responsible for daily builds and deployments in pre-production and production environments; was introduced to IaaS, PaaS, and SaaS culture and worked on AWS platforms.
  • Defined AWS security groups, which acted as virtual firewalls controlling traffic to one or more AWS EC2 instances, and configured networking for the Virtual Private Cloud (VPC).
  • Worked on multiple AWS instances and set up security groups, Elastic Load Balancers, AMIs, and auto-scaling to design cost-effective, fault-tolerant, and highly available systems.
  • Created S3 buckets, managed policies for them, and used S3 and Glacier for storage and backup.
  • Created scripts in Python to automate rotation of multiple logs from web servers; worked on bootstrapping instances using Chef and integrating with auto-scaling.
  • Developed CloudFormation scripts to automate EC2 instances; created CloudFormation templates and deployed AWS resources with them.
  • Created CloudWatch alerts for instances and used them in auto-scaling launch configurations.
  • Set up and built AWS infrastructure for various resources (VPC, EC2, S3, IAM, EBS, security groups, auto-scaling, and RDS) in CloudFormation JSON templates.
  • Automated backups with shell scripts on Linux to transfer data to S3 buckets, and designed and implemented scalable, secure cloud architecture based on Amazon Web Services.
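The CloudFormation JSON templates mentioned above pair a `Resources` map with typed resource blocks that reference each other. As an illustrative sketch (the AMI id, instance type, and ingress rule are placeholder assumptions, not a real template), one can be generated as plain JSON:

```python
import json

def ec2_template(ami_id: str = "ami-12345678", instance_type: str = "t3.micro") -> dict:
    """Minimal CloudFormation template: one security group plus one EC2 instance."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebSecurityGroup": {
                "Type": "AWS::EC2::SecurityGroup",
                "Properties": {
                    "GroupDescription": "Allow inbound HTTPS",
                    "SecurityGroupIngress": [
                        {"IpProtocol": "tcp", "FromPort": 443,
                         "ToPort": 443, "CidrIp": "0.0.0.0/0"}
                    ],
                },
            },
            "WebInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": ami_id,
                    "InstanceType": instance_type,
                    # Ref ties the instance to the group defined above.
                    "SecurityGroupIds": [{"Ref": "WebSecurityGroup"}],
                },
            },
        },
    }

print(json.dumps(ec2_template(), indent=2))
```

Saving the output and running `aws cloudformation deploy` (or creating a stack in the console) would provision both resources as one unit.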

Education

New York University

Master's