Darshan Reddy

Development
Texas, United States

Skills

Cloud Computing

About

Darshan M Reddy's skills align with Programmers (Information and Communication Technology), and he also has skills associated with Consultants and Specialists (Information and Communication Technology). He has 9 years of work experience.

Work Experience

Infrastructure Site Reliability Engineer

Hilton
August 2022 - March 2023
  • Responsibilities:
    * Worked on various AWS services including EC2, Auto Scaling, S3, Elastic Load Balancing, Route53, AWS Aurora, RDS, VPC, CloudWatch, IAM, EKS, EMR, and Lambda.
    * Worked with GitHub repositories as SCM, integrated with Jenkins pipeline jobs triggered via webhooks.
    * Configured Jenkins jobs and maintained agents to manage CI/CD pipelines.
    * Used the Maven build plugin for Java projects and managed artifacts in JFrog Artifactory.
    * Enabled registry scanning to detect vulnerabilities in Docker images in production environments.
    * Automated OS patching and user management tasks on instances with Ansible playbooks.
    * Wrote Terraform modules to create and modify infrastructure on AWS.
    * Wrote Dockerfiles as required, built Docker images, and pushed them to Docker Hub.
    * Wrote Kubernetes manifest files for Deployments, Pods, Secrets, and Services.
    * Worked with Harness and Spinnaker for CI/CD in a few environments.
    * Deployed application versions to Kubernetes (K8s) using Blue/Green and Rolling deployment strategies, depending on the environment.
    * Deployed applications to EKS (AWS-managed Kubernetes) and maintained workload high availability by creating Deployments and ReplicaSets for the production environment.
    * Worked with the Boto3 SDK (Python) in Lambda functions to optimize AWS billing by writing scripts that auto start and stop EC2 instances based on environment (see the sketch after this list).
    * Implemented and managed an Istio service mesh for traffic routing and load balancing.
    * Configured the Istio controller and Envoy proxies on Kubernetes to expose services and route traffic.
    * Automated building and deploying new application releases on K8s using Argo CD.
    * Wrote Groovy and shell scripts to automate the build process and administration jobs.
    * Used the Helm package manager to configure and deploy reusable K8s manifest files as Helm charts.
    * Installed, configured, and maintained Apache/Nginx web servers on Ubuntu instances.
    * Set up monitoring and logging with the ELK stack (Elasticsearch, Logstash, Kibana) to track application performance and health and to troubleshoot issues.
    * Built a POC for writing custom resources and K8s Operators in Golang.
    * Administered the Splunk server and Splunk forwarders, indexing application logs and events.
    * Monitored production errors and provided RCAs by analyzing Splunk logs.
    * Set up Datadog monitoring and alerting across servers and various AWS services.
    * Participated in on-call rotations to maintain system reliability and resolve critical issues.
    * Identified root causes of recurring problems and implemented effective solutions, reducing ticket inventory by 30%.
    * Implemented Kubernetes observability with Prometheus and Grafana for monitoring and alerting.
    * Worked on software release projects using the Agile approach, with Jira for task tracking.
  • Environment: AWS, Git, Maven, Jenkins CI/CD, Docker, containers, Kubernetes, Jira, Confluence, Apache Tomcat, Python/Shell scripts, Windows, Ubuntu, JFrog, SonarQube, Ansible, Terraform, SRE
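One bullet above mentions Boto3 scripts in Lambda that start and stop EC2 instances per environment to cut AWS costs. A minimal sketch of what such a handler could look like, assuming an "Environment" tag and an "action" field supplied by a scheduled trigger (both illustrative assumptions, not taken from the résumé):

```python
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # "action" ("start" or "stop") is assumed to arrive from an EventBridge
    # schedule; the field name is an assumption for this sketch.
    action = event.get("action", "stop")
    # Find instances tagged for the target environment (tag key assumed).
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Environment", "Values": ["dev"]}]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if not instance_ids:
        return {"message": "no matching instances"}
    if action == "start":
        ec2.start_instances(InstanceIds=instance_ids)
    else:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"action": action, "instances": instance_ids}
```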

Sr DevOps Engineer

GE Healthcare
January 2019 - August 2021
  • Responsibilities:
    * Deployed Azure virtual machines into a secure network (VNets and subnets).
    * Created users, groups, roles, and policies to provide access to resources using Azure AD.
    * Created Azure Blob storage for object storage and archival of database backups, configuring permissions and versioning as per requirements.
    * Used Azure Repos (ADO) for branching, tagging, and release activities in version control.
    * Implemented the CI/CD process using Azure Pipelines, Maven, SonarQube, and Azure Artifacts.
    * Pushed Docker images to Docker Hub for deploying containers to the Kubernetes (AKS) cluster.
    * Wrote manifest files for containerized applications, creating nodes, ReplicaSets, Deployments, and Services to deploy application containers as Pods on the Kubernetes cluster (see the sketch after this list).
    * Wrote and modified Ansible playbooks as required to automate routine server administration tasks.
    * Integrated Terraform modules into Azure Pipelines for consistent deployments of cloud resources.
    * Created Azure dashboards to track build and pipeline status and configured alerts for failures.
    * Configured Azure Monitor to track application performance and resource utilization, and integrated alerts with the Jira ticketing tool.
    * Provided weekend on-call support for releases, updates, and troubleshooting.
    * Implemented multiple software release plans for numerous applications using the Agile methodology.
  • Environment: Azure DevOps, Azure Repos, Docker, Kubernetes, SonarQube, Ansible, Azure Pipelines, Azure Boards, Terraform, Shell scripts, Python, Unix/Linux environment, Azure dashboards, Monitoring.
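The manifest-writing bullet above refers to YAML files; purely as an illustration in Python, the official Kubernetes client can express the same kind of Deployment object. The names, image, and replica count are placeholder assumptions:

```python
from kubernetes import client, config

def create_deployment():
    config.load_kube_config()  # use the local kubeconfig (e.g. for an AKS cluster)
    container = client.V1Container(
        name="web",
        image="example.azurecr.io/web:1.0",  # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=3,  # assumed replica count
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=spec,
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

if __name__ == "__main__":
    create_deployment()
```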

AWS Cloud Operations Engineer

Flipkart
August 2016 - December 2019
  • Responsibilities:
    * Deployed and managed applications running in production with zero downtime on AWS.
    * Hands-on experience with various AWS services: EC2, Auto Scaling, S3, Elastic Beanstalk, Elastic Load Balancing, Route53, AWS Aurora, RDS, DynamoDB, VPC, CloudWatch, IAM, Lambda, and ECS.
    * Used AWS S3 to store database and log file backups.
    * Worked with CloudFormation to spin up infrastructure.
    * Managed VPC security groups, subnets, and peering connections.
    * Worked with AWS ELB and Auto Scaling groups to load balance the application in production.
    * Created Jenkins pipeline jobs to automate the build process using Maven and SonarQube.
    * Built Docker images by writing Dockerfiles as required and pushed them to ECR.
    * Ran a POC with AWS ECS, deploying containers on ECS from images fetched from ECR.
    * Deployed web applications using the AWS Elastic Beanstalk PaaS service.
    * Created Ansible playbooks to manage web servers, configuration files, and user creation.
    * Configured CloudWatch alerts and dashboards based on logging.
    * Configured resource alerts for CPU, memory, and disk space utilization using Site24x7.
    * Set up basic monitoring and alerting using CloudWatch and SNS.
    * Used AWS Lambda for periodic DB backups, restorations, and deletion of older backup files from S3 (see the sketch after this list).
    * Migrated an existing on-premises application to Amazon Web Services.
    * Used Terraform as IaC to build the disaster recovery environment.
    * Used Jira for project tracking, updating daily tasks performed.
  • Environment: AWS EC2, S3, Elastic Beanstalk, Site24x7, MySQL, RDS, Jira, Elastic File System, Git, SNS, Shell scripting, Terraform, Docker, Jenkins.
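The Lambda housekeeping bullet above mentions deleting older backup files from S3. A hedged sketch of that cleanup logic with Boto3; the bucket name, key prefix, and 30-day retention window are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "example-db-backups"   # placeholder bucket name
PREFIX = "mysql/"               # placeholder key prefix
RETENTION = timedelta(days=30)  # assumed retention window

def delete_old_backups():
    s3 = boto3.client("s3")
    cutoff = datetime.now(timezone.utc) - RETENTION
    # Paginate so buckets with more than 1000 backup objects are handled.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            # LastModified is timezone-aware UTC, so it compares cleanly.
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])

if __name__ == "__main__":
    delete_old_backups()
```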

Storage Administrator

HCL Technologies
August 2014 - August 2017
  • Responsibilities:
    * Supported an environment with Hitachi SAN and NAS arrays (VSP, HNAS, HUS) and NetApp across 2 data centers and 20 remote sites.
    * Created Dynamic Provisioning and Tiering pools and host groups, assigned LUNs to host groups, and extended existing LUNs as required.
    * Generated performance reports/graphs as required at the LUN and host level.
    * Established replication between primary and secondary data centers using TrueCopy and HUR.
    * Migrated data between pools using Hitachi Tiered Storage Manager.
    * Performed successful DR tests to ensure customer data is redundant at the data center level.
    * Created and managed Copy-on-Write snapshots and ShadowImage replication.
    * Created CIFS and NFS shares using HNAS.
    * Set up and configured storage array alerts with the tools team through SNMP/SMTP.
    * Created zones when adding new hosts to the environment.
    * Performed code upgrades on DCX-8510 Brocade switches.
    * Worked on a project to migrate data from one data center to other data centers.

Backup Administrator

BNY Mellon
May 2014 - July 2016
  • Responsibilities:
    * Added new clients (physical/VM) to the existing backup environment.
    * Installed backup agents (SQL, Oracle, AD) on clients to perform agent-based backups.
    * Configured cluster-based backups for clients.
    * Created storage policies per each client's data retention requirements.
    * Created schedule policies and associated them with clients.
    * Installed Media Agents and configured VMs through vCenter for backup in the environment.
    * Added disk libraries, VTL, and tape libraries to Commvault and performed backups to them.
    * Restored physical clients, including the OS, after server crashes using the 1-Touch recovery feature.
    * Reviewed daily backup reports and performed troubleshooting steps according to errors (see the sketch after this list).
    * Restored files, folders, VMs, and databases as required.
    * Configured OnePass, the Commvault backup and archive solution.
    * Performed hotfix and service pack upgrades for the CommCell environment.
    * Configured daily and monthly reports on the backup SLA using the Web Console.
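The daily-report review above is normally done in the Commvault console; as a hypothetical illustration only, a small Python helper could flag failed jobs from an exported CSV report. The column names ("Client", "Status", "FailureReason") are assumptions, not Commvault's actual export format:

```python
import csv
import sys

def failed_jobs(report_path):
    # Yield rows whose backup job did not complete successfully.
    with open(report_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("Status", "").strip().lower() != "completed":
                yield row

if __name__ == "__main__":
    # Usage: python check_backups.py daily_report.csv
    for row in failed_jobs(sys.argv[1]):
        print(f'{row.get("Client")}: {row.get("Status")} - {row.get("FailureReason")}')
```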

Senior DevOps Engineer

Caterpillar
April 2023 - Present
  • Responsibilities:
    * Implement and configure various Azure services: VMs, Disks, Blob, AD, VNet, Azure DevOps, DNS, and ALB.
    * Responsible for creating server and cloud infrastructure in the Azure IaaS environment, driven by CI/CD pipelines using an Infrastructure as Code (IaC) approach with Terraform.
    * Assist with deployment activities in Dev/Test/pre-prod/production environments.
    * Use configuration management tools like Ansible and Azure Operational Insights to automate tasks.
    * Set up CI/CD jobs in GitLab for code quality analysis, integrating SonarQube and Qualys to scan container images and source code and remediate bugs and vulnerabilities.
    * Integrated SonarQube, Fortify, and Checkmarx into CI pipeline jobs to scan source code for SAST/DAST analysis and deserialization vulnerabilities in Java-based applications.
    * Troubleshoot failed pipeline jobs during the build and deploy process by collaborating with developers.
    * Write Python scripts to perform log analysis, parse Splunk logs, and create dashboards (see the sketch after this list).
    * Use Python scripts to monitor the resource utilization of containers on AKS clusters.
    * Implemented Python scripts in CI/CD to automate tag creation and updates of K8s manifest files.
    * Use monitoring and logging tools such as Azure Monitor, Splunk, Prometheus, and Grafana to monitor and troubleshoot Azure infrastructure and container-based microservice applications.
    * Create and maintain Kubernetes (AKS) node clusters on Azure, performing upgrades.
    * Created Kubernetes resources such as Pods, Deployments, Services, Jobs, Secrets, and ConfigMaps.
    * Use Argo CD controllers to manage Kubernetes resources and deployments of the latest versions.
    * Customize the lifecycle management of Kubernetes operators such as Prometheus and Argo CD using Golang.
    * Implemented CRDs in Golang to extend the Kubernetes API for managing specialized resources.
    * Integrated custom API server extensions to provide specialized features for the platform.
    * Configured an Ingress controller and Istio service mesh, using the Helm package manager to write redeployable YAML manifest files/Helm charts for different environments.
    * Use Bash/PowerShell scripts to automate configuration-related tasks.
    * Track changes/tickets in the project using Jira, following the Agile methodology.
  • Environment: GitLab, Docker, Kubernetes, SonarQube, Fortify, Qualys, Ansible, Azure Boards, Terraform, Shell scripts, Python, Unix/Linux environment, Azure dashboards, Splunk monitoring.
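The log-analysis bullet above mentions Python scripts that parse Splunk logs. A minimal sketch that counts errors per service from an exported log file; the log line format is an assumption for illustration:

```python
import re
import sys
from collections import Counter

# Assumed line format: "2024-01-15T10:00:00Z ERROR payments Connection refused"
LINE_RE = re.compile(r"^\S+\s+(?P<level>\w+)\s+(?P<service>\S+)\s+(?P<msg>.*)$")

def error_counts(lines):
    # Tally ERROR-level lines by the service that emitted them.
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

if __name__ == "__main__":
    # Usage: python error_summary.py exported_logs.txt
    with open(sys.argv[1]) as fh:
        for service, count in error_counts(fh).most_common():
            print(f"{service}: {count} errors")
```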

Education

Texas Tech University

Master of Science in Computer Science
January 2021 - January 2022

VTU

Bachelor of Engineering
January 2010 - January 2014