
Thamizhvanan Lakshmanan
Development
Michigan, United States
Skills
Cloud Computing
About
Thamizhvanan Lakshmanan's skills align with System Developers and Analysts (Information and Communication Technology), and he also has skills associated with Programmers (Information and Communication Technology). He has 15 years of work experience, including 3 years of management experience, beginning in a low-level management position.
Work Experience
Software Engineer III
PayPal
October 2021 - Present
- Environment: Ansible, Git/GitHub, Jenkins, Docker, Python, GCP/AWS, Dataproc, BigQuery, Bigtable, UC4/Airflow Scheduler, HP, DELL, RHEL, CentOS
Responsibilities:
* Managed 800 servers in a private cloud environment, handling extremely large data sets analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.
* Built a standalone application in GCP to handle the processing that falls under the Zettle application.
* Implemented and managed Jenkins pipelines to automate deployment processes, ensuring seamless and efficient delivery of software updates.
* Monitored Jenkins pipeline executions, promptly identifying and resolving issues to maintain high system availability and performance.
* Integrated Jenkins with other tools and services such as Git, Docker, and Kubernetes, enabling end-to-end automation of the software delivery process.
* Collaborated with DevOps and development teams to integrate Terraform into CI/CD pipelines, enabling automated infrastructure deployment alongside application releases.
* Loaded Hive base tables from the existing on-prem system into BigQuery to eliminate the latency affecting the application (see the sketch after this list).
* Migrated data from Teradata to GCP: data was Sqooped from Teradata to HDFS, then moved to a GCS bucket using DistCp.
* Analyzed the environment to use snowproc for the Venmo application.
* Evaluated functional and non-functional requirements of various application teams (Global Data Insight Analytics) and provided guidance on choosing the right models/solutions.
* Designed and built scalable infrastructure and platforms to collect, process, and analyze very large amounts of structured and unstructured data.
* Led development of a data platform for data processing (data ingestion and data transformation) and a data repository using Big Data technologies from the Hadoop stack, including HDFS clusters, MapReduce, Spark, and Hive.
* Administered Hadoop infrastructure: tuned Hadoop cluster performance, monitored cluster connectivity, and collaborated with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
* Provided analytics solutions development and delivery support to partners, drove continuous improvements, and elevated the client experience; acted as a change agent to facilitate sustainable changes in processes, adopting standard methodologies, operating models, and organizational priorities.
* Automated infrastructure tasks using Python or shell scripting and integrated them with SignalFx to monitor the infrastructure through GUI dashboards.
* Completed Kubernetes-related coursework and online training programs, gaining practical knowledge in cluster setup, configuration, and maintenance.
* Developed and deployed microservices-based applications on Kubernetes clusters, using concepts such as pods, deployments, services, and ingress for efficient orchestration and management.
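The BigQuery load described above could be driven from Python roughly as follows. This is a minimal sketch using the google-cloud-bigquery client library; the project, bucket, dataset, and table names are hypothetical placeholders, and it assumes the Hive data has already been staged in GCS as Parquet (for example via Sqoop and DistCp).

    from google.cloud import bigquery

    # Hypothetical project, bucket, and table names for illustration.
    client = bigquery.Client(project="my-gcp-project")
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    # Load Parquet files staged in GCS into a BigQuery table.
    load_job = client.load_table_from_uri(
        "gs://my-staging-bucket/hive_base_table/*.parquet",
        "my-gcp-project.analytics.hive_base_table",
        job_config=job_config,
    )
    load_job.result()  # block until the load job completes
    print(f"Loaded {load_job.output_rows} rows")

Serving the table from BigQuery rather than the on-prem Hive warehouse is what removes the query latency the bullet refers to; the load itself is a routine batch job.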
Senior System Engineer
Ford Motor Company
June 2020 - September 2021
- Environment: Ansible, Git/GitHub, Jenkins, Docker, Python, GCP/AWS, UC4/Airflow Scheduler, HP, DELL, RHEL, CentOS
Responsibilities:
* Orchestrated and oversaw Jenkins pipelines to automate deployment procedures, guaranteeing the smooth and effective rollout of software updates.
* Monitored Jenkins pipeline executions vigilantly, swiftly identifying and remedying issues to uphold optimal system availability and performance.
* Interfaced Jenkins with various tools and services including Git, Docker, and Kubernetes, facilitating comprehensive automation of the software delivery workflow.
* Partnered with DevOps and development teams to incorporate Terraform into CI/CD pipelines, enabling automated infrastructure deployment in tandem with application releases.
* Led a 20-member team and provided the sun model at Ford Motor Company.
* Developed proof-of-concept projects to validate new architectures and solutions.
* Designed and built scalable infrastructure and platforms to collect, process, and analyze very large amounts of structured and unstructured data.
* Created technical design documents to communicate solutions to be implemented by the development team.
* Collaborated and engaged with the BI & Analytics team.
* Collaborated with global customers (North America / Asia Pacific / Australia / Europe) and provided administrative support.
* Designed and supported development of a data platform for data processing (data ingestion and data transformation) and a data repository using Big Data technologies from the Hadoop stack, including HDFS clusters, MapReduce, Spark, Hive, and HBase.
* Collaborated with application teams to install operating system and Hadoop updates, patches, and version upgrades.
* Collaborated with Cloudera engineers on engineering solutions and troubleshooting complex issues.
* Maintained, enhanced, implemented bug fixes for, and supported the data solutions.
* Proactively monitored events, investigated issues, analyzed solutions, and drove problems through to resolution (a monitoring sketch follows this list).
* Brought an innovative mindset to the team, implemented innovative solutions, and reduced manual intervention.
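Proactive monitoring of the kind described above is commonly scripted. The sketch below shells out to the standard `hdfs dfsadmin -report` command and parses the dead-datanode count; it assumes the report contains a line of the form "Dead datanodes (N):", which can vary slightly across Hadoop releases.

    import subprocess

    def dead_datanodes() -> int:
        """Parse `hdfs dfsadmin -report` and return the dead-datanode count."""
        report = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in report.splitlines():
            # Assumed report format: "Dead datanodes (2):"
            if line.startswith("Dead datanodes"):
                return int(line.split("(")[1].split(")")[0])
        return 0

    if __name__ == "__main__":
        count = dead_datanodes()
        if count:
            print(f"ALERT: {count} dead datanode(s) detected")

A check like this would typically run from cron or a scheduler and feed an alerting channel rather than print to stdout.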
Tech Lead
Ford Motor Company
October 2017 - June 2020
- Environment: HDFS, Hive, Spark, PySpark, Git/GitHub, Oozie, ZooKeeper, HP, DELL, RHEL, SUSE
Responsibilities:
* Implemented various phases of the project, including system integration and Big Data technologies across the Hadoop ecosystem: HDFS, MapReduce, YARN, Pig, Hive, Oozie, HBase, Sqoop, and ZooKeeper.
* Administered, installed, configured, supported, and maintained Hadoop clusters using Hortonworks.
* Worked hands-on with major Hadoop ecosystem components including HDFS, YARN, Hive, ZooKeeper, and Oozie; optimized MapReduce, Spark, and Hive job configurations for better performance.
* Set up, configured, and monitored Ambari and Hadoop clusters on Hortonworks.
* Performed Ambari and HDP upgrades after each new patch release (HDP 2.6.4 at the time).
* Designed, maintained, and supported Big Data analytics using Hadoop ecosystem components such as HDFS, Hive, HBase, Sqoop, MapReduce, and Oozie.
* Built extensive knowledge of the MapReduce/HDFS framework.
* Commissioned and decommissioned Hadoop cluster nodes, including balancing HDFS block data.
* Managed, scheduled, and troubleshot jobs through Oozie at an expert level.
* Analyzed log files for Hadoop and ecosystem services to find root causes.
* Implemented NameNode high availability and performed Hadoop cluster capacity planning to add and remove nodes.
* Installed and configured Kerberos for authentication of users and Hadoop daemons.
* Installed and configured Hive, its services, and the Metastore; worked with Hive Query Language and table operations such as importing data, altering tables, and dropping tables.
* Performed Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.
* Helped application teams import and export their databases using Sqoop and troubleshot their issues.
* Created collections for Solr.
* Worked with Hadoop developers and designers to troubleshoot MapReduce jobs running through Oozie and Tez, and Spark jobs running through PySpark.
* Managed and reviewed data backups and log files; performed maintenance cleanup of old log files.
* Created, configured, and managed access for Kafka topics.
* Administered component accessibility via Ranger; created and managed policies.
* Installed, configured, and managed Red Hat Linux.
* Handled hardware issues, patch management, and firmware upgrades in the cluster.
* Automated daily activities with shell scripting.
* Monitored the Hadoop cluster using Grafana and Ambari Metrics (see the sketch after this list).
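Ambari exposes cluster state over its REST API, so monitoring like that described above can also be polled from a script. Below is a minimal sketch against Ambari's /api/v1 endpoints; the host, cluster name, and credentials are hypothetical placeholders and should come from a secrets store in practice.

    import requests

    AMBARI = "http://ambari-host.example.com:8080"  # hypothetical host
    CLUSTER = "prod_hdp"                            # hypothetical cluster name
    AUTH = ("admin", "admin")                       # placeholder credentials

    def service_state(service: str) -> str:
        """Return the Ambari-reported state (e.g. STARTED) for a service."""
        resp = requests.get(
            f"{AMBARI}/api/v1/clusters/{CLUSTER}/services/{service}",
            auth=AUTH,
            headers={"X-Requested-By": "ambari"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["ServiceInfo"]["state"]

    for svc in ("HDFS", "YARN", "HIVE", "ZOOKEEPER", "OOZIE"):
        print(f"{svc}: {service_state(svc)}")

The same endpoints back the Ambari UI itself, which is why a lightweight poller like this tends to agree with what operators see in the dashboard.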
Senior Systems Administrator
Citibank
October 2015 - September 2017
- Wipro Technologies Private Ltd. Environment: AIX, RHEL, Solaris, HP, DELL, IBM Power RS/6000 series
Responsibilities:
* Monitored systems for CPU, memory, and disk utilization using topas, top, sar, vmstat, netstat, iostat, nmon, etc. (a scripted check follows this list).
* Replaced disks on AIX and Linux.
* Provided on-call support.
* Provided backup and restore solutions using NetBackup.
* Performed OS patching on Linux and AIX servers.
* Performed SAN package upgrades.
* Logged and followed up on calls with Red Hat, IBM, and Oracle for server issues.
* Worked with the datacenter team on HP and Dell server hardware, including motherboard, battery, disk, and DIMM replacements.
* Supported cron job schedules for application FIDs on UNIX servers.
* Performed OS migrations from AIX 6.1 to AIX 7.1 and from Linux 5.x to 6.x.
* Added memory and CPU to LPARs through the VIO server.
* Failed over cluster nodes to other nodes on request in VCS and PowerHA.
* Took consoles via iLO, IMM, and HMC.
* Managed memory and CPU allocation through profile activation via the HMC on AIX servers.
* Took consoles of VM servers using vCenter to resolve server access and unreachability issues.
* Configured EtherChannel on LPARs.
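Routine utilization checks like these are easy to script alongside tools such as vmstat and nmon. Below is a minimal Python sketch using only the standard library; the mount-point list and the 90% threshold are illustrative assumptions, not values from the original role.

    import os
    import shutil

    def check_host(disk_threshold: float = 0.90) -> list:
        """Return warnings for high load average or nearly-full filesystems."""
        warnings = []
        # 1-, 5-, and 15-minute load averages (Unix only).
        load1, _, _ = os.getloadavg()
        if load1 > (os.cpu_count() or 1):
            warnings.append(f"high 1-minute load average: {load1:.2f}")
        for mount in ("/", "/var", "/home"):  # hypothetical mount list
            usage = shutil.disk_usage(mount)
            used = usage.used / usage.total
            if used > disk_threshold:
                warnings.append(f"{mount} is {used:.0%} full")
        return warnings

    if __name__ == "__main__":
        for w in check_host():
            print("WARNING:", w)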
Senior Operations Professional
American Express
August 2009 - September 2015
- Environment: AIX, RHEL, Solaris, HP, DELL, IBM Power RS/6000 series
Responsibilities:
* Performed logical and physical volume management: configured disks and worked with volume groups, logical volumes, and physical volumes (a sketch of a common LVM task follows this list).
* Performed filesystem management, user management, and device management.
* Performed daily health checks on all supported servers and took proactive steps.
* Monitored systems for CPU, memory, and disk utilization using top, vmstat, netstat, iostat, nmon, etc.
* Performed OS patching on Linux and AIX.
* Performed DLPAR operations such as moving resources (memory, CPU, and I/O) from one partition to another.
* Performed IHS configuration and updated sane modules and domain names.
* Deployed applications on WebSphere Application Servers through the NDM server.
* Performed live application maintenance using load-balancing devices such as NetScaler and F5.
* Handled incident, change, and problem management.
* Provided on-call support.
* Supported enterprise environments in coordination with other teams to provide the best solutions within the agreed SLAs.
* Drafted KT plans and provided training for new joiners on the Amex process.
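A recurring volume-management task in roles like this is growing a filesystem that is filling up. The sketch below wraps the standard lvextend and resize2fs commands for a Linux ext filesystem; the volume group and logical volume names are hypothetical placeholders, and AIX would use chfs instead.

    import subprocess

    def extend_filesystem(lv_path: str, grow_by: str = "+2G") -> None:
        """Grow a logical volume and resize its ext filesystem (run as root)."""
        # Extend the logical volume by the requested amount.
        subprocess.run(["lvextend", "-L", grow_by, lv_path], check=True)
        # Grow the ext3/ext4 filesystem to fill the new space.
        subprocess.run(["resize2fs", lv_path], check=True)

    if __name__ == "__main__":
        # Hypothetical volume group and logical volume names.
        extend_filesystem("/dev/datavg/applv")

On recent LVM releases, lvextend's -r flag performs both steps in a single call, which avoids the window where the volume has grown but the filesystem has not.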