
Veerasekhara Dasandam
Development
ANDHRA PRADESH, India
Skills
Data Engineering
About
Veerasekhara Dasandam's skills align with System Developers and Analysts (Information and Communication Technology). Veerasekhara also has skills associated with Programmers (Information and Communication Technology). Veerasekhara Dasandam has 12 years of work experience, including 2 years of management experience in a junior-level management position.
Work Experience
Lead Data Engineer
Wells Fargo
September 2022 - Present
- Consumer data is read from different channels (Kafka topics and raw data files sent by the mainframe) and ingested into S3 buckets. Different transformations are applied to generate consolidated consumer payment-analytics feature data, which is consumed by an AI alert-management system.
Responsibilities:
- Responsible for building scalable, distributed data solutions with Apache Spark using Python/Scala.
- Fulfilled growing business needs for analyzing operational data sets in near real time using Spark Streaming.
- Designed a real-time data-import method as a producer that pulls data from the sources via API into Kafka clusters.
- Met management goals by implementing data-processing algorithms and business logic in Spark-related technologies.
- Developed Sqoop jobs to extract RDBMS, mainframe, and Informatica data and load it into S3 tables (data lake).
- Implemented partitioning in S3.
- Loaded and transformed large sets of structured and semi-structured data.
- Developed Airflow workflows orchestrating Spark jobs and Linux scripts.
- Designed and developed UDFs in Java to improve Hive query performance.
- Set up and maintained the continuous-integration server and live production server using Jenkins.
- Developed a Python automation framework and scripts for data integrity and validation.
- Strong problem-solving experience: testing, identifying, addressing, debugging, and resolving technical issues that affect the integrity of the application.
- Worked in agile environments and communicated effectively at all levels of the organization in management and technical roles.
- Tools used: HDFS, MapReduce, Hive, Spark, Kafka, Sqoop, Airflow, Azure Data Factory, Data Lake, and Databricks.
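The data-integrity validation scripts mentioned above can be sketched in plain Python. The check names, record shapes, and rules here are illustrative assumptions, not the actual production framework:

```python
def validate_batch(source_rows, target_rows, required_fields):
    """Illustrative data-integrity checks: row-count reconciliation
    between a source extract and the loaded target, plus
    required-field null checks (hypothetical rules)."""
    issues = []
    # Reconcile row counts between source and target
    if len(source_rows) != len(target_rows):
        issues.append(
            f"row count mismatch: source={len(source_rows)} target={len(target_rows)}"
        )
    # Check that required fields are populated in the loaded data
    for i, row in enumerate(target_rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing required field '{field}'")
    return issues

source = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.5}]
target = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
problems = validate_batch(source, target, ["id", "amount"])
```

In a real pipeline, checks like these would typically run as a post-load Airflow task before downstream consumers read the data.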
Senior Software Engineer
S&P Global
August 2019 - August 2022
- Real-time pricing data streamed from various exchanges is processed for the platform UI; rollups at different durations are computed and stored in the data center and in Kafka topics for consumption by multiple clients. Data is also synced across multiple data centers, and several processes aggregate the data for the respective UI modules.
Responsibilities:
- Responsible for building scalable, distributed data solutions with Apache Spark using Python/Scala.
- Designed a real-time data-import method as a producer that pulls data from the sources into Kafka clusters, and set up the Kafka clusters.
- Set up and maintained the continuous-integration server and live production server using Jenkins.
- Developed a Python automation framework and scripts for data integrity and validation.
- Strong problem-solving experience: testing, identifying, addressing, debugging, and resolving technical issues that affect the integrity of the application.
- Worked in agile environments and communicated effectively at all levels of the organization in management and technical roles.
- Tools used: NiFi, Spark, Kafka, ZooKeeper, Console Producer, Console Consumer, Elasticsearch, Logstash, Kibana, and Apache Tomcat Server.
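The duration-based rollups described above can be sketched as a simple time-bucketed OHLC aggregation. The bucket size, tick shape, and bar fields are assumptions for illustration, not the actual platform's schema:

```python
from collections import OrderedDict

def rollup(ticks, duration_s):
    """Aggregate (timestamp, price) ticks into fixed-duration OHLC bars.
    Bucket boundaries and the tick format are illustrative assumptions."""
    bars = OrderedDict()
    for ts, price in ticks:
        bucket = ts - (ts % duration_s)  # align to the start of the bar
        if bucket not in bars:
            bars[bucket] = {"open": price, "high": price, "low": price, "close": price}
        else:
            bar = bars[bucket]
            bar["high"] = max(bar["high"], price)
            bar["low"] = min(bar["low"], price)
            bar["close"] = price  # last tick in the bucket wins
    return bars

ticks = [(0, 100.0), (10, 101.5), (55, 99.0), (61, 102.0)]
bars = rollup(ticks, 60)
```

In the streaming system, the same aggregation would run per rollup duration (e.g. 1 min, 5 min), with each result published to its own Kafka topic for downstream clients.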
Senior Associate
Cognizant Technologies
July 2018 - August 2019
- Network Modernization is the Amex initiative to detect anomalies in card transactions through machine learning. It involves processing LoMo logs streamed in real time from various components of the card-payment network and applying machine-learning models to identify anomalous transactions within seconds.
Responsibilities:
- Responsible for building scalable, distributed data solutions with Apache Spark using Python/Scala.
- Designed a real-time data-import method as a producer that pulls data from the sources into Kafka clusters, and set up the Kafka clusters.
- Set up and maintained the continuous-integration server and live production server using Jenkins.
- Developed a Python automation framework and scripts for data integrity and validation.
- Strong problem-solving experience: testing, identifying, addressing, debugging, and resolving technical issues that affect the integrity of the application.
- Worked in agile environments and communicated effectively at all levels of the organization in management and technical roles.
- Tools used: NiFi, Spark, Kafka, ZooKeeper, Console Producer, Console Consumer, Elasticsearch, Logstash, Kibana, and Apache Tomcat Server.
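The anomaly-detection step above used machine-learning models; as a stand-in, a minimal statistical rule conveys the idea of flagging outlying transactions in a stream. The z-score threshold and transaction shape are purely illustrative assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=3.0):
    """Flag transaction amounts that deviate strongly from the mean.
    A simple z-score rule standing in for the real ML models, which
    this sketch does not attempt to reproduce."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [
        i for i, a in enumerate(amounts)
        if sigma > 0 and abs(a - mu) / sigma > z_threshold
    ]

amounts = [20.0, 22.0, 19.5, 21.0, 500.0]
suspicious = flag_anomalies(amounts, z_threshold=1.5)
```

In production, a trained model scores each transaction as it arrives off the stream; the seconds-level latency comes from scoring events individually rather than in batch.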
Senior Software Engineer
NCR Corporation
September 2015 - April 2018
- Consumer purchase data is ingested into the Hadoop file system by consuming raw data received over SFTP; user promotions are generated from purchase history by applying different transformations.
Responsibilities:
- Responsible for building scalable, distributed data solutions with Apache Spark using Python/Scala.
- Fulfilled growing business needs for analyzing operational data sets in near real time using Spark Streaming.
- Designed a real-time data-import method as a producer that pulls data from the sources via API into Kafka clusters.
- Met management goals by implementing data-processing algorithms and business logic in Spark-related technologies.
- Developed Sqoop jobs to extract RDBMS and FTP data.
- Implemented partitioning, dynamic partitions, and buckets in Hive.
- Loaded and transformed large sets of structured and semi-structured data.
- Developed Airflow workflows orchestrating Spark jobs, Linux scripts, Hive scripts, and HBase loads.
- Designed and developed UDFs in Java to improve Hive query performance.
- Set up and maintained the continuous-integration server and live production server using Jenkins.
- Developed a Python automation framework and scripts for data integrity and validation.
- Strong problem-solving experience: testing, identifying, addressing, debugging, and resolving technical issues that affect the integrity of the application.
- Worked in agile environments and communicated effectively at all levels of the organization in management and technical roles.
- Tools used: HDFS, MapReduce, Hive, Spark, Kafka, Sqoop, Airflow, HBase, EMR, EC2, S3, AWS Glue, and AWS Lambda.
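The promotion-generation transformation above can be sketched as a simple grouping over purchase history. The category names and promotion mappings are hypothetical examples, not the actual business rules:

```python
from collections import Counter

def top_category_promotions(purchases, promo_by_category):
    """For each user, find the most frequently purchased category and
    map it to a promotion. The category/promo mapping is illustrative."""
    by_user = {}
    for user, category in purchases:
        by_user.setdefault(user, Counter())[category] += 1
    promos = {}
    for user, counts in by_user.items():
        top, _ = counts.most_common(1)[0]   # user's dominant category
        if top in promo_by_category:
            promos[user] = promo_by_category[top]
    return promos

purchases = [("u1", "dairy"), ("u1", "dairy"), ("u1", "produce"), ("u2", "bakery")]
promos = top_category_promotions(
    purchases, {"dairy": "10% off milk", "bakery": "free roll"}
)
```

At cluster scale, the same grouping would be expressed as a Spark `groupBy` over the ingested purchase data rather than an in-memory dictionary.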
Software Engineer
KeyPoint Technologies (India)
November 2011 - August 2015
- Development languages: C, C++, VC++; plus data structures, Linux, and XML.
ATR feature implementation for ADAPTXT: ATR (Automatic Text Replacement) lets users define their own shortcut words and corresponding expansions. The ADAPTXT core engine gives suggestions based on unigram frequency; whenever the user taps a shortcut, it is automatically replaced with its expansion.
Google N-Gram update for ADAPTXT: updated the ADAPTXT dictionary and context based on Google N-Gram data.
Developer testing and enhancement of ADAPTXT tools: ADAPTXT is next-generation language software that provides text-entry assistance on QWERTY and 12-key mobile devices.
Mixed capitalization and SMS conversion features:
- Mixed capitalization: implemented the mixed-capitalization suggestion feature and created a tool for extracting mixed-caps words.
- SMS conversion: implemented SMS-words-to-phrase and phrase-to-SMS-format conversion algorithms.
Responsibilities:
- Architecture design
- C++ coding, API implementation, and JNI-layer implementation
- Unit and integration testing for each feature
- Bug fixing and code enhancement
- Environment: C++
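The core ATR behavior described above (shortcut tapped, expansion substituted) can be sketched in a few lines. The actual feature was implemented in C++ inside the ADAPTXT engine; this Python version and its sample shortcuts are purely illustrative:

```python
def apply_atr(text, shortcuts):
    """Automatic Text Replacement sketch: replace user-defined shortcut
    words with their expansions on whole-word boundaries. Illustrative
    only; the real implementation was C++ in the ADAPTXT engine."""
    words = text.split(" ")
    # Substitute each word that matches a registered shortcut
    return " ".join(shortcuts.get(w, w) for w in words)

shortcuts = {"brb": "be right back", "omw": "on my way"}
result = apply_atr("omw now brb", shortcuts)
```

The lookup operates on whole words so that a shortcut embedded inside another word (e.g. "brbx") is left untouched, matching the tap-to-replace behavior described above.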