
Karl Nguema Ndong

Development
Maryland, United States

Skills

Data Engineering

About

Karl Gaby Nguema Ndong's skills align with System Developers and Analysts (Information and Communication Technology). Karl also has skills associated with Database Specialists (Information and Communication Technology). Karl Gaby Nguema Ndong has 7 years of work experience.

Work Experience

Data Engineer

Data Endeavors
May 2023 - Present
  • Managed the technical development of big data applications, fully leveraged data to make fact-driven decisions, and collaborated with business stakeholders and other experts to build and optimize data pipelines.
  • Provided thought leadership and consultative support for all data and software activities within data engineering.
  • Supported the team in solving complex problems involving multiple facets, variables, and situations.
  • Involved in end-to-end project management activities, from requirement gathering to closure and monitoring.
  • Worked with PySpark RDDs and DataFrames for data transformation.
  • Implemented multiple data transformations using PySpark functions: multiple joins; window functions (rank, dense_rank, row_number, lag, lead); and groupBy, filter, orderBy, distinct, union, and withColumn.
  • Staged multiple datasets for the project in S3 to be consumed in Athena.
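The window functions named above (rank, dense_rank, row_number, lag, lead) can be sketched in a small self-contained example. This is an illustration only, not code from the projects described: it uses standard-library SQLite in place of SparkSQL (the window-function semantics match), and the `sales` table and its rows are made up for the demo.

```python
import sqlite3

# Illustrative in-memory table; region/amount values are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100), ("east", 100), ("east", 50), ("west", 75)],
)

# Same window functions as in the SparkSQL work above:
# rank/dense_rank/row_number for ordering within a partition,
# lag for pulling the previous row's value.
rows = conn.execute("""
    SELECT region, amount,
           rank()       OVER w AS rnk,
           dense_rank() OVER w AS drnk,
           row_number() OVER w AS rn,
           lag(amount)  OVER w AS prev_amount
    FROM sales
    WINDOW w AS (PARTITION BY region ORDER BY amount DESC)
""").fetchall()

for row in rows:
    print(row)
```

Note how the tied amounts in the "east" partition separate the three ranking functions: `rank` skips to 3 after the tie, `dense_rank` moves to 2, and `row_number` stays strictly sequential.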

Associate Data Engineer

MBI Health Services
June 2017 - May 2023
  • Developed Hive external tables, internal tables, and views in Athena to meet business requirements.
  • Worked with PySpark RDDs and DataFrames for data transformation.
  • Implemented multiple data transformations using PySpark functions: multiple joins; window functions (rank, dense_rank, row_number, lag, lead); and groupBy, filter, orderBy, distinct, union, and withColumn.
  • Staged multiple datasets for the project in S3 to be consumed in Athena.
  • In AWS Glue PySpark, read multiple Athena tables using the Glue Data Catalog.
  • In AWS Glue PySpark, created DynamicFrames to work with the AWS-native services S3 and Athena.
  • In AWS Glue PySpark, wrote data to S3 locations using DynamicFrames.
  • Converted DynamicFrames to DataFrames and DataFrames back to DynamicFrames.
  • Created Hive tables in Athena and wrote SQL queries to solve business problems.
  • Created a plan for migrating all existing projects, originally developed as shell scripts, to Spark scripts.
  • Developed multiple ETL data transformation jobs with PySpark using AWS Glue.
  • Responsible for the design and deployment of SparkSQL scripts based on business requirements.
  • Used AWS Identity & Access Management (IAM).
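The catalog-read, DynamicFrame/DataFrame round trip, and S3 write described above follow the standard AWS Glue job shape. A minimal sketch of that pattern, runnable only inside a Glue job environment; the database, table, and bucket names (`my_database`, `my_table`, `s3://my-bucket/output/`) are illustrative placeholders, not names from the projects above:

```python
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

# Glue job boilerplate (valid only inside an AWS Glue run).
glue_context = GlueContext(SparkContext.getOrCreate())

# Read an Athena-visible table registered in the Glue Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_database", table_name="my_table"
)

# DynamicFrame -> Spark DataFrame for transformations, then back.
df = dyf.toDF()
df = df.dropDuplicates()
dyf_out = DynamicFrame.fromDF(df, glue_context, "dyf_out")

# Write the result to S3 as Parquet for consumption in Athena.
glue_context.write_dynamic_frame.from_options(
    frame=dyf_out,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/output/"},
    format="parquet",
)
```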

Education

University of the District of Columbia

Bachelor of Arts in Accounting and Bachelor of Arts in Management Information Systems

Bharathiar University

M.B.A