Karthik Datta Yerram
Development
New York, United States
Skills
Machine Learning (ML)
About
Karthik Datta Yerram's skills align with IT R&D professionals (Information and Communication Technology) and with programmers in the same field. He appears to be a low-to-mid-level candidate, with 4 years of experience.
Work Experience
Lead AI Engineer
TrueBuilt Software
June 2023 - Present
- Owned and led TrueBuilt's AI initiatives, directly accountable to the CEO and CTO. Engineered the AI stack and integrated AI into core products, expanding the customer base from 1 to 10 within three months.
- Fine-tuned GPT-3.5 Turbo and leveraged OCR to automate the identification of sheet numbers and titles in architectural plans, reaching 98% accuracy and significantly outperforming competitor tools.
- Designed and deployed 6 AI-driven features within 8 months, turning them into pivotal selling points for the software and generating an estimated $36,000 in new revenue within months of launch.
- Researched and developed a novel 10-step algorithm integrating YOLOv8 object detection with classical computer vision techniques to automate the manual "take-off" process, reducing the time from 3 hours to a few seconds. [Patent pending]
- Architected an end-to-end pipeline (dataset creation, ingestion, and training) for fine-tuning the Segment Anything Model (SAM) on a custom dataset to automate takeoff on 2D floor plans, decreasing labeling time by 90%.
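The sheet-number work above pairs OCR with a fine-tuned model. As a minimal illustration of one post-processing step only, a regex like the following could pull candidate sheet identifiers from raw OCR text of a title block; the pattern, function name, and assumed "discipline letters + digits" format are illustrative, not TrueBuilt's implementation:

```python
import re

# Hypothetical sheet-number shape: 1-2 discipline letters, optional
# separator, digits, e.g. "A-101", "S2.01", "M-301". The production
# system used a fine-tuned GPT-3.5 Turbo model on OCR output; this
# regex is only a rough stand-in for the extraction step.
SHEET_NO = re.compile(r"\b([A-Z]{1,2}[-.]?\d{1,3}(?:\.\d{1,2})?)\b")

def find_sheet_numbers(ocr_text: str) -> list:
    """Return candidate sheet numbers found in raw OCR text."""
    return SHEET_NO.findall(ocr_text)
```

A learned model earns its keep where title blocks are inconsistent or OCR is noisy, which is exactly where a fixed pattern like this breaks down.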
Computer Vision and Deep Learning Engineer
Shipin Systems
February 2023 - June 2023
- Led the full integration of thermal cameras into the Shipin ecosystem, developing a hotspot-detection algorithm with OpenCV for critical zones such as generators, engines, and oil-water separators, significantly enhancing safety through immediate alerts to relevant personnel.
- Architected an algorithm combining 2D power spectral density analysis (OpenCV) with a convolutional neural network (TensorFlow) to detect blur or wetness on the camera lens, reducing false detections by 83%.
- Consolidated and automated the error-analysis pipeline used to analyze false predictions of the object detection model, cutting computation time by 66.67%.
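The blur/wetness detector above rests on a standard signal-processing fact: blur attenuates high spatial frequencies. A minimal NumPy sketch of the spectral half of that idea (the cutoff value and function name are assumptions; the production system combined this kind of feature with a CNN):

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: int = 30) -> float:
    """Fraction of 2D spectral energy outside a low-frequency window.

    A blurred or wet lens suppresses high frequencies, so a low ratio
    suggests a degraded image. `cutoff` (half-width of the excluded
    low-frequency window) is an illustrative choice.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2  # 2D PSD
    total = spectrum.sum()
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    return float((total - low) / total)
```

A threshold on this ratio gives a cheap per-frame score; a CNN on top can learn the harder distinction between genuine blur and legitimately low-texture scenes.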
Computer Vision Intern
Rivian Automotive
May 2022 - August 2022
- Worked on one-shot Neural Architecture Search (NAS) to optimize classification backbones: trained a supernet composed of ResNet-50 blocks and used an evolutionary algorithm as the search strategy, achieving a 65% reduction in GMACs while reaching 87% accuracy on the CIFAR-10 dataset.
- Optimized object detection models with one-shot NAS techniques, reducing GFLOPs by up to 87% versus the baseline YOLOv5-M on the on-board SoC, for the Guard Mode monitoring feature of the Rivian car.
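The evolutionary search strategy mentioned above can be sketched in a few lines. This toy mutation-plus-selection loop searches over per-layer options; in a real one-shot pipeline, `fitness` would evaluate each sub-network with weights inherited from the trained supernet (the population size, mutation scheme, and function names here are assumptions, not Rivian's setup):

```python
import random

def evolve(search_space, fitness, pop_size=20, generations=30, seed=0):
    """Toy evolutionary search over one-shot NAS sub-network configs.

    Each candidate picks one option per layer (e.g. a block width).
    Elitist selection keeps the fittest half each generation and
    refills the population with single-gene mutants of those parents.
    """
    rng = random.Random(seed)
    pop = [[rng.choice(opts) for opts in search_space] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = []
        for p in parents:
            child = list(p)
            k = rng.randrange(len(child))       # mutate one gene at random
            child[k] = rng.choice(search_space[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because evaluation reuses supernet weights instead of retraining each candidate, the search cost is dominated by forward passes, which is what makes sweeping hundreds of architectures tractable.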
Deep Learning Engineer - Intern
Vimaan Robotics India Private Ltd
August 2020 - October 2020
- Implemented automation in the Labelme software for efficient annotation propagation in videos. For static objects, used the SIFT algorithm to calculate keypoint distances; for dynamic objects, improved annotation accuracy by integrating FlowNet 2.0 and the Lucas-Kanade algorithm with OpenCV to compute optical flow at object boundaries, saving 66% of frame-by-frame video annotation time.
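Propagating a boundary point from one frame to the next with Lucas-Kanade reduces to a small least-squares solve over image gradients in a window. A minimal single-point, single-scale NumPy re-derivation (the pipeline above used OpenCV and FlowNet 2.0; this is illustrative, not the production code):

```python
import numpy as np

def lucas_kanade_point(prev, curr, y, x, win=7):
    """Estimate the (dy, dx) motion of the pixel at (y, x).

    Solves the brightness-constancy system Iy*dy + Ix*dx = -It over a
    (2*win+1)^2 window by least squares. No pyramid, so it only holds
    for small displacements.
    """
    Iy, Ix = np.gradient(prev.astype(float))          # spatial gradients
    It = curr.astype(float) - prev.astype(float)      # temporal gradient
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Iy[sl].ravel(), Ix[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dy, dx), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(dy), float(dx)
```

OpenCV's pyramidal implementation adds coarse-to-fine refinement so larger motions stay inside the linearization's valid range, which is why it is the practical choice for video annotation.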
Computer Vision Intern
Infrrd Private Ltd
January 2020 - July 2020
- Built a model using a Support Vector Machine classifier with Histogram of Oriented Gradients (HOG) features, achieving 87% accuracy. Leveraged OpenCV to pre-process the raw images before feeding them to the model to verify the output of OCR.
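The feature side of that HOG-plus-SVM model can be sketched compactly: per-cell histograms of gradient orientations, weighted by gradient magnitude. A simplified NumPy version without block normalization (cell size, bin count, and function name are illustrative choices; an SVM would consume the resulting vector):

```python
import numpy as np

def hog_descriptor(gray: np.ndarray, cell: int = 8, bins: int = 9) -> np.ndarray:
    """Simplified Histogram of Oriented Gradients feature vector.

    Builds one magnitude-weighted orientation histogram per cell and
    L2-normalizes the concatenation. Real HOG adds overlapping block
    normalization for illumination invariance.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    h, w = gray.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

In practice, `skimage.feature.hog` or OpenCV's `HOGDescriptor` would replace this sketch, feeding a linear SVM trained on labeled crops.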
Electronics and AI Member
Mars Rover Manipal
October 2017 - July 2019
- Led the Electronics and AI team of Mars Rover Manipal, ranked first in Asia and eighth worldwide out of 84 teams in the University Rover Challenge, organized by The Mars Society at the Mars Desert Research Station (MDRS) in Utah, USA.
- Built a MobileNet V2 model for visual-marker detection and deployed it on the on-board SoC (Jetson TX2).
- Implemented local planning for obstacle avoidance and autonomous rover traversal using an Intel RealSense depth camera, analyzing the camera's point cloud stream with the Point Cloud Library.
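The obstacle-avoidance step above boils down to asking whether enough depth points fall in the rover's path. A naive NumPy check on an (N, 3) point cloud in the camera frame (x right, y down, z forward); the rover used the Point Cloud Library on the RealSense stream, and all thresholds and names here are illustrative:

```python
import numpy as np

def obstacle_ahead(points: np.ndarray, max_range: float = 1.5,
                   corridor: float = 0.4, min_points: int = 50) -> bool:
    """Flag an obstacle when enough cloud points fall inside a
    corridor directly ahead of the rover.

    Counts points with depth between 0.2 m and `max_range` whose
    lateral offset is within `corridor` of the camera axis; the
    `min_points` floor rejects isolated sensor noise.
    """
    x, z = points[:, 0], points[:, 2]
    mask = (z > 0.2) & (z < max_range) & (np.abs(x) < corridor)
    return int(mask.sum()) >= min_points
```

A local planner would run a check like this per frame and bias the steering command away from the occupied corridor; PCL's filtering and clustering make the production version robust to outliers.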