Praneeth Bathula
Development
Telangana, India
Skills
Machine Learning (ML)
About
Praneeth Kumar Bathula's skills align with IT R&D Professionals (Information and Communication Technology), and he also has skills associated with System Developers and Analysts (Information and Communication Technology). He appears to be a low-to-mid-level candidate with 4 years of experience.
Work Experience
Data Science Intern
Innomatics
October 2020 - April 2021
- Performed data analysis, data visualization, and predictive analysis on numerical and categorical data to surface useful insights.
- Built models using machine learning, deep learning, and natural language processing techniques.
- Performed predictive analysis on churn data.
- Built ANN and CNN deep learning models on image data.
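The churn predictive-analysis step above might look like the following minimal scikit-learn sketch. The column names and data are illustrative placeholders, not taken from the actual project:

```python
# Hypothetical churn prediction on mixed numerical/categorical data.
# Column names and values are made up for illustration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "tenure_months": [1, 24, 3, 48, 6, 36, 2, 60],
    "plan": ["basic", "pro", "basic", "pro", "basic", "pro", "basic", "pro"],
    "churned": [1, 0, 1, 0, 1, 0, 1, 0],
})

# Scale the numerical column, one-hot encode the categorical one.
pre = ColumnTransformer([
    ("num", StandardScaler(), ["tenure_months"]),
    ("cat", OneHotEncoder(), ["plan"]),
])
model = Pipeline([("pre", pre), ("clf", LogisticRegression())])
model.fit(df[["tenure_months", "plan"]], df["churned"])

preds = model.predict(df[["tenure_months", "plan"]])
print(preds.tolist())
```

The same pipeline pattern extends to real churn data by adding columns to the two transformer lists.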
Machine Learning Engineer
People Tech Group
April 2021 - Present
Project: Dental AI (Ivory AI)
- Worked on the Dental-AI project, focused on detecting dental issues in X-ray images.
- Utilized a comprehensive dataset of over 63,000 X-ray images (OPG & IOP) annotated with Label Box.
- Conducted data analysis on the images using pandas and OpenCV.
- Trained YOLO models for both object detection and instance segmentation tasks.
- Achieved strong performance with customized YOLOv5 X6/X7/X8 models designed specifically for OPG detection.
- Dockerized the application and its dependencies to ensure smooth deployment across diverse environments.
- Developed and deployed a FastAPI service on AWS Lambda to serve the models.
- Implemented a dynamic model-selection mechanism based on image characteristics and model type to optimize performance.
- Generated JSON output containing bounding box coordinates and issue labels for improved interpretability.
- Applied MLOps practices for efficient, scalable identification of dental issues, and used MLflow to track models.
Project: CPRS AI (Client: CPRS)
- Developing applications using Large Language Models (LLMs), LangChain, and semantic search.
- Utilizing text embeddings and vector databases such as Chroma for efficient retrieval of information from documents.
- Implementing Retrieval-Augmented Generation (RAG) applications for enhanced performance.
- Employing ChromaDB and Pinecone vector stores for data retrieval to improve the accuracy and relevance of retrieved information.
- Focusing on building systems that enable interactive conversations with both structured and unstructured data.
- Explored different LLMs, including LLaMA 2, Mistral, Zephyr, and GPT models.
- Worked with Anthropic Claude 2.1 via AWS Bedrock to generate SQL queries from natural-language input, and used Retrieval-Augmented Generation to extract information from documents.
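The embedding-based retrieval at the heart of the RAG work above can be sketched without any external vector store. The embeddings below are tiny hand-made vectors standing in for what a real embedding model would produce; a store like Chroma or Pinecone performs the same similarity search at scale:

```python
# Minimal sketch of embedding-based semantic retrieval for RAG.
# The "embeddings" are toy hand-made vectors, not real model output.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Document chunks paired with their (toy) embedding vectors.
index = [
    ("Invoices are processed within 30 days.", [0.9, 0.1, 0.0]),
    ("The dental clinic opens at 9 AM.",       [0.1, 0.8, 0.2]),
    ("Refunds require a signed request form.", [0.7, 0.0, 0.4]),
]

def retrieve(query_embedding, k=2):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(
        index,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# A query embedding close to the "invoices"/"refunds" region of the space.
top = retrieve([0.8, 0.05, 0.2], k=2)
print(top)
```

In a production RAG pipeline the retrieved chunks would then be placed into the LLM prompt as grounding context.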
- The SQL queries generated by the Large Language Model (LLM) are executed on the SQL Server database to retrieve the desired information.
- Utilized Amazon Titan embeddings from AWS Bedrock to store vectorized metadata in the Pinecone vector store.
Project: Fractionable AI
- Developed an AI model using Hugging Face community resources for parsing candidate resumes effectively.
- Utilized Python, scikit-learn, pandas, NumPy, and Jupyter Notebook for data preprocessing, model training, and evaluation.
- Employed machine learning and natural language processing techniques to improve parsing accuracy and efficiency.
- Developed a user-friendly FastAPI interface for seamless interaction with the model.
- Utilized AWS Lambda for serverless computing, ensuring scalability and cost-efficiency.
- Employed SQL for data extraction from databases, facilitating seamless integration with the AI model.
- Implemented an MLOps cycle to streamline model development, deployment, and maintenance.
- Dockerized the application to encapsulate dependencies, ensuring consistent deployment across environments.
- Improved scalability and efficiency by deploying the application on AWS Lambda.
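A resume parser like the one described often starts from simple rule-based field extraction before any ML model is layered on top. This stdlib-only sketch (the field names and patterns are illustrative, not from the actual project) pulls an email address and a skills list out of raw resume text:

```python
# Illustrative rule-based resume parsing: extract an email address and a
# skills list from raw text. A production parser would combine such
# heuristics with ML/NLP models.
import re

def parse_resume(text):
    email_match = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    skills_match = re.search(r"Skills:\s*(.+)", text)
    skills = (
        [s.strip() for s in skills_match.group(1).split(",")]
        if skills_match else []
    )
    return {
        "email": email_match.group(0) if email_match else None,
        "skills": skills,
    }

sample = """Jane Doe
jane.doe@example.com
Skills: Python, scikit-learn, FastAPI"""

result = parse_resume(sample)
print(result)
```

Wrapping `parse_resume` in a FastAPI endpoint and deploying via a Docker image to AWS Lambda would follow the serving pattern the project describes.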