Nisarg Patel
Development
Arizona, United States
Skills
Machine Learning (ML)
About
NISARG PATEL's skills align with Programmers (Information and Communication Technology). NISARG also has skills associated with IT R&D Professionals (Information and Communication Technology). NISARG PATEL appears to be a low-to-mid-level candidate, with 4 years of experience.
Work Experience
Research Assistant
Dr. Sudeep Tanwar
August 2020 - August 2022
- Architected a state-of-the-art machine learning model for cryptocurrency price prediction in TensorFlow and Python; by incorporating price-interdependency metrics, the model achieved a 25% reduction in RMSE.
- Implemented a secure deep learning framework for distributed learning, targeting object detection tasks in computer vision. Preserved end-user privacy while achieving a 40% reduction in pipeline cost.
Research & Development Intern
Samsung Research and Development Institute
January 2022 - June 2022
Automated Information Extraction for Technical Report Creation
- Constructed a data pipeline to analyze call-failure logs, capable of processing 1M logs per second in parallel. Improved call quality by 68% through this analysis, eliminating the need for human intervention.
- Formulated a machine learning framework to classify potential call failures based on pre-defined threshold domains handled by specific teams, achieving a 20% reduction in call failures.
Research Assistant
Cogint Lab
August 2022 - Present
(Mentor: Dr. Chitta Baral) Tempe, USA

LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models
- Developed LogicBench, a comprehensive natural language benchmark covering propositional, first-order, and non-monotonic logic across 25 axioms, to assess the logical reasoning capabilities of language models.
- Evaluated 7 SOTA transformer-based models, highlighting notable difficulties in logical reasoning across various contexts and emphasizing the need for further research to enhance reasoning skills.
- Demonstrated the efficacy of LLMs trained on LogicBench by showing improved logical reasoning ability, with enhanced performance across diverse logical reasoning datasets such as LogicNLI, FOLIO, LogiQA, and ReClor.

Can NLP Models 'Identify', 'Distinguish', and 'Justify' Questions that Don't have a Definitive Answer? (ACL 2023)
- Created the QnotA dataset, comprising 5 distinct categories of questions without definitive answers. Proposed corresponding question-answer instances for training Large Language Models (LLMs), yielding a 10.15% average performance boost in GPT-3 across different prompting methodologies.
- Improved GPT-3's effectiveness through prompting techniques that filter out questions unsuitable for direct answers, achieving a considerable performance improvement on questions without definitive answers.