A graduate student at Arizona State University
specializing in Robotics and Autonomous Systems.
Passionate about the intersection of Robotics, Mechatronics, and Artificial Intelligence, with expertise in Perception, Motion Planning, Computer Vision, Machine Learning, and Control Systems. I have two years of experience developing software robots for the automotive industry. Join me on my journey to redefine the future of technology.
Languages: Python, C++, C#, MATLAB, Embedded C, SQL, PLC programming
Software: Docker, Simulink, SolidWorks, LabVIEW, AutoCAD, Arduino IDE, UiPath, Microsoft Office, Power BI
Frameworks / Tools: ROS 2, Gazebo, OpenCV, PyTorch, TensorFlow, scikit-learn, MQTT, Ubuntu
Hardware: ESP32, Arm Cortex-M microcontrollers, ATmega, Raspberry Pi
Protocols: SPI, I2C, UART, CAN Bus, RF integration (ZigBee, LoRa, Wi-Fi, BLE)
Master of Science, Robotics and Autonomous Systems
August 2022 - May 2024
Bachelor of Technology, Mechatronics Engineering
July 2016 - May 2020
Automation Developer, Aliter Business Solutions Pvt Ltd
Robotics Intern, Larsen & Toubro
• Implemented a real-time human posture detection module using an EfficientNet architecture on a HAR (human activity recognition) dataset, achieving over 80% test accuracy across five postures.
• Deployed the TinyML model on an Arduino Nano 33 BLE for real-time inference on IMU sensor data, using an ESP32 with Wi-Fi to transmit data to AWS IoT Core and MongoDB; orchestrated bidirectional communication between the Arduino Nano 33 BLE and the ESP32 over I2C (the model-conversion step is sketched after this list).
• Developed a client-side web application in HTML and JavaScript to visualize results and interact with sensor data retrieved from MongoDB through an Express.js server.
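A minimal sketch of the model-conversion step that typically precedes this kind of on-device deployment: a trained Keras classifier is quantized to an int8 TensorFlow Lite model small enough for a microcontroller such as the Arduino Nano 33 BLE. The stand-in model, input shape, and calibration data below are illustrative assumptions, not the project's actual artifacts.

```python
import numpy as np
import tensorflow as tf

# Small stand-in for the trained posture classifier (the real project used an
# EfficientNet-based model; shapes and sizes here are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 postures
])

def representative_data():
    # A handful of calibration samples; in practice these come from the training set.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("posture_classifier_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file can then be converted to a C array and compiled into the firmware with TensorFlow Lite for Microcontrollers.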
• Designed a CNN-based TinyML model for real-time audio inference, achieving 92% training accuracy and 80% on-device accuracy on a set of mental health-related keywords (a keyword-spotting model sketch follows this list).
• Expanded the dataset with onboard-microphone recordings and preprocessed the audio signals with noise-reduction techniques, improving model accuracy by 15%.
• Deployed the model with optimized inference on an Arduino Nano for low-power edge audio analytics.
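A minimal sketch, assuming MFCC-style spectrogram inputs, of a small CNN keyword classifier along the lines described above; the keyword count, input shape, and layer sizes are illustrative assumptions rather than the trained model's actual configuration.

```python
import tensorflow as tf

NUM_KEYWORDS = 5           # illustrative number of mental-health-related keywords
INPUT_SHAPE = (49, 13, 1)  # e.g. 49 MFCC frames x 13 coefficients (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu", input_shape=INPUT_SHAPE),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_KEYWORDS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A model this small keeps the parameter count low enough to quantize and fit in the Arduino Nano's flash and RAM budget.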
• Implemented RRT, RRT*, and Informed RRT* algorithms on a map of the ASU campus for navigating around obstacles.
• Tuned Informed RRT* to balance computational cost and path quality, achieving roughly 40% shorter paths than RRT through heuristic, ellipsoid-bounded sampling (sketched after this list).
• Demonstrated a strong understanding of motion planning and robotics principles, applying theoretical knowledge to a practical, real-world map.
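A minimal sketch of the ellipsoidal sampling step that distinguishes Informed RRT* from plain RRT*: once a solution of cost c_best is known, new samples are drawn only from the ellipse of points that could improve it. The 2D setting and names are illustrative; the ASU map, tree growth, and collision checking are omitted.

```python
import numpy as np

def sample_informed(start, goal, c_best, rng=np.random.default_rng()):
    """Sample a 2D point from the ellipse containing all paths shorter than c_best
    (the heuristic sampling domain used by Informed RRT*)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c_min = np.linalg.norm(goal - start)
    center = (start + goal) / 2.0

    # Rotation aligning the ellipse's major axis with the start-goal direction.
    direction = (goal - start) / c_min
    theta = np.arctan2(direction[1], direction[0])
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # Semi-axes: r1 along the start-goal line, r2 perpendicular to it.
    r1 = c_best / 2.0
    r2 = np.sqrt(max(c_best**2 - c_min**2, 0.0)) / 2.0

    # Uniform sample from the unit disk, then stretch, rotate, translate.
    phi = rng.uniform(0, 2 * np.pi)
    rad = np.sqrt(rng.uniform())
    unit = np.array([rad * np.cos(phi), rad * np.sin(phi)])
    return center + C @ (np.array([r1, r2]) * unit)

# Example: the sampling region shrinks as better solutions are found.
print(sample_informed((0, 0), (10, 0), c_best=12.0))
```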
• Developed and implemented PRM, a sampling-based planner, with uniform, random, Gaussian, and bridge sampling strategies to navigate a robot through complex environments on the ASU map.
• Analyzed how each sampling strategy affects PRM's path quality and coverage for robot navigation on the ASU map (bridge sampling sketched below).
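A minimal sketch of the bridge-sampling strategy mentioned above, which biases PRM samples toward narrow passages; the occupancy-grid representation, standard deviation, and helper names are illustrative assumptions.

```python
import numpy as np

def bridge_sample(occupancy, n_samples, sigma=15.0, rng=np.random.default_rng()):
    """Bridge sampling for PRM: keep the midpoint of two nearby colliding points
    when that midpoint is collision-free, which concentrates samples in narrow passages.
    `occupancy` is assumed to be a 2D boolean array, True where an obstacle is."""
    h, w = occupancy.shape
    in_collision = lambda p: (
        p[0] < 0 or p[1] < 0 or p[0] >= h or p[1] >= w or occupancy[int(p[0]), int(p[1])]
    )

    samples = []
    while len(samples) < n_samples:
        p1 = rng.uniform([0, 0], [h, w])
        p2 = p1 + rng.normal(0.0, sigma, size=2)   # second point drawn near the first
        if in_collision(p1) and in_collision(p2):
            mid = (p1 + p2) / 2.0
            if not in_collision(mid):
                samples.append(mid)
    return np.array(samples)

# Example: a 100x100 grid with a narrow corridor through an obstacle band.
grid = np.zeros((100, 100), dtype=bool)
grid[:, 45:55] = True
grid[48:52, 45:55] = False   # narrow passage the sampler should find
print(bridge_sample(grid, n_samples=5, sigma=20.0))
```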
• Implemented algorithms such as BFS, DFS, Dijkstra’s, A* and Weighted A* to interpret grid maps and navigate around obstacles.
• Created heuristic functions for A* and Weighted A* to guide the search efficiently.
• Conducted performance analysis of each algorithm, measuring runtime, memory usage, and path length.
• Implemented user interaction for entering start and goal points in custom pathfinding scenarios, and visualized the grid map, obstacles, and resulting paths (a grid A* sketch follows this list).
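A minimal sketch of A* on a 4-connected grid with a Manhattan heuristic, which also covers Weighted A* by inflating the heuristic; the grid encoding and weight parameter are illustrative assumptions.

```python
import heapq

def astar(grid, start, goal, weight=1.0):
    """A* on a 4-connected grid. `grid` is a list of lists with 0 = free, 1 = obstacle.
    `weight` > 1 turns this into Weighted A* (faster, possibly suboptimal)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic

    open_heap = [(weight * h(start), 0, start)]
    came_from, g_cost = {}, {start: 0}

    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = g + 1
                if new_g < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = new_g
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (new_g + weight * h(nxt), new_g, nxt))
    return None  # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```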
• Developed and implemented diverse ML models (linear, lasso, ridge, and polynomial regression, decision trees, random forest, and PCA) for car-price prediction on a dataset of 70,063 records with 14 features.
• Conducted a comparative analysis across a local machine, Google Colab, and AWS SageMaker; the Random Forest model with PCA achieved the highest prediction accuracy (R²: 0.98, MAE: 1.5, RMSE: 2.23), while SageMaker delivered the shortest training time (pipeline sketched after this list).
• Processed and visualized the dataset to understand feature distributions, correlations, and outliers, informing model selection and improving overall prediction accuracy.
• Quantified the trade-off between accuracy and training time, supporting informed model selection for used-car price prediction under different compute budgets.
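A minimal sketch of the kind of PCA + Random Forest pipeline described above, using scikit-learn; the synthetic placeholder data, component count, and tree count are illustrative assumptions, not the project's actual dataset or tuned parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder standing in for the 70,063-record, 14-feature car dataset.
X = np.random.rand(1000, 14)
y = np.random.rand(1000) * 30  # e.g. price in thousands

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R^2 :", r2_score(y_test, pred))
print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```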
• Developed an autonomous navigation system for a TurtleBot3 on a prebuilt map in Gazebo, leveraging ROS 2, AprilTag markers, and YOLOv3 weights, achieving 90% object-recognition accuracy and a 50% reduction in navigation time.
• Used ROS 2 as the communication and middleware layer; implemented navigation, localization, SLAM-based map building, and coordinate transforms with TF2 and quaternions.
• Integrated the YOLOv3-tiny neural network with ROS 2 for image recognition and used OpenCV for image filtering and segmentation (a minimal detection node is sketched below).
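A minimal sketch of how a YOLOv3-tiny detector can be wrapped in an ROS 2 node with rclpy and OpenCV's DNN module; the topic names, weight/config file paths, and confidence threshold are illustrative assumptions, and class decoding and non-max suppression are omitted.

```python
import cv2
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String


class YoloDetector(Node):
    """Subscribes to camera images and publishes YOLOv3-tiny detection counts."""

    def __init__(self):
        super().__init__("yolo_detector")
        # The YOLOv3-tiny config and weights are assumed to exist locally.
        self.net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
        self.bridge = CvBridge()
        self.pub = self.create_publisher(String, "detections", 10)
        self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        self.net.setInput(blob)
        outputs = self.net.forward(self.net.getUnconnectedOutLayersNames())
        # Count detections above a confidence threshold (class decoding omitted).
        n = sum(int(det[5:].max() > 0.5) for out in outputs for det in out)
        self.pub.publish(String(data=f"{n} objects detected"))


def main():
    rclpy.init()
    rclpy.spin(YoloDetector())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```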
• Developed and implemented a low-level flight-control algorithm in MATLAB and Simulink for the Parrot Mambo Mini-Drone, enabling autonomous navigation along a predefined track.
• Combined a Kalman filter with edge-detection techniques and tuned HSV thresholds to detect the nearest edge of the track, ensuring precise positioning of the drone (the detection idea is sketched after this list).
• Deployed the line-follower function to the drone over its access point and Bluetooth, enabling autonomous operation in real-world scenarios.
• Demonstrated practical applications of the line-following drone, showcasing its potential in logistics, warehouse management, and agricultural monitoring.
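The original vision pipeline ran in Simulink; the Python/OpenCV sketch below only illustrates the same idea of HSV thresholding followed by edge detection to find the nearest track edge. The HSV bounds and frame size are illustrative assumptions rather than the tuned values mentioned above, and the Kalman filter is omitted.

```python
import cv2
import numpy as np

def nearest_track_edge(frame_bgr, hsv_low=(20, 80, 80), hsv_high=(35, 255, 255)):
    """Return the image row of the track edge closest to the bottom of the frame.
    The HSV bounds are illustrative (roughly a yellow line), not the project's tuned values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low, dtype=np.uint8), np.array(hsv_high, dtype=np.uint8))
    edges = cv2.Canny(mask, 50, 150)

    rows = np.where(edges.any(axis=1))[0]
    return int(rows.max()) if rows.size else None  # bottom-most edge row, i.e. nearest to the drone

# Example on a synthetic frame containing a yellow stripe.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[60:80, :] = (0, 220, 220)  # BGR yellow-ish band
print(nearest_track_edge(frame))
```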
• Designed and implemented an object-detection feature for the Parrot Mambo Mini-Drone using MATLAB and Simulink, enabling it to hover upon detecting an object.
• Utilized the drone's built-in sensors and feedback control system to ensure stable hovering with a positional error under 10 cm.
• Integrated the image-processing module with the object-detection algorithm to improve hovering performance by 80%, and deployed it to the drone via the access point and Bluetooth.
• Showcased the drone's ability to autonomously detect objects and initiate hovering (the trigger logic is sketched below), with a focus on safe and reliable operation.
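A minimal sketch of the hover-trigger decision implied above: switch to hover mode once the detected object covers enough of the frame. The area threshold and binary-mask representation are illustrative assumptions; the actual Simulink logic is not reproduced here.

```python
import numpy as np

def should_hover(detection_mask, area_fraction=0.05):
    """Decide whether to switch to hover mode: an object counts as 'detected' when the
    binary detection mask covers more than `area_fraction` of the frame.
    The threshold is an illustrative assumption, not the project's tuned value."""
    coverage = np.count_nonzero(detection_mask) / detection_mask.size
    return coverage > area_fraction

# Example: a mask where an object occupies 10% of a 120x160 frame.
mask = np.zeros((120, 160), dtype=np.uint8)
mask[:48, :40] = 1
print(should_hover(mask))  # True -> command the drone to hold position
```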
• Incorporated feedback mechanisms and implemented a PID controller, achieving less than 1% error in position control (a discrete PID sketch follows this list).
• Assembled and integrated the mechanical, electrical, and control systems into a functional mechatronic system with precise position and velocity control.
• Leveraged MATLAB Simulink and an Arduino Nano 33 IoT with encoder feedback to achieve ±1 RPM velocity-control accuracy.
• Employed Simulink for system simulation, validating position and velocity control and achieving a 20% improvement in motor response time.
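A minimal sketch of a discrete PID controller with anti-windup, illustrating the control structure used for position and velocity regulation; the gains, sample time, and output limits are illustrative assumptions, and the project's controller was implemented in Simulink and deployed to the Arduino.

```python
class PID:
    """Discrete PID controller with a simple anti-windup clamp.
    Gains and limits below are illustrative, not the project's tuned values."""

    def __init__(self, kp, ki, kd, dt, out_min=-255.0, out_max=255.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error

        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        if out > self.out_max or out < self.out_min:   # anti-windup: undo the integration
            self.integral -= error * self.dt           # step when the output saturates
            out = max(self.out_min, min(self.out_max, out))
        return out

# Example: hold a motor at 100 RPM given a measured speed of 92 RPM.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
print(pid.update(setpoint=100.0, measurement=92.0))
```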
• Designed and analyzed a three-legged robot in SolidWorks for efficient stair climbing, achieving a 50% increase in payload capacity.
• Built a self-stabilizing platform driven by servo motors and MPU6050 sensor data, demonstrating 80% reliability on inclined slopes (sensor-fusion idea sketched below).
• Demonstrated a cohesive robotic system combining hardware design, sensor integration, and advanced algorithms to enable efficient stair climbing and autonomous path planning.
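A minimal sketch of the kind of sensor fusion a self-stabilizing platform relies on: a complementary filter blending MPU6050 accelerometer and gyroscope readings into a tilt estimate that drives a servo correction. The filter coefficient, sample values, and servo mapping are illustrative assumptions; the actual firmware is not reproduced here.

```python
import math

def complementary_filter(prev_angle, accel, gyro_rate, dt, alpha=0.98):
    """Fuse MPU6050 accelerometer and gyroscope readings into a pitch estimate (degrees).
    `accel` is (ax, ay, az) in g, `gyro_rate` the pitch rate in deg/s; alpha is assumed."""
    accel_angle = math.degrees(math.atan2(accel[0], accel[2]))  # tilt inferred from gravity
    gyro_angle = prev_angle + gyro_rate * dt                    # integrated gyro rate
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Example: estimate pitch over a few samples and derive a servo correction.
angle = 0.0
for ax, az, rate in [(0.10, 0.99, 1.5), (0.12, 0.99, 1.2), (0.15, 0.98, 0.8)]:
    angle = complementary_filter(angle, (ax, 0.0, az), rate, dt=0.01)
servo_command = 90 - angle   # counter-rotate the platform servo (illustrative mapping)
print(round(angle, 2), round(servo_command, 2))
```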