Projects
B-GAP: Behavior-Guided Action Prediction for Autonomous Navigation
|
We focus on safe ego-navigation in dense simulated traffic environments populated by road agents with varying driver behavior. Navigation in such environments is challenging due to unpredictability in agents’ actions caused by their heterogeneous behaviors. To overcome these challenges, we propose a new simulation technique that consists of enriching existing traffic simulators with behavior-rich trajectories corresponding to varying levels of aggressiveness. We generate these trajectories with the help of a driver behavior modeling algorithm. We then use the enriched simulator to train a deep reinforcement learning (DRL) policy for behavior-guided action prediction and local navigation in dense traffic. The policy implicitly models the interactions between traffic agents and computes safe trajectories for the ego-vehicle, accounting for aggressive driver maneuvers such as overtaking, over-speeding, weaving, and sudden lane changes. Our enhanced behavior-rich simulator can be used for generating datasets that consist of trajectories corresponding to diverse driver behaviors and traffic densities, and our behavior-based navigation scheme reduces collisions by 7.13–8.40%, handling scenarios with 8× higher traffic density compared to prior DRL-based approaches. Project supervised by Prof. Dinesh Manocha at the GAMMA Group, University of Maryland, College Park. [arXiv] [project page] [code] [video]
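For illustration only, the snippet below sketches the core idea of behavior enrichment: mapping an aggressiveness level to the driving parameters of a simulated traffic agent. The parameter names and numeric ranges are my own placeholders, not the values used in the project.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DriverParams:
    target_speed: float      # m/s
    time_headway: float      # s, desired gap to the lead vehicle
    lane_change_prob: float  # probability of attempting a lane change per step

def behavior_to_params(aggressiveness: float) -> DriverParams:
    """Map an aggressiveness level in [0, 1] to driver parameters.
    Conservative drivers keep large headways and rarely change lanes;
    aggressive drivers over-speed, weave, and cut in.
    All numbers are illustrative placeholders."""
    a = float(np.clip(aggressiveness, 0.0, 1.0))
    return DriverParams(
        target_speed=25.0 + 10.0 * a,   # ~25 m/s conservative, ~35 m/s aggressive
        time_headway=2.0 - 1.5 * a,     # shrinks toward 0.5 s when aggressive
        lane_change_prob=0.02 + 0.18 * a,
    )

# Populate a simulated traffic scene with heterogeneous drivers.
rng = np.random.default_rng(0)
drivers = [behavior_to_params(a) for a in rng.uniform(0.0, 1.0, size=20)]
```

Trajectories generated from such heterogeneous agents are what the DRL policy is then trained against.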
|
Human Driver Behavior Classification from Partial Trajectory Observation
|
As autonomous vehicles are being tested on public roads, they must be able to share the road safely with human-driven vehicles. To ensure safety, autonomous vehicles must be capable of accurately estimating human drivers’ intentions and their future trajectories. While there has been extensive research in this area, most existing approaches do not take into account individual drivers’ personalities and the patterns those personalities imprint on the vehicles’ trajectories. We tackle this issue by proposing a novel method of extracting high-level features from raw vehicle trajectory data and classifying drivers into behavioral classes based on their level of aggressiveness. We demonstrate how identifying a driver's behavior improves the accuracy of short-term trajectory prediction by introducing prior knowledge of that behavior. Thesis supervised by Prof. Changliu Liu at the Intelligent Control Lab in the Robotics Institute of Carnegie Mellon University. [thesis] [code] [video] [dataset]
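As a rough illustration of the trajectory-to-features step (the exact feature set is in the thesis and code linked above), a sketch like the following computes speed, acceleration, jerk, and lateral-deviation statistics from raw positions; the specific features and units here are assumptions for exposition.

```python
import numpy as np

def trajectory_features(xy: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """Extract simple high-level features from a raw trajectory.

    xy: array of shape (T, 2) with positions in metres, sampled every dt seconds.
    Returns a fixed-length feature vector suitable for a behavior classifier.
    """
    vel = np.diff(xy, axis=0) / dt            # (T-1, 2) velocities
    speed = np.linalg.norm(vel, axis=1)
    acc = np.diff(speed) / dt                 # longitudinal acceleration
    jerk = np.diff(acc) / dt
    return np.array([
        speed.mean(),                         # average speed
        speed.max(),                          # peak speed
        np.abs(acc).mean(),                   # mean |acceleration|
        np.abs(jerk).mean(),                  # mean |jerk| (smoothness)
        np.abs(xy[:, 1] - xy[0, 1]).max(),    # max lateral deviation
    ])

# Feature vectors like these can then be fed to any off-the-shelf classifier
# trained on trajectories labelled e.g. as conservative / moderate / aggressive.
```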
|
Autonomous Vehicle Controller Design
|
As part of the course Linear Control Systems, taught by Prof. Ding Zhao, I designed a longitudinal and a lateral controller using different techniques to track the route of an autonomous buggy vehicle around the CMU campus. A buggy simulator written in Python was used to obtain the required response plots and to tune the parameters of each control method. The longitudinal motion is controlled by a PID controller, while the lateral motion is controlled using:
A PID Controller
Pole Placement
Model Predictive Control
Kalman Filter
The autonomous vehicle had to meet certain performance criteria, such as minimizing the time to complete the route and the average and maximum deviation from the reference trajectory. In the end, a race was held and we competed against each other based on these criteria. [code]
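To give a flavour of the simplest of these controllers, here is a minimal discrete PID loop in Python; the gains, time step, and toy plant model are illustrative and are not the values tuned for the buggy.

```python
class PID:
    """Minimal discrete PID controller, in the spirit of the longitudinal loop."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float) -> float:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: track a reference speed of 8 m/s with a toy first-order plant.
pid = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.02)
speed = 0.0
for _ in range(500):
    throttle = pid.update(8.0 - speed)
    speed += 0.02 * (throttle - 0.1 * speed)   # illustrative vehicle dynamics
```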
|
Robot Design
|
Drawing inspiration from penguins, I collaborated with a team of students to design, manufacture, and test an underwater penguin robot for the course Robot Design & Experimentation, taught by Prof. Aaron Johnson. We came up with a ball-and-socket motion transmission mechanism for the flippers, fabricated a rib-and-spar body using 3D printers and laser cutters, and used an Arduino for the controls. Besides contributing to the overall design and manufacturing process of the robot, I developed an underwater simulator in Gazebo with the model of our constructed robot and used it to adjust the control parameters and run tests before submerging the robot in the water. [report] [executive summary] [video]
|
Game Design
|
Using OpenGL in C++, we implemented a 2D fighting game for the course Engineering Computation, taught by Prof. Nestor Gomez. In single-player mode, the user controls a stick-man figure and fights against an AI agent. In multiplayer mode, two users each control their own stick-man figure and fight until one of them is eliminated. The stick-man figure can switch between different states: gun mode (ranged), knife mode (melee), and fight mode (melee). I created a menu for the game, as well as the background environment, enabling the players to actively interact with it (e.g., climbing stairs or jumping between floor levels). I also implemented a simple AI algorithm for the enemy player in single-player mode. [code] [video]
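As a purely hypothetical sketch of what a simple enemy AI of this kind could look like (the actual game and its AI are in the C++ code linked above), the rule below picks an action from the player's relative position; the thresholds and action names are invented.

```python
from enum import Enum

class Mode(Enum):
    GUN = "gun"      # ranged
    KNIFE = "knife"  # melee
    FIGHT = "fight"  # melee

def enemy_action(dx: float, dy: float, mode: Mode) -> str:
    """Pick an action from the displacement (dx, dy) to the player.
    Illustrative only; not the game's actual decision rule."""
    dist = (dx ** 2 + dy ** 2) ** 0.5
    if mode is Mode.GUN and dist > 3.0:
        return "shoot"                                    # use the ranged attack
    if dist > 1.0:
        return "move_right" if dx > 0 else "move_left"    # close the gap
    return "attack"                                       # within melee range
```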
|
Occluded Object Pose Estimation
|
When manipulating objects in activities of daily living, the object of interest is quite often severely occluded from the egocentric viewpoint, making it difficult to track. Inspired by this problem, we collected a synthetic dataset of manipulator postures and object poses in OpenAI Gym and mapped changes in hand pose to object displacements in order to track occluded objects. Our approach consists of a multilayer perceptron that takes as input the joint angles of the manipulator and outputs the position and rotation of the object. I worked on developing the neural network model and encoding the necessary input for the training process. I combined manipulator pose changes and previous object and manipulator positions into a vector, which is fed to the network. The output of the network consists of the future positional coordinates and the quaternion (for rotation) of the predicted pose. This project was part of the course Mechanics of Manipulation, taught by Prof. Matthew Mason. [report]
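A minimal PyTorch sketch of the kind of network described above follows; the joint count, layer widths, and exact input composition are assumptions rather than the report's configuration.

```python
import torch
import torch.nn as nn

# Input: manipulator pose changes (joint-angle deltas) concatenated with the
# previous object position and previous manipulator position, per the
# description above. Sizes below are assumed for illustration.
n_joints = 7                       # e.g., a 7-DoF manipulator (assumed)
input_dim = n_joints + 3 + 3       # joint deltas + prev object pos + prev manipulator pos
output_dim = 3 + 4                 # predicted position (x, y, z) + quaternion (w, x, y, z)

model = nn.Sequential(
    nn.Linear(input_dim, 128),
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Linear(128, output_dim),
)

x = torch.randn(1, input_dim)      # one synthetic sample
pred = model(x)
position, quaternion = pred[:, :3], pred[:, 3:]
```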
|
Genetic-Algorithm-based Optimization Framework
|
For my undergraduate diploma thesis, I developed a framework in Visual Basic that receives mathematical expressions as input, analyzes them using a suitable parser, and optimizes them with genetic algorithms. The parser accepts the expressions in string format and distinguishes between variables, parameters, and operator symbols. Besides the expressions, the user can choose among a set of genetic algorithms for the optimization, as well as their hyperparameters. The implemented software was tested and validated on two applications: minimizing the forces applied to an object grasped by a robotic arm and maximizing the stiffness of a cantilever beam. [code] [thesis (in Greek, abstract in English)]
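To convey the idea in a more compact form than the original Visual Basic, below is a small real-coded genetic algorithm that minimizes an expression supplied as a string; the parsing shortcut (a restricted eval) and all hyperparameters are stand-ins for the thesis's parser and settings.

```python
import random
import math

def make_objective(expr: str):
    """Turn an expression given as a string into a callable of x.
    A stand-in for the thesis's parser, which was written in Visual Basic."""
    names = {"x": 0.0, "sin": math.sin, "cos": math.cos, "exp": math.exp}
    def f(x: float) -> float:
        names["x"] = x
        return eval(expr, {"__builtins__": {}}, names)
    return f

def genetic_minimize(f, bounds=(-10.0, 10.0), pop_size=50, generations=100,
                     mutation_std=0.5) -> float:
    """Small real-coded GA: truncation selection, arithmetic crossover,
    Gaussian mutation. Hyperparameters are illustrative."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                     # lower objective value is better
        parents = pop[: pop_size // 2]      # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b) + random.gauss(0.0, mutation_std)
            children.append(min(max(child, lo), hi))
        pop = parents + children
    return min(pop, key=f)

best_x = genetic_minimize(make_objective("(x - 2)**2 + sin(3*x)"))
```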
|
Computational Robotics Project
|
Forward and inverse kinematics are among the most fundamental concepts in robotics. As part of an undergraduate robotics course, I chose a 6-degree-of-freedom industrial robot, the KUKA KR 6 R700 sixx WP, for which I developed MATLAB software that does the following:
Calculates the Denavit-Hartenberg parameters of the robot.
Implements the forward kinematics.
Implements the inverse kinematics.
Computes the Jacobian matrix.
Finally, I applied the software to a trajectory planning application, combining linear and quadratic interpolation between two points in space. [code]
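While the original implementation is in MATLAB (linked above), the forward-kinematics step can be sketched compactly in Python; the Denavit-Hartenberg values in the example are placeholders, not the KR 6 R700's actual parameters.

```python
import numpy as np

def dh_transform(a: float, alpha: float, d: float, theta: float) -> np.ndarray:
    """Homogeneous transform for one joint from standard DH parameters."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles) -> np.ndarray:
    """Chain the per-joint transforms to get the end-effector pose.
    dh_table rows: (a, alpha, d) per joint; theta comes from joint_angles."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Illustrative 6-DoF example with made-up link parameters (a, alpha, d).
dh = [(0.025, -np.pi/2, 0.400), (0.315, 0.0, 0.0), (0.035, -np.pi/2, 0.0),
      (0.0,  np.pi/2, 0.365), (0.0, -np.pi/2, 0.0), (0.0, 0.0, 0.080)]
pose = forward_kinematics(dh, np.zeros(6))
```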
|
2D Animation
|
For the individual project of the course Engineering Computation (taught by Prof. Nestor Gomez), I implemented a Halloween-themed demo using OpenGL in C++. The demo includes 2D animation made with this rendering framework and music that I wrote, recorded, and synchronized with the animation transitions. [code] [video]
|