Automated Intersection Management Project

From 2021-2022, I designed an automated intersection management (AIM) system for self-driving cars that improved traffic efficiency by over 50%, while also improving the safety of self-driving cars and making simpler self-driving AIs more feasible. In the process, I created a new form of Q-Learning called double Q-table learning, and I built a self-driving robot car to integrate its self-driving AI with the AIM system. I placed first at my regional science fair and fourth at the Florida State Science and Engineering Fair with my project. Along the way, I learned Python, Pygame, computer vision, TensorFlow, Q-Learning, and Raspberry Pi.
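
The page doesn't spell out the double Q-table update itself, so below is a minimal, hypothetical sketch of what a two-table Q-learning update can look like in Python; the hyperparameters and the (state, action) encoding are placeholders rather than the project's actual values.

```python
import random
from collections import defaultdict

# Placeholder hyperparameters; not the values used in the project.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

q_a = defaultdict(float)  # first Q-table, keyed by (state, action)
q_b = defaultdict(float)  # second Q-table

def choose_action(state, actions):
    """Epsilon-greedy over the sum of both tables."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_a[(state, a)] + q_b[(state, a)])

def update(state, action, reward, next_state, actions):
    """Two-table update: one table picks the best next action, the other evaluates it."""
    if random.random() < 0.5:
        best = max(actions, key=lambda a: q_a[(next_state, a)])
        target = reward + GAMMA * q_b[(next_state, best)]
        q_a[(state, action)] += ALPHA * (target - q_a[(state, action)])
    else:
        best = max(actions, key=lambda a: q_b[(next_state, a)])
        target = reward + GAMMA * q_a[(next_state, best)]
        q_b[(state, action)] += ALPHA * (target - q_b[(state, action)])
```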

The left image shows the initial design for my car. It used a Raspberry Pi computer, 6-volt batteries, a motor hat, a portable charger, a webcam, and 4 DC motors with wheels attached. The camera fed both the self-driving program and a second object detection AI that recognizes common objects encountered while driving and displays warnings when an object appears too large in frame, and therefore too close.

On the right is my modified, final design for my car. I added a breadboard for a gyroscope, took the motor hat off of the Raspberry Pi, replaced the portable charger with a more powerful one, and replaced the webcam with a Pi camera.

The video on the left shows an iteration of my final, best-performing Q-Learning model directing traffic in the simulation I created using Pygame. Each car starts with a random position and destination, and the program guides it to its destination based on that information. My best-performing model outperformed a typical static-pattern intersection by over 50%.
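
The project's simulation and controller code aren't reproduced here; as a rough sketch of the setup described above, the following Pygame snippet spawns cars with random origins and destinations and steps them toward their destinations each frame. In the actual system, the Q-Learning manager decides when and how each car proceeds through the intersection.

```python
import random
import pygame

# Placeholder intersection: four entry points on a 400x400 window.
ENTRIES = {"north": (200, 0), "south": (200, 400), "east": (400, 200), "west": (0, 200)}

def spawn_car():
    start, dest = random.sample(list(ENTRIES), 2)  # random origin and destination
    return {"pos": list(ENTRIES[start]), "dest": ENTRIES[dest]}

def step(car, speed=2):
    # Naive movement toward the destination; the real project replaces this
    # with actions chosen by the Q-learning intersection manager.
    for i in (0, 1):
        if car["pos"][i] < car["dest"][i]:
            car["pos"][i] += speed
        elif car["pos"][i] > car["dest"][i]:
            car["pos"][i] -= speed

pygame.init()
screen = pygame.display.set_mode((400, 400))
clock = pygame.time.Clock()
cars = [spawn_car() for _ in range(6)]
for _ in range(300):  # run a short, fixed number of frames
    pygame.event.pump()
    screen.fill((30, 30, 30))
    for car in cars:
        step(car)
        pygame.draw.rect(screen, (200, 200, 60), (*car["pos"], 10, 10))
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```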

The video on the right displays the interface for the self-driving car. It shows lane finding and detection (top left and right, respectively), warping of the lanes (middle row), and generation of the curve value (histogram on the bottom left, curve value generation on the bottom middle and right) that is applied to the motors for turning. This program is designed to test how performance improves after connecting to the automated intersection management system and whether less processor-intensive self-driving programs could become feasible by relying on the AIM.
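
For a sense of how the stages shown in the video fit together, here is a minimal OpenCV sketch of the threshold-warp-histogram-curve-value steps; the threshold, warp points, and scaling are placeholder assumptions rather than the project's real calibration.

```python
import cv2
import numpy as np

def curve_value(frame):
    """Rough sketch: threshold the frame, warp the road to a bird's-eye view,
    take a column histogram, and derive a signed curve value for steering."""
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)  # bright lane markings

    # Warp the road region to a top-down view (placeholder source points).
    src = np.float32([[w * 0.4, h * 0.6], [w * 0.6, h * 0.6], [0, h], [w, h]])
    dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
    warped = cv2.warpPerspective(mask, cv2.getPerspectiveTransform(src, dst), (w, h))

    # Column histogram of lane pixels; its weighted center versus the image center
    # gives a signed curve value that can be scaled into left/right motor speeds.
    hist = warped.sum(axis=0)
    if hist.sum() == 0:
        return 0.0
    center = np.average(np.arange(w), weights=hist)
    return (center - w / 2) / (w / 2)  # roughly -1 (hard left) to 1 (hard right)
```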

The image on the left shows my robot car driving on the to-scale track of the simulation that I built for testing.

The image on the right shows the awards I won with my self-driving car AIM project. On the left is my 4th-place award in the Intelligent Machines, Robotics, and System Software category at the Florida State Science & Engineering Fair, and on the right is my 1st-place award from the Havens Regional Science & Engineering Fair. I will be presenting my research project at the FSU Research Symposium soon.

My Github can be viewed at: https://github.com/henryyjiang


Portfolio

Our project placed 1st at the Havens Regional Science Fair and at the Bay County Invention Convention. In addition, we placed 4th at the Florida State Science and Engineering Fair and won an “Outstanding Project” award. We also placed in the top twelve at the National Innovator Challenge, an international invention convention competition. I will continue to improve the design and create more prototypes to start a nonprofit or business venture.

Ultrasonic Walking Aid Project

My friend Walker and I designed a smart ultrasonic walking aid prototype together in 2022. We connected with VIBUG, a technology-focused club for the visually impaired in Boston, for suggestions on how to improve our design, and we were featured on local news channels, in newspapers, and on Fox News. Raspberry Pi, Python, object detection, ultrasonic sensors, and motors were involved in designing this device.

https://www.instagram.com/heye_tech/

On the left is the front view of the walking aid. A Raspberry Pi camera detects and vocalizes objects and their positions. The Batstick uses an array of 5 ultrasonic sensors, each paired with a vibration motor in the corresponding position on the grip. Each measured distance is converted into a percentage that sets the vibration intensity, with closer objects producing stronger vibrations. This system maps distances onto the palm, simulating echolocation.
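
Below is a minimal sketch of that distance-to-vibration mapping for a single sensor-motor pair, assuming the gpiozero library on the Raspberry Pi; the pin numbers and the 200 cm range are placeholder assumptions, and the real device repeats this pairing across all 5 sensors.

```python
from gpiozero import DistanceSensor, PWMOutputDevice
from time import sleep

MAX_RANGE_CM = 200  # distances beyond this produce no vibration (placeholder)

sensor = DistanceSensor(echo=24, trigger=23, max_distance=MAX_RANGE_CM / 100)
motor = PWMOutputDevice(18)  # vibration motor driven by PWM

while True:
    distance_cm = sensor.distance * 100  # gpiozero reports distance in meters
    closeness = max(0.0, 1 - distance_cm / MAX_RANGE_CM)  # 0 = far, 1 = touching
    motor.value = closeness  # closer objects vibrate more intensely
    sleep(0.05)
```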

Pictured on the right is the newspaper story that covered our project in January. We were excited to share updates on our project and discuss our patent and science fair plans.

We created an object detection program using YOLOv3 that vocalizes what an object is and where it is positioned, and it is integrated with the device via Bluetooth using a mobile application. In the video, I am pointing a webcam at various objects to demonstrate its capabilities.
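
As an illustration of how a loop like this can be put together (not the project's actual code), here is a minimal sketch using OpenCV's DNN module with standard YOLOv3 files and the pyttsx3 text-to-speech library; the file paths, confidence threshold, and position wording are assumptions, and the Bluetooth and mobile-app integration is omitted.

```python
import cv2
import pyttsx3

# Standard YOLOv3 files; paths are placeholders.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().splitlines()
speaker = pyttsx3.init()

def describe(frame):
    """Detect objects in one frame and speak each one's name and rough position."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(net.getUnconnectedOutLayersNames()):
        for det in output:
            scores = det[5:]
            class_id = scores.argmax()
            if scores[class_id] > 0.5:
                # Turn the box's horizontal center into a rough position word.
                cx = det[0] * w
                side = "left" if cx < w / 3 else "right" if cx > 2 * w / 3 else "ahead"
                speaker.say(f"{classes[class_id]} on the {side}")
    speaker.runAndWait()

cap = cv2.VideoCapture(0)  # webcam, as in the demo video
ok, frame = cap.read()
if ok:
    describe(frame)
cap.release()
```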

NAVSEA Science & Engineering Apprenticeship Program

I completed the 8-week-long Science and Engineering Apprenticeship Program internship over the summer of 2022 at the Panama City Naval Surface Warfare Center. I worked with Navy engineers on a challenge that they were actively solving.

I worked on remote collection of sediment core samples. I designed, tested, and revised an underwater ROV (remotely operated vehicle) built from SeaPerch ROV kits. I became a better engineer, team member, and problem solver, and I learned valuable lessons directly from Navy engineers. I presented my results to flag-level Navy leadership and a panel of Navy engineers.

COVID-19 Visualization Website

This video demonstrates my final project for the Harvard Pre-College Program. The website provides map, chart, and graph visualizations of COVID-19 data, with several modes for organizing and viewing the data by reported case and death counts. HTML, CSS, JavaScript, Python, and Pandas were used to design and code the website and to organize the data.
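
For a sense of the Pandas side, here is a minimal sketch of how case and death data might be grouped for the different viewing modes; the file name and column names are placeholders, not the site's actual dataset schema.

```python
import pandas as pd

# Placeholder dataset with date, country, cases, and deaths columns.
df = pd.read_csv("covid_data.csv", parse_dates=["date"])

# One mode: total reported cases and deaths per country, sorted by cases.
by_country = (
    df.groupby("country")[["cases", "deaths"]]
    .sum()
    .sort_values("cases", ascending=False)
)

# Another mode: daily totals for charting over time.
daily = df.groupby("date")[["cases", "deaths"]].sum()

by_country.to_json("by_country.json")  # served to the JavaScript front end
daily.to_json("daily.json")
```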

Android Applications

From July to September 2020, I interned in Android app development at Hangzhou Precise Robot LLC, a Chinese motor company based in Hangzhou. I designed a QR code reader app that pairs with the company's motors via Bluetooth for remote control. The app was also created as a WeChat mini program. I gained experience with Android Studio and WeChat DevTools.

Research

In the summer of 2023, I conducted research for Dr. Juan Claudio Nino at the University of Florida using Geant4, a Monte Carlo particle simulation toolkit. I built an advanced radiation shielding simulation and recorded the effectiveness of various compound materials at the nanoscopic scale for spacecraft applications.

I am conducting research on graph neural networks and atomic structure generation under Dr. Victor Fung at Georgia Tech. I am currently working on using reverse Monte Carlo methods to generate an atomic structure based on its analyzed spectra.
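
As a toy illustration of the reverse Monte Carlo idea (not the actual research code), the sketch below perturbs random atomic positions and keeps moves that bring a simulated pair-distance "spectrum" closer to a target; the real work fits analyzed spectra with a proper acceptance criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum(positions, bins=50):
    """Stand-in spectral model: a normalized histogram of pairwise atomic distances."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    dists = dists[np.triu_indices(len(positions), k=1)]
    hist, _ = np.histogram(dists, bins=bins, range=(0, 10), density=True)
    return hist

def reverse_monte_carlo(target, n_atoms=20, steps=5000, step_size=0.1):
    """Move one random atom at a time, keeping moves that reduce the spectral error."""
    positions = rng.uniform(0, 10, size=(n_atoms, 3))
    error = np.sum((spectrum(positions) - target) ** 2)
    for _ in range(steps):
        trial = positions.copy()
        i = rng.integers(n_atoms)
        trial[i] += rng.normal(scale=step_size, size=3)
        trial_error = np.sum((spectrum(trial) - target) ** 2)
        if trial_error < error:  # greedy accept; real RMC also accepts some worse moves
            positions, error = trial, trial_error
    return positions
```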

Meal Max

Meal Max is an alternative to calorie counting that prioritizes measuring the satiety of food rather than its calories. Instead of encouraging people to starve themselves through calorie restrictions, Meal Max is about making dieting sustainable by helping practitioners maintain their usual level of fullness through eating smart, not hard. By tracking the average satiety of their foods instead of calories, users shift their attention from frantically balancing their present and future calories on a tightrope to mindfully choosing healthy meals that keep their bellies full. Additionally, the strong correlation between healthy foods and high satiety incentivizes people to eat healthier, whereas calorie counting sometimes incentivizes people to savor unhealthy foods whenever they can. Overall, measuring satiety is a healthier, more sustainable, and more enjoyable way of helping people stay in shape, and it is perfect both for the layman trying to shed pounds and for extreme cases such as eating disorder patients, who desperately need a way to measure their food intake without relapsing into their overbearing standards.
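
As a purely hypothetical sketch of the core idea (no Meal Max code is shown on this page), tracking satiety can be as simple as logging a satiety score for each meal and comparing the running average to the user's usual level of fullness.

```python
# Hypothetical satiety log: one (meal, satiety score out of 100) entry per meal.
meals = []

def log_meal(name, satiety):
    meals.append((name, satiety))

def average_satiety():
    # Running average the user compares against their usual level of fullness.
    return sum(score for _, score in meals) / len(meals) if meals else 0

log_meal("oatmeal with fruit", 80)
log_meal("candy bar", 25)
print(f"Average satiety so far: {average_satiety():.0f}/100")
```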

Nonprovisional Patents