The Capstone Project


This project is my capstone project for my undergraduate studies at CSU Channel Islands. Its aim was to explore various ways to train a self-driving vehicle to operate competitively on a racing track. The project builds on a popular hobby project known as Donkey Car, which transforms a hobby RC car into a self-driving vehicle. While there are three ways to train an AI driver, this project focused on behavioral cloning, in which the AI is trained on data points recorded from a human driver. The project did not rely on complicated hardware; instead, it leaned on the simplicity of the base Donkey Car platform, with a Raspberry Pi as the computer and a single wide-angle camera as the AI's vision. With these simple components, a small-scale self-driving vehicle can be used to explore the capabilities of AI in the real world.

Skills Gained

Through this project, I was able to explore and learn more about the following:

  • Python
  • Unity Engine
  • Training an AI using datasets and TensorFlow
  • Using an embedded system such as the Raspberry Pi to interface with servos and motors
  • Using cloud computing to train an AI remotely
  • Analyzing data loss curves from training an AI

Hardware Prerequisites

  • RC Car (1/16th scale works best) with an ESC and steering servo that have a 3 pin PWM connection
  • USB Power bank to power the Raspberry Pi
  • Raspberry Pi 3 or 4 (must use Pi 4 if planning mobile app integration)
  • MicroSD Card
  • Jumper Wires
  • Wide angle Raspberry Pi Camera
  • Servo Driver PCA 9685
  • Zip ties and screws
  • 3D printed roll cage and platform for components
  • A computer with an Nvidia GPU (preferred)
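The PCA9685 servo driver in the list above turns PWM pulse widths into steering and throttle positions. As a rough sketch of the calibration math involved (the pulse ranges below are illustrative assumptions, not measured values; every servo needs its own calibration):

```python
# Sketch: map a normalized steering command in [-1, 1] to a PCA9685
# 12-bit PWM tick count. Pulse ranges are illustrative assumptions only;
# a real car needs per-servo calibration.

PWM_FREQ_HZ = 60          # typical servo refresh rate
TICKS_PER_CYCLE = 4096    # the PCA9685 is a 12-bit PWM chip

# Assumed calibration: 1.0 ms pulse = full left, 2.0 ms = full right
LEFT_PULSE_MS, RIGHT_PULSE_MS = 1.0, 2.0

def steering_to_ticks(angle):
    """Map a steering angle in [-1, 1] to a PCA9685 'off' tick count."""
    angle = max(-1.0, min(1.0, angle))   # clamp to the valid range
    pulse_ms = LEFT_PULSE_MS + (angle + 1) / 2 * (RIGHT_PULSE_MS - LEFT_PULSE_MS)
    cycle_ms = 1000.0 / PWM_FREQ_HZ      # ~16.67 ms per PWM cycle at 60 Hz
    return round(pulse_ms / cycle_ms * TICKS_PER_CYCLE)

# Centered steering (angle 0) corresponds to a 1.5 ms pulse
center_ticks = steering_to_ticks(0.0)
```

On the actual car, a tick count like this would be written to the steering channel through the PCA9685's Python driver; the sketch only shows the angle-to-pulse arithmetic.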

Getting Started

There are several aspects to getting this project started, as the project has several components. Please refer to the official Donkey Car documentation for more information and to download any needed software.

  • The best way to start this project is to build the hardware, the car itself. This involves removing the chassis cover of the vehicle and attaching all the components securely. As this vehicle is fast, all components need to be held down so they do not fall off or shift during motion. Part of this step also involves setting up the software on the microSD card and inserting it into the Raspberry Pi.
  • Once the vehicle is assembled, the next step is to set up the host computer. This device interfaces with the car to transmit commands and control it. If using a Pi 4, you have the option of using a mobile phone with the DonkeyCar application to set up and control the car.
  • Once everything is set up, it's time to start driving. If using only the virtual environment, be sure to download the virtual Donkey Gym created with the Unity engine. It is recommended that the computer used has an Nvidia graphics card to speed up the machine learning aspects of the project.

The Software

Most of the interfacing with the vehicle uses Python code. The same commands work for both the physical hardware and the simulator. The following are some of the main commands used to interface with the vehicle:

conda activate donkey

This activates the donkey environment for the vehicle. Modifying the car's configuration file allows various parameters of the vehicle to be changed. This file is found in your car's folder, which you must navigate to before running the following commands.
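To give a sense of what the configuration file controls, here is an illustrative excerpt. The parameter names mirror common Donkey Car defaults, but treat them as assumptions and verify them against your own install; the PWM values in particular must be calibrated per car:

```python
# Illustrative excerpt of a Donkey Car configuration file (e.g. myconfig.py).
# Parameter names follow common Donkey Car defaults; values are examples only.

DRIVE_LOOP_HZ = 20            # how many times per second the drive loop runs
CAMERA_TYPE = "PICAM"         # Raspberry Pi camera
IMAGE_W = 160                 # camera image width fed to the model
IMAGE_H = 120                 # camera image height

STEERING_CHANNEL = 1          # PCA9685 channel wired to the steering servo
STEERING_LEFT_PWM = 460       # pulse for full left (needs calibration)
STEERING_RIGHT_PWM = 290      # pulse for full right (needs calibration)

THROTTLE_CHANNEL = 0          # PCA9685 channel wired to the ESC
THROTTLE_FORWARD_PWM = 500    # pulse for max forward (needs calibration)
THROTTLE_STOPPED_PWM = 370    # pulse for neutral
```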

python manage.py drive

This will start up the Donkey Car, whether using the simulator or the physical car. It turns on the car and camera and is ready to record images from the car into data tubs used later to train the AI. At this point, drive the car around the track at least 10 to 15 times to build a good dataset for the AI driver. Once you feel you have enough data points to train the AI, type the following command to run the learning model with TensorFlow. This will create a driving model for the AI. If using a cell phone as the main controller, you can upload your images to a cloud-based trainer from within the application and build a trained AI driver model from that data.
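Each data tub pairs a camera frame with the human driver's inputs at that instant; that pairing is what behavioral cloning learns from. The sketch below shows the kind of record a tub holds; the field names mirror common Donkey conventions but are illustrative, and `write_record` is a hypothetical helper, not part of the Donkey Car API:

```python
import json
import os
import tempfile

# Sketch of a single data-tub record: one JSON entry per camera frame,
# pairing the image with the human driver's steering and throttle.
# Field names are illustrative of Donkey conventions, not guaranteed.

def write_record(tub_dir, index, angle, throttle):
    """Hypothetical helper: write one frame's record to the tub folder."""
    record = {
        "cam/image_array": f"{index}_cam_image_array_.jpg",  # frame filename
        "user/angle": angle,        # steering in [-1, 1] from the human driver
        "user/throttle": throttle,  # throttle in [-1, 1]
        "user/mode": "user",        # recorded while a human was driving
    }
    path = os.path.join(tub_dir, f"record_{index}.json")
    with open(path, "w") as f:
        json.dump(record, f)
    return path

# Write and read back one record in a temporary "tub"
tub = tempfile.mkdtemp()
path = write_record(tub, 0, angle=-0.25, throttle=0.5)
with open(path) as f:
    loaded = json.load(f)
```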

python manage.py train --model models/mypilot.h5
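Under the hood, behavioral cloning is supervised regression: the network learns a mapping from camera pixels to the steering the human chose for that frame. As a toy stand-in for the TensorFlow training step (a linear least-squares fit on synthetic data instead of a convolutional network on real frames), just to show the idea:

```python
import numpy as np

# Toy behavioral-cloning sketch: fit a linear map from flattened "camera
# frames" to recorded steering angles. The real Donkey pilot is a
# convolutional network trained with TensorFlow; this only illustrates
# the supervised-regression idea on synthetic data.

rng = np.random.default_rng(0)
n_frames, n_pixels = 200, 64                 # tiny synthetic dataset

true_weights = rng.normal(size=n_pixels)     # the "human policy" to recover
X = rng.normal(size=(n_frames, n_pixels))    # flattened camera frames
y = X @ true_weights                         # steering the human chose

# The "training" step: least-squares fit of weights to the recorded data
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted "pilot" predicts steering for a frame it has never seen
new_frame = rng.normal(size=n_pixels)
predicted_steering = new_frame @ w_hat
```

With enough clean data the fit recovers the driver's policy exactly; real camera data is noisier, which is why many laps of recording help.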

Once this model is built, it is time to see its performance. Upon completion of the build, you should see a model loss chart as seen below. The model loss chart gives you an idea of how well the AI will perform: the more closely the training and validation loss curves track each other, the better the performance you will see from your AI driver model around the track.
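The same "curves in line" test can be stated numerically: if validation loss keeps tracking training loss, the model generalizes; if it flattens or rises while training loss keeps falling, the model is overfitting the recorded laps. A minimal sketch, with made-up loss values for illustration:

```python
# Sketch: reading a model-loss chart numerically. The loss values below
# are made up for illustration, not from a real training run.

good_train = [0.9, 0.5, 0.3, 0.2, 0.15]
good_val   = [1.0, 0.6, 0.35, 0.25, 0.2]   # tracks training loss closely

bad_train  = [0.9, 0.5, 0.3, 0.2, 0.1]
bad_val    = [1.0, 0.7, 0.6, 0.65, 0.8]    # diverges: overfitting

def final_gap(train, val):
    """Gap between validation and training loss at the last epoch."""
    return val[-1] - train[-1]

def is_overfitting(train, val, tol=0.2):
    """Flag a run whose validation loss has pulled away from training loss."""
    return final_gap(train, val) > tol
```

A small final gap corresponds to the "good model" chart; a large, growing gap corresponds to the "bad model" chart whose driver wandered off the track.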

Once you feel you have a good model to test, simply type the following command and select the AI driver to watch your driver take the wheel.

python manage.py drive --model models/mypilot.h5

This will test the model and let you see the trained model's performance as it goes around the track. The AI model from the "bad model" graph was only able to go around the track 2 to 5 times before veering off and losing the track or getting stuck behind a cone, and it was extremely wobbly. The AI model trained with the better data was able to drive continuously for 30 minutes with only minor deviations, which it self-corrected without any human intervention.

Next Steps for this Project

  • Reinforcement Learning:
    • This method proved to need more work and experience than I was capable of implementing in this project. With more time, I would have loved to explore the capabilities of reinforcement learning for an AI driver. Looking through online research, there are numerous approaches to this task, each looking at different elements of what can be accomplished through this learning method. Some focus on creating a smoother driving experience while others are looking to optimize performance and track times.
  • Object recognition and avoidance:
    • Another aspect of Donkey Car to explore is object recognition with the camera, including stop signs, people, and cones/track walls. This would allow for simulation of real-world driving on a smaller scale and help train an AI that could steer around possible obstacles. For a racing environment, this can be applied to creating a vehicle that recognizes other cars and knows how to go around them while remaining within the boundaries of the track. Another possibility is to have the car follow cones or a track wall instead of painted lines on the ground.
  • Real-world implementation:
    • The next step for this specific data is to take the built-up datasets and see how they translate into a real-world Donkey Car driving experience. While I had wanted to test this aspect, it proved difficult due to space constraints, but it could lead to virtual training for AI drivers that can then race on a real-world track.
  • Add more hardware:
    • While the simplicity of only using a Raspberry Pi and camera makes for a great conversation piece, AI hardware has many more components that can be integrated. A second camera, LIDAR, an Intel RealSense, and various other components are possible future additions to this vehicle. There are already instances of people using this expanded hardware to enhance the functionality of this autonomous driving car.


Special thanks to Dr. Jason Isaacs for his advisement and guidance on this project.

Also, special thanks to the Donkey Car Discord community who helped guide me through some difficulties I faced on this project.

Of course, a special thanks to the Donkey Car project founders. This project builds on the amazing and talented works of various hobbyists who have taken Donkey Car from an idea to a widespread community.
