Zin Yamin Tun - Mechanical Engineer - Class of 2024
Lab Description:
The purpose of this lab is to set up and become familiar with the Arduino IDE and the Artemis board.
After downloading the Arduino IDE and the SparkFun Apollo3 board support package, we used the provided example sketches to become
comfortable with programming the board by completing the following tasks.
Designs/Results:
Part 1: Blink it Up!
The first task was to run the Blink sketch from the Basics examples to ensure the built-in LED was working.
The example code uses digitalWrite and delay(1000) to toggle the LED on the board. As shown in the video,
the blue LED blinked at a consistent one-second interval when the code was run.
Part 2: Serial
The second task was to run Example04_Serial from the Artemis examples to ensure that the serial port was
working properly. The example code uses Serial.read to read the serial input and Serial.write to echo
it back to the serial output. In the video, I demonstrated that the board's serial port is functioning properly by
showing that the code echoed back "hello" when I typed it into the Serial Monitor.
Part 3: analogRead Temperature
The third task was to run Example02_AnalogRead from the Artemis examples to ensure that the on-chip ADC on the
Artemis board was working. The example code prints several values computed from the ADC's measured analog voltages, which we used to test
the temperature sensor. Besides the internal die temperature, the code also prints external (counts),
which reads the analog voltage on the selected analog pin, as well as the VCC and VSS voltages. At the beginning of the video, the
temp (counts) reading sits around 33,500 before the board is touched. When the microcontroller on the Artemis board is pressed,
the temp (counts) value gradually rises to around 33,700. This verified that the sensor was functioning properly, because
the increase in temp (counts) corresponds to the increase in the internal die temperature as the board was
heated by my hand.
Part 4: PDM [Pulse Density Modulation]
The fourth task was to run Example1_MicrophoneOutput from the PDM examples to test that the MEMS microphone on
the Artemis board is working. The example configures the PDM microphone based on the
microcontroller and PDM clock speeds, which determine the sampling frequency. The code then performs an FFT to
find the loudest frequency and prints it to the Serial Monitor. As shown in the video, the initial frequency
on the Serial Monitor was about 114 Hz when only ambient noise was present. To test the microphone, I used an
online tone generator on my phone to produce a 440 Hz tone. As we can see in the video, the new loudest
frequency detected by the microphone was 434 Hz, which corresponds to about 98.6% accuracy.
The purpose of this lab is to become familiar with the Bluetooth Module on the Artemis board. Through the Bluetooth stack, communication
between the computer and Artemis board was established, with the computer being the client and the Artemis board being the server.
With the detailed instructions and the knowledgeable course staff's help, I learned to transmit and receive data using Python inside
a Jupyter notebook on the computer end and the Arduino programming language on the Artemis side.
After installing the required software (Python 3, pip, and Virtual Environment), I created a new virtual environment named FastRobots_ble. To use
the Python scripts (or Jupyter notebook), I activated the FastRobots_ble virtual environment and downloaded the required packages using the following
commands.
>> source FastRobots_ble/bin/activate
>> pip install numpy pyyaml colorama nest_asyncio bleak jupyterlab
The subfolders of the Codebase are ble_arduino and ble_python. ble_arduino includes ble_arduino.ino (Arduino code that will be run on the Artemis board) and necessary class definitions. Meanwhile, ble_python includes the Python files such as demo.ipynb, connection.yaml, and other files that are necessary to establish a communication channel through BLE.
I then updated the configuration files by replacing the UUID in the following two lines (the #define in ble_arduino.ino and the ble_service entry in connection.yaml) with my own generated UUID:
#define BLE_UUID_TEST_SERVICE "41fee3c1-dd8f-4a47-8ef0-cccc71b3206d"
ble_service: '41fee3c1-dd8f-4a47-8ef0-cccc71b3206d'
demo.ipynb is the main file for Jupyter Notebook to run commands and set up the Bluetooth connection
between the computer and the board. It includes the ArtemisBLEController Class which provides the member
functions that handle all of the BLE operations used in this lab. In the following video, I ran all the cells inside the demo.ipynb file by default.
The first task is to send an ECHO command with a string value as an argument from the computer to the Artemis board
and receive the augmented string back on the computer. For example, when I sent "HELLoo" as demonstrated in the video, the computer received "Robot says-> HELLoo:)".
The implementation of this task uses the EString class to manipulate the string, specifically the append() function.
The second task is to send a GET_TIME_MILLIS command to the Artemis board and receive a string characteristic such as "T:87373", as shown in the video.
In addition to the functions from the EString class, I used millis() to get the time.
Previously, we had been using the receive_string(uuid) command to receive the data that the Artemis board sends back to the
computer. This can be a problem if the computer does not know when the Artemis board is sending the data. The purpose of
this task is to solve that problem by implementing a notification handler function in demo.ipynb.
When the GET_TIME_MILLIS command was sent, start_notify(uuid, notification_handler) triggered the notification_handler
function. This function then received the string value (a BLEStringCharacteristic) from the Artemis board and stored it in
the global variable timelist.
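As a rough sketch (the helper names ble.start_notify, ble.bytearray_to_string, and uuid['RX_STRING'] are assumed here from the ble_python codebase), the handler looks something like this:
# Hypothetical sketch of the notification handler used in demo.ipynb
timelist = []

def notification_handler(uuid, byte_array):
    # Decode the BLEStringCharacteristic sent by the Artemis (e.g. "T:87373") and store it
    msg = ble.bytearray_to_string(byte_array)
    timelist.append(msg)

# Register the handler so it fires whenever the Artemis updates the characteristic
ble.start_notify(ble.uuid['RX_STRING'], notification_handler)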
The fourth task is to send a GET_TEMP_5s command, which sends an array of five timestamped internal die temperature readings,
one per second, such as "T:06050|C:28.545|T:07084|C:28.537" (and so on). The implementation of this code used the EString class
and a for-loop with a one-second delay.
The fifth task is to send a GET_TEMP_5s_RAPID command, which sends five seconds worth of rapidly sampled temperature data.
Due to the characteristic size limit, the implementation of this code differs from the previous task. For GET_TEMP_5s,
I appended each reading to the EString directly. GET_TEMP_5s_RAPID, on the other hand, was implemented using a while-loop with an if-else condition:
while the elapsed time is less than 5 seconds, the readings are stored in an array; afterwards, the readings are appended to an EString
five at a time and sent back to the computer in chunks until all of the data has been sent. The chunks were then processed by
notification_handler_temp in the Python code and stored in the global variable temprapid_list. As a result, GET_TEMP_5s_RAPID was able
to rapidly collect and send 250 readings over 5 seconds.
Working with strings introduces significant latency due to the large number of processor cycles required to construct a string
message. This limitation becomes a problem when a task requires relaying data at a high rate. In these situations,
communicating results in chunks, by storing and relaying data at different times, is necessary to accomplish the task.
The physical memory limitation of the Artemis board is discussed in the following passage.
The Artemis board has 384 kB (= 384,000 bytes) of RAM. If we are sending 16-bit (2-byte) values at 150 Hz for 5 seconds, we will
be sending 1,500 bytes of data. Therefore, the board can store 5 seconds worth of data at this rate 256 times before running out of memory.
In this lab, we set up and equipped the robot with two Time-of-Flight (TOF) sensors. These sensors characterize the capabilities of the robot in two ways. First, the faster the sensors can sample, the more the robot can trust sensor readings, and the faster the robot can drive. Second, the more accurate the sample readings, the more the robot behaves as we expect, and the better control of the robot. Therefore, we explored the limitations of the speed and accuracy of the data that the sensors collect.
According to the datasheet of the TOF sensor, the I2C serial bus uses a sensor address
(0x52) that is hardwired on the board. This is a problem when we need to use
multiple sensors together, since the Artemis would not be able to distinguish them. Since we are
using two TOF sensors in this lab, we need to either change the address programmatically (while powered)
or enable the two sensors separately through their shutdown pins.
The first method, changing the address programmatically (while powered), is to change the I2C address of one of the TOF
sensors while the other is shut down using its XSHUT pin. This way the computer can distinguish between
the data received from the two sensors (which now have different I2C addresses). This method allows us to communicate with both sensors
simultaneously without having to shut down either one. However, since powering off a sensor resets its address
to the default, this method requires us to change the address every time the sensors are powered on.
The second method, enabling the two sensors separately through the shutdown pins, is to pull down the shutdown pin of one sensor
whenever the other sensor is reading data (since both use the same I2C address). This method saves power
by turning off one of the sensors while it is not in use. But since we need to ensure that only one
sensor is enabled at any given time, additional logic has to be added to the program. This can also introduce
delays due to the extra code that enables and disables the sensors.
After discussing with the course staff, I have decided to attach my TOF sensors one at the front and the other one at the side.
This placement is more advantageous for mapping than placing both sensors at the front or both on the side. By attaching
one sensor to the side of the robot, I am able to gather more information about the robot's position. Similarly,
by attaching the second sensor at the front, the robot receives sensor readings when there is something in front of it
rather than randomly bumping into things.
The disadvantage of this placement is that it has a slightly higher ranging error than placing both sensors
at the front. Additionally, since it only covers one side of the robot, it could miss obstacles on the
side without a TOF sensor.
Attaching the TOF sensors to the QWIIC breakout board is straightforward. We cut one end off a QWIIC cable and soldered the exposed wires to the TOF sensor, matching the wire colors shown in the following table.
When I connected one TOF sensor and ran the Example05_Wire_I2C sketch, the code printed the following line in the Serial Monitor.
It printed 0x29 as the I2C address rather than 0x52. This is because the 8-bit address 0x52 includes the read/write
bit as its least significant bit; once that bit is dropped, the 7-bit device address is 0x52 >> 1 = 0x29.
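As a quick sanity check of that shift (a one-line Python example, not part of the lab code):
hex(0x52 >> 1)   # drops the R/W bit: returns '0x29'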
From the data sheet, the TOF sensor can work in three different modes as shown in the table below.
While the long distance mode covers a longer ranging distance, the short distance mode performs better at shorter
range under different ambient lighting conditions (light or dark).
For our robot, the short mode might be the best choice to start with, because the long mode
is sensitive to ambient lighting conditions (especially high brightness). If the layout
of the potential obstacles requires us to work with longer distances, we can switch to the medium or long mode
depending on the maximum required ranging distance.
To test the accuracy and repeatability of the sensor in short mode, I took 20 measurements
at every 100 mm from 100 mm to 500 mm using the SparkFun Example1_ReadDistance code. The graph below
plots the mean and standard deviation at each distance for (1) with StopRanging, (2) without StopRanging,
and (3) just reading.
From the data, the measurements in short mode have both high accuracy (mean ≈ actual distance) and high precision
(low standard deviation). The average ranging time was around 98.9 ms with StopRanging, 96.5 ms
without StopRanging, and 92.3 ms for just reading.
To read distances from both sensors, I used the address-change method to
avoid the extra delays and implementation complications. To change the address,
I shut down one of the sensors in setup() by asserting its XSHUT pin. Then I changed the address
of the other sensor to 0x32. Finally, I turned the first sensor back on by releasing XSHUT.
XSHUT is wired to pin 8 on the Artemis board: driving the pin low asserts XSHUT (shutting the sensor down) and
driving it high releases it.
In the following video, I demonstrated that the two sensors were working in parallel properly by
testing (a) both at 300 mm, (b) one at 150 mm and one at 300 mm, and (c) both at 150 mm.
I modified the SparkFun Example1_ReadDistance code to print the Artemis clock to
the Serial Monitor continuously and as fast as possible, and to print the data from the sensors only when it is
available.
To analyze the sensor speed, I then modified the code to instead print the following information to the Serial Monitor
using the code below: current time, ranging time, distance from sensor 1, and distance from sensor 2.
Arduino Code:
From the Serial Monitor output shown below, the ranging time for the sensors ranges from 79 to 98 ms.
The ranging time can be lowered by decreasing the collection time given to the sensor, with a trade-off in reading accuracy.
Serial Monitor Output:
Due to the characteristic size limit, relaying all of the data at once is impossible. Therefore,
I had the two sensors record data at one time and sent the data over Bluetooth to my computer
at a different time. In the following video, I demonstrated this by sending the Artemis clock whenever the
sensors are not ready, and relaying the distance readings otherwise.
For the last task, I revised the previous code to produce a Time vs. Distance graph of the data sent over Bluetooth.
In the following video, I demonstrated running the Python code to plot the data.
Output Graph:
In this lab we set up and equipped the Inertial Measurement Unit (IMU) sensor on our robot. We then ran the Artemis and sensors from a battery and recorded a stunt on the RC robot.
In the following video, I tested that the IMU sensor is working by running the SparkFun
Example1_Basics code. I demonstrated accelerating and rotating the IMU along the x, y, and z axes
and showed that the readings reflect the movement of the IMU.
AD0_VAL is the value of the last bit of the I2C address (similar to the TOF sensors).
While we were working with the TOF sensors, the I2C addresses were the same for both sensors,
which forced us to either change one of the addresses programmatically or enable only one sensor at a time. For the IMU,
by simply changing AD0_VAL we can use multiple IMU sensors at the same time. By default it is set to 1,
which corresponds to I2C address 1101001; AD0_VAL 0 corresponds to I2C address 1101000.
Accelerometers measure acceleration along the x, y, and z axes, which includes the reaction to gravity, so they can also gauge the orientation
of a stationary object relative to the Earth's surface. Gyroscopes measure the rate of rotation around a particular axis, which can be integrated
to determine the object's orientation.
As we saw in the video, when I simply held the IMU, the accelerations in the x and y directions were close to zero while the acceleration in the z direction was about 10 m/s²
(due to gravity). When I accelerated the IMU in the z direction, the data could not distinguish the acceleration due to my hand
from gravity. This could be a problem if we use accelerometer data to determine orientation in situations where the measured gravitational acceleration is not constant (e.g., an aircraft).
I also used the Arduino Serial Plotter to plot the output as shown below.
Acceleration (x-y-z) of Stationary IMU:
As I mentioned above, the accelerations in x and y are near zero while the z direction reads around 10 m/s² due to
gravity.
Gyroscope (x-y-z) of Stationary IMU:
Due to the Serial Plotter's auto-scaling, it is hard to see the two plots on the same scale. Therefore, I also plotted the z-direction acceleration
on the gyroscope data graph. All of the gyroscope readings stay around 0. We can also see that the accelerometer has more noise than the gyroscope, since the accelerometer is more sensitive to vibration and therefore tends to be noisier.
Using the accelerometer's measured accelerations, pitch and roll can be determined using the following equations.
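For reference, the standard accelerometer-only estimates (the exact axis assignment depends on how the IMU is mounted) are:
\theta_{pitch} = \arctan\!\left(\frac{a_x}{a_z}\right)\cdot\frac{180}{\pi}, \qquad \phi_{roll} = \arctan\!\left(\frac{a_y}{a_z}\right)\cdot\frac{180}{\pi}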
In the following video, I showed the pitch [blue line] and roll [orange line] at -90, 0, and 90 degrees. I also added reference lines at -90 [yellow line] and 90
[green line] to keep the Serial Plotter from rescaling. From these observations, the accelerometer's measurements are accurate
on average but very noisy, especially when the object accelerates.
To analyze the noise due to acceleration, I recorded data at a rate of one sample every 4 milliseconds (250 Hz) while tapping the IMU and
analyzed the noise in the frequency domain by performing a Fourier transform in Python. From the Frequency vs. Amplitude graphs
for both pitch and roll, there is no obvious spike at any specific frequency, as one would expect from tapping. This is due to the built-in
low-pass filter in the IMU, which is enabled by default and filtered out the taps.
To prepare for future labs, I implemented a low-pass filter starting with a cutoff frequency of 30 Hz. This frequency can be tuned later based on the actual
noise frequencies observed on the IMU, which depend on the speed of the robot.
The equation I used for the low pass filter is as follows:
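A standard first-order low-pass filter of this form, consistent with the 4 ms sample period and the alpha value computed below, is:
\alpha = \frac{dt}{dt + \frac{1}{2\pi f_c}}, \qquad \theta_{LPF}[n] = \alpha\,\theta_{raw}[n] + (1-\alpha)\,\theta_{LPF}[n-1]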
Using fc = 30 Hz, alpha is calculated to be 0.4299. Similar to the previous video, I demonstrated
the accelerometer output with the LPF on the Serial Plotter and observed that there is noticeably less noise.
Using the gyroscope, pitch, roll, and yaw can be determined using the following equations.
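For reference, these are obtained by integrating the angular rates reported by the gyroscope over each sample period (again, the axis assignment depends on the mounting):
\theta_{pitch} \mathrel{+}= \omega_y\,dt, \qquad \phi_{roll} \mathrel{+}= \omega_x\,dt, \qquad \psi_{yaw} \mathrel{+}= \omega_z\,dt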
In the video below, I demonstrated the Gyroscope data using Serial Plotter at [-90, 0, 90] position for Pitch, Roll, and Yaw.
From the video, we can see that the lines drift away while I try to hold the IMU in the same position. From this
we can conclude that the gyroscope is complementary to the accelerometer: it has low noise, but its calculated angles drift.
From the equations above, the drift in the gyroscope calculation grows as the sampling period dt increases. To compensate for the gyroscope
drift and the accelerometer noise, I implemented a complementary filter using the following equation, where alpha is still 0.4299.
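A common form of this filter (written here with alpha weighting the gyro-integrated term; the weighting convention varies between implementations) is:
\theta_t = \alpha\,(\theta_{t-1} + \dot{\theta}_{gyro}\,dt) + (1-\alpha)\,\theta_{accel}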
After applying the complementary filter, the readings are much less susceptible to random noise and quick vibrations. A smaller alpha
is more susceptible to noise, while a larger alpha is more capable of overcoming the drifting issue; in both cases, however, the complementary
filter solved the drifting problem. Therefore, in future labs alpha < 0.3 (a smaller alpha) is a good design choice.
For the first part, I used the accelerometer and gyroscope complementary filter codes to implement a command that sends the pitch and roll to my computer.
According to the Arduino IDE, the maximum memory available for arrays is 393,216 bytes, assuming there are no other local variables or overhead. The time values are 4 bytes and the floats are 4 bytes,
so each pair of data requires 8 bytes. On average, the TOF sensors update about every 98 milliseconds
(about 10 values per second) and the IMU samples every 8 milliseconds (about 125 values per second). Therefore,
the TOF data requires at most 80 bytes per second and the IMU data requires 1,000 bytes per second. On average, data can be stored for 393,216 / (1,000 + 80) ≈ 364 seconds.
Python Code:
For the second part, I integrated the TOF code from the previous lab to collect data from both the TOF and IMU sensors and
send it over Bluetooth. If one big array is used to collect the data from the Artemis, we need to implement more processing
code on the Python side; but using separate arrays means the TOF and IMU data will be asynchronous, since the two sensors have different update rates.
For this lab, I used one big array to collect the data because (1) our robot might need synchronized TOF and IMU data to
specify its orientation and location at a given time, and (2) it prevents unnecessary implementation errors in the
processing code on the Python side.
I used the 3.7 V 850 mAh battery to power the motors and the 3.7 V 650 mAh battery to power the digital electronics
(Artemis, sensors, etc.), as shown below. A larger mAh rating means the battery has a larger capacity. Because the motors need
to run for a longer period than the electronics, we chose the 850 mAh battery (larger capacity) for the motors, and vice versa.
After mounting the 850 mAh battery on the RC car,
I played around with it to familiarize myself with the car. These are my observations: (1) the car is really fast,
very sensitive, and hard to control; (2) a sudden change in direction while driving at high speed causes it to flip; (3)
rotating at a constant speed makes the car spin in place.
Driving RC car without Artemis:
In the following video, I connected the Artemis to the car as shown below and sent the data over to my
computer. Finally, I plotted the TOF and IMU data vs. time in two separate graphs in Python.
RC car setup:
Driving RC car with Artemis:
In this lab, we disassembled the car and removed the control PCB, then we assembled all the components into the car chassis. The purpose of this lab was to run the car with an open loop control where we execute a pre-programmed series of moves, using the Artemis board and two dual motor drivers.
In this class, we are using the DRV8833 Dual Motor Driver Carrier to drive the motors.
The DRV8833 chip can deliver only a limited amount of current to each motor, which is not
enough supply current for our robots to drive fast. Therefore, we use two DRV8833 boards
to drive the two motors separately and double the supply current. To do so, we connected BIN1 to
AIN1, BIN2 to AIN2, BOUT1 to AOUT1, and BOUT2 to AOUT2, while the rest of the connections remained
the same. Rather than connecting one chip to four PWM pins and two motors, we now have each chip
connected to only two PWM pins and one motor. Since each motor controls the two wheels on one side
of the car, each chip controls either the left or the right wheels. I chose PWM pins 6 and 7 to control
the left motor driver and pins 11 and 12 to control the right motor driver.
We have one 3.7 V 850 mAh battery and two 3.7 V 650 mAh batteries in this lab. In the previous lab, we connected a 3.7 V 650 mAh battery to power the digital electronics (Artemis, sensors, etc.). In this lab, I used the 3.7 V 850 mAh battery to power the motors. This is because the motors draw more current from the battery than the digital electronics. Therefore, we dedicate the battery with the higher energy capacity (850 mAh) to the motors and the one with the lower energy capacity (650 mAh) to the digital electronics.
Before soldering the motor driver chips to the motors, I used the oscilloscope and bench power
supply to test them, as shown below. I set the external power supply to 3 V to power the motor
driver, because the motor driver chip is compatible with both 3 V and 5 V.
To set up, I connected the yellow alligator clip from the external power supply
and the black alligator clip from the oscilloscope to the black wire soldered
to the GND of the motor driver. Then I connected the red alligator clip from the external
power supply to the red wire soldered to the VIN of the motor driver. Finally, I
connected the oscilloscope's probe tip to one of the wires connected to the "OUT"
side of the motor driver. I forgot to take a picture of the setup, but I have attached
the code and oscilloscope output below.
After setting up the power supply and oscilloscope, I used analogWrite() to define a PWM signal.
According to the Arduino PWM output documentation, for an unsigned 8 bit integer PWM_out_level
ranges from 0 to 255. I tested the motor driver with PWM_out_level = 100, which is about a
100/255 = 39.2% duty cycle.
The voltage oscillated in the proper range, as shown on the oscilloscope. I then changed
the power supply to 3.7 V to imitate the battery voltage and set PWM_out_level = 200 (a 200/255 =
78.4% duty cycle). The current drawn by the motor from the power supply was 0.385 A, as shown in the picture
below.
The current required to drive one motor per driver is 0.385 A (385 mA), and we are using the 850 mAh battery
to drive both motors. Therefore, a fully charged battery should ideally last about 850 mAh / (2 × 385 mA) ≈
1.1 hours driving the motors at a 78.4% duty cycle (which is pretty fast).
To drive the car, we use analogWrite() to send a PWM signal from the Artemis to the motor driver, which
ranges from 0 to 255. If we want to increase the motor speed, we increase the PWM_out_level value. However,
the motor speed is not linear in the PWM input. For example, starting the car from rest requires more power
than keeping it moving. Therefore, I explored the lower-limit PWM value at which the motors are able
to start moving forward from rest.
To test this, I started with a lower limit of 25 and slowly increased the PWM_out_level value. As you can see in the video below,
the car started to move at a lower-limit value of 45; for anything below that, the motors only made a struggling sound.
Similarly, I explored the lower-limit PWM input that would rotate the car on the ground
from rest in the same way. Since it takes more power for the car to start rotating than to start moving forward,
I started with a higher PWM value than before. I found that the lower-limit value for the car to start rotating from
rest is 100, as shown in the video below.
When I set the PWM_out_level to 50, we can see that the car drives in a straight line
without deviating from the center.
When I increased the PWM_out_level to 80, the car started to drift toward its left.
Since the car was drifting left, the right motor was stronger than the left and calibration was needed.
To find the calibration factor, I slowly increased the left motor's PWM_out_level until the car could drive straight
without deviating from the line. In the following video, I demonstrated the car driving in a straight
line with a PWM_out_level of 85 for the left motor and 80 for the right motor, which gives a calibration factor
of about 85/80 = 1.0625.
Below is the code and video of the car running the open loop control.
In this lab, we programmed the robot to perform a stunt using closed-loop control. The purpose of this
lab is to get a basic behavior working with either Task A or Task B. This includes setting up the Bluetooth
connection between the Artemis and the laptop so that they can communicate smoothly (e.g., sending commands,
receiving sensor data, etc.).
I chose to perform Task A by implementing a Proportional, Integral, Derivative (PID) controller. For this task,
the robot drives forward as fast as possible (given the quality of my controller) toward a wall, then stops exactly
1 foot (about 300 mm) away from the wall using feedback from the time-of-flight sensor.
In order to tune the PID controller and debug during the lab, we need to send the collected
data from the robot to our laptop.
On the laptop side (using a Jupyter notebook), I implemented code to connect to the Artemis and send different commands.
On the Artemis side (using Arduino), I implemented a command handler that performs the commands
sent by the laptop through a switch-case construct.
getData: Send the data in the arrays back to the laptop
startTask: Set CR = 1; start the front TOF sensor and the PID controller
stopTask: Set CR = 0; stop the front TOF sensor and the PID controller
startRecord: Start storing the TOF sensor data, PWM value, and time stamp in the corresponding arrays every 500 ms (0.5 s)
Additional Notes:
I created startTask and startRecord as separate commands because we usually only need to receive the information
from the portion of time when the robot is performing the task. Therefore, setting the array size to about 500 should be sufficient
to store data for that portion of time.
Since I am doing Task A, I focused on sending the data from only the front TOF sensor. Because the sampling rate is really fast,
sending all of the data over Bluetooth at that rate would be too slow. Therefore, I stored the data on the Artemis
using a separate function every 500 ms (0.5 s), and sent the data when it was needed.
enum CommandTypes
{
  getData,
  startTask,
  stopTask,
  startRecord,
};
//Functions//
/*
 * Take a measurement with the front TOF sensor
 */
void front_tof(){
  distanceSensor1.startRanging();
  if (distanceSensor1.checkForDataReady()){
    dist2 = distanceSensor1.getDistance();
    distanceSensor1.clearInterrupt();
    distanceSensor1.stopRanging();
  }
}
/*
 * Store the data in arrays
 */
void store_data(){
  current_time = millis();
  time_stamp = current_time - start_time;
  // Sample the data every 0.5 second
  if (current_time - prev_time > 500 && dist2 != 0){
    front_dist_arr[count] = dist2;
    side_dist_arr[count] = dist1;
    time_arr[count] = time_stamp;
    speed_arr[count] = Output;
    count += 1;
    if (count >= MAX_ARR_SIZE){
      count = 0;   // wrap around instead of writing past the end of the arrays
    }
    if (size_val != MAX_ARR_SIZE){
      size_val += 1;
    }
    prev_time = current_time;
  }
}
/*
 * Send one entry from the arrays over BLE
 */
void send_data(int idx){
  tx_float_time.writeValue(time_arr[idx]);
  tx_int_fdist.writeValue(front_dist_arr[idx]);
  tx_int_sdist.writeValue(side_dist_arr[idx]);
  tx_int_speed.writeValue(speed_arr[idx]);
}
# robot_control is a Python class that calls the corresponding Arduino commands through its methods
rc = robot_control(ble)

rc.start_task()
rc.start_record()
rc.stop_task()
await asyncio.sleep(1)

# initialize the lists that store the data on the computer side
time_arr = []
front_dist_arr = []
output_arr = []

# store the data on the computer side
for i in range(size_val):
    rc.get_data(i)
    await asyncio.sleep(0.2)
    front_dist_arr.append(front_dist)
    # side_dist_arr.append(side_dist)
    time_arr.append(time)
    output_arr.append(speed)
To perform the task, I initially used only the proportional (P) term of the PID controller. To do so, I calculated the error as
the difference between the distance sensor reading and the target distance.
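With the 300 mm setpoint, the proportional control law is (matching the Arduino code further below):
e_t = d_{ToF} - 300\ \text{mm}, \qquad \text{PWM} = K_p \cdot e_t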
Since I am only using a proportional controller, it is likely that the car will overshoot the desired distance and bump into the wall.
To avoid that, I started with a small value of Kp: (1) to decrease the overshoot and (2) to give the sensor more time to read.
Because I am testing the car at about 1500 mm away from the wall, Kp = 0.07 gives a PWM value of about 84 (PWM value = 0.07 × (1500 − 300) = 84),
which seemed like a reasonable starting point. Using this logic, I tested Kp in the range 0.03 to 0.1 and saw that for Kp below
0.07 the car moves really slowly, and for Kp above 0.07 the car overshoots and bumps into the wall.
//Computing the PWM value using proportional control
void p_pid(){
  double error = dist2 - 300;   // error between the front TOF reading and the 300 mm target
  Output = kp * error;
  drive(Output);
}

void drive(double Output){
  if (Output > 0){
    moveForward(limit_range(Output));
  }
  else if (Output < 0){
    moveBackward(limit_range(-Output));   // pass the magnitude of the output when reversing
  }
  else{
    slowStop();
  }
}
I also implemented the following code to make sure that the calculated PWM value doesn't exceed the lower and upper limits. The PWM signal for analogWrite() ranges from 0 to 255, so I set the upper limit to 255: when the calculated PWM value exceeds 255, I set it equal to 255. For the lower limit, I used the value I found in the previous lab as the minimum PWM for the car to start moving (45) instead of only making a struggling sound: when the calculated PWM value falls below 45, I set it equal to 45.
//Clamping the computed PWM value to the usable range (45 to 255)
int limit_range(double Output){
  if (Output >= 255){
    return 255;
  }
  else if (Output < 45){
    return 45;
  }
  else{
    return (int) Output;
  }
}
The following video demonstrates the proportional controller with Kp = 0.07. We can see that the car overshot the
target distance but didn't bump into the wall. However, it takes a while for the car to settle at the 300 mm distance.
I plotted Distance vs. Time and PWM value vs. Time for Kp = 0.07 using the data collected every 500 ms,
because the loop samples too quickly (about every 4 ms) and sending all of the data via Bluetooth would slow down
the code.
Although the proportional controller works well for Task A, it still has some downsides. By adding the derivative term,
the system reaches steady state faster, which improves the performance of our controller.
For example, if at time A we are at 1500 mm and at time B we are at 1000 mm, the corresponding errors are 1200 and 700, which
give PWM values of 84 and 49 for the proportional controller (PWM value = Kp × error). Intuitively, we know that if we are getting
closer to the target distance, we want the car to slow down so that it doesn't overshoot. To reflect whether the car is
getting closer to or farther from the target, we add the derivative term. In this example, the change in the proportional term is
49 − 84 = −35; multiplying this change by the derivative gain (I used Kd = 0.08 from trial and error) gives the contribution of the
derivative term. Finally, the total PWM value is the sum of the proportional and derivative contributions, which in this case is 49 − (35 × 0.08) = 46.2.
As this example shows, the derivative term helps tune the PWM value by taking into account how the system is changing.
//Computing the PWM value using proportional plus derivative control
void pd_pid(){
  double error = dist2 - SetPoint;
  double derror = error - prev_error;
  Output = kp * error + kd * derror;
  drive(Output);
  prev_error = error;
}
The following video demonstrates the PD controller with Kp = 0.07 and Kd = 0.08. We can see that the car overshoots less
than before and reaches the target distance faster.
I plotted Distance vs. Time and PWM value vs. Time for Kp = 0.07 and Kd = 0.08, which show
the clear improvement in the closed-loop control system discussed above.
Although PD control works pretty well, there is still some inconsistency between
trials. Because the repeatability of the robot is really important, I implemented
the full PID control using all three terms and tuned all of the control gains. In addition to adding
the integral term, in which I sum the error over time, this implementation also takes into account that
the sampling interval differs between iterations. Furthermore, rather than storing the data every 0.5 seconds,
I changed the code to store the data every time a sample is collected. I noticed that
storing the data this way didn't affect the sampling rate much and gives more accurate results.
From trial and error, I chose Kp = 0.07, Kd = 0.002, and Ki = 0.125.
In the following video, I demonstrate three trials of running the PID control. Both in the video and in the graph we can see that all three runs were quite consistent.
By graphing the three overlapping runs, I demonstrated the repeatability of my control system.
The goal of this lab is to implement a Kalman Filter, which helps execute the PID behavior faster. The Kalman Filter supplements the slowly sampled TOF values so that we can speed toward the wall as fast as possible.
To implement the Kalman Filter, we first need to define the A, B, and C matrices to build the state-space model of our system.
These matrices depend on drag and momentum, which we estimated by driving the car toward a wall with a step response.
I chose the step size to be a PWM value of 98, which was the highest PWM value from my third PID trial in Lab 6
(to keep the dynamics similar).
I used the TOF sensor data from the step response to plot the Distance vs. Time graph shown below.
Using the distance and time data, I calculated the velocity and plotted the following figure. From the graph,
I found the steady-state velocity to be 1800 mm/s and the 90% rise time to be 1700 ms
[the time at which the system reaches 90% of the steady-state velocity]. Since my starting
PWM was high and the lab didn't have enough space, the car didn't quite reach a steady-state velocity;
to account for this, I chose the steady-state velocity to be slightly higher than measured.
We now have our A, B, and C matrices, as shown below.
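For reference, a standard formulation of this wall-approach model, with the drag d and momentum m estimated from the step response (assuming the step input is normalized to 1, and with the sign of C depending on whether the state tracks distance to the wall or its negative), is:
d = \frac{1}{\dot{x}_{ss}}, \qquad m = \frac{-d\, t_{0.9}}{\ln(0.1)}, \qquad
A = \begin{bmatrix} 0 & 1 \\ 0 & -d/m \end{bmatrix}, \quad
B = \begin{bmatrix} 0 \\ 1/m \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 0 \end{bmatrix}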
To implement the Kalman Filter, we need to specify the initial state uncertainty and the process and sensor noise covariance matrices.
Initial state uncertainty: Since the robot starts at rest, we have strong confidence in the accuracy
of both the sensor value (initial distance) and the initial velocity. Therefore, as a starting point I set both entries to 5².
Process noise covariance: This is, in a way, a measure of how much trust we put in our model.
Assuming it is off by about ±30 mm, I chose both process noise covariance entries to be 10².
Sensor noise covariance: This is, in a way, a measure of how much trust we put in our sensor. To determine this,
I collected TOF sensor data for about 15 seconds while the robot sat still, then used the data to compute the standard
deviation (1.29397); the covariance is the square of that number.
Using this information, I wrote the Kalman Filter code in the Jupyter Notebook to do a sanity check on the Lab 6
trial 3 data before implementing it on the robot.
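A minimal sketch of the filter I ran in the notebook (variable names are illustrative; Ad and Bd are the discretized A and B matrices, and sig_u and sig_z are the process and sensor noise covariances discussed above):
import numpy as np

def kf(mu, sigma, u, y, Ad, Bd, C, sig_u, sig_z):
    # Prediction step: propagate the state and its uncertainty through the model
    mu_p    = Ad.dot(mu) + Bd.dot(u)
    sigma_p = Ad.dot(sigma).dot(Ad.T) + sig_u

    # Update step: correct the prediction with the latest ToF measurement y
    sigma_m = C.dot(sigma_p).dot(C.T) + sig_z
    kf_gain = sigma_p.dot(C.T).dot(np.linalg.inv(sigma_m))
    mu      = mu_p + kf_gain.dot(y - C.dot(mu_p))
    sigma   = (np.eye(len(mu)) - kf_gain.dot(C)).dot(sigma_p)
    return mu, sigma

# Discretization used before calling kf(): Ad = I + dt*A, Bd = dt*B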
I plotted the Kalman Filter output and the Lab 6 data (TOF data and PWM) in the following figures. As you can see,
the Kalman Filter follows the actual data quite well. This is expected because the chosen sensor noise covariance
for the TOF was about 10 times smaller than the process noise covariance. After playing around with the covariance values,
I saw that the larger the sensor noise covariance is, the more the Kalman Filter strays from the actual data. Since the robot did stop
around 304 mm [by observation], it seems reasonable to trust the TOF sensor data more.
Finally, I integrated the Kalman Filter with the Lab 6 PID code and implemented it on the robot.
In the video, we see that the robot was drifting toward the left because the right motor was stronger
than the left. To make the robot go straight, I tuned up the left motor by multiplying its PWM by a factor
of 1.2. After many rounds of trial and error, I observed that the robot overshot and hit the wall until I increased
the sensor noise covariance. This makes sense because the accuracy of the sensor decreases (i.e., the sensor
noise increases) when the robot is moving at high speed compared to a static robot. Additionally, when the process noise
covariance was low, the robot oscillated back and forth near the target distance. Therefore,
I tuned the process noise covariance until the robot stopped oscillating.
In the following video, I included clips from trials that demonstrate the behaviors discussed above as well as the final successful run.
Looking at the final result, the PWM value increased from 100 to 150 without the robot crashing into the wall.
The purpose of this lab is to combine everything we have done so far to do some fast stunts.
Because I chose Task A for Labs 6-7, my controlled stunt is to start the robot less than 4 m away
from the wall, drive forward fast, perform a flip when it reaches the center of the sticky mat (about 500 mm away from the wall),
and drive back in the direction from which it came.
From Lab 4, I observed that there are multiple ways to perform this flip: drive toward the wall at the highest
speed and, when the robot reaches the desired distance, either (1) stop the car with brake/slow-decay mode or (2) drive the car at the highest
speed in the reverse direction. After trying both methods, I found that the second method works better for me because (1) the
brake stop didn't give enough friction for the car to flip and (2) the second method performed the task faster.
I disengaged the PID position control and used only the Kalman Filter, because for this task we want the robot to
go as fast as possible rather than slow down as it would under PID position control. With this in mind,
I used the following Arduino code to implement this.
After many rounds of trial and error, I noticed that the car was overshooting in almost every trial, so I adjusted the desired distance to 980 mm,
which can be seen in the code above.
This stunt depends on many factors, such as the charge of the motor battery, the landing of the car, the PWM values of the left and right motors, and
the friction between the floor and the wheels. After many trials, I was able to perform three successful stunts, as
demonstrated in the following video.
I have also graphed the TOF data, KF data, and PWM input vs. time. I noticed that when the TOF sensor reading was around 1000 mm, the KF distance was around 500 mm. This matches what I observed: in
my code the car is supposed to flip at a distance of about 950 mm from the wall, but in the video the actual distance
where the car flipped was around 500 mm.
In most trials, the car moved in very unexpected ways. Here are some bloopers of how the car really behaves :)
The purpose of this lab is to map out a static room. To build this map, we place our robot at a couple of marked up locations around the lab, spin the robot while collecting ToF readings. Finally, we merge the data together using transformation matrices and plot them on x-y coordinates to draw up the map.
The quality of the map depends on (1) the number of readings and (2) how consistently they are separated in angular space. Therefore, we need to rotate
the robot at a slow and constant angular speed to complete the task. To do so we can use (1) open-loop control, (2) PID control
on orientation (i.e., integrated gyroscope values), or (3) PID control on angular speed (i.e., the raw gyroscope values). Since I did Task
A for Labs 6-8, I chose (3) PID control on angular speed to control the robot's rotation.
In the previous lab, I configured PID control with the front ToF sensor. In this lab, I used the ComputePID() function I wrote in Lab 6 on the raw gyroscope values (z direction)
from the IMU sensor readings. For the setpoint, I chose 15 degrees per second to make sure the robot rotated slowly enough to get sufficient data.
After collecting some data, I realized that the resulting PWM values rotated the car extremely slowly. To fix this, I changed the baseline PWM value from 0 (default) to 40.
To implement this, I still compute the error on the angular speed as before, but the controller output now adjusts the PWM around the baseline value of 40: if the car rotates slower
than 15 degrees per second, the error pushes the PWM above 40, and if the car rotates faster than 15 degrees per second, the error pushes
the PWM below 40.
The video demonstrates that the car rotates fairly smoothly at a relatively constant speed without shifting too much from
its original position.
After collecting ToF readings from five marked-up locations in the lab, I plotted the ToF output over time in a polar coordinate system.
As we can see in the following graphs, the ToF readings are more accurate at detecting nearby objects and noisier at detecting far
away objects.
After gathering all the data points from each marked-up location, I needed to convert the ToF sensor measurements into the inertial
reference frame of the room and plot them together to produce the final map. Using transformation matrices,
the mapping from the polar-coordinate measurements to Cartesian coordinates is as follows.
Note: My ToF sensor is mounted at the front, and I started the robot in the same orientation (facing toward the window).
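Concretely, with the ToF range r and the robot's yaw theta (integrated from the gyroscope, measured from the starting orientation), and ignoring the small offset of the sensor from the robot's center, each reading maps into the robot's starting frame as:
\begin{bmatrix} x \\ y \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} r \\ 0 \end{bmatrix}
= \begin{bmatrix} r\cos\theta \\ r\sin\theta \end{bmatrix}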
Finally, to map these Cartesian coordinates into the inertial reference frame of the room, I translated each (x, y)
point from the robot's frame to the room's frame by adding the marked-up coordinates of the robot's location.
Since the ToF data are in millimeters, I converted the marked-up coordinates from feet to
millimeters before the transformation (1 foot = 304.8 mm).
Finally, I plotted all of the transformed x, y data points on one map using a Python script and obtained the following
results.
To improve the map, I removed some of the outliers then took the data again at [0,0] location.
Finally, in order to use this map for localization (Labs 10 and 11), I manually estimated the locations of the actual
walls and obstacles in this map. To do so, I observed the general locations of the data points to get the coordinates of
the estimated lines, then graphed the line estimates on top of the map.
To check the accuracy of my map, I used the data from Anya's Lab 9 and observations of the physical lab setup (there were some changes in the setup) to plot the actual map on top of my plot.
The purpose of this lab is to execute grid localization using a Bayes filter on a Python simulation as a preliminary step before deploying it on our robot.
The Bayes Filter algorithm consists of a loop that iterates over the state variable x_t and performs the following
two steps within each iteration. The first is the prediction step, where we
incorporate the control input (movement) data. The second is the update step, where we incorporate
the observation (measurement) data.
To implement the Bayes Filter algorithm, we break it down into multiple functions that calculate its parameters.
The structure of these functions was provided to us as part of this lab (which was really helpful). To explain the Bayes Filter algorithm,
I will go through each of the functions and describe how each parameter is calculated.
The compute_control function is responsible for extracting the control information, which is stored as a tuple with three
components: rotation 1, translation, and rotation 2. We define the output tuple as the control input u, which is calculated from the previous
pose (t−1) and current pose (t) of the robot.
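A sketch of compute_control under these definitions (numpy is used for the trigonometry; mapper.normalize_angle is assumed from the provided simulator code):
import numpy as np

def compute_control(cur_pose, prev_pose):
    # Poses are (x, y, yaw in degrees); return (rot1, trans, rot2)
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]

    heading = np.degrees(np.arctan2(dy, dx))
    delta_rot_1 = mapper.normalize_angle(heading - prev_pose[2])
    delta_trans = np.hypot(dx, dy)
    delta_rot_2 = mapper.normalize_angle(cur_pose[2] - prev_pose[2] - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2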
The odom_motion_model function returns the state transition probability p(x'|x, u), which specifies the
probability that the robot is in the current state given the previous state and the actual control input u.
Using the compute_control function, I compute the control input that would be required to reach a possible current state,
using cur_pose and prev_pose as inputs. Finally, because this is a probabilistic robot, I plug these numbers into a
Gaussian function to determine the state transition probability. To use the Gaussian function, I need to specify mu, x, and
sigma (the spread of the distribution). I used the control needed for the transition as mu, because this is the motion around which
we are evaluating, and the actual control input as x, because we are interested in the probability that the robot made this transition.
Finally, I used the odom_rot_noise and odom_trans_noise variables defined in localization.py as the sigmas. The total probability is the
product of the individual probabilities for delta_rot1, delta_trans, and delta_rot2, under the assumption that they are independent.
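A sketch of odom_motion_model following that description (loc.gaussian, odom_rot_noise, and odom_trans_noise are assumed from the provided localization.py):
def odom_motion_model(cur_pose, prev_pose, u):
    # Control that would be needed to move from prev_pose to this candidate cur_pose
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)

    # Independent Gaussians: evaluate the actual control u against the needed control
    p_rot1  = loc.gaussian(u[0], rot1,  loc.odom_rot_noise)
    p_trans = loc.gaussian(u[1], trans, loc.odom_trans_noise)
    p_rot2  = loc.gaussian(u[2], rot2,  loc.odom_rot_noise)
    return p_rot1 * p_trans * p_rot2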
The prediction_step function returns the prior belief of the robot, bel_bar(x_t), before incorporating the latest measurement z_t,
given cur_odom (the current odometry pose) and prev_odom (the previous odometry pose). First, I call compute_control to compute the
actual control input. Then I wrote a nested loop that iterates over all possible current states and all possible previous states. Within
each iteration, I call odom_motion_model with the current pose and previous pose, which I obtain using the from_map function
from the Mapper class, along with the control input. To compute bel_bar(x_t), I multiply the probability of each transition by the
previous belief in each state and sum the results. Finally, I normalize the result, since it is a probability distribution and should
sum to 1. As we can see, this code is computationally expensive and therefore really slow. To increase efficiency, I added an
extra check so that we only perform the computation when the motion of the robot is not too small; otherwise the computation would only slow down the process.
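A sketch of the prediction step (mapper.from_map and the grid dimensions are assumed from the provided Mapper class; the speed-up shown here is the common variant of skipping cells with negligible prior belief):
def prediction_step(cur_odom, prev_odom):
    u = compute_control(cur_odom, prev_odom)
    bel_bar = np.zeros(loc.bel.shape)

    # Outer loop: every possible previous state; inner loop: every possible current state
    for prev_idx in np.ndindex(loc.bel.shape):
        if loc.bel[prev_idx] < 0.0001:
            continue  # negligible prior belief, skip to save time
        prev_pose = mapper.from_map(*prev_idx)
        for cur_idx in np.ndindex(bel_bar.shape):
            cur_pose = mapper.from_map(*cur_idx)
            bel_bar[cur_idx] += odom_motion_model(cur_pose, prev_pose, u) * loc.bel[prev_idx]

    loc.bel_bar = bel_bar / np.sum(bel_bar)  # normalize the distribution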
The sensor_model function returns the measurement likelihood p(z_t|x_t, m), i.e., the probability of the true observations
(the current measurements) given a candidate robot pose. To access the recorded measurement
values, I call obs_range_data from BaseLocalization. Since each observation consists of 18 individual
measurements, I wrote a for-loop that uses a Gaussian function to calculate the likelihood of each individual measurement
and returns the resulting probability array.
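A sketch of sensor_model (loc.obs_range_data, loc.sensor_sigma, and mapper.OBS_PER_CELL are assumed from the provided code; obs holds the precached views for one candidate cell):
def sensor_model(obs):
    # Likelihood of each of the 18 recorded measurements for this candidate cell
    prob_array = np.zeros(mapper.OBS_PER_CELL)
    for k in range(mapper.OBS_PER_CELL):
        prob_array[k] = loc.gaussian(loc.obs_range_data[k], obs[k], loc.sensor_sigma)
    return prob_array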
The update_step function returns the belief of the robot at the current state, bel(x_t), given all past sensor measurements and all past control inputs. It iterates over all possible states to perform this computation. At each state, the sensor model is computed as a joint probability over the 18 sensor measurements by comparing the precached sensor data for that cell with the latest sensor measurements; this is then multiplied by bel_bar(x_t) from the prediction step. Finally, since this is also a probability distribution, I normalized it so that the total probability is 1.
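A sketch of the update step using the two pieces above (mapper.obs_views holds the precached views for every grid cell; again these names come from the provided code and are assumptions here):
def update_step():
    for idx in np.ndindex(loc.bel.shape):
        # Joint likelihood of the 18 measurements, times the prior belief for this cell
        p = np.prod(sensor_model(mapper.obs_views[idx[0], idx[1], idx[2], :]))
        loc.bel[idx] = p * loc.bel_bar[idx]

    loc.bel = loc.bel / np.sum(loc.bel)  # normalize so the belief sums to 1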
The localization takes about 2 minutes in total, which seems like a reasonable run time. In the video, the green line
represents the ground truth (actual position), the blue line represents the belief (Bayes Filter estimate), and the red line represents
the odometry values. As demonstrated in the video, the trajectory of the blue line stays fairly close to the green line, so we can
conclude that the Bayes Filter is working well. I also observed that the trajectory is better along straight paths than in the corners.
From the statistics of one of the trials, we can see the small error by comparing GT with Belief or by looking at POS ERROR.
We can also conclude that the robot has high confidence in its belief, since the probability at the update step is almost
always close to 1. Finally, the POS ERROR for the angle accumulates to a large value because the ground-truth angle
is not normalized.
The purpose of this lab is to perform localization with the provided Bayes Filter code on the actual robot to see its performance in real-world systems.
To ensure the effectiveness of the provided code, I ran it on the simulator to test its performance
before deploying it on a physical robot. As we can see, the belief of the robot (blue) is fairly close
to the ground truth of the robot (green). Thus, we can proceed to implement the code on a physical robot.
In this task, we are only executing the update step of the Bayes filter algorithm using Time-of-Flight (ToF) sensor
measurements to estimate the robot's position. This is because the robot's motion is noisy, and performing the prediction
step would likely decrease the accuracy of the algorithm.
The prediction step involves estimating the robot's position based on its previous position and the control inputs,
which can be affected by noise and uncertainty. By excluding the prediction step and only using the ToF sensor measurements
in the update step, we can reduce the impact of the noisy motion and improve the performance rate of the algorithm.
However, it's important to note that this approach may not be suitable for all localization scenarios and may result in a
less accurate estimation of the robot's position. In some cases, it may be necessary to perform both the prediction and
update steps to achieve the desired level of accuracy in localization.
To run the localization algorithm on the physical robot, we need to implement the perform_observation_loop() method
of the RealRobot class, which is required by the provided localization code. This method should output numpy column arrays
containing 18 ToF sensor readings and the corresponding angles, taken at 20-degree intervals in the counterclockwise direction
of rotation.
To integrate the code from Lab 9 with the localization code, some modifications were needed so that only the
necessary 18 ToF sensor readings are recorded and sent back after one full rotation has been completed. To achieve this,
I used a PID controller on the angular speed of the robot during the rotation, with gains
Kp = 3.5, Ki = 0.25, and Kd = 0.002 to ensure a stable and constant rotation. While the robot rotates, I check
the current angle and record the sensor measurements into arrays each time the robot reaches a 20-degree increment of
rotation. When the full rotation is complete, the two arrays containing the sensor readings and angles are sent back
to the computer as the output of the perform_observation_loop() method.
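A sketch of the method (the command names, the global lists filled by the notification handler, and the sleep durations are assumptions here, mirroring the structure of the earlier labs; the provided localization code expects the ranges in meters):
import numpy as np
import asyncio

async def perform_observation_loop(self, rot_vel=120):
    # Start the PID-controlled rotation and the 20-degree-increment data collection
    ble.send_command(CMD.START_MAPPING, "")
    await asyncio.sleep(30)                      # wait for one full rotation (assumed duration)

    # Ask the Artemis to relay the stored readings and let the notification handler fill the lists
    ble.send_command(CMD.SEND_MAPPING_DATA, "")
    await asyncio.sleep(5)

    sensor_ranges   = np.array(tof_readings)[np.newaxis].T / 1000.0   # mm -> m, column array
    sensor_bearings = np.array(angle_readings)[np.newaxis].T
    return sensor_ranges, sensor_bearings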
I have plotted the belief (blue dot) and ground truth (green dot) data points at the following locations two times.
[5,3]
In the first set of data, the initial belief of the robot's location matches the ground truth,
which is why the blue dot representing the robot's estimated position is located directly under
the green dot representing the true position. This indicates that the localization algorithm is performing
accurately and providing reliable estimates of the robot's location. However, in the second set of data,
the initial belief of the robot's location is one grid below the ground truth.
[5,-3]
In the first set of data, the initial belief of the robot's location was one grid to the left and diagonally
below the ground truth position. In the second set of data, the initial belief was one grid to the right and
diagonally below the ground truth position.
[0,3]
In the first set of data, the initial belief of the robot's location matches the ground truth. In the second set
of data, the initial belief of the robot's location is one grid to the right and diagonally above the ground truth position.
[-3,-2]
In the first set of data, the initial belief of the robot's location matches the ground truth. In the second set of
data, the initial belief of the robot's location is two grids to the left and diagonally below the ground truth position.
Upon analyzing the above diagrams, it appears that the overall localization performs fairly well. Specifically, the robot's belief was accurate in three of the trials, and in four trials, it was only one grid away from the ground truth. In only one trial, the robot's belief was two grids away from the ground truth. One possible explanation for the inaccurate belief could be the limited range of the ToF sensor. As we learned in Lab 9, the accuracy of the ToF sensor decreases and becomes noisy when it detects objects that are far away. Another reason could be that the localization algorithm is less accurate in locations where the surroundings are more symmetrical. In my observations, the localization produced better results when the surroundings were not very symmetrical. Finally, it's also possible that in the second set of trials, the wheels caught dust, leading to more drift in the rotation and an inaccurate belief about the robot's location at a different grid location.
In this lab, we tie together everything we have learned in Labs 1-11. Our task is to navigate through a set of waypoints in the environment as quickly and as accurately as possible.
While I found this lab to be enjoyable, it also proved to be quite frustrating due to the consistent failures of my Bluetooth connection. This issue caused disruptions throughout the entire process, leading me to have to re-solder the Artemis module twice and the battery three times. Not only did these setbacks consume a significant amount of time, but they also had a direct impact on my ability to complete the debugging process for the localization implementation. Initially, I had planned to implement a localization method as I believed it would provide higher accuracy and robustness compared to open loop and PID control. However, the persistent issues with the Artemis Bluetooth connection and the multiple failures that occurred forced me to adapt my approach. I ended up collaborating with Tiffany Guo to implement an open loop control strategy instead. This report will outline my initial attempt at implementing the localization method, the challenges I faced, the potential solution, and the final implementation of the open loop control.
To implement the localization method, I planned to separate the tasks between the Python and Arduino sides. Here is an overview of the tasks for each side:
Calculate Turn Angle and Distance: Using the provided waypoints, I intended to calculate the angle the robot needed to turn
from its current position to face the next waypoint. Additionally, I planned to determine the distance the robot needed to
travel from its current position to reach the next waypoint.
Estimate Current Position: Utilizing the localization data, my goal was to estimate the robot's current position. This estimation
would allow me to determine whether the robot had reached the next waypoint or not. If the robot successfully reached the current
goal waypoint, it would proceed to the next waypoint in the sequence. However, if the robot did not reach the current goal,
it would make attempts to reach the same waypoint again.
Depending on the task received from the Python side, the Arduino would perform different actions:
Perform Localization: If the task sent from the computer was to perform localization,
the Arduino would execute the necessary localization algorithms and send the resulting data back to the Python side.
Move to Next Goal Position: If the task sent from the computer was to move to the next goal position,
the Arduino would initiate the required movements to navigate towards the desired waypoint.
In addition to the Bluetooth failure, another challenge I encountered was the limitation of the Time-of-Flight (ToF) sensor. Once the Python side calculated the required distance for the robot to travel, I attempted to use a simple proportional term in a PID control scheme to reach the desired distance. However, a major challenge arose when most of the waypoints had obstacles that fell outside the range of the ToF sensor. As a result, the robot unintentionally collided with walls, leading to unexpected outcomes.
To address this issue, I consulted with other students who were also using PID control in the lab. They suggested continuously adjusting the robot's orientation at each waypoint, ensuring that it always faced a wall within the sensor's range. While this approach helped to avoid collisions, it could introduce further complexity if I were to incorporate localization.
Another approach I considered involved calculating the time it took for the robot to travel a certain distance at a specific PWM value. Based on my observation of the drive from the first to the second waypoint, which took approximately 1300 milliseconds, I established a conversion to estimate the distance at each subsequent point. However, before I could test this approach, my Artemis failed, preventing me from evaluating its effectiveness.
To implement the open-loop control, we utilized a switch statement with the expression being the variable "state."
This allowed us to determine the appropriate action for each case. Within each case, the robot was instructed to
either move forward or turn using the angle_turn() function.
A more detailed explanation of this code can be found on Tiffany's website.
Although I was unable to complete the debugging process for the localization method, I had a positive experience working on the open loop control with Tiffany. I would like to express my gratitude to Tiffany for her collaboration and willingness to let me join her in completing the last lab. Additionally, I would like to extend my appreciation to all the course staff members for their support throughout the entire process, with a special thank you to Anya for her assistance with soldering and patiently addressing my questions.
Hello! My name is Zin Yamin Tun. I am an MAE undergraduate student at Cornell University. This is my portfolio of projects for ECE 4160 / MAE 4190: Fast Robots. This course is about designing a fast autonomous car and exploring dynamic behaviors to introduce systems-level design and implementation of dynamic autonomous robots.