Posted: 16 May 2018
This robot can be controlled over the internet and is accessible from any handheld device. It carries a sonar sensor to detect obstacles, and the user sees a live feed from the robot's camera. The application lets the user steer the robot remotely. It can be used for home security or even as a pet monitoring device.
For the project, we used a wide range of technologies. The pet monitoring bot encompasses aspects of hardware, networking, programming languages, and the web.
The finished robot is composed of these components: a 2-wheel-drive motor chassis kit, a 5 MP camera, resistors (1 kΩ and 2 kΩ), wires and cables, a rechargeable battery pack, a 9 V battery, a breadboard, a motor driver, and of course the Raspberry Pi.
The chassis itself needs little explanation; it is simply a frame on which to mount the Raspberry Pi. One benefit is that it came with a base and a roof, so we placed the breadboard on the lower level and the Raspberry Pi on the roof, avoiding jamming everything together.
As mentioned before, we wanted a movable bot that can capture footage of what's in front of it, so a camera was needed. We chose the 5 MP camera for Raspberry Pi because the Pi has a dedicated CSI camera port, meaning the camera connects directly to the board and saves a USB port. The 5 MP sensor captures stills up to 2592 × 1944 pixels and video up to 1080p at 30 fps, producing the high-quality live feed we were looking for.
We also wanted the bot to avoid incoming obstacles automatically, even while being driven. This acts as a fail-safe in case the operator makes a mistake, preventing damage to the bot. An ultrasonic sensor (HC-SR04) is added to the robot to measure the distance between itself and the object in front of it. By modifying existing Python code, we used this reading to decide when the bot should change course.
Ultrasonic sensors work by emitting a burst of sound waves and timing how long the echo takes to bounce back, from which the distance can be calculated. The sensor required two resistors (one 1 kΩ, one 2 kΩ) to function safely with the Pi: the HC-SR04 is powered at 5 V and its echo pin outputs a 5 V signal, but the Pi's GPIO inputs only tolerate 3.3 V, so the two resistors form a voltage divider that drops the echo signal to a safe level. There are limitations to using an ultrasonic sensor; the shape, size, and angle of the object can affect the accuracy of the readings.
Objects that absorb sound will also drastically reduce its operating effectiveness. An alternative way to measure distance is a proximity sensor, which emits an electromagnetic field or beam. But since our bot has no need to detect objects far away, the ultrasonic sensor suffices.
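As a sanity check on the resistor pairing described above, the standard voltage-divider formula shows why a 1 kΩ/2 kΩ pair brings the 5 V echo signal down to roughly 3.3 V (a quick sketch; the function name is ours):

```python
def divider_out(v_in, r1, r2):
    """Output of a voltage divider: V_out = V_in * R2 / (R1 + R2)."""
    return v_in * r2 / (r1 + r2)

# 5 V echo signal through a 1 kOhm / 2 kOhm divider
print(divider_out(5.0, 1000, 2000))  # ~3.33 V, safe for the Pi's GPIO
```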
Since our bot is remote-controlled, we cannot use a power cable, as it would limit the operating range. We therefore acquired a portable power bank to mount on the chassis; the power bank is rechargeable and powers the Raspberry Pi. Its limitation is that it is simply not strong enough to power both the computer and the motors, and that is where the additional battery comes in. We could not fit another power bank onto the chassis, so we used a smaller disposable 9 V battery connected to the breadboard, which feeds the motor driver and, through it, the DC motors. All the components mentioned above have pins that need to be plugged in.
The breadboard is used as the frame for our electric circuit; it is a plastic block with a grid of holes backed by metal contacts. These holes act as contact points for attaching electronic components to create a connection or circuit.
It is solderless, meaning pins can be repositioned at any time and the board is reusable. Inside the board, metal clips connect each row of five holes. The two outer columns marked with the + and − symbols are power rails, where the clips run along the column in groups of five, as seen in the picture to the right.
The power rails are meant to connect to power supplies; in our case, this is where the battery plugs in. Understanding this layout is crucial, because the circuit will not work if even one pin sits in the wrong row or column, and plugging pins into the wrong power-rail column can burn out the circuit.
The reason we implemented the breadboard is that there are too many pins to fit on the Raspberry Pi's GPIO header. For example, we had already occupied the two 5 V GPIO pins, so the breadboard creates more room for connections. Another important reason is that we used a motor driver (IC) that needs to sit in the breadboard. The motor driver we chose is the L293D, an integrated chip that lets a DC motor run in either direction.
This motor driver has 16 pins, enough to control two DC motors, matching our requirements. It uses the H-bridge concept, a circuit that allows voltage to flow through a load in either direction. The L293D chip contains two H-bridge circuits, so it can rotate the two motors independently.
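The behaviour of one H-bridge can be summarised as a small truth table mapping the two input pins to a motor action (a sketch of the standard L293D behaviour, assuming the corresponding enable pin is held HIGH):

```python
# One L293D H-bridge: (IN1, IN2) input levels -> motor behaviour,
# assuming the matching enable pin is held HIGH.
H_BRIDGE = {
    (1, 0): "forward",  # current flows one way through the motor
    (0, 1): "reverse",  # current flows the other way
    (0, 0): "coast",    # motor terminals effectively floating
    (1, 1): "brake",    # both terminals at the same level
}

print(H_BRIDGE[(1, 0)])  # forward
```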
Here is a sketch of our breadboard connections:
The Raspberry Pi runs the scripts needed for the bot to function and ties all the components together. We only used the camera socket and the general-purpose input/output (GPIO) header. Once the remote server is set up, the serial connection between the bot and a computer can be removed, allowing the bot to be controlled entirely over the network.
Python was used for the development of the application. The motors and sensors are connected via the GPIO expansion header on the Raspberry Pi and accessed from Python through the RPi.GPIO library. The script sets up the pins and controls each component by driving current to the appropriate pin when needed.
Once set up, the motors are controlled by a few functions. Each motor needs a signal on its enable pin; in our implementation, we use pins 22 and 29 for the enable pins. To move forward, we send a HIGH (1) value to the first input pin and a LOW (0) value to the second. Steering left or right is done by stopping the left or right motor, respectively.
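The steering scheme above can be sketched as a pure function that maps a drive command to the (IN1, IN2) levels for each motor; the actual RPi.GPIO calls that write these levels are described only in the comments, and the function name is ours:

```python
def motor_states(command):
    """Map a drive command to ((left IN1, IN2), (right IN1, IN2)) levels.

    HIGH = 1, LOW = 0. In the real script these values are written to the
    L293D input pins with RPi.GPIO, with the enable pins (22 and 29 in our
    build) held HIGH.
    """
    forward = (1, 0)  # HIGH on the first input pin, LOW on the second
    stop = (0, 0)
    if command == "forward":
        return forward, forward
    if command == "left":   # stop the left motor, keep the right running
        return stop, forward
    if command == "right":  # stop the right motor, keep the left running
        return forward, stop
    return stop, stop       # anything else: halt both motors

print(motor_states("left"))  # ((0, 0), (1, 0))
```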
Similarly, the sonar sensor is connected through Python's GPIO library. Echo and Trigger are each assigned a pin, with Echo set as an input and Trigger as an output. To measure distance, we pulse the trigger, time how long the echo pin stays high, and apply Distance = (Speed of sound × Time) / 2, halving because the pulse travels to the object and back. A short sleep between readings lets the sensor settle.
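The timing-to-distance step can be sketched as a small function (the constant assumes air at roughly 20 °C; the GPIO timing code that measures the echo pulse is omitted):

```python
SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s in air at roughly 20 degrees C

def distance_cm(echo_high_seconds):
    """Convert the time the echo pin stays HIGH into a distance in cm.

    The ultrasonic pulse travels to the obstacle and back, so the
    round-trip time is halved.
    """
    return SPEED_OF_SOUND_CM_S * echo_high_seconds / 2

print(distance_cm(0.01))  # 171.5 cm for a 10 ms echo
```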
Our main objective is to access the bot from handheld devices, so a server was needed to run the application. We used an Apache server to host all the pages, which we can reach via the Pi's IP address. The Apache server runs the PHP pages and the RPi Cam Web Interface.
RPi Cam Web Interface/ PHP and Python Connection
The Raspberry Pi camera is connected to the RPi Cam Web Interface, which runs on the Apache server. This connects the camera to the front end of our system, alongside the controls. The web interface takes image snapshots and plays them back sequentially, producing the video output for the application.
In conclusion, we ran many experiments over the course of the project and came up with a minimum viable product. However, the chassis and the wheels could be made better; many setbacks stemmed from the weight of the device.
This project was done for a Queensland University of Technology unit.