IC4U ver.02 - Robot Guide Dog for Visually Impaired People

Since I enjoyed working on the robot, I decided to use IC4U as a development platform for myself and began work on the second version: IC4U2. Because I wanted to add AI and ML applications such as object detection and voice feedback, I needed more processing power, so I chose a Raspberry Pi 3B+.

I installed a Google AIY Voice Kit on IC4U2. The voice kit lets the visually impaired person give voice commands directly to the robot instead of through a mobile app. I also added kneecaps to this version's legs and installed servo motors so that IC4U2 can sit down, stand up, and lie down.
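
As an illustration, the loop below sketches how the voice kit and the knee servos could fit together, assuming the AIY Voice Kit v2 Python API (`CloudSpeechClient`, `aiy.voice.tts`) and the gpiozero library; the GPIO pins, pose angles, and command phrases are placeholders, not the robot's actual values.

```python
# A minimal sketch of the voice-command loop, assuming the AIY Voice Kit v2
# Python API and gpiozero for the knee servos. Pins and angles are
# illustrative placeholders.
from aiy.cloudspeech import CloudSpeechClient
from aiy.voice import tts
from gpiozero import AngularServo

# Hypothetical GPIO pins for the two knee servos.
left_knee = AngularServo(17, min_angle=0, max_angle=180)
right_knee = AngularServo(27, min_angle=0, max_angle=180)

POSES = {            # illustrative joint angles for each command
    'sit': 45,
    'stand up': 90,
    'lie down': 0,
}

def set_pose(angle):
    left_knee.angle = angle
    right_knee.angle = angle

client = CloudSpeechClient()
while True:
    # Listen for one utterance; hint_phrases biases recognition
    # toward the commands we expect.
    text = client.recognize(hint_phrases=list(POSES))
    if text is None:
        continue
    command = text.lower().strip()
    if command in POSES:
        tts.say('Okay, %s.' % command)  # spoken confirmation
        set_pose(POSES[command])
```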

In this version, I used artificial intelligence and machine learning rather than an ultrasonic sensor to detect objects. IC4U2 processes images using a Raspberry Pi Camera, TensorFlow (a machine-learning platform), OpenCV (an image-processing library), and the MS COCO dataset. For example, IC4U2 stops when it sees a stop sign and tells the visually impaired person why it is stopping.
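
The loop below is a minimal sketch of such a detection pipeline, assuming a TensorFlow Lite SSD-MobileNet model pre-trained on MS COCO; the model and label file names are placeholders, and the output tensor ordering is the one used by the standard COCO SSD TFLite model.

```python
# A minimal sketch of a stop-sign detection loop, assuming a TFLite
# SSD-MobileNet COCO model ('detect.tflite') and a COCO label file;
# both file names are placeholders.
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height, width = input_details[0]['shape'][1:3]

labels = [line.strip() for line in open('coco_labels.txt')]

cap = cv2.VideoCapture(0)  # Pi Camera exposed via the V4L2 driver
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert BGR to RGB and resize to the model's input size.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (width, height))
    interpreter.set_tensor(input_details[0]['index'],
                           np.expand_dims(resized, axis=0))
    interpreter.invoke()
    # Standard COCO SSD TFLite outputs: [boxes, classes, scores, count].
    classes = interpreter.get_tensor(output_details[1]['index'])[0]
    scores = interpreter.get_tensor(output_details[2]['index'])[0]
    for cls, score in zip(classes, scores):
        if score > 0.5 and labels[int(cls)] == 'stop sign':
            print('Stop sign detected: halt and announce why.')
```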

IC4U2 can hold a conversation with the visually impaired person and others using the Dialogflow platform. I also integrated Google Maps so that the visually impaired person can get directions from IC4U2.
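
The snippet below sketches how these two services could be wired up, assuming the official google-cloud-dialogflow and googlemaps Python client libraries; the project ID, session ID, API key, and helper names are hypothetical.

```python
# A minimal sketch of the conversation and directions glue, assuming the
# google-cloud-dialogflow and googlemaps client libraries. All IDs, keys,
# and addresses are placeholders.
import googlemaps
from google.cloud import dialogflow

def ask_ic4u(project_id, session_id, text):
    """Send one utterance to a Dialogflow agent and return its reply."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    text_input = dialogflow.TextInput(text=text, language_code='en')
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(
        request={'session': session, 'query_input': query_input})
    return response.query_result.fulfillment_text

def walking_directions(api_key, origin, destination):
    """Fetch step-by-step walking directions from Google Maps."""
    gmaps = googlemaps.Client(key=api_key)
    routes = gmaps.directions(origin, destination, mode='walking')
    steps = routes[0]['legs'][0]['steps']
    return [step['html_instructions'] for step in steps]
```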

IC4U ver.02 Videos