Robotics and Mechatronics

In grad school, I started out going to lectures and demos about robots. Then I decided I wanted to make some robots myself.  Like any engineering project, I thought it was best to start simple and keep building on what I know: I like to code in C++ and Python, I know some electronics and machine learning and want to learn more, and I love making puppets.  So why not mash them all together?

First I needed to brush up on my mechatronics skills.  I worked my way through an intro Arduino kit and reminded myself of what I learned in electronics lab.  Although I didn't design these projects myself, I'm posting them here because building them taught me how to put some cool pieces together, and I then started applying the same concepts to more advanced projects.

Basic Arduino learning projects

 

… and then came Raspberry Pi

Arduinos are great for timing and pretty simple to use.  But I wanted to try Python projects with optical character recognition, text-to-speech, and speech recognition, so I dove into the world of Raspberry Pi.  Here is my first robot with a Raspberry Pi. It is very simple and is made out of cardboard. The mouth moves with a servo motor.  I used a Raspberry Pi 3 B+ and programmed in Python to get the mouth to move, the eyes to light up, and the robot to talk. I chose the Raspberry Pi 3 B+ based on other users' recommendations and because it has wifi and most of the other functionality of a laptop or desktop computer.  Most of my struggles getting this project up and running came from the fact that the Raspberry Pi's processor and functionality are still very different from what I was used to on a desktop. Even something as basic as installing Python packages from conda channels doesn't work on the Pi, because Anaconda isn't supported. I found workarounds for this and other issues, but I kept having to re-learn that different processors have different constraints, and you have to know those constraints in order to make the product you want.
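Here is a rough sketch of the kind of Python that drives the mouth and eyes while the robot talks. The pin numbers and timing are placeholders, and the gpiozero-plus-eSpeak combination is just one way to do it, not necessarily my exact wiring or code:

```python
# Rough sketch of the mouth/eyes/voice control in Python.
# Pin numbers and timing are placeholders -- adjust for your own wiring.
import subprocess
import time

from gpiozero import LED, Servo

eyes = LED(17)      # LED "eyes" on GPIO 17 (placeholder pin)
mouth = Servo(18)   # mouth servo signal wire on GPIO 18 (placeholder pin)

def say(text, voice="en"):
    """Speak a phrase with eSpeak while flapping the mouth and lighting the eyes."""
    talker = subprocess.Popen(["espeak", "-v", voice, text])
    eyes.on()
    while talker.poll() is None:   # keep flapping until eSpeak finishes
        mouth.max()                # open the mouth
        time.sleep(0.15)
        mouth.min()                # close the mouth
        time.sleep(0.15)
    eyes.off()

say("Hello, I am a cardboard robot.")
```

Running eSpeak in a separate process like this lets the mouth keep moving while the audio plays.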

Just for fun, the robot introduces itself in three languages: English, Swahili, and Spanish.  eSpeak supports many different languages, including these, though the voice sounds pretty similar in all of the languages I tried.  Originally, I wanted the robot to ask the user who they are and have the user respond verbally. However, after much consternation with ALSA, I realized that the Raspberry Pi does not easily support audio input: it does not have a sound card, and the audio jack is output-only, for speakers and headphones.  So I changed the input so that the user types their name on the keyboard, which works fine for a basic project.  In fact, this project is so basic that a Raspberry Pi 3 B+ was overkill, and on the next project I am going to try an even more basic board: the Raspberry Pi Zero. I plan to experiment with a USB microphone and/or a microphone that I can connect directly to the GPIO pins.
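For reference, here is a minimal sketch of the three-language greeting with keyboard input. The eSpeak voice codes are real, but the exact phrases and translations are just illustrative:

```python
# Minimal sketch of the keyboard-input greeting in three languages.
# Voice codes are eSpeak's (en, sw, es); the phrases are illustrative.
import subprocess

def say(text, voice):
    subprocess.run(["espeak", "-v", voice, text])

name = input("What is your name? ")

say("Hello {}, I am a robot.".format(name), "en")      # English
say("Habari {}, mimi ni roboti.".format(name), "sw")   # Swahili
say("Hola {}, soy un robot.".format(name), "es")       # Spanish
```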