I've gotten a few requests to put up a tutorial for building a robotic head.
I'm assuming that if you want to build a robotic head, you have some experience using an arduino, servos, and a language that can interface between a webcam and the arduino. I used processing as the interface between the arduino and my webcam, along with the OpenCV library for processing.
I'll describe the overarching structure of the code that creates the illusion of a robot following a person's face. At a high level, there are only a few steps to having a webcam follow a person's face:
- Detect face. Take a look at some OpenCV examples on processing's website.
- Grab X-Y pixel coordinates of the face.
- Calculate the pixel distance between the center of the face and the center of the webcam's view. In other words, for each image from the webcam, find how far the center of the face is from the center of that image.
- Write an algorithm that minimizes the distance between the webcam's center and the face's center.
- This algorithm will also control the servos. (A rough sketch of these steps follows this list.)
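To make that concrete, here's a rough sketch of those steps in processing. It isn't my exact code: it assumes the hypermedia.video OpenCV library, an arduino sketch on the other end that reads two bytes (a pan angle followed by a tilt angle), and a gain you'd tune yourself.

```
import hypermedia.video.*;     // OpenCV wrapper for processing
import processing.serial.*;    // serial link to the arduino
import java.awt.Rectangle;

OpenCV opencv;
Serial arduino;
int panAngle = 90;             // start both servos at their midpoints
int tiltAngle = 90;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(width, height);                      // grab frames from the webcam
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);     // haar-classifier for frontal faces
  arduino = new Serial(this, Serial.list()[0], 9600); // pick the right port index for your machine
}

void draw() {
  opencv.read();                        // step 1: get the current webcam frame
  image(opencv.image(), 0, 0);
  Rectangle[] faces = opencv.detect();  // step 1: detect faces
  if (faces.length > 0) {
    // step 2: X-Y pixel coordinates of the face's center
    float faceX = faces[0].x + faces[0].width  / 2.0;
    float faceY = faces[0].y + faces[0].height / 2.0;
    // step 3: pixel distance between the face's center and the center of the view
    float dx = faceX - width  / 2.0;
    float dy = faceY - height / 2.0;
    // step 4: nudge the servos a little in the direction that shrinks that distance
    // (0.05 is a made-up starting gain; flip the signs if your servos move the wrong way)
    panAngle  = constrain(panAngle  - int(dx * 0.05), 0, 180);
    tiltAngle = constrain(tiltAngle + int(dy * 0.05), 0, 180);
    // step 5: the arduino reads these two bytes and writes them to the pan and tilt servos
    arduino.write(panAngle);
    arduino.write(tiltAngle);
  }
}
```

On some machines opencv.capture doesn't work directly; see the comments below for the workaround of copying frames from processing's video class.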
You can download my processing code here and my arduino code is based on the code found here from letsmakerobots.
Let me know if you guys have any questions. I'll try to answer them as soon as I get a chance.
The following pictures are the first iteration:
How do I include OpenCV in the Arduino code?
I'm confused.
Thanks,
ILHAM
Don't include the opencv code in the arduino code. The arduino isn't powerful enough to perform computer vision calculations. I recommend using another program, like processing, to do all the opencv calculations. Then send the servo commands to the arduino. Does that make sense?
Hi Benjamin,
We are working on a similar project, but it's a little desk lamp following your movement.
The thing is, I couldn't get face detection working in processing no matter what I tried. Just wondering, are you using a PC or a Mac?
I heard there are a lot of issues with openCV for processing on Win7...
We have been stuck here for a few days and can't think of a way to work around it...
Thanks
Shanshan
I have a PC. Opencv for processing is kinda funky; the video class for opencv doesn't directly capture from a webcam on a pc. You're going to have to use processing's video class and then use opencv.copy to copy each webcam frame into opencv. If you download my processing code, you'll notice in the detectface class that I'm copying each frame from the video class... I hope this helps.
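Here's roughly what that copy workaround looks like. It's a sketch rather than my exact detectface class, and it assumes the hypermedia.video OpenCV wrapper plus processing's video library (on newer versions of processing you may also need to call cam.start() in setup):

```
import processing.video.*;     // processing's video class does the capturing
import hypermedia.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  opencv = new OpenCV(this);
  opencv.allocate(width, height);                  // buffer that the webcam frames get copied into
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // make sure the right haar-classifier is loaded
}

void draw() {
  if (cam.available()) {
    cam.read();
    opencv.copy(cam);            // hand each webcam frame over to opencv
    image(cam, 0, 0);
    noFill();
    for (Rectangle face : opencv.detect()) {
      rect(face.x, face.y, face.width, face.height);  // box around each detected face
    }
  }
}
```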
Yeah, I actually figured that out, and I did ask openCV to copy from processing.video...
but it still doesn't work for some reason.
It reads and displays the image, but can't detect any face... I doubt it's that opencv.detect somehow doesn't work; maybe it's not configured right?
I would have to look at your code, because so many things could be influencing whether it's detecting or not. Are you using the right haar-classifier?
Hello, could you give me some more information on what your setup was?
Did you run the face tracking and robot control locally to the robot?
If you did, how did you intercept the Skype video stream?
Hi Dave,
I had an arduino controlling the robot's servos. I was running opencv within processing. I intercepted the Skype video stream using the java Robot class. The java Robot class has a function that lets you screen capture the desktop, and you can also capture just a portion of it. So I didn't directly interface with Skype via an API; instead, I was doing a screen capture thirty times a second. Does that help?
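Roughly, the screen-capture part looks like this in processing (a sketch; the capture region is a placeholder rectangle you'd move to wherever the Skype video window sits on your screen):

```
import java.awt.Robot;
import java.awt.Rectangle;
import java.awt.AWTException;
import java.awt.image.BufferedImage;

Robot screenGrabber;
// placeholder region: set this to match the position and size of the Skype video window
Rectangle skypeWindow = new Rectangle(100, 100, 320, 240);

void setup() {
  size(320, 240);
  frameRate(30);                 // roughly thirty captures per second
  try {
    screenGrabber = new Robot();
  } catch (AWTException e) {
    e.printStackTrace();
  }
}

void draw() {
  // grab just the portion of the screen where the Skype video is showing
  BufferedImage shot = screenGrabber.createScreenCapture(skypeWindow);
  PImage frame = new PImage(shot);
  image(frame, 0, 0);
  // from here, 'frame' can be handed to opencv.copy() for face detection
}
```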
Here's a working example: http://www.benfarahmand.com/2012/04/robotic-telepresence-via-skype-and-face.html
-Ben
Yeah, that helps! Thanks Ben.
Hi Benjamin,
I'm working on a project where a car (an RC car in my case) will have to run on a track that's similar to a real-life road, so all kinds of obstacles and speed bumps are there.
First I wanted to use color sensors and distance sensors, but color sensors have a very short range, and when the RC car is taking a turn I can't really rely on the sensors because they only detect things when they're 3-4 cm away; otherwise the readings are not accurate.
So I was thinking of using a camera that can detect the yellow and white lines on both sides of the track.
So is it possible? If not with an arduino, then what other option do I have? It's supposed to be a self-driving car and the dimensions are kinda limited. Will I be able to send the data from the camera to a PC, do all the tracking there, and send the commands back to the arduino?
sorry for writing an essay here ;D
Jim, you have three options that come readily to mind.
The first is using a zigbee, which interfaces with the arduino, to transfer the video feed to a computer, analyze the webcam feed there, and then send the appropriate commands back to the RC car via the zigbee. I don't know the bandwidth requirements for a webcam feed, though.
The second option is to use a raspberry pi, beagle board, or pcduino, and attempt to run the necessary opencv applications on one of these mini-computers... I've never used any of these three boards, so I don't know how complex the project would be.
The third option, which is my favorite because it's the simplest and smallest: attach a smart phone to the front of the rc car and write a smart phone application (java for android, objective-c for iPhone). I've played around with some android programming using things like processing (processing.org); it's pretty straightforward. If you use an android phone, you can use the built-in camera, analyze the feed on the phone, and then send the commands via usb to an arduino to control the servos and motor. I think the benefit of using a smart phone is that you also have access to the smart phone's accelerometer, which will be vital for knowing how fast you're turning and how fast you're going.
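Whichever option you go with, the vision side could start out as simple as scanning a row of pixels for the yellow and white lane colors and steering toward the midpoint between them. Here's a very rough processing sketch of that idea; the color thresholds and the scan row are made-up starting values you'd have to tune for your track, not a tested lane follower:

```
import processing.video.*;

Capture cam;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);
  cam.loadPixels();

  int scanRow = height * 3 / 4;          // look at one row near the bottom of the frame
  int leftLine = -1;
  int rightLine = -1;

  for (int x = 0; x < width; x++) {
    color c = cam.pixels[scanRow * width + x];
    boolean isYellow = red(c) > 150 && green(c) > 150 && blue(c) < 100;  // made-up threshold
    boolean isWhite  = red(c) > 200 && green(c) > 200 && blue(c) > 200;  // made-up threshold
    if (isYellow && leftLine < 0) leftLine = x;   // first yellow pixel from the left
    if (isWhite)                  rightLine = x;  // last white pixel found so far
  }

  if (leftLine >= 0 && rightLine >= 0) {
    int laneCenter = (leftLine + rightLine) / 2;
    int error = laneCenter - width / 2;  // positive means the lane's center is to your right
    // 'error' is what you'd turn into a steering command and send to the arduino
    println("steering error: " + error);
    line(laneCenter, 0, laneCenter, height);
  }
}
```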
Let me know what you think.