Tonight I finally got my robot and webcam working together. The big bullseye on top of the robot is visible in the webcam, so my OpenCV-based computer vision code can figure out where the robot is in the real world. This lets me take sensor data from the robot, such as readings from my two ultrasonic distance sensors, and chart it in real-world coordinates.
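The core trick is a plane-to-plane mapping: once you know where a few fixed floor points land in the image, OpenCV can compute a homography that converts any pixel location into floor coordinates. Here's a minimal sketch of that step; the reference points and the robot's pixel location below are made-up numbers for illustration, not values from my setup:

```cpp
// Sketch: mapping a detected marker from webcam pixels to floor
// coordinates with a homography. All the coordinates here are
// hypothetical placeholders.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Pixel locations of four known points on the floor, e.g. taped
    // corners of a 2 m x 1.5 m working area (made-up numbers).
    std::vector<cv::Point2f> image_pts = {
        {102, 80}, {538, 95}, {560, 410}, {90, 395}};
    // The same four points in real-world coordinates, in meters.
    std::vector<cv::Point2f> world_pts = {
        {0.0f, 0.0f}, {2.0f, 0.0f}, {2.0f, 1.5f}, {0.0f, 1.5f}};

    // Homography that maps image pixels onto the floor plane.
    cv::Mat H = cv::findHomography(image_pts, world_pts);

    // Wherever the bullseye detector reported the robot, in pixels.
    std::vector<cv::Point2f> robot_px = {{320, 240}}, robot_world;
    cv::perspectiveTransform(robot_px, robot_world, H);

    std::cout << "Robot at (" << robot_world[0].x << ", "
              << robot_world[0].y << ") meters\n";
    return 0;
}
```

Once the detector reports a bullseye center in pixels, `perspectiveTransform` turns it into meters, which is what gets charted alongside the ultrasonic readings.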
This lets me build a "virtual world" that matches the real world. Right now it's just for fun, but this same kind of cyber-physical system is how we're going to teach robots to move on their own, for example to rescue people from burning buildings.
The hardware here is an ordinary USB webcam for localization, some bullseye printouts on brightly colored paper, and a "Rovoduino": basically a Dagu Rover 5 chassis with a small custom motor driver board. The software is the interesting part. It combines fairly ordinary OpenGL display code, an XBee communication channel to the robot, and my OpenCV gradient-based bullseye detector.
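The gradient-based idea can be sketched roughly like this: every edge pixel on a bullseye ring has a gradient pointing straight through the center, so if each strong-gradient pixel votes along its gradient line, the votes pile up at the center. What follows is my reconstruction of that approach with made-up thresholds, not the project's actual detector code:

```cpp
// Sketch of a gradient-voting bullseye finder. Concentric rings make
// every edge gradient line pass through the center, so an accumulator
// of votes cast along gradient directions peaks there.
#include <opencv2/opencv.hpp>
#include <cmath>

cv::Point findBullseye(const cv::Mat& gray) {
    cv::Mat dx, dy;
    cv::Sobel(gray, dx, CV_32F, 1, 0);
    cv::Sobel(gray, dy, CV_32F, 0, 1);

    cv::Mat votes = cv::Mat::zeros(gray.size(), CV_32F);
    const float minMag = 50.0f;  // gradient strength threshold (tunable guess)
    const int reach = 40;        // how far to vote, in pixels (tunable guess)

    for (int y = 0; y < gray.rows; ++y)
        for (int x = 0; x < gray.cols; ++x) {
            float gx = dx.at<float>(y, x), gy = dy.at<float>(y, x);
            float mag = std::sqrt(gx * gx + gy * gy);
            if (mag < minMag) continue;
            // Vote along the gradient in both directions, since the
            // center lies on one side or the other of each ring edge.
            for (int s = -reach; s <= reach; ++s) {
                int vx = x + int(s * gx / mag), vy = y + int(s * gy / mag);
                if (vx >= 0 && vx < votes.cols && vy >= 0 && vy < votes.rows)
                    votes.at<float>(vy, vx) += 1.0f;
            }
        }

    cv::Point peak;
    cv::minMaxLoc(votes, nullptr, nullptr, nullptr, &peak);
    return peak;  // pixel location of the strongest vote cluster
}
```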
I'm using C++ with lots of libraries on this project: a few of my own, several by Mike Moss, plus OpenCV and OpenGL. The main control code is here:
[ Link ]
The whole project is here:
[ Link ]
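For a taste of the communication side: an XBee pair acts like a transparent serial cable, so the PC end just reads and writes a serial port. This sketch assumes a Linux device path, 9600 baud, and a one-byte command protocol, none of which are taken from the actual project:

```cpp
// Minimal sketch of talking to the robot over an XBee serial link on
// Linux. Device path, baud rate, and the command/response bytes are
// illustrative assumptions; the real Rovoduino protocol may differ.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  // assumed XBee adapter path
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);           // raw bytes, no line editing
    cfsetispeed(&tio, B9600);  // XBee factory-default baud (assumption)
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);

    const char cmd = 'F';      // hypothetical "drive forward" command byte
    write(fd, &cmd, 1);

    char sensor[2];            // e.g. two ultrasonic range bytes coming back
    read(fd, sensor, sizeof sensor);

    close(fd);
    return 0;
}
```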