Target Tracking and Following System
The need for robots to co-exist with humans calls for effective human-machine interaction. Operating robots in such dynamic environments is challenging, as it requires continuous decision making and real-time updates of environment attributes. An autonomous robot guide is well suited to places such as museums, libraries, schools and hospitals. To provide the robot with basic navigational ability and to capture the coarse structure of the environment, visual, sonar, ultrasonic, infrared and other range sensors are required; the robot has to acquire information about its environment through these sensors. Moreover, the dynamic nature of the environment and the need to interact with users place more demanding requirements on robot perception. In this work, the research focus is on the interaction capability of the mobile robot, particularly in detecting, tracking and following human subjects.
The research platform is the Magellan Pro mobile robot (Fig. 1), which is equipped with 16 ultrasonic and 16 tactile sensors providing 360-degree coverage. It has an onboard Pentium II computer running the Red Hat 6.2 Linux operating system and operates on a 24 V rechargeable battery supply. A wireless Ethernet link is used to control the robot from a remote computer terminal as required. A Sony EVI-D30 pan-tilt camera is mounted on the robot for visual sensing and is controlled via the RS-232 serial communication port.
Two different tracking and following systems are implemented on the Magellan robot. The first focuses on face tracking based on skin color. A neural network is used to learn skin and non-skin colors. A skin-color probability map is used for skin classification and morphology-based pre-processing; heuristic rules are applied for face-ratio analysis, and Bayesian cost analysis for label classification. A face detection module based on a two-dimensional color model in the YCrCb and YUV color spaces is chosen over the traditional skin-color model in a three-dimensional color space. A modified CAMSHIFT tracking mechanism in a one-dimensional HSV color space is developed and implemented on the mobile robot. In addition to the visual cues, the tracking process fuses the sixteen sonar scans and the tactile sensor readings from the robot to obtain a robust measure of the person's distance from the robot. The robot then decides on an appropriate action, namely to follow the human subject while performing obstacle avoidance. The proposed approach is robust under varying lighting conditions and invariant to natural transformations such as translation, rotation and scaling. Such a multi-modal solution is effective for face detection and tracking. Fig. 2 and Fig. 3 show a successful face-tracking case and a human-following case, respectively.
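The mean-shift search at the core of CAMSHIFT can be illustrated with a minimal NumPy sketch: the search window repeatedly slides to the centroid of the skin-probability mass it covers until it stops moving. This is not the robot's implementation; the synthetic probability map, the function name and all parameter values are illustrative, and the adaptive window-resizing step of full CAMSHIFT is omitted.

```python
import numpy as np

def mean_shift(prob, window, max_iter=20):
    """Slide a fixed-size window toward the centroid of the probability
    mass it covers (the core iteration of CAMSHIFT, minus the window
    resizing). `window` is (x, y, w, h); `prob` is a 2-D probability map."""
    x, y, w, h = window
    for _ in range(max_iter):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()
        if m00 == 0:
            break  # no probability mass under the window
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * roi).sum() / m00  # centroid in window coordinates
        cy = (ys * roi).sum() / m00
        dx = int(round(cx - w / 2))  # shift needed to center the mass
        dy = int(round(cy - h / 2))
        x = min(max(x + dx, 0), prob.shape[1] - w)
        y = min(max(y + dy, 0), prob.shape[0] - h)
        if dx == 0 and dy == 0:
            break  # converged
    return (x, y, w, h)

# Synthetic skin-probability map: a Gaussian blob standing in for a face
# centered at pixel (100, 60).
yy, xx = np.mgrid[0:120, 0:160]
prob = np.exp(-((xx - 100) ** 2 + (yy - 60) ** 2) / (2 * 15.0 ** 2))

# Start the window away from the blob; the search converges onto it.
track = mean_shift(prob, (10, 10, 30, 30))
print(track)
```

In the full algorithm this search runs once per frame, seeded with the previous frame's window, and the window size is re-estimated from the zeroth moment of the back-projected histogram.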
The second tracking system is used to track randomly moving objects of different colors and shapes. In this system, an improved particle-filter algorithm is proposed to track a randomly moving object. Initially, the moving object is detected in a sequence of images taken by the stationary pan-tilt camera using a motion-detection algorithm. Then the particle-filter-based tracking algorithm, which relies on information from multiple sensors, is used to track the moving object. The robot vision system and the control system are integrated effectively through the state-variable representation, and the problem of object size deformation is handled by a variable particle-object size. When the object moves randomly, its position and velocity vary quickly and are hard to track, which causes serious sample impoverishment (all particles collapsing to a single point within a few iterations) in the particle-filter algorithm. A new resampling algorithm is proposed to tackle this impoverishment, and experimental results with the mobile robot show that it reduces sample impoverishment effectively. The mobile robot continuously follows the object with the help of the pan-tilt camera, keeping the object at the center of the image, and is capable of continuously tracking a human's random movement at walking pace. Fig. 4 shows a sequence of images obtained as the mobile robot follows a person performing random movements; the robot watches the movements and approaches the person. In Fig. 4, the size of the tracked object varies greatly from frame to frame, while the blue dot stays on the tracked object because the pan-tilt camera locks onto it successfully.
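The resampling step and the impoverishment problem can be made concrete with a small sketch. The paper's own resampling algorithm is not reproduced here; instead, this shows one standard remedy, assumed for illustration only: systematic (low-variance) resampling followed by "roughening" (small Gaussian jitter) so that resampled particles do not collapse onto identical copies. The one-dimensional toy state, the observation value and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(particles, weights, jitter=0.5):
    """Systematic (low-variance) resampling plus roughening. The jitter
    keeps duplicated particles distinct, mitigating sample impoverishment;
    this stands in for, but is not, the paper's new resampling scheme."""
    n = len(weights)
    positions = (np.arange(n) + rng.random()) / n  # one stratified draw
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    idx = np.searchsorted(cumsum, positions)
    # Roughening: perturb each copy so duplicates do not coincide exactly.
    return particles[idx] + rng.normal(0.0, jitter, particles[idx].shape)

# Toy 1-D tracking step: particles estimate an object position near 50,
# and a (hypothetical) measurement places the object at 52.
particles = rng.normal(50.0, 10.0, size=(200, 1))
obs = 52.0
weights = np.exp(-0.5 * ((particles[:, 0] - obs) / 3.0) ** 2)
weights /= weights.sum()

new_particles = systematic_resample(particles, weights)
print(new_particles.mean())
```

Without the jitter term, repeated resampling of a fast-moving target leaves many exact duplicates of a few high-weight particles, which is precisely the impoverishment failure described above.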