Publication:
Adaptive sensor-fusion of depth and color information for cognitive robotics

Publication date
2011
Publisher
IEEE
Abstract
The presented work goes one step further than merely combining data from different sensors. The corresponding points of an image and a 3D point cloud are determined through calibration, so that color information is assigned to every voxel in the overlapping area of a stereo camera system and a laser range finder. We then analyze the image and search for the locations that are especially susceptible to errors from both sensors. Depending on the ascertained situation, we try to correct or minimize these errors. By analyzing and interpreting the images as well as removing errors, we create an adaptive tool that improves multi-sensor fusion: it allows us to correct the fused data and refine the multi-modal sensor fusion, or to predict the locations where the sensor information is vague or defective. The presented results demonstrate a clear improvement over standard procedures and show that further progress building on this work is possible.
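The abstract's core step is to use the calibration between the laser range finder and the camera to relate each 3D point to a pixel and attach its color. The following is a minimal Python sketch of that generic projection step under a pinhole camera model, not the authors' implementation; the function name, the intrinsic matrix K, and the extrinsics R, t are assumptions introduced here for illustration.

import numpy as np

def colorize_point_cloud(points_xyz, image, K, R, t):
    # Assign an RGB color to each 3D point that projects into the image.
    # points_xyz: (N, 3) points in the range finder's frame (assumed layout).
    # image:      (H, W, 3) uint8 color image from the rectified camera.
    # K:          (3, 3) camera intrinsic matrix.
    # R, t:       rotation (3, 3) and translation (3,) taking range-finder
    #             coordinates into the camera frame.
    # Returns an (N, 3) color array and a mask of points that fall inside
    # the image, i.e. the overlapping field of view of both sensors.

    # Transform the points into the camera frame.
    pts_cam = points_xyz @ R.T + t

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0

    # Pinhole projection onto the image plane.
    proj = (K @ pts_cam.T).T
    u = proj[:, 0] / proj[:, 2]
    v = proj[:, 1] / proj[:, 2]

    # Discard projections outside the image bounds.
    h, w = image.shape[:2]
    inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample the image color at each valid projected location.
    colors = np.zeros((points_xyz.shape[0], 3), dtype=np.uint8)
    colors[inside] = image[v[inside].astype(int), u[inside].astype(int)]
    return colors, inside

The paper's adaptive part, detecting locations where either sensor is error-prone and correcting the fused data, would operate on top of such a colorized cloud; that analysis is not reproduced here.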
Description
Proceedings of: 2011 IEEE International Conference on Robotics and Biomimetics (ROBIO), December 7-11, 2011, Phuket (Thailand)
Bibliographic citation
2011 IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE, 2011, pp. 957-962