This book presents some of the most recent research results in the area of machine learning and robot perception. The book contains eight chapters.
Within robotics, significant progress has been made in mechanical systems, actuators, control, and planning. This progress has enabled the wide deployment of industrial robots, such as manipulator arms and Cartesian robots, whose capabilities in structured settings far exceed those of humans. However, a robust and reliable autonomous mobile robot, able to evolve and accomplish general tasks in unconstrained environments, remains out of reach, mainly because autonomous mobile robots suffer from the limitations of today's perception systems. A robot has to perceive its environment in order to interact with it (move, find and manipulate objects, and so on). Perception allows the robot to build an internal representation (model) of the environment, which is then used for moving, avoiding collisions, localizing itself, finding its way to a target, and finding objects in order to manipulate them. Without sufficient perception of its environment, the robot cannot perform any safe displacement or interaction, even with extremely efficient motion or planning systems. The more unstructured an environment is, the more dependent the robot is on its sensory system. The success of industrial robotics relies on rigidly controlled and planned environments, together with total control over the robot's position at every moment; as the degree of environment structure decreases, the robot's capabilities become correspondingly limited.
Some kind of environment model has to be used to incorporate perceptions and to make control decisions. Historically, most mobile robots have relied on a geometric representation of the environment for navigation tasks. This facilitates path planning and reduces the dependency on the sensory system, but it forces the robot to continuously monitor its exact position and requires precise environment modeling. The navigation problem is then solved with odometry-based relocalization, or with an external absolute localization system, but only in highly structured environments. Human beings, by contrast, rely on a topological representation of the environment to achieve their remarkable autonomy: the environment is sparsely modeled as a set of identifiable objects or places and the spatial relations between them. The resulting models lend themselves to being learned rather than hard-coded. This approach is well suited to open and dynamic environments, but it depends more heavily on the perception system. Computer vision is the most powerful and flexible family of sensors currently available, and the combination of topological environment modeling with vision is the most promising choice for future autonomous robots. This implies the need to develop visual perception systems able to learn from the environment.
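To make the contrast concrete, a topological environment model can be sketched as a graph whose nodes are identifiable places and whose edges are the spatial relations (traversability) between them; navigation then reduces to graph search, with no metric coordinates required. The following is a minimal illustrative sketch, not taken from any chapter of this book; the place names and the choice of breadth-first search are assumptions for the example.

```python
from collections import deque

# Hypothetical topological map: nodes are recognizable places,
# edges are traversable spatial relations between them.
topological_map = {
    "entrance": ["hallway"],
    "hallway": ["entrance", "lab", "office"],
    "lab": ["hallway"],
    "office": ["hallway", "storage"],
    "storage": ["office"],
}

def plan_route(graph, start, goal):
    """Breadth-first search over places; returns the shortest
    sequence of places from start to goal, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(plan_route(topological_map, "entrance", "storage"))
# → ['entrance', 'hallway', 'office', 'storage']
```

Note that the graph stores no positions or distances: the robot only needs a perception system capable of recognizing when it is at a known place, which is precisely why this representation shifts the burden from localization to perception.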