Publication:
An integrated approach of learning, planning, and execution

Identifiers
ISSN: 0921-0296 (Print)
ISSN: 1573-0409 (Online)
Publication date
2000-09
Publisher
Springer
Abstract
Agents (hardware or software) that act autonomously in an environment have to be able to integrate three basic behaviors: planning, execution, and learning. This integration is mandatory when the agent has no knowledge of how its actions affect the environment or how the environment reacts to its actions, or when the agent does not receive the goals it must achieve as explicit input. Without an a priori theory, autonomous agents should be able to propose their own goals, set up plans for achieving those goals according to previously learned models of the agent and the environment, and learn those models from past experiences of successful and failed plan executions. Planning involves selecting a goal to reach and computing a set of actions that will allow the autonomous agent to achieve it. Execution deals with the interaction with the environment: applying the planned actions, observing the resulting perceptions, and checking that the goals have been achieved. Learning is needed to predict the reactions of the environment to the agent's actions, thus guiding the agent to achieve its goals more efficiently. In this context, most learning systems applied to problem solving have been used to learn control knowledge for guiding the search for a plan, but few systems have focused on the acquisition of planning operator descriptions. For example, one of the most widely used techniques for integrating (a form of) planning, execution, and learning is currently reinforcement learning. However, such techniques usually do not represent action descriptions explicitly, so they cannot reason in terms of goals and ways of achieving them. In this paper, we present an integrated architecture, LOPE, that learns operator definitions, plans using those operators, and executes the plans in order to refine the acquired operators. The resulting system is domain-independent, and we have performed experiments in a robotic framework. The results clearly show that the integrated planning, learning, and execution system outperforms the basic planner in that domain.
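To make the cycle described in the abstract concrete, the sketch below shows, in Python, an agent that learns situation-action-situation operators from execution feedback, plans over the most reliable ones, and falls back to exploration when its model is insufficient. This is a minimal illustration under assumed names (Operator, Agent, SwitchWorld, the toy environment), not the LOPE implementation or its operator representation.

    # Minimal sketch of an integrated learning-planning-execution loop.
    # All names and the toy environment are illustrative assumptions,
    # not taken from the LOPE system described in the paper.
    from dataclasses import dataclass
    import random

    @dataclass
    class Operator:
        """Learned action model: in situation `cond`, doing `action`
        is predicted to produce situation `effect`."""
        cond: frozenset
        action: str
        effect: frozenset
        successes: int = 1
        attempts: int = 1

        def confidence(self) -> float:
            return self.successes / self.attempts

    class Agent:
        def __init__(self, actions):
            self.actions = list(actions)
            self.operators = []  # learned operator descriptions

        def learn(self, before, action, after):
            """Reinforce a matching operator or create a new one."""
            for op in self.operators:
                if op.cond == before and op.action == action:
                    op.attempts += 1
                    op.successes += (op.effect == after)
                    return
            self.operators.append(Operator(before, action, after))

        def plan(self, state, goal):
            """Breadth-first search through the learned operator model."""
            frontier, seen = [(state, [])], {state}
            while frontier:
                current, actions = frontier.pop(0)
                if goal <= current:
                    return actions
                for op in self.operators:
                    if (op.cond == current and op.confidence() > 0.5
                            and op.effect not in seen):
                        seen.add(op.effect)
                        frontier.append((op.effect, actions + [op.action]))
            return None  # no reliable plan with the current model

        def act(self, env, goal):
            """One plan-execute-learn episode; explores when planning fails."""
            plan = self.plan(env.observe(), goal) or [random.choice(self.actions)]
            for action in plan:
                before = env.observe()
                env.execute(action)
                self.learn(before, action, env.observe())

    class SwitchWorld:
        """Toy deterministic environment used only to exercise the loop."""
        def __init__(self):
            self.up = False
        def observe(self) -> frozenset:
            return frozenset({"switch_up"} if self.up else {"switch_down"})
        def execute(self, action):
            if action == "flip":
                self.up = not self.up

    if __name__ == "__main__":
        env, agent = SwitchWorld(), Agent(["flip", "wait"])
        goal = frozenset({"switch_up"})
        for _ in range(20):
            agent.act(env, goal)
        print(agent.plan(frozenset({"switch_down"}), goal))  # expect ['flip']

The point of the sketch is the feedback structure the abstract emphasizes: the same executed transitions that reveal whether a goal was reached also serve as training data that revise the operator model used by the planner.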
Keywords
Autonomous intelligent systems, Embedded machine learning, Planning and execution, Reinforcement learning, Theory formation, Theory revision, Unsupervised machine learning
Bibliographic citation
Journal of Intelligent and Robotic Systems, September 2000, vol. 29, no. 1, pp. 47-78