Publication:
Integrating Planning, Execution, and Learning to Improve Plan Execution

Publication date
2013-02
Publisher
Wiley
Abstract
Algorithms for planning under uncertainty require accurate action models that explicitly capture the uncertainty of the environment. Unfortunately, obtaining these models is usually complex. In uncertain environments, actions may produce countless outcomes, so specifying them and their probabilities is a hard task. As a consequence, when implementing agents with planning capabilities, practitioners frequently opt for architectures that interleave classical planning and execution monitoring following a replan-on-failure paradigm. Though this approach is more practical, it may produce fragile plans that require continuous replanning episodes or, even worse, that lead to execution dead-ends. In this paper, we propose a new architecture to alleviate these shortcomings. The architecture is based on the integration of a relational learning component with the traditional planning and execution monitoring components. The new component allows the architecture to learn probabilistic rules of action success from plan executions and to automatically upgrade the planning model with these rules. The upgraded models can be used by any classical planner that handles metric functions or, alternatively, by any probabilistic planner. The proposed architecture is designed to integrate off-the-shelf, interchangeable planning and learning components, so it can benefit from the latest advances in both fields without modifying the architecture.
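
The following is a minimal Python sketch of the plan-execute-monitor-learn loop described in the abstract, under stated assumptions: the names planner, executor, learn_success_rules, and upgrade_model are illustrative, not the paper's actual API, and the learner here is a simple frequency-count stand-in for the relational learning component.

from collections import defaultdict

def learn_success_rules(experience):
    """Estimate a success probability per action from logged executions
    (a frequency-count stand-in for the relational rule learner)."""
    counts = defaultdict(lambda: [0, 0])                # action -> [successes, trials]
    for action, success in experience:
        counts[action][0] += int(success)
        counts[action][1] += 1
    return {a: s / n for a, (s, n) in counts.items()}

def upgrade_model(model, rules):
    """Attach learned success probabilities to the planning model, e.g. to be
    compiled into metric costs for a classical planner or into a probabilistic model."""
    upgraded = dict(model)
    upgraded["success_prob"] = rules
    return upgraded

def run_episode(model, state, goal, planner, executor, experience, max_replans=10):
    """One planning/execution episode with replanning (and model upgrading) on failure."""
    for _ in range(max_replans):
        if state == goal:
            break
        plan = planner(model, state, goal)              # off-the-shelf planner call
        if not plan:
            break                                       # no plan found: give up
        for action in plan:
            state, success = executor(action, state)    # act and monitor the outcome
            experience.append((action, success))
            if not success:                             # monitoring detects a failure
                model = upgrade_model(model, learn_success_rules(experience))
                break                                   # replan with the upgraded model
    return model, state

Passing the planner and executor in as callables mirrors the abstract's claim that off-the-shelf components can be swapped without changing the architecture itself.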
Keywords
Cognitive architectures, Relational reinforcement learning, Symbolic planning
Bibliographic citation
Computational Intelligence (2013), vol. 29, no. 1, pp. 1-36.