Authors: Jiménez Celorrio, Sergio; Fernández Rebollo, Fernando; Borrajo Millán, Daniel
Title: Integrating Planning, Execution, and Learning to Improve Plan Execution
Type: research article
Date issued: 2013-02
Date available: 2015-10-01
Citation: Computational Intelligence (2013), vol. 29, no. 1, pp. 1-36
Journal: Computational Intelligence
Volume: 29; Issue: 1; Pages: 1-36
ISSN: 0824-7935
DOI: 10.1111/j.1467-8640.2012.00447.x
Handle: https://hdl.handle.net/10016/21644
Department: Informática
Format: application/pdf
Language: eng
Rights: © John Wiley & Sons, Inc.
Access: open access
Keywords: Cognitive architectures; Relational reinforcement learning; Symbolic planning
Record ID: AR/0000012832

Abstract: Algorithms for planning under uncertainty require accurate action models that explicitly capture the uncertainty of the environment. Unfortunately, obtaining these models is usually complex: in uncertain environments, actions may produce countless outcomes, and specifying each outcome and its probability is a hard task. As a consequence, when implementing agents with planning capabilities, practitioners frequently opt for architectures that interleave classical planning and execution monitoring following a replanning-on-failure paradigm. Though this approach is more practical, it may produce fragile plans that require continuous replanning episodes or, even worse, that end in execution dead-ends. In this paper, we propose a new architecture to alleviate these shortcomings. The architecture integrates a relational learning component with the traditional planning and execution monitoring components. The new component allows the architecture to learn probabilistic rules about the success of actions from the execution of plans and to automatically upgrade the planning model with these rules. The upgraded models can be used by any classical planner that handles metric functions or, alternatively, by any probabilistic planner. The architecture is designed to integrate off-the-shelf, interchangeable planning and learning components, so it can profit from the latest advances in both fields without modifying the architecture.