Authors: Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel
Dates: 2015-07-09 (deposited); 2013-05 (issued)
Citation: Journal of Statistical Physics 151 (2013) 3-4, pp. 654-672
ISSN: 0022-4715 (Print); 1572-9613 (Online)
URI: https://hdl.handle.net/10016/21375

Abstract:
Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting them to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances.
We conclude with a discussion of our results and possible directions to carry this research further.

Extent: 19 p.; application/pdf; eng
Copyright: © 2013 Springer
Keywords: Evolutionary game theory; Cooperation; Markov chains; Crowd computing; Reinforcement learning
Title: Crowd computing as a cooperation problem: an evolutionary approach
Type: research article
Subject: Mathematics
DOI: 10.1007/s10955-012-0661-0
Access: open access
Pages: 654-672; Issue: 3-4
Journal: Journal of Statistical Physics, vol. 151
AR/0000013131
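The master-worker dynamics summarized in the abstract can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy, not the paper's actual model: the worker population size, the payoff values, the master's audit probability, and the Bush-Mosteller-style reinforcement update are all illustrative choices. It only shows the qualitative effect the abstract describes, namely that reinforcement learning can drive workers toward reliably honest computation when the master verifies answers often enough.

```python
import random

random.seed(7)

# Illustrative parameters (assumptions, not values from the paper).
N_WORKERS = 9
ROUNDS = 5000
ALPHA = 0.1        # learning rate of the reinforcement update
AUDIT_PROB = 0.3   # probability the master verifies the answers in a round
REWARD, PUNISHMENT, COST = 1.0, 1.0, 0.05

# Each worker's probability of honestly computing the task.
p_honest = [0.5] * N_WORKERS

def payoff(honest, audited):
    """Toy payoff: audits reward honesty and punish cheating;
    without an audit, cheating saves the computation cost."""
    if audited:
        return REWARD - COST if honest else -PUNISHMENT
    return -COST if honest else COST

for _ in range(ROUNDS):
    audited = random.random() < AUDIT_PROB
    for i in range(N_WORKERS):
        honest = random.random() < p_honest[i]
        pi = payoff(honest, audited)
        target = 1.0 if honest else 0.0
        if pi >= 0:
            # Positive payoff: reinforce the action just taken.
            p_honest[i] += ALPHA * pi * (target - p_honest[i])
        else:
            # Negative payoff: shift probability toward the opposite action.
            p_honest[i] += ALPHA * (-pi) * ((1.0 - target) - p_honest[i])

avg_honesty = sum(p_honest) / N_WORKERS
print(f"average honesty after {ROUNDS} rounds: {avg_honesty:.2f}")
```

With these (assumed) payoffs the audit probability is high enough that honesty is reinforced on average, so the population drifts toward reliable answers; lowering AUDIT_PROB or raising the saved COST of cheating tips the drift the other way, mirroring the tolerance trade-off discussed in the abstract.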