Authors: Akyildiz, Ömer Deniz; Crisan, Dan; Míguez Arenas, Joaquín
Date deposited: 2023-07-14
Issue date: 2020-11
Citation: Akyildiz, Ö. D., Crisan, D., & Míguez, J. (2020). Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization. Statistics and Computing, 30(6), 1645-1663.
Title: Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
Type: research article
Journal: Statistics and Computing
Volume: 30
Issue: 6
Pages: 1645-1663
Number of pages: 19
ISSN: 0960-3174
DOI: https://doi.org/10.1007/s11222-020-09964-4
Handle: https://hdl.handle.net/10016/37838
Language: English
Access: open access
Rights: © The Author(s) 2020.
License: Atribución 3.0 España (Creative Commons Attribution 3.0 Spain)
Keywords: Sequential Monte Carlo; Stochastic optimization; Nonconvex optimization; Gradient-free optimization; Sampling
Subject areas: Biology and Biomedicine; Information Sciences; Economics; Electronics; Statistics; Telecommunications
Internal identifier: AR/0000027719

Abstract: We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function consisting of the sum of many individual components. The proposed scheme is a stochastic zeroth-order optimization algorithm which requires only the capability to evaluate small subsets of components of the cost function. It can be depicted as a bank of samplers that generate particle approximations of several sequences of probability measures. These measures are constructed in such a way that their associated probability density functions have global maxima that coincide with the global minima of the original cost function. The algorithm selects the best-performing sampler and uses it to approximate a global minimum of the cost function. We prove analytically that the resulting estimator converges to a global minimum of the cost function almost surely, and we provide explicit convergence rates in terms of the number of generated Monte Carlo samples and the dimension of the search space. We show, by way of numerical examples, that the algorithm can tackle cost functions with multiple minima or with broad "flat" regions which are hard to minimize using gradient-based techniques.
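The abstract only sketches the method at a high level. The following Python snippet is a minimal illustration of the general idea it describes, namely a bank of independent particle samplers that weight candidate points by exp(-cost), evaluate only small random subsets of the cost components, and return the best-performing sampler's result. All names and parameters (e.g. parallel_smc_minimize, n_samplers, minibatch) are illustrative assumptions, not the authors' actual algorithm or notation; for the precise scheme and its convergence guarantees, see the paper via the DOI above.

```python
import numpy as np

def parallel_smc_minimize(f_components, dim, n_samplers=4, n_particles=200,
                          n_iters=50, minibatch=5, jitter=0.1, seed=0):
    """Sketch: run several independent particle samplers and keep the one
    whose best particle attains the lowest (full) cost."""
    rng = np.random.default_rng(seed)
    n_comp = len(f_components)
    m = min(minibatch, n_comp)
    best_x, best_val = None, np.inf

    for _ in range(n_samplers):
        # Each sampler maintains a cloud of particles over the search space.
        particles = rng.normal(scale=2.0, size=(n_particles, dim))
        for _ in range(n_iters):
            # Evaluate only a small random subset of cost components
            # (the zeroth-order / subset-evaluation aspect in the abstract).
            idx = rng.choice(n_comp, size=m, replace=False)
            costs = sum(f_components[i](particles) for i in idx) / m
            # Weights proportional to exp(-cost): maxima of the implied
            # density correspond to minima of the cost.
            w = np.exp(-(costs - costs.min()))
            w /= w.sum()
            # Resample proportionally to the weights, then add jitter.
            keep = rng.choice(n_particles, size=n_particles, p=w)
            particles = particles[keep] + jitter * rng.normal(size=(n_particles, dim))
        # Score this sampler by its best particle on the full cost.
        full = sum(g(particles) for g in f_components)
        j = int(np.argmin(full))
        if full[j] < best_val:
            best_val, best_x = full[j], particles[j].copy()

    return best_x, best_val


if __name__ == "__main__":
    # Toy example: three quadratic components with minima near x = 1.
    comps = [lambda X, c=c: np.sum((X - c) ** 2, axis=1) for c in (0.8, 1.0, 1.2)]
    x_hat, v = parallel_smc_minimize(comps, dim=2)
    print(x_hat, v)
```

This toy setup is only meant to show the sampling-for-minimization pattern; the paper's analysis covers the selection of the best sampler and gives explicit rates in the number of samples and the dimension of the search space.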