DTSC - GTSA - Artículos de Revistas
http://hdl.handle.net/10016/9042

Importance sampling with transformed weights
Vázquez López, Manuel Alberto; Míguez Arenas, Joaquín
http://hdl.handle.net/10016/25792
The importance sampling (IS) method lies at the core of many Monte Carlo-based techniques. IS allows the approximation of a target probability distribution by drawing samples from a proposal (or importance) distribution, different from the target, and computing importance weights (IWs) that account for the discrepancy between these two distributions. The main drawback of IS schemes is the degeneracy of the IWs, which significantly reduces the efficiency of the method. It has recently been proposed to use transformed IWs (TIWs) to alleviate the degeneracy problem in the context of population Monte Carlo, which is an iterative version of IS. However, the effectiveness of this technique for standard IS is yet to be investigated. We numerically assess the performance of IS when using TIWs and show that the method can attain robustness to weight degeneracy thanks to a bias/variance trade-off.
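The scheme described in the abstract can be illustrated with a minimal sketch: self-normalized IS where the largest weights are clipped at a threshold. Note the target/proposal densities and the specific clipping rule (capping at the sqrt(N)-th largest weight) are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative densities (not from the paper):
# target N(0, 1), proposal N(0, 3^2).
def log_target(x):
    return -0.5 * x**2

def log_proposal(x):
    return -0.5 * (x / 3.0) ** 2

N = 5000
x = rng.normal(0.0, 3.0, size=N)          # draw from the proposal
log_w = log_target(x) - log_proposal(x)   # log importance weights
w = np.exp(log_w - log_w.max())           # stabilized raw weights

# Transformed IWs: clip at the sqrt(N)-th largest weight. Clipping
# introduces a small bias but curbs the variance caused by a few
# dominant weights (the degeneracy the abstract refers to).
k = int(np.sqrt(N))
thr = np.partition(w, -k)[-k]
tw = np.minimum(w, thr)

# Self-normalized estimates of E[X^2] under the target (true value: 1)
est_standard = np.sum(w * x**2) / np.sum(w)
est_transformed = np.sum(tw * x**2) / np.sum(tw)
```

With a well-matched proposal both estimators are close to the true value; the transformed weights mainly pay off when the raw weights are highly skewed.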
Published: 2017-06-08

A proof of uniform convergence over time for a distributed particle filter
Míguez Arenas, Joaquín; Vázquez López, Manuel Alberto
http://hdl.handle.net/10016/25899
Distributed signal processing algorithms have become a hot topic in recent years. One class of algorithms that has received special attention is particle filters (PFs). However, most distributed PFs involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard PFs do not hold for their distributed counterparts. In this paper, we analyze a distributed PF based on the non-proportional weight-allocation scheme of Bolic et al. (2005) and prove rigorously that, under certain stability assumptions, its asymptotic convergence is guaranteed uniformly over time, in such a way that approximation errors can be kept bounded with a fixed computational budget. To illustrate the theoretical findings, we carry out computer simulations for a target tracking problem. The numerical results show that the distributed PF has a negligible performance loss (compared to a centralized filter) for this problem and enable us to empirically validate the key assumptions of the analysis.
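For readers unfamiliar with the standard (centralized) PF that the distributed scheme builds on, a minimal bootstrap particle filter is sketched below. The scalar linear-Gaussian model and all noise parameters are illustrative assumptions; this is not the distributed algorithm or the tracking example analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative state-space model (not the paper's tracking example):
#   x_t = 0.9 * x_{t-1} + N(0, 0.5^2),   y_t = x_t + N(0, 0.5^2)
T, N = 50, 1000
true_x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0.0, 0.5)
    y[t] = true_x[t] + rng.normal(0.0, 0.5)

particles = rng.normal(0.0, 1.0, N)   # initial particle cloud
estimates = np.zeros(T)
for t in range(1, T):
    # 1. Propagate each particle through the transition kernel
    particles = 0.9 * particles + rng.normal(0.0, 0.5, N)
    # 2. Weight particles by the likelihood of the new observation
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)   # posterior-mean estimate
    # 3. Multinomial resampling keeps the particle set from degenerating
    particles = particles[rng.choice(N, size=N, p=w)]
```

The distributed variants the abstract discusses split this particle set across processing elements and allocate weights between them; the uniform-convergence question is whether the resulting approximation error stays bounded as t grows.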
Published: 2016-05-01

Toward Massive, Ultrareliable, and Low-Latency Wireless Communication With Short Packets
Durisi, Giuseppe; Koch, Tobias Mirco; Popovski, Petar
http://hdl.handle.net/10016/26057
Most of the recent advances in the design of high-speed wireless systems are based on information-theoretic principles that demonstrate how to efficiently transmit long data packets. However, the upcoming wireless systems, notably the fifth-generation (5G) system, will need to support novel traffic types that use short packets. For example, short packets represent the most common form of traffic generated by sensors and other devices involved in machine-to-machine (M2M) communications. Furthermore, there are emerging applications in which small packets are expected to carry critical information that should be received with low latency and ultrahigh reliability. Current wireless systems are not designed to support short-packet transmissions. For example, the design of current systems relies on the assumption that the metadata (control information) is of negligible size compared to the actual information payload. Hence, transmitting metadata using heuristic methods does not affect the overall system performance. However, when the packets are short, metadata may be of the same size as the payload, and the conventional methods to transmit it may be highly suboptimal. In this paper, we review recent advances in information theory, which provide the theoretical principles that govern the transmission of short packets. We then apply these principles to three exemplary scenarios (the two-way channel, the downlink broadcast channel, and the uplink random access channel), thereby illustrating how the transmission of control information can be optimized when the packets are short. The insights brought by these examples suggest that new principles are needed for the design of wireless protocols supporting short packets. These principles will have a direct impact on the system design.
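A common tool in the finite-blocklength literature the abstract draws on is the normal approximation to the maximal coding rate, R(n, eps) ≈ C − sqrt(V/n)·Q⁻¹(eps). The sketch below evaluates it for a real-valued AWGN channel; the formula is a standard result from that literature, offered here as background rather than quoted from the paper.

```python
import math
from statistics import NormalDist


def normal_approx_rate(snr: float, n: int, eps: float) -> float:
    """Approximate maximal rate (bits/channel use) at blocklength n and
    block error probability eps, for a real AWGN channel with SNR snr."""
    C = 0.5 * math.log2(1 + snr)                     # channel capacity
    # Channel dispersion of the real AWGN channel
    V = (snr / 2) * (snr + 2) / (snr + 1) ** 2 * math.log2(math.e) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)            # inverse Q-function
    return C - math.sqrt(V / n) * q_inv


# Short packets pay a substantial rate penalty relative to capacity:
r_short = normal_approx_rate(snr=1.0, n=100, eps=1e-3)    # ~0.23 bits/use
r_long = normal_approx_rate(snr=1.0, n=10000, eps=1e-3)   # ~0.47 bits/use
```

At SNR 1 (0 dB) the capacity is 0.5 bits per channel use; a 100-symbol packet at error probability 10⁻³ loses roughly half of that, which is why metadata overhead and control signaling dominate the short-packet design problem.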
Published: 2016-09-01

On noncoherent fading relay channels at high signal-to-noise ratio
Koch, Tobias Mirco; Kramer, Gerhard
http://hdl.handle.net/10016/26007
The capacity of noncoherent regular-fading relay channels is studied where all terminals are aware of the fading statistics but not of their realizations. It is shown that if the fading coefficient of the channel between the transmitter and the receiver can be predicted more accurately from its infinite past than the fading coefficient of the channel between the relay and the receiver, then at high signal-to-noise ratio (SNR), the relay does not increase capacity. It is further shown that if the fading coefficient of the channel between the transmitter and the relay can be predicted more accurately from its infinite past than the fading coefficient of the channel between the relay and the receiver, then at high SNR, one can achieve communication rates that are within one bit of the capacity of the multiple-input single-output fading channel that results when the transmitter and the relay can cooperate.
Published: 2013-04-01