Citation:
Louliej, A., Jabrane, Y., Jiménez, V. P. G., & Guilloud, F. (2021). Dimensioning an FPGA for Real-Time Implementation of State of the Art Neural Network-Based HPA Predistorter. Electronics, 10(13)
Funder:
Comunidad de Madrid; Ministerio de Economía y Competitividad (España)
Sponsor:
This work was partly funded by projects TERESA-ADA (TEC2017-90093-C3-2-R) (MINECO/
AEI/FEDER, UE) and MFOC (Madrid Flight on Chip—Innovation Cooperative Projects,
Comunidad de Madrid—HUBS 2018/MadridFlightOnChip).
Project:
Gobierno de España. TEC2017-90093-C3-2-R
Comunidad de Madrid. HUBS 2018/MadridFlightOnChip
Abstract:
Orthogonal Frequency Division Multiplexing (OFDM) is one of the key modulations for current and novel broadband communication standards. For example, Multi-band OFDM (MB-OFDM) is an excellent choice for the ECMA-368 Ultra Wide-band (UWB) wireless communication standard. Nevertheless, the high Peak-to-Average Power Ratio (PAPR) of MB-OFDM UWB signals reduces the power efficiency of the key element in mobile devices, the High Power Amplifier (HPA), because large signal peaks drive the amplifier into its non-linear saturation region. To deal with this limiting problem, a new and efficient pre-distorter scheme based on a Neural Network (NN) is proposed and implemented on a Field Programmable Gate Array (FPGA). This solution, built on the concept of pre-distorting the HPA non-linearities, offers a good trade-off between complexity and performance. Tests and validation have been conducted on two types of HPA: Travelling Wave Tube Amplifiers (TWTA) and Solid State Power Amplifiers (SSPA). The results show that the proposed pre-distorter design presents low complexity and a low error rate. Indeed, the implemented architecture uses 10% of the Digital Signal Processing (DSP) blocks and 1% of the Look-Up Tables (LUTs) in the case of SSPA, whereas it uses only 1% of the LUTs in the case of TWTA. This allows us to conclude that advanced machine learning techniques can be efficiently implemented in hardware with an adequate design.
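The PAPR problem the abstract describes can be illustrated with a minimal sketch. The code below is not from the paper: it generates an assumed 128-subcarrier QPSK OFDM symbol, measures its PAPR, and passes it through the widely used Rapp AM/AM model of an SSPA; the parameter values (subcarrier count, smoothness factor, saturation level) are hypothetical. The compressed peaks at the amplifier output are the non-linear distortion that a pre-distorter, such as the NN-based one proposed here, aims to invert.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 128  # assumed number of subcarriers (illustrative, not from the paper)
# Random QPSK symbols with unit average power
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

# Time-domain OFDM symbol; the sqrt(N) scaling keeps average power ~= 1
x = np.fft.ifft(X) * np.sqrt(N)

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR of this OFDM symbol: {papr_db:.2f} dB")

def sspa_rapp(s, a_sat=1.0, p=2.0):
    """Rapp AM/AM model of an SSPA: output amplitude saturates at a_sat."""
    r = np.abs(s)
    gain = 1.0 / (1.0 + (r / a_sat) ** (2 * p)) ** (1.0 / (2 * p))
    return s * gain  # phase preserved (AM/PM is typically negligible for SSPA)

y = sspa_rapp(x)
# Peaks above the saturation level are compressed: this clipping-like
# non-linear distortion is what the pre-distorter must compensate.
print(f"Largest input peak:  {np.max(np.abs(x)):.2f}")
print(f"Largest output peak: {np.max(np.abs(y)):.2f}")
```

Because the Rapp gain is strictly below one, the output envelope never exceeds the saturation amplitude, so signals with high PAPR are the most heavily distorted; this is the trade-off between amplifier power efficiency and linearity that motivates pre-distortion.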