Funder:
Ministerio de Economía y Competitividad (España); Comunidad de Madrid
Sponsor:
S. Liu and F. Lombardi would like to acknowledge the support of the National Science Foundation, USA, under grants CCF-1953961 and 1812467. P. Reviriego would like to acknowledge the support of the ACHILLES project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T, funded by the Spanish Ministry of Science and Innovation, and of the Madrid Community research project TAPIR-CM P2018/TCS-4496.
Project:
Gobierno de España. PID2019-104207RB-I00
Gobierno de España. RED2018-102585-T
Comunidad de Madrid. P2018/TCS-4496
Keywords:
Classification, Memories, Error tolerance, K nearest neighbors, Error control codes
Abstract:
Classification is used in a wide range of applications to determine the class of a new element; for example, it can be used to determine whether an object is a pedestrian based on images captured by the safety sensors of a vehicle. Classifiers are commonly implemented using electronic components and are thus subject to errors in memories and combinational logic. In some cases, classifiers are used in safety-critical applications and must therefore operate reliably, so there is a need to protect them against errors. The k Nearest Neighbors (kNN) classifier is a simple yet powerful algorithm that is widely used; its protection against errors in the neighbor computations has recently been studied. This paper considers the protection of kNN classifiers against errors in the memory that stores the dataset used to select the neighbors. Initially, the effects of errors in the most common memory configurations (unprotected, Parity-Check protected, and Single Error Correction-Double Error Detection (SEC-DED) protected) are assessed. The results show that, surprisingly, for most datasets, leaving the memory unprotected gives better error tolerance than using error detection codes to discard the elements affected by an error. This observation is then leveraged to develop Less-is-Better Protection (LBP), a technique that requires no additional parity bits and achieves better error tolerance than Parity-Check for single bit errors (reducing classification errors by 59% for the Iris dataset) and than SEC-DED codes for double bit errors (reducing classification errors by 42% for the Iris dataset).
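The abstract's central observation, that discarding a stored element flagged by a parity check can hurt classification more than keeping the corrupted element, can be illustrated with a small simulation. The Python sketch below is a hypothetical toy and not the paper's experimental setup: the two-class Gaussian dataset, the query point, the single-bit-flip fault model on 32-bit floats, and all function names are assumptions made purely for illustration.

```python
# Toy comparison of two strategies for a kNN classifier whose stored dataset
# suffers a single bit flip: "keep" uses the corrupted element anyway, while
# "discard" drops it (as a parity-check-protected memory would). Illustrative
# sketch only; dataset, fault model, and names are assumptions.
import random
import struct
from collections import Counter

def bit_flip(value: float, bit: int) -> float:
    """Flip one bit of a float's 32-bit representation (toy memory error).
    Flips in high exponent bits may yield huge or NaN values, mimicking
    severe corruption."""
    word = struct.unpack("<I", struct.pack("<f", value))[0]
    word ^= 1 << bit
    return struct.unpack("<f", struct.pack("<I", word))[0]

def knn_predict(train, labels, x, k=3):
    """Plain kNN: majority vote over the k nearest stored elements."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

random.seed(0)
# Toy two-class dataset: class 0 around (0, 0), class 1 around (3, 3).
train = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)] + \
        [(random.gauss(3, 1), random.gauss(3, 1)) for _ in range(30)]
labels = [0] * 30 + [1] * 30
query = (1.5, 1.5)  # near the decision boundary, where errors matter most

baseline = knn_predict(train, labels, query)
changes_keep = changes_discard = trials = 0
for idx in range(len(train)):
    for bit in range(32):
        corrupted = list(train)
        x0, x1 = corrupted[idx]
        corrupted[idx] = (bit_flip(x0, bit), x1)  # flip one bit, one feature
        # "keep": classify with the corrupted element still in the dataset.
        keep = knn_predict(corrupted, labels, query)
        # "discard": the parity check fired, so the element is dropped.
        discard = knn_predict(corrupted[:idx] + corrupted[idx + 1:],
                              labels[:idx] + labels[idx + 1:], query)
        trials += 1
        changes_keep += keep != baseline
        changes_discard += discard != baseline

print(f"classification changes, keep corrupted element:  {changes_keep}/{trials}")
print(f"classification changes, discard flagged element: {changes_discard}/{trials}")
```

The intuition behind the comparison follows the abstract: a flipped low-order bit barely moves the stored element, and a flipped high-order bit usually pushes it far away from the query, so keeping the corrupted element is often harmless to the majority vote, whereas discarding it always removes an otherwise valid neighbor.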