Robust Biometric Verification Using Phonocardiogram Fingerprinting and a Multilayer-Perceptron-Based Classifier

Serrano, Salvatore
2024-01-01

Abstract

Recently, a new set of biometric traits, called medical biometrics, has been explored for human identity verification. This study introduces a novel framework for recognizing human identity through heart sound signals, commonly referred to as phonocardiograms (PCGs). The framework is built on extracting and suitably processing Mel-Frequency Cepstral Coefficients (MFCCs) from PCGs and on a classifier based on a Multilayer Perceptron (MLP) network. A large dataset containing heart sounds acquired from 206 people was used to perform the experiments. The classifier was tuned to obtain equal false positive and false negative misclassification rates (equal error rate: EER = FPR = FNR) on audio chunks lasting 2 s. This target was reached by splitting the dataset into non-overlapping training (70%) and testing (30%) subsets. A recurrence filter was also applied to improve the performance of the system in the presence of noisy recordings. After applying the filter to audio chunks lasting from 2 to 22 s, the performance of the system was evaluated in terms of recall, specificity, precision, negative predictive value, accuracy, and F1-score. All performance metrics exceed 97.86% with the recurrence filter applied on a window lasting 22 s, under different noise conditions.
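The abstract describes an MFCC-plus-MLP verification pipeline whose decision threshold is tuned to the equal error rate on 2 s chunks. The sketch below illustrates one way such a pipeline could be assembled in Python; the use of librosa and scikit-learn, the assumed 2000 Hz sampling rate, the 13-coefficient frame-averaged MFCC representation, and the (64, 32) MLP architecture are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

SR = 2000          # assumed PCG sampling rate in Hz (not stated in this record)
CHUNK_SECONDS = 2  # 2 s audio chunks, as described in the abstract

def mfcc_features(pcg_chunk, sr=SR, n_mfcc=13):
    """Frame-averaged MFCC vector for one 2 s PCG chunk (illustrative setup)."""
    mfcc = librosa.feature.mfcc(y=pcg_chunk, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def equal_error_threshold(scores, labels):
    """Sweep the decision threshold to find where FPR is closest to FNR (EER)."""
    best_t, best_gap = 0.5, np.inf
    for t in np.linspace(0.0, 1.0, 1001):
        accept = scores >= t
        fpr = np.mean(accept[labels == 0])    # impostor chunks accepted
        fnr = np.mean(~accept[labels == 1])   # genuine chunks rejected
        if abs(fpr - fnr) < best_gap:
            best_t, best_gap = t, abs(fpr - fnr)
    return best_t

def train_verifier(X_train, y_train):
    """Binary verifier: 1 = claimed identity, 0 = impostor (hypothetical sizes)."""
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    return clf

# Typical use (X_* are stacked MFCC vectors from non-overlapping 70%/30% splits):
# clf = train_verifier(X_train, y_train)
# scores = clf.predict_proba(X_test)[:, 1]
# threshold = equal_error_threshold(scores, y_test)
```

The recurrence filter that the abstract applies over longer windows (2 to 22 s) is not reproduced here, since its formulation is not detailed in this record.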
Year: 2024
Language: English
Format: Electronic
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
Volume: 13
Issue: 22
First page: 1
Last page: 22
Number of pages: 22
URL: https://www.mdpi.com/2079-9292/13/22/4377
Scope: International
Peer review: Anonymous expert reviewers
Keywords: biometrics, heart print verification, Mel-Frequency Cepstral Coefficient (MFCC), MLP neural networks
Resource type: info:eu-repo/semantics/article
Authors: Avanzato, Roberta; Beritelli, Francesco; Serrano, Salvatore
Category: 14.a Contributo in Rivista::14.a.1 Articolo su rivista (journal article)
Number of authors: 3
Access: open
Files in this record:
File: electronics-13-04377-with-cover.pdf
Description: Full text
Access: open access
Type: Publisher's version (PDF)
License: Creative Commons
Size: 2.41 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11570/3319650
Citations
  • PubMed Central: not available
  • Scopus: 4
  • Web of Science: 3