TY - GEN
T1 - Intelligent Jamming of Deep Neural Network Based Signal Classification for Shared Spectrum
AU - Zhang, Wenhan
AU - Krunz, Marwan
AU - Ditzler, Gregory
N1 - Publisher Copyright: © 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Deep neural networks (DNNs) have recently been applied in the classification of radio frequency (RF) signals. One use case of interest relates to the discernment between different wireless technologies that share the spectrum. Although highly accurate DNN classifiers have been proposed, preliminary research points to the vulnerability of these classifiers to adversarial machine learning (AML) attacks. In one such attack, a surrogate DNN model is trained by the attacker to produce intelligently crafted low-power 'perturbations' that degrade the classification accuracy of the legitimate classifier. In this paper, we design four DNN-based classifiers for the identification of Wi-Fi, 5G NR-Unlicensed (NR-U), and LTE LAA transmissions over the 5 GHz UNII bands. Our DNN models include both convolutional neural networks (CNNs) and several recurrent neural network (RNN) models, particularly LSTM and Bidirectional LSTM (BiLSTM) networks. We demonstrate the high classification accuracy of these models under 'benign' (non-adversarial) noise. We then study the efficacy of these classifiers under AML-based perturbations. Specifically, we use the fast gradient sign method (FGSM) to generate adversarial perturbations. Different attack scenarios are studied, depending on how much information the attacker has about the defender's classifier. In one extreme scenario, called a 'white-box' attack, the attacker has full knowledge of the defender's DNN, including its hyperparameters, its training dataset, and even the seeds used to train the network. This attack is shown to significantly degrade the classification accuracy even when the FGSM-based perturbations are low power, i.e., the received SNR is relatively high. We then consider more realistic attack scenarios, where the attacker has partial or no knowledge of the defender's classifier. Even under limited knowledge, adversarial perturbations can still lead to a significant reduction in the classification accuracy, relative to classification under AWGN at the same SNR level.
UR - http://www.scopus.com/inward/record.url?scp=85124148235&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124148235&partnerID=8YFLogxK
DO - 10.1109/MILCOM52596.2021.9653072
M3 - Conference contribution
T3 - Proceedings - IEEE Military Communications Conference MILCOM
SP - 987
EP - 992
BT - MILCOM 2021 - 2021 IEEE Military Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Military Communications Conference, MILCOM 2021
Y2 - 29 November 2021 through 2 December 2021
ER -