Description

Robustness of deep learning methods remains an open issue in a variety of NLP tasks due to the inherent complexity of neural networks. In this paper, we focus on a simple yet effective model for large-scale text classification: Multinomial Naive Bayes (MNB). We derive its robust counterpart, Robust Naive Bayes (RNB), under several adversarial settings relevant to text. We compare the robustness of our model with that of SVMs, logistic regression, and neural networks across a variety of settings. Our results show that RNB performs comparably to the other models under random perturbations but vastly outperforms them against targeted attacks. We also describe a training algorithm for our model that is orders of magnitude faster than the training of more complex models.
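To illustrate why Naive Bayes training can be so fast, here is a minimal sketch of standard Multinomial Naive Bayes with Laplace smoothing: fitting reduces to a single counting pass and a log, with no iterative optimization. This is the classical MNB baseline only, not the robust (RNB) training procedure from the paper; the function names and toy data are illustrative assumptions.

```python
import numpy as np

def train_mnb(X, y, alpha=1.0):
    """Closed-form MNB training: one counting pass over term-count data.
    X: (n_docs, n_terms) count matrix; y: (n_docs,) class labels."""
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    # Laplace-smoothed per-class term counts -> log-likelihoods
    counts = np.array([X[y == c].sum(axis=0) + alpha for c in classes])
    log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_lik

def predict_mnb(model, X):
    classes, log_prior, log_lik = model
    # Log-posterior (up to a constant) is linear in the counts
    scores = X @ log_lik.T + log_prior
    return classes[np.argmax(scores, axis=1)]

# Toy example with a two-term vocabulary (illustrative data)
X = np.array([[3, 0], [4, 1], [0, 2], [1, 3]])
y = np.array([0, 0, 1, 1])
model = train_mnb(X, y)
print(predict_mnb(model, np.array([[5, 0], [0, 5]])))  # → [0 1]
```

Because the fit is closed-form, training cost is a single pass over the data, which is consistent with the abstract's claim that such models train orders of magnitude faster than iteratively optimized ones.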
