Title | Class-based Prediction Errors to Categorize Text with Out-of-vocabulary Words |
Publication Type | Conference Proceedings |
Year of Conference | 2017 |
Authors | Serrà, Joan, Ilias Leontiadis, Dimitris Spathis, Gianluca Stringhini, Jeremy Blackburn, and Athena Vakali |
Series Title | ALW1'17 |
Conference Location | Vancouver, Canada |
Abstract | Common approaches to text categorization essentially rely either on n-gram counts or on word embeddings. This presents important difficulties in highly dynamic or quickly-interacting environments, where the appearance of new words and/or varied misspellings is the norm. A paradigmatic example of this situation is abusive online behavior, with social networks and media platforms struggling to effectively combat uncommon or non-blacklisted hate words. To better deal with these issues in those fast-paced environments, we propose using the error signal of class-based language models as input to text classification algorithms. In particular, we train a next-character prediction model for any given class, and then exploit the error of such class-based models to inform a neural network classifier. This way, we shift from the ability to describe seen documents to the ability to predict unseen content. Preliminary studies using out-of-vocabulary splits from abusive tweet data show promising results, outperforming competitive text categorization strategies by 4–11%. |
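The core idea in the abstract can be illustrated with a short, self-contained sketch (not the authors' code): one next-character language model is trained per class, and each model's per-character prediction error on a new document is used as a signal for classification. The paper uses neural next-character models and a neural network classifier; the add-one-smoothed character trigram model and the lowest-error decision rule below are simplified stand-ins, and all names in the example are illustrative.

```python
# Hedged sketch of class-based prediction errors for text categorization.
# A character trigram LM stands in for the paper's neural next-character model;
# a lowest-error rule stands in for the neural network classifier.
import math
from collections import defaultdict

class CharTrigramLM:
    """Add-one-smoothed next-character model conditioned on the two previous characters."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.context_totals = defaultdict(int)
        self.vocab = set()

    def fit(self, texts):
        for text in texts:
            padded = "##" + text          # '#' marks the start of a document
            for i in range(2, len(padded)):
                ctx, ch = padded[i - 2:i], padded[i]
                self.counts[ctx][ch] += 1
                self.context_totals[ctx] += 1
                self.vocab.add(ch)
        return self

    def cross_entropy(self, text):
        """Average negative log-probability per character: the 'prediction error'."""
        padded = "##" + text
        v = len(self.vocab) + 1           # +1 leaves mass for unseen characters
        total = 0.0
        for i in range(2, len(padded)):
            ctx, ch = padded[i - 2:i], padded[i]
            p = (self.counts[ctx][ch] + 1) / (self.context_totals[ctx] + v)
            total += -math.log(p)
        return total / max(len(text), 1)

def train_class_models(labeled_docs):
    """One language model per class, trained only on that class's documents."""
    by_class = defaultdict(list)
    for text, label in labeled_docs:
        by_class[label].append(text)
    return {label: CharTrigramLM().fit(texts) for label, texts in by_class.items()}

def error_features(models, text):
    """Vector of per-class prediction errors; the paper feeds such errors to a classifier."""
    return {label: lm.cross_entropy(text) for label, lm in models.items()}

if __name__ == "__main__":
    train = [
        ("have a wonderful day friend", "normal"),
        ("thanks for sharing this great news", "normal"),
        ("you are a worthless idiot", "abusive"),
        ("shut up you stupid moron", "abusive"),
    ]
    models = train_class_models(train)
    # Out-of-vocabulary-style test: the obfuscated word never appears in training,
    # yet character-level prediction errors can still carry class information.
    errors = error_features(models, "you st00pid m0ron")
    print(errors)
    print("predicted:", min(errors, key=errors.get))  # lowest error = best-fitting class model
```

Because the features are prediction errors rather than word counts, misspelled or obfuscated tokens still produce usable character-level signal, which is the motivation the abstract gives for out-of-vocabulary settings.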