Deep learning regularization in imbalanced data
Date
2020-11-03
Authors
Kamalov, Firuz; Leung, Ho Hon
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
Deep neural networks are known to have a large number of parameters, which can lead to overfitting. As a result, various regularization methods designed to mitigate overfitting have become an indispensable part of many neural network architectures. However, it remains unclear which regularization methods are the most effective. In this paper, we examine the impact of regularization on neural network performance in the context of imbalanced data. We consider three main regularization approaches: L1, L2, and dropout regularization. Numerical experiments reveal that the L1 regularization method can be an effective tool to prevent overfitting in neural network models for imbalanced data. Index Terms: regularization, neural networks, imbalanced data. © 2020 IEEE.
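The three regularization approaches named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the penalty coefficient `lam`, the dropout `rate`, and the placeholder `data_loss` are illustrative assumptions. L1 adds a penalty proportional to the absolute value of the weights (encouraging sparsity), L2 adds a penalty proportional to their squares (smooth shrinkage), and dropout randomly zeroes activations during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix of one dense layer (shape is illustrative).
W = rng.normal(size=(4, 3))

def l1_penalty(w, lam=0.01):
    # L1: lam * sum(|w|) — tends to drive many weights exactly to zero.
    return lam * np.abs(w).sum()

def l2_penalty(w, lam=0.01):
    # L2: lam * sum(w^2) — shrinks all weights smoothly toward zero.
    return lam * np.square(w).sum()

def dropout(a, rate=0.5, rng=rng):
    # Inverted dropout: zero each activation with probability `rate`,
    # and rescale the survivors so the expected activation is unchanged.
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)

# The regularized training objective adds the penalty to the task loss.
data_loss = 0.42  # placeholder value for the unregularized loss
total_loss = data_loss + l1_penalty(W)          # L1-regularized
total_loss_l2 = data_loss + l2_penalty(W)       # L2-regularized
hidden = dropout(np.ones(8), rate=0.5)          # dropout applied to activations
```

In practice these appear as, e.g., weight-decay terms in the optimizer (L2) or dropout layers between dense layers; the paper's finding concerns which of the three best controls overfitting on imbalanced data.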
Description
This conference paper is not available at CUD collection. The version of scholarly record of this conference paper is published in 2020 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI) (2020), available online at: https://doi.org/10.1109/CCCI49893.2020.9256674
Type
Conference Paper
Keywords
imbalanced data, neural networks, regularization
Citation
Kamalov, F., & Leung, H. H. (2020, November). Deep learning regularization in imbalanced data. In 2020 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI) (pp. 1-5). IEEE. https://doi.org/10.1109/CCCI49893.2020.9256674