Evaluation of the performance of deep learning techniques over tampered dataset

UNCG Author/Contributor (non-UNCG co-authors, if there are any, appear on document)
Mokhaled N.A. Al-Hamadani (Creator)
Institution
The University of North Carolina at Greensboro (UNCG)
Web Site: http://library.uncg.edu/
Advisor
Shan Suthaharan

Abstract: The reduction of classification error over supervised data sets is the main goal of Deep Learning (DL) approaches. However, tampered data poses a serious problem for machine learning techniques, and the performance of supervised learning algorithms over tampered training data has become a topic of recent interest to the machine learning community. In this thesis, the well-known deep learning techniques No-Drop, Dropout, and DropConnect are investigated using a toy example data set, the popular handwritten digits data set (MNIST), and our new natural images data set. The investigation is divided into three parts: training deep learning techniques over regular data sets, over tampered data sets, and over noisy data sets. First, the deep learning techniques were evaluated over regular data sets, where the experiments showed good results in terms of accuracy and error rate. Then, the techniques were evaluated on a tampered MNIST data set; this tampering mechanism is a first step toward the security analysis of deep learning techniques. The results of the DL techniques over the tampered MNIST data set matched those over the regular MNIST data set. The investigation therefore continued by adding two kinds of noise, Gaussian noise and salt-and-pepper noise, to reduce the clarity of the MNIST data set. The results showed that the deep learning techniques still achieve good accuracy in a noisy environment. The contribution of this thesis is extensive experimental evidence that deep learning techniques trained over tampered data can still obtain high classification accuracy.
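The two noise mechanisms mentioned in the abstract, Gaussian noise and salt-and-pepper noise applied to MNIST-style images, can be sketched as follows. This is a minimal NumPy illustration, not code from the thesis; the function names and the noise parameters (`sigma`, `amount`) are assumptions chosen for the example.

```python
import numpy as np

def add_gaussian_noise(images, sigma=0.1, seed=0):
    """Add zero-mean Gaussian noise and clip pixel values back to [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

def add_salt_and_pepper_noise(images, amount=0.05, seed=0):
    """Flip roughly a fraction `amount` of pixels to 0 (pepper) or 1 (salt)."""
    rng = np.random.default_rng(seed)
    noisy = images.copy()
    mask = rng.random(images.shape)
    noisy[mask < amount / 2] = 0.0        # pepper: darkest value
    noisy[mask > 1 - amount / 2] = 1.0    # salt: brightest value
    return noisy

# Example on a dummy batch of MNIST-sized (28x28) images with values in [0, 1].
batch = np.full((2, 28, 28), 0.5)
g = add_gaussian_noise(batch)
sp = add_salt_and_pepper_noise(batch)
```

After such corruption, the noisy images would replace the clean ones in the training set before fitting the No-Drop, Dropout, or DropConnect networks.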

Additional Information

Publication
Thesis
Language: English
Date: 2015
Keywords
Deep Learning, DropConnect, Dropout, Machine Learning, Neural Networks, No-Drop
Subjects
Machine learning