Interpretability and performance of deep neural network based anomaly detection in cyber security and telecommunications
The rapid development of technology and the proliferation of data have driven businesses to pursue anomaly detection research. Artificial neural networks (ANNs) achieve state-of-the-art results in anomaly detection, but end users cannot easily interpret their output. To leverage ANNs for anomaly detection, it is therefore important that the neural network models be interpretable. This thesis addresses the question of whether it is possible to design and develop high-performance, interpretable anomaly detection solutions based on artificial neural networks.

Anomaly detection is an important technique in cyber security because, unlike signature-based methods, it is capable of detecting previously unseen attacks. One approach to developing a host-based intrusion detection system is to examine sequences of operating system call traces. Two approaches to anomaly detection for sequential data are a prediction-based approach and a reconstruction-error-based approach. A prediction-based approach predicts the next element in a sequence from the previously observed elements. This work combines stacked Convolutional Neural Networks (CNNs) with Gated Recurrent Units (GRUs) to analyse operating system call sequences with an order of magnitude shorter training times. The reconstruction-error-based approach leverages bidirectional autoencoders to detect anomalous system call sequences, and achieved a better Area Under the Curve (AUC) than the predictive approach. This approach to anomaly detection forms the basis for an interpretability framework.

Anomaly detection is also an important technique in telecommunications monitoring. The Cluster Characterized Autoencoder (CCA) Framework was designed, implemented, and evaluated to identify candidate anomalies and interpret the model predictions.
This framework addresses neural network interpretability, supporting network engineers in troubleshooting and aiding root cause analysis.
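The reconstruction-error idea described above can be illustrated with a minimal sketch. This is not the thesis implementation (which uses bidirectional autoencoders over system call sequences); it stands in a linear autoencoder (equivalent to PCA) over invented toy count vectors, purely to show the scoring logic: reconstruct the input, measure the error, and flag inputs whose error exceeds a threshold derived from normal training data.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a linear autoencoder (PCA): keep the top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T  # (d, k) projection used for both encoding and decoding
    return mu, W

def reconstruction_error(X, mu, W):
    """Per-sample mean squared reconstruction error."""
    Z = (X - mu) @ W        # encode into the k-dimensional latent space
    Xr = Z @ W.T + mu       # decode back to the input space
    return np.mean((X - Xr) ** 2, axis=1)

rng = np.random.default_rng(0)
# Toy "normal" data: each row is a hypothetical bag-of-system-calls count vector.
normal = rng.poisson(3.0, size=(200, 10)).astype(float)
mu, W = fit_linear_autoencoder(normal, k=4)

# Threshold on the training-set error distribution (99th percentile here).
train_err = reconstruction_error(normal, mu, W)
threshold = np.quantile(train_err, 0.99)

# Unusually heavy activity reconstructs poorly and is flagged as anomalous.
anomaly = rng.poisson(12.0, size=(5, 10)).astype(float)
scores = reconstruction_error(anomaly, mu, W)
flags = scores > threshold
```

The choice of threshold is a design decision: a percentile of the training error trades false positives against missed detections, which is also what the AUC comparison in the thesis measures across all possible thresholds.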
- PhD Theses 