Deep learning (DL) methods are highly promising for cognitive decoding, given their unmatched ability to learn versatile representations of complex data.
Yet their widespread application in cognitive decoding is hindered by their general lack of interpretability, as well as by difficulties in applying them to small datasets and in ensuring their reproducibility and robustness.
Deep learning models are rarely adopted in clinical workflows, mainly due to their lack of interpretability.
The black-box nature of deep learning models has created a need for strategies that explain their decision processes, giving rise to the field of explainable artificial intelligence (XAI).
In this paper, we investigate the various deep learning techniques employed for network intrusion detection, and we introduce a deep learning framework for cybersecurity applications.