
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)

Annals of Computer Science and Information Systems, Volume 43

Q-ID: A Reinforcement Learning Framework for Adaptive Intrusion Detection


DOI: http://dx.doi.org/10.15439/2025F1820

Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 35–42


Abstract. The growing sophistication and frequency of cyber threats in communication networks demand Intrusion Detection Systems (IDS) that adapt to evolving attack patterns. Traditional approaches, based on static rules or purely supervised models, often fail to recognize novel attacks, leaving critical infrastructures exposed. Reinforcement Learning (RL) provides a dynamic alternative by enabling agents to refine detection policies through continuous feedback. In this work, we propose a Q-learning-based Intrusion Detection (Q-ID) system and train it on the CICIDS2017 dataset. The RL formulation defines the state as the flow's feature vector, the action as the classification decision, and the reward as +1 for correct predictions and −1 otherwise. To ensure stable convergence, the reward is integrated with cross-entropy loss in a hybrid objective, allowing continued improvement even after the supervised component has plateaued. Unlike prior IDS methods that rely solely on offline supervised training, our approach fuses reinforcement feedback with supervised optimization to support adaptive and robust detection. Experimental results, obtained under class imbalance and realistic evaluation splits, show that the proposed system achieves 99.3% accuracy, outperforming strong baselines including deep neural networks and traditional classifiers. Moreover, the RL agent demonstrates robustness under skewed traffic distributions and adaptability to previously unseen attack types. These results highlight reinforcement learning as a promising paradigm for building resilient IDS in critical communication environments.
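The hybrid objective sketched in the abstract — a cross-entropy term combined with a ±1 reward on the agent's chosen classification — can be illustrated with a minimal toy example. The sketch below uses a linear softmax scorer over synthetic "flow" features in place of the paper's actual model; the dimensions, learning rate `lr`, mixing weight `lam`, and the label rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_classes = 10, 2           # toy sizes (CICIDS2017 flows have far more features)
W = np.zeros((n_features, n_classes))   # linear scorer: Q(s, a) = (s @ W)[a]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def hybrid_update(W, s, y, lr=0.1, lam=0.5):
    """One SGD step of an assumed hybrid objective: cross-entropy on the
    softmax over action scores, plus a +/-1 reward on the greedy action."""
    q = s @ W                        # per-action scores (state = flow feature vector)
    p = softmax(q)
    a = int(np.argmax(q))            # greedy action = predicted class
    r = 1.0 if a == y else -1.0      # reward: +1 correct, -1 otherwise

    # supervised component: cross-entropy gradient w.r.t. W
    grad_ce = np.outer(s, p)
    grad_ce[:, y] -= s

    # RL component: push the taken action's score up (r=+1) or down (r=-1)
    grad_rl = np.zeros_like(W)
    grad_rl[:, a] -= r * s

    return W - lr * (grad_ce + lam * grad_rl), r

# toy training loop on synthetic, linearly separable "flows"
X = rng.normal(size=(200, n_features))
y = (X[:, 0] > 0).astype(int)        # hypothetical label rule for illustration
for s, label in zip(X, y):
    W, _ = hybrid_update(W, s, int(label))

acc = float(np.mean([int(np.argmax(s @ W)) == label for s, label in zip(X, y)]))
```

The point of the mixed gradient is the one the abstract makes: even once the cross-entropy term has largely plateaued, the reward term keeps adjusting the scores of the actions the agent actually takes.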

References

  1. M. D. J. Dulik, “Cyber Security Challenges in Future Military Battlefield Information Networks”, Advances in Military Technology, vol. 14, no. 2, pp. 263-277, 2019.
  2. S. Desai, B. Dave, T. Vyas and A. R. Nair, “Intrusion Detection System - Deep Learning Perspective,” 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, pp. 1193-1198, 2021, https://dx.doi.org/10.1109/ICAIS50930.2021.9395992.
  3. M. Wiering and M. van Otterlo, “Reinforcement Learning”, Berlin, Germany: Springer, vol. 12, 2012.
  4. Canadian Institute for Cybersecurity, “Intrusion Detection Evaluation Dataset (CIC-IDS2017),” University of New Brunswick. [Online]. Available: https://www.unb.ca/cic/datasets/ids-2017.html. [Accessed: 10-Jun-2024].
  5. S. Otoum, B. Kantarci and H. Mouftah, “Empowering Reinforcement Learning on Big Sensed Data for Intrusion Detection,” ICC 2019 - 2019 IEEE International Conference on Communications (ICC), Shanghai, China, pp. 1-7, 2019, https://dx.doi.org/10.1109/ICC.2019.8761575.
  6. M. Maliha, “A Supervised Learning Approach: Detection of Cyber Attacks,” 2021 IEEE International Conference on Telecommunications and Photonics (ICTP), Dhaka, Bangladesh, pp. 1-5, 2021, https://dx.doi.org/10.1109/ICTP53732.2021.9744169.
  7. S. Choudhary and N. Kesswani, “Analysis of KDD-Cup’99, NSL-KDD and UNSW-NB15 datasets using deep learning in IoT”, Procedia Computer Science, vol. 167, pp. 1561-1573, 2020.
  8. S. Norwahidayah, A. A. Noraniah, N. Farahah, A. Amirah, N. Liyana and N. Suhana, “Performances of artificial neural network (ANN) and particle swarm optimization (PSO) using KDD cup’99 dataset in intrusion detection system (IDS)”, J. Phys. Conf. Ser., vol. 1874, no. 1, May 2021.
  9. D. Wang, D. Tan and L. Liu, “Particle swarm optimization algorithm: An overview”, Soft Comput., vol. 22, no. 2, pp. 387-408, 2018.
  10. K. L. Fox, R. R. Henning, J. H. Reed and R. Simonian, “A neural network approach towards intrusion detection”, in Proceedings of the 13th National Computer Security Conference, vol. 1, pp. 125-134, October 1990.
  11. H. Debar, M. Becker and D. Siboni, “A neural network component for an intrusion detection system”, in IEEE Symposium on Security and Privacy, vol. 727, pp. 240-250, 1992.
  12. A. Cansian, E. D. S. Moreira, A. C. P. D. L. F. Carvalho and J. M. Bonifácio Junior, “Network intrusion detection using neural networks”, in Proceedings of the International Conference on Computational Intelligence and Multimedia Applications, 1997.
  13. M. Ramadas, S. Ostermann and B. Tjaden, “Detecting anomalous network traffic with self-organizing maps”, in International Workshop on Recent Advances in Intrusion Detection, Berlin, Heidelberg: Springer, pp. 36-54, September 2003.
  14. A. Ghubaish, Z. Yang and R. Jain, “HDRL-IDS: A Hybrid Deep Reinforcement Learning Intrusion Detection System for Enhancing the Security of Medical Applications in 5G Networks,” 2024 International Conference on Smart Applications, Communications and Networking (SmartNets), Harrisonburg, VA, USA, 2024, pp. 1-6, https://dx.doi.org/10.1109/SmartNets61466.2024.10577692.
  15. C. Mahjoub, M. Hamdi, R. I. Alkanhel, S. Mohamed and R. Ejbali, “An adversarial environment reinforcement learning-driven intrusion detection algorithm for Internet of Things”, EURASIP Journal on Wireless Communications and Networking, vol. 2024, no. 1, p. 21, 2024.
  16. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra and M. Riedmiller, “Playing Atari with Deep Reinforcement Learning”, arXiv preprint arXiv:1312.5602, 2013.
  17. P. Wolf, C. Hubschneider, M. Weber, A. Bauer, J. Härtl, F. Dürr and J. M. Zöllner, “Learning how to drive in a real world simulation with deep q-networks”, in 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 244-250, June 2017.
  18. I. Rosu, “The Bellman principle of optimality”, 2002. [Online]. Available: http://faculty.chicagogsb.edu/ioanid.rosu/research/notes/bellman.pdf.