
Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS)

Annals of Computer Science and Information Systems, Volume 43

Deep Differentiable Logic Gate Networks Based on Fuzzy Łukasiewicz T-norm


DOI: http://dx.doi.org/10.15439/2025F1666

Citation: Proceedings of the 20th Conference on Computer Science and Intelligence Systems (FedCSIS), M. Bolanowski, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 43, pages 219–230


Abstract. Differentiable Logic Gate Networks (DLNs) offer a compelling framework for symbolic interpretability and reduced inference cost. Building on prior work using the Menger [12] and Zadeh [18] T-norms, we investigate the Łukasiewicz T-norm as an alternative relaxation of classical logic gates. While it provides strong gradients in some regions, its flat regions produce vanishing gradients that hinder training. To address this issue, we use an initialization strategy [13], analogous to residual connections in neural networks, that encourages error-signal propagation during training. Our empirical results show that Łukasiewicz-based DLNs, though slightly less accurate, benefit from faster inference and lower memory requirements than neural networks, opening the possibility of practical application, e.g., on resource-constrained devices. Owing to their structural clarity, DLNs facilitate direct inspection and tracing of information flow, which makes them suitable for application in explainable artificial intelligence (XAI).
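As an illustrative sketch (not code from the paper), the Łukasiewicz relaxations of the basic logic gates and the flat-region gradient behavior the abstract refers to can be written in a few lines; the function names here are hypothetical:

```python
def luk_and(a: float, b: float) -> float:
    """Łukasiewicz T-norm: differentiable relaxation of AND.

    Flat (zero gradient) whenever a + b <= 1."""
    return max(0.0, a + b - 1.0)

def luk_or(a: float, b: float) -> float:
    """Łukasiewicz T-conorm: relaxation of OR; flat whenever a + b >= 1."""
    return min(1.0, a + b)

def luk_not(a: float) -> float:
    """Standard fuzzy negation."""
    return 1.0 - a

# On Boolean (0/1) inputs the relaxations agree with the classical truth tables.
assert luk_and(1.0, 1.0) == 1.0 and luk_and(1.0, 0.0) == 0.0
assert luk_or(0.0, 0.0) == 0.0 and luk_or(0.0, 1.0) == 1.0

# Finite-difference check: the gradient of the relaxed AND vanishes in the
# flat region (a + b < 1), which is the training difficulty described above.
eps = 1e-4
grad_flat = (luk_and(0.2 + eps, 0.3) - luk_and(0.2, 0.3)) / eps  # a + b < 1
grad_live = (luk_and(0.8 + eps, 0.7) - luk_and(0.8, 0.7)) / eps  # a + b > 1
print(grad_flat, grad_live)  # 0.0 in the flat region, ~1.0 otherwise
```

The piecewise-linear form is what makes inference cheap (additions and clamps only), while the same flatness is what motivates the residual-style initialization mentioned in the abstract.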

References

  1. B. Becker and R. Kohavi, “Adult," UCI Machine Learning Repository, 1996. Available: https://doi.org/10.24432/C5XW20
  2. S. Bosse, “IoT and Edge Computing using virtualized low-resource integer Machine Learning with support for CNN, ANN, and Decision Trees,” Proc. 18th Conf. Comput. Sci. Intell. Syst. (FedCSIS), vol. 35, pp. 367–376, 2023, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., IEEE. Available: http://dx.doi.org/10.15439/2023F7745
  3. J. Choi, Z. Wang, S. Venkataramani, P. Chuang, V. Srinivasan, and K. Gopalakrishnan, “PACT: Parameterized Clipping Activation for Quantized Neural Networks," 2018. Available: https://arxiv.org/abs/1805.06085
  4. L. da Cruz, C. Sierra-Franco, G. Silva-Calpa, and A. Raposo, “Enabling Autonomous Medical Image Data Annotation: A human-in-the-loop Reinforcement Learning Approach,” in *Proc. 16th Conf. on Computer Science and Intelligence Systems (FedCSIS)*, vol. 25, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., IEEE, 2021, pp. 271–279. Available: http://dx.doi.org/10.15439/2021F86
  5. L. Dey, S. Jana, T. Dasgupta, and T. Gupta, “Deciphering Clinical Narratives – Augmented Intelligence for Decision Making in Healthcare Sector,” in *Proc. 18th Conf. on Computer Science and Intelligence Systems*, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., vol. 35, *Annals of Computer Science and Information Systems*, IEEE, 2023, pp. 11–24. Available: http://dx.doi.org/10.15439/2023F3385
  6. D. Długosz, A. Królak, T. Eftestøl, S. Ørn, T. Wiktorski, K. R. J. Oskal, and M. Nygård, “ECG Signal Analysis for Troponin Level Assessment and Coronary Artery Disease Detection: the NEEDED Study 2014,” in *Proc. 2018 Federated Conf. Comput. Sci. Inf. Syst.*, vol. 15, M. Ganzha, L. Maciaszek, and M. Paprzycki, Eds. IEEE, 2018, pp. 1065–1068. Available: http://dx.doi.org/10.15439/2018F247
  7. S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, “Deep learning with limited numerical precision," in Proceedings of the 32nd International Conference on Machine Learning (ICML’15), 2015, pp. 1737–1746.
  8. T. Hoefler, D. Alistarh, T. Ben-Nun, N. Dryden, and A. Peste, “Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks," J. Mach. Learn. Res., vol. 22, no. 1, art. no. 241, Jan. 2021, 124 pp.
  9. D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization," 2014. Available: https://arxiv.org/abs/1412.6980
  10. E. van Krieken, E. Acar, and F. van Harmelen, “Analyzing Differentiable Fuzzy Logic Operators," Artificial Intelligence, vol. 302, 2022, art. 103602. Available: https://www.sciencedirect.com/science/article/pii/S0004370221001533
  11. J. Kosiński, K. Szklanny, A. Wieczorkowska, and M. Wichrowski, “An Analysis of Game-Related Emotions Using EMOTIV EPOC,” *Proc. 2018 Federated Conf. on Computer Science and Information Systems (FedCSIS)*, vol. 15, Annals of Computer Science and Information Systems, pp. 913–917, 2018, M. Ganzha, L. Maciaszek, and M. Paprzycki, Eds. IEEE. Available: http://dx.doi.org/10.15439/2018F296
  12. A. Krizhevsky, “Learning Multiple Layers of Features from Tiny Images," Univ. of Toronto, 2012.
  13. Y. LeCun, C. Cortes, and C. J. C. Burges, “The MNIST database of handwritten digits," 1998. Available: http://yann.lecun.com/exdb/mnist/
  14. M. Marcinkiewicz and G. Mrukwa, “Quantitative Impact of Label Noise on the Quality of Segmentation of Brain Tumors on MRI scans,” Proc. 2019 Federated Conf. on Computer Science and Information Systems (FedCSIS), vol. 18, pp. 61–65, 2019. Edited by M. Ganzha, L. Maciaszek, and M. Paprzycki. IEEE. Available: http://dx.doi.org/10.15439/2019F273
  15. K. Menger, “Statistical Metrics," Proc. Nat. Acad. Sci. U.S.A., vol. 28, no. 12, Dec. 1942, pp. 535–537. Available: https://doi.org/10.1073/pnas.28.12.535
  16. D. C. Mocanu, E. Mocanu, P. Stone, P. H. Nguyen, M. Gibescu, and A. Liotta, “Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science," Nature Communications, vol. 9, no. 1, Jun. 2018, art. 2383. Available: https://doi.org/10.1038/s41467-018-04316-3
  17. A. Morar, F. Moldoveanu, A. Moldoveanu, O. Balan, and V. Asavei, “GPU Accelerated 2D and 3D Image Processing,” in *Proc. 2017 Federated Conf. Comput. Sci. Inf. Syst. (FedCSIS)*, vol. 11, M. Ganzha, L. Maciaszek, and M. Paprzycki, Eds., IEEE, 2017, pp. 653–656. Available: http://dx.doi.org/10.15439/2017F265
  18. A. Paszke et al., “PyTorch: An Imperative Style, High-Performance Deep Learning Library," 2019. Available: https://arxiv.org/abs/1912.01703
  19. F. Petersen, C. Borgelt, H. Kuehne, and O. Deussen, “Deep Differentiable Logic Gate Networks," in Advances in Neural Information Processing Systems, vol. 35, 2022, pp. 2006–2018. Available: https://proceedings.neurips.cc/paper_files/paper/2022/file/0d3496dd0cec77a999c98d35003203ca-Paper-Conference.pdf
  20. F. Petersen, H. Kuehne, C. Borgelt, J. Welzel, and S. Ermon, “Convolutional Differentiable Logic Gate Networks," in Advances in Neural Information Processing Systems, vol. 37, 2024, pp. 121185–121203. Available: https://proceedings.neurips.cc/paper_files/paper/2024/file/db988b089d8d97d0f159c15ed0be6a71-Paper-Conference.pdf
  21. M. Pudo, M. Wosik, and A. Janicki, “Open Vocabulary Keyword Spotting with Small-Footprint ASR-based Architecture and Language Models,” *Proc. 18th Conf. Computer Science and Intelligence Systems (FedCSIS)*, Annals of Computer Science and Information Systems, vol. 35, pp. 657–666, 2023, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., IEEE. Available: http://dx.doi.org/10.15439/2023F8594
  22. H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe, “Binary neural networks: A survey," Pattern Recognition, vol. 105, 2020, art. 107281. Available: https://www.sciencedirect.com/science/article/pii/S0031320320300856
  23. M. Rapoport and T. Tamir, “Best Response Dynamics for VLSI Physical Design Placement,” in *Proc. 2019 Federated Conf. on Computer Science and Information Systems (FedCSIS)*, M. Ganzha, L. Maciaszek, and M. Paprzycki, Eds., IEEE, vol. 18, 2019, pp. 147–156. Available: http://dx.doi.org/10.15439/2019F91
  24. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, Oct. 1986, pp. 533–536. Available: https://doi.org/10.1038/323533a0
  25. A. Telikani, A. Tahmassebi, W. Banzhaf, and A. H. Gandomi, “Evolutionary Machine Learning: A Survey," ACM Comput. Surv., vol. 54, no. 8, art. 161, Oct. 2021. Available: https://doi.org/10.1145/3467477
  26. S. Thrun et al., “The MONK's Problems: A Performance Comparison of Different Learning Methods," 1991. Available: https://api.semanticscholar.org/CorpusID:59810521
  27. L. Uberg and S. Kadry, “Analysis of Brain Tumor Using MRI Images,” in *Proc. 17th Conf. Computer Science and Intelligence Systems (FedCSIS)*, M. Ganzha, L. Maciaszek, M. Paprzycki, and D. Ślęzak, Eds., vol. 30, *Annals of Computer Science and Information Systems*, IEEE, 2022, pp. 201–204. Available: http://dx.doi.org/10.15439/2022F69
  28. P. Wasilewski, and Ch. D. Nguy, “Deep Differentiable Logic Gate Networks Based on Fuzzy Zadeh’s T-norm,” Proceedings of 6th Polish Conference on Artificial Intelligence (PP-RAI 2025), Katowice, Poland, Apr. 7-9, 2025. to appear in Lecture Notes in Networks and Systems, Springer.
  29. L. A. Zadeh, “Fuzzy sets," Information and Control, vol. 8, no. 3, 1965, pp. 338–353. Available: https://www.sciencedirect.com/science/article/pii/S001999586590241X
  30. M. Zwitter and M. Soklic, “Breast Cancer," UCI Machine Learning Repository, 1988. Available: https://doi.org/10.24432/C51P4M