Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 15

Proceedings of the 2018 Federated Conference on Computer Science and Information Systems

Deep Evolving Stacking Convex Cascade Neo-Fuzzy Network and Its Rapid Learning

DOI: http://dx.doi.org/10.15439/2018F200

Citation: Proceedings of the 2018 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 15, pages 29-33

Abstract. A deep evolving stacking convex neo-fuzzy network is proposed. It is a feedforward cascade hybrid system whose layer-stacks are formed by generalized neo-fuzzy neurons that implement Wang--Mendel fuzzy reasoning. Learning algorithms that are optimal in the sense of speed are proposed for the network. Owing to independent adjustment of the layers, parallelized computation in the nonlinear synapses, and optimized learning, the proposed network is fast enough to process information in online mode.
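
The neo-fuzzy neuron underlying each layer-stack sums the outputs of nonlinear synapses, each of which weights the membership degrees of one input over a grid of triangular functions; since the output is linear in the weights, a one-step gradient update with a data-dependent step size fits each sample exactly. A minimal sketch in Python (the class and function names, the membership grid, and the unit-interval input range are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def tri_memberships(x, centers):
    """Triangular membership degrees of scalar x over a 1-D grid of centers.
    Neighbouring functions overlap so the degrees always sum to 1."""
    h = len(centers)
    mu = np.zeros(h)
    if x <= centers[0]:
        mu[0] = 1.0
        return mu
    if x >= centers[-1]:
        mu[-1] = 1.0
        return mu
    j = np.searchsorted(centers, x) - 1      # x lies in [centers[j], centers[j+1])
    width = centers[j + 1] - centers[j]
    mu[j + 1] = (x - centers[j]) / width
    mu[j] = 1.0 - mu[j + 1]
    return mu

class NeoFuzzyNeuron:
    """Neo-fuzzy neuron: one nonlinear synapse per input, each a weighted
    sum of h triangular membership functions; output is the synapse sum."""
    def __init__(self, n_inputs, h=5, lo=0.0, hi=1.0):
        self.centers = np.linspace(lo, hi, h)
        self.w = np.zeros((n_inputs, h))     # synaptic weights

    def _phi(self, x):
        # stacked membership degrees, shape (n_inputs, h)
        return np.stack([tri_memberships(xi, self.centers) for xi in x])

    def predict(self, x):
        return float(np.sum(self.w * self._phi(x)))

    def fit_online(self, x, y):
        """One gradient step; the step size 1/||phi||^2 makes it a
        speed-optimal (one-step) update for the current sample."""
        phi = self._phi(x)
        err = y - float(np.sum(self.w * phi))
        self.w += err * phi / (np.sum(phi ** 2) + 1e-12)
        return err
```

A cascade is then grown by feeding each stack's output, together with the original inputs, into the next stack, with every stack adjusted independently, which is what permits the parallelization noted above.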

References

  1. Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature, vol. 521, pp. 436-444, 2015. http://dx.doi.org/10.1038/nature14539
  2. J. Schmidhuber, “Deep Learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85-117, 2015. http://dx.doi.org/10.1016/j.neunet.2014.09.003
  3. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
  4. D. Graupe, Deep Learning Neural Networks. Design and Case Studies. Singapore: World Scientific, 2016. http://dx.doi.org/10.1142/10190
  5. A. Ivakhnenko, “The group method of data handling – a rival of the method of stochastic approximation,” Soviet Automatic Control, vol. 13, no. 3, pp. 43-55, 1968.
  6. A. Ivakhnenko, “The group method of data handling – a rival of the method of stochastic approximation,” Automatica, vol. 6, no. 2, pp. 207-219, 1970.
  7. N. Kasabov, Evolving Connectionist Systems. Springer-Verlag London, 2007. http://dx.doi.org/10.1007/978-1-84628-347-5
  8. E. Lughofer, Evolving Fuzzy Systems – Methodologies, Advanced Concepts and Applications. Springer Berlin, 2011. http://dx.doi.org/10.1007/978-3-642-18087-3
  9. G. Setlak, Ye. Bodyanskiy, O. Vynokurova, and I. Pliss, “Deep evolving GMDH-SVM-neural network and its learning for Data Mining tasks,” in Proc. 2016 Federated Conf. on Computer Science and Information Systems (FedCSIS), Gdansk, Poland, pp. 141-145, 2016. http://dx.doi.org/10.15439/2016F183
  10. Ye. Bodyanskiy, O. Vynokurova, I. Pliss, G. Setlak, and P. Mulesa, “Fast learning algorithm for deep evolving GMDH-SVM neural network in Data Stream Mining tasks,” in Proc. First IEEE Conf. on Data Stream Mining & Processing, Lviv, Ukraine, pp. 318-321, 2016. http://dx.doi.org/10.1109/DSMP.2016.7583555
  11. A. Bifet, Adaptive Stream Mining: Pattern Learning and Mining from Evolving Data Streams, Amsterdam: IOS Press, 2010. http://dx.doi.org/10.3233/978-1-60750-472-6-i
  12. C. C. Aggarwal, Data Streams: Models and Algorithms (advances in database systems), New York: Springer, 2007. http://dx.doi.org/10.1007/978-0-387-47534-9
  13. S. E. Fahlman and C. Lebiere, “The cascade-correlation learning architecture,” in Advances in Neural Information Processing Systems, D. S. Touretzky, Ed. San Mateo, CA: Morgan Kaufmann, pp. 524–532, 1990.
  14. Y. Bodyanskiy, O. Tyshchenko, and D. Kopaliani, “A hybrid cascade neural network with an optimized pool in each cascade,” Soft Computing, vol. 19, no. 12, pp. 3445-3454, 2015. http://dx.doi.org/10.1007/s00500-014-1344-3
  15. Y. Bodyanskiy, O. Tyshchenko, and D. Kopaliani, “Adaptive learning of an evolving cascade neo-fuzzy system in data stream mining tasks,” Evolving Systems, vol. 7, no. 2, pp. 107-116, 2016. http://dx.doi.org/10.1007/s12530-016-9149-5
  16. T. Yamakawa, E. Uchino, T. Miki, and H. Kusanagi, “A neo-fuzzy neuron and its applications to system identification and prediction of the system behavior,” in Proc. 2nd Int. Conf. on Fuzzy Logic and Neural Networks, pp. 477-483, 1992.
  17. E. Uchino and T. Yamakawa, “Soft computing based signal prediction, restoration and filtering,” Intelligent Hybrid Systems: Fuzzy Logic, Neural Networks and Genetic Algorithms, Boston: Kluwer Academic Publisher, pp. 331-349, 1997. http://dx.doi.org/10.1007/978-1-4615-6191-0_14
  18. T. Miki and T. Yamakawa, “Analog implementation of neo-fuzzy neuron and its on-board learning,” Computational Intelligence and Applications, Piraeus: WSES Press, pp. 144-149, 1999.
  19. Ye. Bodyanskiy, I. Pliss, D. Peleshko, and O. Vynokurova, “Deep hybrid system of computational intelligence for time series prediction,” Int. J. “Information Theories and Applications”, vol. 24, no. 1, pp. 35-49, 2017.
  20. Ye. Bodyanskiy, O. Vynokurova, I. Pliss, D. Peleshko, and Yu. Rashkevych, “Deep stacking convex neuro-fuzzy system and its online learning,” Advances in Intelligent Systems and Computing, vol. 582, Cham, Springer, pp. 49-59, 2018.
  21. Y. Bodyanskiy, G. Setlak, D. Peleshko, and O. Vynokurova, “Hybrid generalized additive neuro-fuzzy system and its adaptive learning algorithms,” in Proc. 2015 IEEE 8th Int. Conf. on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications “IDAACS 2015”, pp. 328-333, 2015. http://dx.doi.org/10.1109/IDAACS.2015.7340753
  22. Y. Bodyanskiy, O. Vynokurova, G. Setlak, and I. Pliss, “Hybrid neuro-neo-fuzzy system and its adaptive learning algorithm,” in Proc. Int. Conf. on Computer Sciences and Information Technologies “CSIT 2015”, pp. 111-114, 2015. http://dx.doi.org/10.1109/STC-CSIT.2015.7325445
  23. Y. Bodyanskiy, O. Vynokurova, I. Pliss, D. Peleshko, and Y. Rashkevych, “Hybrid generalized additive wavelet-neuro-fuzzy system and its adaptive learning,” Advances in Intelligent Systems and Computing, vol. 470, Cham, Springer, pp. 51-61, 2016. http://dx.doi.org/10.1007/978-3-319-39639-2_5
  24. Y. Bodyanskiy, O. Vynokurova, G. Setlak, D. Peleshko, and P. Mulesa, “Adaptive multivariate hybrid neuro-fuzzy system and its on-board fast learning,” Neurocomputing, vol. 230, pp. 409-416, 2017. http://dx.doi.org/10.1016/j.neucom.2016.12.042
  25. Y. Bodyanskiy, O. Vynokurova, I. Pliss, and D. Peleshko, “Hybrid adaptive systems of computational intelligence and their on-line learning for green IT in energy management tasks,” Studies in Systems, Decision and Control, vol. 74, pp. 229-244, 2017. http://dx.doi.org/10.1007/978-3-319-44162-7_12
  26. T. Hastie and R. Tibshirani, Generalized Additive Models, Chapman and Hall / CRC, 1990.
  27. D. Wolpert, “Stacked generalization,” Neural Networks, vol. 5, no. 2, pp. 241-259, 1992. http://dx.doi.org/10.1016/S0893-6080(05)80023-1
  28. L. Deng, D. Yu, and J. Platt, “Scalable stacking and learning for building deep architectures,” in 2012 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pp. 2133-2136, 2012. http://dx.doi.org/10.1109/ICASSP.2012.6288333
  29. R. P. Landim, B. Rodrigues, S. R. Silva, and W. M. Caminhas, “A neo-fuzzy-neuron with real time training applied to flux observer for an induction motor,” in Proc. Vth Brazilian Symposium on Neural Networks, pp. 67-72, 1998. http://dx.doi.org/10.1109/SBRN.1998.730996
  30. L. Deng and D. Yu, “Deep convex net: a scalable architecture for speech pattern classification,” in Proc. of Annual Conference of the International Speech Communication Association (Interspeech), pp. 2285-2288, 2011.
  31. Ye. Bodyanskiy, V. Kolodyazhniy, and A. Stephan, “An adaptive learning algorithm for a neuro-fuzzy network,” Lecture Notes in Computer Science 2206, Berlin – Heidelberg – New York, Springer, pp. 68-75, 2001. http://dx.doi.org/10.1007/3-540-45493-4_11
  32. P. Otto, Ye. Bodyanskiy, and V. Kolodyazhniy, “A new learning algorithm for a forecasting neuro-fuzzy network,” Integrated Computer-Aided Engineering, vol. 10, no. 4, pp. 399-409, 2003.
  33. O. G. Rudenko, E. V. Bodyanskii, and I. P. Pliss, “Adaptive algorithm for prediction of random sequences,” Soviet Automatic Control, vol. 12, no. 1, pp. 46-48, 1979.
  34. UCI Machine Learning Repository: Wine Data Set. https://archive.ics.uci.edu/ml/datasets/wine
  35. UCI Machine Learning Repository: Glass Identification Data Set. https://archive.ics.uci.edu/ml/datasets/glass+identification