
Annals of Computer Science and Information Systems, Volume 8

Proceedings of the 2016 Federated Conference on Computer Science and Information Systems

Deep Evolving GMDH-SVM-Neural Network and its Learning for Data Mining Tasks


DOI: http://dx.doi.org/10.15439/2016F183

Citation: Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 8, pages 141–145


Abstract. In the paper, a deep evolving neural network and its learning algorithms (in batch and on-line modes) are proposed. The network's architecture is built on the GMDH approach and on least squares support vector machines (LS-SVMs) with a fixed number of synaptic weights. The proposed system is simple to implement computationally, is characterized by high learning speed, and can process data that arrive sequentially in on-line mode. It can be used to solve a wide class of Dynamic Data Mining tasks involving non-stationary, nonlinear stochastic and chaotic signals. Computational experiments confirm the effectiveness of the developed approach.
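To make the construction concrete, here is a minimal Python sketch (an illustration, not the authors' implementation) of the GMDH-style layer growth the abstract describes: every pair of inputs feeds a small node with a fixed number of synaptic weights, candidate nodes are ranked on held-out data (the GMDH external criterion), the best node outputs become the next layer's inputs, and growth stops when the validation error no longer improves. As a simplifying assumption, each node is a ridge-regularized quadratic (Ivakhnenko) polynomial fitted by batch least squares, standing in for the paper's LS-SVM nodes; all names, parameters, and the toy data are illustrative.

```python
import numpy as np
from itertools import combinations

def fit_node(x1, x2, y, lam=1e-3):
    # Two-input quadratic (Ivakhnenko) node with 6 fixed synaptic weights:
    # y_hat = w0 + w1*x1 + w2*x2 + w3*x1*x2 + w4*x1**2 + w5*x2**2.
    # The ridge term lam stands in for the LS-SVM regularizer (an assumption).
    Phi = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(6), Phi.T @ y)

def node_out(w, x1, x2):
    Phi = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return Phi @ w

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    # Fit a candidate node for every input pair, rank by validation MSE
    # (the GMDH "external criterion"), keep the best `keep` node outputs.
    cand = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        w = fit_node(X_tr[:, i], X_tr[:, j], y_tr)
        err = np.mean((node_out(w, X_va[:, i], X_va[:, j]) - y_va) ** 2)
        cand.append((err, i, j, w))
    cand.sort(key=lambda c: c[0])
    best = cand[:keep]
    Z_tr = np.column_stack([node_out(w, X_tr[:, i], X_tr[:, j]) for _, i, j, w in best])
    Z_va = np.column_stack([node_out(w, X_va[:, i], X_va[:, j]) for _, i, j, w in best])
    return best[0][0], Z_tr, Z_va

# Toy data: grow layers while the external criterion keeps improving.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=200)
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]
best_err = np.inf
for depth in range(5):
    err, X_tr, X_va = gmdh_layer(X_tr, y_tr, X_va, y_va)
    if err >= best_err:  # stop once the validation error no longer falls
        break
    best_err = err
    print(f"layer {depth + 1}: validation MSE = {err:.4f}")
```

The paper additionally derives an on-line learning mode; the batch least-squares solve in fit_node above is its simplest offline counterpart, made cheap by the fact that each node carries only six weights regardless of network depth.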

References

  1. S. Haykin, Neural Networks and Learning Machines. Upper Saddle River, New Jersey: Pearson, Prentice Hall, 2009.
  2. K.-L. Du and M. Swamy, Neural Networks and Statistical Learning. Springer-Verlag London, 2014. http://dx.doi.org/10.1007/978-1-4471-5571-3
  3. G. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527–1554, May 2006. http://dx.doi.org/10.1162/neco.2006.18.7.1527.
  4. I. Arel, D. Rose, and T. Karnowski, “Deep machine learning - a new frontier in artificial intelligence research,” IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 13–18, Nov. 2010. http://dx.doi.org/10.1109/MCI.2010.938364.
  5. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, pp. 436–444, May 2015. http://dx.doi.org/10.1038/nature14539.
  6. J. Schmidhuber, “Deep learning in neural networks: an overview,” Neural Networks, vol. 61, pp. 85–117, Jan. 2015. http://dx.doi.org/10.1016/j.neunet.2014.09.003.
  7. J. Kacprzyk and W. Pedrycz, Eds., Springer Handbook of Computational Intelligence. Springer-Verlag Berlin Heidelberg, 2015. [Online]. Available: http://dx.doi.org/10.1007/978-3-662-43505-2
  8. W. Pedrycz and S.-M. Chen, Eds., Information Granularity, Big Data, and Computational Intelligence. Springer International Publishing Switzerland, 2015. http://dx.doi.org/10.1007/978-3-319-08254-7
  9. V. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag New York, 2000. http://dx.doi.org/10.1007/978-1-4757-3264-1
  10. C. Cortes and V. Vapnik, “Support vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, Sep. 1995. http://dx.doi.org/10.1023/A:1022627411411.
  11. M. Grochowina and L. Leniowska, “Comparison of SVM and k-NN classifiers in the estimation of the state of the arteriovenous fistula problem,” in Proceedings of the 2015 Federated Conference on Computer Science and Information Systems, ser. Annals of Computer Science and Information Systems, M. Ganzha, L. Maciaszek, and M. Paprzycki, Eds., vol. 5. IEEE, 2015. http://dx.doi.org/10.15439/2015F194 pp. 249–254.
  12. B. Krawczyk, “Combining one-class support vector machines for microarray classification,” in Proceedings of the 2013 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, and M. Paprzycki, Eds. IEEE, 2013, pp. 83–89.
  13. M. Ochab and W. Wajs, “Bronchopulmonary dysplasia prediction using support vector machine and LIBSVM,” in Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, ser. Annals of Computer Science and Information Systems, M. Ganzha, L. Maciaszek, and M. Paprzycki, Eds., vol. 2. IEEE, 2014. http://dx.doi.org/10.15439/2014F111 pp. 201–208.
  14. S. Kim, S. Kavuri, and M. Lee, Neural Information Processing: 20th International Conference, ICONIP 2013, Daegu, Korea, November 3-7, 2013. Proceedings, Part I. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, ch. Deep Network with Support Vector Machines, pp. 458–465. http://dx.doi.org/10.1007/978-3-642-42054-2_57
  15. S. Kim, Y. Choi, and M. Lee, “Deep learning with support vector data description,” Neurocomputing, vol. 165, no. 1-2, pp. 111–117, Oct. 2015. http://dx.doi.org/10.1016/j.neucom.2014.09.086.
  16. S. Kim, Z. Yu, R. Kil, and M. Lee, “Deep learning of support vector machines with class probability output networks,” Neural Networks, vol. 64, pp. 19–28, Apr. 2015. http://dx.doi.org/10.1016/j.neunet.2014.09.007.
  17. J. Suykens, T. Van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle, Least Squares Support Vector Machines. World Scientific, 2002.
  18. A. Ivakhnenko, “The group method of data handling - a rival of the method of stochastic approximation,” Soviet Automatic Control, vol. 13, no. 3, pp. 43–55, 1968.
  19. A. Ivakhnenko, “Heuristic self-organization in problems of engineering cybernetics,” Automatica, vol. 6, no. 2, pp. 207–219, 1970.
  20. A. Ivakhnenko, “Polynomial theory of complex systems,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-1, no. 4, pp. 364–378, Oct. 1971. http://dx.doi.org/10.1109/TSMC.1971.4308320.
  21. A. Ivakhnenko, G. Ivakhnenko, and A. Muller, “Self-organization of the neural networks with active neurons,” Pattern Recognition and Image Analysis, vol. 4, no. 2, pp. 177–178, 1994.
  22. A. Ivakhnenko, “Self-organization of neuro net with active neurons for effects of nuclear test explosions forecasting,” System Analysis Modeling Simulation, vol. 20, no. 1-2, pp. 107–116, 1995.
  23. T. Kondo, “GMDH neural network algorithm using the heuristic self-organization method and its application to the pattern identification problem,” in Proceedings of the 37th SICE Annual Conference, Tokyo, Japan, Jul. 1998. http://dx.doi.org/10.1109/SICE.1998.742993 pp. 1143–1148.
  24. T. Kondo, “Identification of radial basis function networks by using revised GMDH-type neural networks with a feedback loop,” in Proceedings of the 41st SICE Annual Conference, Osaka, Japan, vol. 5, Aug. 2002. http://dx.doi.org/10.1109/SICE.2002.1195514 pp. 2882–2887.
  25. Y. Bodyanskiy, I. Pliss, and O. Vynokurova, “Hybrid GMDH-neural network of computational intelligence,” in Proceedings of the 3rd International Workshop on Inductive Modelling, Krynica, Poland, Sep. 2009, pp. 100–107.
  26. Y. Bodyanskiy, Y. Zaychenko, E. Pavlikovskaya, M. Samarina, and Y. Viktorov, “The neo-fuzzy neural network structure optimization using the GMDH for the solving forecasting and classification problems,” in Proceedings of the 3rd International Workshop on Inductive Modelling, Krynica, Poland, Sep. 2009, pp. 77–89.
  27. Y. Bodyanskiy, O. Vynokurova, and N. Teslenko, “Cascade GMDH-wavelet-neuro-fuzzy network,” in Proceedings of the International Workshop on Inductive Modelling, Kyiv, Ukraine, Sep. 2011, pp. 22–30.
  28. Y. Bodyanskiy, O. Vynokurova, A. Dolotov, and O. Kharchenko, “Wavelet-neuro-fuzzy-network structure optimization using GMDH for the solving forecasting tasks,” in Proceedings of the International Workshop on Inductive Modelling, Kyiv, Ukraine, Sep. 2013, pp. 61–67.
  29. N. Kasabov, Evolving Connectionist Systems. Springer-Verlag London, 2007. http://dx.doi.org/10.1007/978-1-84628-347-5
  30. P. Angelov, D. Filev, and N. Kasabov, Evolving Intelligent Systems: Methodology and Applications. John Wiley and Sons, 2010.
  31. K. Narendra and K. Parthasarathy, “Identification and control of dynamical systems using neural networks,” IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–26, Mar. 1990. http://dx.doi.org/10.1109/72.80202.