Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 12

Position Papers of the 2017 Federated Conference on Computer Science and Information Systems

Direct Potentiality Assimilation for Improving Multi-Layered Neural Networks

DOI: http://dx.doi.org/10.15439/2017F552

Citation: Position Papers of the 2017 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 12, pages 19–23


Abstract. This paper proposes a new potential learning method to overcome the instability of collective interpretation in multi-layered neural networks. Potential learning was introduced to detect the important components of a neural network and to train the network while taking that importance into account. It has recently been applied to multi-layered neural networks, where treating the intermediate layers collectively makes it possible to interpret the input neurons, or variables. However, this collective interpretation tends to be unstable, because the potentialities computed during pre-training differ from those in the main training. To overcome this problem, we introduce potential learning with direct potentiality assimilation, in which potentiality assimilation is applied not in the pre-training phase but directly while training the multi-layered network. The new method was applied to a student evaluation data set, where it increased the selectivity of the connection weights, and the resulting input-output potentiality closely resembled the regression coefficients obtained by logistic regression analysis. Finally, the new method extracted input-output relations more explicitly than the logistic regression coefficients, while also improving generalization performance.
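The abstract does not spell out how potentiality is computed or assimilated, so the following is only a minimal, hypothetical sketch of the general idea: a per-input "potentiality" is derived from the variance of each input unit's outgoing weights (normalized so the most variable unit scores 1), and assimilation then scales the weights by that potentiality so that low-importance inputs are suppressed. The function names, the variance-based definition, and the sharpening exponent `r` are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def potentiality(W):
    """Per-input potentiality: the variance of each input unit's outgoing
    weights, normalized so the most variable unit has potentiality 1.
    (A hypothetical reading of the potentiality measure; W has shape
    [n_inputs, n_hidden].)"""
    v = W.var(axis=1)
    return v / v.max()

def assimilate(W, r=2.0):
    """Directly assimilate the potentiality into the weights by scaling
    each input unit's outgoing weights with its potentiality raised to r.
    A larger r sharpens the selectivity of the connection weights."""
    p = potentiality(W) ** r
    return W * p[:, None]

# Toy example: input 0 has highly variable weights, input 1 nearly uniform.
W = np.array([[1.0, -1.0],
              [0.1,  0.1]])
Wa = assimilate(W)   # input 1's weights are driven toward zero
```

Applied repeatedly during the main training rather than in pre-training, such scaling would make only a few input units retain large weights, which is one way to read the increased selectivity the abstract reports.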

