Polish Information Processing Society

Annals of Computer Science and Information Systems, Volume 8

Proceedings of the 2016 Federated Conference on Computer Science and Information Systems

Modification of the Probabilistic Neural Network with the Use of Sensitivity Analysis Procedure


DOI: http://dx.doi.org/10.15439/2016F280

Citation: Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, M. Ganzha, L. Maciaszek, M. Paprzycki (eds). ACSIS, Vol. 8, pages 97–103


Abstract. In this article, a modified probabilistic neural network (MPNN) is proposed. The network extends the conventional PNN by introducing weight coefficients between the pattern and summation layers of the model. These weights are computed using a sensitivity analysis (SA) procedure. MPNN is applied to classification tasks, and its performance is assessed in terms of prediction accuracy. The effectiveness of MPNN is further verified by comparing its results with those obtained for the original PNN and for commonly used classification algorithms: the support vector machine, the multilayer perceptron, the radial basis function network, and the k-means clustering procedure. It is shown that the proposed modification improves the prediction ability of the PNN classifier.
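The architecture the abstract describes can be illustrated with a minimal sketch: a PNN whose pattern-layer activations are multiplied by per-pattern weights before the class-wise summation. The sensitivity analysis procedure that produces these weights is not detailed in the abstract, so the sketch accepts the weights as an input and defaults to uniform values as a hypothetical placeholder; the class name `WeightedPNN` and the `sigma` smoothing parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class WeightedPNN:
    """Sketch of a PNN with weight coefficients inserted between the
    pattern and summation layers. The weights would come from the
    paper's sensitivity analysis procedure; here they are supplied
    externally (uniform by default, as a placeholder)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # smoothing parameter of the Gaussian kernel

    def fit(self, X, y, weights=None):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        self.classes = np.unique(self.y)
        # Placeholder: uniform weights; the paper derives these via SA.
        self.w = (np.ones(len(self.X)) if weights is None
                  else np.asarray(weights, dtype=float))
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        preds = []
        for x in X:
            # Pattern layer: Gaussian kernel activation per stored pattern.
            d2 = np.sum((self.X - x) ** 2, axis=1)
            act = np.exp(-d2 / (2.0 * self.sigma ** 2))
            # Weighted summation layer: class-wise mean of weighted activations.
            scores = [np.sum(self.w[self.y == c] * act[self.y == c])
                      / np.sum(self.y == c) for c in self.classes]
            # Output layer: pick the class with the largest score.
            preds.append(self.classes[int(np.argmax(scores))])
        return np.array(preds)
```

With uniform weights the sketch reduces to a conventional PNN; replacing them with SA-derived coefficients yields the modified network the paper evaluates.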
