
Proceedings of the 17th Conference on Computer Science and Intelligence Systems

Annals of Computer Science and Information Systems, Volume 30

Independent Component Analysis Based on Jacobi Iterative Framework and L1-norm Criterion

DOI: http://dx.doi.org/10.15439/2022F157

Citation: Proceedings of the 17th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 30, pages 305–312 (2022)


Abstract. Recently, a link between principal component analysis based on the L1-norm (L1-PCA) and independent component analysis (ICA) has been discovered. It was shown that, under the whitening assumption, ICA can actually be performed by L1-PCA, inheriting its improved robustness to outliers. In this paper, a novel ICA algorithm based on the Jacobi iterative framework is proposed that uses the non-differentiable L1-norm criterion as its objective function. We show that such a function can be optimized by sequentially applying Jacobi rotations to the whitened data, where the optimal rotation angles are found by an exhaustive search. Experiments show that the proposed method converges faster than FastICA variants, and that it outperforms existing methods in source extraction performance for Laplacian-distributed sources. Although the proposed approach relies on exhaustive search, its computational complexity is lower than that of the optimal L1-PCA algorithm.
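The abstract outlines the core scheme: apply pairwise Jacobi (Givens) rotations to whitened data and select each rotation angle by exhaustive search over an L1-norm contrast. The snippet below is a minimal illustrative sketch of that idea under stated assumptions, not the paper's actual algorithm: the function name, the angle grid, the number of sweeps, and the choice to *minimize* the sample L1 norm (single-source directions minimize it for super-Gaussian sources such as the Laplacian) are all assumptions of this sketch.

```python
import numpy as np

def jacobi_l1_ica(Z, sweeps=5, n_angles=360, minimize=True):
    """Illustrative Jacobi-rotation ICA with an L1-norm contrast.

    Z        : whitened data, shape (d, n)
    minimize : for super-Gaussian (e.g. Laplacian) sources the sample
               L1 norm is smallest at the source directions, so by
               default the angle minimizing it is chosen (an assumption
               of this sketch; the paper's search rule may differ).
    Returns the rotated data Y and the accumulated rotation W (Y = W @ Z).
    """
    d, n = Z.shape
    Y = Z.copy()
    W = np.eye(d)
    sign = -1.0 if minimize else 1.0
    # Candidate angles for the exhaustive (grid) search over [0, pi/2).
    thetas = np.linspace(0.0, np.pi / 2, n_angles, endpoint=False)
    for _ in range(sweeps):
        # One sweep visits every coordinate pair (i, j) once.
        for i in range(d - 1):
            for j in range(i + 1, d):
                pair = Y[[i, j], :]
                best_theta = 0.0
                best_val = sign * np.abs(pair).sum()
                for t in thetas:
                    c, s = np.cos(t), np.sin(t)
                    # L1 norm of the pair after rotating by angle t.
                    val = sign * (np.abs(c * pair[0] + s * pair[1]).sum()
                                  + np.abs(-s * pair[0] + c * pair[1]).sum())
                    if val > best_val:
                        best_theta, best_val = t, val
                c, s = np.cos(best_theta), np.sin(best_theta)
                R = np.array([[c, s], [-s, c]])
                Y[[i, j], :] = R @ pair
                W[[i, j], :] = R @ W[[i, j], :]
    return Y, W
```

In a toy setting with two unit-variance Laplacian sources mixed by an orthogonal matrix (so the mixture is already white), a few sweeps of this search should recover the sources up to sign and permutation.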

References

  1. P. Comon, “Independent component analysis, a new concept?” Signal Process., vol. 36, no. 3, pp. 287–314, 1994. http://dx.doi.org/10.1016/0165-1684(94)90029-9
  2. P. Comon and C. Jutten, Eds., Handbook of Blind Source Separation: Independent Component Analysis and Applications, 1st ed. Oxford, UK: Academic Press, Inc., 2010.
  3. A. Hyvärinen, “New approximations of differential entropy for independent component analysis and projection pursuit,” in Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10, 1997, pp. 273–279.
  4. A. Hyvärinen, “Fast and robust fixed-point algorithms for independent component analysis,” IEEE Trans. Neural Netw., vol. 10, no. 3, pp. 626–634, 1999. http://dx.doi.org/10.1109/72.761722
  5. A. Hyvärinen, “Survey on independent component analysis,” Neural Computing Surveys, vol. 2, Jul. 1999.
  6. I. T. Jolliffe, Principal Component Analysis. New York: Springer Verlag, 2002.
  7. N. Kwak, “Principal component analysis based on L1-norm maximization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 9, pp. 1672–1680, 2008. http://dx.doi.org/10.1109/TPAMI.2008.114
  8. P. P. Markopoulos, G. N. Karystinos, and D. A. Pados, “Optimal algorithms for L1 -subspace signal processing,” IEEE Transactions on Signal Processing, vol. 62, no. 19, pp. 5046–5058, 2014. http://dx.doi.org/10.1109/TSP.2014.2338077
  9. P. P. Markopoulos, S. Kundu, S. Chamadia, and D. A. Pados, “Efficient L1-norm principal-component analysis via bit flipping,” IEEE Transactions on Signal Processing, vol. 65, no. 16, pp. 4252–4264, 2017. http://dx.doi.org/10.1109/TSP.2017.2708023
  10. R. Martín-Clemente and V. Zarzoso, “On the link between L1-PCA and ICA,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 3, pp. 515–528, 2017. http://dx.doi.org/10.1109/TPAMI.2016.2557797
  11. J. Cardoso and A. Souloumiac, “Blind beamforming for non-Gaussian signals,” IEE Proceedings F - Radar and Signal Processing, vol. 140, no. 6, pp. 362–370, 1993. http://dx.doi.org/10.1049/ip-f-2.1993.0054
  12. J. Cardoso, “High-order contrasts for independent component analysis,” Neural Computation, vol. 11, no. 1, pp. 157–192, 1999. http://dx.doi.org/10.1162/089976699300016863
  13. E. Learned-Miller and J. Fisher, “ICA using spacings estimates of entropy,” Journal of Machine Learning Research, vol. 4, pp. 1271–1295, Dec. 2003.
  14. T. M. Cover and J. A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). USA: Wiley-Interscience, 2006.
  15. A. Dermoune and T. Wei, “FastICA algorithm: Five criteria for the optimal choice of the nonlinearity function,” IEEE Transactions on Signal Processing, vol. 61, pp. 2078–2087, Apr. 2013. http://dx.doi.org/10.1109/TSP.2013.2243440
  16. V. Zarzoso and P. Comon, “Robust independent component analysis by iterative maximization of the kurtosis contrast with algebraic optimal step size,” IEEE Transactions on Neural Networks, vol. 21, no. 2, pp. 248–261, 2010. http://dx.doi.org/10.1109/TNN.2009.2035920
  17. A. Bell and T. Sejnowski, “An information-maximization approach to blind separation and blind deconvolution,” Neural Computation, vol. 7, no. 6, pp. 1129–1159, 1995. http://dx.doi.org/10.1162/neco.1995.7.6.1129
  18. G. Golub and C. Van Loan, Matrix Computations. USA: Johns Hopkins University Press, 2013.
  19. W. Ouedraogo, A. Souloumiac, and C. Jutten, “Non-negative independent component analysis algorithm based on 2D Givens rotations and a Newton optimization,” in Latent Variable Analysis and Signal Separation. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 522–529. http://dx.doi.org/10.1007/978-3-642-15995-4
  20. M. Parfieniuk, “A parallel factorization for generating orthogonal matrices,” in International Conference on Parallel Processing and Applied Mathematics (PPAM) 2019. Bialystok, Poland: Springer, 2019, pp. 567–578. doi: 10.1007/978-3-030-43229
  21. M. Tsagris, C. Beneki, and H. Hassani, “On the folded normal distribution,” Mathematics, vol. 2, no. 1, pp. 12–28, Feb. 2014. http://dx.doi.org/10.3390/math2010012
  22. C. Samuelsson, “Comparative evaluation of the stochastic simplex bisection algorithm and the scipy.optimize module,” in Proceedings of the 2015 Federated Conference on Computer Science and Information Systems, ser. Annals of Computer Science and Information Systems, vol. 5, 2015, pp. 573–578. http://dx.doi.org/10.15439/2015F47
  23. T. Krzeszowski and K. Wiktorowicz, “Evaluation of selected fuzzy particle swarm optimization algorithms,” in Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, ser. Annals of Computer Science and Information Systems, vol. 8, 2016, pp. 571–575. http://dx.doi.org/10.15439/2016F206
  24. K. Pytel, “Hybrid multievolutionary system to solve function optimization problems,” in Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, ser. Annals of Computer Science and Information Systems, vol. 11, 2017, pp. 87–90. http://dx.doi.org/10.15439/2017F85
  25. A. Alihodzic, S. Delalić, and D. Gusic, “An effective integrated meta-heuristic algorithm for solving engineering problems,” in Proceedings of the 2020 Federated Conference on Computer Science and Information Systems, ser. Annals of Computer Science and Information Systems, vol. 21, 2020, pp. 207–214. http://dx.doi.org/10.15439/2020KM81
  26. P. Tichavsky, Z. Koldovsky, and E. Oja, “Performance analysis of the FastICA algorithm and Cramér-Rao bounds for linear independent component analysis,” IEEE Transactions on Signal Processing, vol. 54, no. 4, pp. 1189–1203, 2006. http://dx.doi.org/10.1109/TSP.2006.870561
  27. Z. Koldovský, P. Tichavsky, and E. Oja, “Efficient variant of algorithm FastICA for independent component analysis attaining the Cramér-Rao lower bound,” IEEE Transactions on Neural Networks, vol. 17, pp. 1265–1277, Oct. 2006. http://dx.doi.org/10.1109/TNN.2006.875991
  28. A. Borowicz, “Orthogonal approach to independent component analysis using quaternionic factorization,” EURASIP Journal on Advances in Signal Processing, vol. 2020, no. 39, p. 23, September 2020. http://dx.doi.org/10.1186/s13634-020-00697-0