
Proceedings of the 18th Conference on Computer Science and Intelligence Systems

Annals of Computer Science and Information Systems, Volume 35

Performance Analysis of a 3D Elliptic Solver on Intel Xeon Computer System


DOI: http://dx.doi.org/10.15439/2023F5683

Citation: Proceedings of the 18th Conference on Computer Science and Intelligence Systems, M. Ganzha, L. Maciaszek, M. Paprzycki, D. Ślęzak (eds). ACSIS, Vol. 35, pages 1053–1058


Abstract. Block-circulant preconditioners applied to the conjugate gradient method for solving the structured sparse linear systems arising from 2D and 3D elliptic problems have been shown to have very good numerical properties and potential for good parallel efficiency. Hybrid parallelization based on the MPI and OpenMP standards is investigated. The aim of this work is to analyze the parallel performance of the implemented parallel algorithms on a supercomputer using Intel Xeon processors as well as Intel Xeon Phi coprocessors.
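The appeal of circulant preconditioning, mentioned in the abstract, is that a circulant system can be solved in O(n log n) time via the FFT, since every circulant matrix is diagonalized by the discrete Fourier transform. The sketch below is not the authors' solver; it is a minimal 1D illustration of the idea, assuming a shifted Laplacian test matrix tridiag(-1, 2.1, -1) and its circulant approximation (the shift keeps the circulant nonsingular). The function names `circulant_solve` and `pcg` are hypothetical.

```python
import numpy as np

def circulant_solve(c, r):
    """Solve C y = r for a circulant matrix C with first column c.

    C is diagonalized by the DFT, so its eigenvalues are fft(c) and the
    solve costs two FFTs instead of a factorization.
    """
    return np.fft.ifft(np.fft.fft(r) / np.fft.fft(c)).real

def pcg(matvec, b, prec_solve, tol=1e-10, maxit=200):
    """Standard preconditioned conjugate gradient for SPD systems."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = prec_solve(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = prec_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 64

def A_matvec(v):
    """Shifted 1D Laplacian tridiag(-1, 2.1, -1), applied matrix-free."""
    out = 2.1 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

# Circulant approximation of A: same stencil, wrapped around periodically.
c = np.zeros(n)
c[0], c[1], c[-1] = 2.1, -1.0, -1.0

b = np.ones(n)
x, iters = pcg(A_matvec, b, lambda r: circulant_solve(c, r))
```

The circulant matrix differs from A only in the two corner entries, so it clusters the spectrum of the preconditioned system and PCG converges in a handful of iterations; the same principle, applied blockwise, underlies the block-circulant factorization preconditioners studied in the paper.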

References

  1. A. Axelsson and M. Neytcheva, Supercomputers and numerical linear algebra. Nijmegen: KUN, 1997.
  2. I. Lirkov and Y. Vutov, “Parallel performance of a 3D elliptic solver,” in Proceedings of the International Multiconference on Computer Science and Information Technology, M. Ganzha, M. Paprzycki, J. Wachowicz, and K. Węcel, Eds., vol. 1, 2006, pp. 579–590.
  3. ——, “The convergence rate and parallel performance of a 3D elliptic solver,” System Science, vol. 32, no. 4, pp. 73–81, 2007.
  4. I. Lirkov and S. Margenov, “Parallel complexity of conjugate gradient method with circulant block-factorization preconditioners for 3D elliptic problems,” in Recent Advances in Numerical Methods and Applications, O. Iliev, M. Kaschiev, B. Sendov, and P. Vassilevski, Eds. Singapore: World Scientific, 1999, pp. 482–490.
  5. I. Lirkov, S. Margenov, and M. Paprzycki, “Parallel performance of a 3D elliptic solver,” in Numerical Analysis and Its Applications II, ser. Lecture Notes in Computer Science, L. Vulkov, J. Waśniewski, and P. Yalamov, Eds., vol. 1988. Springer, 2001, pp. 535–543.
  6. R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, and J. McDonald, Parallel programming in OpenMP. Morgan Kaufmann, 2000.
  7. B. Chapman, G. Jost, and R. Van Der Pas, Using OpenMP: portable shared memory parallel programming, ser. Scientific and engineering computation series. MIT press, 2008, vol. 10.
  8. W. Gropp, E. Lusk, and A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press, 2014.
  9. M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra, MPI: The Complete Reference, ser. Scientific and engineering computation series. Cambridge, Massachusetts: The MIT Press, 1997, second printing.
  10. D. Walker and J. Dongarra, “MPI: a standard Message Passing Interface,” Supercomputer, vol. 63, pp. 56–68, 1996.