A comparative assessment of OMP and MATLAB for parallel computation

Published: 01 January 2024

Abstract

The prime goal of parallel computing is the simultaneous execution of several program instructions. To accomplish this, the program must be divided into independent parts so that each processor can execute its portion concurrently with the other processors. This study compares OMP (OpenMP) and MATLAB, two important parallel computing environments, using a dense matrix multiplication benchmark. The results showed that OMP outperformed the MATLAB parallel environment by more than 8 times in sequential execution and 6 times in parallel execution. It was also observed that OMP on a slower processor still performs considerably better than MATLAB on a faster processor. The present analysis therefore indicates that OMP is the superior environment for parallel computing and should be preferred over parallel MATLAB.
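
The comparison above rests on a dense matrix multiplication kernel. As a point of reference, the sketch below shows how such a kernel is typically parallelized with OpenMP's parallel for directive in C; the matrix size N, the initialization values, and the use of omp_get_wtime for timing are illustrative assumptions rather than details taken from the paper.

    #include <stdio.h>
    #include <omp.h>

    #define N 1024  /* illustrative matrix size, not taken from the paper */

    /* Statically allocated to keep the sketch self-contained. */
    static double A[N][N], B[N][N], C[N][N];

    int main(void) {
        /* Fill A and B with sample values; clear C. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = (double)(i + j);
                B[i][j] = (double)(i - j);
                C[i][j] = 0.0;
            }

        double start = omp_get_wtime();

        /* Rows of C are independent, so the outer loop is shared
           across the OpenMP thread team. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int k = 0; k < N; k++)
                    sum += A[i][k] * B[k][j];
                C[i][j] = sum;
            }

        printf("Elapsed time: %f s\n", omp_get_wtime() - start);
        return 0;
    }

With GCC such a file would be built with the -fopenmp flag, and the thread count can be set through the OMP_NUM_THREADS environment variable; the MATLAB side of such a comparison would typically rely on parfor from the Parallel Computing Toolbox.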



Published In

International Journal of Hybrid Intelligent Systems, Volume 20, Issue 1
2024
42 pages

Publisher

IOS Press

Netherlands

Author Tags

  1. Parallel computing
  2. OMP
  3. MATLAB
  4. matrix multiplication

Qualifiers

  • Research-article
