DOI: 10.5555/2499968.2499978

GPU-based Monte Carlo simulation for the Gibbs ensemble

Published: 07 April 2013

Abstract

Scientists are interested in simulating large biomolecular systems over longer time scales to obtain more accurate results. However, longer simulations require more execution steps and therefore carry a large computational cost. We present a GPU implementation of Monte Carlo simulation for the Gibbs ensemble using Lennard-Jones atoms. We use massive multithreading to exploit the GPU's large number of cores and to hide parallel-execution overheads such as global memory access latency and kernel launch cost. Porting the code to the GPU also requires careful management of the available resources: the number of registers, the amount of shared memory, the number of threads per streaming multiprocessor, and the global memory bandwidth consumed by each thread and kernel. To the best of our knowledge, no other work has applied the GPU at this scale to Monte Carlo simulation of the Gibbs ensemble. Our evaluation shows a speedup of over 45x on a commodity GPU compared to running on a single processor core.
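To make the kind of kernel the abstract describes concrete, below is a minimal CUDA sketch (not the authors' code) of the inner loop of a Gibbs-ensemble Monte Carlo move: summing the Lennard-Jones energy of one trial particle position against all atoms in a box, with a shared-memory reduction producing one partial sum per block. The constants BLOCK, EPSILON, and SIGMA2 and the kernel name ljTrialEnergy are illustrative assumptions; periodic boundary conditions and a cutoff radius are omitted for brevity.

#include <cuda_runtime.h>

#define BLOCK   256    /* threads per block; launch must use this value (assumed tuning choice) */
#define EPSILON 1.0f   /* LJ well depth, reduced units (assumed) */
#define SIGMA2  1.0f   /* sigma squared, reduced units (assumed) */

/* Sums the Lennard-Jones energy of a trial position against all n atoms.
   Each block writes one partial sum; the host (or a second kernel) adds
   the blockSums entries to obtain the total trial energy. */
__global__ void ljTrialEnergy(const float3 *pos, int n,
                              float3 trial, float *blockSums)
{
    __shared__ float cache[BLOCK];
    float e = 0.0f;

    /* Grid-stride loop: each thread accumulates a strided subset of atoms,
       keeping global memory accesses coalesced across the warp. */
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n; i += gridDim.x * blockDim.x) {
        float dx = pos[i].x - trial.x;
        float dy = pos[i].y - trial.y;
        float dz = pos[i].z - trial.z;
        float r2 = dx * dx + dy * dy + dz * dz;
        if (r2 > 0.0f) {                      /* skip self-interaction */
            float s2 = SIGMA2 / r2;
            float s6 = s2 * s2 * s2;
            e += 4.0f * EPSILON * (s6 * s6 - s6);
        }
    }

    /* Tree reduction in shared memory: one partial sum per block. */
    cache[threadIdx.x] = e;
    __syncthreads();
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        blockSums[blockIdx.x] = cache[0];
}

A full implementation along the abstract's lines would also weigh register count, shared-memory footprint, and threads per streaming multiprocessor when choosing BLOCK, and would apply the minimum-image convention plus a cutoff with tail corrections; the resulting energy difference then feeds the usual Metropolis acceptance test.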

Cited By

  • (2016) "Improving performance of GPU code using novel features of the NVIDIA Kepler architecture," Concurrency and Computation: Practice and Experience, 28(13):3586-3605. DOI: 10.1002/cpe.3744. Online publication date: 10-Sep-2016.
  • (2015) "Efficient parallel cell list algorithms for Monte Carlo simulations (WIP)," Proceedings of the Conference on Summer Computer Simulation, pp. 1-7. DOI: 10.5555/2874916.2874986. Online publication date: 26-Jul-2015.

Published In

HPC '13: Proceedings of the High Performance Computing Symposium
April 2013
142 pages
ISBN: 9781627480338

Sponsors

  • SCS: Society for Modeling and Simulation International

Publisher

Society for Computer Simulation International

San Diego, CA, United States

Author Tags

  1. GPGPU
  2. Gibbs ensemble
  3. Monte Carlo simulation
  4. Lennard-Jones particles
  5. parallel computing

Qualifiers

  • Research-article

Conference

SpringSim '13: 2013 Spring Simulation Multiconference
Sponsor: SCS
April 7-10, 2013
San Diego, California
