Abstract
The aim of this paper is to show that multidimensional Monte Carlo integration can be efficiently implemented on computers with modern multicore CPUs and manycore accelerators, including Intel MIC and GPU architectures, using a new vectorized version of the LCG pseudorandom number generator which requires only a limited amount of memory. We introduce two new implementations of the algorithm based on the directive-based parallel programming standards OpenMP and OpenACC and consider their performance using the Hockney–Jesshope theoretical model of vector computations. We also present and discuss the results of experiments performed on dual-processor Intel Xeon E5-2670 computers with Intel Xeon Phi 7120P and NVIDIA Tesla K40m accelerators.
1 Introduction
Many problems in physics, chemistry and financial analysis involve computing multidimensional integrals of the form
\[ \int _{I^d} f(\mathbf {v})\,\mathrm {d}\mathbf {v}, \qquad (1) \]
where \(I^d=[0,1]^d\) and \(d\ge 1\). Very often such problems have to be solved numerically because their analytical solutions are not known. Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results approximating exact analytical solutions. Parallel computers can accelerate programs that use such methods in science, technology or financial applications [3, 20, 24]. Monte Carlo methods are also very attractive for solving multidimensional integration problems [1]. However, to ensure appropriately high accuracy of the results, the number of random samples has to be very large [15]; thus, the use of parallel processing is necessary. Unfortunately, high-quality algorithms for generating pseudorandom numbers have a rather sequential nature. For example, the routine D01GBF available in the NAG Numerical Library computes the approximation of the multidimensional integral of a function using the Monte Carlo algorithm described in [12], but it does not utilize multiple cores of target architectures and its performance is rather poor [29].
In [28, 29] we have shown that multidimensional Monte Carlo integration can be efficiently implemented on various distributed memory parallel computers and clusters of multicore GPU-accelerated nodes using recently developed parallel versions of LCG and LFG pseudorandom number generators [26]. Unfortunately, those implementations use all available memory to generate a huge number of random points in parallel. Consequently, when higher accuracy of the results is desired, the number of nodes within a cluster has to be increased.
The aim of this paper is to introduce a new simplified algorithm for multidimensional Monte Carlo integration based on a new vectorized version of the LCG pseudorandom number generator which requires only a limited amount of memory. We consider its performance using the Hockney–Jesshope model of vector computations [4, 6]. This model can also be applied to find the values of important parameters of the algorithm that minimize its execution time. Our new portable implementations are based on two directive-based parallel programming standards, OpenMP and OpenACC; thus, the algorithm can be run on computers with modern multicore CPUs and manycore accelerators, including Intel MIC and GPU architectures. We also present and discuss the results of experiments performed on dual-processor Intel Xeon E5-2670 computers with Intel Xeon Phi 7120P and NVIDIA Tesla K40m accelerators.
2 Multidimensional Monte Carlo integration
The idea of Monte Carlo methods for multidimensional integration can be found in [15]. For a given \(d\ge 1\) let \(I^d=[0,1]^d\) be the d-dimensional unit cube and let \(f(\mathbf {v})\) be a bounded Lebesgue-integrable function defined on \(I^d\). The approximation of the integral of f over \(I^d\) can be found using
\[ \int _{I^d} f(\mathbf {v})\,\mathrm {d}\mathbf {v} \approx \frac{1}{N}\sum _{i=1}^{N} f(\mathbf {v}_i), \qquad (2) \]
where \(\mathbf {v}_1,\ldots ,\mathbf {v}_N\) are random points from \(I^d\). The strong law of large numbers guarantees that such a numerical approximation of the integral converges almost surely, and from the central limit theorem it follows that the expected error is \(O(N^{-1/2})\) [15]. The idea of the Monte Carlo algorithm for multidimensional integration is quite simple [29]. We calculate the sum of the values \(f(\mathbf {v}_i)\), where all \(\mathbf {v}_i\in I^{d}\), \(i=1,\ldots ,N\), are constructed using a sequence of \(N\cdot d\) pseudorandom real numbers from [0, 1).
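To make this structure concrete, the following minimal C sketch computes the estimate (2); it is not the paper's Algorithm 1 itself, and the hypothetical next_uniform() stands for any generator of numbers from [0, 1):

```c
/* A minimal sketch of the Monte Carlo estimator (2): the mean of f at N
   pseudorandom points from [0,1)^d. next_uniform() is a placeholder for
   any pseudorandom number generator producing values from [0,1). */
double mc_integrate(double (*f)(const double *, int), int d, long N,
                    double (*next_uniform)(void))
{
    double sum = 0.0;
    double v[64];                      /* assumes d <= 64 for simplicity */
    for (long i = 0; i < N; i++) {
        for (int j = 0; j < d; j++)    /* build one point from d numbers */
            v[j] = next_uniform();
        sum += f(v, d);
    }
    return sum / (double)N;            /* expected error is O(N^{-1/2}) */
}
```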
The linear congruential generator (LCG) is a well-known pseudorandom number generator. It is based on the recurrence relation
\[ x_{i} = (a\,x_{i-1} + c) \bmod m, \qquad i = 1, 2, \ldots , \qquad (3) \]
where \(\{x_{i}\}\) is a sequence of pseudorandom values, \(m>0\) is the modulus, a, \(0<a<m\), is the multiplier, c, \(0 \le c <m\), is the increment, and \(x_0\equiv \sigma \), \(0\le \sigma <m\), is the seed or start value. Usually \(m=2^M\) with \(M=32\) or \(M=64\); thus, the generator produces numbers from \(\mathbb {Z}_{m}=\{0,1,\ldots ,m-1\}\). This allows the modulo operation to be computed by merely truncating all but the rightmost 32 or 64 bits, respectively, so we can neglect “\((\mod m)\)”. The integers \(x_k\) can be converted to real values \(v_k\in [0,1)\) by \(v_k=x_k/m\).
Properties of the sequence generated using (3) heavily depend on the properties of a, c and m. The sequence has the maximum possible period when the following conditions are satisfied [10]:
1. c is relatively prime to m,
2. every prime factor of m also divides \(a-1\),
3. if 4 divides m, then 4 divides \(a-1\).
It is clear that m should be large enough. For \(m=2^{64}\), LCG can be used to generate sequences of double-precision real numbers; then \(a=6364136223846793005\) and \(c=1442695040888963407\) is a good choice [11]. When single precision is sufficient, one can choose \(m=2^{32}\), \(a=1664525\), \(c=1013904223\) [19].
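A minimal C sketch of such a generator, assuming \(m=2^{64}\) so that the modulo operation comes for free from unsigned 64-bit overflow, could look as follows; the state handling is this sketch's own convention, not the paper's code:

```c
#include <stdint.h>

/* Sketch of the scalar LCG (3) with m = 2^64 and the double-precision
   constants quoted above [11]; (mod m) is implicit in the unsigned
   64-bit overflow. */
static inline double lcg_next(uint64_t *state)
{
    const uint64_t a = 6364136223846793005ULL;
    const uint64_t c = 1442695040888963407ULL;
    *state = a * (*state) + c;                      /* x_i = a x_{i-1} + c */
    return (double)*state / 18446744073709551616.0; /* v_i = x_i / 2^64   */
}
```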
Algorithm 1 performs multidimensional Monte Carlo integration based on LCG. Note that we need N points from \(I^d\); thus, we apply (3) to generate \(N\cdot d\) pseudorandom numbers. It is clear that LCG is a special case of linear recurrence systems [25] that can be defined as follows
Algorithms strictly based on (4) cannot fully utilize the underlying hardware of modern computers, i.e., vector extensions and multiple cores, which is essential in the case of modern multicore and manycore computer architectures.
To introduce a method that is suitable for modern computer architectures, let us assume that there exist two positive integers r and s such that \(n=rs\). This assumption can easily be dropped. If we choose \(n<N\), where N is the desired number of points, then we can use the following method to find n random points from \(I^{d}\). The remaining \(N-n\) points can be found using (3). Equation (4) can be rewritten in the following block form [26, 29]:
where \(\mathbf {x}_i = (x_{is},\ldots ,x_{(i+1)s-1})^T\in \mathbb {Z}^{s}_m\) and \(\mathbf {f}_0 = (\sigma ,c,\ldots ,c)^T \in \mathbb {Z}^{s}_m\), and \(\mathbf {f} = (c,\ldots ,c)^T \in \mathbb {Z}^{s}_m\), and the matrices A and B are defined as follows
From (5) we have
When we set \(\mathbf {t}=A^{-1}\mathbf {f}\) and \(\mathbf {y}=A^{-1}(a\mathbf {e}_{0})\), where \(\mathbf {e}_{0} = (1,0,\ldots ,0)^{T}\in \mathbb {Z}^{s}_m\), then we get the main formula for the vectorized version of LCG:
\[ \mathbf {x}_i = \mathbf {t} + x_{is-1}\,\mathbf {y}, \qquad i=1,\ldots ,r-1. \qquad (7) \]
Algorithm 2 shows how to apply (7) to perform multidimensional Monte Carlo integration. First we have to find the vectors \(\mathbf {x}_0\), \(\mathbf {y}\), \(\mathbf {t}\) using three separate tasks that can be executed in parallel. Note that because of data dependencies, the loops 2–4, 6–8 and 10–12 are strictly sequential. Then we convert all entries of \(\mathbf {x}_0\) to real values (line 13) using a fully vectorized operation and find the sum of \(f(\mathbf {v}_j)\) for \(j=1,\ldots , s-1\) (lines 15–17, parallel loop). In the second part of the algorithm, we find all vectors \(\mathbf {x}_i\), \(i=1,\ldots ,r-1\), convert their entries to real values using fully vectorized operations (lines 20 and 21, respectively) and find the sum of \(f(\mathbf {v}_j)\) (lines 22–24, parallel loop). It should be noticed that although the operations from lines 13, 20 and 21 are written in a vector form, they can be treated as loops without data dependencies.
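For illustration, a C sketch of the resulting block update is given below; it assumes formula (7) in the form \(\mathbf {x}_i = \mathbf {t} + x_{is-1}\,\mathbf {y}\) as reconstructed above and is not the author's source code:

```c
#include <stdint.h>

/* Sketch of the vectorized block update of Algorithm 2: given t, y and
   the last entry of the previous block, all s entries of the new block
   are independent, so the loop vectorizes. The (mod 2^64) is again
   implicit in the unsigned overflow. */
void lcg_block(uint64_t *x, const uint64_t *t, const uint64_t *y,
               uint64_t x_last, long s)
{
    for (long j = 0; j < s; j++)
        x[j] = t[j] + x_last * y[j];
}
```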
3 Performance analysis
It is clear that the performance of Algorithm 2 depends on the chosen values of r and s. Our experiments show that the right choice of these parameters can improve the performance significantly. Moreover, the right choice remains appropriate for various functions f (see Sect. 5). Thus, for the sake of simplicity, we will use the theoretical model of vector computations introduced by Hockney and Jesshope [4, 6] to analyze Algorithm 2 reduced to the problem of finding \(r\cdot s\) vectors from \(I^d\). It will also help us to predict the choice of r and s that minimizes the execution time of the algorithm.
The performance of computers involved in scientific calculations is usually expressed in terms of the number of floating point operations completed per second. The basic measure can be expressed as
\[ r = \frac{N}{t}, \qquad (8) \]
where N represents the number of floating point operations executed in t seconds. Obviously, when N floating point operations are executed with the average performance \(r_\mathrm {avg}\), the execution time of a given algorithm is
\[ t = \frac{N}{r_\mathrm {avg}}. \qquad (9) \]
The performance \(r_{N}\) of a loop of length N can be expressed in terms of two parameters \(r_{\infty }\) and \(n_{1/2}\), which are specific to a given type of loop and computer architecture [4, 6]. The first parameter represents the performance in Gflops for a very long loop, while the second is the loop length for which a performance of about \(r_{\infty }/2\) is achieved. Then
\[ r_{N} = \frac{r_{\infty }}{n_{1/2}/N + 1}. \qquad (10) \]
Let m denote the number of floating point operations repeated during each iteration of a given loop. Applying (9) and (10) we get the execution time of such a loop of length N:
\[ t = \frac{mN}{r_{N}} = \frac{m\,(N + n_{1/2})}{r_{\infty }}. \qquad (11) \]
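As a simple illustration, a helper estimating the loop time under this model (assuming the standard Hockney–Jesshope form \(r_{N}=r_{\infty }/(1+n_{1/2}/N)\) used above) could be:

```c
/* Sketch of the loop-time estimate (11): a loop of length N with m flops
   per iteration takes roughly t = m*N/r_N = m*(N + n_half)/r_inf seconds,
   with r_inf expressed in flops per second. */
double loop_time(double m, double N, double r_inf, double n_half)
{
    return m * (N + n_half) / r_inf;
}
```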
The total execution time of Algorithm 2 reduced to the problem of finding \(r\cdot s\) vectors from \(I^d\) (i.e., without lines 14–17 and 22–24) depends on the values of the parameters r and s, and it can be expressed as \(T(r,s)=\sum _{i=1}^3 t_i\), where \(t_1\) is the execution time of the first three parallel tasks (the loops 2–4, 6–8 and 10–12, respectively), \(t_2\) is the time needed to convert nd integers to real values (lines 13 and 21), and \(t_3\) is the time needed for \(r-1\) executions of line 20. Using (11) we get the following (in seconds):
where the pairs \((r_{\infty }^A,n_{1/2}^A)\), \((r_{\infty }^B,n_{1/2}^B)\) and \((r_{\infty }^C,n_{1/2}^C)\) characterize the execution of the respective loops. This yields
where D denotes the execution time that does not depend on r and s. We assume that \(rs=n\); thus, (12) reduces to the following
It can be easily verified that T(s) defined by (13) reaches its minimum at the point
The optimal choice of s depends on the problem size n, the dimension d and the value of \(\beta \), which is machine-dependent. It should also be noticed that \(r_{\infty }^C>r_{\infty }^A\), and thus \(\beta >0\). The numbers s and r should be integers; thus, one can choose
It is clear that we need to know the values of \(r_{\infty }^A\), \(r_{\infty }^B\), \(r_{\infty }^C\) and \(n_{1/2}^B\), \(n_{1/2}^C\) to find \(\beta \) and then \(s^{\star }\) [7]. However, it is also possible to estimate \(s^{\star }\) after some numerical experiments performed for several values of n and d. The value of the parameter \(\beta \) can be found using
Then one can use (15) to find \(s^{\star }\) and tune the performance of the algorithm for given values of n and d (i.e., to minimize the execution time of the algorithm).
It should be noticed that the theoretical model of performance based on (10) gives us some information about the possible scalability of vectorized algorithms. When loops are split over p available processors or cores, the values of the parameters \(r_{\infty }\) and \(n_{1/2}\) grow by a factor of p. However, due to synchronization, \(r_{\infty }\) grows more slowly, while \(n_{1/2}\) grows faster [4]. Therefore, when the number of processors grows, our method achieves better performance for bigger problem sizes (i.e., when the parallelized loops are sufficiently long).
4 Implementation issues
Both Algorithms 1 and 2 have been implemented in C. The implementation of Algorithm 1 is straightforward; however, a modern compiler can apply some optimization techniques that improve the overall performance of the algorithm on modern CPUs. For example, the Intel C/C++ compiler applies the loop fission technique, in which each of the loops 3–6 and 11–14 is broken into two loops. The first one is strictly sequential and performs \(x_j\leftarrow ax_{j-1}+c\), while the second one performs \(v_j\leftarrow x_j/m\) and can be vectorized using SIMD extensions like AVX and AVX-512, which are available in modern multicore and manycore processors [9, 27]. Vectorization can be enforced by placing the pragma simd before the loop [27]. To optimize memory access, the array \(\mathbf {v}\) should be allocated using the _mm_malloc() intrinsic, which works just like the malloc function and additionally allows data alignment [27]. This loop has a limited length (i.e., d); thus, the use of multiple threads would not be profitable.
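A sketch of what such a fissioned loop could look like is given below; the split mirrors what the compiler is reported to do, x[0] holds the state carried over from the previous point (so x needs d+1 entries), and the division by \(2^{64}\) plays the role of \(v_j\leftarrow x_j/m\):

```c
#include <stdint.h>

/* Sketch of the point-generation loop of Algorithm 1 after loop fission:
   the first loop is strictly sequential, the second has no dependencies
   and can be vectorized. */
void next_point(double *v, uint64_t *x, int d, uint64_t a, uint64_t c)
{
    for (int j = 1; j <= d; j++)       /* sequential: x_j = a*x_{j-1} + c */
        x[j] = a * x[j - 1] + c;
#pragma simd
    for (int j = 1; j <= d; j++)       /* vectorizable: v_j = x_j / m */
        v[j - 1] = (double)x[j] / 18446744073709551616.0;
    x[0] = x[d];                       /* carry the state to the next point */
}
```

The array v itself would then be allocated with, e.g., _mm_malloc(d * sizeof(double), 64) and released with _mm_free() to obtain aligned storage, as mentioned above.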
Algorithm 2 can easily be parallelized using OpenMP [2]. Lines 1–25 can be treated as a parallel region defined by the parallel construct. Lines 1–4, 5–8 and 9–12 can be treated as three independent sections that can be run in parallel. Lines 13, 20 and 21 can be rewritten as loops annotated with pragma omp for to be executed in parallel; such loops can also be vectorized by the compiler using SIMD extensions. The loops from lines 15–17 and 22–24 can also be executed in parallel; however, the variable result has to be updated safely, as shown in Fig. 1.
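The following structural sketch illustrates this parallelization; the actual code is the one shown in Fig. 1, the helpers compute_x0, compute_y, compute_t and the integrand f are hypothetical placeholders, and len denotes the number of values generated in the first block:

```c
#include <stdint.h>

/* Structural sketch of the OpenMP version of the first part of
   Algorithm 2: three independent sections, a parallel conversion loop
   and a parallel sum with a reduction. */
double omp_first_block(uint64_t *x0, uint64_t *y, uint64_t *t, double *v,
                       long len, int d, double (*f)(const double *, int),
                       void (*compute_x0)(uint64_t *, long),
                       void (*compute_y)(uint64_t *, long),
                       void (*compute_t)(uint64_t *, long))
{
    double sum = 0.0;
#pragma omp parallel
    {
#pragma omp sections                    /* three independent tasks */
        {
#pragma omp section
            compute_x0(x0, len);        /* lines 1-4, strictly sequential  */
#pragma omp section
            compute_y(y, len);          /* lines 5-8, strictly sequential  */
#pragma omp section
            compute_t(t, len);          /* lines 9-12, strictly sequential */
        }
#pragma omp for                         /* line 13: vectorizable conversion */
        for (long j = 0; j < len; j++)
            v[j] = (double)x0[j] / 18446744073709551616.0;
#pragma omp for reduction(+:sum)        /* lines 15-17: parallel sum of f */
        for (long j = 0; j < len / d; j++)
            sum += f(&v[j * d], d);     /* one point = d consecutive values */
    }
    return sum;
}
```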
The OpenMP implementation of Algorithm 2 can be used on various multicore CPUs and on Intel Xeon Phi working in native mode [8, 21]. To develop a portable implementation for GPUs, we have used OpenACC [18]. This standard offers compiler directives for offloading C/C++ and Fortran programs from the host to attached accelerator devices. Such simple directives allow regions of source code to be marked for automatic acceleration in a vendor-independent manner [23]. OpenACC provides the parallel construct that launches gangs of workers that execute in parallel. Gangs may support multiple workers that execute in the vector or SIMD mode [18] available in GPUs. The standard also provides several constructs that can be used to specify the scope of data in accelerated parallel regions.
Our OpenACC implementation of Algorithm 2 assumes that the most compute-intensive part of the algorithm, namely lines 18–25, is offloaded to an accelerator (see Fig. 2). We use the OpenACC data construct to specify the scope of data in the accelerated region. The parallel loop construct is used to vectorize the internal loops 20, 21 and 22–24. Note that the variable result resides in host memory and is updated using the value of temp. We also use the OpenACC update host construct to guarantee that the current value of the last entry of x resides in host memory.
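A structural sketch of such an offload is given below; it is not the code of Fig. 2, it assumes the block update \(\mathbf {x}_i = \mathbf {t} + x_{is-1}\,\mathbf {y}\) from (7) as reconstructed above, and func is a placeholder integrand that the compiler can call directly on the device:

```c
#include <stdint.h>

#pragma acc routine seq
static inline double func(const double *p, int d)   /* placeholder integrand */
{
    double s = 0.0;
    for (int k = 0; k < d; k++) s += p[k];
    return s;
}

/* Structural sketch of the offloaded part of Algorithm 2 (lines 18-25). */
double acc_sum(uint64_t *x, const uint64_t *t, const uint64_t *y,
               double *v, long r, long s, int d)
{
    double result = 0.0;                 /* stays in host memory */
#pragma acc data copyin(t[0:s], y[0:s]) copy(x[0:s]) create(v[0:s])
    for (long i = 1; i < r; i++) {
        double temp = 0.0;
        uint64_t xlast = x[s - 1];       /* last entry of the previous block */
#pragma acc parallel loop
        for (long j = 0; j < s; j++) {   /* lines 20-21: update + conversion */
            x[j] = t[j] + xlast * y[j];
            v[j] = (double)x[j] / 18446744073709551616.0;
        }
#pragma acc parallel loop reduction(+:temp)
        for (long j = 0; j < s / d; j++) /* lines 22-24 (assumes d divides s) */
            temp += func(&v[j * d], d);
        result += temp;                  /* updated on the host using temp */
#pragma acc update host(x[s-1:1])        /* keep the last entry current */
    }
    return result;
}
```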
It should be noticed that the PGI compiler, which supports OpenACC, has one disadvantage: it does not support indirect calls to external functions within accelerated regions. Instead, one should consider the use of inlined functions supported by the compiler. Unfortunately, in such a case the source code of the implementation has to be recompiled for each integrand function.
5 Results of experiments
Algorithm 2 for multidimensional integration using the vectorized version of LCG has been tested on three different target architectures, all modern accelerated systems supporting the OpenMP and OpenMP+OpenACC programming models:
- E5-2670: a server with two Intel Xeon E5-2670 v3 processors (24 cores in total, with hyperthreading, 2.3 GHz) and 128 GB RAM, running under CentOS 6.5 with Intel Parallel Studio ver. 2016;
- Xeon Phi: as E5-2670, additionally with an Intel Xeon Phi 7120P coprocessor (61 cores with multithreading, 1.238 GHz, 16 GB RAM); all experiments have been carried out on the Xeon Phi working in native mode [8, 21];
- K40m: as E5-2670, additionally with an NVIDIA Tesla K40m GPU [13] (2880 cores, 12 GB RAM), CUDA 7.0 and Portland Group compilers and tools with OpenMP and OpenACC support.
Algorithm 1 has been tested only on E5-2670. However, due to its sequential nature, only one core has been utilized.
First we have tested the performance of Algorithms 1 and 2 for the test functions proposed in [5] using various values of r and s. We have observed that the right choice of these parameters can significantly improve the performance of Algorithm 2. Moreover, the optimal values of these parameters minimize the execution time for all the test functions considered.
Figure 3 shows the performance of Algorithm 2 reduced to the problem of finding \(r\cdot s\) vectors from \(I^d\). We can observe that the optimal value of s depends on the problem size, namely on n and d. It is also different for each architecture. After these experiments we have estimated the parameter \(\beta \) using (16). Then, applying (14) and (15), we have obtained theoretical approximations of the optimal values of s for various n and d (see Table 1).
In the second set of our experiments we have compared the performance of Algorithm 2 for several test functions. For each target architecture we have used the estimated value of \(\beta \), and (15) has been applied to find the optimal values of s and r for given problem sizes n and d. In [28] we used the set of test functions proposed in [5], but we have observed that it is sufficient to test only four of them:
where \(\mathbf {c},\mathbf {w}\in \mathbb {R}^d\) are fixed coefficients, because remaining ones can be characterized similarly.
Table 2 shows the execution times of Algorithm 1 (only on E5-2670) and Algorithm 2 for the Continuous and NAG test functions. Similarly, Table 3 shows the performance of the algorithms for the Corner peak and Product peak functions. Figures 4 and 5 show the speedup of Algorithm 2 over Algorithm 1 for all tested functions. It should be noticed that Algorithm 1 is practically useless on manycore architectures like GPUs and Intel MIC because it can utilize only a small fraction of their theoretical peak performance, and its execution time would be much longer than on E5-2670. Figures 4 and 5 help us to realize the advantages of using manycore architectures and Algorithm 2, which is much more sophisticated than Algorithm 1.
We can observe that Algorithm 2 outperforms Algorithm 1 significantly for all architectures and test functions. However, the use of Algorithm 2 is much more profitable for bigger problem sizes and more complicated functions. In such cases, the manycore architectures, namely Xeon Phi and K40m, outperform E5-2670. For the Corner peak and Product peak functions Xeon Phi outperforms K40m for \(n>10^6\), and the advantage is greater for \(d\ge 16\). K40m works fine for smaller values of d, when coalesced memory access can take place, i.e., when multiple memory accesses can be combined into a single transaction [16, 17]. Similarly, GPUs outperform Xeon Phi when integrand functions contain calls to transcendental functions; in such a case plenty of cores can be utilized. Table 3 shows that the form of the integrand function has a great influence on the performance of the algorithm on individual architectures. In the case of Xeon Phi the performance of Algorithm 2 for Corner peak and Product peak is nearly the same, while K40m requires almost twice as much time for Product peak as for Corner peak. Computations on GPUs are much more effective when the work performed by individual cores is rather simple, but the form of Product peak is more complicated and requires more memory references. When the integrand function is really simple, K40m achieves much better performance than Xeon Phi and E5-2670 (see Fig. 4). The results presented in Figs. 4 and 5 also confirm the theoretical considerations regarding scalability (see Sect. 3). Indeed, Xeon Phi and K40m (i.e., manycore architectures) achieve better speedup for bigger problem sizes and when the integrand function is rather simple. Too many memory references, which may appear in more complex integrand functions, can limit the speedup of the method.
It should be pointed out that the performance of Algorithm 2 on GPUs could be improved by using CUDA [17] or OpenCL [14] programming interfaces. Unfortunately, it would require much more effort than using OpenACC. On the other hand, the implementation for Intel architectures can be optimized by using more sophisticated techniques like programming with intrinsics for Intel Advanced Vector Extensions [9].
6 Conclusions
We have shown that multidimensional Monte Carlo integration based on a new vectorized version of the linear congruential generator can be easily and efficiently implemented on modern CPU, GPU and Intel MIC architectures, including Intel Xeon E5-2670, Xeon Phi 7120P and NVIDIA Tesla K40m, using high-level directive-based standards like OpenMP and OpenACC. The new version of LCG requires only a limited amount of memory; thus, the number of generated pseudorandom points can be very large. We have also shown how to use the Hockney–Jesshope theoretical model of vector computations to find the values of the algorithm's parameters that minimize its execution time.
References
Bull JM, Freeman TL (1994) Parallel globally adaptive quadrature on the KSR-1. Adv Comput Math 2:357–373. https://doi.org/10.1007/BF02521604
Chandra R, Dagum L, Kohr D, Maydan D, McDonald J, Menon R (2001) Parallel programming in OpenMP. Morgan Kaufmann Publishers, San Francisco
Chen C, Huang K, Lyuu Y (2015) Accelerating the least-square Monte Carlo method with parallel computing. The J Supercomput 71(9):3593–3608. https://doi.org/10.1007/s11227-015-1451-7
Dongarra J, Duff I, Sorensen D, Van der Vorst H (1991) Solving linear systems on vector and shared memory computers. SIAM, Philadelphia
Hahn T (2005) CUBA-a library for multidimensional integration. Comput Phys Commun 168:78–95
Hockney R, Jesshope C (1981) Parallel computers: architecture, programming and algorithms. Adam Hilger Ltd., Bristol
Hockney RW (1985) \((r_{{\infty }}, n_{{1/2}}, s_{{1/2}})\) measurements on the 2-CPU CRAY X-MP. Parallel Comput 2(1):1–14. https://doi.org/10.1016/0167-8191(85)90014-6
Jeffers J, Reinders J (2013) Intel Xeon Phi coprocessor high-performance programming. Morgan Kaufman, Waltham
Jeffers J, Reinders J, Sodani A (2016) Intel Xeon Phi processor high-performance programming. Knights landing edition. Morgan Kaufman, Cambridge
Knuth DE (1981) The art of computer programming, volume II: seminumerical algorithms, 2nd edn. Addison-Wesley, Boston
Knuth DE (1999) MMIXware, a RISC computer for the third millennium, lecture notes in computer science, vol 1750. Springer, Berlin
Lautrup B (1971) An adaptive multi-dimensional integration procedure. In: Proceedings 2nd Colloquium Advanced Methods in Theoretical Physics, Marseille
Li Y, Schwiebert L, Hailat E, Mick JR, Potoff JJ (2016) Improving performance of GPU code using novel features of the NVIDIA Kepler architecture. Concurr Comput Pract Exp 28(13):3586–3605. https://doi.org/10.1002/cpe.3744
Munshi A (2009) The OpenCL Specification v. 1.0. Khronos OpenCL Working Group
Niederreiter H (1978) Quasi-Monte Carlo methods and pseudo-random numbers. Bull Am Math Soc 84:957–1041
NVIDIA (2015) CUDA C best practices guide. NVIDIA Corporation, available at http://www.nvidia.com/
NVIDIA Corporation (2015) CUDA programming guide. NVIDIA Corporation, available at http://www.nvidia.com/
OpenACC-Standard.org (2015) The OpenACC application programming interface, v2.5. Tech. rep., OpenACC-Standard.org, http://www.openacc.org/sites/default/files/OpenACC_2pt5.pdf
Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical recipes in C, 2nd edn. Cambridge University Press, Cambridge
Pryor DV, Burns PJ (1989) Vectorized Monte Carlo molecular aerodynamics simulation of the Rayleigh problem. The J Supercomput 3(4):305–330. https://doi.org/10.1007/BF00128168
Rahman R (2013) Intel Xeon Phi coprocessor architecture and tools: the guide for application developers. Apress, Berkeley
Ripoll DR, Thomas SJ (1992) A parallel Monte Carlo search algorithm for the conformational analysis of polypeptides. The J Supercomput 6(2):163–185. https://doi.org/10.1007/BF00129777
Sabne A, Sakdhnagool P, Lee S, Vetter JS (2014) Evaluating performance portability of OpenACC. In: Languages and Compilers for Parallel Computing-27th International Workshop, LCPC 2014, Hillsboro, OR, USA, September 15–17, 2014, Revised Selected Papers, pp 51–66, https://doi.org/10.1007/978-3-319-17473-0_4
Santos EE, Rickman JM, Muthukrishnan G, Feng S (2008) Efficient algorithms for parallelizing Monte Carlo simulations for 2D Ising spin models. The J Supercomput 44(3):274–290. https://doi.org/10.1007/s11227-007-0163-z
Stpiczyński P (2011) Solving linear recurrence systems on hybrid GPU accelerated manycore systems. In: Proceedings of the Federated Conference on Computer Science and Information Systems, September 18–21, 2011, Szczecin, Poland, IEEE Computer Society Press, pp 465–470, https://fedcsis.org/proceedings/2011/pliks/148.pdf
Stpiczyński P, Szałkowski D, Potiopa J (2012) Parallel GPU-accelerated recursion-based generators of pseudorandom numbers. In: Proceedings of the Federated Conference on Computer Science and Information Systems, September 9–12, 2012, Wroclaw, Poland, IEEE Computer Society Press, pp 571–578, http://fedcsis.org/proceedings/2012/pliks/380.pdf
Supalov A, Semin A, Klemm M, Dahnken C (2014) Optimizing HPC applications with Intel Cluster Tools. Apress, Berkeley
Szałkowski D, Stpiczyński P (2014) Multidimensional Monte Carlo integration on clusters with hybrid GPU-accelerated nodes. In: Parallel Processing and Applied Mathematics, 10th International Conference, PPAM 2013, Warsaw, Poland, September 8–11, 2013, Revised Selected Papers, Part I, Springer, Lecture Notes in Computer Science, vol 8384, pp 603–612, https://doi.org/10.1007/978-3-642-55224-3_56
Szałkowski D, Stpiczyński P (2015) Using distributed memory parallel computers and GPU clusters for multidimensional Monte Carlo integration. Concurr Comput Pract Exp 27(4):923–936. https://doi.org/10.1002/cpe.3365
Acknowledgements
This work was partially supported by the National Centre for Research and Development under MICLAB Project POIG.02.03.00-24-093/13. The use of computer resources installed at Institute of Mathematics, Maria Curie-Skłodowska University, Lublin, is kindly acknowledged.