Page 43 - Krész, Miklós, and Andrej Brodnik (eds.). MATCOS-13. Proceedings of the 2013 Mini-Conference on Applied Theoretical Computer Science. Koper: University of Primorska Press, 2016.

threads z     1      2      4      6      8      12
A           400    214    137    112    143    166
B             -    260    125     83.0   68.3   46.1
C            17.8   13.7   12.8   14.0   25.4   35.3
D             -     10.4    6.69   5.82   7.82   8.69

Table 2: Computational time of all the algorithms depending on the number of threads z: A - regular mesh and direct usage of OpenMP pragmas; B - regular mesh and parallelism based on the data distribution; C - adaptive mesh and direct usage of OpenMP pragmas; D - adaptive mesh and parallelism based on the data distribution. Obtained on the NKS-30T system of the SSCC.

threads z     1      2      4      6      8      12     16
A           323    178    112     90.2   77.4  113    115
B             -    203    103     68.2   54.2   38.7   33.5

Table 3: Computational time of all the algorithms depending on the number of threads z: A - regular mesh and direct usage of OpenMP pragmas; B - regular mesh and parallelism based on the data distribution. Obtained on the MVS-10P system of the JSCC RAS.

5. CONCLUSIONS

The appearance of supercomputers and the development of parallel technologies have given renewed impetus to the further development of numerical methods and have opened the door to the modeling of complex problems. The FGC problem is one of them. The specific character of its solutions makes the use of an adaptive mesh extremely efficient for the numerical simulation of FGC processes in a sequential implementation. However, special attention must be paid to the parallel realization of such an algorithm, since an unfortunate parallelization may not only show unsatisfactory scalability but even increase the computational time as the number of threads grows. At the same time, the fact that the number of cores per computational node is constantly growing, and the influence of parallelism is therefore increasing, implies the need to construct new algorithms that scale almost perfectly in the number of threads.

In this paper a special approach to shared-memory parallelism, based on the distribution of data across threads, has been proposed. It has been applied to the algorithms with both the regular and the adaptive grids. A comparison of this method with the classical one, in which OpenMP directives are applied to all the internal loops of the program, has shown that the proposed method is more efficient for the task in question and is well suited to the parallel implementation of the algorithm with an embedded fine mesh. All calculations were performed for a problem with characteristic dimensions and empirically chosen parameters of the mathematical model. The solutions produced by all the constructed algorithms have the required accuracy of approximation and correspond to physical data. Meanwhile, the use of the proposed method reduces the computational time by a factor of 10 in the case of the regular mesh and by a factor of 3 in the case of the adaptive one.

6. ACKNOWLEDGMENTS

This work was supported by RFBR (grants 12-01-31046, 13-01-00019, 13-05-12051).

7. REFERENCES

[1] The OpenMP API specification for parallel programming. http://openmp.org.

[2] V. S. Babkin and Y. M. Laevskii. Seepage gas combustion. Combustion, Explosion, and Shock Waves, 23(5):531–547, September–October 1987.

[3] T. A. Kandryukova and Y. M. Laevsky. Numerical simulation of filtration gas combustion. In Proceedings of the 11th International Conference on Mathematical and Numerical Aspects of Waves, pages 43–44, June 2013.

[4] T. A. Kandryukova and Y. M. Laevsky. Simulating the filtration combustion of gases on multi-core computers. Journal of Applied and Industrial Mathematics, 8(2):218–226, April 2014.

[5] Y. M. Laevsky and L. V. Yausheva. Simulation of filtrational gas combustion processes in nonhomogeneous porous media. Numerical Analysis and Applications, 2(2):140–153, April 2009.
