Using device 2 (rank 2, local rank 2, local size 3) : Tesla P100-PCIE-16GB
Using device 0 (rank 0, local rank 0, local size 3) : Tesla P100-PCIE-16GB
Using device 1 (rank 1, local rank 1, local size 3) : Quadro GP100
running on 3 total cores
distrk: each k-point on 3 cores, 1 groups
distr: one band on 1 cores, 3 groups
*******************************************************************************
You are running the GPU port of VASP! When publishing results obtained with
this version, please cite:
 - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
 - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
in addition to the usual required citations (see manual).
GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
*******************************************************************************
-----------------------------------------------------------------------------
WARNING: Please note that VASP has recently been ported to GPU by means of
OpenACC. You are running the CUDA-C GPU-port of VASP, which is deprecated
and no longer actively developed, maintained, or supported. In the near
future, the CUDA-C GPU-port of VASP will be dropped completely. We encourage
you to switch to the OpenACC GPU-port of VASP as soon as possible.
-----------------------------------------------------------------------------
vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex
MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu from svn 16685
This VASP executable licensed from Materials Design, Inc.
POSCAR found type information on POSCAR: Si O H
POSCAR found: 3 types and 36 ions
NWRITE = 1
NWRITE = 1
-----------------------------------------------------------------------------
WARNING: You use a magnetic or noncollinear calculation, but did not specify
the initial magnetic moment with the MAGMOM tag. Note that a default of 1
will be used for all atoms. This ferromagnetic setup may break the symmetry
of the crystal; in particular, it may rule out finding an antiferromagnetic
solution. Hence, we recommend setting the initial magnetic moment manually
or verifying carefully that this magnetic setup is desired.
-----------------------------------------------------------------------------
NWRITE = 1
LDA part: xc-table for Pade appr. of Perdew
WARNING: The GPU port of VASP has been extensively tested for ALGO=Normal,
Fast, and VeryFast. Other algorithms may produce incorrect results or yield
suboptimal performance. Handle with care!
-----------------------------------------------------------------------------
WARNING: The distance between some ions is very small. Please check the
nearest-neighbor list in the OUTCAR file.
I HOPE YOU KNOW WHAT YOU ARE DOING!
-----------------------------------------------------------------------------
POSCAR, INCAR and KPOINTS ok, starting setup
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
FFT: planning ...
WAVECAR not read
entering main loop
       N       E                     dE             d eps       ncg     rms        rms(c)
DAV:   1     0.110103441120E+04    0.11010E+04   -0.41134E+04  3456   0.800E+02
DAV:   2     0.420557633873E+03   -0.68048E+03   -0.63542E+03  4752   0.164E+02
DAV:   3     0.282631361567E+03   -0.13793E+03   -0.13205E+03  5598   0.648E+01
DAV:   4     0.271956959292E+03   -0.10674E+02   -0.10570E+02  4950   0.200E+01
DAV:   5     0.271594560382E+03   -0.36240E+00   -0.36181E+00  5013   0.405E+00  0.106E+03
DAV:   6     0.310700738294E+03    0.39106E+02   -0.51293E+02  5949   0.572E+01  0.209E+02
DAV:   7     0.247112029276E+03   -0.63589E+02   -0.55873E+02  4986   0.469E+01  0.342E+02
DAV:   8     0.362891383994E+03    0.11578E+03   -0.26007E+02  4887   0.339E+01  0.225E+02
DAV:   9     0.361878706174E+03   -0.10127E+01   -0.55041E+01  4662   0.124E+01  0.201E+02
DAV:  10     0.361950928981E+03    0.72223E-01   -0.39972E+00  4518   0.445E+00  0.185E+02
DAV:  11     0.368744997327E+03    0.67941E+01   -0.20326E+01  4275   0.103E+01  0.128E+02
DAV:  12     0.368735337494E+03   -0.96598E-02   -0.33454E+00  4437   0.355E+00  0.112E+02
DAV:  13     0.368074936830E+03   -0.66040E+00   -0.93590E+00  4482   0.671E+00  0.133E+02
DAV:  14     0.366627182210E+03   -0.14478E+01   -0.72580E+00  4077   0.451E+00  0.151E+02
DAV:  15     0.366703677283E+03    0.76495E-01   -0.46243E-01  4707   0.138E+00  0.151E+02
DAV:  16     0.363528103778E+03   -0.31756E+01   -0.33446E+00  4392   0.468E+00  0.150E+02
DAV:  17     0.364007200656E+03    0.47910E+00   -0.11809E+00  3915   0.186E+00  0.156E+02
-----------------------------------------------------------------------------
ERROR EDDDAV: Call to ZHEGV failed. Returncode = 8 1 9
----> I REFUSE TO CONTINUE WITH THIS SICK JOB ... BYE!!! <----
-----------------------------------------------------------------------------
*****************************
Error running VASP parallel with MPI
#!/bin/bash
cd "/home/user/MD/TaskServer/Tasks/172.16.0.39-32000-task39749"
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh -np 3 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu" 1 1 1
*****************************
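Editor's note on the failure above: EDDDAV calls LAPACK's ZHEGV to solve the generalized eigenproblem A x = lambda B x, which starts by Cholesky-factorizing the overlap matrix B. A nonzero return code means the diagonalization failed; a code larger than the matrix dimension specifically means B was not positive definite, a condition consistent with the "distance between some ions is very small" warning earlier in this log. A minimal sketch of that failure mode (toy hand-made matrices, not VASP data; the `cholesky` helper is illustrative, not VASP code):

```python
import math

def cholesky(a):
    """Plain Cholesky factorization of a symmetric matrix.

    Raises ValueError when the matrix is not positive definite, the
    same condition LAPACK's ZPOTRF (and hence ZHEGV) reports through
    a nonzero INFO value.
    """
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:
                    raise ValueError(
                        f"leading minor of order {i + 1} is not positive definite")
                l[i][j] = math.sqrt(d)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

# A well-conditioned overlap matrix factorizes fine...
cholesky([[2.0, 0.5],
          [0.5, 1.0]])          # positive definite: succeeds

# ...but an indefinite one (eigenvalues 3 and -1) fails, which is the
# situation ZHEGV signals when ions sit nearly on top of each other.
try:
    cholesky([[1.0, 2.0],
              [2.0, 1.0]])
except ValueError as err:
    print("Cholesky failed:", err)
```

In practice the fix is on the structure side (check the nearest-neighbor list in OUTCAR for the overlapping ions), not in the solver.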