Using device 0 (rank 0, local rank 0, local size 4) : Tesla K80
Using device 2 (rank 2, local rank 2, local size 4) : Tesla K80
Using device 3 (rank 3, local rank 3, local size 4) : Tesla K80
Using device 1 (rank 1, local rank 1, local size 4) : Tesla K80
 running on    4 total cores
 distrk:  each k-point on    4 cores,    1 groups
 distr:  one band on    1 cores,    4 groups
  
 *******************************************************************************
  You are running the GPU port of VASP! When publishing results obtained with
  this version, please cite:
   - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
   - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
  
  in addition to the usual required citations (see manual).
  
  GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
 *******************************************************************************
  
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     Please note that VASP has recently been ported to GPU by means of       |
|     OpenACC. You are running the CUDA-C GPU-port of VASP, which is          |
|     deprecated and no longer actively developed, maintained, or             |
|     supported. In the near future, the CUDA-C GPU-port of VASP will be      |
|     dropped completely. We encourage you to switch to the OpenACC           |
|     GPU-port of VASP as soon as possible.                                   |
|                                                                             |
 -----------------------------------------------------------------------------

 vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex                        
  
 MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu from svn 16685
 
 This VASP executable licensed from Materials Design, Inc.
 
 POSCAR found type information on POSCAR  Si O H
 POSCAR found :  3 types and      35 ions
 NWRITE =            1
 LDA part: xc-table for Pade appr. of Perdew
  
 WARNING: The GPU port of VASP has been extensively
 tested for: ALGO=Normal, Fast, and VeryFast.
 Other algorithms may produce incorrect results or
 yield suboptimal performance. Handle with care!
  
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     The distance between some ions is very small. Please check the          |
|     nearest-neighbor list in the OUTCAR file.                               |
|     I HOPE YOU KNOW WHAT YOU ARE DOING!                                     |
|                                                                             |
 -----------------------------------------------------------------------------
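The warning above asks for a check of the nearest-neighbor list in OUTCAR. As an offline sanity check, the minimum interatomic distance can also be computed directly from the POSCAR coordinates. A minimal sketch, assuming an orthorhombic cell and direct (fractional) coordinates; the lattice constants and positions below are hypothetical placeholders, not the actual 35-ion Si/O/H structure from this run:

```python
import math

# Hypothetical orthorhombic cell (Angstrom); the real lattice comes from POSCAR.
cell = (10.0, 10.0, 10.0)

# Hypothetical fractional coordinates; the first two atoms are deliberately close.
frac = [
    (0.10, 0.10, 0.10),
    (0.11, 0.10, 0.10),
    (0.50, 0.50, 0.50),
    (0.90, 0.90, 0.90),
]

def min_image_dist(a, b, cell):
    """Distance under the minimum-image convention (orthorhombic cells only)."""
    d2 = 0.0
    for fa, fb, length in zip(a, b, cell):
        df = fa - fb
        df -= round(df)              # wrap the fractional difference into [-0.5, 0.5]
        d2 += (df * length) ** 2
    return math.sqrt(d2)

pairs = [(i, j, min_image_dist(frac[i], frac[j], cell))
         for i in range(len(frac)) for j in range(i + 1, len(frac))]
i, j, d = min(pairs, key=lambda p: p[2])
print(f"closest pair: atoms {i} and {j}, d = {d:.3f} A")
```

A distance well below a typical bond length (here 0.100 A for the placeholder pair) is exactly the situation this warning flags, and it frequently leads to the diagonalization failure seen later in this log.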

 POSCAR, INCAR and KPOINTS ok, starting setup
creating 32 CUDA streams...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.128869705831E+04    0.12887E+04   -0.38348E+04  1408   0.921E+02 
DAV:   2     0.630204391766E+03   -0.65849E+03   -0.63323E+03  1960   0.182E+02 
DAV:   3     0.501569181047E+03   -0.12864E+03   -0.11935E+03  2288   0.676E+01 
DAV:   4     0.478382720834E+03   -0.23186E+02   -0.21563E+02  2288   0.252E+01 
DAV:   5     0.476867844447E+03   -0.15149E+01   -0.14998E+01  1920   0.618E+00    0.123E+03
DAV:   6     0.471086387121E+03   -0.57815E+01   -0.17080E+03  2192   0.621E+01    0.480E+02
DAV:   7     0.548932660380E+03    0.77846E+02   -0.27014E+02  2256   0.303E+01    0.249E+02
DAV:   8     0.547917853586E+03   -0.10148E+01   -0.62199E+01  1944   0.154E+01    0.331E+02
DAV:   9     0.547278736929E+03   -0.63912E+00   -0.44839E+00  2328   0.433E+00    0.356E+02
DAV:  10     0.554971549018E+03    0.76928E+01   -0.54846E+00  2240   0.456E+00    0.373E+02
DAV:  11     0.549305056429E+03   -0.56665E+01   -0.14834E+01  1816   0.479E+00    0.385E+02
DAV:  12     0.543129756445E+03   -0.61753E+01   -0.14557E+01  1872   0.559E+00    0.321E+02
DAV:  13     0.548270334011E+03    0.51406E+01   -0.39643E+00  1896   0.327E+00    0.307E+02
DAV:  14     0.556592357104E+03    0.83220E+01   -0.71705E+00  1920   0.475E+00    0.356E+02
DAV:  15     0.557017087106E+03    0.42473E+00   -0.65948E+01  2072   0.138E+01    0.539E+02
DAV:  16     0.564267382722E+03    0.72503E+01   -0.86915E+01  1784   0.207E+01    0.778E+02
DAV:  17     0.563036957920E+03   -0.12304E+01   -0.24006E+01  2360   0.851E+00    0.854E+02
DAV:  18     0.562864342027E+03   -0.17262E+00   -0.69282E+00  2272   0.396E+00    0.877E+02
DAV:  19     0.562823542147E+03   -0.40800E-01   -0.83128E-01  2320   0.132E+00    0.881E+02
DAV:  20     0.563178998423E+03    0.35546E+00   -0.16542E-01  2024   0.864E-01    0.876E+02
 -----------------------------------------------------------------------------
|                                                                             |
|     EEEEEEE  RRRRRR   RRRRRR   OOOOOOO  RRRRRR      ###     ###     ###     |
|     E        R     R  R     R  O     O  R     R     ###     ###     ###     |
|     E        R     R  R     R  O     O  R     R     ###     ###     ###     |
|     EEEEE    RRRRRR   RRRRRR   O     O  RRRRRR       #       #       #      |
|     E        R   R    R   R    O     O  R   R                               |
|     E        R    R   R    R   O     O  R    R      ###     ###     ###     |
|     EEEEEEE  R     R  R     R  OOOOOOO  R     R     ###     ###     ###     |
|                                                                             |
|     Error EDDDAV: Call to ZHEGV failed. Returncode = 14 2 16                |
|                                                                             |
|       ---->  I REFUSE TO CONTINUE WITH THIS SICK JOB ... BYE!!! <----       |
|                                                                             |
 -----------------------------------------------------------------------------
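ZHEGV is the LAPACK routine for the generalized Hermitian eigenproblem A x = lambda B x, which VASP calls during the Davidson (EDDDAV) subspace diagonalization. In LAPACK's convention, an INFO value larger than the matrix order means the Cholesky factorization of the overlap matrix B failed because B is not positive definite, which is consistent with the small-ion-distance warning earlier in this log (nearly coincident ions produce nearly linearly dependent basis states). A minimal pure-Python sketch of the Cholesky pivot test that LAPACK performs internally; the matrices are hypothetical:

```python
import math

def cholesky_info(B):
    """Attempt a Cholesky factorization of a symmetric matrix B.

    Returns 0 on success, or (LAPACK-style) the 1-based index of the
    first non-positive pivot when B is not positive definite.
    """
    n = len(B)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                pivot = B[i][i] - s
                if pivot <= 0.0:
                    return i + 1        # factorization breaks down here
                L[i][j] = math.sqrt(pivot)
            else:
                L[i][j] = (B[i][j] - s) / L[j][j]
    return 0

# A well-conditioned overlap matrix factorizes fine ...
ok = cholesky_info([[1.0, 0.1], [0.1, 1.0]])
# ... but (near-)duplicate basis vectors make it singular, and the pivot test fails.
bad = cholesky_info([[1.0, 1.0], [1.0, 1.0]])
```

The usual remedies are to fix the overlapping ions in POSCAR, or, if the geometry is intentional, to restart with a more robust setting (the GPU port's own warning above recommends staying within ALGO=Normal/Fast/VeryFast).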

*****************************
Error running VASP parallel with MPI

#!/bin/bash
cd "/home/user/MD/TaskServer/Tasks/172.16.0.59-32000-task14609"
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh  -np 4 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"

1
*****************************