Using device 1 (rank 1, local rank 1, local size 4) : Tesla K80
Using device 0 (rank 0, local rank 0, local size 4) : Tesla K80
Using device 2 (rank 2, local rank 2, local size 4) : Tesla K80
Using device 3 (rank 3, local rank 3, local size 4) : Tesla K80
 running on    4 total cores
 distrk:  each k-point on    4 cores,    1 groups
 distr:  one band on    1 cores,    4 groups
  
 *******************************************************************************
  You are running the GPU port of VASP! When publishing results obtained with
  this version, please cite:
   - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
   - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
  
  in addition to the usual required citations (see manual).
  
  GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
 *******************************************************************************
  
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     Please note that VASP has recently been ported to GPU by means of       |
|     OpenACC. You are running the CUDA-C GPU-port of VASP, which is          |
|     deprecated and no longer actively developed, maintained, or             |
|     supported. In the near future, the CUDA-C GPU-port of VASP will be      |
|     dropped completely. We encourage you to switch to the OpenACC           |
|     GPU-port of VASP as soon as possible.                                   |
|                                                                             |
 -----------------------------------------------------------------------------

 vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex                        
  
 MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu from svn 16685
 
 This VASP executable licensed from Materials Design, Inc.
 
 POSCAR found type information on POSCAR Si O H 
 POSCAR found :  3 types and      35 ions
 NWRITE =            1
 NWRITE =            1
 NWRITE =            1
 NWRITE =            1
 LDA part: xc-table for Pade appr. of Perdew
  
 WARNING: The GPU port of VASP has been extensively
 tested for: ALGO=Normal, Fast, and VeryFast.
 Other algorithms may produce incorrect results or
 yield suboptimal performance. Handle with care!
  
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     The distance between some ions is very small. Please check the          |
|     nearest-neighbor list in the OUTCAR file.                               |
|     I HOPE YOU KNOW WHAT YOU ARE DOING!                                     |
|                                                                             |
 -----------------------------------------------------------------------------

 POSCAR, INCAR and KPOINTS ok, starting setup
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.120818425093E+04    0.12082E+04   -0.39162E+04  1408   0.924E+02 
DAV:   2     0.456207405252E+03   -0.75198E+03   -0.71867E+03  1936   0.195E+02 
DAV:   3     0.289536564214E+03   -0.16667E+03   -0.16289E+03  2496   0.800E+01 
DAV:   4     0.276585585927E+03   -0.12951E+02   -0.12709E+02  1960   0.252E+01 
DAV:   5     0.275993164299E+03   -0.59242E+00   -0.58898E+00  2064   0.612E+00    0.107E+03
DAV:   6     0.306762260417E+03    0.30769E+02   -0.10907E+03  2584   0.683E+01    0.146E+02
DAV:   7     0.288964284726E+03   -0.17798E+02   -0.12895E+03  2352   0.721E+01    0.290E+02
DAV:   8     0.367122012384E+03    0.78158E+02   -0.33698E+02  2120   0.444E+01    0.291E+02
DAV:   9     0.362285136532E+03   -0.48369E+01   -0.81062E+01  1968   0.176E+01    0.241E+02
DAV:  10     0.361470634115E+03   -0.81450E+00   -0.77486E+00  1976   0.661E+00    0.250E+02
DAV:  11     0.363909206165E+03    0.24386E+01   -0.24362E+00  1880   0.372E+00    0.262E+02
DAV:  12     0.364314812861E+03    0.40561E+00   -0.68335E+00  1944   0.471E+00    0.297E+02
DAV:  13     0.337478295779E+03   -0.26837E+02   -0.95035E+01  1920   0.174E+01    0.267E+02
DAV:  14     0.350089030339E+03    0.12611E+02   -0.23451E+01  1864   0.980E+00    0.247E+02
DAV:  15     0.354471544042E+03    0.43825E+01   -0.24309E+01  2000   0.453E+00    0.251E+02
DAV:  16     0.351907162723E+03   -0.25644E+01   -0.27287E+01  2288   0.915E+00    0.116E+02
DAV:  17     0.358965109622E+03    0.70579E+01   -0.14729E+01  1672   0.650E+00    0.631E+01
DAV:  18     0.361887151037E+03    0.29220E+01   -0.27820E+00  1856   0.360E+00    0.577E+01
DAV:  19     0.369982416310E+03    0.80953E+01   -0.10626E+01  1880   0.481E+00    0.520E+01
DAV:  20     0.373417999584E+03    0.34356E+01   -0.54782E+00  1720   0.497E+00    0.436E+01
DAV:  21     0.373889583218E+03    0.47158E+00   -0.38480E+00  2016   0.227E+00    0.419E+01
DAV:  22     0.373883541759E+03   -0.60415E-02   -0.19217E+00  1680   0.150E+00    0.379E+01
DAV:  23     0.373816066316E+03   -0.67475E-01   -0.33934E-01  1632   0.861E-01    0.363E+01
 -----------------------------------------------------------------------------
|                                                                             |
|     EEEEEEE  RRRRRR   RRRRRR   OOOOOOO  RRRRRR      ###     ###     ###     |
|     E        R     R  R     R  O     O  R     R     ###     ###     ###     |
|     E        R     R  R     R  O     O  R     R     ###     ###     ###     |
|     EEEEE    RRRRRR   RRRRRR   O     O  RRRRRR       #       #       #      |
|     E        R   R    R   R    O     O  R   R                               |
|     E        R    R   R    R   O     O  R    R      ###     ###     ###     |
|     EEEEEEE  R     R  R     R  OOOOOOO  R     R     ###     ###     ###     |
|                                                                             |
|     Error EDDDAV: Call to ZHEGV failed. Returncode = 7 1 8                  |
|                                                                             |
|       ---->  I REFUSE TO CONTINUE WITH THIS SICK JOB ... BYE!!! <----       |
|                                                                             |
 -----------------------------------------------------------------------------
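For context: the "Returncode" values printed above are the INFO codes returned by LAPACK's ZHEGV, the generalized Hermitian eigensolver that VASP calls inside the Davidson (EDDAV) step. The following is a minimal standalone sketch, assuming only the LAPACKE C interface and no VASP code, of how such an INFO code is produced and how its documented ranges decode:

/* Hypothetical standalone illustration, not VASP source: solve a tiny 2x2
 * generalized Hermitian eigenproblem A x = lambda B x with ZHEGV (via the
 * LAPACKE C interface) and decode the INFO code using the ranges documented
 * for ZHEGV. */
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    lapack_int n = 2;
    lapack_complex_double a[4], b[4];
    double w[2];                                 /* eigenvalues on success */

    /* Upper triangle of the Hermitian matrix A (row-major, uplo = 'U') */
    a[0] = lapack_make_complex_double(2.0, 0.0);
    a[1] = lapack_make_complex_double(0.5, 0.5);
    a[2] = lapack_make_complex_double(0.0, 0.0); /* lower triangle, not referenced */
    a[3] = lapack_make_complex_double(3.0, 0.0);

    /* Overlap matrix B: must be Hermitian positive definite */
    b[0] = lapack_make_complex_double(1.0, 0.0);
    b[1] = lapack_make_complex_double(0.1, 0.0);
    b[2] = lapack_make_complex_double(0.0, 0.0); /* lower triangle, not referenced */
    b[3] = lapack_make_complex_double(1.0, 0.0);

    /* itype = 1 requests A x = lambda B x; 'V' also computes eigenvectors */
    lapack_int info = LAPACKE_zhegv(LAPACK_ROW_MAJOR, 1, 'V', 'U',
                                    n, a, n, b, n, w);

    if (info == 0)
        printf("ok: eigenvalues %g %g\n", w[0], w[1]);
    else if (info < 0)
        printf("argument %d had an illegal value\n", (int)(-info));
    else if (info <= n)
        printf("eigensolver did not converge (%d off-diagonal elements)\n",
               (int)info);
    else
        printf("leading minor %d of B is not positive definite\n",
               (int)(info - n));
    return 0;
}

Build with, e.g., cc zhegv_check.c -llapacke -llapack -lblas. Per the LAPACK documentation, INFO codes in the 1..n range mean the internal tridiagonal eigensolver did not converge, while codes above n mean the overlap matrix handed to ZHEGV was not positive definite.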

*****************************
Error running VASP parallel with MPI

#!/bin/bash
# change to the task working directory
cd "/home/user/MD/TaskServer/Tasks/172.16.0.59-32000-task12889"
# put the bundled Intel MPI on PATH and add the MPI and VASP library directories to LD_LIBRARY_PATH
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
# launch 4 MPI ranks of the GPU VASP executable, using ssh as the remote shell
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh  -np 4 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"

1
1
1
1
*****************************