Using device 3 (rank 3, local rank 3, local size 4) : Tesla K80
Using device 2 (rank 2, local rank 2, local size 4) : Tesla K80
Using device 0 (rank 0, local rank 0, local size 4) : Tesla K80
Using device 1 (rank 1, local rank 1, local size 4) : Tesla K80
 running on    4 total cores
 distrk:  each k-point on    4 cores,    1 groups
 distr:  one band on    1 cores,    4 groups
  
 *******************************************************************************
  You are running the GPU port of VASP! When publishing results obtained with
  this version, please cite:
   - M. Hacene et al., http://dx.doi.org/10.1002/jcc.23096
   - M. Hutchinson and M. Widom, http://dx.doi.org/10.1016/j.cpc.2012.02.017
  
  in addition to the usual required citations (see manual).
  
  GPU developers: A. Anciaux-Sedrakian, C. Angerer, and M. Hutchinson.
 *******************************************************************************
  
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     Please note that VASP has recently been ported to GPU by means of       |
|     OpenACC. You are running the CUDA-C GPU-port of VASP, which is          |
|     deprecated and no longer actively developed, maintained, or             |
|     supported. In the near future, the CUDA-C GPU-port of VASP will be      |
|     dropped completely. We encourage you to switch to the OpenACC           |
|     GPU-port of VASP as soon as possible.                                   |
|                                                                             |
 -----------------------------------------------------------------------------
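 The warning above means this run is using the deprecated CUDA-C port (the vasp_gpu binary launched by the script at the end of this report). As a minimal sketch, assuming an OpenACC GPU build of VASP is also installed on this machine, the same four-rank job could be pointed at it instead; the executable path below is a placeholder, not taken from this log:

 #!/bin/bash
 # Placeholder path: point this at an actual OpenACC-enabled VASP build.
 VASP_OPENACC="/path/to/vasp-openacc/bin/vasp_std"
 mpirun -np 4 "$VASP_OPENACC"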

 vasp.6.2.1 16May21 (build Apr 11 2022 11:03:26) complex                        
  
 MD_VERSION_INFO: Compiled 2022-04-11T18:25:55-UTC in devlin.sd.materialsdesign.com:/home/medea2/data/build/vasp6.2.1/16685/x86_64/src/src/build/gpu from svn 16685
 
 This VASP executable licensed from Materials Design, Inc.
 
 POSCAR found type information on POSCAR Si O H 
 POSCAR found :  3 types and      35 ions
 NWRITE =            1
 NWRITE =            1
 NWRITE =            1
 NWRITE =            1
 LDA part: xc-table for Pade appr. of Perdew
  
 WARNING: The GPU port of VASP has been extensively
 tested for: ALGO=Normal, Fast, and VeryFast.
 Other algorithms may produce incorrect results or
 yield suboptimal performance. Handle with care!
  
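 The ALGO warning above is worth checking before anything else. A quick sketch of how to confirm which algorithm this job actually requested, assuming the INCAR sits in the run directory shown in the launch script at the end of this report:

 # Show the ALGO (and IALGO) settings the job was started with; the GPU
 # port is only validated for ALGO = Normal, Fast, or VeryFast.
 grep -i -E "^[[:space:]]*(ALGO|IALGO)" INCAR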
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     The distance between some ions is very small. Please check the          |
|     nearest-neighbor list in the OUTCAR file.                               |
|     I HOPE YOU KNOW WHAT YOU ARE DOING!                                     |
|                                                                             |
 -----------------------------------------------------------------------------
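 VASP's own advice here is to inspect the nearest-neighbor list it writes to OUTCAR. A small sketch of that check (the exact header string may differ slightly between VASP versions):

 # Print the nearest-neighbor table so unusually short distances
 # between the 35 ions can be spotted directly.
 grep -A 40 "nearest neighbor table" OUTCAR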

 POSCAR, INCAR and KPOINTS ok, starting setup
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUDA streams...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
creating 32 CUFFT plans with grid size 54 x 96 x 40...
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.102479473426E+04    0.10248E+04   -0.38844E+04  1408   0.922E+02 
DAV:   2     0.318574012148E+03   -0.70622E+03   -0.68025E+03  2000   0.185E+02 
DAV:   3     0.153115212082E+03   -0.16546E+03   -0.16117E+03  2600   0.766E+01 
DAV:   4     0.136853963373E+03   -0.16261E+02   -0.15816E+02  2136   0.252E+01 
DAV:   5     0.136084375095E+03   -0.76959E+00   -0.75836E+00  2232   0.586E+00    0.668E+02
DAV:   6     0.203676373327E+03    0.67592E+02   -0.11016E+03  2160   0.723E+01    0.126E+02
DAV:   7     0.200080794097E+03   -0.35956E+01   -0.50869E+02  2024   0.478E+01    0.982E+01
DAV:   8     0.247699415095E+03    0.47619E+02   -0.21145E+02  2072   0.360E+01    0.105E+02
DAV:   9     0.243060679652E+03   -0.46387E+01   -0.47872E+01  1952   0.130E+01    0.910E+01
DAV:  10     0.237626034666E+03   -0.54346E+01   -0.36649E+01  2248   0.127E+01    0.100E+02
DAV:  11     0.239150463261E+03    0.15244E+01   -0.44779E+01  1872   0.110E+01    0.118E+02
DAV:  12     0.227283702904E+03   -0.11867E+02   -0.24612E+01  2048   0.125E+01    0.146E+02
DAV:  13     0.156278060448E+03   -0.71006E+02   -0.23366E+02  1952   0.377E+01    0.243E+02
DAV:  14     0.153642446592E+03   -0.26356E+01   -0.49061E+01  1712   0.153E+01    0.255E+02
DAV:  15     0.157043090072E+03    0.34006E+01   -0.12270E+01  1808   0.885E+00    0.233E+02
DAV:  16     0.169325441291E+03    0.12282E+02   -0.74994E+00  2032   0.573E+00    0.212E+02
DAV:  17     0.183764329605E+03    0.14439E+02   -0.13047E+01  1696   0.945E+00    0.179E+02
DAV:  18     0.190942071742E+03    0.71777E+01   -0.16381E+01  2280   0.481E+00    0.155E+02
DAV:  19     0.202385018473E+03    0.11443E+02   -0.44385E+00  1944   0.488E+00    0.136E+02
DAV:  20     0.206375816262E+03    0.39908E+01   -0.15270E+00  2024   0.301E+00    0.126E+02
DAV:  21     0.213557163990E+03    0.71813E+01   -0.14882E+00  1912   0.270E+00    0.122E+02
DAV:  22     0.224643280989E+03    0.11086E+02   -0.57140E+00  1904   0.557E+00    0.115E+02
 -----------------------------------------------------------------------------
|                                                                             |
|     EEEEEEE  RRRRRR   RRRRRR   OOOOOOO  RRRRRR      ###     ###     ###     |
|     E        R     R  R     R  O     O  R     R     ###     ###     ###     |
|     E        R     R  R     R  O     O  R     R     ###     ###     ###     |
|     EEEEE    RRRRRR   RRRRRR   O     O  RRRRRR       #       #       #      |
|     E        R   R    R   R    O     O  R   R                               |
|     E        R    R   R    R   O     O  R    R      ###     ###     ###     |
|     EEEEEEE  R     R  R     R  OOOOOOO  R     R     ###     ###     ###     |
|                                                                             |
|     Error EDDDAV: Call to ZHEGV failed. Returncode = 7 1 8                  |
|                                                                             |
|       ---->  I REFUSE TO CONTINUE WITH THIS SICK JOB ... BYE!!! <----       |
|                                                                             |
 -----------------------------------------------------------------------------

*****************************
Error running VASP parallel with MPI

#!/bin/bash
cd "/home/user/MD/TaskServer/Tasks/172.16.0.59-32000-task14807"
export PATH="/home/user/MD/Linux-x86_64/IntelMPI5/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/MD/Linux-x86_64/IntelMPI5/lib:/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64"
"/home/user/MD/Linux-x86_64/IntelMPI5/bin/mpirun" -r ssh  -np 4 "/home/user/MD/TaskServer/Tools/vasp-gpu6.2.1/Linux-x86_64/vasp_gpu"

1
1
1
1
*****************************