Running VASP on one full node using srun

Barbara_Szpunar1
Newbie
Posts: 1
Joined: Sun Feb 07, 2021 8:59 pm

Running VASP on one full node using srun

#1 Post by Barbara_Szpunar1 » Sat Feb 20, 2021 5:01 pm

When running Si in a loop on one full node (48 processors) using srun -n 48, the calculation does not run correctly. Please see the warning, batch job, and summary listed below:

Code:

vasp.6.1.2 22Jul20 (build Feb 05 2021 00:14:58) complex                        
  
 executed on             LinuxIFC date 2021.02.10  14:16:11
 running on   48 total cores
 distrk:  each k-point on   48 cores,    1 groups
 distr:  one band on NCORE=   1 cores,   48 groups


--------------------------------------------------------------------------------------------------------


 INCAR:
 POTCAR:   PAW_PBE Si 05Jan2001                   
 -----------------------------------------------------------------------------
|                                                                             |
|           W    W    AA    RRRRR   N    N  II  N    N   GGGG   !!!           |
|           W    W   A  A   R    R  NN   N  II  NN   N  G    G  !!!           |
|           W    W  A    A  R    R  N N  N  II  N N  N  G       !!!           |
|           W WW W  AAAAAA  RRRRR   N  N N  II  N  N N  G  GGG   !            |
|           WW  WW  A    A  R   R   N   NN  II  N   NN  G    G                |
|           W    W  A    A  R    R  N    N  II  N    N   GGGG   !!!           |
|                                                                             |
|     For optimal performance we recommend to set                             |
|       NCORE = 4 - approx SQRT(number of cores).                             |
|     NCORE specifies how many cores store one orbital (NPAR=cpu/NCORE).      |
|     This setting can greatly improve the performance of VASP for DFT.       |
|     The default, NCORE=1 might be grossly inefficient on modern             |
|     multi-core architectures or massively parallel machines. Do your        |
|     own testing!!!!                                                         |
|     Unfortunately you need to use the default for GW and RPA                |
|     calculations (for HF NCORE is supported but not extensively tested      |
|     yet).                                                                  |
 -----------------------------------------------------------------------------
Unfortunately I could not attach the files, since the forum reports a wrong extension.

Therefore I have to copy them:

Batch job:

Code:

#!/bin/bash -l
#SBATCH --job-name=VASP_Si
#SBATCH --nodes=1
#SBATCH --tasks-per-node=48
#SBATCH --mem=0
#SBATCH --time=1:00:00

# Load the modules:
module load StdEnv/2020 intel/2020.1.217 openmpi/4.0.3 vasp/6.1.2

# The real work loop starts here

# Remove leftovers from a previous run (-f: ignore missing files)
rm -f WAVECAR SUMMARY.fcc

for i in  3.5 3.6 3.7 3.8 3.9 4.0 4.1 4.2 4.3 ; do
# Write a POSCAR for fcc Si with lattice constant $i
cat >POSCAR <<!
fcc:
   $i
 0.5 0.5 0.0
 0.0 0.5 0.5
 0.5 0.0 0.5
   1
cartesian
 0 0 0
!
echo "a= $i"

srun -n 48 vasp_std

# Append the final free-energy (F=) line from OSZICAR to the summary
E=`awk '/F=/ {print $0}' OSZICAR` ; echo $i $E  >>SUMMARY.fcc
done
cat SUMMARY.fcc
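Since the loop reuses whatever INCAR already sits in the run directory, the NCORE warning above can be acted on by writing NCORE into that INCAR before the loop starts. With 48 cores, SQRT(48) ≈ 7, so NCORE = 4 or 6 (both divide 48 evenly) would follow the warning's advice. A minimal sketch, assuming the INCAR may simply be regenerated (the SYSTEM tag is illustrative):

Code:

# Hypothetical INCAR for this test; only NCORE follows from the warning above
cat > INCAR <<EOF
SYSTEM = fcc Si
NCORE = 4    ! 48 ranks -> NPAR = 48/4 = 12 band groups
EOF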

Code:

3.5
3.6
3.7
3.8 1 F= -.48645041E+01 E0= -.48630063E+01 d E =-.299563E-02
3.9 1 F= -.48773847E+01 E0= -.48758538E+01 d E =-.306175E-02
4.0 1 F= -.48487436E+01 E0= -.48481092E+01 d E =-.126876E-02
4.1
4.2
4.3 1 F= -.45831166E+01 E0= -.45811836E+01 d E =-.386594E-02
However, it runs fine when changing to only 4 processors: srun -n 4 vasp_std

Code:

 vasp.6.1.2 22Jul20 (build Feb 05 2021 00:14:58) complex

 executed on             LinuxIFC date 2021.02.20  08:12:57
 running on    4 total cores
 distrk:  each k-point on    4 cores,    1 groups
 distr:  one band on NCORE=   1 cores,    4 groups

and SUMMARY.fcc is:

Code:

3.5 1 F= -.44190139E+01 E0= -.44166572E+01 d E =-.471331E-02
3.6 1 F= -.46466840E+01 E0= -.46445590E+01 d E =-.424999E-02
3.7 1 F= -.47521909E+01 E0= -.47509426E+01 d E =-.249669E-02
3.8 1 F= -.47936586E+01 E0= -.47923103E+01 d E =-.269665E-02
3.9 1 F= -.47743538E+01 E0= -.47725494E+01 d E =-.360878E-02
4.0 1 F= -.47074598E+01 E0= -.47057908E+01 d E =-.333808E-02
4.1 1 F= -.46065980E+01 E0= -.46013301E+01 d E =-.105359E-01
4.2 1 F= -.45108123E+01 E0= -.45059370E+01 d E =-.975067E-02
4.3 1 F= -.44175090E+01 E0= -.44133512E+01 d E =-.831553E-02
Why is this happening? Quantum ESPRESSO (QE) runs fine on one full node.

Thank you for your help,

Barbara

henrique_miranda
Global Moderator
Posts: 505
Joined: Mon Nov 04, 2019 12:41 pm

Re: Running VASP on one full node using srun

#2 Post by henrique_miranda » Mon Feb 22, 2021 6:33 am

You haven't posted the error output of the failed calculations when running on 48 cores, so it is not easy for me to help.
Also, since you did not post the INCAR file, I don't know which type of calculation you are doing or whether it makes sense to use so many cores.
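For reference, a minimal way to capture each run's output so the error can be posted (the file naming is illustrative):

Code:

# Inside the loop: keep stdout/stderr of every run in its own file
srun -n 48 vasp_std > vasp_a${i}.log 2>&1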

I assume you are not using a very dense k-point grid, but if you are, then I suggest you use:
wiki/index.php/KPAR

Furthermore, I recommend that you read this section:
wiki/index.php/NCORE
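For illustration, a sketch of how these two tags combine on 48 cores (the values are examples, not recommendations for this system): KPAR splits the MPI ranks into groups that each handle a subset of the k-points, and NCORE sets how many cores share one orbital within each group.

Code:

# Hypothetical INCAR fragment for a 48-rank run (illustrative values)
cat > INCAR <<EOF
KPAR = 4     ! 48 ranks -> 4 k-point groups of 12 cores each
NCORE = 4    ! 4 cores per orbital in each group (NPAR = 12/4 = 3)
EOF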

Hope this helps :)
