I am trying to compile VASP 6.3.2 on our systems (CPU, Intel MPI), and the testsuite initially runs fine. However, at the bulk_GaAs_ACFDT test, specifically its ACFDT step, the run hangs indefinitely. I am using the intel/2021.3 compilers with intelmpi/2021.3.0.
The test job hangs at the following point in the output:
Code:
k-point 25 : 0.3333 0.6667 0.3333 plane waves: 101
k-point 26 : -0.6667-0.3333-0.3333 plane waves: 101
k-point 27 : 0.6667 0.3333 0.3333 plane waves: 101
maximum and minimum number of plane-waves per node : 113 98
maximum number of plane-waves: 113
maximum index in each direction:
IXMAX= 3 IYMAX= 3 IZMAX= 3
IXMIN= -3 IYMIN= -3 IZMIN= -3
exchange correlation table for LEXCH = 8
RHO(1)= 0.500 N(1) = 2000
RHO(2)= 100.500 N(2) = 4000
min. memory requirement per mpi rank 55.4 MB, per node 221.6 MB
shmem allocating 16 responsefunctions rank= 114
response function shared by NCSHMEM nodes 2
all allocation done, memory is now:
total amount of memory used by VASP MPI-rank0 57610. kBytes
=======================================================================
base : 30000. kBytes
nonl-proj : 1705. kBytes
fftplans : 406. kBytes
grid : 758. kBytes
one-center: 16. kBytes
HF : 135. kBytes
nonlr-proj: 1125. kBytes
wavefun : 20971. kBytes
response : 2494. kBytes
--------------------------------------------------------------------------------------------------------
NQ= 1 0.0000 0.0000 0.0000,
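For what it's worth, the hang can be inspected with a backtrace; a minimal sketch, assuming gdb is available on the compute node and that <pid> is one of the stalled MPI ranks (the process name vasp_std matches my build):
Code:
# list the MPI ranks of the hung test, with their command lines
pgrep -a vasp_std
# attach to one rank non-interactively and dump all thread backtraces
gdb -p <pid> -batch -ex "thread apply all bt"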
If I kill this hung test job so the suite proceeds to the next one (see the sketch after the summary below), it then often hangs in the GW calculations as well. Here are the tests that failed in this run:
Code:
==================================================================
SUMMARY:
==================================================================
The following tests failed, please check the output file manually:
bulk_GaAs_ACFDT bulk_GaAs_ACFDT_RPR bulk_GaAs_G0W0_sym bulk_GaAs_G0W0_sym_RPR bulk_GaAs_scGW0_ALGO=D_sym bulk_GaAs_scGW0_ALGO=D_sym_RPR bulk_GaAs_scGW0_sym bulk_GaAs_scGW0_sym_RPR bulk_GaAs_scGW_ALGO=D_sym bulk_GaAs_scGW_ALGO=D_sym_RPR bulk_GaAs_scGW_sym bulk_GaAs_scGW_sym_RPR bulk_InP_SOC_G0W0_sym bulk_InP_SOC_G0W0_sym_RPR bulk_SiO2_elastic_properties_ibrion6_RPR bulk_SiO2_elastic_properties_ibrion8 HEG_333_LW SiC8_GW0R SiC_ACFDTR_T
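Skipping a hung test amounts to killing its MPI ranks, after which the testsuite driver records the failure and moves on; a minimal sketch (the process name vasp_std is an assumption about the binary under test):
Code:
# kill all ranks of the hung test so the suite continues with the next one
pkill -f vasp_std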
Finally, I am attaching my makefile.include:
Code:
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxIFC\" \
             -DMPI -DMPI_BLOCK=32000 \
             -Duse_collective \
             -DscaLAPACK \
             -DCACHE_SIZE=16000 \
             -Davoidalloc \
             -Duse_bse_te \
             -Dtbdyn \
             -Duse_shmem
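# NOTE: -Duse_shmem enables the shared-memory allocation of the response
# functions ("shmem allocating ... responsefunctions" in the log above), and
# -Duse_collective switches VASP to collective MPI calls; as far as I can
# tell, both code paths are exercised by the ACFDT/GW tests that hang.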
CPP = fpp -f_com=no -free -w0 $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
FC = mpiifort
FCL = mpiifort -mkl=cluster -lstdc++
FREE = -free -names lowercase
FFLAGS = -assume byterecl -w -heap-arrays 64
OFLAG = -O2 -xCORE-AVX2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
MKL_PATH = $(MKLROOT)/lib/intel64
BLAS =
LAPACK =
BLACS =
SCALAPACK =
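# (left empty: -mkl=cluster on the link line above already provides MKL
#  BLAS/LAPACK as well as BLACS and ScaLAPACK)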
OBJECTS = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o
INCS =-I$(MKLROOT)/include/fftw
LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS)
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o
# For the parser library
CXX_PARS = icpc
LIBS += parser
LLIBS += -Lparser -lparser -lstdc++
# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin
#================================================
# GPU Stuff
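# (kept for completeness only: the old CUDA-C GPU port was removed in
#  VASP 6.3, so nothing below is used in this CPU build)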
CPP_GPU = -DCUDA_GPU -DRPROMU_CPROJ_OVERLAP -DUSE_PINNED_MEMORY -DCUFFT_MIN=28 -UscaLAPACK
OBJECTS_GPU = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d_gpu.o fftmpiw_gpu.o
CC = icc
CXX = icpc
CFLAGS = -fPIC -DADD_ -Wall -qopenmp -DMAGMA_WITH_MKL -DMAGMA_SETAFFINITY -DGPUSHMEM=300 -DHAVE_CUBLAS
CUDA_ROOT ?= /usr/local/cuda/
NVCC := $(CUDA_ROOT)/bin/nvcc -ccbin=icc
CUDA_LIB := -L$(CUDA_ROOT)/lib64 -lnvToolsExt -lcudart -lcuda -lcufft -lcublas
GENCODE_ARCH := -gencode=arch=compute_30,code=\"sm_30,compute_30\" \
-gencode=arch=compute_35,code=\"sm_35,compute_35\" \
-gencode=arch=compute_60,code=\"sm_60,compute_60\"
# oneAPI Intel MPI keeps its headers in include/, not the old include64/
MPI_INC = $(I_MPI_ROOT)/include/
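Since the hang appears right after the shared-memory allocation of the response functions, one variant I plan to rebuild for testing is the same makefile.include with the shared-memory and collective code paths disabled; a minimal sketch of just the changed block (everything else as above, and whether these flags are the culprit is only a guess):
Code:
# test variant: drop -Duse_shmem and -Duse_collective to fall back to
# per-rank allocations and point-to-point MPI
CPP_OPTIONS= -DHOST=\"LinuxIFC\" \
             -DMPI -DMPI_BLOCK=32000 \
             -DscaLAPACK \
             -DCACHE_SIZE=16000 \
             -Davoidalloc \
             -Duse_bse_te \
             -Dtbdyn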