Building VASP 6.4.1 OpenACC

To whom it may concern,
I cannot build VASP 6.4.1 (OpenACC) with the NVIDIA HPC SDK 22.3 or 22.5 on our NVIDIA cluster. See https://www.lcrc.anl.gov/systems/resources/swing/#arch for a description of our cluster.
I have built VASP 6.4.0 (OpenACC) with the NVIDIA HPC SDK 22.3 on this cluster.
My builds fail while compiling metagga.f90 with the following message:
____________start of error message_______________
nvvmCompileProgram error 9: NVVM_ERROR_COMPILATION.
Error: /tmp/pgacclOWvlHHriJKeX.gpu (26826, 25): parse '@__pgi_atomicAddd_llvm' defined with type 'double (i8 addrspace(1)*, double)*'
NVFORTRAN-F-0155-Compiler failed to translate accelerator region (see -Minfo messages): Device compiler exited with error status code (metagga.f90: 1)
NVFORTRAN/x86-64 Linux 22.3-0: compilation aborted
_____________end of error message_______________
I have attached a zip file that contains my makefile.include, the standard output from the "make DEPS=1 -j32 all" command, and the output from the lscpu command.
Is there a workaround to this issue?
John Low
Re: Building VASP 6.4.1 OpenACC
Dear John,
This seems to be related to a compiler problem. For VASP 6.4.1, we recommend using at least nvhpc-22.11.
However, I found the following workaround that allowed me to compile VASP 6.4.1 with nvhpc-22.3 successfully:
Code: Select all
FREE = -Mfree -Mx,231,0x1
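In other words, only the FREE line changes; after editing makefile.include it is safest to discard the objects from the failed build and rerun the same build command. A rough sketch (assuming the standard VASP 6 build layout, where objects live in the build/ directory):

Code: Select all
rm -rf build                 # drop objects left over from the failed build
make DEPS=1 -j32 all         # same build command as in the original post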
For completeness, here is the complete makefile.include:

Code: Select all
# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxNV\" \
-DMPI -DMPI_INPLACE -DMPI_BLOCK=8000 -Duse_collective \
-DscaLAPACK \
-DCACHE_SIZE=4000 \
-Davoidalloc \
-Dvasp6 \
-Duse_bse_te \
-Dtbdyn \
-Dqd_emulate \
-Dfock_dblbuf \
-D_OPENMP \
-D_OPENACC \
-DUSENCCL -DUSENCCLP2P
CPP = nvfortran -Mpreprocess -Mfree -Mextend -E $(CPP_OPTIONS) $*$(FUFFIX) > $*$(SUFFIX)
# N.B.: you might need to change the cuda-version here
# to one that comes with your NVIDIA-HPC SDK
FC = mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.0 -mp
FCL = mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.0 -mp -c++libs
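# (The CUDA versions bundled with a given SDK installation can typically be listed
#  with something like "ls /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/cuda"; the exact
#  installation path may differ on your system.)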
FREE = -Mfree -Mx,231,0x1
FFLAGS = -Mbackslash -Mlarge_arrays
OFLAG = -fast
DEBUG = -Mfree -O0 -traceback
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
LLIBS = -cudalib=cublas,cusolver,cufft,nccl -cuda
# Redefine the standard list of O1 and O2 objects
SOURCE_O1 := pade_fit.o minimax_dependence.o
SOURCE_O2 := pead.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = nvfortran
CC_LIB = nvc -w
CFLAGS_LIB = -O
FFLAGS_LIB = -O1 -Mfixed
FREE_LIB = $(FREE)
OBJECTS_LIB = linpack_double.o
# For the parser library
CXX_PARS = nvc++ --no_warnings
##
## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...
##
# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
VASP_TARGET_CPU ?= -tp host
FFLAGS += $(VASP_TARGET_CPU)
# Specify your NV HPC-SDK installation (mandatory)
#... first try to set it automatically
NVROOT =$(shell which nvfortran | awk -F /compilers/bin/nvfortran '{ print $$1 }')
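# (The awk call above just strips the trailing "/compilers/bin/nvfortran" from the
#  full path of the compiler, so nvfortran has to be on your PATH, e.g. after
#  loading the corresponding nvhpc environment module, for this to work.)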
# If the above fails, then NVROOT needs to be set manually
#NVHPC ?= /opt/nvidia/hpc_sdk
#NVVERSION = 21.11
#NVROOT = $(NVHPC)/Linux_x86_64/$(NVVERSION)
## Improves performance when using NV HPC-SDK >=21.11 and CUDA >11.2
#OFLAG_IN = -fast -Mwarperf
#SOURCE_IN := nonlr.o
# Software emulation of quadruple precision (mandatory)
QD ?= $(NVROOT)/compilers/extras/qd
LLIBS += -L$(QD)/lib -lqdmod -lqd
INCS += -I$(QD)/include/qd
# Intel MKL for FFTW, BLAS, LAPACK, and scaLAPACK
MKLROOT ?= /path/to/your/mkl/installation
LLIBS_MKL = -Mmkl -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64
INCS += -I$(MKLROOT)/include/fftw
# Use a separate scaLAPACK installation (optional but recommended in combination with OpenMPI)
# Comment out the two lines below if you want to use scaLAPACK from MKL instead
SCALAPACK_ROOT ?= /path/to/your/scalapack/installation
LLIBS_MKL = -L$(SCALAPACK_ROOT)/lib -lscalapack -Mmkl
LLIBS += $(LLIBS_MKL)
# HDF5-support (optional but strongly recommended)
#CPP_OPTIONS+= -DVASP_HDF5
#HDF5_ROOT ?= /path/to/your/hdf5/installation
#LLIBS += -L$(HDF5_ROOT)/lib -lhdf5_fortran
#INCS += -I$(HDF5_ROOT)/include
# For the VASP-2-Wannier90 interface (optional)
#CPP_OPTIONS += -DVASP2WANNIER90
#WANNIER90_ROOT ?= /path/to/your/wannier90/installation
#LLIBS += -L$(WANNIER90_ROOT)/lib -lwannier
# For the fftlib library (hardly any benefit for the OpenACC GPU port, especially in combination with MKL's FFTs)
#CPP_OPTIONS+= -Dsysv
#FCL += fftlib.o
#CXX_FFTLIB = nvc++ -mp --no_warnings -std=c++11 -DFFTLIB_USE_MKL -DFFTLIB_THREADSAFE
#INCS_FFTLIB = -I./include -I$(MKLROOT)/include/fftw
#LIBS += fftlib
#LLIBS += -ldl
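Note that in this makefile.include NVROOT is derived from the location of nvfortran, and MKLROOT and SCALAPACK_ROOT are only placeholders, so something along the following lines is still needed before building (the module name and paths are examples only, adjust them to your installation):

Code: Select all
module load nvhpc/22.3                          # example; must put nvfortran on PATH so NVROOT is found
export MKLROOT=/opt/intel/oneapi/mkl/latest     # example path to an MKL installation
export SCALAPACK_ROOT=/opt/scalapack            # example path, or use scaLAPACK from MKL instead
make DEPS=1 -j32 all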