Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.
#1 Post by pmignon » Wed Apr 26, 2017 10:57 am
Dear all,
I compiled version 5.4.1 successfully. However, compiling version 5.4.4 quickly fails with an error in the mpi.F routine:
Code:
mkdir build/std ; \
cp src/makefile src/.objects makefile.include build/std ; \
make -C build/std VERSION=std all
make[1]: Entering directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std'
rsync -ru ../../src/lib .
cp makefile.include lib
make -C lib -j1
make[2]: Entering directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std/lib'
make libdmy.a
make[3]: Entering directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std/lib'
gcc -E -P -C preclib.F >preclib.f90
mpif90 -O1 -ffree-form -ffree-line-length-none -c -o preclib.o preclib.f90
gcc -O -c -o timing_.o timing_.c
gcc -O -c -o derrf_.o derrf_.c
gcc -O -c -o dclock_.o dclock_.c
gcc -E -P -C diolib.F >diolib.f90
mpif90 -O1 -ffree-form -ffree-line-length-none -c -o diolib.o diolib.f90
gcc -E -P -C dlexlib.F >dlexlib.f90
mpif90 -O1 -ffree-form -ffree-line-length-none -c -o dlexlib.o dlexlib.f90
gcc -E -P -C drdatab.F >drdatab.f90
mpif90 -O1 -ffree-form -ffree-line-length-none -c -o drdatab.o drdatab.f90
mpif90 -O1 -c linpack_double.f
gcc -O -c -o getshmem.o getshmem.c
rm -f libdmy.a
ar vq libdmy.a preclib.o timing_.o derrf_.o dclock_.o diolib.o dlexlib.o drdatab.o linpack_double.o getshmem.o
ar: creating libdmy.a
a - preclib.o
a - timing_.o
a - derrf_.o
a - dclock_.o
a - diolib.o
a - dlexlib.o
a - drdatab.o
a - linpack_double.o
a - getshmem.o
make[3]: Leaving directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std/lib'
make[2]: Leaving directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std/lib'
rsync -u ../../src/*.F ../../src/*.inc .
rm -f vasp ; make vasp ; cp vasp ../../bin/vasp_std
make[2]: Entering directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std'
gcc -E -P -C c2f_interface.F >c2f_interface.f90 -DMPI -DHOST=\"IFC91_ompi\" -DIFC -DCACHE_SIZE=4000 -Davoidalloc -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective -DnoAugXCmeta -Duse_bse_te -Duse_shmem -Dtbdyn -DNGZhalf
mpif90 -ffree-form -ffree-line-length-none -O2 -I/usr/include -c c2f_interface.f90
gcc -E -P -C base.F >base.f90 -DMPI -DHOST=\"IFC91_ompi\" -DIFC -DCACHE_SIZE=4000 -Davoidalloc -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective -DnoAugXCmeta -Duse_bse_te -Duse_shmem -Dtbdyn -DNGZhalf
mpif90 -ffree-form -ffree-line-length-none -O2 -I/usr/include -c base.f90
gcc -E -P -C profiling.F >profiling.f90 -DMPI -DHOST=\"IFC91_ompi\" -DIFC -DCACHE_SIZE=4000 -Davoidalloc -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective -DnoAugXCmeta -Duse_bse_te -Duse_shmem -Dtbdyn -DNGZhalf
mpif90 -ffree-form -ffree-line-length-none -O2 -I/usr/include -c profiling.f90
gcc -E -P -C openmp.F >openmp.f90 -DMPI -DHOST=\"IFC91_ompi\" -DIFC -DCACHE_SIZE=4000 -Davoidalloc -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective -DnoAugXCmeta -Duse_bse_te -Duse_shmem -Dtbdyn -DNGZhalf
mpif90 -ffree-form -ffree-line-length-none -O2 -I/usr/include -c openmp.f90
gcc -E -P -C mpi.F >mpi.f90 -DMPI -DHOST=\"IFC91_ompi\" -DIFC -DCACHE_SIZE=4000 -Davoidalloc -DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective -DnoAugXCmeta -Duse_bse_te -Duse_shmem -Dtbdyn -DNGZhalf
mpif90 -ffree-form -ffree-line-length-none -O2 -I/usr/include -c mpi.f90
mpi.f90:436.65:
CALL MPI_Comm_split_type(COMM%MPI_COMM,MPI_COMM_TYPE_SHARED,0,MPI_INFO_NULL,COMM_intra%MPI_COMM,ierror)
1
Error: Symbol 'mpi_comm_type_shared' at (1) has no IMPLICIT type
mpi.f90:556.10:
USE mpimy
1
Fatal Error: Can't open module file 'mpimy.mod' for reading at (1): No such file or directory
makefile:169: recipe for target 'mpi.o' failed
make[2]: *** [mpi.o] Error 1
make[2]: Leaving directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std'
cp: cannot stat 'vasp': No such file or directory
makefile:142: recipe for target 'all' failed
make[1]: *** [all] Error 1
make[1]: Leaving directory '/home/pmignon/bin/VASP/VASP544/vasp.5.4.4/build/std'
makefile:10: recipe for target 'std' failed
make: *** [std] Error 2
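For reference, the failing line is the MPI-3 call MPI_Comm_split_type with MPI_COMM_TYPE_SHARED. A minimal compile test like the sketch below (assuming mpif90 is the same wrapper named in makefile.include) shows whether the MPI library behind the wrapper actually provides that symbol:
Code:
# Minimal MPI-3 check: vasp.5.4.4 (mpi.F) calls MPI_Comm_split_type
# with MPI_COMM_TYPE_SHARED, which only MPI-3 capable libraries define.
cat > test_mpi3.f90 << 'EOF'
program test_mpi3
  use mpi
  implicit none
  integer :: ierr, newcomm
  call MPI_Init(ierr)
  ! On a pre-MPI-3 library this line fails with the same
  ! "Symbol 'mpi_comm_type_shared' ... has no IMPLICIT type" error.
  call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                           MPI_INFO_NULL, newcomm, ierr)
  call MPI_Finalize(ierr)
end program test_mpi3
EOF
mpif90 test_mpi3.f90 -o test_mpi3 && echo "MPI-3 shared-memory split available"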
Here is my makefile.include file:
Code:
# Precompiler options
CPP_OPTIONS= -DMPI -DHOST=\"IFC91_ompi\" -DIFC \
-DCACHE_SIZE=4000 -Davoidalloc \
-DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective \
-DnoAugXCmeta -Duse_bse_te \
-Duse_shmem -Dtbdyn
CPP = gcc -E -P -C $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)
FC = mpif90
FCL = mpif90
FREE = -ffree-form -ffree-line-length-none
FFLAGS =
OFLAG = -O2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
LIBDIR = /usr/lib/x86_64-linux-gnu
BLAS = -L$(LIBDIR) -lblas
LAPACK = -L$(LIBDIR) -llapack
BLACS = -L$(LIBDIR) -lblacs-openmpi -lblacsCinit-openmpi
SCALAPACK = -L$(LIBDIR) -lscalapack-openmpi $(BLACS)
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
/usr/lib/x86_64-linux-gnu/libfftw3.a
INCS =-I/usr/include
LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS)
OBJECTS_O1 += fft3dfurth.o fftw3d.o fftmpi.o fftmpiw.o chi.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = gcc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o
# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin
Could anyone help?
Thank you in advance,
Pierre
#2 Post by pmignon » Thu Apr 27, 2017 6:38 pm
Solved: install and compile with a newer version of Open MPI: openmpi-2.1.0 ...
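In case it helps others, a quick way to confirm that the new installation is the one actually being picked up (the paths depend on where the new Open MPI was installed) is:
Code:
# Check which Open MPI the wrappers and the launcher resolve to
which mpif90 mpirun
mpirun --version                 # should report Open MPI 2.1.0 (or newer)
ompi_info | grep -i "mpi api"    # recent Open MPI prints the supported MPI standard level (3.x) here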
#3 Post by AndreiKS » Wed Jun 21, 2017 9:52 am
Dear all,
Using your makefile with slight changes, I was able to compile the program.
However, when the program is started, the following error occurs:
Code:
v:~/tmp/Lp2$ /usr/local/openmpi/bin/mpiexec -np 1 ../vasp.5.4.1/bin/vasp_std
--------------------------------------------------------------------------
A requested component was not found, or was unable to be opened. This
means that this component is either not installed or is unable to be
used on your system (e.g., sometimes this means that shared libraries
that the component requires are unable to be found/loaded). Note that
Open MPI stopped checking at the first component that it did not find.
Host: v
Framework: ess
Component: pmi
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
orte_ess_base_open failed
--> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
[v:04812] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file runtime/orte_init.c at line 116
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: orte_init failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
[v:4812] *** An error occurred in MPI_Init
[v:4812] *** on a NULL communicator
[v:4812] *** Unknown error
[v:4812] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
An MPI process is aborting at a time when it cannot guarantee that all
of its peer processes in the job will be killed properly. You should
double check that everything has shut down cleanly.
Reason: Before MPI_INIT completed
Local host: v
PID: 4812
--------------------------------------------------------------------------
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[12221,1],0]
Exit code: 1
--------------------------------------------------------------------------
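In case it is relevant: as far as I understand, the ess/pmi message usually means that the mpiexec used to launch the job and the MPI library the binary was linked against come from different installations (or that Open MPI was built for a PMI/queueing environment that is not available on this machine). A quick consistency check, using the same paths as in the run above, would be:
Code:
# Check that the launcher and the MPI library linked into vasp_std
# come from the same Open MPI installation
which mpiexec
/usr/local/openmpi/bin/mpiexec --version
ldd ../vasp.5.4.1/bin/vasp_std | grep -i libmpi   # should resolve under /usr/local/openmpi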
Here is my makefile.include file:
Code:
# Precompiler options
CPP_OPTIONS= -DHOST=\"LinuxGNU\" \
-DMPI -DMPI_BLOCK=8000 \
-Duse_collective \
-DscaLAPACK \
-DCACHE_SIZE=4000 \
-Davoidalloc \
-Duse_bse_te \
-Dtbdyn \
-Duse_shmem
CPP = gcc -E -P -C $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)
FC = /usr/local/openmpi/bin/mpif90
FCL = /usr/local/openmpi/bin/mpif90
FREE = -ffree-form -ffree-line-length-none
FFLAGS =
OFLAG = -O2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
LIBDIR = /usr/lib
BLAS = -L$(LIBDIR) -lblas
LAPACK = -L$(LIBDIR) -llapack
BLACS = -L$(LIBDIR) -lblacs-openmpi -lblacsCinit-openmpi
SCALAPACK = -L$(LIBDIR) -lscalapack-openmpi $(BLACS)
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
/usr/lib/x86_64-linux-gnu/libfftw3.a
INCS =-I/usr/include
LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS)
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = gcc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o
# For the parser library
CXX_PARS = g++
LIBS += parser
LLIBS += -Lparser -lparser -lstdc++
# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin
Could anyone help?
Thank you in advance,
Andrei
#4 Post by mwistey » Fri Aug 18, 2017 9:14 am
This is an old post, but if it helps anyone else diagnose the problem, I got the same errors using intel/14.0.2 and intelmpi/14.0.2. Deleting "-Duse_shmem" from the makefile.include file seems to have allowed the compile to go farther, although that's probably going to cause other problems on large shared memory nodes. Either way, now it's failing to link:
mpi.o: In function `m_ibcast_z_from_':
mpi.f90:(.text+0x365a): undefined reference to `mpi_ibcast_'
Maybe 5.4.4 requires a newer MPI library?
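If it helps, MPI_Ibcast is an MPI-3 non-blocking broadcast, so the missing symbol would be consistent with that guess: the MPI library may simply predate MPI-3. A quick check (the library path below is only an example; point nm at whatever MPI Fortran library your wrapper actually links, which mpif90 -show lists for Intel MPI) is:
Code:
# The library path is an example only; adjust it to your MPI installation.
mpif90 -show                                      # shows the libraries the wrapper links
nm -D /path/to/libmpifort.so | grep -i ibcast     # no match suggests a pre-MPI-3 library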
#5 Post by zhouych » Mon Jan 29, 2018 2:38 pm
I also ran into this problem: vasp.5.4.1 compiles successfully, but vasp.5.4.4 fails:
CALL MPI_Comm_split_type(COMM%MPI_COMM,MPI_COMM_TYPE_SHARED,0,MPI_INFO_NULL,COMM_intra%MPI_COMM,ierror)
1
Error: Symbol 'mpi_comm_type_shared' at (1) has no IMPLICIT type
mpi_gpu.f90:689.10:
USE mpimy
1
Fatal Error: Can't open module file 'mpimy.mod' for reading at (1): No such file or directory
I use the Intel compiler 2013 and Intel MPI 4.01.
Does anyone know how to solve it?
Thanks in advance!
#6 Post by wpiskorz » Thu Jun 14, 2018 12:21 pm
pmignon wrote: Solved: install and compile with a newer version of Open MPI: openmpi-2.1.0 ...
Hello everybody,
Yes, it works.