Dear all, I am compiling VASP 4.6.38 on a Xeon E5-2440 with Intel Composer XE 2013 on CentOS 6.4. I am checking the build with test suite v1 from Prof. Peter Larsson at NSC, which can be downloaded from http://www.nsc.liu.se/~pla/vasptest/ . I have not finished all the tests yet, but every test involving Si fails, while Cu fcc, Fe bcc and TiO2 rutile all pass. Part of the test suite output is shown below. Is there a problem with my compilation of VASP using the makefile given below, or is there some other reason why the Si results fail?
----------------------------------| Test suite: quick |------------------------------------
Analyzing Fe bcc spin-polarized...
* Total energy (eV)..................................................................[ OK ]
* Fermi energy (eV)..................................................................[ OK ]
* Band energy (eV)...................................................................[ OK ]
* Cell Pressure (kPa)................................................................[ OK ]
* Stress Tensor xx component (kPa)...................................................[ OK ]
* Stress Tensor yy component (kPa)...................................................[ OK ]
* Stress Tensor zz component (kPa)...................................................[ OK ]
* Stress Tensor xy component (kPa)...................................................[ OK ]
* Magnetic moment (uB)...............................................................[ OK ]
* Number of SCF iterations...........................................................[ OK ]
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
Analyzing Cu fcc...
* Total energy (eV)..................................................................[ OK ]
* Fermi energy (eV)..................................................................[ OK ]
* Band energy (eV)...................................................................[ OK ]
* Cell Pressure (kPa)................................................................[ OK ]
* Stress Tensor xx component (kPa)...................................................[ OK ]
* Stress Tensor yy component (kPa)...................................................[ OK ]
* Stress Tensor zz component (kPa)...................................................[ OK ]
* Stress Tensor xy component (kPa)...................................................[ OK ]
* Number of SCF iterations...........................................................[ OK ]
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
Analyzing Si cubic diamond...
* Total energy (eV)..............................................................[ FAILED ]
Actual result: -43.392542
Expected result: -43.348092
Delta: -4.445e-02
* Fermi energy (eV)..............................................................[ FAILED ]
Actual result: 5.799864
Expected result: 5.802649
Delta: -2.785e-03
* Band energy (eV)...............................................................[ FAILED ]
Actual result: 32.002751
Expected result: 32.106471
Delta: -1.037e-01
* Cell Pressure (kPa)............................................................[ FAILED ]
Actual result: 15.66
Expected result: 18.34
Delta: -2.680e+00
* Stress Tensor xx component (kPa)...............................................[ FAILED ]
Actual result: 15.66
Expected result: 18.34245
Delta: -2.682e+00
* Stress Tensor yy component (kPa)...............................................[ FAILED ]
Actual result: 15.66
Expected result: 18.34245
Delta: -2.682e+00
* Stress Tensor zz component (kPa)...............................................[ FAILED ]
Actual result: 15.66
Expected result: 18.34245
Delta: -2.682e+00
* Stress Tensor xy component (kPa)...................................................[ OK ]
* Number of SCF iterations.......................................................[ FAILED ]
Actual result: 13
Expected result: 15
Delta: -2.000e+00
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
Analyzing TiO2 rutile...
* Total energy (eV)..................................................................[ OK ]
* Cell Pressure (kPa)................................................................[ OK ]
* Stress Tensor xx component (kPa)...................................................[ OK ]
* Stress Tensor yy component (kPa)...................................................[ OK ]
* Stress Tensor zz component (kPa)...................................................[ OK ]
* Stress Tensor xy component (kPa)...................................................[ OK ]
* Number of SCF iterations...........................................................[ OK ]
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
Summary
Passed: 3
Failed: 1
Errors: 0
Status: FAILED
--------------------------------| End of test suite: quick |--------------------------------
-----------------------------------| Test suite: geoopt |-----------------------------------
Analyzing Si (only coords)...
* Total energy (eV)..............................................................[ FAILED ]
Actual result: -43.392511
Expected result: -43.348076
Delta: -4.444e-02
* Cell Pressure (kPa)............................................................[ FAILED ]
Actual result: 15.66
Expected result: 18.47
Delta: -2.810e+00
* Number of SCF iterations.......................................................[ FAILED ]
Actual result: 61
Expected result: 74
Delta: -1.300e+01
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
* Number of ionic optimization steps.................................................[ OK ]
* RMS of direct coordinate difference................................................[ OK ]
* Length of a-vector.................................................................[ OK ]
* Length of b-vector.................................................................[ OK ]
* Length of c-vector.................................................................[ OK ]
Analyzing Si (only volume)...
* Total energy (eV)..............................................................[ FAILED ]
Actual result: -42.826592
Expected result: -42.790252
Delta: -3.634e-02
* Cell Pressure (kPa)............................................................[ FAILED ]
Actual result: 0.02
Expected result: -0.05
Delta: +7.000e-02
* Number of SCF iterations.......................................................[ FAILED ]
Actual result: 37
Expected result: 35
Delta: +2.000e+00
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
* Number of ionic optimization steps.................................................[ OK ]
* Length of a-vector.................................................................[ OK ]
* Length of b-vector.................................................................[ OK ]
* Length of c-vector.................................................................[ OK ]
Analyzing Si (only shape)...
* Total energy (eV)..............................................................[ FAILED ]
Actual result: -42.872278
Expected result: -42.828497
Delta: -4.378e-02
* Cell Pressure (kPa)............................................................[ FAILED ]
Actual result: 19.87
Expected result: 22.64
Delta: -2.770e+00
* Number of SCF iterations.......................................................[ FAILED ]
Actual result: 41
Expected result: 46
Delta: -5.000e+00
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
* Number of ionic optimization steps.................................................[ OK ]
* RMS of direct coordinate difference................................................[ OK ]
* Length of a-vector.................................................................[ OK ]
* Length of b-vector.................................................................[ OK ]
* Length of c-vector.................................................................[ OK ]
Analyzing Si (everything)...
* Total energy (eV)..............................................................[ FAILED ]
Actual result: -43.405432
Expected result: -43.366151
Delta: -3.928e-02
* Cell Pressure (kPa)............................................................[ FAILED ]
Actual result: -0.06
Expected result: -0.16
Delta: +1.000e-01
* Number of SCF iterations.......................................................[ FAILED ]
Actual result: 75
Expected result: 89
Delta: -1.400e+01
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
* Number of ionic optimization steps.................................................[ OK ]
* RMS of direct coordinate difference................................................[ OK ]
* Length of a-vector.................................................................[ OK ]
* Length of b-vector.................................................................[ OK ]
* Length of c-vector.................................................................[ OK ]
Analyzing Si (cg)...
* Total energy (eV)..............................................................[ FAILED ]
Actual result: -43.39251
Expected result: -43.348074
Delta: -4.444e-02
* Cell Pressure (kPa)............................................................[ FAILED ]
Actual result: 15.66
Expected result: 18.47
Delta: -2.810e+00
* Number of SCF iterations.......................................................[ FAILED ]
Actual result: 78
Expected result: 107
Delta: -2.900e+01
* POTCAR file employed...............................................................[ OK ]
* Syntax in vasprun.xml..............................................................[ OK ]
* Symmetry of cell...................................................................[ OK ]
* Number of ionic optimization steps.............................................[ FAILED ]
Actual result: 14
Expected result: 15
Delta: -1.000e+00
* RMS of direct coordinate difference................................................[ OK ]
* Length of a-vector.................................................................[ OK ]
* Length of b-vector.................................................................[ OK ]
* Length of c-vector.................................................................[ OK ]
Summary
Passed: 0
Failed: 5
Errors: 0
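For scale, the failing Si total energy is off by roughly 0.1% in relative terms; a quick awk check on the numbers reported above:

```shell
# Delta and relative error for the Si cubic diamond total energy,
# using the actual/expected values from the "quick" suite output above.
actual=-43.392542
expected=-43.348092
awk -v a="$actual" -v e="$expected" \
    'BEGIN { d = a - e; r = (d < 0 ? -d : d) / (e < 0 ? -e : e);
             printf "delta=%.3e rel=%.1e\n", d, r }'
```

The delta matches the -4.445e-02 the suite prints; a systematic deviation of this size on every Si test usually points at the numerical libraries or compiler flags rather than at the test inputs, though I cannot be sure.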
VASP Installation Enquiries with Test Suite Problem
Last edited by emoh79 on Sat Apr 06, 2013 7:24 pm, edited 1 time in total.
The makefile I use is as follows:
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for P4 systems
#
# The makefile was tested only under Linux on Intel platforms
# (Suse 5.3- Suse 9.0)
# the following compiler versions have been tested
# 5.0, 6.0, 7.0 and 7.1 (some 8.0 versions seem to fail compiling the code)
# presently we recommend version 7.1 or 7.0, since these
# releases have been used to compile the present code versions
#
# it might be required to change some of the library paths, since
# Linux installations vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
# retrieve the LAPACK package from ftp.netlib.org
# and compile the blas routines (BLAS/SRC directory)
# please use g77 or f77 for the compilation. When I tried to
# use pgf77 or pgf90 for BLAS, VASP hung when calling
# ZHEEV (however, this was with LAPACK 1.1; now I use LAPACK 2.0)
# 2) most desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 3a) Intel's own optimised BLAS (PIII, P4, Itanium)
# http://developer.intel.com/software/products/mkl/
# this is really excellent when you use Intel CPUs
#
# 3b) or obtain the atlas based BLAS routines
# http://math-atlas.sourceforge.net/
# you certainly need atlas on the Athlon, since the mkl
# routines are not optimal on the Athlon.
# If you want to use atlas based BLAS, check the lines around LIB=
#
# 3c) mindblowing fast SSE2 (4 GFlops on P4, 2.53 GHz)
# Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
#
#-----------------------------------------------------------------------
# all CPP processed fortran files have the extension .f90
SUFFIX=.f90
#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=ifort
# fortran linker
FCL=$(FC) -mkl
#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)
#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
-Dkind8 -DCACHE_SIZE=24000 -DPGF90 -Davoidalloc -DNGXhalf \
-DRPROMU_DGEMV -DRACCMU_DGEMV
#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
#-----------------------------------------------------------------------
FFLAGS = -I/opt/intel/composer_xe_2013.2.146/mkl/include/fftw -free -names lowercase -assume byterecl -i_dynamic -fpe0 -fp-model strict
FFLAGS_F77= -i_dynamic
#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK SSE1 optimization, but also generate code executable on all mach.
# xK improves performance somewhat on XP, and the 'a' prefix is required
# in order to run the code on older Athlons as well
# -xW SSE2 optimization
# -axW SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------
OFLAG=-O0
#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on P4, VASP works fastest with the libgoto library
# so that's what I recommend
#-----------------------------------------------------------------------
MKLINCLUDE=/opt/intel/composer_xe_2013.2.146/mkl/include
MKLPATH=/opt/intel/composer_xe_2013.2.146/mkl/lib/intel64
BLAS=-L$(MKLPATH) $(MKLPATH)/libmkl_blas95_ilp64.a -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm
LAPACK=-L$(MKLPATH) $(MKLPATH)/libmkl_lapack95_ilp64.a -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm
#-----------------------------------------------------------------------
LIB = -L../vasp.4.lib -ldmy \
../vasp.4.lib/linpack_double.o $(LAPACK) \
$(BLAS)
# options for linking (for compiler version 6.X, 7.1) nothing is required
LINK =
# compiler version 7.0 generates some vector statements which are located
# in the svml library; add the LIBPATH and the library (just in case)
#LINK = -L/opt/intel/compiler70/ia32/lib/ -lsvml
#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.6 can use fftw.3.0.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend using it
#-----------------------------------------------------------------------
FFT3D = fftw3d.o fft3dlib.o /opt/intel/composer_xe_2013.2.146/mkl/interfaces/fftw3xf/libfftw3xf_intel.a
#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90/ifc
# compilers however append only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 " \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
# --with-f77flags=-O --without-romio
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------
#FC=mpif77
#FCL=$(FC)
#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
#CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
# -Dkind8 -DNGZhalf -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
# -DMPI_BLOCK=500 \
## -DRPROMU_DGEMV -DRACCMU_DGEMV
#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------
#SCA= $(SCA_)/libscalapack.a \
#$(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a
#SCA=
#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
#LIB = -L../vasp.4.lib -ldmy \
# ../vasp.4.lib/linpack_double.o $(LAPACK) \
# $(SCA) $(BLAS)
# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
#FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o
# fftw.3.0.1 is slightly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /opt/libs/fftw-3.0.1/lib/libfftw3.a
#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o
SOURCE= base.o mpi.o smart_allocate.o xml.o \
constant.o jacobi.o main_mpi.o scala.o \
asa.o lattice.o poscar.o ini.o setex.o radial.o \
pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
nonl.o nonlr.o dfast.o choleski2.o \
mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
tet.o hamil.o steep.o \
chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
ebs.o wavpre.o wavpre_noio.o broyden.o \
dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
brent.o stufak.o fileio.o opergrid.o stepver.o \
dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
edtest.o electron.o shm.o pardens.o paircorrection.o \
optics.o constr_cell_relax.o stm.o finite_diff.o \
elpol.o setlocalpp.o aedens.o
INC=
vasp: $(SOURCE) $(FFT3D) $(INC) main.o
rm -f vasp
$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)
clean:
-rm -f *.g *.f *.o *.L *.mod ; touch *.F
main.o: main$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)
makeparam.o: makeparam$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)
makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F
$(OBJ_HIGH):
$(CPP)
$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
$(CPP)
$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)
fft3dlib_f77.o: fft3dlib_f77.F
$(CPP)
$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)
.F.o:
$(CPP)
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
$(CPP)
$(SUFFIX).o:
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
# special rules
#-----------------------------------------------------------------------
# these special rules are cumulative (that is, once a file failed
# in one compiler version, it stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used
#-----------------------------------------------------------------------
#-------------------------------------------------------------------------
#fft3dlib.o : fft3dlib.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -tpp7 -xW -unroll0 -w95 -vec_report3 -c $*$(SUFFIX)
#fft3dfurth.o : fft3dfurth.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#radial.o : radial.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#symlib.o : symlib.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#symmetry.o : symmetry.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#dynbr.o : dynbr.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#broyden.o : broyden.F
# $(CPP)
# $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
#us.o : us.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#wave.o : wave.F
# $(CPP)
# $(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
#LDApU.o : LDApU.F
# $(CPP)
# $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
#---------------------------------------------------------------------
#fft3dlib.o : fft3dlib.F
# $(CPP)
# $(FC) -FR -O1 -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dlib.o : fft3dlib.F
$(CPP)
$(FC) -FR -O1 -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
radial.o : radial.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
symlib.o : symlib.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
symmetry.o : symmetry.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
dynbr.o : dynbr.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
broyden.o : broyden.F
$(CPP)
$(FC) -FR -O2 -c $*$(SUFFIX)
us.o : us.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
wave.o : wave.F
$(CPP)
$(FC) -FR -O0 -c $*$(SUFFIX)
LDApU.o : LDApU.F
$(CPP)
$(FC) -FR -O2 -c $*$(SUFFIX)
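One thing I am unsure about in the link lines above (just my guess, not a confirmed diagnosis): BLAS and LAPACK point at MKL's ILP64 (64-bit integer) interface, but FFLAGS does not pass -i8, and VASP 4.6 is compiled with default 32-bit integers. An integer-width mismatch between the code and the BLAS interface can produce subtly wrong numbers rather than a crash. If that is the problem, the LP64 variant should presumably be used instead, something like:

```make
# Hypothetical LP64 alternative (32-bit integers, matching the default
# INTEGER kind when -i8 is not passed); MKLPATH as defined above.
# MKL's core libraries already provide LAPACK, so no separate archive
# should be needed here.
BLAS= -L$(MKLPATH) -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
LAPACK=
```

The FFT3D line also assumes that the libfftw3xf_intel.a wrapper library has been built beforehand (usually with `make libintel64 compiler=intel` inside the mkl/interfaces/fftw3xf directory).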
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for P4 systems
#
# The makefile was tested only under Linux on Intel platforms
# (Suse 5.3- Suse 9.0)
# the followin compiler versions have been tested
# 5.0, 6.0, 7.0 and 7.1 (some 8.0 versions seem to fail compiling the code)
# presently we recommend version 7.1 or 7.0, since these
# releases have been used to compile the present code versions
#
# it might be required to change some of library pathes, since
# LINUX installation vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
# retrieve the lapackage from ftp.netlib.org
# and compile the blas routines (BLAS/SRC directory)
# please use g77 or f77 for the compilation. When I tried to
# use pgf77 or pgf90 for BLAS, VASP hang up when calling
# ZHEEV (however this was with lapack 1.1 now I use lapack 2.0)
# 2) most desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 3a) Intels own optimised BLAS (PIII, P4, Itanium)
# http://developer.intel.com/software/products/mkl/
# this is really excellent when you use Intel CPU's
#
# 3b) or obtain the atlas based BLAS routines
# http://math-atlas.sourceforge.net/
# you certainly need atlas on the Athlon, since the mkl
# routines are not optimal on the Athlon.
# If you want to use atlas based BLAS, check the lines around LIB=
#
# 3c) mindblowing fast SSE2 (4 GFlops on P4, 2.53 GHz)
# Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
#
#-----------------------------------------------------------------------
# all CPP processed fortran files have the extension .f90
SUFFIX=.f90
#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=ifort
# fortran linker
FCL=$(FC) -mkl
#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)
#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
-Dkind8 -DCACHE_SIZE=24000 -DPGF90 -Davoidalloc -DNGXhalf \
-DRPROMU_DGEMV -DRACCMU_DGEMV
#-----------------------------------------------------------------------
# general fortran flags (there must a trailing blank on this line)
#-----------------------------------------------------------------------
FFLAGS = -I/opt/intel/composer_xe_2013.2.146/mkl/include/fftw -free -names lowercase -assume byterecl -i_dynamic -fpe0 -fp-model strict
FFLAGS_F77= -i_dynamic
#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK SSE1 optimization, but also generate code executable on all mach.
# xK improves performance somewhat on XP, and a is required in order
# to run the code on older Athlons as well
# -xW SSE2 optimization
# -axW SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------
OFLAG=-O0
#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on P4, VASP works fastest with the libgoto library
# so that's what I recommend
#-----------------------------------------------------------------------
MKLINCLUDE=/opt/intel/composer_xe_2013.2.146/mkl/include
MKLPATH=/opt/intel/composer_xe_2013.2.146/mkl/lib/intel64
BLAS=-L$(MKLPATH) $(MKLPATH)/libmkl_blas95_ilp64.a -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm
LAPACK=-L$(MKLPATH) $(MKLPATH)/libmkl_lapack95_ilp64.a -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm
#-----------------------------------------------------------------------
LIB = -L../vasp.4.lib -ldmy \
../vasp.4.lib/linpack_double.o $(LAPACK) \
$(BLAS)
# options for linking (for compiler version 6.X, 7.1) nothing is required
LINK =
# compiler version 7.0 generates some vector statments which are located
# in the svml library, add the LIBPATH and the library (just in case)
#LINK = -L/opt/intel/compiler70/ia32/lib/ -lsvml
#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.6 can use fftw.3.0.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------
FFT3D = fftw3d.o fft3dlib.o /opt/intel/composer_xe_2013.2.146/mkl/interfaces/fftw3xf/libfftw3xf_intel.a
#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that already contain an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90/ifc
# compilers, however, append only one underscore.
# Precompiled mpi versions will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 " \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
# --with-f77flags=-O --without-romio
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------
#FC=mpif77
#FCL=$(FC)
#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
#CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
# -Dkind8 -DNGZhalf -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
# -DMPI_BLOCK=500 \
## -DRPROMU_DGEMV -DRACCMU_DGEMV
#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK, simply uncomment the line 'SCA=' below
#-----------------------------------------------------------------------
#SCA= $(SCA_)/libscalapack.a \
#$(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a
#SCA=
#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
#LIB = -L../vasp.4.lib -ldmy \
# ../vasp.4.lib/linpack_double.o $(LAPACK) \
# $(SCA) $(BLAS)
# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
#FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o
# fftw.3.0.1 is slightly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /opt/libs/fftw-3.0.1/lib/libfftw3.a
#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o
SOURCE= base.o mpi.o smart_allocate.o xml.o \
constant.o jacobi.o main_mpi.o scala.o \
asa.o lattice.o poscar.o ini.o setex.o radial.o \
pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
nonl.o nonlr.o dfast.o choleski2.o \
mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
tet.o hamil.o steep.o \
chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
ebs.o wavpre.o wavpre_noio.o broyden.o \
dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
brent.o stufak.o fileio.o opergrid.o stepver.o \
dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
edtest.o electron.o shm.o pardens.o paircorrection.o \
optics.o constr_cell_relax.o stm.o finite_diff.o \
elpol.o setlocalpp.o aedens.o
INC=
vasp: $(SOURCE) $(FFT3D) $(INC) main.o
rm -f vasp
$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)
clean:
-rm -f *.g *.f *.o *.L *.mod ; touch *.F
main.o: main$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)
makeparam.o: makeparam$(SUFFIX)
$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)
makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# files and MODULES: here are only the minimal basic dependencies.
# If one structure is changed, then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F
$(OBJ_HIGH):
$(CPP)
$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
$(CPP)
$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)
fft3dlib_f77.o: fft3dlib_f77.F
$(CPP)
$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)
.F.o:
$(CPP)
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
$(CPP)
$(SUFFIX).o:
$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
# special rules
#-----------------------------------------------------------------------
# these special rules are cumulative (that is, once a file failed
# with one compiler version, it stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used
#-----------------------------------------------------------------------
#-------------------------------------------------------------------------
#fft3dlib.o : fft3dlib.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -tpp7 -xW -unroll0 -w95 -vec_report3 -c $*$(SUFFIX)
#fft3dfurth.o : fft3dfurth.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#radial.o : radial.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#symlib.o : symlib.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#symmetry.o : symmetry.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#dynbr.o : dynbr.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#broyden.o : broyden.F
# $(CPP)
# $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
#us.o : us.F
# $(CPP)
# $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
#wave.o : wave.F
# $(CPP)
# $(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
#LDApU.o : LDApU.F
# $(CPP)
# $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
#---------------------------------------------------------------------
#fft3dlib.o : fft3dlib.F
# $(CPP)
# $(FC) -FR -O1 -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dlib.o : fft3dlib.F
$(CPP)
$(FC) -FR -O1 -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
radial.o : radial.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
symlib.o : symlib.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
symmetry.o : symmetry.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
dynbr.o : dynbr.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
broyden.o : broyden.F
$(CPP)
$(FC) -FR -O2 -c $*$(SUFFIX)
us.o : us.F
$(CPP)
$(FC) -FR -O1 -c $*$(SUFFIX)
wave.o : wave.F
$(CPP)
$(FC) -FR -O0 -c $*$(SUFFIX)
LDApU.o : LDApU.F
$(CPP)
$(FC) -FR -O2 -c $*$(SUFFIX)
The makefile for the vasp.4.lib library (libdmy.a) is as follows:
.SUFFIXES: .inc .f .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler
# the makefile was tested only under Linux on Intel platforms;
# however, it might work on other platforms as well
#
# this release of vasp.4.lib contains lapack v2.0
# this can be compiled with pgf90 compiler if the option -O1 is used
#
# Mind: one user reported that he had to copy preclib.F diolib.F
# dlexlib.F and drdatab.F to the directory vasp.4.4, compile the files
# there and link them directly into vasp
# for no obvious reason these files could not be linked from the library
#
#-----------------------------------------------------------------------
# C-preprocessor
#CPP = gcc -E -P -C $*.F >$*.f
#CC = icc -E -P -C $*.F >$*.f
CPP = gcc -E -P -C -DLONGCHAR $*.F >$*.f
FC=ifort
CFLAGS = -O
FFLAGS = -O0 -FI
FREE = -FR
DOBJ = preclib.o timing_.o derrf_.o dclock_.o diolib.o dlexlib.o drdatab.o
#-----------------------------------------------------------------------
# general rules
#-----------------------------------------------------------------------
#libdmy.a: $(DOBJ) linpack_double.o lapack_atlas.o
# -rm libdmy.a
# ar vq libdmy.a $(DOBJ)
# files which do not require autodouble
#lapack_min.o: lapack_min.f
# $(FC) $(FFLAGS) $(NOFREE) -c lapack_min.f
#lapack_double.o: lapack_double.f
#$(FC) $(FFLAGS) $(NOFREE) -c lapack_double.f
#lapack_single.o: lapack_single.f
# $(FC) $(FFLAGS) $(NOFREE) -c lapack_single.f
#lapack_atlas.o: lapack_atlas.f
# $(FC) $(FFLAGS) $(NOFREE) -c lapack_atlas.f
#linpack_double.o: linpack_double.f
# $(FC) $(FFLAGS) $(NOFREE) -c linpack_double.f
#linpack_single.o: linpack_single.f
# $(FC) $(FFLAGS) $(NOFREE) -c linpack_single.f
#.c.o:
# $(CC) $(CFLAGS) -c $*.c
#.F.o:
# $(CPP)
# $(FC) $(FFLAGS) $(FREE) $(INCS) -c $*.f
#.F.f:
# $(CPP)
#.f.o:
# $(FC) $(FFLAGS) $(FREE) $(INCS) -c $*.f
#-----------------------------------------------------------------------------------
# coding using http://www.nsc.liu.se/~pla/blog/2013/01 ... -lindgren/
#-----------------------------------------------------------------------------------
libdmy.a: $(DOBJ) linpack_double.o
-rm libdmy.a
ar vq libdmy.a $(DOBJ)
linpack_double.o: linpack_double.f
$(FC) $(FFLAGS) $(NOFREE) -c linpack_double.f
.c.o:
$(CC) $(CFLAGS) -c $*.c
.F.o:
$(CPP)
$(FC) $(FFLAGS) $(FREE) $(INCS) -c $*.f
.F.f:
$(CPP)
.f.o:
$(FC) $(FFLAGS) $(FREE) $(INCS) -c $*.f