Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.


JR
Newbie
Posts: 7
Joined: Tue May 05, 2009 9:34 am
License Nr.: 1073
Location: Sydney, Australia

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#1 Post by JR » Tue May 05, 2009 9:54 am

Good Day,
I am having problems getting Vasp 5.2.2 to work. I am compiling with PGI 8.0.5 and have experienced the same crash with both standalone pgf90 and openmpi. (I have previously compiled and run Vasp 4.6 with no errors.)

The program compiles without errors but then crashes before the first iteration.

Code:

$ ~/install/vasp/vasp.5.2/vasp 
 vasp.5.2.2 15Apr09 complex 
 POSCAR found :  1 types and   64 ions

 ----------------------------------------------------------------------------- 
|                                                                             |
|  ADVICE TO THIS USER RUNNING 'VASP/VAMP'   (HEAR YOUR MASTER'S VOICE ...):  |
|                                                                             |
|      You have a (more or less) 'large supercell' and for larger cells       |
|      it might be more efficient to use real space projection operators      |
|      So try LREAL= Auto  in the INCAR   file.                               |
|      Mind: If you want to do a very accurate calculations keep the          |
|      reciprocal projection scheme          (i.e. LREAL=.FALSE.)             |
|                                                                             |
 ----------------------------------------------------------------------------- 

 LDA part: xc-table for Pade appr. of Perdew
 POSCAR, INCAR and KPOINTS ok, starting setup
 WARNING: small aliasing (wrap around) errors must be expected
 FFT: planning ...(            1 )
 reading WAVECAR
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
*** glibc detected *** /home/ruddj/install/vasp/vasp.5.2/vasp: free(): invalid next size (fast): 0x0000000011be6b00 ***
======= Backtrace: =========
/lib64/libc.so.6[0x3e9f671684]
/lib64/libc.so.6(cfree+0x8c)[0x3e9f674ccc]
/home/ruddj/install/vasp/vasp.5.2/vasp[0x53f0b3]
======= Memory map: ========
00400000-0099f000 r-xp 00000000 00:17 1802245                            /home/ruddj/install/vasp/vasp.5.2/vasp
00b9f000-00bf8000 rwxp 0059f000 00:17 1802245                            /home/ruddj/install/vasp/vasp.5.2/vasp
00bf8000-01068000 rwxp 00bf8000 00:00 0 
11be1000-122c0000 rwxp 11be1000 00:00 0 
3e9f200000-3e9f21a000 r-xp 00000000 08:01 392729                         /lib64/ld-2.5.so
3e9f41a000-3e9f41b000 r-xp 0001a000 08:01 392729                         /lib64/ld-2.5.so
3e9f41b000-3e9f41c000 rwxp 0001b000 08:01 392729                         /lib64/ld-2.5.so
3e9f600000-3e9f74a000 r-xp 00000000 08:01 392730                         /lib64/libc-2.5.so
3e9f74a000-3e9f949000 ---p 0014a000 08:01 392730                         /lib64/libc-2.5.so
3e9f949000-3e9f94d000 r-xp 00149000 08:01 392730                         /lib64/libc-2.5.so
3e9f94d000-3e9f94e000 rwxp 0014d000 08:01 392730                         /lib64/libc-2.5.so
3e9f94e000-3e9f953000 rwxp 3e9f94e000 00:00 0 
3ea0600000-3ea0615000 r-xp 00000000 08:01 392735                         /lib64/libpthread-2.5.so
3ea0615000-3ea0814000 ---p 00015000 08:01 392735                         /lib64/libpthread-2.5.so
3ea0814000-3ea0815000 r-xp 00014000 08:01 392735                         /lib64/libpthread-2.5.so
3ea0815000-3ea0816000 rwxp 00015000 08:01 392735                         /lib64/libpthread-2.5.so
3ea0816000-3ea081a000 rwxp 3ea0816000 00:00 0 
3ea0a00000-3ea0a82000 r-xp 00000000 08:01 392500                         /lib64/libm-2.5.so
3ea0a82000-3ea0c81000 ---p 00082000 08:01 392500                         /lib64/libm-2.5.so
3ea0c81000-3ea0c82000 r-xp 00081000 08:01 392500                         /lib64/libm-2.5.so
3ea0c82000-3ea0c83000 rwxp 00082000 08:01 392500                         /lib64/libm-2.5.so
3ea1200000-3ea1207000 r-xp 00000000 08:01 392736                         /lib64/librt-2.5.so
3ea1207000-3ea1407000 ---p 00007000 08:01 392736                         /lib64/librt-2.5.so
3ea1407000-3ea1408000 r-xp 00007000 08:01 392736                         /lib64/librt-2.5.so
3ea1408000-3ea1409000 rwxp 00008000 08:01 392736                         /lib64/librt-2.5.so
3ea2200000-3ea220d000 r-xp 00000000 08:01 392740                         /lib64/libgcc_s-4.1.2-20080102.so.1
3ea220d000-3ea240d000 ---p 0000d000 08:01 392740                         /lib64/libgcc_s-4.1.2-20080102.so.1
3ea240d000-3ea240e000 rwxp 0000d000 08:01 392740                         /lib64/libgcc_s-4.1.2-20080102.so.1
2b89a40ae000-2b89a40b4000 rwxp 2b89a40ae000 00:00 0 
2b89a40b8000-2b89a98aa000 rwxp 2b89a40b8000 00:00 0 
2b89ac000000-2b89ac021000 rwxp 2b89ac000000 00:00 0 
2b89ac021000-2b89b0000000 ---p 2b89ac021000 00:00 0 
7fff069e6000-7fff069fc000 rwxp 7fff069e6000 00:00 0                      [stack]
ffffffffff600000-ffffffffffe00000 ---p 00000000 00:00 0                  [vdso]
Aborted
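
(For reference, the banner's LREAL advice amounts to a single INCAR tag; a minimal illustrative fragment, not my actual INCAR:)

Code:

 LREAL = Auto     ! real-space projection operators, suggested for large cells
!LREAL = .FALSE.  ! keep the reciprocal-space scheme (most accurate)
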
The makefile used is:

Code:

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler release 3.0-1, 3.1
# and release 1.7
# (http://www.pgroup.com/ & ftp://ftp.pgroup.com/x86/, you need
#  to order the HPF/F90 suite)
#  we have found no noticeable performance differences between 
#  any of the releases; even Athlon or PIII optimisation does
#  not seem to improve performance
#
# The makefile was tested only under Linux on Intel platforms
# (Suse X,X)
#
# it might be required to change some of the library paths, since
# Linux installations vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# Mind that some Linux distributions (Suse 6.1) have a bug in 
# libm causing small errors in the error-function (total energy
# is therefore wrong by about 1meV/atom). The recommended
# solution is to update libc.
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#   retrieve the lapackage from ftp.netlib.org
#   and compile the blas routines (BLAS/SRC directory)
#   please use g77 or f77 for the compilation. When I tried to
#   use pgf77 or pgf90 for BLAS, VASP hang up when calling
#   ZHEEV  (however this was with lapack 1.1 now I use lapack 2.0)
# 2) most desirable: get an optimized BLAS
#   for a list of optimized BLAS try
#     http://www.kachinatech.com/~hjjou/scilib/opt_blas.html
#
# the two most reliable packages around are presently:
# 3a) Intels own optimised BLAS (PIII, P4, Itanium)
#     http://developer.intel.com/software/products/mkl/
#   this is really excellent when you use Intel CPU's
#
# 3b) or obtain the atlas based BLAS routines
#     http://math-atlas.sourceforge.net/
#   you certainly need atlas on the Athlon, since the  mkl
#   routines are not optimal on the Athlon.
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f 
SUFFIX=.f

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=pgf90
# fortran linker
FCL=$(FC)


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_   =  /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C 
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_   =  /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
#  SUSE 6.X, maybe some Red Hat distributions:

CPP_ =  ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf             charge density   reduced in X direction
# wNGXhalf            gamma point only reduced in X direction
# avoidalloc          avoid ALLOCATE if possible
# IFC                 work around some IFC bugs
# CACHE_SIZE          1000 for PII,PIII, 5000 for Athlon, 8000 P4
# RPROMU_DGEMV        use DGEMV instead of DGEMM in RPRO (usually  faster)
# RACCMU_DGEMV        use DGEMV instead of DGEMM in RACC (faster on P4)
#  **** definitely use -DRACCMU_DGEMV if you use the mkl library
#-----------------------------------------------------------------------

CPP    = $(CPP_) -DHOST=\"LinuxPgi\" \
          -Dkind8 -DNGXhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc \
          -DRPROMU_DGEMV  

#-----------------------------------------------------------------------
# general fortran flags  (there must be a trailing blank on this line)
# the -Mx,119,0x200000 is required if you use older pgf90 versions
# on a more recent LINUX installation
# the option will not do any harm on other 3.X pgf90 distributions
#-----------------------------------------------------------------------

FFLAGS =  -Mfree -Mx,119,0x200000  

#-----------------------------------------------------------------------
# optimization,
# we have tested whether higher optimisation improves
# the performance, and found no improvements with -O3-5 or -fast
# (even on Athlon systems, Athlon-specific optimisation worsens performance)
#-----------------------------------------------------------------------

OFLAG  = -O2  -tp k8-64 

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH = 
OBJ_NOOPT = 
DEBUG  = -g -O0
INLINE = $(OFLAG)


#-----------------------------------------------------------------------
# the following lines specify the position of BLAS  and LAPACK
# what you choose is very system dependent
# P4: VASP works fastest with Intels mkl performance library
# Athlon: Atlas based BLAS are presently the fastest
# P3: no clue
#-----------------------------------------------------------------------

# Atlas based libraries
#ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_ATHLONXP_SSE1/
#BLAS=   -L$(ATLASHOME)  -lf77blas -latlas
BLAS=   -lacml

# use specific libraries (default library path points to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4  -lpthread

# LAPACK, simplest use vasp.5.lib/lapack_double
#LAPACK= ../vasp.5.lib/lapack_double.o

# use atlas optimized part of lapack
LAPACK= ../vasp.5.lib/lapack_atlas.o  -llapack -lblas -lacml

# use the mkl Intel lapack
#LAPACK= -lmkl_lapack


#-----------------------------------------------------------------------

LIB  = -L../vasp.5.lib -ldmy \
     ../vasp.5.lib/linpack_double.o $(LAPACK) \
     $(BLAS)

# options for linking (none required)
LINK    = 

#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.5 can use FFTW (http://www.fftw.org)
# since the FFTW is very slow for radices 2^n the fft3dlib is used
# in these cases
# if you use fftw3d you need to insert -lfftw in the LIB line as well
# please do not send us any queries related to FFTW (no support)
# if it fails, use fft3dlib
#-----------------------------------------------------------------------

FFT3D   = fft3dfurth.o fft3dlib.o
#FFT3D   = fftw3d+furth.o fft3dlib.o


#=======================================================================
# MPI section, uncomment the following lines
# 
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that already contain an
# underscore (i.e. MPI_SEND becomes mpi_send__).  The pgf90
# compiler, however, appends only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X are stable
# mpich.1.2.1 was configured with 
#  ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000"  \
# -f90="pgf90 -Mx,119,0x200000" \
# --without-romio --without-mpe -opt=-O \
# 
# lam was configured with the line
#  ./configure  -prefix /usr/local/lam-6.5.X --with-cflags=-O -with-fc=pgf90 \
# --with-f77flags=-O --without-romio
# 
# lam was generally faster and we found an average communication
# bandwidth of roughly 160 MBit/s (full duplex)
#
# please note that you might be able to use a lam or mpich version 
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above,  you can use the following lines
#-----------------------------------------------------------------------


#FC=mpif77
#FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf               charge density   reduced in Z direction
# wNGZhalf              gamma point only reduced in Z direction
# scaLAPACK             use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

#CPP    = $(CPP_) -DMPI  -DHOST=\"LinuxPgi\" \
#     -Dkind8 -DNGZhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc -DRPROMU_DGEMV 

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------

#BLACS=/usr/local/BLACS_lam
#SCA_= /usr/local/SCALAPACK_lam

#SCA= $(SCA_)/scalapack_LINUX.a $(SCA_)/pblas_LINUX.a $(SCA_)/tools_LINUX.a \
# $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

#LIB     = -L../vasp.5.lib -ldmy  \
#      ../vasp.5.lib/linpack_double.o $(LAPACK) \
#      $(SCA) $(BLAS)

# FFT: only option  fftmpi.o with fft3dlib of Juergen Furthmueller

#FFT3D   = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o 

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC=   symmetry.o symlib.o   lattlib.o  random.o   

SOURCE=  base.o     mpi.o      smart_allocate.o      xml.o  \
         constant.o jacobi.o   main_mpi.o  scala.o   \
         asa.o      lattice.o  poscar.o   ini.o       xclib.o     xclib_grad.o \
         radial.o   pseudo.o   mgrid.o    gridq.o     ebs.o  \
         mkpoints.o wave.o     wave_mpi.o  wave_high.o  \
         $(BASIC)   nonl.o     nonlr.o    nonl_high.o dfast.o    choleski2.o \
         mix.o      hamil.o    xcgrad.o   xcspin.o    potex1.o   potex2.o  \
         metagga.o constrmag.o cl_shift.o relativistic.o LDApU.o \
         paw_base.o egrad.o    pawsym.o   pawfock.o  pawlhf.o    paw.o   \
         mkpoints_full.o       charge.o   dipol.o    pot.o  \
         dos.o      elf.o      tet.o      tetweight.o hamil_rot.o \
         steep.o    chain.o    dyna.o     sphpro.o    us.o  core_rel.o \
         aedens.o   wavpre.o   wavpre_noio.o broyden.o \
         dynbr.o    rmm-diis.o reader.o   writer.o   tutor.o xml_writer.o \
         brent.o    stufak.o   fileio.o   opergrid.o stepver.o  \
         chgloc.o   fast_aug.o fock.o     mkpoints_change.o sym_grad.o \
         mymath.o   internals.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o \
         hamil_high.o nmr.o    force.o \
         pead.o     subrot.o   subrot_scf.o pwlhf.o  gw_model.o optreal.o   davidson.o \
         electron.o rot.o  electron_all.o shm.o    pardens.o  paircorrection.o \
         optics.o   constr_cell_relax.o   stm.o    finite_diff.o elpol.o    \
         hamil_lr.o rmm-diis_lr.o  subrot_cluster.o subrot_lr.o \
         lr_helper.o hamil_lrf.o   elinear_response.o ilinear_response.o \
         linear_optics.o linear_response.o   \
         setlocalpp.o  wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o \
         ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o local_field.o \
         ump2.o bse.o acfdt.o chi.o sydmat.o 

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o 
	rm -f vasp
	$(FCL) -o vasp main.o  $(SOURCE)   $(FFT3D) $(LIB) $(LINK)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam  $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB) 
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:	
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG)  $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE)  $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE)  $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG)  $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F 
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one strucuture is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
Any suggestions on changes I need to make to get it to run correctly?

JR
Newbie
Posts: 7
Joined: Tue May 05, 2009 9:34 am
License Nr.: 1073
Location: Sydney, Australia

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#2 Post by JR » Tue May 12, 2009 7:24 am

Could this thread please be moved to the Bugreports forum, as it appears to be more a code error than an installation one.

I have done some more testing with the serial version with no optimisation, and it appears the problem occurs when the results of the calculation are written out.

Using a basic 2-ion POSCAR, it appears to complete the calculation but then crashes before it can write the WAVECAR and CHGCAR files.

POSCAR:

Code:

cubic diamond
  5.753
 0.0    0.5     0.5
 0.5    0.0     0.5
 0.5    0.5     0.0
  2
Direct
0 0 0  
0.25 0.25 0.25
Output:

Code:

 vasp.5.2.2 15Apr09 complex 
 POSCAR found :  1 types and    2 ions
 LDA part: xc-table for Pade appr. of Perdew
 POSCAR, INCAR and KPOINTS ok, starting setup
 WARNING: small aliasing (wrap around) errors must be expected
 FFT: planning ...(            1 )
 reading WAVECAR
 entering main loop
       N       E                     dE             d eps       ncg     rms          rms(c)
DAV:   1     0.191821435905E+02    0.19182E+02   -0.22289E+03    16   0.443E+02
DAV:   2     0.429246603389E+01   -0.14890E+02   -0.14890E+02    32   0.631E+01
DAV:   3     0.406919318383E+01   -0.22327E+00   -0.22327E+00    16   0.104E+01
DAV:   4     0.406765487301E+01   -0.15383E-02   -0.15383E-02    16   0.891E-01
DAV:   5     0.406762975436E+01   -0.25119E-04   -0.25119E-04    16   0.113E-01    0.621E+00
DAV:   6     0.455554606725E+01    0.48792E+00   -0.20097E-01    16   0.256E+00    0.378E+00
DAV:   7     0.482983903721E+01    0.27429E+00   -0.58061E-01    16   0.484E+00    0.326E-01
DAV:   8     0.481731419063E+01   -0.12525E-01   -0.96007E-03    16   0.639E-01    0.785E-02
DAV:   9     0.481671500325E+01   -0.59919E-03   -0.82323E-04    16   0.218E-01    0.181E-02
DAV:  10     0.481662848559E+01   -0.86518E-04   -0.13157E-04    16   0.873E-02
*** glibc detected *** /home/ruddj/install/vasp/vasp.5.2/vasp: free(): invalid next size (fast): 0x00000000124748b0 ***
======= Backtrace: =========
/lib64/libc.so.6[0x2b42fcaa6ce2]
/lib64/libc.so.6(cfree+0x8c)[0x2b42fcaaa90c]
/home/ruddj/install/vasp/vasp.5.2/vasp[0x56c556]
======= Memory map: ========
00400000-00a52000 r-xp 00000000 08:05 38423816                           /home/ruddj/install/vasp/vasp.5.2/vasp
00c51000-00cab000 rwxp 00651000 08:05 38423816                           /home/ruddj/install/vasp/vasp.5.2/vasp
00cab000-01132000 rwxp 00cab000 00:00 0 
121e2000-127e6000 rwxp 121e2000 00:00 0 
2b42fc170000-2b42fc18c000 r-xp 00000000 08:01 3872739                    /lib64/ld-2.5.so
2b42fc18c000-2b42fc192000 rwxp 2b42fc18c000 00:00 0 
2b42fc1a3000-2b42fc2a9000 rwxp 2b42fc1a3000 00:00 0 
2b42fc38b000-2b42fc38c000 r-xp 0001b000 08:01 3872739                    /lib64/ld-2.5.so
2b42fc38c000-2b42fc38d000 rwxp 0001c000 08:01 3872739                    /lib64/ld-2.5.so
2b42fc38d000-2b42fc394000 r-xp 00000000 08:01 3872775                    /lib64/librt-2.5.so
2b42fc394000-2b42fc594000 ---p 00007000 08:01 3872775                    /lib64/librt-2.5.so
2b42fc594000-2b42fc595000 r-xp 00007000 08:01 3872775                    /lib64/librt-2.5.so
2b42fc595000-2b42fc596000 rwxp 00008000 08:01 3872775                    /lib64/librt-2.5.so
2b42fc596000-2b42fc5ac000 r-xp 00000000 08:01 3872771                    /lib64/libpthread-2.5.so
2b42fc5ac000-2b42fc7ab000 ---p 00016000 08:01 3872771                    /lib64/libpthread-2.5.so
2b42fc7ab000-2b42fc7ac000 r-xp 00015000 08:01 3872771                    /lib64/libpthread-2.5.so
2b42fc7ac000-2b42fc7ad000 rwxp 00016000 08:01 3872771                    /lib64/libpthread-2.5.so
2b42fc7ad000-2b42fc7b1000 rwxp 2b42fc7ad000 00:00 0 
2b42fc7b1000-2b42fc833000 r-xp 00000000 08:01 3872755                    /lib64/libm-2.5.so
2b42fc833000-2b42fca32000 ---p 00082000 08:01 3872755                    /lib64/libm-2.5.so
2b42fca32000-2b42fca33000 r-xp 00081000 08:01 3872755                    /lib64/libm-2.5.so
2b42fca33000-2b42fca34000 rwxp 00082000 08:01 3872755                    /lib64/libm-2.5.so
2b42fca34000-2b42fca35000 rwxp 2b42fca34000 00:00 0 
2b42fca35000-2b42fcb81000 r-xp 00000000 08:01 3872747                    /lib64/libc-2.5.so
2b42fcb81000-2b42fcd81000 ---p 0014c000 08:01 3872747                    /lib64/libc-2.5.so
2b42fcd81000-2b42fcd85000 r-xp 0014c000 08:01 3872747                    /lib64/libc-2.5.so
2b42fcd85000-2b42fcd86000 rwxp 00150000 08:01 3872747                    /lib64/libc-2.5.so
2b42fcd86000-2b42fcd8c000 rwxp 2b42fcd86000 00:00 0 
2b42fcd8c000-2b42fcd99000 r-xp 00000000 08:01 3873040                    /lib64/libgcc_s-4.1.2-20080825.so.1
2b42fcd99000-2b42fcf99000 ---p 0000d000 08:01 3873040                    /lib64/libgcc_s-4.1.2-20080825.so.1
2b42fcf99000-2b42fcf9a000 rwxp 0000d000 08:01 3873040                    /lib64/libgcc_s-4.1.2-20080825.so.1
2b4300000000-2b4300021000 rwxp 2b4300000000 00:00 0 
2b4300021000-2b4304000000 ---p 2b4300021000 00:00 0 
7fffae924000-7fffae93a000 rwxp 7fffae924000 00:00 0                      [stack]
ffffffffff600000-ffffffffffe00000 ---p 00000000 00:00 0                  [vdso]
Aborted

pkroll
Newbie
Posts: 28
Joined: Tue Jun 14, 2005 2:48 pm

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#3 Post by pkroll » Tue Jun 02, 2009 5:04 am

JR, I have similar problems in similar calculations, but already much earlier, when the first call to ewald or hamil is made. They are all of the kind

p0_13848: p4_error: interrupt SIGx: 6
*** glibc detected *** double free or corruption (!prev): 0x00000000011f3c40 ***

p0_1672: p4_error: interrupt SIGx: 6
*** glibc detected *** malloc(): memory corruption: 0x0000000000da5f50 ***

*** glibc detected *** free(): invalid next size (fast): 0x0000000000c563f0 ***

Well, I also have them when running vasp.4.6.31 compiled with PGI 8.0-6. Only the PGI 6 compiler produces a running version of vasp 4.6.31. Unfortunately, Vasp 5.2.2 cannot be compiled with PGI 6 (some MPI calls made in 5.2.2 are not yet implemented there).

Until now I suspected that it is due to the implementation/environment set up by our compute center [MPICH / MPICH2 ..], but it may also be triggered in PGI by some of the routines within vasp (there are no problems on another platform using Intel's compiler).

alex
Hero Member
Posts: 583
Joined: Tue Nov 16, 2004 2:21 pm
License Nr.: 5-67
Location: Germany

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#5 Post by alex » Thu Jun 04, 2009 7:34 am

Does a smaller system run? (Do you have enough memory?)

pkroll
Newbie
Posts: 28
Joined: Tue Jun 14, 2005 2:48 pm

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#6 Post by pkroll » Fri Jun 05, 2009 10:21 pm

As of now, I can run "larger" test problems; the size depends on parallel/serial and on the number of procs chosen.
Using Vasp.4.6.36, I can now compile with PGI-8.0-6 (and Intel 11).

Vasp 5.2.2 still awaits a solution (probably it is all related to hamil.F and the interfacing of older F77 code, but let's leave this to the experts).

JR
Newbie
Posts: 7
Joined: Tue May 05, 2009 9:34 am
License Nr.: 1073
Location: Sydney, Australia

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#7 Post by JR » Sun Jun 07, 2009 4:10 am

Hi pkroll,
Thanks for your reply. I'm not sure if this would be caused by the same thing. I can compile the 4.6 version with the PGI 8 compiler with no difficulty. It only has problems with the 5.2 code, and only when writing out the result. I am compiling the non-MPI version to try to debug it, so the MPICH options shouldn't matter.

Hi Alex,
I'm not sure if your question is addressed at pkroll or myself. I have tried it with a 2-ion system, so I am not sure I can get it much smaller.

pkroll
Newbie
Posts: 28
Joined: Tue Jun 14, 2005 2:48 pm

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#8 Post by pkroll » Mon Jun 08, 2009 7:58 pm

@JR: well, I think you're right; it's not exactly the same error. As of now, I can compile 4.6.36 (there we go ..) using PGI-8 as well.
Version 5.2.2 can be compiled but runs only on "small" problems. Using the debugger, I can trace my problems to "hamil.F" and the interfacing to F77 subroutines (the new compilers are a bit more rigorous in their interpretation of the F90 standard; compare hamil.F in 4.6.31 with that in 4.6.36, and then compare it to 5.2.2, which still contains the "old" interfacing of 4.6.31).

The SEGFAULT I have is "size-dependent" and only occurs at the first call in hamil. Never (like in your case) after some successful iterations.

However, it may well be that the origin of your error is just the same.
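
(To make concrete the kind of interfacing problem I mean, here is a hypothetical, minimal sketch, not actual VASP code: without an explicit interface the compiler cannot check the call, and a size mismatch silently corrupts memory.)

Code:

! hypothetical sketch (not VASP code) of an F77-style call that an F90
! compiler cannot verify across an implicit interface
SUBROUTINE OLD_F77(X, N)
  INTEGER :: N
  REAL(8) :: X(N)    ! F77-style dummy array of caller-supplied length
  X(1:N) = 0.0D0     ! writes N elements, whatever the caller actually owns
END SUBROUTINE OLD_F77

PROGRAM DEMO
  REAL(8) :: S(2)
  ! no explicit interface, so no compile-time check; with N=8 the callee
  ! overruns S and clobbers neighbouring memory -- the sort of damage
  ! glibc later reports as "free(): invalid next size" or "double free"
  CALL OLD_F77(S, 8)
  PRINT *, 'call returned; memory next to S has been overwritten'
END PROGRAM DEMO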

semiluo
Newbie
Posts: 2
Joined: Mon Apr 03, 2006 5:07 am
Location: Colorado, USA

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#9 Post by semiluo » Fri Jul 10, 2009 1:58 pm

This problem can be solved by changing the static arrays in paw.F to dynamic (allocatable) arrays.

lines 884-886 in paw.F
------------------------------------
! OVERLAP CTMP(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),CSO(LMDIM,LMDIM,WDES%NCDIJ), &
! CHF(LMDIM,LMDIM,WDES%NCDIJ)
! OVERLAP COCC(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),COCC_IM(LMDIM,LMDIM)
OVERLAP,ALLOCATABLE:: CTMP(:,:,:),CSO(:,:,:),CHF(:,:,:)
OVERLAP,ALLOCATABLE:: COCC(:,:,:),COCC_IM(:,:)

---

add the following two lines around line 961 in paw.F
---------------------------
ALLOCATE (CTMP(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),CSO(LMDIM,LMDIM,WDES%NCDIJ),CHF(CTMP(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)))
ALLOCATE (COCC(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),COCC_IM(LMDIM,LMDIM))

add the following two lines around line 1432 in paw.F
---------------------------
DEALLOCATE (COCC,COCC_IM,CHF)
DEALLOCATE (CTMP,CSO)
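
(Why this helps, in a minimal standalone sketch rather than the real paw.F: an automatic array is sized on entry and typically placed on the stack, where large LMDIM/NCDIJ values can overflow it; an ALLOCATABLE array lives on the heap.)

Code:

SUBROUTINE SKETCH(LMDIM, NCDIJ)
  INTEGER, INTENT(IN) :: LMDIM, NCDIJ
  ! before: automatic array, stack-allocated on entry
  !   COMPLEX(8) :: CTMP(LMDIM, LMDIM, MAX(2, NCDIJ))
  ! after: allocatable array, heap-allocated
  COMPLEX(8), ALLOCATABLE :: CTMP(:,:,:)
  ALLOCATE (CTMP(LMDIM, LMDIM, MAX(2, NCDIJ)))
  CTMP = (0.0D0, 0.0D0)
  ! ... work with CTMP ...
  DEALLOCATE (CTMP)
END SUBROUTINE SKETCH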

pkroll
Newbie
Posts: 28
Joined: Tue Jun 14, 2005 2:48 pm

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#10 Post by pkroll » Fri Jul 10, 2009 6:07 pm

see discussion at
http://www.pgroup.com/userforum/viewtop ... 6e01d2ea3f

hmm, could this be the reason for other cases of vasp crashing described elsewhere as well?

JR
Newbie
Posts: 7
Joined: Tue May 05, 2009 9:34 am
License Nr.: 1073
Location: Sydney, Australia

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#11 Post by JR » Mon Jul 13, 2009 4:21 am

Thanks Semiluo. Could you please give me some context for where the lines should be added, i.e. the lines above and below where the extra ALLOCATE lines should go?
In the first entry you already show the existing lines commented out, but the second and last entries could be placed in a number of locations that may affect operation. E.g., does the last entry go before or after END SUBROUTINE SET_DD_PAW?

Thank you

JR
Newbie
Posts: 7
Joined: Tue May 05, 2009 9:34 am
License Nr.: 1073
Location: Sydney, Australia

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#12 Post by JR » Mon Jul 13, 2009 8:54 am

I have made the changes you suggested to paw.F (you had a copy-paste error in the code) and compiled it.
It appears to run OK and to write the files out at the end.
I have tested it in serial and in parallel, and it now appears to run to completion in both.

Near line 880

Code:

      LOGICAL, EXTERNAL :: USEFOCK_CONTRIBUTION, USEFOCK_AE_ONECENTER
      REAL(q) DDLM(LMDIM*LMDIM),RHOLM(LMDIM*LMDIM),RHOLM_(LMDIM*LMDIM,WDES%NCDIJ)
      !OVERLAP CTMP(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),CSO(LMDIM,LMDIM,WDES%NCDIJ), &
      !        CHF(LMDIM,LMDIM,WDES%NCDIJ)
      !OVERLAP COCC(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),COCC_IM(LMDIM,LMDIM)
      OVERLAP,ALLOCATABLE:: CTMP(:,:,:),CSO(:,:,:),CHF(:,:,:)
      OVERLAP,ALLOCATABLE:: COCC(:,:,:),COCC_IM(:,:) 
      
      REAL(q),ALLOCATABLE :: POT(:,:,:), RHO(:,:,:), POTAE(:,:,:), RHOAE(:,:,:)
around line 958 (this is where the original post had the copy-paste error, CHF(CTMP(...))

Code:

      ALLOCATE (RHOCOL( NDIM, LMMAX, NCDIJ ))
      ALLOCATE (CTMP(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),CSO(LMDIM,LMDIM,WDES%NCDIJ), &
       CHF(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)))
      ALLOCATE (COCC(LMDIM,LMDIM,MAX(2,WDES%NCDIJ)),COCC_IM(LMDIM,LMDIM)) 
      
! allocate kinetic energy density if metagga
around line 1428

Code:

      E%PAWAE=DOUBLEC_AE
      E%PAWPS=DOUBLEC_PS
      
      DEALLOCATE (COCC,COCC_IM,CHF)
      DEALLOCATE (CTMP,CSO) 

      IF (LUSE_THOMAS_FERMI) CALL POP_XC_TYPE

      CALL RELEASE_PAWFOCK

    END SUBROUTINE SET_DD_PAW

tommy91779

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#13 Post by tommy91779 » Mon Sep 07, 2009 9:45 pm

JR and Semiluo,

I am having a similar problem attempting to run calculations on larger atom systems in VASP 5.2.2. I was wondering if you could either post, in more detail, the context of the dynamic-array code for the three groups of statements mentioned (i.e. the exact placement with respect to the existing or commented-out code) or just post the modified paw.F file.

Thanks in advance

Tom

admin
Administrator
Posts: 2921
Joined: Tue Aug 03, 2004 8:18 am
License Nr.: 458

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#14 Post by admin » Mon Oct 12, 2009 9:02 am

Please check the stack limit size on your system and set it to unlimited
(ulimit -s unlimited).
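
For example, in a bash shell (csh users would use "limit stacksize unlimited" instead):

Code:

# show the current soft stack limit (in kB)
ulimit -s
# remove the limit for this shell and every process it starts
ulimit -s unlimited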

hengji
Newbie
Posts: 1
Joined: Tue Nov 10, 2009 5:32 pm

Vasp 5.2.2 Keeps crashing free(): invalid next size (fast)

#15 Post by hengji » Fri Nov 20, 2009 2:54 am

admin wrote:
Please check the stack limit size on your system and set it to unlimited
(ulimit -s unlimited).

Hello,

I am also trying to compile parallel vasp.5.2 with PGI.

At the beginning, the compiled VASP could do USPP calculations in good agreement with data from another cluster, but PAW did not work well.

Then I followed this thread, made the small correction to paw.F, and compiled again. Now I can run a parallel calculation for a single Fe atom on 32 CPUs, and the results look fine. But with a 47-atom system I ran into big trouble.

The error message is "BRMIX: very serious problems the old and the new charge density differ". The input files I used are the same as those from the computation on the other cluster, and I don't see this charge-density error there. Are there any suggestions for my problem?

I checked the stack size by typing ulimit -a; it is 49067724 kB. Is that large enough?

Thanks!!
