Error when compiling VASP 6.3.2 with Shared memory
Posted: Wed Feb 22, 2023 3:19 am
Hi,

I'm trying to compile VASP 6.3.2 with shared-memory support (-Duse_shmem) to reduce memory usage when running on-the-fly MLFF generation. I'm using the arch/makefile.include.intel file that ships with the VASP source, with the -Duse_shmem precompiler flag added as the only change. However, the build fails at the link stage with the following errors:
Code:
mpiifort -qmkl=sequential -o vasp c2f_interface.o nccl2for.o simd.o base.o profiling.o string.o tutor.o version.o command_line.o vhdf5_base.o incar_reader.o reader_base.o openmp.o openacc_struct.o mpi.o mpi_shmem.o mathtools.o hamil_struct.o radial_struct.o pseudo_struct.o mgrid_struct.o wave_struct.o nl_struct.o mkpoints_struct.o poscar_struct.o afqmc_struct.o fock_glb.o chi_glb.o smart_allocate.o xml.o extpot_glb.o constant.o ml_ff_c2f_interface.o ml_ff_prec.o ml_ff_constant.o ml_ff_taglist.o ml_ff_struct.o ml_ff_mpi_help.o ml_ff_mpi_shmem.o vdwforcefield_glb.o jacobi.o main_mpi.o openacc.o scala.o asa.o lattice.o poscar.o ini.o mgrid.o ml_ff_error.o ml_ff_mpi.o ml_ff_helper.o ml_ff_logfile.o ml_ff_math.o ml_ff_iohandle.o ml_ff_memory.o ml_ff_abinitio.o ml_ff_ff.o ml_ff_mlff.o setex_struct.o xclib.o vdw_nl.o xclib_grad.o setex.o radial.o pseudo.o gridq.o ebs.o symlib.o mkpoints.o random.o wave.o wave_mpi.o wave_high.o bext.o spinsym.o symmetry.o lattlib.o nonl.o nonlr.o nonl_high.o dfast.o choleski2.o mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o constrmag.o cl_shift.o relativistic.o LDApU.o paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o diis.o rhfatm.o hyperfine.o fock_ace.o paw.o mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o solvation.o scpc.o pot.o tet.o dos.o elf.o hamil_rot.o chain.o dyna.o fileio.o vhdf5.o sphpro.o us.o core_rel.o aedens.o wavpre.o wavpre_noio.o broyden.o dynbr.o reader.o writer.o xml_writer.o brent.o stufak.o opergrid.o stepver.o fast_aug.o fock_multipole.o fock.o fock_dbl.o fock_frc.o mkpoints_change.o subrot_cluster.o sym_grad.o mymath.o npt_dynamics.o subdftd3.o subdftd4.o internals.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o nmr.o pead.o k-proj.o subrot.o subrot_scf.o paircorrection.o rpa_force.o ml_reader.o ml_interface.o force.o pwlhf.o gw_model.o optreal.o steep.o rmm-diis.o davidson.o david_inner.o root_find.o lcao_bare.o locproj.o electron_common.o electron.o rot.o electron_all.o 
shm.o pardens.o optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o hamil_lr.o rmm-diis_lr.o subrot_lr.o lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o linear_optics.o setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o gauss_quad.o m_unirnk.o minimax_ini.o minimax_dependence.o minimax_functions1D.o minimax_functions2D.o minimax_struct.o minimax_varpro.o minimax.o umco.o mlwf.o ratpol.o pade_fit.o screened_2e.o wave_cacher.o crpa.o chi_base.o wpot.o local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o bse_te.o bse.o time_propagation.o acfdt.o afqmc.o rpax.o chi.o acfdt_GG.o dmft.o GG_base.o greens_orbital.o lt_mp2.o rnd_orb_mp2.o greens_real_space.o chi_GG.o chi_super.o sydmat.o rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o wave_interpolate.o linear_response.o auger.o dmatrix.o phonon.o wannier_mats.o elphon.o core_con_mat.o embed.o extpot.o rpa_high.o fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o main.o -Llib -ldmy -Lparser -lparser -lstdc++ -L/cmcm/eb/software/imkl/2022.2.1/mkl/2022.2.1/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
ld: mpi.o: in function `mpimy_mp_m_divide_shmem_':
mpi.f90:(.text+0x3617): undefined reference to `getshmem_C'
ld: mpi.f90:(.text+0x3636): undefined reference to `attachshmem_C'
ld: mpi.f90:(.text+0x370a): undefined reference to `detachshmem_C'
ld: mpi.f90:(.text+0x3826): undefined reference to `destroyshmem_C'
ld: mpi.o: in function `mpimy_mp_m_divide_intra_inter_node_':
mpi.f90:(.text+0x433e): undefined reference to `getshmem_C'
ld: mpi.f90:(.text+0x435d): undefined reference to `attachshmem_C'
ld: mpi.f90:(.text+0x4433): undefined reference to `detachshmem_C'
ld: mpi.f90:(.text+0x480a): undefined reference to `destroyshmem_C'
ld: ml_ff_mpi.o: in function `mpi_data_mp_m_divide_intra_inter_node_':
ml_ff_mpi.f90:(.text+0x10c): undefined reference to `getshmem_C'
ld: ml_ff_mpi.f90:(.text+0x12b): undefined reference to `attachshmem_C'
ld: ml_ff_mpi.f90:(.text+0x33d): undefined reference to `detachshmem_C'
ld: ml_ff_mpi.f90:(.text+0x34d): undefined reference to `destroyshmem_C'
ld: wave.o: in function `wave_mp_newwav_shmem_':
wave.f90:(.text+0xbac1): undefined reference to `attachshmem_C'
ld: wave.f90:(.text+0xbc06): undefined reference to `attachshmem_C'
ld: wave.f90:(.text+0xbd64): undefined reference to `getshmem_C'
ld: wave.f90:(.text+0xbd8a): undefined reference to `getshmem_C'
ld: wave.o: in function `wave_mp_delwav_shmem_':
wave.f90:(.text+0xc817): undefined reference to `detachshmem_C'
ld: wave.f90:(.text+0xc8f9): undefined reference to `destroyshmem_C'
ld: auger.o: in function `auger_mp_allocate_wavefun_shmem_':
auger.f90:(.text+0x9ac): undefined reference to `getshmem_C'
ld: auger.f90:(.text+0xa55): undefined reference to `attachshmem_C'
ld: auger.f90:(.text+0xb34): undefined reference to `getshmem_C'
ld: auger.f90:(.text+0xc06): undefined reference to `attachshmem_C'
ld: auger.o: in function `auger_mp_deallocate_wavefun_shmem_':
auger.f90:(.text+0xdc4): undefined reference to `detachshmem_C'
ld: auger.f90:(.text+0xde3): undefined reference to `destroyshmem_C'
ld: auger.f90:(.text+0xe05): undefined reference to `detachshmem_C'
ld: auger.f90:(.text+0xe24): undefined reference to `destroyshmem_C'
make[2]: *** [vasp] Error 1
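For what it's worth, the undefined *_C symbols look like they should come from a small C source file of System V shared-memory wrappers that is apparently never compiled into the build when only the Fortran preprocessor flag is set. A minimal standalone sketch of that kind of interface, purely my own illustration of the mechanism and not VASP's actual getshmem source:

```c
/* Hypothetical sketch of Fortran-callable shared-memory wrappers around the
 * System V shm* system calls. Names and signatures are my guess at the
 * mechanism behind the missing symbols, not VASP's real implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Allocate a shared-memory segment of *size bytes; return its id in *id. */
void getshmem_C(size_t *size, int *id)
{
    *id = shmget(IPC_PRIVATE, *size, IPC_CREAT | 0600);
    if (*id == -1) { perror("shmget"); exit(EXIT_FAILURE); }
}

/* Attach the segment *id to this process's address space. */
void attachshmem_C(int *id, void **address)
{
    *address = shmat(*id, NULL, 0);
    if (*address == (void *) -1) { perror("shmat"); exit(EXIT_FAILURE); }
}

/* Detach the segment from this process. */
void detachshmem_C(void **address)
{
    if (shmdt(*address) == -1) { perror("shmdt"); exit(EXIT_FAILURE); }
}

/* Mark the segment for destruction once all processes have detached. */
void destroyshmem_C(int *id)
{
    if (shmctl(*id, IPC_RMID, NULL) == -1) { perror("shmctl"); exit(EXIT_FAILURE); }
}
```

The trailing _C in the symbol names presumably comes from VASP's Fortran-to-C name-mangling macros, so the Fortran side clearly expects these routines to exist somewhere in the link line.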
Note that without the -Duse_shmem precompiler flag, compilation completes successfully.
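For reference, the only modification relative to the stock arch/makefile.include.intel is the one appended flag; the excerpt below is a sketch from memory, so the surrounding flag list may differ slightly from the shipped file:

```make
# makefile.include (excerpt): -Duse_shmem appended to the stock flag list
CPP_OPTIONS = -DHOST=\"LinuxIFC\" \
              -DMPI -DMPI_BLOCK=8000 \
              -Duse_collective \
              -DscaLAPACK \
              -DCACHE_SIZE=4000 \
              -Davoidalloc \
              -Dvasp6 \
              -Duse_bse_te \
              -Dtbdyn \
              -Dfock_dblbuf \
              -Duse_shmem
```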
I have tried the following versions of the Intel compiler:
- 2019.5.281
- 2020.4.304
- 2021.4.0
- 2022.2.1
/Daniel