TABLE OF CONTENTS
ABINIT/defs_abitypes [ Modules ]
NAME
defs_abitypes
FUNCTION
This module contains definitions of high-level structured datatypes for the ABINIT package.

If you are sure a new high-level structured datatype is needed, write it here, and DOCUMENT it properly (not all datastructures here are well documented, which is a shame...). Do not forget: you will likely be the main beneficiary if you document properly.

Proper documentation of a structured datatype means:
(1) Mention it in the list just below.
(2) Describe it in the NOTES section.
(3) Put it in alphabetical order in the main section of this module.
(4) Document each of its records, except if they are described elsewhere (this is typically the case for the dataset associated with input variables, for which there is a help file).
(5) Declare variables on separate lines in order to reduce the occurrence of git conflicts.

List of datatypes:
* MPI_type: the data related to MPI parallelization
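As an illustration of rules (1)-(5), a hypothetical datatype might be documented as sketched below. The name example_type and both of its records are invented for this sketch and are not part of the module; only the layout (ROBODOC header, one declaration per line, one comment block per record) follows the actual convention.

!!****t* defs_abitypes/example_type
!! NAME
!!  example_type
!! FUNCTION
!!  Invented datatype, shown only to illustrate the documentation rules above.
!! SOURCE

 type example_type

   integer :: nitems
   ! Number of items treated by current proc (each record documented, one declaration per line)

   real(dp), allocatable :: values(:)
   ! values(1:nitems)
   ! Data associated with each item

 end type example_type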
COPYRIGHT
Copyright (C) 2001-2024 ABINIT group (XG)
This file is distributed under the terms of the GNU General Public License, see ~abinit/COPYING or http://www.gnu.org/copyleft/gpl.txt .
SOURCE
#if defined HAVE_CONFIG_H
#include "config.h"
#endif

#include "abi_common.h"

module defs_abitypes

 use defs_basis
 use m_abicore
 use m_distribfft

 implicit none
defs_abitypes/MPI_type [ Types ]
NAME
MPI_type
FUNCTION
The MPI_type structured datatype gathers different pieces of information about the MPI parallelisation: the number of processors, the index of my processor, the different groups of processors, etc.
SOURCE
 type MPI_type

  ! WARNING: if you modify this datatype, please check whether there might be creation/destruction/copy routines,
  ! declared in another part of ABINIT, that might need to take your modification into account.
  ! Variables should be declared on separate lines in order to reduce the occurrence of git conflicts.

  ! *****************************************************************************************
  ! Please make sure that initmpi_seq is changed so that any variable or any flag in MPI_type
  ! is initialized with the value used for sequential executions.
  ! In particular any MPI communicator should be set to MPI_COMM_SELF.
  ! *****************************************************************************************

  ! Set of variables for parallelism that do NOT depend on input variables.
  ! These are defined for each dataset.

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! Main variables for parallelisation
   integer :: comm_world
   ! world communicator (MPI_COMM_WORLD)

   integer :: me
   ! rank of my processor in the group of all processors

   integer :: nproc
   ! number of processors

   integer :: me_g0
   ! if set to 1, the current processor is taking care of the G(0 0 0) planewave.

   integer :: me_g0_fft
   ! same as me_g0, but in the FFT representation (me_g0_fft=1 if me_fft=0).

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the parallelisation over atoms (PAW)
   integer :: comm_atom
   ! Communicator for atom parallelism

   integer :: nproc_atom
   ! Size of the communicator over atoms

   integer :: my_natom
   ! Number of atoms treated by current proc

   integer,pointer :: my_atmtab(:) => null()
   ! Indexes of the atoms treated by current processor

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the parallelisation over perturbations
   integer :: paral_pert
   ! activates the parallelisation over perturbations for linear response

   integer :: comm_pert
   ! communicator for calculating perturbations

   integer :: comm_cell_pert
   ! general communicator over all processors treating the same cell

   integer :: me_pert
   ! rank of my processor in my group of perturbations

   integer :: nproc_pert
   ! number of processors in my group of perturbations

   integer, allocatable :: distrb_pert(:)
   ! distrb_pert(1:npert)
   ! index of the processor treating each perturbation

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the parallelisation over images
   integer :: paral_img
   ! Flag activated if parallelization over images is on

   integer :: my_nimage
   ! Number of images of the cell treated by current proc (i.e. local nimage)

   integer :: comm_img
   ! Communicator over all images

   integer :: me_img
   ! Index of my processor in the comm. over all images

   integer :: nproc_img
   ! Size of the communicator over all images

   integer,allocatable :: distrb_img(:)
   ! distrb_img(1:dtset%nimage)
   ! index of the processor treating each image (in the comm_img communicator)

   integer,allocatable :: my_imgtab(:)
   ! my_imgtab(1:my_nimage)
   ! indexes of the images treated by current proc

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the parallelisation over the cell
   integer :: comm_cell
   ! local communicator over all processors treating the same cell

   integer :: me_cell
   ! Index of my processor in the comm. over one cell

   integer :: nproc_cell
   ! Size of the communicator over one cell

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the parallelisation over FFT
   integer :: comm_fft
   ! Communicator over FFT

   integer :: me_fft
   ! Rank of my processor in my group of FFT

   integer :: nproc_fft
   ! number of processors in my group of FFT

   type(distribfft_type),pointer :: distribfft => null()
   ! Contains all the information related to the FFT distribution

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the parallelisation over bands
   integer :: paralbd
   ! paralbd=0 : no parallelisation over bands
   ! paralbd=1 : parallelisation over bands

   integer :: comm_band
   ! Communicator over bands

   integer :: me_band
   ! Rank of my proc in my group of bands

   integer :: nproc_band
   ! Number of procs on which we distribute bands

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the spinor parallelisation
   integer :: paral_spinor
   ! Flag: activation of parallelization over spinors

   integer :: comm_spinor
   ! Communicator over spinors

   integer :: me_spinor
   ! Rank of my proc in the communicator over spinors
   ! Note: me_spinor is related to the spinor index treated by current proc
   ! (nspinor_index = mpi_enreg%me_spinor + 1)

   integer :: nproc_spinor
   ! Number of procs on which we distribute spinors

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the kpt/nsppol parallelisation
   integer :: comm_kpt
   ! Communicator over kpt

   integer :: me_kpt
   ! Rank of my proc in the communicator over kpt

   integer :: nproc_spkpt
   ! Number of procs on which we distribute spins and kpt

   integer, allocatable :: proc_distrb(:,:,:)
   ! proc_distrb(nkpt,mband,nsppol)
   ! rank of the processor that will treat each band in each k-point

   integer :: my_isppoltab(2)
   ! my_isppoltab(2) contains the flags telling which value of isppol is treated by current proc
   ! in sequential: (1,0) when nsppol=1 and (1,1) when nsppol=2
   ! in parallel:   (1,0) when nsppol=1
   !                (1,0) when nsppol=2 and the up-spin is treated
   !                (0,1) when nsppol=2 and the down-spin is treated
   !                (1,1) when nsppol=2 and both spins are treated

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the band-FFT-kpt-spinor parallelisation
   integer :: paral_kgb
   ! Flag: activation of parallelization over kpt/band/fft

   integer :: bandpp
   ! Number of bands in the paral_kgb block treated by this processor

   integer :: comm_bandspinorfft
   ! Cartesian communicator over band-fft-spinor

   integer :: comm_bandfft
   ! Cartesian communicator over band-fft

   integer :: comm_kptband
   ! Communicator over the kpt-band subspace

   integer :: comm_spinorfft
   ! Communicator over the fft-spinor subspace

   integer :: comm_bandspinor
   ! Communicator over the band-spinor subspace

   integer, allocatable :: my_kgtab(:,:)
   ! (mpw, mkmem)
   ! Indexes of the kg vectors treated by current proc,
   ! i.e. mapping between the G-vectors stored by this proc and the list of G-vectors
   ! one would have in the sequential version. See kpgsph in m_fftcore.

   integer, allocatable :: my_kpttab(:)
   ! Indicates the correspondence between ikpt and ikpt_this_proc

   real(dp) :: pw_unbal_thresh
   ! Threshold (in %) activating the plane-wave load balancing process (see the kpgsph routine)

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the parallelisation over kpt/nsppol in the Berry phase case
   integer, allocatable :: kptdstrb(:,:,:)
   ! kptdstrb(me,ineigh,ikptloc)
   ! table of processors required by dfptnl_mv.f and berryphase_new.f

   integer, allocatable :: kpt_loc2fbz_sp(:,:,:)
   ! kpt_loc2fbz_sp(nproc, dtefield%fmkmem_max, 2)
   ! K-PoinT LOCal TO Full Brillouin Zone and Spin Polarization
   ! given a processor and the local index of a k-point on this proc,
   ! gives the index of the k-point in the FBZ and the isppol;
   ! necessary for synchronisation in berryphase_new:
   ! kpt_loc2fbz_sp(iproc, ikpt_loc, 1) = ikpt
   ! kpt_loc2fbz_sp(iproc, ikpt_loc, 2) = isppol

   integer, allocatable :: kpt_loc2ibz_sp(:,:,:)

   ! TODO: Is it still used?
   integer, allocatable :: mkmem(:)

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the Hartree-Fock parallelisation
   integer :: paral_hf
   ! Flag: activation of parallelization for Hartree-Fock

   integer :: comm_hf
   ! Communicator over the k-points and bands of occupied states for Hartree-Fock

   integer :: me_hf
   ! Rank of my proc in the communicator for Hartree-Fock

   integer :: nproc_hf
   ! Number of procs on which we distribute the occupied states for Hartree-Fock

   integer, allocatable :: distrb_hf(:,:,:)
   ! distrb_hf(nkpthf,nbandhf,1)
   ! index of the processor treating each occupied state for Hartree-Fock.
   ! No spin dependence, because only the correct spin is treated (in parallel) or both spins are considered (sequential),
   ! but we keep the third dimension (always equal to one) to be able to use the same routines as the ones for proc_distrb

  ! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ! This is for the wavelet parallelisation
   integer :: comm_wvl
   ! Communicator over real space grids for WVLs

   integer :: me_wvl
   ! Rank of my proc for WVLs

   integer :: nproc_wvl
   ! Number of procs for WVLs

   ! Array to store the description of the scattering in real space of
   ! the potentials and density. It is allocated to (0:nproc-1,4).
   ! The four values are:
   ! - the density size in z direction ( = ngfft(3)) ;
   ! - the potential size in z direction ( <= ngfft(3)) ;
   ! - the position of the first value in the complete array ;
   ! - the shift for the potential in the array.
   ! This array is a pointer to a BigDFT-handled one.
   integer, pointer :: nscatterarr(:,:) => null()

   ! Array to store the total size (on this proc) of the potential arrays when
   ! the memory is distributed following nscatterarr.
   ! This array is a pointer to a BigDFT-handled one.
   integer, pointer :: ngatherarr(:,:) => null()

   ! Store the ionic potential size in z direction.
   integer :: ngfft3_ionic
  ! End wavelet additions

 end type MPI_type
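The warning at the top of the type asks that every field have a well-defined sequential value, with every communicator set to MPI_COMM_SELF. A minimal sketch of what that convention implies is given below; this is not ABINIT's actual initmpi_seq routine, and the subroutine name and the use of the standard mpi module are assumptions made for the illustration.

 subroutine init_seq_sketch(mpi_enreg)
  ! Minimal sketch: give the main MPI_type fields their sequential values.
  use mpi, only : MPI_COMM_SELF   ! assumption: an MPI build
  type(MPI_type),intent(inout) :: mpi_enreg

  mpi_enreg%comm_world = MPI_COMM_SELF
  mpi_enreg%me         = 0   ! rank 0 is the only processor
  mpi_enreg%nproc      = 1
  mpi_enreg%me_g0      = 1   ! the only proc holds the G(0 0 0) planewave
  mpi_enreg%me_g0_fft  = 1   ! me_fft=0, hence me_g0_fft=1
  mpi_enreg%comm_fft   = MPI_COMM_SELF
  mpi_enreg%me_fft     = 0
  mpi_enreg%nproc_fft  = 1
  mpi_enreg%comm_band  = MPI_COMM_SELF
  mpi_enreg%me_band    = 0
  mpi_enreg%nproc_band = 1
 end subroutine init_seq_sketch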