FCISolver expects max_memory = remaining memory in PySCF #54

Open · joannaqw opened this issue Dec 19, 2023 · 8 comments
Labels: code hygiene (Nothing is actually wrong, but the code should be "cleaned up" to elegant conventions)

Comments

@joannaqw

I have updated my dev branch:

(base) [qjwang@midway3-login4 mrh]$ git branch
* dev
  master
(base) [qjwang@midway3-login4 mrh]$ git pull origin dev
From https://github.com/MatthewRHermes/mrh
 * branch            dev        -> FETCH_HEAD
Already up to date.

but still had the error message:

Traceback (most recent call last):
  File "/project/lgagliardi/shared/ibm-collab/may_june_calcs/dps_mp2/LASSCF/12_12/equ/NEW_MOL/combo2/input.py", line 58, in <module>
    las.kernel(mo_localized)
  File "/home/qjwang/mrh/my_pyscf/mcscf/lasci.py", line 950, in kernel
    self.ci, h2eff_sub, veff = _kern(mo_coeff=mo_coeff, ci0=ci0, verbose=verbose, \
  File "/home/qjwang/mrh/my_pyscf/mcscf/lasci_sync.py", line 63, in kernel
    e_cas, ci1 = ci_cycle (las, mo_coeff, ci1, veff, h2eff_sub, casdm1frs, log)
  File "/home/qjwang/mrh/my_pyscf/mcscf/lasci_sync.py", line 290, in ci_cycle
    e_sub, fcivec = fcibox.kernel(h1e, eri_cas, ncas, nelecas,
  File "/home/qjwang/mrh/my_pyscf/mcscf/addons.py", line 75, in kernel
    e, c = solver.kernel(h1e, h2, norb, self._get_nelec(solver, nelec), c0,
  File "/home/qjwang/mrh/my_pyscf/fci/csf.py", line 547, in kernel
    e, c = kernel (self, h1e, eri, norb, nelec, smult=self.smult,
  File "/home/qjwang/mrh/my_pyscf/fci/csf.py", line 375, in kernel
    hdiag_csf = fci.make_hdiag_csf (h1e, eri, norb, nelec, hdiag_det=hdiag_det, max_memory=max_memory)
  File "/home/qjwang/mrh/my_pyscf/fci/csf.py", line 501, in make_hdiag_csf
    return make_hdiag_csf (h1e, eri, norb, nelec, self.transformer, hdiag_det=hdiag_det, max_memory=max_memory)
  File "/home/qjwang/mrh/my_pyscf/fci/csf.py", line 106, in make_hdiag_csf
    raise MemoryError (memstr)
MemoryError: hdiag_csf of 2 orbitals, (1,1) electrons and smult=1 with 0 doubly-occupied orbitals (1 configurations and 2 determinants) requires 8.64e-05 MB > -3053.3283840000004 MB remaining memory

The shared path has the same issue.
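A minimal sketch (not from this thread) of how the remaining-memory figure can go negative: if a caller computes remaining memory as max_memory minus current usage and then hands that value back in as max_memory, the subtraction happens twice.

from pyscf import lib

max_memory = 4000.0                    # MB, the user-configured cap
current = lib.current_memory()[0]      # MB actually in use right now

# The caller computes remaining memory and passes it on as max_memory:
remaining = max_memory - current

# The callee then subtracts current usage again from the value it was handed:
remaining_again = remaining - current  # = max_memory - 2*current

# Once current use exceeds max_memory/2, remaining_again is negative, so even
# the tiny 8.64e-05 MB allocation in the traceback fails the memory check.
print(remaining, remaining_again)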

@MatthewRHermes (Owner)

Could you make an MWE of this issue?

@joannaqw (Author)

Sure, here is the script:

import numpy as np
from pyscf import gto, scf, lib, tools, mcscf
from mrh.my_pyscf.mcscf.lasscf_o0 import LASSCF
from pyscf.mcscf import avas

mol = gto.M(
    atom = '''
 S      0.00000000e+00  0.00000000e+00  0.00000000e+00
 C     -1.23860822e+00 -8.46663022e-01 -9.53471323e-01
 C     -4.86241511e-01  1.71687341e+00  8.05324719e-17
 H      9.79058626e-01  1.06329980e-02 -9.32922830e-01
 C     -1.08512412e+00 -1.04000840e+00 -2.33955666e+00
 C     -9.89555138e-01  2.33917993e+00 -1.15673648e+00
 C     -2.36007161e+00 -1.31391193e+00 -2.42075811e-01
 C     -3.44393192e-01  2.38772131e+00  1.22862736e+00
 C     -2.10111955e+00 -1.72220811e+00 -3.02939494e+00
 C     -1.34857411e+00  3.69252078e+00 -1.06912585e+00
 C     -3.36764427e+00 -1.97967805e+00 -9.57027715e-01
 C     -7.10704041e-01  3.74352824e+00  1.28860533e+00
 C     -3.23819128e+00 -2.18398374e+00 -2.34300894e+00
 C     -1.20867912e+00  4.39135521e+00  1.45724835e-01
 H     -1.95277327e-01 -6.85505500e-01 -2.87159128e+00
 H     -1.11388150e+00  1.78625933e+00 -2.09312749e+00
 H     -2.44701250e+00 -1.16453318e+00  8.39475711e-01
 H      4.29092087e-02  1.87151291e+00  2.11331574e+00
 H     -1.99914507e+00 -1.89203632e+00 -4.10556846e+00
 H     -1.74165916e+00  4.20238246e+00 -1.95402092e+00
 H     -4.24935075e+00 -2.34993946e+00 -4.25357685e-01
 H     -6.09115706e-01  4.28788461e+00  2.23223670e+00
 H     -4.02424405e+00 -2.71350551e+00 -2.88989978e+00
 H     -1.49437612e+00  5.44632225e+00  2.00165204e-01''',
    basis = 'cc-pvdz',
    charge = 1,
    spin = 0,
    verbose = 5)

mf = mol.RHF().run()
ncas, nelecas, guess_mo_coeff = avas.kernel(mf, ['1 C 2p', 'S 3p'], minao=mol.basis)
las_list = np.array([45, 51, 39, 41, 48, 52, 59, 65, 38, 40, 58, 64])
frag_orbs = [['^1 C 2px', '^0 S 3px'],
             ['^1 C 2py', '^4 C 2py', '^6 C 2py', '^8 C 2py', '^10 C 2py', '^12 C 2py'],
             ['^2 C 2px', '^5 C 2px', '^7 C 2px', '^9 C 2px', '^11 C 2px', '^13 C 2px']]
las = LASSCF(mf, (2, 6, 4), (2, 6, 4), spin_sub=(1, 1, 1))
guess_mo_sorted = las.sort_mo(las_list, guess_mo_coeff)
mo_localized = las.localize_init_guess(frag_orbs, guess_mo_sorted)
las.max_cycle_macro = 200
las.kernel(mo_localized)

@MatthewRHermes (Owner)

Can I see the output of grep -E "max_memory|LASCI macro|MB remaining memory" on the output file from the run that crashed? Mine currently produces

[INPUT] max_memory = 8000 
max_memory 8000 MB (current use 95 MB)
max_memory 8000 (MB)
max_memory 8000 MB
max_memory 8000 MB
max_memory 8000 MB
pspace_size of 200 CSFs -> 4 determinants requires 0.39951359999999997 MB, cf 4416.110592 MB remaining memory
pspace_size of 200 CSFs -> 400 determinants requires 3.456 MB, cf 4416.110592 MB remaining memory
pspace_size of 200 CSFs -> 36 determinants requires 0.5346816 MB, cf 4415.721472 MB remaining memory
LASCI macro 0 : E = -855.392718549569 ; |g_int| = 1.98744786381293 ; |g_ci| = 0.191947845294957 ; |g_x| = 0
pspace_size of 200 CSFs -> 4 determinants requires 0.39951359999999997 MB, cf 4243.18976 MB remaining memory
pspace_size of 200 CSFs -> 400 determinants requires 3.456 MB, cf 4243.18976 MB remaining memory
pspace_size of 200 CSFs -> 36 determinants requires 0.5346816 MB, cf 4243.18976 MB remaining memory
LASCI macro 1 : E = -856.96868642341 ; |g_int| = 1.18780681207217 ; |g_ci| = 0.0391621398681654 ; |g_x| = 0
pspace_size of 200 CSFs -> 4 determinants requires 0.39951359999999997 MB, cf 4242.255872 MB remaining memory
pspace_size of 200 CSFs -> 400 determinants requires 3.456 MB, cf 4242.255872 MB remaining memory
pspace_size of 200 CSFs -> 36 determinants requires 0.5346816 MB, cf 4242.255872 MB remaining memory
LASCI macro 2 : E = -857.892220506532 ; |g_int| = 0.955136575550375 ; |g_ci| = 0.039998925591002 ; |g_x| = 0
pspace_size of 200 CSFs -> 4 determinants requires 0.39951359999999997 MB, cf 4239.675392 MB remaining memory
pspace_size of 200 CSFs -> 400 determinants requires 3.456 MB, cf 4239.675392 MB remaining memory
pspace_size of 200 CSFs -> 36 determinants requires 0.5346816 MB, cf 4239.675392 MB remaining memory
LASCI macro 3 : E = -858.137895194404 ; |g_int| = 0.416626197306675 ; |g_ci| = 0.0277244302423608 ; |g_x| = 0

It hasn't converged yet, but it isn't crashing.

@joannaqw (Author)

Sure, here's what it returns:

(base) [qjwang@midway3-login3 combo2]$ grep -E "max_memory|LASCI macro|MB remaining memory" las.out.py
[INPUT] max_memory = 4000
max_memory 4000 MB (current use 80 MB)
max_memory 4000 (MB)
max_memory 4000 MB
max_memory 4000 MB
max_memory 4000 MB

@joannaqw (Author)

I added mol.max_memory = 20000 to my input file and LASSCF is running and hasn't crashed yet; we'll see how it goes.

MatthewRHermes added a commit that referenced this issue Dec 20, 2023
max_memory in the FCI kernel call in lasci_sync was incorrectly
overwritten to *remaining* memory, causing incorrect exceptions to
be raised. Just unset that kwarg since the corresponding member is
set correctly at object initialization.
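A hedged sketch of the pattern this commit describes; the function and variable names below are illustrative, not the actual lasci_sync call sites.

def run_fci(solver, h1e, h2, norb, nelec, ci0=None):
    # Before the fix, the call site passed remaining memory as the
    # max_memory kwarg, something like
    #   solver.kernel(..., max_memory=solver.max_memory - lib.current_memory()[0])
    # which clobbered the attribute set at object initialization.
    #
    # After the fix, the kwarg is simply not passed, so the solver falls
    # back on its own solver.max_memory.
    return solver.kernel(h1e, h2, norb, nelec, ci0)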
@MatthewRHermes (Owner)

Yeah, it probably won't, but see my recent commit to the dev branch above. This was caused by an actual bug I just found, in which max_memory was at one point overwritten with the remaining memory. The bug is hidden if the memory cap is large enough, but 4 GB is just close enough to exactly how much memory this calculation takes that it became visible. If you pull dev now, this should work with 4 GB (although more memory might make it run faster).

@joannaqw (Author)

No problem running calculations now.

MatthewRHermes reopened this Feb 5, 2024
MatthewRHermes added a commit that referenced this issue Feb 5, 2024
PySCF apparently wants "remaining memory" on entry to fcisolver
kernel. CSFSolver currently wants "maximum memory" (the config
input by user) instead. Fix calling lines in LASSCF functions to
the CSFSolver behavior for now. Maybe the CSFSolver behavior needs
to change? Maybe the PySCF behavior needs to change?
@MatthewRHermes (Owner)

There were other points in my code where I did the same thing: passed remaining memory as max_memory. However, I just discovered that the reason I did this is that it is the PySCF convention: the FCISolver kernel is written to expect remaining memory. I don't like this, and it's hard to change CSFSolver to match it. Commit 21a1a51 matches the LASSCF calling functions to CSFSolver's expected behavior for now, but I will have to change the latter (or convince the PySCF maintainers to change their FCISolver behavior) if I ever want to migrate CSFSolver into PySCF...
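A sketch of the two conventions side by side, with a thin adapter at the boundary; remaining_memory and call_pyscf_style are hypothetical helpers for illustration, not part of mrh or PySCF.

from pyscf import lib

def remaining_memory(max_memory):
    # PySCF FCISolver.kernel convention: the max_memory argument means the
    # memory still available, i.e. the configured cap minus current usage.
    return max_memory - lib.current_memory()[0]

def call_pyscf_style(solver, h1e, h2, norb, nelec, ci0=None):
    # CSFSolver convention: solver.max_memory holds the user-configured cap.
    # Convert exactly once, at the boundary, before entering a kernel that
    # expects the PySCF convention, so the subtraction never happens twice.
    return solver.kernel(h1e, h2, norb, nelec, ci0,
                         max_memory=remaining_memory(solver.max_memory))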

MatthewRHermes changed the title from "MemoryError with 'dev' up to date" to "FCISolver expects max_memory = remaining memory in PySCF" on Feb 5, 2024
MatthewRHermes added the code hygiene (Nothing is actually wrong, but the code should be "cleaned up" to elegant conventions) label on Feb 19, 2024