LASCI Convergence issue within LASSI[r,nmax] #68
Comments
Does this also happen with the smaller AO basis set? Do you have those optimized orbitals lying around somewhere?
I have (11e,10o) optimized orbitals with cc-pVDZ(Al,Fe,O)+STO-3G(C,H). I decided not to push it further because I would need at least a triple-zeta basis for a (11e,20o) active space. Additionally, all LASSI calculations were done with the production-quality basis set. That said, I am going to start a LASSI with (11e,10o) in a smaller basis and see if I can recreate the problem. I am going to share the orbitals with you through shared rcc - it's just easier.
For documentation: the reason this is both a bug and a convergence issue is that LASSI[r,1] (i.e., not passing the lroots kwarg) should have identical convergence behavior to LASSI[r,nmax] (passing the lroots kwarg), since only the ground states of each fragment in each rootspace talk to each other.
Two-step bandaid that appears to solve the problem:
Note that the pspace step carries the risk of invoking issue #48 if CASCI calculations in large active spaces are also being attempted. The required value of pspace is likely system-dependent, so it's important to check all outputs and never assume that the calculation is converged. Unfortunately, applying these bandaids changes the computed J from 2.29 cm^-1 to 1.94 cm^-1...
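For readers unfamiliar with the pspace knob: assuming this refers to pyscf's determinant-based FCI solvers, the relevant attribute is `pspace_size`, which sets how many determinants are diagonalized exactly to seed and precondition the Davidson iterations. A hedged sketch (the molecule and values here are illustrative, not from this issue; the dense pspace Hamiltonian is what creates the issue-#48 risk in large active spaces):

```python
# Illustrative only: raising pspace_size can rescue FCI convergence for
# highly excited states, at the cost of building and exactly diagonalizing
# a dense pspace_size x pspace_size Hamiltonian block.
from pyscf import gto, scf, mcscf

mol = gto.M(atom='O 0 0 0; H 0 0 1; H 0 1 0', basis='sto-3g')
mf = scf.RHF(mol).run()
mc = mcscf.CASCI(mf, 4, 4)
mc.fcisolver.pspace_size = 800   # pyscf default is 400; needed value is system-dependent
mc.kernel()
```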
Update: it's not a bug; it's just that the interaction between fragments is unrelated to the problem. Each individual FCI kernel call fails on its own when highly excited states are sought. As is usually the case with convergence failures, this seems to be related to the initial guess. In the current state of the code, no guess is ever constructed for more than the ground state of each fragment in each Hilbert space. I don't quite understand why the bandaids above appeared to work.
Commit ae2dfda, which for the first time implements the construction of meaningful FCI guess vectors for multiple states in each fragment in each rootspace in the method-object layer (as opposed to leaving it up to the generic Davidson diagonalizer, which was implicitly taking care of this step until now), permits the test calculation to converge successfully without recourse to either bandaid above. The outcome J value is still 1.94 cm^-1, which means the bandaids were somehow guiding it to the right answer. If convergence still fails in other cases, increasing
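To illustrate the generic idea behind the fix (explicit per-state guess vectors rather than whatever the diagonalizer improvises), here is a minimal self-contained numpy sketch of the standard technique: one unit-vector guess per requested root, placed on the lowest diagonal elements of H. This is an illustration only, not the mrh implementation; the helper names `lowest_diag_guesses` and `davidson` are hypothetical.

```python
import numpy as np

def lowest_diag_guesses(hdiag, nroots):
    """One unit-vector guess per requested root, placed on the
    nroots lowest diagonal elements of H (columns of the result)."""
    idx = np.argsort(hdiag)[:nroots]
    guesses = np.zeros((hdiag.size, nroots))
    guesses[idx, np.arange(nroots)] = 1.0
    return guesses

def davidson(hop, hdiag, nroots, tol=1e-9, max_cycle=50):
    """Bare-bones multi-root Davidson solver (no subspace collapse)."""
    V = lowest_diag_guesses(hdiag, nroots)    # subspace basis, columns
    for _ in range(max_cycle):
        V, _ = np.linalg.qr(V)                # orthonormalize subspace
        HV = hop(V)
        w, c = np.linalg.eigh(V.T @ HV)       # Ritz values/vectors in subspace
        w, c = w[:nroots], c[:, :nroots]
        X = V @ c
        R = HV @ c - X * w                    # residuals, one column per root
        if np.abs(R).max() < tol:
            return w, X
        # diagonal preconditioner: correction ~ r_i / (lambda - H_ii)
        V = np.hstack([V, R / (w - hdiag[:, None] + 1e-12)])
    return w, X

# toy diagonally dominant symmetric "Hamiltonian"
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) * 0.01
H = (A + A.T) / 2 + np.diag(np.arange(n, dtype=float))
w, X = davidson(lambda V: H @ V, np.diag(H), nroots=5)
```

With one guess per root, all five roots converge; seeding only the ground state and asking the solver for five roots is exactly the situation where such iterations tend to stall on the higher states.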
I can confirm that all my test cases now converge.
I am trying a (11e,20o) active space with AlFe2 node.
When I try a LASSI[1,1], I do not have LASCI convergence issues.
With LASSI[1,3], I see that LASCI converges fine, although I start getting some warnings around QR decomposition.
With a LASSI[1,5], I see that LASCI does not converge for many states and the following is observed.
I am attaching a logfile from a script that runs all three cases. The script is included within the logfile.
I am also attaching the HF- and LASSCF-converged orbitals and the geometry.
For some reason, GitHub is not accepting my .tgz of the folder, so here is the same folder on Google Drive: https://drive.google.com/file/d/1JzNCVKPk8kk79V_4-nq1AVDL37Fhjk48/view?usp=sharing.