[WIP] Q-HDNNP: Simplified "charge-aware" HDNNP #80

Status: Open. singraber wants to merge 30 commits into master.
Conversation

singraber (Member)
No description provided.

singraber added the labels "enhancement" (New feature or request) and "core" (Touches the libnnp core library) on Jan 22, 2021.
singraber requested a review from mpbircher on Jan 22, 2021, 10:36.
singraber (Member, Author) commented:
Charge training seems to work (see examples/nnp-train/H2O_RPBE-D3):

List of extra keywords required:

nnp_type                                Q-HDNNP
global_hidden_layers_electrostatic      2
global_nodes_electrostatic              25 25
global_activation_electrostatic         t t l
charge_fraction                         1.0
task_batch_size_charge                  1
write_traincharges                      1
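
For context, a minimal sketch of how these keywords might sit in a complete settings file. The surrounding short-range keywords (number_of_elements, elements, global_hidden_layers_short, global_nodes_short, global_activation_short) are standard n2p2 settings; their values here are illustrative assumptions for a water system, not taken from this PR:

# Standard HDNNP setup (illustrative values, assumed for a water system).
number_of_elements                      2
elements                                H O
global_hidden_layers_short              2
global_nodes_short                      25 25
global_activation_short                 t t l

# Extra keywords introduced by this PR for the electrostatic (charge) network.
nnp_type                                Q-HDNNP
global_hidden_layers_electrostatic      2
global_nodes_electrostatic              25 25
global_activation_electrostatic         t t l
charge_fraction                         1.0
task_batch_size_charge                  1
write_traincharges                      1

The activation string t t l follows the same convention as the short-range network: hyperbolic tangent on the two hidden layers and a linear output node.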

Things to do:

Needs testing (compiles but yields bogus numbers)
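
As a usage note (an assumption about the intended workflow, which the PR text does not spell out): with the settings above in place, charge training would presumably be launched the same way as ordinary energy/force training in n2p2, e.g. from a prepared example directory such as examples/nnp-train/H2O_RPBE-D3:

mpirun -np 4 nnp-train

The process count is arbitrary. With write_traincharges set to 1, per-epoch charge comparison files would then presumably be written alongside the usual training output.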
codecov-io commented on Feb 2, 2021:

Codecov Report

Merging #80 (cca139d) into master (c96ea3c) will decrease coverage by 2.53%.
The diff coverage is n/a.

[Link: impacted file tree graph]

@@            Coverage Diff             @@
##           master      #80      +/-   ##
==========================================
- Coverage   72.52%   69.99%   -2.54%     
==========================================
  Files         126      131       +5     
  Lines       13311    14280     +969     
==========================================
+ Hits         9654     9995     +341     
- Misses       3657     4285     +628     
Flag      Coverage Δ
cpp       73.11% <ø> (-2.24%) ⬇️
python    69.99% <ø> (-2.54%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                                           Coverage Δ
src/application/nnp-comp2.cpp                              0.00% <ø> (ø)
src/application/nnp-dataset.cpp                           96.00% <ø> (+0.07%) ⬆️
src/application/nnp-predict.cpp                           95.88% <ø> (+0.36%) ⬆️
src/application/nnp-scaling.cpp                           96.34% <ø> (ø)
src/application/nnp-sfclust.cpp                           92.18% <ø> (ø)
src/application/nnp-train.cpp                             94.11% <ø> (-5.89%) ⬇️
...nterface/LAMMPS/src/USER-NNP/pair_nnp_external.cpp      0.00% <ø> (ø)
.../interface/LAMMPS/src/USER-NNP/pair_nnp_external.h    100.00% <ø> (ø)
src/libnnp/Atom.cpp                                       39.98% <ø> (+0.21%) ⬆️
src/libnnp/Atom.h                                         80.00% <ø> (+13.33%) ⬆️
... and 100 more


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
