
CHGNetCalculator.calculate add kwarg task: PredTask = "efsm" #215

Merged
merged 3 commits into main from chgnet-calculator-task-kwarg on Nov 16, 2024

Conversation

@janosh (Collaborator) commented Nov 16, 2024

Reason: we need a way to disable magmom predictions. Noticed that the frozen-phonon method computes restorative forces for needless atom displacements when bad magmoms from CHGNet break the structure's symmetry. Changes include a unit test for the new task keyword.
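
For context, a minimal usage sketch of how the new kwarg might be exercised from the ASE side. The value "efs" and the exact call site of the kwarg are assumptions inferred from the PR title (which names task: PredTask = "efsm" as the default), not a quotation of the merged diff:

    # Sketch only: assumes the new `task` kwarg on CHGNetCalculator.calculate()
    # accepts the same PredTask literals as CHGNet.predict_structure
    # ("e", "ef", "em", "efs", "efsm"). Passing "efs" would skip magmom
    # prediction, so phonon workflows see no symmetry-breaking magmoms.
    from ase.build import bulk
    from chgnet.model.dynamics import CHGNetCalculator

    atoms = bulk("Cu", "fcc", a=3.6)  # throwaway test structure
    calc = CHGNetCalculator()  # loads the pretrained CHGNet weights

    # call calculate() explicitly with the new kwarg (ASE normally invokes it
    # for you via atoms.get_potential_energy())
    calc.calculate(atoms=atoms, task="efs")
    print(calc.results.keys())  # expect energy/forces/stress, no magmoms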

@janosh janosh added the enhancement New feature or request label Nov 16, 2024
@janosh janosh enabled auto-merge (squash) November 16, 2024 15:51
@janosh janosh force-pushed the chgnet-calculator-task-kwarg branch from 10c4126 to 93d2e93 on November 16, 2024 16:53
@janosh janosh force-pushed the chgnet-calculator-task-kwarg branch 2 times, most recently from bbd5f59 to af0cabb on November 16, 2024 17:04
@janosh janosh force-pushed the chgnet-calculator-task-kwarg branch from af0cabb to 0cb93e6 on November 16, 2024 19:03
    def test_trainer_composition_model(tmp_path: Path) -> None:
        for param in chgnet.composition_model.parameters():
            assert param.requires_grad is False
        trainer = Trainer(
            model=chgnet,
            targets="efsm",
            optimizer="Adam",
            criterion="MSE",
            learning_rate=1e-2,
            epochs=5,
        )
        initial_weights = chgnet.composition_model.state_dict()["fc.weight"].clone()
>       trainer.train(
            train_loader, val_loader, save_dir=tmp_path, train_composition_model=True
        )

tests/test_trainer.py:106:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
chgnet/trainer/trainer.py:305: in train
    train_mae = self._train(train_loader, epoch, wandb_log_freq)
chgnet/trainer/trainer.py:400: in _train
    combined_loss = self.criterion(targets, prediction)
../../.venv/py312/lib/python3.12/site-packages/torch/nn/modules/module.py:1553: in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
../../.venv/py312/lib/python3.12/site-packages/torch/nn/modules/module.py:1562: in _call_impl
    return forward_call(*args, **kwargs)
chgnet/trainer/trainer.py:861: in forward
    if mag_target is not None and not np.isnan(mag_target).any():
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = tensor([0.9620, 0.0657], device='mps:0'), dtype = None

    def __array__(self, dtype=None):
        if has_torch_function_unary(self):
            return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        if dtype is None:
>           return self.numpy()
E           TypeError: can't convert mps:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

../../.venv/py312/lib/python3.12/site-packages/torch/_tensor.py:1083: TypeError
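
The failure above comes from np.isnan implicitly converting an MPS-resident tensor to numpy inside the trainer's loss forward. A device-agnostic check, sketched below, would sidestep that; this illustrates the failure mode and is not necessarily the change made in this PR:

    # Sketch of a device-agnostic NaN check (assumption: not necessarily the
    # fix applied here). torch.isnan works on any device, whereas np.isnan
    # forces a tensor-to-numpy conversion that fails for tensors on "mps".
    import torch

    def has_valid_mag_target(mag_target: torch.Tensor | None) -> bool:
        """True if a magmom target exists and contains no NaN entries."""
        return mag_target is not None and not torch.isnan(mag_target).any().item()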
@janosh janosh merged commit 84e8d55 into main Nov 16, 2024
10 checks passed
@janosh janosh deleted the chgnet-calculator-task-kwarg branch November 16, 2024 21:14