Contact Details
[email protected]
Feature Description
When using from causalnex.structure.pytorch.notears import from_pandas for causal graph learning with GPU support, there is no option to set the batch_size for PyTorch. It appears that the batch size defaults to the number of samples, as indicated by the code: X_torch = torch.from_numpy(x).float().to(self.device). Consequently, with data shaped (12000, 55), an 8GB RTX3070 encounters out-of-memory issues.
Why is this feature needed?
An adjustable batch size would let training iterate over mini-batches instead of moving the entire dataset onto the GPU at once, so the library could handle larger datasets on memory-constrained devices.
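For illustration only, here is a minimal sketch of the idea using a standard PyTorch DataLoader. The batch_size value and the model/loss are hypothetical placeholders, not CausalNex's actual NOTEARS implementation:

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = np.random.rand(12000, 55).astype(np.float32)  # stand-in for the pandas data
loader = DataLoader(TensorDataset(torch.from_numpy(x)),
                    batch_size=256, shuffle=True)  # hypothetical batch_size option

model = torch.nn.Linear(55, 55).to(device)         # placeholder, not the NOTEARS model
optimizer = torch.optim.Adam(model.parameters())

for (batch,) in loader:
    batch = batch.to(device)                       # only one mini-batch lives on the GPU at a time
    loss = ((model(batch) - batch) ** 2).mean()    # placeholder reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In a real implementation the least-squares and DAG-constraint terms would take the place of the placeholder loss; the point is only that transferring one mini-batch at a time keeps peak GPU memory independent of the number of samples.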
Additional Context
No response
Possible implementation
No response
Code of Conduct
I agree to follow this project's Code of Conduct