I was wondering whether there is documentation on the optimal training parameters. For example, how many tomograms should I provide for network training? (I currently provide 3 out of my dataset of 100.) The same goes for batch size, kernel size, etc., for which I stuck to the values in the README.
Any help would be appreciated!
Like Marten, I'm also interested in getting a sense of how many tomograms should be used for training on average. Initially, we were told to use all the data (up to 100 tomograms in our case), but this takes a considerable amount of time (6 days), and the loss value even tends to increase over the last few epochs.
That's very different from what Marten uses here, so I'd imagine a dozen tomograms would already be enough, wouldn't it?