Training stops out of nowhere #351
Unanswered
mihailpettas asked this question in Q&A
Replies: 2 comments · 6 replies
-
Hi @jezerro, thank you for the question. Did you try running your experiment on a different dataset? Does this happen only on this specific dataset/task you are working on, or on other datasets as well? Possible causes include a segmentation fault (although that usually produces an error message) or, more likely, running out of memory. Can you please provide more information (e.g., code) so we can reproduce the error? Thank you!
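To help check the out-of-memory hypothesis without any extra dependencies, a minimal standard-library sketch can report the process's peak memory use before and after the training call (the helper name `peak_rss` is just illustrative, not part of FLAML):

```python
import resource

def peak_rss():
    """Return this process's peak resident set size so far.
    Note: ru_maxrss is reported in KiB on Linux but in bytes on macOS."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Call before and periodically/after the training run; if the last printed
# value is close to the machine's RAM, an OOM kill is the likely culprit.
print(f"peak RSS so far: {peak_rss()}")
```

Printing this around (or during) the `fit` call leaves a trail in the log even when the process itself is killed without a traceback.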
-
Alright, I'm training a model with code that used to work fine, and now it stops mid-training with output like this:
[flaml.automl: 12-22 15:17:03] {2344} INFO - iteration 59, current learner lgbm
[flaml.automl: 12-22 15:17:03] {2532} INFO - at 10.6s, estimator lgbm's best error=0.0000, best estimator lgbm's best error=0.0000
[flaml.automl: 12-22 15:17:03] {2344} INFO - iteration 60, current learner xgboost
[flaml.automl: 12-22 15:17:03] {2532} INFO - at 10.7s, estimator xgboost's best error=0.0000, best estimator lgbm's best error=0.0000
[flaml.automl: 12-22 15:17:03] {2344} INFO - iteration 61, current learner lgbm
[flaml.automl: 12-22 15:17:03] {2532} INFO - at 10.8s, estimator lgbm's best error=0.0000, best estimator lgbm's best error=0.0000
[flaml.automl: 12-22 15:17:03] {2344} INFO - iteration 62, current learner xgboost
Any ideas? Early stopping is on, but it used to print some output when it stopped; now it just returns to the terminal.
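Returning to the terminal with no traceback is consistent with the process being killed by a signal (on Linux, the OOM killer delivers SIGKILL). A quick way to confirm, assuming the training code lives in its own script, is to run it under a small wrapper and inspect the return code; on POSIX, a negative `returncode` from `subprocess.run` is the number of the signal that killed the process. The two simulated commands below stand in for a real `train.py`:

```python
import subprocess
import sys

def describe_exit(cmd):
    """Run cmd and report whether it exited normally or was killed.
    On POSIX, subprocess reports killed-by-signal as a negative
    returncode; -9 (SIGKILL) is the classic OOM-killer signature."""
    result = subprocess.run(cmd)
    if result.returncode < 0:
        return f"killed by signal {-result.returncode}"
    return f"exited with code {result.returncode}"

# Simulated examples; replace with [sys.executable, "train.py"] for a real run.
print(describe_exit([sys.executable, "-c", "pass"]))  # clean exit
print(describe_exit([sys.executable, "-c",
                     "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"]))
```

If the wrapper reports signal 9, check `dmesg` or the system journal for OOM-killer entries around the time the run died.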