Impossible to reproduce model results #37
The problem is when I run the model in cluster mode (not local). It's a Yarn cluster, as I mentioned before. |
You may try this and see if things work (untested); here `client` is the `dask.distributed.Client` and `fitted` is the fitted preprocessing pipeline:

```python
import dask_xgboost as dxgb

# Pass a seed through the xgboost params so training itself is seeded.
params = {'objective': 'binary:logistic', 'n_estimators': 420,
          'max_depth': 5, 'eta': .01,
          'subsample': .8, 'colsample_bytree': .8,
          'learning_rate': .05, 'scale_pos_weight': 1, 'seed': 1234}

bst = dxgb.train(client, params, fitted.transform(X), y)
```
|
Sorry, I have to tell you that I also tested with the seed parameter and it's still not reproducible :( |
Ok. This may have to do with how we're using xgboost, or it may be inherent to xgboost (as I mentioned above). I'm not the person to figure this out; Tom likely knows more here. |
@TomAugspurger can you reply please? :( |
I'm on parental leave for the next couple weeks. Could you try debugging it further yourself? |
When you say it's not replicable, do you mean the model itself, or its predictions? If it's the predictions, is it the probabilities or the class labels? One thing to note is that if you don't specify… |
Sorry @TomAugspurger ;( Hi @DigitalPig, the point is that if I build two xgboost models with exactly the same parameters, I don't get the same model back: the feature importances differ. My preprocessing code is this (df_train is a dask dataframe):
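(The original snippet was lost; the following is an illustrative sketch of such a pipeline, where the transformers and the 'target' column name are assumptions, not the actual code.)

```python
# Hypothetical sketch only; the original preprocessing code was lost.
# Assumes a dask dataframe `df_train` with a binary label column 'target'.
from sklearn.pipeline import make_pipeline
from dask_ml.preprocessing import Categorizer, DummyEncoder

X = df_train.drop(columns=['target'])
y = df_train['target']

# Turn object columns into categoricals, then one-hot encode them.
fitted = make_pipeline(Categorizer(), DummyEncoder()).fit(X)
# `fitted.transform(X)` is what later gets passed to dxgb.train.
```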
and then, if I run this training twice and show the feature importances, they are different (a sketch of the check is below):
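(Again an illustrative sketch rather than the original code, with parameter values taken from earlier in the thread.)

```python
# Train the same model twice on the same data and compare importances.
import dask_xgboost as dxgb

params = {'objective': 'binary:logistic', 'max_depth': 5,
          'learning_rate': .05, 'seed': 1234}

bst1 = dxgb.train(client, params, fitted.transform(X), y)
bst2 = dxgb.train(client, params, fitted.transform(X), y)

# These two dicts should be identical, but they come out different.
print(bst1.get_score(importance_type='gain'))
print(bst2.get_score(importance_type='gain'))
```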
First model vs. second model (importance values omitted): as you can see, the results are slightly different. Maybe I'm doing something wrong... Thanks for your replies |
Which xgboost version are you using? I know recent xgboost releases changed the default method of variable importance. Also, I would try taking a downsampled dataset and training it without dask to see if you still get different variable importances. Last, there are some stochastic options turned on during your training, like `subsample` and `colsample_bytree`. What about the predictions of these two models? |
@DigitalPig sorry for taking so long to answer, I was on holiday. I'm using xgboost 0.81, as returned by `xgboost.__version__`.
With the random options turned off, the model also returns different importances; I ran the same training twice, and the first execution returned different importances from the second (values omitted). BUT when I ran the test on less data (a subset of only 100,000 records), the models returned the same importances, even with the stochastic parameters set to values below 1 (e.g. subsample .8 or colsample_bytree .8). So maybe it's because of the size of the data?? |
Any idea why, with more data, dask-xgboost doesn't return reproducible results? |
If you remove dask-xgboost from the equation and just use XGBoost, are the results deterministic? Are they still deterministic if you use XGBoost distributed training (again, not using dask to set up the distributed xgboost runtime)? |
@TomAugspurger yes! Using just XGBoost, the results are deterministic with all of the data and n_jobs=30 (cores).
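For reference, the local check could look something like this (a sketch; `X`, `y` are assumed to be the in-memory training data):

```python
# Plain xgboost (no dask): train twice with the same seed and compare.
import numpy as np
from xgboost import XGBClassifier

def fit_importances(seed=1234):
    model = XGBClassifier(n_estimators=420, max_depth=5, learning_rate=.05,
                          subsample=.8, colsample_bytree=.8,
                          n_jobs=30, random_state=seed)
    model.fit(X, y)
    return model.feature_importances_

# Deterministic locally: identical importances across runs.
assert np.array_equal(fit_importances(), fit_importances())
```
|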
Is that 30 cores on one machine, or distributed? |
@TomAugspurger mmm it's on one machine. Can I run xgboost distributed with only the xgboost library? |
Yes, that's the runtime dask hooks into. |
Ok I will test it tomorrow! Thanks |
Sorry, but I couldn't test it: in our environment we use a cluster that we are not allowed to configure, and it's not possible to run xgboost distributed without dask there. Any idea how to test it? :( PS: For me the strangest thing is that with less data the results are reproducible, even though dask is also using the cluster (the dask dashboard shows it). |
I don't have any other ideas at the moment. |
@sergiocalde94 Do you have a minimal example that reproduces this issue? If so, I can take a look. |
I was able to reproduce this error. Taking a look at why this is happening. |
When I installed both libraries from source, the error seemed to go away. I did some digging and it seems that it's a problem with version 0.90 of xgboost. I'm fairly certain that this is the culprit; it was fixed a few days ago in a commit on xgboost master, here: |
In that case, I would recommend asking whether upstream could make a new release. |
Ok, my conclusion was a bit premature. I ran the example that reproduced the error again today, after the issue above was closed, and realized that I had accidentally set… Here is how I reproduced the bug, if anybody else is curious: https://github.com/kylejn27/dask-xgb-randomstate-bug
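In rough outline, the reproduction looks like this (a from-memory sketch; the linked repo has the real script):

```python
# Train twice on the same dask data and compare the dumped trees.
from dask.distributed import Client, LocalCluster
import dask.array as da
import dask_xgboost as dxgb

client = Client(LocalCluster(n_workers=2, threads_per_worker=4))

da.random.seed(42)
X = da.random.random((100_000, 20), chunks=(10_000, 20))
y = (da.random.random(100_000, chunks=10_000) > 0.5).astype(int)

params = {'objective': 'binary:logistic', 'max_depth': 5, 'seed': 1234}
bst1 = dxgb.train(client, params, X, y)
bst2 = dxgb.train(client, params, X, y)

# Sometimes True, sometimes False: that's the bug.
print(bst1.get_dump() == bst2.get_dump())
```
|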
No solution yet, but I have interesting information. I was tailing the dask worker logs and noticed a trend: if the threads the workers ran on were the same between executions of the training, the results matched; if not, they differed. I'm not sure if this is an issue with dask-xgboost or dmlc/xgboost, but I was able to reproduce it on the v1.0 version of dmlc/xgboost's native dask integration. Maybe this is expected behavior, though? https://xgboost.readthedocs.io/en/latest/faq.html#slightly-different-result-between-runs |
cc @RAMitchell in case he has thoughts on what might be going on here. |
(or knows someone who can take a look) |
cc also @trivialfis |
Yup. We are still struggling with some blocking issues to make a new release. |
Ah ok. Good to know that this is on your radar. Thanks Jiaming! |
Sorry for the long wait. Should be fixed once dmlc/xgboost#4732 is merged. |
@mmccarty, would someone from your team be able to try out Jiaming's PR (dmlc/xgboost#4732)? |
@jakirkham I'll test it out |
Great! Thank you @trivialfis |
I tested it with the year prediction dataset using the dask interface in XGBoost, along with the "exact" tree method used in this issue. More tests are coming. The root problem is in model serialization: previously, distributed training was more or less a Java/Scala thing, and the JVM package built a layer on top of the C++ core to handle parameters (see the short tutorial in that PR). I tried to handle the parameters in C++ by walking through the whole library and serializing its state into JSON. As a bonus, you can verify internal configurations by calling…
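If the truncated call above is `Booster.save_config()` (an assumption about which call was meant; the method was added alongside that serialization work), the check would look like:

```python
# Compare the internal configuration of two boosters as JSON
# (Booster.save_config() is available in XGBoost >= 1.0).
import json

assert json.loads(bst1.save_config()) == json.loads(bst2.save_config())
```
|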
@trivialfis @jakirkham I ran a few tests using that branch (ensuring that dask-xgboost was using Jiaming's PR branch as its xgboost dependency) and saw the same issue: ~60% of the time there was a discrepancy in the results. Here's an example of it failing: |
@kylejn27 I ran your example; X_train is different between runs. I converted it to numpy, then saved it to a file and ran sha256sum on it. |
X itself, on the other hand, is reproducible.
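The check described is roughly this (a sketch; `X_train` is assumed to be a dask collection):

```python
# Materialize the training data and hash it, so two runs can be
# compared byte-for-byte.
import hashlib
import numpy as np

arr = np.ascontiguousarray(X_train.compute())
print(hashlib.sha256(arr.tobytes()).hexdigest())
```
|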
Not sure if I'm getting the same results as you, but maybe I'm doing something wrong. Were you saving the input arrays inside the model code itself? Sometimes running that example produces identical models, other times it does not; I've been running the script 4-5 times to check whether the models differ. So after running this I got two different models, but the input parameters seemed to be exactly the same. I tried adding in… |
@trivialfis, any thoughts? 🙂 |
Sorry for the late reply. I will get back to this tomorrow or at the weekend. I also need to test the more popular tree methods like hist and GPU hist. |
Sorry for all the noise here. It suddenly occurred to me that the `exact` tree method is not supported in distributed training. |
@trivialfis hmm, you're right, I missed that in the docs. I still wasn't able to reproduce model results when I set the tree_method param to… Referencing this from the FAQ: |
Numeric errors aside, it's supposed to produce exactly the same trees. But sometimes numeric errors can be troublesome: for example, XGBoost uses gradients when handling missing values. That's nice mathematically, but floating point errors sometimes generate artificial gradients that get misinterpreted as gradients from missing values, changing the default split direction (I haven't talked to anyone about this yet, as it's quite surprising to me). I will continue the tests for hist and GPU hist and should report back as soon as possible.
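The summation-order effect behind this is easy to see in isolation:

```python
# Floating point addition is not associative, so sums accumulated in a
# different order (e.g. by different threads) can give different results.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0
```
|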
That's helpful to know, thanks for looking into this further |
Please note that the above example is rare, as I have only seen it in an artificially generated small dataset with a specific pattern of values. Usually this doesn't happen on other datasets, as normally the error is not even close to big enough to affect the split direction. |
Ran some tests today for both hist and gpu_hist. For a small number of iterations (< 48) with 2 workers and 4 threads each, on the YearPredictionMSD dataset (a dense dataset with 0.5M rows), the model is reproducible. But going higher starts to generate discrepancies. It's not good news, but at least it proves there is no human error in the code generating this discrepancy. Most of the errors come from histogram building, as a result of summation error. I will keep the issue tracked in #37 in the future. Thanks for all the help! |
Thank you for the update, @trivialfis. Please let us know if you need any assistance. |
Thanks for all the discussion! I recently got on-boarded to dask-xgboost. Can someone summarize the reason for the irreproducibility? |
I've just opened this issue in the dask repo, but maybe here is better...
I'm using dask to implement a data pipeline with dask dataframes and dask-ml on a Yarn cluster.
When I build an XGBoost model, the results are always different, even if I manually fix a seed with da.random.seed().
Is it possible to make a dask model reproduce its results, the way it does locally with sklearn instead of dask-ml?