Encourage use of same datasets across implementations #123
Comments
I definitely agree with this. I can add something to the Contributing page (although I'm not sure how closely that gets read anyway).
I agree, and this has generally been my approach. (Although in some cases I've used the nice sklearn built-in data generation functions.) Agree about the problematic page; it could be better. When I was doing Python versions of some geospatial pages, I took one look at that and thought 'no'.
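For readers unfamiliar with those generators, here is a minimal sketch of the kind of thing sklearn provides; the specific function and parameter choices below are illustrative, not anything the thread settled on:

```python
from sklearn.datasets import make_regression

# Illustrative only: a synthetic regression dataset with a known
# ground-truth coefficient vector (parameter values chosen arbitrarily).
X, y, coef = make_regression(
    n_samples=200,
    n_features=5,
    n_informative=3,  # only 3 of the 5 features carry signal
    noise=10.0,
    coef=True,        # also return the true coefficients
    random_state=0,   # reproducible across implementations
)
```

Fixing `random_state` is what makes a generated dataset usable across language implementations, since each run (and each contributor) gets the same draw.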
@NickCH-K Okay, cool. I'll let you update it, then. Feel free to close this when you do. I think people will get the idea once we manage to set the standard across a couple of pages. @aeturrell Yeah, I agree it can be tough to balance with the benefits of generated datasets. On that note, I just saw in the commit log that you added a Python SVM example (nice!). Do you know of a good off-the-shelf dataset that we could use here?
FWIW, Julia uses its own mirror. But I think that overall it's been pretty stable. Since I stopped actively contributing to Python
Oh, thanks for the heads-up @vincentarelbundock. I should double-check that the example I used above actually works, then! Do you think there's any chance of them updating it (have you contacted them)? I'm happy to if not... EDIT: Scratch that, I've just read that they only want to bundle some example datasets and that any more would have to be added manually. I'll see about putting in a PR.
I think my previous post left too much ambiguity: my strong intention is to keep everything in place. So far, this has only happened once or twice in many years, and not recently. So I wouldn't worry about the examples you posted; they should still work.
Good question. To replace what's there, we'd want linear and non-linear binary classification with balanced classes. A quick look at the sklearn datasets (which is also where the
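As a sketch of what that could look like (not a committed choice for the page), sklearn's generators can produce both the linear and the non-linear case with balanced classes; all parameter values here are illustrative:

```python
import numpy as np
from sklearn.datasets import make_blobs, make_moons

# Linearly separable, balanced two-class data (parameters illustrative)
X_lin, y_lin = make_blobs(n_samples=100, centers=2, random_state=42)

# Non-linearly separable (interleaved half-moons), also balanced
X_moon, y_moon = make_moons(n_samples=100, noise=0.1, random_state=42)

# Both generators split n_samples evenly across the two classes
```

Either of these would give the SVM page a linear and a non-linear decision boundary to show, with the same data reproducible in every language that wraps or reimplements the generator's output.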
In looking over several examples, I've come around to the idea that we should strongly encourage re-use of the same datasets across language implementations (i.e. for a specific page). Advantages include easier reading and comparison of the code examples across languages.
I recently did this for the Collapse a Dataset page and, personally, think it's a lot easier to read and compare the code examples now. @vincentarelbundock's Rdatasets is a very useful resource in this regard, since it provides a ton of datasets that can be directly read in as CSVs. (Both Julia and Python statsmodels have nice wrappers for it too.)
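To make the direct-CSV pattern concrete, here is a small Python sketch; the URL layout is my assumption from the Rdatasets site, and the package/dataset names (`datasets`/`mtcars`) are just examples:

```python
import pandas as pd

# Assumed URL layout of the Rdatasets CSV mirror; the package and
# dataset names below are illustrative examples only.
BASE = "https://vincentarelbundock.github.io/Rdatasets/csv"

def rdataset_url(package: str, dataset: str) -> str:
    """Build the raw-CSV URL for a dataset in Rdatasets."""
    return f"{BASE}/{package}/{dataset}.csv"

url = rdataset_url("datasets", "mtcars")  # R's built-in mtcars data
# df = pd.read_csv(url)  # uncomment to download (needs network access)
```

Alternatively, the wrappers mentioned above handle the URL construction for you, e.g. `statsmodels.datasets.get_rdataset("mtcars", "datasets")` in Python.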
Question: Do others agree? If so, I'll add some text to the Contributing page laying this out.
PS. I also think we should discourage the use of really large files, especially since this is going to start becoming a drag on our GitHub Actions builds. There is one big offender here that I'll try to swap out when I get a sec. (Sorry, that's my student and I should have warned her about it.)