Encourage use of same datasets across implementations #123

Open
grantmcdermott opened this issue May 5, 2021 · 7 comments

Comments

@grantmcdermott
Contributor

In looking over several examples, I've come around to the idea that we should strongly encourage re-use of the same datasets across language implementations (i.e., for a specific page). Advantages include:

  • Common input and output for direct comparisons.
  • Avoids duplicating the task description (we can write it once under the implementation header) and cuts down on unnecessary in-code commenting (which, frankly, I think we have too much of at the moment and personally find quite distracting).

I recently did this for the Collapse a Dataset page and, personally, think it's a lot easier to read and compare the code examples now. @vincentarelbundock's Rdatasets is a very useful resource in this regard, since it provides a ton of datasets that can be read in directly as CSVs. (Both Julia and Python's statsmodels have nice wrappers for it too.)
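To give a rough idea of what I mean, something like the following works in Python (a minimal sketch only; the dataset, URL layout, and wrapper call are illustrative, so swap in whatever a given page needs):

```python
# Minimal sketch; assumes the standard Rdatasets CSV URL layout, and mtcars is just an example
import pandas as pd
import statsmodels.api as sm

# Option 1: read the CSV straight from the Rdatasets mirror
url = "https://vincentarelbundock.github.io/Rdatasets/csv/datasets/mtcars.csv"
mtcars = pd.read_csv(url, index_col=0)

# Option 2: use the statsmodels wrapper, which caches the download and carries the docs
mtcars_sm = sm.datasets.get_rdataset("mtcars", "datasets").data

print(mtcars.head())
```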

Question: Do others agree? If so, I'll add some text to the Contributing page laying this out.

PS. I also think we should discourage use of really large files, especially since this is going to start becoming a drag on our GitHub Actions builds. There is one big offender here that I'll try to swap out when I get a sec. (Sorry, that's my student and I should have warned her about it.)

@NickCH-K
Contributor

NickCH-K commented May 5, 2021

I definitely agree with this. I can add something to the Contributing page (although I'm not sure how closely that gets read anyway).

@aeturrell
Contributor

I agree, and this has generally been my approach. (Although in some cases I've used the nice sklearn built-in data generation functions like make_regression because they make it really easy to demonstrate something with a generated dataset.)
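For instance (just a sketch; the parameter values are arbitrary):

```python
# Sketch of the generated-data approach; parameter values are arbitrary
from sklearn.datasets import make_regression

X, y = make_regression(
    n_samples=200,    # observations
    n_features=2,     # regressors
    noise=10.0,       # std. dev. of Gaussian noise added to the outcome
    random_state=42,  # reproducible draw
)
```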

Agree about the problematic page; it could be better. When I was doing Python versions of some geospatial pages, I took one look at that and thought 'no'.

@grantmcdermott
Contributor Author

@NickCH-K Okay, cool. I'll let you update then. Feel free to close this when you do. I think people will get the idea once we manage to set the standard across a couple pages.

@aeturrell Ya, I agree it can be tough to balance against the benefits of generated datasets. On that note, I just saw in the commit log that you added a Python SVM example (nice!). Do you know of a good off-the-shelf dataset that we could use here?

@vincentarelbundock

vincentarelbundock commented May 5, 2021

FWIW, Julia uses its own mirror of Rdatasets (which is extremely out of date now). I think that John Myles White, who originally forked it years ago, was worried about stability. This was reasonable at the time because he was one of the very first people to discover Rdatasets.

But I think that overall it's been pretty stable. Since I stopped actively contributing to Python statsmodels, I think that Josef has only contacted me once about a problem, and we got it fixed pretty quickly.

@grantmcdermott
Contributor Author

grantmcdermott commented May 5, 2021

Oh, thanks for the heads-up @vincentarelbundock. I should double check that the example I used above actually works then! Do you think there's any chance of them updating it (have you contacted them)? I'm happy to if not...

EDIT: Scratch that, I've just read that they only want to bundle some example datasets and that any more would have to be added manually. I'll see about putting in a PR.

@vincentarelbundock

I think my previous post left too much ambiguity: my strong intention is to keep everything in Rdatasets rock solid, and to only remove/modify data if the underlying package does it too, or if I am asked directly.

So far, this has only happened once or twice in many years, and not recently. So I wouldn't worry about the examples you posted. They should still work.

@aeturrell
Contributor

"Do you know of a good off-the-shelf dataset that we could use here?"

Good question. To replace what's there, we'd want linear and non-linear binary classification with balanced classes. A quick look at the sklearn datasets (which is also where the make_* data generation functions live) suggests this might work for simple binary classification with almost balanced classes. I haven't checked Rdatasets though.
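If we do end up generating the data instead, a balanced binary draw would look roughly like this (a sketch only; the parameter choices are placeholders):

```python
# Sketch of a balanced binary classification draw; parameter choices are placeholders
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=200,
    n_features=2,
    n_informative=2,
    n_redundant=0,
    n_classes=2,
    weights=[0.5, 0.5],  # balanced classes
    random_state=42,
)
```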
