Pagination headers #17

Open
gwincr11 opened this issue May 18, 2016 · 10 comments

Comments

@gwincr11
Contributor

Hello,

I am curious about support for pagination headers such as these:

$ curl --include 'https://localhost:3000/movies?page=5'
HTTP/1.1 200 OK
Link: <http://localhost:3000/movies?page=1>; rel="first",
  <http://localhost:3000/movies?page=173>; rel="last",
  <http://localhost:3000/movies?page=6>; rel="next",
  <http://localhost:3000/movies?page=4>; rel="prev"
Total: 4321
Per-Page: 10

I don't think this exists today, but it would be nice to add next-page bindings to the API. Any opinions?

@kevinhughes27
Contributor

We don't include headers with next page details - have a look at the index reference for one of our resources: https://help.shopify.com/api/reference/article#index

If you need to fetch all items using pagination, write a loop that increments the page param until you don't get anything back. Adding this utility to our client libraries has been discussed, so if you make something you'd like to contribute back, please open a PR.
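Something along these lines, as a rough sketch (the function name is illustrative, and it assumes a resource class whose find() forwards page/limit as query params):

# Fetch every page by bumping the page param until a request comes back empty.
def fetch_all(resource, per_page=250):
    items = []
    page = 1
    while True:
        batch = resource.find(page=page, limit=per_page)
        if not batch:
            break
        items.extend(batch)
        page += 1
    return items

# e.g. fetch_all(shopify.Article)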

@gwincr11
Contributor Author

I would be interested in working on this. In your opinion would this be a pointer to another active resource object for the next page, or some sort of iterable?

@kevinhughes27
Contributor

What I've done in Ruby before is create an ArticleFinder class that implements Ruby's Enumerable module, which lets you use any of the standard-library iteration techniques. To a user of this utility class/function it looks like they just loop through all the objects, but internally it makes a new API call as required to fetch more data.
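Translated to Python, the same idea could look roughly like this (PagedCollection and the find() signature are illustrative, not part of the library):

# An iterable that pages through an API resource behind the scenes.
class PagedCollection(object):
    def __init__(self, resource, per_page=250, **params):
        self.resource = resource
        self.per_page = per_page
        self.params = params

    def __iter__(self):
        page = 1
        while True:
            batch = self.resource.find(page=page, limit=self.per_page, **self.params)
            if not batch:
                return
            for item in batch:
                yield item
            page += 1

# Callers just iterate; new API calls happen as needed:
# for article in PagedCollection(shopify.Article):
#     print(article.title)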

Does that make sense?

@gwincr11
Contributor Author

Yes, that makes sense. So to abstract this, you would create a new active resource class, maybe Collection, that inherits from active resource and knows how to fetch the next page?

@gwincr11
Contributor Author

Is there any way to get the headers from the previous call?

@kevinhughes27
Contributor

You could store them in a variable before making the next call. I don't think you need to do this to solve your problem, though.

@gwincr11
Contributor Author

Thanks @kevinhughes27. I don't need to do this, but it would prevent me from making unneeded calls. I guess I was somewhat unclear: how do I get the response object from the urllib caller so I can store it?

@kevinhughes27
Contributor

How does it prevent you from making unneeded calls? I am not sure how to reach through to urllib; you'll have to dig into the code if that's what you need.

@gwincr11
Contributor Author

If the API is following RFC 5988, then the headers send back a link to the next page if one is present, so if you are on the last page you know not to call again.

I can dig through; I was just wondering if the active resource call was saving the call's response as state somewhere?
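For reference, checking the Link header for a rel="next" entry (per RFC 5988) could look something like this minimal sketch, independent of the library internals:

import re

# Return the URL tagged rel="next" from a raw Link header value, or None.
def next_page_url(link_header):
    if not link_header:
        return None
    for part in link_header.split(','):
        match = re.search(r'<([^>]+)>\s*;\s*rel="next"', part)
        if match:
            return match.group(1)
    return None

# next_page_url('<http://localhost:3000/movies?page=6>; rel="next"')
# => 'http://localhost:3000/movies?page=6'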

@kevinhughes27
Contributor

Yeah, sorry, I am not familiar enough with the internals.
