
Static/immutable feeds #169

Open
worldmind opened this issue May 13, 2024 · 0 comments

Hi, thank you for the project!
Yes, it feels like JSON is a better fit than XML for something as simple as a news feed.

I want to share an idea I have had in mind for a long time. I really like it, but I am not sure whether I have strong enough arguments for it; in such cases it is usually good to discuss it with people who know the area better.

I like the idea of immutability in general, and I think it can be useful for news feeds as well, at least for optimizing caching on CDNs. My use case: I have a static web site, most of it autogenerated from DocBook, but some pages are created manually, and I would like to have a feed/blog on it. Right now, editing the Atom XML by hand and keeping the file size small is a bit painful. It would be much easier if, for every new post, I only needed to add one new file in the proper place without touching anything else. Let me show what I mean; the files could be structured as:

ifeed/
├── 2020/
│   ├── 01/
│   │   ├── 1.json
│   │   └── 2.json
│   └── 02/
│       └── 1.json
└── 2024/
    └── 03/
        └── 1.json
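With this layout, publishing a post means writing exactly one new file. A minimal sketch in Python of what that could look like (the `add_post` helper and the item fields are my assumptions, loosely modeled on JSON Feed items, not part of any existing tool):

```python
import json
from pathlib import Path

def add_post(root: Path, year: int, month: int, title: str, html: str) -> Path:
    """Write the next numbered post under root/YYYY/MM/ without touching existing files."""
    month_dir = root / f"{year:04}" / f"{month:02}"
    month_dir.mkdir(parents=True, exist_ok=True)
    # Next free number: one past the highest existing N.json in this month.
    existing = [int(p.stem) for p in month_dir.glob("*.json") if p.stem.isdigit()]
    n = max(existing, default=0) + 1
    path = month_dir / f"{n}.json"
    path.write_text(json.dumps({"title": title, "content_html": html}))
    return path
```

Because already-published files are never rewritten, they can be served with long-lived, immutable cache headers.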

The client application can keep a pointer to the last fetched post, like <site_url>/ifeed/2020/02/1.json, and fetch everything newer (just pseudocode to show the idea):

for year in range(last_fetched_year, current_year + 1):
    if HEAD(site_url + year) == 404:
        continue
    start_month = last_fetched_month if year == last_fetched_year else 1
    for month in range(start_month, 13):
        if HEAD(site_url + year + month) == 404:
            continue
        i = last_post_number + 1 if (year, month) == (last_fetched_year, last_fetched_month) else 1
        while (response := GET(site_url + year + month + i)) != 404:
            save_post(response)
            i += 1

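The same crawl can be sketched as a small testable function. Here `head` and `get` stand in for the HTTP calls (returning None on 404), so they can be simulated with an in-memory map; `fetch_new` and the `(year, month, post_number)` pointer shape are my assumptions for illustration:

```python
import datetime

def fetch_new(head, get, last):
    """Return posts newer than last = (year, month, post_number).

    head(path) / get(path) take site-relative paths like '2020/02' or
    '2020/02/3.json' and return None on 404.
    """
    ly, lm, ln = last
    today = datetime.date.today()
    posts = []
    for year in range(ly, today.year + 1):
        if head(f"{year:04}") is None:
            continue  # one request tells us the whole year is empty
        first_month = lm if year == ly else 1
        last_month = today.month if year == today.year else 12
        for month in range(first_month, last_month + 1):
            if head(f"{year:04}/{month:02}") is None:
                continue  # one request tells us the whole month is empty
            # Resume after the last seen post in the pointer's month, else start at 1.
            i = ln + 1 if (year, month) == (ly, lm) else 1
            while (post := get(f"{year:04}/{month:02}/{i}.json")) is not None:
                posts.append(post)
                i += 1
    return posts
```

Simulating the site with a set of directories and a dict of files makes it easy to check that the resume logic skips already-fetched posts.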
The downside of this approach is a somewhat larger number of HTTP requests, but I think it will not be that big, because if a year or month has no posts, we learn that with a single request. At the same time, all files can be cached easily and stay small. Also, since you never need to truncate the feed file, it is easy to keep the whole history, and anybody can load it from any point in time (this could probably be implemented on top of existing feeds too, but it would take some effort). And, as mentioned, producing the feed is much easier.

What do you think?
