---
status: superseded
implementation: superseded
status_last_reviewed: 2024-03-06
status_notes: Content Store content is kept in sync with Publishing API so there is no longer a choice.
---

# Managing special snowflake URLs through the Publishing API

## Problem

There are a number of URLs on GOV.UK which do not fall into the category of traditional leaf-node published content, e.g. robots.txt, search, or the homepage.

These URLs are frequently registered with the router directly. Since they aren't entered into the URL arbiter, this sometimes causes clashes and/or downtime if important routes are overwritten.

## Proposal

The minimum change required to make this process safe is to have these routes pass through the URL arbiter. We could have every application that registers snowflake routes directly check with the arbiter first, but that option is untidy and open to error. We'd also like to reduce the number of applications which talk directly to the router API as much as possible. This suggests we use the publishing API as the registration mechanism.

So far there are two proposals on how to do this:

  1. Use the publishing API as an endpoint but don't write to the content store. This would require a new endpoint on the API which engages in URL arbitration but does not push to either the live or draft content store. It would also require the API to speak to the router API directly, something it currently delegates to the content store.
  2. Add content store entries. This would require no changes to the publishing API. The owning applications would publish a "snowflake" format document to the publishing API on deploy (or whenever else they normally do this, e.g. via rake tasks), containing some human-readable information about the path and why it exists (e.g. "/search handles the display of search results across the site" or "/info is a prefix which provides statistics about other routes on GOV.UK"), and would allow the standard publishing pipeline to handle URL arbitration and route registration. This could later allow some of these routes to be not only registered via the publishing pipeline but also rendered out of the content store, for example in the case of robots.txt.

My personal preference is for proposal #2. Here's an example of what the document might look like for robots.txt: https://gist.github.com/elliotcm/cddd6ea1f4e3989009bd
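
As a rough sketch of the idea (not the example from the gist above; the field names such as `format`, `publishing_app` and `routes`, and the owning application, are assumptions about a plausible publishing API payload rather than a confirmed schema), such a document might contain something like:

```python
# A minimal, hypothetical sketch of a "snowflake" content item for /robots.txt.
# The field names below are assumptions about a plausible publishing API
# payload, not a confirmed schema; see the gist linked above for a real example.
import json

snowflake_document = {
    "base_path": "/robots.txt",
    "format": "snowflake",  # assumed format name, taken from this RFC's wording
    "title": "robots.txt",
    "description": "Tells web crawlers which paths on GOV.UK they may index.",
    "publishing_app": "static",   # assumed owning application
    "rendering_app": "static",    # assumed rendering application
    "routes": [
        {"path": "/robots.txt", "type": "exact"},
    ],
}

print(json.dumps(snowflake_document, indent=2))
```

The important part is the human-readable `description` recording why the path exists; the standard publishing pipeline would then handle URL arbitration and registration of the `routes` entries with the router.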