Is your feature request related to a problem? Please describe.
The sorting and ranking features of RecipeRadar aren't perfect at the moment; they aim to provide the best results based on our experience and engineering ability so far.
Longer-term, we hope and expect to improve these, but we'll need help to do so. Opening ourselves up to that kind of help runs the risk that even the best-intentioned contributors could lead the software and algorithms down paths that are tricky to undo should they be considered problematic in future.
Ideally we'd like an approach where it's always possible to tune and adjust the sorting and relevance of results over time, and where it's as easy as possible to provide data-backed evidence, gather community feedback, and then prepare and review changes in a straightforward format.
Currently the best we have for this is the application code and the OpenSearch queries it generates at runtime. Is this good enough? Can we do better?
Describe the solution you'd like
We should document our philosophy around content relevance and try to define guidance and interfaces to make this easy to assess, discuss, review and adjust over time.
Even if we never realize a perfect implementation of these processes, documenting the philosophy should help explain how the system is intended to work, and guide us towards it through continuous, incremental improvement.
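To make the discussion a bit more concrete, here's a minimal sketch of the kind of interface this could enable: pulling relevance boosts out of the query-building code into a single reviewable table, so that a tuning change becomes a small, easy-to-discuss diff. The field names, weights, and the `build_query` helper are hypothetical illustrations, not RecipeRadar's actual schema or code.

```python
import json

# Hypothetical relevance weights kept in one reviewable place.
# Field names and values are illustrative only, not RecipeRadar's real schema.
RELEVANCE_WEIGHTS = {
    "title_match": 2.0,
    "ingredient_match": 1.5,
}


def build_query(ingredients):
    """Build an OpenSearch query body whose boosts come from the shared weights table."""
    text = " ".join(ingredients)
    return {
        "query": {
            "bool": {
                "should": [
                    {"match": {"title": {"query": text,
                                         "boost": RELEVANCE_WEIGHTS["title_match"]}}},
                    {"match": {"ingredients": {"query": text,
                                               "boost": RELEVANCE_WEIGHTS["ingredient_match"]}}},
                ]
            }
        },
        "sort": ["_score"],
    }


if __name__ == "__main__":
    # A proposed ranking change would then be a one-line edit to RELEVANCE_WEIGHTS,
    # reviewable alongside whatever evidence motivated it.
    print(json.dumps(build_query(["tomato", "basil"]), indent=2))
```

Whether the weights live in code, configuration, or documentation is exactly the kind of question this issue is meant to settle; the point of the sketch is only that the tunable parts should be visible and reviewable in one place.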