Add proposal: Simplify joins with info metrics in PromQL #37
Conversation
Signed-off-by: Arve Knudsen <[email protected]>
```promql
sum by (k8s_cluster_name, http_status_code) (
  info(
    rate(http_server_request_duration_seconds_count[2m]),
    {k8s_cluster_name=~".+"}
  )
)
```
Can you help me understand this example? I know the current rule for aggregating rates is rate then sum, never sum then rate. Now, will this turn into rate then info then sum, or could you also do info then rate then sum? Not sure if there is any specific reason to do `rate()` before `info()`, but `sum()` after `info()`.
Disclaimer: I just copied this example from Beorn's original document. I'll answer based on my knowledge of `info` though.
`rate(http_server_request_duration_seconds_count[2m])` becomes the inner expression of the `info` function, which adds the `k8s_cluster_name` label from `target_info` to the `http_server_request_duration_seconds_count` time series resulting from that inner expression. The `sum` expression aggregates by the labels `k8s_cluster_name` and `http_status_code`. If you don't apply `sum` after `info`, the `k8s_cluster_name` label won't be available.
As for whether you could do `info` before `rate`: I haven't tested, but I think it would work. However, it looks less logical/comprehensible to me, as you really want the rate of `http_server_request_duration_seconds_count[2m]`.
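For comparison, the enrichment that `info` performs here corresponds roughly to a classic join in today's PromQL. This is only a sketch: it assumes `target_info`'s identifying labels are `job` and `instance`, and it ignores the staleness/conflict handling that `info` is meant to add:

```promql
# rate first, then join in k8s_cluster_name from target_info on the
# identifying labels, then aggregate by the newly added label:
sum by (k8s_cluster_name, http_status_code) (
    rate(http_server_request_duration_seconds_count[2m])
  * on (job, instance) group_left (k8s_cluster_name)
    target_info
)
```

The same rate-before-sum rule applies in both forms; the join (or `info`) sits between the two.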
@aknuds1 already explained nicely why you have to do the `sum` last.
In current PromQL, the order before the `sum` has to be range selector → `rate` → `info`.
The range selector is a construct that only works directly with a metric, i.e. `info(http_server_request_duration_seconds_count)[2m]` doesn't even parse. (You could do a subquery, i.e. `info(http_server_request_duration_seconds_count)[2m:]`, but that's quite a different thing.)
Neither can you apply `info` to a range selector: `info(http_server_request_duration_seconds_count[2m])` doesn't parse either, because the grammar knows that `info` applies to an instant vector, not a range vector. (And the concept of overloaded functions, i.e. functions with different type signatures but the same name, doesn't exist in PromQL. Within the current framework, you could only implement a separate `info_range` function or something, but I hope we don't need that.)
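To illustrate the parse rules above (a sketch, assuming the proposed `info` signature of instant vector in, instant vector out):

```promql
# Parses: rate() turns the range vector into an instant vector first.
info(rate(http_server_request_duration_seconds_count[2m]))

# Doesn't parse: a range selector only attaches directly to a metric.
# info(http_server_request_duration_seconds_count)[2m]

# Doesn't parse: info() expects an instant vector, not a range vector.
# info(http_server_request_duration_seconds_count[2m])

# Parses, but is a subquery (quite a different thing):
# info(http_server_request_duration_seconds_count)[2m:]
```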
proposals/2024-04-10-native-support-for-info-metrics-metadata.md
## How

* Introduce a new info metric sample type, to track the info metric's identifying label set over time (in case it changes).
* Augment the head and block indexes with indexes of info metrics, for easy finding of info metrics matching time series.
Would be nice to know how exactly 🙈
Yes, I'm trying to work out how best to model the info metric indexes in the scenarios scratch pad. I'm currently just focusing on the TSDB head; the block index is trickier to design due to being on disk.
I think I need some feedback on whether my idea of tracking changes to info metrics' identifying labels through a native sample type is a good idea. @beorn7 suggests that we might persist these info metrics completely outside of the TSDB, but I have the impression that would require reinventing the wheel wrt. querying the info metrics themselves (rather than just including their labels via the `info` function).
I'm on holidays until the 28th of May. When I come back, I'll have to seek feedback on how to solve this particular design problem.
@beorn7 suggests that we might persist these info metrics completely outside of the TSDB
Scratch that, in retrospect I noticed a footnote in the doc, where @beorn7 does say we need to keep storing info metric samples in TSDB to support querying:
To query and display an info metric as a regular time series, we would still store the info metric as usual in the TSDB.
@beorn7 my question is: Does it make sense to you to use the identifying label set as the info metric sample value? I.e., it would be a new info metric sample type that also pretends to have a float value of 1 (backwards compatible behaviour). From my prototyping, this is what makes sense; otherwise you would need to model changes over time in the info metric index. It seems better to me to model this via the usual TSDB time series mechanism instead.
I don't have a detailed implementation idea here. You probably have the most experience with this by now through your prototyping.
However, it feels like you wanted to say "Does it make sense to you to use the data label set as the info metric sample value?". The identifying labels are like "normal" labels, by which you want to find an info metric; then you take the data labels from its sample value and add them to the metric you want to join with. Right?
However, it feels like you wanted to say "Does it make sense to you to use the data label set as the info metric sample value?".
@beorn7 why do you think the data label set is better suited for the sample value? My reasoning is that for any given info metric, when you ingest a sample, the identifying label set may have changed while the label set as a whole is unchanged. I.e., it's the same metric to Prometheus, but you need to track that the identifying label set has changed at that point in time. I should think you could instead use the data label set for the sample value, as it's just the inverse of the identifying label set. However, isn't it more natural to track that the identifying label set changes, as that's what's declared over the ingestion protocol?
Maybe I should clarify that I currently just store the label indices in samples, not the strings, so the data label values would be extracted from the metric's label set. Not 100% sure if it's a good design choice, but that's how it's currently done. Make sense?
Maybe I have understood the actual issue only now. Let me re-phrase: What you are discussing here is an info metric `foo{a="aa", b="bb", c="cc"}` where `b` and `c` used to be data labels (and `a` an identifying label), and then a change happens where `b` gets "promoted" to an identifying label, so nothing really changes from the point of view of the Prometheus data model as it exists right now.
That's a bit different from what I understood first.
How to implement this is again a question that you can probably answer better than anyone by now. It depends how "thoroughly" the new data model will be implemented in PromQL.
From a pure "new data model" perspective, the info metric after the change is a different metric (because it has different identifying labels). The `info` function might now join it with different other metrics, and it will often add other labels to its output.
How to implement this is again a question that you can probably answer better than anyone by now. It depends how "thoroughly" the new data model will be implemented in PromQL.
I've thus far chosen the "simplistic" route of not changing Prometheus' core time series identity model (so that when the identifying label set changes, as you describe above, it's the same time series according to Prometheus). Then, for the purpose of implementing the `info` function, the info metric's identifying label set is determined on a per-timestamp basis (via the info metric's samples). I haven't implemented the latter, it's just design at the moment, but I think it should work. I think I will return to work some more on the prototype soon.
You don't see any real problems with my proposed design, as I describe it above, I hope?
I cannot say at the moment.
In any case, changing which labels are identifying should be a rare thing to happen, so even if it isn't solved in the ideal way, it shouldn't hurt much.
This is very much different from the current state of a change in a target_info label that then creates duplicates because staleness isn't handled properly when ingesting OTLP. The latter happens all the time in regular operation.
Thanks! :) I'll try fixing up the prototype to match the design in the proposal, then.
I've fixed up the prototype, so it matches the design in the proposal (info metric samples define the identifying label set).
## Goals

Goals and use cases for the solution as proposed in [How](#how):
Is the goal to also select certain metrics based on certain target_info non-identifying labels?
E.g. would I be able to do something like `info(http_requests{...}, {go_version="1.22.3"})`, i.e. filter http_requests by a label which only exists on the target, e.g. the version of Go that the target was compiled with?
I added more specification about the new TSDB API and its use of label matchers. If I understand you correctly, the answer is yes:
If data label matchers are provided, time series are only included in the result if matching data labels from info metrics were found.
Nice! Then I wonder if a simpler solution would be worth discussing as an alternative, e.g. special prefixed labels: https://cloud.google.com/stackdriver/docs/managed-prometheus/promql#metadata-labels 🤔
I will have a read, thanks.
After reading the link about metadata labels: as I understand Bartek, the idea is to perhaps use metadata-like syntax instead of introducing an `info` function. I think the underlying TSDB API would be the same. Alternative syntax to `info` is definitely on the table; @jesusvazquez has also thought about this.
If I understand the Stackdriver documentation correctly, metadata labels are, from the outside API/PromQL perspective, equivalent to adding all OTel resource attributes (or the version number from `build_info`, or pod labels from `kube_pod_labels`) to all metrics as label values (just with a prefix, so that you have some hope (not a strategy!) that no label name collisions will happen, and so that users can recognize which labels come from metadata).
Obviously, a backend can be implemented in a way that makes this efficient, so the efficiency concerns can be addressed. However, there are still concerns about UX: every metric will now have a ginormous number of labels, which needs additional tooling to get under control (like displaying metadata labels differently). It is a long-held Prometheus best practice to not add the same information (like the version number of the exposing binary) to every metric exposed, but to bundle it in an info metric (like `build_info`), and the motivation for that was not just a concern about storage efficiency in the backend, but also UX.
My understanding is that this whole proposal is an attempt to solve the problem in the "classic" Prometheus way, by making info metrics more powerful and more usable. The proposal to add all of the "metadata" to every metric in the form of metadata labels is still on the table, but it's a different proposal.
Thanks for sharing those insights @beorn7 :)
Makes sense, thanks!
I have just responded to a few direct questions. I haven't reviewed the whole proposal yet. My apologies in case my answers are stupid because I didn't grasp the full context.
Thanks for writing this up! Overall it's looking good.
I had a discussion with @aknuds1 offline to understand this better and I think the proposal should have these things:
- Details on how the info() function will work when you have different info metrics under different names (the union of data labels that you mentioned)
- Specifics on how a change in identifying and non-identifying labels will impact queries (I left individual comments on them below)
- Any specific details from the linked doc that you think are important: please include them in the proposal, because naturally I would consider them background information and not part of what you are proposing.
* Persist info metrics with labels categorized as either identifying or non-identifying.
* Track when info metrics' set of identifying labels changes. This shouldn't be a frequent occurrence, but it should be handled.
* Automatically treat the old version of an info metric as stale for query result enriching purposes, when its data labels change (producing a new time series, but with the same identity).
Let's say a time series starts at time t0, the data labels of the info metric change at time t1, and t2 is a future time, t2 > t1. In this scenario, when I do a range query of the info() function for the time range t0 through t2, which data labels will be used? Will it use the old data labels between t0 and t1 and then the new data labels between t1 and t2, or the new data labels for all of t0 through t2? It would be good to document this, because "stale" makes me think new labels for everything, but I believe the right thing to do is old labels for t0 to t1 (I think this is the plan, right? I forgot).
Yes, I think it should always be the relevant labels for the respective timestamp. I guess the term "stale" comes from looking at a possible implementation detail, which is to insert a staleness marker for the "old series". However, if the identity of the info series is only defined by the identifying labels, then there is actually no new series; only the existing series changes its value (where the value is the data labels). (This is the "logical view", if you want. I can imagine it is hard to implement that way directly in Prometheus, and I haven't looked at @aknuds1's PoC to check how it is done there.)
I'm not sure if the prototype currently works this way, since I did some additional thinking when drafting relevant scenarios for `info`, but my current thinking is indeed that the `info` function should pick the correct data labels on a sample-by-sample basis. Does that answer your question @codesome?
@beorn7 I haven't changed the fact that there will be a new time series when the data labels change. I just use a head/block index to associate info metrics with their identifying label sets, so you can find the different time series (as data labels change) for the same identifying labels.
Goals and use cases for the solution as proposed in [How](#how):

* Persist info metrics with labels categorized as either identifying or non-identifying.
* Track when info metrics' set of identifying labels changes. This shouldn't be a frequent occurrence, but it should be handled.
Similar question as for the data labels: if the identifying labels change at, let's say, t1 (with t0 being an older time and t2 a future time), do we use the old identifying labels for queries touching t0 to t1, and then the new identifying labels for queries after t1? Let's document this.
Maybe a change of identifying labels should really be seen as a completely independent new series, without any staleness marker insertion. As said, this should be rare (while the fallout of a change in the data labels in the absence of staleness markers, e.g. what you get when ingesting OTLP, triggered this whole effort). And it could actually mean the new info series is meant as a completely independent new series.
do we use the old identifying labels for queries touching t0 to t1, and then new identifying labels for queries after t1?
@codesome yes, that is my current thinking at least. My thinking is to track, through the info metric's samples, what the identifying label set is at any given point in time. Agreed on documenting it properly; I'll give my proposal a re-read to see whether I need to clarify further.
Maybe a change of identifying labels should really be seen as a completely independent new series, without any staleness marker insertion.
@beorn7 there shouldn't be any need for a staleness marker insertion when the identifying label set changes, so long as the entire series' label set remains the same. My thinking is to just record a new sample for the info metric that records the new set of identifying labels. Does that make sense to you as well?
It probably works (but again, I would almost see this as an implementation detail).
Staleness markers will be inserted anyway if you scrape Prometheus-style, and the problems you have because of staleness markers not happening with OTLP ingestion will also happen anyway. It's not just for info metrics, and I think we don't need a bespoke staleness-marker solution here.
* If data label matchers are provided, time series are only included in the result if matching data labels from info metrics were found.
* A data label matcher like `k8s_cluster_name=~".+"` guarantees that each returned time series has a non-empty `k8s_cluster_name` label, implying that time series for which no matching info metrics have a data label named `k8s_cluster_name` (including the case where no matching info metric exists at all) will be excluded from the result.
* A special case: If a data label matcher allows empty labels (equivalent to missing labels, e.g. `k8s_cluster_name=~".*"`), it will not exclude time series from the result even if there's no matching info metric.
* A data label matcher like `__name__="target_info"` can be used to restrict the info metrics used.
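A sketch of the three matcher behaviours listed above, reusing the metric from the earlier example (the exact `info` signature is still under discussion in this thread):

```promql
# Only series for which a matching info metric supplies a non-empty
# k8s_cluster_name label survive:
info(rate(http_server_request_duration_seconds_count[2m]), {k8s_cluster_name=~".+"})

# Empty matches allowed: no series are dropped; labels are added where found:
info(rate(http_server_request_duration_seconds_count[2m]), {k8s_cluster_name=~".*"})

# Restrict which info metrics are consulted:
info(rate(http_server_request_duration_seconds_count[2m]), {__name__="target_info"})
```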
Do we want to allow writing it as just `target_info`, and with additional matchers like `target_info{k8s_cluster_name=~".+"}`? Or is there a reason why it needs to be `{__name__="target_info"}`?
Maybe the current implementation just recognizes the selector if it starts with curly braces. Generally, I guess any valid vector selector, including just the name, should be valid.
The current implementation only recognizes a selector starting with curly braces indeed. But it's also a very rough and hacky implementation, as I'm concentrating on getting the underlying design (TSDB API) right primarily.
Maybe we can, at the end of the day, use a normal vector selector after all, just that the PromQL engine has to intervene before it is fully evaluated.
Yesterday, I discussed with @juliusv his valid concern about using a vector selector here in a non-standard way, and we found that you could stretch things just a tiny bit to see this as a normal vector selector. It will (potentially) select a lot, but all the info metrics we want will be selected too, so one might argue the `info` function simply trims this selection down to the relevant info metrics. Of course, for efficiency reasons, we want to do this trimming proactively and not first load all the metrics matching the selector into memory, but that's an optimization that could also be applied to any other PromQL expression without breaking any semantics. The `info` function also looks back at the originally specified labels to find out what labels to copy. But that doesn't break the semantics, either.
The info function also looks back at the originally specified labels to find out what labels to copy.
Yeah, the `absent()` function already does similar magic.
Now on a language typing level, we'll still have the problem that we only have four types: strings, scalars, and instant/range vectors. We currently don't have any way to specify that a function expects not just one of those four types as an input, but a specific AST node type (specifically an instant vector selector, not just any instant-vector-typed result). I guess for a start we could ignore this in parsing and have the function check at run-time whether the parameter is really an instant vector selector and error out otherwise.
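For reference, the `absent()` "magic" mentioned here: it inspects the selector it was given (a look-back at the expression, not just the resulting vector) and copies equality matchers into its output labels:

```promql
# When no series matches, absent() synthesizes labels from the
# equality matchers of its argument:
absent(up{job="myjob"})        # -> {job="myjob"} 1

# Non-equality matchers (e.g. regex) are not copied:
absent(up{job=~"my.*"})        # -> {} 1
```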
`absent()` is indeed a good role model. I think we could just accept any instant vector-typed result. If the expression doesn't evaluate to one or more info metrics, we can issue a warning and not do anything. But generally, we are going down into details very prematurely here.
Thanks for the review @codesome (and for our chat beforehand)!
I'll look the proposal over again, and write more if this is missing.
Thanks, I'll keep this in mind when re-reading the proposal also.
Agree 100%.
@codesome this aspect is actually covered under the Goals section. The following part should specify that the set of data labels enriching the query result should be a union of all matching data labels found for info metrics:
This part (which I didn't remember when we talked, I'll admit) specifies that if there are label collisions between either the query result and an info metric, or between matching info metrics, an error should be returned:
@codesome WDYT, is the above sufficient?
@codesome I added more details to the How section:
Does that work for you?
There's also no particular support for enriching query results with info metric labels in PromQL.
The consequence is that you need relatively expert level PromQL knowledge to include info metric labels in your query results; as OTel grows in popularity, this becomes more and more of a problem as users will want to include certain labels from `target_info` (corresponding to OTel resource attributes).
Without persisted info metric metadata, one can't build more user friendly abstractions (e.g. a PromQL function) for including OTel resource attributes (or other info metric labels) in query results.
Neither can you build dedicated UI for OTel resource attributes (or other info metric labels).
Perhaps you could mention that there is room for a future iteration addressing these issues in various forms; one could think of info metrics with persistence, or some other kind of metadata store.
@jesusvazquez should that perhaps be mentioned in the Goals section? Before revising the proposal after @codesome's feedback, solving these problems in Prometheus was part of the goals. My understanding of the "Pitfalls of the current solution" section is that it should be referring to status quo in Prometheus, before the proposed solution.
To be honest, I'm a bit unsure of what to put in the Pitfalls section currently, until @codesome reviews and the final form of the proposal becomes clearer.
Goals and use cases for the solution as proposed in [How](#how):

* Persist info metrics with labels categorized as either identifying or non-identifying (i.e. data labels).
I think this statement sounds a bit contradictory to the above:
This means that knowledge of info metrics' identifying labels are lost, ...
I suspect you have a different understanding of the "Pitfalls of the current solution" section than I do. That section has the following purpose (from the template):
What specific problems are we hitting with the current solution? Why it’s not enough?
The tasks to do in order to migrate to the new idea.

* [ ] Task one <GH issue>
* [ ] Task two <GH issue> ...
I believe we do have an action plan now, right? Something along the lines of implementing the info function without TSDB modifications, and then iterating further with a new proposal addressing the issues listed above.
Thanks, I'll add some.
I added two tasks, could you please take a look?
* Specify detection of info metric identifying labels for other ingestion methods than OTLP.
* Define how this functionality would work together with OOO samples.

## Alternatives
This might have been discussed before, but I wonder if we are missing the simplest solution, query rewriting, as a considered alternative. Given the example query on line 119, can we rewrite it into a join and let the engine execute it with the existing semantics of PromQL? Information needed for the join can either be implicitly deduced or specified by the user as arguments.
The `info` function can't be simply reduced to syntactic sugar for joins though, for the following reasons at least:

- The `info` function automatically finds suitable info metrics to join with (the MVP only supports `target_info`, but we aim to lift that limitation in the future)
- The `info` function automatically knows which are info metrics' identifying labels, i.e. which labels to join on (the MVP assumes `instance` and `job`, but we aim to lift that limitation in the future)
- The `info` function solves the problem of temporary join query conflicts between info metrics due to non-identifying labels changing, until the old version goes stale

Information needed for the join can either be implicitly deduced or specified by the user as arguments.

Assuming we were to use query rewriting, how would you deduce which info metric(s) to join against (when `info` is no longer limited to just the `target_info` metric)? If the user, as in join queries, has to supply which info metric to join with and which labels to join on, plus which labels to include, have we really gained much versus the status quo?
I can only see this working well for a static list of well-known info metrics like `target_info`, or by specifying which info metrics to use explicitly. If I have multiple info metrics that match by accident and don't specify matchers, I probably get confusing labels added. That's not possible if I specify the info metric explicitly or constrain it to a well-known list like `target_info`. Prometheus is too flexible to make such convenience safe, I think.
> I can only see this working well for a static list of well-known info metrics like target_info, or by specifying which info metrics to use explicitly.

@MichaHoffmann if that is your concern with the `info` function (and @fpetkovski's?), I think that should be an explicit criticism of the proposed design, rather than suggesting `info` as proposed here can be simply reduced to syntactic sugar. If you think the design should be explicit about picking info metrics rather than automatic (although that can be filtered down via the `__name__` label matcher), I think there should be an alternative proposal. I'm not sure though whether it's most feasible to list competing proposals in the same document, or whether they should be separate.
Oh, sorry, I didn't mean to propose something different, I was just voicing concerns out loud. What I want to say is that the name label matcher for the info metric should probably be a requirement for this to be generally least surprising for the user.
Thanks @MichaHoffmann, I'll note your wish that the name label matcher be a requirement, so a consensus can be reached.
I should note that my original idea was indeed that there is generally a relatively low number of info metrics, so that a default matching behavior of "all the info metrics known to the TSDB" is feasible, which is in fact one of the reasons that makes the "joining" easier.
Note that with a "general metadata store", you would probably also end up with "give me all the metadata you know for this metric", which is (more or less) equivalent to "give me all the info metrics that have an identifying-label match with this metric".
My main concern with the proposal is that it starts by describing a problem which can be solved by adding syntactic sugar for a specific type of join. It then goes into various details on what changes are needed to integrate with OTel and various modifications in the TSDB.
If the goal is to enable better support for OTel `target_info` metrics, then I think this should be emphasized in the title and be very clear from the description. Unless OTel is the primary driver of the feature, I am not convinced that the new `info` syntax/function is better than the explicit join.
@fpetkovski in my reply to your comment, I describe how
It's definitely a motivation for developing the
@beorn7 would you like to share your PoV as far as the motivation goes?
@aknuds1 I think you have already expressed my thoughts quite well. So mostly just restating for the sake of clarity:
Here is my train of thought described in more detail: The initial trigger was indeed that the UX when dealing with
@fpetkovski great :) I should mention, in case it isn't clear btw, that the idea is to introduce
@MichaHoffmann I added an alternative to represent your suggestion of making
Looking good! My major comments are around leaving out the TSDB API and turning those points into an explanation of the semantics of the info() function instead (which the TSDB API explanations are indirectly doing).
I also have a thought/question on being able to specify the info metrics to join with, with the default being to join with all.
Other than that, it is 👍
* Increased series churn when metadata labels change
* More labels per metric increases CPU/memory usage

### Make the `info` function require specifying the info metric(s)
Yeah, my thought was that the `info` function should focus on the cases where it "magically" works, and if things get complicated (with label collisions, too many info metrics that match, …), then you can just revert to the traditional join syntax, which is mostly so complex because it specifies everything explicitly. Or in other words: we dilute the added value of the `info` function if we add some of the knobs that the traditional join syntax has (and which make it so unwieldy).
But we can still have this alternative here for public consideration (and to show that this thought was considered).
I guess it would be reasonable if this only worked for info metrics that were present in the same scrape, maybe?
I don't think we should have that restriction. `kube_pod_labels` is an example where you might apply things cross-scrape. OTLP ingestion is an example where there isn't even a notion of "scrape".
For metrics ingested via a scrape, the target labels will be identifying labels by default anyway, so the whole proposed semantics of the info function will implicitly limit info metric matching to the same scrape in most cases. (But that's a behavior that needs to be configurable in a different way; e.g. if scraping KSM, one might want to not have the KSM target labels as identifying labels, to enable `kube_pod_labels` matching cross-target.)
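To illustrate the cross-scrape case: today's explicit join against `kube_pod_labels` (from kube-state-metrics) matches on the pod's identity rather than on the `job`/`instance` labels of any one scrape target. The specific metric and `label_team` data label below are illustrative:

```
# Explicit cross-scrape join: the cAdvisor metric and kube_pod_labels
# come from different targets, so we join on pod identity, not job/instance.
sum by (label_team) (
    rate(container_cpu_usage_seconds_total[5m])
  * on (namespace, pod) group_left (label_team)
    kube_pod_labels
)
```

This is exactly the kind of join that a same-scrape restriction would rule out.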
I was thinking that having an option to specify the info metrics to join with is not a bad idea.
@codesome @beorn7 the proposal suggests the ability to specify certain info metrics by matching on `__name__`:

> A data label matcher like `__name__="target_info"` can be used to restrict the info metrics used.

Did you consider this?
Maybe there could also be a configuration parameter to control this behaviour?
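Concretely, restricting the joined info metrics via a `__name__` matcher would look something like the following sketch, adapted from the proposal's earlier example (the `k8s_cluster_name` matcher is carried over from that query):

```
# Only consider target_info as a join candidate, rather than
# all info metrics known to the TSDB.
sum by (k8s_cluster_name, http_status_code) (
    info(
        rate(http_server_request_duration_seconds_count[2m]),
        {__name__="target_info", k8s_cluster_name=~".+"}
    )
)
```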
LGTM from me!
@bwplotka @beorn7 @jesusvazquez do you have any feedback? I think we can move forward with the current state, since this will start off as an experimental feature, allowing us to make any modifications if necessary (especially with the ongoing metadata work on the side).
LGTM! I'm good too, let's get this merged 🚀
Cool, I would love to check it with @dashpole this week. My main concern is that I have several people DMing me with "have you seen the info function proposal? Is this amount of complexity only to eliminate one join? Is it just another way to do a join? Is this OTel-driven only?". Ideally they would lay out their concerns in this proposal, and I will encourage them to, but I would love to resolve those concerns before we finalize things, so we don't have complaints or repeated discussions like with UTF-8.
### Pitfalls of the current solution

Prometheus currently persists info metrics as if they were normal float samples.
This means that knowledge of info metrics' identifying labels is lost, and you have to rely on convention when querying them (for example, that `target_info` should have `job` and `instance` as identifying labels).
Well, the damage is much bigger -- we essentially use all labels as "identifying", causing massive churn and a cardinality bomb. With this feature we essentially increase the use of info metrics, so that's perhaps relevant to fix.
Thanks for all the discussion here. And there is even more discussion happening in other channels. I'm at the same time intrigued and scared by all this growing attention for this (which just started with a small idea of mine). :) Let's clarify that this is all still an experiment. We want to implement it (in prototypical form) behind a feature flag to give users the option to try it out and compare it to alternatives. We are still open to changing details, or even scrapping the whole idea altogether. We pretty much agreed elsewhere that we should also implement mapping of OTel resource attributes to labels upon OTLP ingestion as a parallel experiment. With that said, I think we should merge this PR in its current form and not fine-tune it based on the outcome of the still ongoing discussions. I think those discussions have reached a point where we really would like to see results from the prototype in the wild to come to further conclusions. @jesusvazquez and @codesome have already approved this. @bwplotka as you added some significant comments to this, are you fine with merging, or would you prefer more changes to the design doc before merging?
Thanks for waiting a bit!
We just had a review cycle with my colleagues about this. Generally, great work on fitting info metrics into the existing gauge style and the flexibility of Prometheus! Here are the key things worth checking from our side, looking at the info function as in this example:
sum by (k8s_cluster_name, http_status_code) (
    info(
        rate(http_server_request_duration_seconds_count[2m]), # original metric to join into
        {k8s_cluster_name=~".+"} # info metric matchers
    )
)
- When a user wants to specify one metric, do they have to use the `__name__` syntax, or can they directly do normal PromQL metric matching, as in
sum by (k8s_cluster_name, http_status_code) (
    info(
        rate(http_server_request_duration_seconds_count[2m]), # original metric to join into
        target_info{k8s_cluster_name=~".+"}
    )
)
?
- I assume for the MVP the only metric matched in the second argument is a hardcoded `target_info`, but long term any `{__name__=~"magic.*"}` will be supported, right?
  a) Am I right that this would essentially mean `{__name__=~".*_info"}` by default if no name is specified? Is there room to keep the set of possible metric names to a few?
  b) If the MVP is the case, why not start with a mandatory syntax requiring the metric name to be specified from the start (not supporting more than one metric for now)?
  ...generally not every system can support flexible metric name matching, but Prometheus can, so it's fair to do this. We will probably have to restrict this a bit.
- Naming is hard 🙈 The `info` name reads like "give me information about this metric", but actually it is "join these labels from another metric". Should we call it `info_join` or `more_info` or something more explicit? (:

Thanks for considering, otherwise LGTM 💪🏽
@bwplotka thanks very much for your input/consideration, attempting to answer below:
Aha, good point. I think this question has been raised by @beorn7 before, but it's not even clear exactly what the data label selector syntax will end up being, since the
My current thinking is that either
Regarding a) and b), I think that specifying the info metric name in PromQL should not be supported in the MVP (mostly because you need metadata to be able to tell an info metric's identifying labels, unless it's a well-known one). I've made a PoC for automatic info metric data label inclusion that relies on configuration to teach it about info metrics and their respective identifying labels. The benefit of this scheme is that you don't need to make Prometheus intelligent about these things.
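Such a configuration-driven approach could look something like the sketch below. This is purely hypothetical (the key names are invented for illustration, not the PoC's actual schema):

```yaml
# Hypothetical config teaching Prometheus which metrics are info
# metrics and which of their labels are identifying (i.e. join keys).
info_metrics:
  - name: target_info
    identifying_labels: [instance, job]
  - name: kube_pod_labels
    identifying_labels: [namespace, pod]
```

With something like this, the `info` function could look up identifying labels from configuration instead of requiring metadata to be stored per series.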
Don't know, but I'm sure we can discuss this during the development process. We're not even 100% sure it will be a function syntactically (maybe we can think of some more optimal syntax in the end).
Got it, thanks! Let's iterate then, especially for a known number (or only one, for now) of info metrics. LGTM! Thanks for the extensive work on this!
I'm adding a proposal about enhancing Prometheus with a first-class PromQL feature for easy joining with info-type metrics. This is in particular supposed to facilitate OpenTelemetry resource attributes (via the `target_info` metric they get mapped to), but it works for info metrics in general.
Please note that, in agreement with @codesome, I'm deliberately punting on TSDB details, as @codesome wants to have a separate proposal for those.