telemetry: periodic reporting #2721
base: master
Conversation
Signed-off-by: Kuat Yessenov <[email protected]>
😊 Welcome @kyessenov! This is either your first contribution to the Istio api repo, or it's been a while since you've been here. You can learn more about the Istio working groups, code of conduct, and contributing guidelines. Thanks for contributing! Courtesy of your friendly welcome wagon.
/retest
/assign @zirain
// Configuration for the interval reporting for the access logging and
// metrics.
ReportingInterval reporting_interval = 5;
Should this be at the top level or inside each struct? You may want different settings for each. For stdout logs we set the flush interval to 1s, for example, but that would be aggressive for a write to a remote server.
Does this overwrite --file-flush-interval-msec?
That's not supported. Most people have only one sink enabled (telemetry costs $$$) and we should cater to them and not complicate things for imaginary use cases.
This setting has nothing to do with IO flush.
I think you might be misunderstanding periodic reporting. It's reporting per data stream, not per sink stream
If I am misunderstanding, then surely our users will 🙂. Can we clarify the comments a bit so it's understandable for someone who doesn't have prior context?
Also, if it's not related to IO flush, that seems like something to call out in the comments. I guess that means if I use Telemetry to turn on stdout logging and set this field, it has no impact?
I mentioned that the IO flush rate is completely independent. Istio reports telemetry per stream, not per sink.
Reading the .proto file I still have no clue what this does
> // Reporting interval allows configuration of the time between the access log
> // reports for the individual streams.
But you are saying it's NOT related to the IO flush rate, which is, by my understanding, the interval at which access logs are written to stdout?
In Envoy, is this config global or something? What happens if I have 2 metrics backends configured by 2 Telemetry with 2 different configs? Or 2 telemetry, 1 configuring metrics and 1 configuring access logs.
TBH I have read this proto quite a few times and I don't think I am any closer to understanding what it does, and how or why I should set it. Can we write the comments from the perspective of a user to help guide them?
This is not a new feature - it has existed in Istio since 1.0 for TCP. The IO flush rate has nothing to do with telemetry production - it's the rate of delivering telemetry, not producing it. Users never complained about this being confusing :/
> In Envoy, is this config global or something? What happens if I have 2 metrics backends configured by 2 Telemetry with 2 different configs? Or 2 telemetry, 1 configuring metrics and 1 configuring access logs.
It is a global setting per listener.
Do we have a list of telemetry providers - and how many can implement this feature? From what I've seen, if we use OTel this will not be configurable per app - but the per-node or cluster collector will handle most of this.
For metric push - I see OTel has it so I don't mind adding it, but for the others I would not.
> Do we have a list of telemetry providers - and how many can implement this feature? From what I've seen, if we use OTel this will not be configurable per app - but the per-node or cluster collector will handle most of this.
I think all of them would be supported, but they need to be modified to properly report mid-stream telemetry. OTel would need a special field to indicate whether it's end of stream or mid-stream.
> For metric push - I see OTel has it so I don't mind adding it, but for the others I would not.
Metrics are somewhat irrelevant actually; it's an artifact of Istio's quirky stackdriver/prometheus.
Standard Envoy metrics report immediately, and Istio metric sinks act like access loggers, so they get a snapshot of the data at the same rate as this value.
LGTM
Signed-off-by: Kuat Yessenov <[email protected]>
Signed-off-by: Kuat Yessenov <[email protected]>
Signed-off-by: Kuat Yessenov <[email protected]>
🤔 🐛 You appear to be fixing a bug in Go code, yet your PR doesn't include updates to any test files. Did you forget to add a test? Courtesy of your friendly test nag.
Need TOC review.
google.protobuf.Duration http = 2;
}

// Configuration for the interval reporting for the access logging and
What does it mean to "report" an access log at a reporting_interval: 5s? Does that mean if I open a connection and send no data, it will report an access log every 5s?
Yes, that's what Istio does today?
As far as I know access logs are only printed once per connection...?
Only the demo stdout log. The production stackdriver log reports periodically.
Can we note this in the proto? Stdout is not seen as "demo" either; it's pretty commonly used in production (https://12factor.net/logs).
Does any vendor rely on stdout for production? It's demo because you need another system to collect logs and we don't include it in Istio.
The point of this PR is to generalize what we do for stackdriver as a good production practice to every other logging sink, including OTel and stdout. Is that what you want me to say?
> Does any vendor rely on stdout for production?
Yes, we do 🙂
I want the PR to explain what the API does, and give guidance to a user on how they can configure it, which is the standard for API documentation.
I am still not sure I even understand. "including Otel and stdout" - I thought we were saying this does not apply to stdout?
Google doesn't depend on stdout logging, not sure why you claim so. It shouldn't, since stdout will hurt performance.
The setting is global - it applies to all Envoy telemetry sinks, including stdout and OTel. The SD implementation is an oddball, but it's Google's problem and will be fixed later.
The API is about enabling periodic reporting for long-lived streams, and it's opt-in. What is not clear about it? Are you questioning the value of the periodic reporting or the shape of the API?
Depends - many apps report logs to stdout and this gets collected and pushed to stackdriver.
I think we need to be compatible with (and adopt) OTel, including integration with their collectors if available - and not expose APIs that would not be implementable or needed.
Maybe in mesh config for the other integrations - but even for that I doubt we should expose them.
All APIs require testing and support, and are hard to deprecate.
// Reporting interval allows configuration of the time between the access log
// reports for the individual streams. Duration set to 0 disables the periodic
// reporting, and only the ends of the streams are reported.
message ReportingInterval {
Should this be part of AccessLogging? IIUC, this config is only applicable for access logs. If access logging is not enabled, this config would not be useful, correct?
Also, it's worth explaining why a user would want to customize this and for what types of workloads, and why they would consider disabling it.
I think we need to have a discussion about what we plan to do about OTel and all the standards and integrations they are driving.
https://opentelemetry.io/docs/reference/specification/sdk-environment-variables/ has the set of options OTel supports - OTEL_METRIC_EXPORT_INTERVAL is the equivalent setting that an app using OTel natively would use.
In OTel it applies to all push-based metric exporters - logs and tracers don't seem to have this (but have batch and other settings).
IMO we should consider the API surface of OTel and not try to have more complicated settings.
@linsun AccessLogging is a list, and the setting is global in Envoy. Hence I pulled it to the top level. Users shouldn't generally touch this unless they want to disable it (because they want to limit telemetry). It cannot be made OTel-specific, since the log production is driven by Envoy and the push is driven by OTel.
@costinm That env seems unrelated - it's about metric push, not production again. See the discussion about IO flush - they are not related at all.
The closest semantic convention I could find is modeling stream events in OTel: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/semantic_conventions/events.md. We'd define an event.name for stream open/mid/close, and report the generic attributes associated with the stream.
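To make the placement concrete, here is a minimal sketch of how this could look on a Telemetry resource, assuming the proposed field surfaces as `reportingInterval` (the camelCase form of the proto field in this PR); the disable-by-zero behavior follows the proto comment, everything else is illustrative:

```yaml
# Hypothetical sketch of the proposed API surface; not the final field names.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:           # list of sinks, unchanged by this PR
  - providers:
    - name: envoy          # built-in stdout provider
  reportingInterval:       # top level: one interval shared by all sinks
    tcp: 0s                # per the proto comment, 0 disables periodic reports;
                           # only the ends of streams are reported
```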
// Optional. Reporting interval for TCP streams. The default value is 5s.
google.protobuf.Duration tcp = 1;

// Optional. Reporting interval for HTTP streams. Not enabled by default.
Both HTTP/1 and HTTP/2?
Not sure why this would be here or why it matters what protocol is used. You want 30 seconds - using whatever protocol is available.
Any HTTP version; Envoy treats them the same.
Specialization by protocol is for backwards compatibility only - Istio does periodic reporting for TCP only right now.
30s is too long - users consider 10s of extra latency unacceptable, and without any telemetry during that time we can't debug.
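As a sketch only (same assumed field names as above), keeping today's TCP behavior while opting HTTP streams into mid-stream reports might look like:

```yaml
# Hypothetical values; per the proto above, tcp defaults to 5s and http is
# off unless explicitly set.
spec:
  reportingInterval:
    tcp: 5s    # matches the existing TCP-only periodic reporting
    http: 10s  # opt-in: long-lived HTTP streams get a report every 10s
```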
}

// Configuration for the interval reporting for the access logging and
// metrics. Telemetry is reported independently per each data stream, at
Is this for metrics also? Earlier only access logs were mentioned.
Some metric sinks are implemented as access loggers. It's deliberately vague since it's implementation-dependent.
@kyessenov @zirain I think I agree the docs for the proto could be a bit clearer for users who are not very familiar with telemetry. It would be really helpful to explain to readers why something should be configured, in addition to the behavior. At a minimum, point readers to where they can find that info. Also worth clarifying if this is for access logs or metrics or both.
My concern remains: the API should not be based on past features/behaviors, but be aligned with the current state of the world and the expected future.
It appears OpenTelemetry is getting good adoption and is very likely to be the common standard and implementation (probably combined with OpenMetrics - I understand they are getting closer and aligning). Maintaining a custom and competing Istio API and collection model doesn't seem very future proof or desirable. We are still using OpenCensus and integrate with many 'vendor specific' providers - including StackDriver, but even stackdriver is now supporting open telemetry.
So I would be very reluctant to add or promote any logging feature or config option that is not also part of OpenTelemetry, at least not until we have a discussion on what we plan to do in the future (Ambient included) about our proprietary logging and whether we really believe we should compete with OTel instead of adopting it as a standard.
Or at least I would like the 'beta' API to be aligned with OTel and support the standard - with the rest remaining in MeshConfig (for Istio v1) and not carried over in Ambient.
@costinm OTel is all about the metric schema and the distribution. It's still up to Istio to define when we produce the telemetry, irrespective of the sink. OTel doesn't define a strict schema - it's a universal mapping from any other semantic model, but it doesn't specify what you map into it. We really need to support long-running streams better at the metric production level because 1) HBONE is a long-running stream; 2) debugging customer issues with long streams is painful.
I see, sorry for the confusion. I still stand by my comment - if we can map this to OTel, we should. If OTel can't represent this, I would open a discussion and wait until we know what OTel can do.
Stream events are what I was thinking as well for long-lived connections (streaming gRPC too - not only TCP/HTTP long-lived connections). Does OTel provide any customization for them? What is their API?
@costinm The logging data model in OTel is very loose. There are three parts to it:
I'm not trying to design the exact set of attributes to produce in this PR. That's for @lei-tang, I think. The topic is making sure we can produce stream-open events in Envoy, because that's what we already do for OpenCensus, and OTLP is now upstream so we must use upstream xDS to match the behavior.
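Purely as an illustration of the stream-event idea (the `event.name` values below are placeholders, not something defined by this PR or by OTel semantic conventions), a long-lived stream could surface as a sequence of log records:

```yaml
# Hypothetical log records for one long-lived stream; names are illustrative.
- attributes:
    event.name: stream.open       # emitted when the stream is established
- attributes:
    event.name: stream.periodic   # emitted once per reporting interval
- attributes:
    event.name: stream.close      # emitted at end of stream, as today
```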
I agree we need to support long-running streams and streaming gRPC, and we may need to customize the interval or have a better default. Like the high-cardinality issue, we need to be careful how we express the API and what we claim to support.
Can we start with a narrower and more restricted API - to enable stream events for long-running connections, for the OTel provider? Other providers that support similar events may add it as well.
So instead of a per-workload setting, it will initially just be a per-provider setting. I think it'll also be easier for the users: they won't have to customize each workload, and the cluster admin will be able to control the setting (it costs money to send too much).
@costinm We cannot do it per-provider. Envoy has a global setting and it's hard to justify the complexity of running dual/distinctly configured providers at the same time.
I know we support multiple providers at the same time now. And Envoy has a global setting. But we can say that this feature is only supported for OTel, or require that multiple providers set it to the same value. And we can decide that for ambient we'll just support one provider - OTel/Prom - with collectors or agents handling additional sinks.
ZTunnel won't add support for all the providers that Envoy supports - and we know how tricky it is to authenticate and maintain all the providers and the Istio config for them.
The actual attributes and mechanism should not be decided by Istio - we need to raise an issue with OTel if they don't model this, or if what they model is not sufficient. I think it is highly desirable for all workloads - including those not using Istio or Envoy sidecars - to generate consistent telemetry. It is a very important feature for proxyless gRPC too.
Explicitly handicapping this feature for other sinks doesn't seem like a good idea to me. It'd be nice to enable stdout with periodic reporting temporarily for debugging. Why should we block this support?
I looked into the gRPC OTel interceptor and it's modeled as "message" events in both directions. I think we could implement the exact same model in Envoy generically, but it's outside the scope of this PR since it's not periodic.
OTel has a stdout sink, so no problem. And we can have the stdout access log follow the OTel setting.
I wouldn't mix debug with stable APIs for prod config.
Where do you see a stdout sink in OTel? Are you referring to some SDK or collector?
Besides that, we still need to support OpenCensus in the Telemetry API for sidecars. If you want a different Telemetry API for ambient, that's an option but it certainly needs a wider discussion.
I think defining more APIs that only work with Sidecar - as well as continuing support for deprecated integrations (OpenCensus) - is not ideal.
Just like we did for TCP Authz, we may need a new API or a subset of the API that is suited for Ambient. It would be ideal to NOT have an API but rely on APIs defined upstream - in OTel or K8s. But I don't think we should continue extending the API surface that is specific to sidecars.
TOC certainly needs to provide some guidance - no point to debate it between us :-), and I think we are in agreement that the feature itself (TCP / long-lived h2 / gRPC incremental access logs) is important and should be available in both Sidecar and Ambient.
I am not suggesting 'stop supporting the sidecar API', or asking users to keep using EnvoyFilters - quite the opposite. Just that the 'beta' CRDs (I know Telemetry is not yet beta, but it should soon become one) are extended with care, keeping in mind that sidecar will not be the only operating mode.
MeshConfig and annotations are still open (providers, features that are less common or not fully tested, vendor-specific things) and they have fewer expectations of long-term support. MeshConfig is a mess and permanent alpha - but that's reasonable for any brand new config or API or feature.
It is interesting that GAMMA and Gateway specs have a requirement of '2 implementations' before a CRD moves forward. We should do the same for Istio.
On Fri, Mar 17, 2023 at 2:28 PM Kuat wrote:
> @costinm We cannot freeze telemetry API evolution while offering the EnvoyFilter approach. See the previous discussion for the original PR that added this: #2556. If we stop supporting the existing sidecar API, then we're going to get more and more EnvoyFilters doing the same thing.
As #2800 is done, can we use an annotation to support a feature like this?
PR needs rebase.
Generalize interval reporting to all access logs.
This is useful for long-duration streams, and the setting would work for stdout access logs, OTel, etc., not just Prometheus stats.
Fixes: istio/istio#43763
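For reference, a rough usage sketch under the same assumptions as the examples above (`reportingInterval` field name taken from the proposed proto; `otel` is a hypothetical extension provider name that would have to be defined in MeshConfig):

```yaml
# Hypothetical end-to-end example: periodic reports for stdout and an OTel
# provider, so long-lived streams appear in the logs before they close.
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: periodic-access-logs
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy            # built-in stdout provider
    - name: otel             # assumed OTel provider defined in MeshConfig
  reportingInterval:
    tcp: 5s                  # proposed default for TCP streams
    http: 15s                # illustrative value for HTTP streams
```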