node exporter network metrics values mismatch #1224

Open
ashwinisivakumar opened this issue Jul 8, 2024 · 1 comment
Labels: bug, needs-attention

ashwinisivakumar commented Jul 8, 2024

What's wrong?

We are running node exporter as a standalone application, scraping its metrics with Prometheus, and sending them to Mimir. In parallel, we are running Alloy with the integrated node exporter, which sends its metrics to Mimir as well.

We compared the standalone node exporter metrics with the Alloy integrated node exporter metrics and found a difference in the network-related metric values. All other CPU, memory, and disk metric values match in both cases.
[screenshot: comparison of the network metric values from both setups]

Steps to reproduce

Run node exporter as a standalone application and scrape it with Prometheus.
Run Alloy with the integrated node exporter as a single setup and compare the resulting network metrics (see the sketch after these steps for one way to compare both paths from the same Alloy instance).
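
A minimal sketch, not part of the original report, of one way to also scrape the standalone node exporter from the same Alloy instance and route it through the same relabel rules as the integrated exporter (defined in the configuration below), so both series reach Mimir under distinct job labels and can be compared directly. The target address and job name are assumptions.

    // Hypothetical comparison pipeline: the address and job name are assumptions.
    // The scraped series are forwarded through the same relabel rules and
    // remote_write endpoint used for the integrated exporter.
    prometheus.scrape "standalone_node_exporter" {
      targets = [{
        "__address__" = "localhost:9100",  // assumed standalone node exporter address
      }]
      job_name   = "standalone/node_exporter"
      forward_to = [prometheus.relabel.integrations_node_exporter.receiver]
    }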

System information

No response

Software version

Node exporter version - 1.8.1
Alloy version - 1.0

Configuration

    // Node exporter
    prometheus.exporter.unix "integrations_node_exporter" {
      include_exporter_metrics = true
    }
    discovery.relabel "integrations_node_exporter" {
      targets = prometheus.exporter.unix.integrations_node_exporter.targets
      rule {
        target_label = "job"
        replacement  = "integrations/node_exporter"
      }
    }
    prometheus.scrape "integrations_node_exporter" {
      targets    = discovery.relabel.integrations_node_exporter.output
      forward_to = [prometheus.relabel.integrations_node_exporter.receiver]
      job_name   = "integrations/node_exporter"
    }
    prometheus.relabel "integrations_node_exporter" {
      forward_to = [prometheus.remote_write.prom_receiver.receiver]
      rule {
        source_labels = ["__name__"]
        regex         = "node_(arg_|cooling_device_|cpu_|disk_|entropy_|filefd_|filesystem_|hwmon_temp_|memory_|netstat_Tcp_|netstat_Ip_|netstat_TcpExt_|netstat_Udp_|netstat_UdpLite_|network_receive_|network_transmit_|nf_conntrack_|schedstat_|sockstat_TCP_|sockstat_UDPLITE_|softnet_|systemd_socket_|textfile_|timex_estimated_|timex_loop_|timex_maxerror_|timex_offset_|timex_sync_|timex_tai_|timex_tick_|boot_time_|procs_|forks_|context_switches_|vmstat_).*|node_(load1|load5|load15|intr_total|time_seconds)"
        action        = "keep"
      }
      rule {
        source_labels = ["__name__"]
        regex         = "(go_|node_xfs_|node_timex_|node_power_supply|node_nf_conntrack_|node_cooling_|node_scrape_collector_|loki_chunk_|node_export_build_|node_dmi_|node_exporter_build_).*"
        action        = "drop"
      }
    }
    prometheus.remote_write "prom_receiver" {
      endpoint {
        url     = ""
        headers = {
          "X-Scope-OrgID" = "",
        }

        queue_config {
          min_shards           = 1
          max_shards           = 5
          max_samples_per_send = 5000
          batch_send_deadline  = "60s"
          min_backoff          = "5s"
          max_backoff          = "30s"
          sample_age_limit     = "300s"
        }
      }

      wal {
        truncate_frequency = "2h"
        min_keepalive_time = "5m"
        max_keepalive_time = "8h"
      }

      external_labels = {}
    }
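
One further note on the exporter side: the prometheus.exporter.unix block above runs with default collector settings. Below is a minimal sketch of how network device filtering could be made explicit, assuming the component exposes a netdev block with a device_exclude attribute (the counterpart of the standalone node_exporter's --collector.netdev.device-exclude flag); the regex is purely illustrative. Aligning this filter between the two setups rules out one source of differing network values.

    // Hypothetical variant of the exporter block; the netdev block and its
    // device_exclude attribute are assumed, and the regex is illustrative only.
    prometheus.exporter.unix "integrations_node_exporter" {
      include_exporter_metrics = true

      netdev {
        device_exclude = "^(veth.*|lo)$"
      }
    }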

Logs

No response

ashwinisivakumar added the bug label on Jul 8, 2024

github-actions bot commented Aug 8, 2024

This issue has not had any activity in the past 30 days, so the needs-attention label has been added to it.
If the opened issue is a bug, check to see if a newer release fixed your issue. If it is no longer relevant, please feel free to close this issue.
The needs-attention label signals to maintainers that something has fallen through the cracks. No action is needed by you; your issue will be kept open and you do not have to respond to this comment. The label will be removed the next time this job runs if there is new activity.
Thank you for your contributions!
