
Add metrics endpoint #1423

Open · wants to merge 8 commits into main

Conversation

AllentDan
Collaborator

Open http://xxxx:23333/metrics/ to view the metrics.
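For anyone trying this locally, here is a minimal sketch of how such an endpoint can be exposed with prometheus_client on a FastAPI app. The metric name, mount path, and port below are illustrative assumptions, not necessarily what this PR wires up:

# Sketch only: expose prometheus_client metrics on a FastAPI app.
from fastapi import FastAPI
from prometheus_client import Counter, make_asgi_app

app = FastAPI()

# Illustrative metric; the names in this PR may differ.
request_total = Counter('lmdeploy:request_total', 'Number of total requests.')

# Mount the standard Prometheus ASGI app so GET /metrics/ returns the exposition text.
app.mount('/metrics', make_asgi_app())

# Run with: uvicorn this_module:app --host 0.0.0.0 --port 23333
# then open http://<host>:23333/metrics/ to view the metrics.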

@lvhan028 added the enhancement (New feature or request) label on Apr 12, 2024
@lvhan028 requested a review from zhulinJulia24 on Jun 4, 2024
@zhulinJulia24
Collaborator

I feel the GPU and CPU metrics may not need to be included here; you can obtain them by running NVIDIA's exporter on its own port, e.g. docker run -d --gpus all --rm -p 9400:9400 nvcr.io/nvidia/k8s/dcgm-exporter:3.1.7-3.1.4-ubuntu20.04

@zhulinJulia24
Collaborator

Could token-related performance metrics be added as well?

@nathan-az

Can we add first token time as well, so the difference between scheduling time and first token time can be used to estimate prefill time?

Here is vLLM's source as a reference
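A rough sketch of the idea (the field names are hypothetical, not the PR's actual attributes): the scheduling-to-first-token gap approximates queueing plus prefill time.

from dataclasses import dataclass

# Hypothetical per-request timestamps; names are illustrative only.
@dataclass
class RequestTimes:
    scheduled_time: float    # when the scheduler picked the request up
    first_token_time: float  # when the first output token was produced

def estimate_prefill_seconds(t: RequestTimes) -> float:
    # The gap between scheduling and the first token approximates
    # queueing + prefill time for the request.
    return t.first_token_time - t.scheduled_time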

@lvhan028 changed the title from "Log stats" to "Report request metrics" on Nov 26, 2024
documentation='Number of total requests.',
labelnames=labelnames)

# latency metrics
Contributor

For latency metrics, consider using the Histogram/Summary metric types; they make it easy to compute distributions and averages over different time windows, and they can also simplify the calculation logic.
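For example, a queue-duration Histogram might look like the sketch below (the bucket boundaries and label set are assumptions, not this PR's actual values):

from prometheus_client import Histogram

labelnames = ['model_name']  # assumed label set, for illustration only
histogram_duration_queue = Histogram(
    name='lmdeploy:duration_queue_seconds',
    documentation='Time a request spends waiting in the queue.',
    labelnames=labelnames,
    buckets=[0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])

# Windowed averages then follow from the histogram series in PromQL, e.g.
#   rate(lmdeploy:duration_queue_seconds_sum[5m]) / rate(lmdeploy:duration_queue_seconds_count[5m])
histogram_duration_queue.labels(model_name='internlm2').observe(0.012)  # example observation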


# latency metrics
self.gauge_duration_queue = Gauge(
name='lmdeploy:duration_queue',
Contributor

For metrics that carry a unit, declare the unit in the metric name, e.g. duration_queue_seconds; it is more intuitive when writing PromQL.
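As a sketch (label set assumed), the gauge above would then be declared as:

from prometheus_client import Gauge

labelnames = ['model_name']  # assumed label set, for illustration only
gauge_duration_queue = Gauge(
    name='lmdeploy:duration_queue_seconds',   # unit encoded in the metric name
    documentation='Time in seconds a request spends waiting in the queue.',
    labelnames=labelnames)

# PromQL then reads naturally, e.g. avg_over_time(lmdeploy:duration_queue_seconds[5m])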

documentation='CPU memory used bytes.',
labelnames=labelnames)

# requests
Contributor

For request-count metrics, the Counter type is recommended, since it is monotonically increasing.
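A minimal sketch of the suggestion (the metric name and label set are assumptions):

from prometheus_client import Counter

labelnames = ['model_name']  # assumed label set, for illustration only
counter_requests_total = Counter(
    name='lmdeploy:requests_total',
    documentation='Number of total requests.',
    labelnames=labelnames)

# Increment on each incoming request; rate()/increase() then work in PromQL.
counter_requests_total.labels(model_name='internlm2').inc()  # example increment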

handle = pynvml.nvmlDeviceGetHandleByIndex(int(i))
mem_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
utilization = pynvml.nvmlDeviceGetUtilizationRates(handle)
self.gpu_memory_used_bytes[str(i)] = str(mem_info.used)
Contributor

The GPU index can be specified as a label; with an Info metric the values become strings, which cannot be used in PromQL calculations.
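A sketch of the labelled-Gauge alternative (the metric name is assumed; the pynvml calls mirror the snippet above):

import pynvml
from prometheus_client import Gauge

# Numeric Gauge with the device index as a label, instead of stringified Info values.
gpu_memory_used_bytes = Gauge(
    name='lmdeploy:gpu_memory_used_bytes',
    documentation='Used GPU memory in bytes.',
    labelnames=['gpu_index'])

def collect_gpu_memory() -> None:
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            mem_info = pynvml.nvmlDeviceGetMemoryInfo(handle)
            gpu_memory_used_bytes.labels(gpu_index=str(i)).set(mem_info.used)
    finally:
        pynvml.nvmlShutdown()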

@lvhan028 changed the title from "Report request metrics" to "Add metrics endpoint" on Nov 27, 2024
@AllentDan
Collaborator Author

@uzuku
I removed the postprocess metrics since it felt awkward to align lmdeploy's api_server with Triton Inference Server: there are no clear pre/infer/post stages when streaming mode is used.

@Huarong

Huarong commented Dec 19, 2024

Thanks for the great work. Metrics are important for production observability. Can we expect this feature to be merged in the next release? @AllentDan

@ao-zz

ao-zz commented Feb 11, 2025

If our metrics are compatible with vLLM's, it will greatly facilitate comparing deployment performance between lmdeploy and vLLM.
For instance, vLLM supports histogram_e2e_time_request, which is not currently covered in this PR.


@lvhan028
Collaborator

If our metrics can be compatible with vllm, it will greatly facilitate the comparison of deployment performance between lmdeploy and vllm. For instance, vllm supports histogram_e2e_time_request , which is not currently considered in this PR.


OK. We will try to align with it as much as possible.
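For instance, aligning on an end-to-end latency histogram could look roughly like the sketch below (the metric name mirrors vLLM's e2e latency metric; the buckets are placeholders, not vLLM's exact boundaries):

from prometheus_client import Histogram

histogram_e2e_time_request = Histogram(
    name='lmdeploy:e2e_request_latency_seconds',
    documentation='End-to-end request latency in seconds.',
    buckets=[0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0])

# histogram_e2e_time_request.observe(finished_time - arrival_time)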

Labels
enhancement New feature or request
7 participants