Fix some issues with benchmark data output #13641
Conversation
benchmarks/benchmark_serving.py (outdated)
    parser.add_argument(
        "--tensor-parallel-size",
        type=int,
        default=0,
        help=
        "The tensor parallel used by the server to display on the dashboard")
If you just need a way to pass tp as an argument to this script, we already have `--metadata`.
Ah got it, `--metadata` is indeed more appropriate for this, as I just need the tp parameter to show on the dashboard. It's probably a good idea to save other server parameters too, just in case.
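For context, here is a minimal sketch of how `--metadata` key=value pairs can end up in the saved benchmark JSON. The argparse setup and the `tensor_parallel_size` key are illustrative assumptions, not necessarily vLLM's exact code:

```python
# Illustrative sketch only: fold --metadata KEY=VALUE pairs into the
# benchmark result record so a dashboard can pick them up.
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument(
    "--metadata",
    metavar="KEY=VALUE",
    nargs="*",
    default=[],
    help="Extra key=value pairs saved alongside the benchmark results.")

# e.g. --metadata tensor_parallel_size=8 gpu_type=H100
args = parser.parse_args(
    ["--metadata", "tensor_parallel_size=8", "gpu_type=H100"])

result = {"request_throughput": 42.0}
for item in args.metadata:
    key, _, value = item.partition("=")
    result[key] = value  # stored as strings; the dashboard can cast as needed

print(json.dumps(result))
# {"request_throughput": 42.0, "tensor_parallel_size": "8", "gpu_type": "H100"}
```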
ywang96 left a comment:
LGTM! Thanks for the fix and addressing my comment!
This is a follow-up of #13068 to fix:
- The `inf` qps config. The JSON parser used by the database doesn't like this value, so I add a custom `InfEncoder` to convert it to an `"inf"` string (see the sketch below).
- ~~Passing the parameter `--tensor-parallel-size` to the `benchmark_serving.py` script so that it can be stored in JSON and picked up by the dashboard.~~ I'll use `--metadata` to get this instead per @ywang96's comment.
- `commands`, because the new `.pytorch.json` is also a JSON file.

cc @ywang96 @simon-mo
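As a rough illustration of the first bullet, a JSON encoder along these lines could rewrite infinite floats before serialization. The class name `InfEncoder` comes from the description above, but the exact implementation here is an assumption:

```python
import json
import math


class InfEncoder(json.JSONEncoder):
    """Sketch (not necessarily the PR's exact code): serialize infinite
    floats as the string "inf" so strict JSON parsers accept the output."""

    def _clear_inf(self, o):
        # Recursively replace inf/-inf floats before the default encoding runs.
        if isinstance(o, float) and math.isinf(o):
            return "inf"
        if isinstance(o, dict):
            return {k: self._clear_inf(v) for k, v in o.items()}
        if isinstance(o, list):
            return [self._clear_inf(v) for v in o]
        return o

    def iterencode(self, o, *args, **kwargs):
        return super().iterencode(self._clear_inf(o), *args, **kwargs)


# Example: an unbounded request rate no longer breaks the database's parser.
print(json.dumps({"request_rate": float("inf")}, cls=InfEncoder))
# -> {"request_rate": "inf"}
```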
Just FYI, here is a preview of the new dashboard.