Productionize the dataset we are using for BackendBench #57
Conversation
BackendBench/data_loaders.py
df = batch.to_pandas()
for _, row in df.iterrows():
    op_name = row["op_name"]
    if filter is None or any(f in op_name for f in filter):
Is the intent to have exact matching, prefix matching, or something else?
Right now (for torchbench) we just search for the string in the op name. This should be fine for now, but if we want to be more rigorous we could do exact matching in another PR.
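To make the distinction concrete (the op names and filter below are purely illustrative, not taken from the repo), the two behaviours would look roughly like:

```python
ops = ["aten.sum.dim_IntList", "aten.cumsum.default"]
filters = ["sum"]

# Current behaviour: substring search, so "sum" matches both ops above.
substring_hits = [op for op in ops if any(f in op for f in filters)]

# Stricter alternative (possible follow-up PR): exact match on full op names,
# which would match neither op here.
exact_hits = [op for op in ops if op in set(filters)]
```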
Can you keep the non-augmented dataset as the default for now? Considering we're sprinting, I don't want us to change the underlying eval infra too much unless it's to fix bugs.
This looks like a much bigger PR than it actually is; it's most of the work we need to do on the repo end for #44.
This PR
I think items 3 and 4 definitely require the most review.
The schema is described in the comment at the top of BackendBench/scripts/parquet_trace_converter.py.
I'd also take a close look at the filters in BackendBench/scripts/dataset_filters.py, as they cover a bunch of ops that seem not to be useful in a benchmark, but I'd like a second look.
BackendBench/scripts/parquet_trace_converter.py offers a trace-to-parquet mode and a parquet-to-trace mode. The parquet-to-trace mode is self-explanatory. The trace-to-parquet mode actually creates two parquet files: the first is a "dev" parquet that contains a bunch of extra metadata on the inputs, while the final parquet (which I refer to as prod) is the result of all the filtering and is the one that should be used in benchmarks.
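As a rough sketch of the dev/prod split (the column names and the shape of the filters here are assumptions for illustration, not the actual schema or code in parquet_trace_converter.py):

```python
import pandas as pd

def write_dev_and_prod_parquets(rows, dev_path, prod_path, filters):
    """rows: dicts parsed from a trace file; filters: predicates over op names (assumed)."""
    dev_df = pd.DataFrame(rows)
    # "dev" parquet: every traced call, including the extra input metadata columns.
    dev_df.to_parquet(dev_path, index=False)

    # "prod" parquet: only rows whose op survives every filter, ready for benchmarking.
    mask = dev_df["op_name"].map(lambda name: all(f(name) for f in filters))
    dev_df[mask].to_parquet(prod_path, index=False)
```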
You can find explanations of the trace files (this section can be removed later, as it should not be permanent) and the argument schema at https://huggingface.co/datasets/GPUMODE/huggingface_op_trace (I will add the parquet schema once it is finalized).
The result of creating and uploading a parquet to Hugging Face: https://huggingface.co/datasets/GPUMODE/huggingface_op_trace
As validation that this works, here is the round-trip conversion of the tritonbench data (trace -> parquet (dev) -> trace): https://www.diffchecker.com/YYiJ43cq/. The differences come from the fact that we rename the op "aten.sum.SymInt" to "aten.sum.dim_IntList".
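For reference, the round-trip check amounts to something like this (a minimal sketch; the rename above is the one difference we expect, and the line-by-line comparison is an assumption about how the trace files are laid out):

```python
RENAMES = {"aten.sum.SymInt": "aten.sum.dim_IntList"}

def normalize(line: str) -> str:
    # Apply the known op rename so it doesn't show up as a spurious diff.
    for old, new in RENAMES.items():
        line = line.replace(old, new)
    return line.strip()

def traces_match(original_trace: str, roundtripped_trace: str) -> bool:
    with open(original_trace) as a, open(roundtripped_trace) as b:
        return [normalize(x) for x in a] == [normalize(x) for x in b]
```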