torchao/_models/llama/eval.py does not work with latest lm_eval #404

Description

@gau-nernst

Using lm_eval==0.4.2, I'm getting this error

```
Traceback (most recent call last):
  File "/home/---/code/ao/torchao/_models/llama/eval.py", line 128, in <module>
    run_evaluation(
  File "/home/---/code/ao/torchao/_models/llama/eval.py", line 100, in run_evaluation
    TransformerEvalWrapper(
  File "/home/---/code/ao/torchao/_models/_eval.py", line 188, in __init__
    super().__init__(None, None)
  File "/home/---/code/ao/torchao/_models/_eval.py", line 55, in __init__
    super().__init__()
TypeError: HFLM.__init__() missing 1 required positional argument: 'pretrained'
```

Installing v0.4.0 fixed the error:

```
pip install git+https://github.com/EleutherAI/[email protected]
```
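
For reference, the failure appears to come from lm_eval 0.4.2 making `pretrained` a required positional argument of `HFLM.__init__`, while the torchao wrapper calls `super().__init__()` with no arguments. If we instead wanted eval.py to tolerate both versions, one option is to check the signature before calling the base class. This is a hypothetical sketch, not torchao's code or a tested fix; the class name and the `"gpt2"` placeholder are only illustrative:

```python
# Hypothetical sketch, not torchao's actual fix: make the HFLM base-class call
# work with both lm_eval 0.4.0 and 0.4.2 by checking whether `pretrained`
# still has a default value.
import inspect

from lm_eval.models.huggingface import HFLM


class VersionTolerantEvalWrapper(HFLM):
    def __init__(self):
        params = inspect.signature(HFLM.__init__).parameters
        needs_pretrained = (
            "pretrained" in params
            and params["pretrained"].default is inspect.Parameter.empty
        )
        if needs_pretrained:
            # lm_eval 0.4.2: `pretrained` is required, so pass a small
            # placeholder model name (this does load gpt2 from the Hub);
            # the wrapper is expected to swap in its own model afterwards.
            super().__init__(pretrained="gpt2")
        else:
            # lm_eval 0.4.0: the no-argument call used in torchao/_models/_eval.py.
            super().__init__()
```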

Maybe we should pin the lm_eval version? Since this is for internal testing only, just mentioning the specific version in the script is probably fine (or pinning it in dev-requirements.txt).
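
If we go the "mention the specific version in the script" route, a small guard at the top of eval.py could make the expectation explicit. A minimal sketch, assuming we standardize on 0.4.0 (the version the script is known to work with); the constant name and error message are only illustrative:

```python
# Sketch of a version guard for eval.py; the pinned version is illustrative.
from importlib.metadata import version

EXPECTED_LM_EVAL = "0.4.0"

installed = version("lm_eval")
if installed != EXPECTED_LM_EVAL:
    raise RuntimeError(
        f"eval.py is tested against lm_eval=={EXPECTED_LM_EVAL}, "
        f"but found {installed}. Install the expected version with:\n"
        f"  pip install lm_eval=={EXPECTED_LM_EVAL}"
    )
```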
