You can find our dataset on Hugging Face: 🤗ChartQAPro Dataset
To evaluate your model on ChartQAPro, follow the steps below:
Save your model's predictions in a .json file that contains a list of dictionaries.
Each dictionary should include the following keys (the first three are taken from the original Hugging Face dataset):
"Answer"
: the ground truth answer"Question Type"
: the type of the question (e.g., Factoid, MCQ, etc.)"Year"
: useful for evaluating year-based answers"prediction"
: your model’s predicted answer
[
  {
    "Answer": ["2016"],
    "Question Type": "Factoid",
    "Year": ["YES"],
    "prediction": "2016"
  },
  ...
]
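If it helps, here is a minimal sketch of how such a file might be produced from the Hugging Face dataset. The dataset identifier and the my_model_predict function are assumptions for illustration, not part of the official evaluation code; the three dataset keys are copied through unchanged.

# Hypothetical sketch: build predictions.json from the Hugging Face dataset.
# The dataset ID ("ahmed-masry/ChartQAPro"), split name, and my_model_predict()
# are assumptions for illustration only.
import json
from datasets import load_dataset

dataset = load_dataset("ahmed-masry/ChartQAPro", split="test")

def my_model_predict(example):
    # Replace with your model's inference call on the chart image + question.
    return "2016"

records = []
for example in dataset:
    records.append({
        "Answer": example["Answer"],                # ground truth answer (copied from the dataset)
        "Question Type": example["Question Type"],  # e.g., Factoid, MCQ
        "Year": example["Year"],                    # used for year-based scoring
        "prediction": my_model_predict(example),    # your model's output
    })

with open("predictions.json", "w") as f:
    json.dump(records, f, indent=2)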
Then install the evaluation dependencies and run the scoring script:
pip install anls pandas
python evaluate_predictions.py --predictions-file path/to/your/predictions.json
This will print your model’s performance across different question types and the overall score, following the official evaluation metrics used in the paper. 📊
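For reference, the anls package installed above exposes the ANLS score used for string answers; a minimal sketch of a single comparison is below. This is an illustration only, and the full protocol used in the paper is what evaluate_predictions.py implements.

# Illustration only: per-answer ANLS score via the anls package installed above.
# Official results should come from evaluate_predictions.py.
from anls import anls_score

score = anls_score(prediction="2016", gold_labels=["2016"], threshold=0.5)
print(score)  # 1.0 for an exact match; near-matches above the threshold get partial credit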
If you have any questions about this work, please contact Ahmed Masry using the following email addresses: [email protected], [email protected], or [email protected].
If you use ChartQAPro in your research, please cite:
@misc{masry2025chartqaprodiversechallengingbenchmark,
title={ChartQAPro: A More Diverse and Challenging Benchmark for Chart Question Answering},
author={Ahmed Masry and Mohammed Saidul Islam and Mahir Ahmed and Aayush Bajaj and Firoz Kabir and Aaryaman Kartha and Md Tahmid Rahman Laskar and Mizanur Rahman and Shadikur Rahman and Mehrad Shahmohammadi and Megh Thakkar and Md Rizwan Parvez and Enamul Hoque and Shafiq Joty},
year={2025},
eprint={2504.05506},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.05506},
}