This is LCM-Lab, an open-source research team within the OpenNLG Group that focuses on long-context modeling and optimization. Below is a list of our work. Please feel free to explore!
- LOOM-Eval: A comprehensive and efficient framework for long-context model evaluation
- L-CiteEval (ACL 2025): A faithfulness-oriented benchmark for long-context citation
- MMLongCite: A benchmark for evaluating the fidelity of long-context vision-language models
- LOGO (ICML 2025): Long cOntext aliGnment via efficient preference Optimization
- Global-Mamba (ACL 2025): An efficient long-context modeling architecture
If you have any questions about the code or paper details, please don't hesitate to open an issue or contact us directly at [email protected].