
LCM-Lab

Towards an unbounded context for foundation models

Hi there 👋

This is LCM-Lab, an open-source research team within the OpenNLG Group focused on long-context modeling and optimization. Below is a list of our work; please feel free to explore!


🔹 Long-Context Reward

  1. LongRM: Pushing the limits of reward modeling beyond 128K tokens [GitHub] [arXiv]

🔹 Long-Context Evaluation

  1. LOOM-Eval: A comprehensive and efficient framework for long-context model evaluation [GitHub] [arXiv]

  2. L-CiteEval (ACL 2025): A faithfulness-oriented benchmark for long-context citation [GitHub] [ACL Anthology]

  3. MMLongCite: A benchmark for evaluating the fidelity of long-context vision-language models [GitHub] [arXiv]

🔹 Long-Context Modeling

  1. CDT: Context Denoising Training for long-context modeling [GitHub] [arXiv]

  2. LOGO (ICML 2025): Long cOntext aliGnment via efficient preference Optimization [GitHub] [ICML]

  3. Global-Mamba (ACL 2025): An efficient long-context modeling architecture [GitHub] [ACL Anthology]


If you have any questions about the code or paper details, please don’t hesitate to open an issue or contact us directly at [email protected].

Pinned repositories

  1. LOOM-Eval (Python): A comprehensive and efficient long-context model evaluation framework

  2. LOOM-Train (Python): A simple and efficient training framework for long-context models

  3. L-CITEEVAL (Python): Evaluating the faithfulness of long-context language models

  4. context-denoising-training (Python): Context denoising training for long-context modeling

  5. LongRM (Python): Revealing and unlocking the context boundary of reward models

  6. LOGO (Jupyter Notebook): Code for the paper "Long cOntext aliGnment via efficient preference Optimization"
