
OPTML Group

Welcome to the OPTML Group's GitHub Repository!

About Us

The OPtimization and Trustworthy Machine Learning (OPTML) group (Group Website) is an active research group at Michigan State University. Our research interests span machine learning (ML)/deep learning (DL), optimization, computer vision, security, signal processing, and data science, with a focus on developing learning algorithms and theory as well as robust and explainable artificial intelligence (AI). These research themes provide a solid foundation for our long-term research objective: making AI systems scalable and trustworthy.

As AI moves from the lab into the real world (e.g., autonomous vehicles), ensuring its safety becomes a paramount requirement prior to deployment. Moreover, as datasets, ML/DL models, and learning tasks become increasingly complex, getting ML/DL to scale calls for new advances in learning algorithm design. More broadly, research on robust and scalable AI can make a significant impact on machine learning theory and enable promising applications in, e.g., automated ML, meta-learning, privacy and security, hardware design, and big data analysis. We seek new learning frontiers where current learning algorithms become infeasible, and aim to formalize the foundations of secure learning.

We are always looking for passionate students to join the team as RAs, TAs, externs, interns, or visiting students (more info)!

Pinned

  1. Unlearn-Saliency

    [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, D…
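    (A minimal, unofficial sketch of the weight-saliency idea appears after this pinned list.)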

    Python · 137 stars · 28 forks

  2. UnlearnCanvas

    [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng …

    Python · 78 stars · 2 forks

  3. Diffusion-MU-Attack

    The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces one fast and e…

    Python · 85 stars · 4 forks

  4. AdvUnlearn

    Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models". This work adversarially unlearns the text encoder to enh…

    Jupyter Notebook · 49 stars · 2 forks

  5. Unlearn-Sparse

    [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu

    Python · 81 stars · 11 forks

  6. Unlearn-Simple

    [NeurIPS25] Official repo for "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning"

    Python · 34 stars · 11 forks
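
The first pinned repository, Unlearn-Saliency, implements SalUn, which steers unlearning updates toward the weights most salient to the data being forgotten. As a loose illustration only (this is not the SalUn codebase; the names weight_saliency_masks, forget_loader, and threshold_pct are hypothetical), one common way to realize gradient-based weight saliency is to accumulate the gradient magnitude of a forgetting loss over the forget set and keep the weights above a chosen percentile:

```python
import torch

def weight_saliency_masks(model, forget_loader, loss_fn, device, threshold_pct=50.0):
    """Accumulate |grad| of a forgetting loss over the forget set and return
    one binary mask per parameter marking the most forget-relevant weights."""
    model.eval()
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}

    for inputs, targets in forget_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach().abs()

    # Keep only the weights whose accumulated gradient magnitude exceeds the
    # chosen percentile; these "salient" weights are the ones an unlearning
    # method would be allowed to update.
    flat = torch.cat([s.flatten() for s in scores.values()])
    cutoff = torch.quantile(flat, threshold_pct / 100.0)
    return {n: (s >= cutoff).float() for n, s in scores.items()}
```

During unlearning, each parameter update would then be multiplied elementwise by the corresponding mask so that only the salient weights move; the exact forgetting objective and schedule used by SalUn are documented in the repository itself.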

Repositories

Showing 10 of 36 repositories
  • OPTML-Group.github.io
    SCSS · 1 star · 2 forks · 0 issues · 0 PRs · Updated Oct 27, 2025
  • Unlearn-Backdoor
    Python · 0 stars · GPL-3.0 license · 0 forks · 1 issue · 0 PRs · Updated Oct 21, 2025
  • Unlearn-FullStack
    Python · 2 stars · MIT license · 0 forks · 0 issues · 0 PRs · Updated Oct 11, 2025
  • Unlearn-R2MU
    Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills
    Python · 2 stars · 0 forks · 0 issues · 0 PRs · Updated Oct 8, 2025
  • Unlearn-Simple
    [NeurIPS25] Official repo for "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning"
    Python · 34 stars · MIT license · 11 forks · 1 issue · 1 PR · Updated Oct 3, 2025
  • Unlearn-Smooth
    [ICML25] Official repo for "Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond"
    Python · 14 stars · MIT license · 0 forks · 0 issues · 0 PRs · Updated Sep 28, 2025
  • Unlearn-Trace
    Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs
    Python · 23 stars · MIT license · 1 fork · 0 issues · 0 PRs · Updated Jul 5, 2025
  • CyclicReflex
    "CyclicReflex: Improving Large Reasoning Models via Cyclical Reflection Token Scheduling" by Chongyu Fan, Yihua Zhang, Jinghan Jia, Alfred Hero, Sijia Liu
    Python · 4 stars · MIT license · 0 forks · 1 issue · 0 PRs · Updated Jun 22, 2025
  • VLM-Safety-Unlearn
    Safety Mirage: How Spurious Correlations Undermine VLM Safety Fine-tuning
    Python · 12 stars · MIT license · 0 forks · 1 issue · 0 PRs · Updated Jun 17, 2025
  • Unlearn-ILU
    Python · 4 stars · MIT license · 0 forks · 0 issues · 0 PRs · Updated Jun 15, 2025

