Enhancing Automated Grading in Science Education through LLM-Driven Causal Reasoning and Multimodal Analysis

This repository provides the code and data samples for our IJCAI 2025 paper, accepted in the Human-Centred AI (HAI) track.

Paper Title: Enhancing Automated Grading in Science Education through LLM-Driven Causal Reasoning and Multimodal Analysis
Authors: Haohao Zhu, Tingting Li, Peng He, Jiayu Zhou
📄 Read the paper


Abstract

Automated assessment of open responses in K–12 science education poses significant challenges due to the multimodal nature of student work, which often integrates textual explanations, drawings, and handwritten elements. Traditional evaluation methods that focus solely on visual or textual analysis fail to capture the full breadth of student reasoning and are susceptible to biases such as handwriting neatness or answer length.

🧩 Our Solution

We propose an LLM-augmented grading framework that:

  • Extracts causal reasoning graphs from both student text and drawings.
  • Compares student reasoning to rubrics and high-quality responses (see the sketch after this list).
  • Incorporates handwriting quality and drawing clarity to debias human grading.
  • Fuses textual, visual, and graph-based features for final multimodal scoring.
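
To make the graph comparison concrete, here is a minimal sketch that represents a reasoning graph as a set of directed cause→effect edges and scores a student graph by its overlap with a rubric graph. The node labels and the overlap metric are illustrative assumptions, not the repository's actual schema:

# Minimal sketch: reasoning graphs as sets of (cause, effect) edges,
# scored by how many rubric edges the student's graph recovers.
# Node labels and the overlap metric are illustrative assumptions.

def edge_overlap(student_edges, rubric_edges):
    """Fraction of rubric edges present in the student's graph."""
    if not rubric_edges:
        return 0.0
    return len(student_edges & rubric_edges) / len(rubric_edges)

rubric = {("sunlight", "evaporation"), ("evaporation", "condensation"),
          ("condensation", "rain")}
student = {("sunlight", "evaporation"), ("evaporation", "rain")}

print(edge_overlap(student, rubric))  # 0.33... -> a partial-credit signal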


📊 Data Samples and Processed Metadata

To protect student privacy, the full original dataset is not publicly released. We share de‑identified sample files that let you run and test the code end‑to‑end. You can find original student response samples and corresponding LLM-processed metadata in:

llm_grading_edu/auto_grading/Data_Sample
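
For example, you could inspect the sample metadata with a few lines of Python. This is a minimal sketch that assumes the metadata is stored as JSON files; adjust the glob pattern to the actual file names in the folder:

# Sketch: list the sample files and preview any JSON metadata.
# Assumes JSON-formatted metadata; adapt the pattern to the actual files.
import json
from pathlib import Path

sample_dir = Path("llm_grading_edu/auto_grading/Data_Sample")
for path in sorted(sample_dir.glob("*.json")):
    with path.open() as f:
        metadata = json.load(f)
    print(path.name, list(metadata)[:5])  # preview top-level keys, if dict-shaped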

Here is an illustration of a student's original response and the reasoning graph extracted by the LLM:


⚙️ Functional Code Modules

We provide two main code modules:

  • llm_grading_edu/LLM – For extracting structured causal reasoning graphs.
  • llm_grading_edu/auto_grading – For running automated scoring using the extracted structure and multimodal inputs.

🚀 Getting Started

🛠️ Installation

We recommend using Conda:

# Create a new conda environment
conda create -n llm_grading python=3.10

# Activate the environment
conda activate llm_grading

# Install required packages
conda env update -f llm_grading_edu/llm_grading_env.yaml

💻 Running the Code

1. LLM Reasoning (Causal Graph Extraction)

To run the LLM-based causal reasoning extraction:

# Run the following commands from the llm_grading_edu folder
cd llm_grading_edu

# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export DASHSCOPE_API_KEY="your-dashscope-key"

# Run the extraction script (OpenAI or Qwen)
python llm_reasoning.py openai     # for OpenAI
python llm_reasoning.py qwen       # for Qwen
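
For intuition about what the extraction step does, here is a minimal sketch of prompt-based causal-graph extraction using the OpenAI Python client. The model name, prompt wording, and output schema below are illustrative assumptions; llm_reasoning.py defines the actual prompts and parsing:

# Sketch of prompt-based causal-graph extraction with the OpenAI client.
# Model name, prompt, and JSON schema are illustrative assumptions.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the script configures its own
    messages=[{
        "role": "user",
        "content": ("Extract the causal chain from this student answer as "
                    "JSON edges [{\"cause\": ..., \"effect\": ...}]: "
                    "'The sun heats the water, so it evaporates and later "
                    "falls as rain.'"),
    }],
)
print(response.choices[0].message.content)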

2. Automated Grading

# Navigate to the grading module and run from the llm_grading_edu/auto_grading folder
cd llm_grading_edu/auto_grading

# Run grading
python main.py
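
main.py produces the final score by fusing textual, visual, and graph-based features. As a rough illustration of such a late-fusion step (the feature names and weights here are hypothetical placeholders, not the repository's actual configuration):

# Sketch: late fusion of per-modality features into one score.
# Feature names and weights are hypothetical placeholders.
features = {"text_similarity": 0.8, "graph_overlap": 0.67, "drawing_clarity": 0.9}
weights  = {"text_similarity": 0.5, "graph_overlap": 0.35, "drawing_clarity": 0.15}

score = sum(weights[k] * features[k] for k in weights)
print(f"fused score: {score:.2f}")  # e.g. 0.77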

📜 License

This project is licensed under the MIT License.
