
Chess AI Through Language Models: Strategic Reasoning Without Search

Jonathan Rahn | AI Lab Lead, Drees & Sommer

Research Website | Interactive Demos | LAION Research | GitHub | HuggingFace

Quick Start

Welcome to my chess AI research repositories! This work demonstrates that transformer models can learn strategic reasoning without traditional search algorithms.

  • 🎮 Try the models: Interactive Demos
  • 📖 Full Documentation: Research Website
  • 📄 Research Publication: LAION Notes

Key Repositories

  • 🔬 RookWorld - Unified policy+environment models (PyTorch/llm.c)
  • 🎯 rook - Classification approach (HuggingFace Transformers)
  • 🚀 rookworld-trl - RLVR training (TRL framework)

The ROOK Project Evolution

RookWorld-RLVR (Current)

Active development integrating Reinforcement Learning with Verifiable Rewards (RLVR) via GRPO for enhanced reasoning capabilities.

  • Repo (Transformers & TRL): jorahn/RookWorld-TRL
  • Repo (PyTorch): jorahn/RookWorld-RLVR
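
For orientation, below is a minimal sketch of what RLVR-style GRPO post-training with TRL can look like. It assumes TRL's GRPOTrainer interface, a prompt format of bare FEN strings with UCI-move completions, and a hypothetical hub id for the base model; the actual setup lives in the repositories above.

```python
# Hedged sketch of verifiable-reward GRPO post-training with TRL; prompt/completion
# format and model id are illustrative assumptions, not the project's exact setup.
import chess
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def legality_reward(prompts, completions, **kwargs):
    """Verifiable reward: 1.0 if the generated UCI move is legal in the prompted FEN."""
    rewards = []
    for fen, move in zip(prompts, completions):
        board = chess.Board(fen.strip())
        try:
            rewards.append(1.0 if chess.Move.from_uci(move.strip()) in board.legal_moves else 0.0)
        except ValueError:
            rewards.append(0.0)  # unparseable move string
    return rewards

train_dataset = Dataset.from_dict(
    {"prompt": ["rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"]}
)

trainer = GRPOTrainer(
    model="jorahn/RookWorld-LM-124M",   # hypothetical hub id; check the repos above
    reward_funcs=legality_reward,
    args=GRPOConfig(output_dir="rookworld-grpo", num_generations=8, max_completion_length=8),
    train_dataset=train_dataset,
)
trainer.train()
```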

RookWorld-LM (2024) - Unified Agent+Environment

124M params: Unified chess policy and world model in a single transformer architecture. Post: ROOK: REASONING OVER ORGANIZED KNOWLEDGE

  • Collaboration: Jenia Jitsev (LAION/JSC), Qi Sun (Tokyo Tech/Sakana AI)
  • Multi-task Performance:
    • ๐Ÿ† 32.1% Checkmate-in-One accuracy - outperforms ChessGPT-Base (Feng et al., NeurIPS'23) with 124M vs 3B parameters (24x smaller)
    • 99.9% environment simulation accuracy (details: Next State 99.61%, NLS 99.99%, Reward 99.11%, Terminated 99.13%; see RookWorld/README)
    • 26.2% action accuracy
  • Interactive Demo: Try it in your browser
  • Model: RookWorld-LM 124M
  • Dataset: rookworld_7m
  • Repository: jorahn/RookWorld
  • Significance: Enables closed-loop self-play without external engines
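
A rough sketch of the closed-loop idea follows, assuming a HuggingFace causal-LM checkpoint and illustrative "P: "/"A: " prompt prefixes for the policy and environment tasks. The model id, prompt format, and output parsing are assumptions; consult the RookWorld repository and model card for the documented interface.

```python
# Illustrative closed-loop self-play skeleton; prompt formats, parsing, and the model id
# are assumptions, not the documented RookWorld-LM interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "jorahn/RookWorld-LM-124M"   # hypothetical hub id; check the model card
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
for _ in range(10):                      # a short self-play rollout
    move = generate(f"P: {fen}")         # policy task: propose a move for the position
    env = generate(f"A: {fen}+{move}+")  # environment task: simulate the transition
    # parse the environment output into (next_fen, reward, terminated) here
    fen = env.split("+")[0].strip()      # placeholder parsing; the real format differs
```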

ROOK-LM (2024) - Chain-of-Thought Reasoning

124M params: Implementation of reasoning traces for chess, incorporating position analysis → candidate evaluation → move selection.
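
An illustrative example of how such a reasoning trace might be laid out as a single training string is below; the field markers and delimiters are assumptions for exposition rather than the exact ROOK-LM schema.

```python
# Illustrative chain-of-thought training sample: position, candidate moves, evaluations,
# and the selected move in one string (markers are assumed, not the exact ROOK-LM format).
sample = (
    "P: rnbqkbnr/pppp1ppp/8/4p3/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2 "  # position (FEN)
    "M: g1f3 b1c3 f1c4 "                                                # candidate moves
    "E: 0.3 0.2 0.4 "                                                   # candidate evaluations
    "B: f1c4"                                                           # selected best move
)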

ROOK-CLF (2024) - Decoder-based Behavioral Cloning

9M params: Reproduction of Google DeepMind's "Grandmaster-Level Chess Without Search" methodology using a LLaMA-based decoder.
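
To make the classification framing concrete, here is a hedged sketch of a small decoder that reads a tokenized FEN and scores a fixed UCI action space; the sizes, tokenizer, and simplified action enumeration are illustrative and do not describe the ROOK-CLF checkpoint.

```python
# Rough sketch of behavioral cloning as classification: a small decoder reads a tokenized
# FEN and emits one logit per move in a fixed UCI action space. All sizes are assumptions.
import chess
from transformers import LlamaConfig, LlamaForSequenceClassification

squares = [chess.square_name(s) for s in chess.SQUARES]
action_space = [a + b for a in squares for b in squares if a != b]  # simplified: no promotions

config = LlamaConfig(
    vocab_size=128,                # small character-level FEN vocabulary (assumed)
    hidden_size=256,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=512,
    num_labels=len(action_space),  # one class per UCI move
    pad_token_id=0,
)
model = LlamaForSequenceClassification(config)  # FEN tokens in, move-class logits out
```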

LAION Strategic Game Dataset (2023) - Dataset Engineering

Contributed to the LAION Strategic Game Dataset project, which aims to enhance AI models' strategic planning capabilities through game-based synthetic datasets, by developing chess-to-text transformation tools for dataset generation as part of this community effort to explore strategic reasoning in language models.

  • Contribution: Chess dataset generation and transformation pipeline
  • Code: chess-to-text repository
  • Project Scale: 3.2 billion chess games, 608 billion moves via Stockfish self-play
  • Impact: Foundation work that evolved into the ROOK project research
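
In this spirit, a minimal chess-to-text transformation might walk a PGN game and emit one text line per position, as in the sketch below; the output format is an assumption, and the actual pipeline is in the chess-to-text repository.

```python
# Hedged sketch of a chess-to-text transformation: walk a PGN game and emit one text
# line per position (the exact output format of the chess-to-text repository may differ).
import io
import chess.pgn

pgn = io.StringIO("1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 *")
game = chess.pgn.read_game(pgn)

board = game.board()
for move in game.mainline_moves():
    line = f"FEN: {board.fen()} MOVE: {move.uci()}"
    print(line)              # in the real pipeline these lines become training text
    board.push(move)
```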

YoloChess (2022) - Encoder-based Behavioral Cloning

87M params (Custom DeBERTaV2-base, Vocab Size 500): Two-stage training approach using masked language modeling (MLM) pretraining on FEN representations, followed by supervised fine-tuning on a sequence classification objective for move prediction. Established baseline performance and identified key challenges in chess representation for transformer architectures.

  • Dataset: yolochess_lichess-elite_2211
  • Architecture: DeBERTa v2 with custom FEN tokenization and classification head
  • Training: MLM pretraining → Supervised fine-tuning for sequence classification
  • W&B Logs: Training Report
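
A minimal configuration sketch of the two-stage setup is below, assuming HuggingFace's DebertaV2 classes; the layer sizes and label count are placeholders rather than the YoloChess hyperparameters.

```python
# Minimal sketch of the two-stage setup: the same small-vocabulary DeBERTaV2 backbone is
# first trained with MLM on FEN strings, then re-headed for move classification.
# All sizes are assumptions for illustration, not the YoloChess hyperparameters.
from transformers import (
    DebertaV2Config,
    DebertaV2ForMaskedLM,
    DebertaV2ForSequenceClassification,
)

config = DebertaV2Config(
    vocab_size=500,          # custom FEN tokenizer vocabulary (per the description above)
    hidden_size=512,
    num_hidden_layers=6,
    num_attention_heads=8,
    intermediate_size=1024,
)

# Stage 1: masked-language-model pretraining on FEN representations.
mlm_model = DebertaV2ForMaskedLM(config)

# Stage 2: reuse the pretrained backbone with a classification head over candidate moves.
config.num_labels = 1968     # e.g. a fixed UCI move vocabulary; the real label set may differ
clf_model = DebertaV2ForSequenceClassification(config)
# In practice: clf_model.deberta.load_state_dict(mlm_model.deberta.state_dict())
```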

Technical Contributions

Novel Architectures

  • Unified world modeling: Simultaneous policy and environment simulation in transformers
  • Strategic tokenization: Custom representations for structured game states
  • Multi-task scaling: Consistent performance improvements with unified training objectives

Dataset Engineering

  • Large-scale annotation: 40M+ positions annotated with Stockfish 16.1 on supercomputing infrastructure (see the sketch after this list)
  • Multi-format datasets: Support for classification, autoregressive, and multi-task learning
  • Reproducible pipelines: Full data generation code and methodology documentation
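
As referenced above, a single-process sketch of Stockfish-based position annotation with python-chess follows; the engine path, search depth, and output tuple are assumptions, and the production pipeline ran at far larger scale on HPC infrastructure.

```python
# Hedged single-process sketch of Stockfish annotation; engine path and depth are assumptions.
import chess
import chess.engine

STOCKFISH_PATH = "/usr/bin/stockfish"   # adjust to the local engine binary

def annotate(fens, depth=20):
    """Yield (fen, best_move_uci, centipawn_score) for each position."""
    with chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH) as engine:
        for fen in fens:
            board = chess.Board(fen)
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            best = info["pv"][0].uci() if info.get("pv") else None
            score = info["score"].white().score(mate_score=10000)
            yield fen, best, score

for row in annotate(["rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"]):
    print(row)
```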

Open Science Impact

All models, datasets, and code are publicly available, contributing to the democratization of strategic AI research.


Research Context

Background spans early esports content management with leading German clan mTw during competitive gaming's formative years, and founding and scaling the startup readmore.de (CEO, 2005), which earned two esports awards before its acquisition by publishing house Computec Media AG in 2007. Academic foundation in neuro-informatics (University of Lübeck) and business economics & management (Witten/Herdecke University, IPADE Mexico DF, Masters 2012), followed by games publishing startup experience and a transition into data-driven digital performance marketing. Continuous learning includes fast.ai deep learning (2018), the INRIA scikit-learn MOOC (2021), and Mastering LLMs with Hamel Husain (Maven, 2024). Recognition includes the SEOday 2023 best speaker award for GPT-4 content generation innovation. Contributor to the HuggingFace ecosystem (transformers, datasets, evaluate) and other open source frameworks. Current work at Drees & Sommer focuses on building the AI Lab and exploring applications in construction and real estate optimization.


Research Implications

The RookWorld results suggest that:

  1. Search-free strategic AI is viable with appropriate training data
  2. Unified architectures can efficiently handle multiple strategic reasoning tasks
  3. Chain-of-thought training improves both performance and interpretability
  4. Language model paradigms apply effectively to structured strategic domains

These findings have implications beyond chess for any domain requiring sequential decision-making under uncertainty.
