Welcome to my chess AI research repositories! This work demonstrates that transformer models can learn strategic reasoning without traditional search algorithms.
🎮 Try the models: Interactive Demos • 📖 Full Documentation: Research Website • 📄 Research Publication: LAION Notes
- 🔬 RookWorld - Unified policy+environment models (PyTorch/llm.c)
- 🎯 rook - Classification approach (HuggingFace Transformers)
- 🚀 rookworld-trl - RLVR training (TRL framework)
Active development integrating Reinforcement Learning with Verifiable Rewards (RLVR) via Group Relative Policy Optimization (GRPO) for enhanced reasoning capabilities; a minimal training sketch follows the repository links below.
- Repo (Transformers & TRL): jorahn/RookWorld-TRL
- Repo (PyTorch): jorahn/RookWorld-RLVR
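A minimal sketch of how such an RLVR setup can be wired with TRL's `GRPOTrainer`; the model id, the `P: <FEN>` prompt format, and the legality-based reward below are illustrative assumptions, not the repositories' exact configuration:

```python
# Hedged sketch: RLVR-style GRPO training with TRL and a verifiable reward.
# Model id, prompt format, and reward heuristic are illustrative only.
import chess
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def legal_move_reward(prompts, completions, **kwargs):
    """Return 1.0 if the completion ends in a legal UCI move for the
    FEN in the prompt, else 0.0 (a deliberately simple verifiable reward)."""
    rewards = []
    for prompt, completion in zip(prompts, completions):
        fen = prompt.removeprefix("P: ").strip()  # assumed prompt format
        try:
            board = chess.Board(fen)
            move = chess.Move.from_uci(completion.strip().split()[-1])
            rewards.append(1.0 if move in board.legal_moves else 0.0)
        except (ValueError, IndexError):
            rewards.append(0.0)
    return rewards

# Single-prompt toy dataset; real training would stream annotated positions.
dataset = Dataset.from_dict(
    {"prompt": ["P: rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"]}
)

trainer = GRPOTrainer(
    model="jorahn/RookWorld-LM-124M",  # assumed HF model id
    reward_funcs=legal_move_reward,
    args=GRPOConfig(output_dir="rookworld-grpo", num_generations=8),
    train_dataset=dataset,
)
trainer.train()
```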
124M params: Unified chess policy and world model in a single transformer architecture. Post: ROOK: REASONING OVER ORGANIZED KNOWLEDGE
- Collaboration: Jenia Jitsev (LAION/JSC), Qi Sun (Tokyo Tech/Sakana AI)
- Multi-task Performance:
- 🏆 32.1% Checkmate-in-One accuracy - outperforms ChessGPT-Base (Feng et al., NeurIPS'23) with 124M vs 3B parameters (24x smaller)
- 99.9% environment simulation accuracy (details: Next State 99.61%, NLS 99.99%, Reward 99.11%, Terminated 99.13%; see RookWorld/README)
- 26.2% action accuracy
- Interactive Demo: Try it in your browser
- Model: RookWorld-LM 124M
- Dataset: rookworld_7m
- Repository: jorahn/RookWorld
- Significance: Enables closed-loop self-play without external engines (prompt scheme sketched below)
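A hedged sketch of querying both tasks from the unified model; the HF model id and the `P:`/`A:` task prefixes are assumptions based on the description above, so check the repository for the exact prompt schema:

```python
# Hedged sketch: one transformer serving policy and environment tasks.
# Model id and prompt prefixes are assumptions, not the verified schema.
from transformers import pipeline

generator = pipeline("text-generation", model="jorahn/RookWorld-LM-124M")
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

# Policy task: analyse the position and propose a move.
policy = generator(f"P: {fen}", max_new_tokens=144)[0]["generated_text"]
print(policy)

# Environment task: simulate the next state, reward, and termination
# for a given move, without calling a chess engine.
env = generator(f"A: {fen}+e2e4", max_new_tokens=96)[0]["generated_text"]
print(env)
```

Chaining the two calls, with the environment's predicted next state fed back into the next `P:` prompt, is what closes the self-play loop without an external engine.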
124M params: Implementation of reasoning traces for chess, incorporating position analysis → candidate evaluation → move selection (trace format sketched after the list below).
- Dataset: rook_40m (6B tokens, generated on Tsubame 4.0)
- Architecture: GPT-2 with custom chess tokenization
- Performance: 22.2% action accuracy, 24.4% Checkmate-in-One with reasoning traces
- Technical Details: LAION Research Note
- Interactive Demo: Try it in your browser
- Model: ROOK-LM 124M
- Repository: jorahn/RookWorld
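For illustration, a parser for a ROOK-style trace, assuming hypothetical `M:`/`E:`/`B:` field labels for candidates, evaluations, and the selected move (the exact schema is documented in the repository):

```python
# Hedged sketch: splitting a reasoning trace into its three stages.
# The field labels and the example trace are illustrative assumptions.
import re

trace = "P: <FEN> M: e2e4 d2d4 g1f3 E: 0.3 0.3 0.2 B: e2e4"

fields = dict(re.findall(r"([PMEB]):\s*(.*?)(?=\s[PMEB]:|$)", trace))
candidates = fields["M"].split()                  # candidate moves
evals = [float(x) for x in fields["E"].split()]   # per-candidate scores
best = fields["B"]                                # selected move
print(candidates, evals, best)
```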
9M params: Reproduction of Google DeepMind's "Grandmaster-Level Chess Without Search" methodology using a LLaMA-based decoder (evaluation metrics sketched after the list below).
- Performance: 49% action accuracy, 57% on Checkmate-in-One
- Achievement: Demonstrated searchless chess AI feasibility with minimal parameters
- Model: Available on HuggingFace
- Interactive Demo: Try it in your browser
- Repository: jorahn/rook
- Training Logs: W&B
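The two headline metrics quoted across these projects can be pinned down precisely; a sketch assuming predictions arrive as (predicted move, reference move, FEN) triples in UCI notation:

```python
# Hedged sketch: the two evaluation metrics used in these sections.
# The (pred, reference, fen) input format is an assumption for illustration.
import chess

def action_accuracy(triples):
    """Fraction of positions where the predicted move matches the
    Stockfish reference move exactly."""
    return sum(pred == ref for pred, ref, _ in triples) / len(triples)

def checkmate_in_one(triples):
    """Fraction of mate-in-one puzzles where the predicted move is
    legal and actually delivers checkmate."""
    hits = 0
    for pred, _, fen in triples:
        board = chess.Board(fen)
        try:
            move = chess.Move.from_uci(pred)
        except ValueError:
            continue
        if move in board.legal_moves:
            board.push(move)
            hits += board.is_checkmate()
    return hits / len(triples)
```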
Contributed to the LAION Strategic Game Dataset project, which called for community participation to improve AI models' strategic planning through game-based synthetic datasets. Developed chess-to-text transformation tools for dataset generation as part of this effort exploring strategic reasoning in language models (a minimal transform is sketched after the list below).
- Contribution: Chess dataset generation and transformation pipeline
- Code: chess-to-text repository
- Project Scale: 3.2 billion chess games, 608 billion moves via Stockfish self-play
- Impact: Foundation work that evolved into the ROOK project research
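One illustrative chess-to-text transform, assuming FEN-to-move pairs as the target format (the actual pipeline and output formats live in the chess-to-text repository):

```python
# Hedged sketch: turning a PGN game into per-ply text training samples.
# The tab-separated FEN -> move format is an illustrative assumption.
import io
import chess.pgn

pgn = io.StringIO("1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 *")
game = chess.pgn.read_game(pgn)

board = game.board()
for move in game.mainline_moves():
    print(f"{board.fen()}\t{move.uci()}")  # one sample per position
    board.push(move)
```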
87M params (custom DeBERTaV2-base, vocab size 500): Two-stage training using masked language modeling (MLM) pretraining on FEN representations, followed by supervised fine-tuning on a sequence classification objective for move prediction (sketched after the list below). Established baseline performance and identified key challenges in chess representation for transformer architectures.
- Dataset: yolochess_lichess-elite_2211
- Architecture: DeBERTa v2 with custom FEN tokenization and classification head
- Training: MLM pretraining โ Supervised fine-tuning for sequence classification
- W&B Logs: Training Report
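A sketch of the two-stage setup with `transformers`; the tokenizer path and the move-label count are placeholders, while the vocab size of 500 and the MLM-then-classification order follow the description above:

```python
# Hedged sketch: MLM pretraining, then swapping in a classification head.
# Tokenizer path and num_labels are placeholders, not the project's values.
from transformers import (
    AutoTokenizer,
    DebertaV2Config,
    DebertaV2ForMaskedLM,
    DebertaV2ForSequenceClassification,
)

tokenizer = AutoTokenizer.from_pretrained("path/to/fen-tokenizer")
config = DebertaV2Config(vocab_size=500)

# Stage 1: MLM pretraining on FEN strings (Trainer loop omitted).
mlm_model = DebertaV2ForMaskedLM(config)

# Stage 2: classification fine-tuning for move prediction, reusing
# the pretrained backbone weights under the new head.
config.num_labels = 4096  # placeholder size of the move-label space
clf_model = DebertaV2ForSequenceClassification(config)
clf_model.deberta.load_state_dict(mlm_model.deberta.state_dict())
```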
- Unified world modeling: Simultaneous policy and environment simulation in transformers
- Strategic tokenization: Custom representations for structured game states
- Multi-task scaling: Consistent performance improvements with unified training objectives
- Large-scale annotation: 40M+ positions annotated with Stockfish 16.1 on supercomputing infrastructure (annotation loop sketched below)
- Multi-format datasets: Support for classification, autoregressive, and multi-task learning
- Reproducible pipelines: Full data generation code and methodology documentation
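The annotation loop itself can be reproduced with python-chess and any local Stockfish build; the engine path, search depth, and MultiPV count below are placeholders rather than the actual cluster job configuration:

```python
# Hedged sketch: annotating one position with top-5 Stockfish lines.
# Engine path and search limits are placeholders.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")

board = chess.Board()
infos = engine.analyse(board, chess.engine.Limit(depth=20), multipv=5)
for info in infos:
    move = info["pv"][0].uci()                          # candidate move
    cp = info["score"].white().score(mate_score=10000)  # centipawn eval
    print(move, cp)
engine.quit()
```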
All models, datasets, and code publicly available. Contributing to democratization of strategic AI research.
Background spans early esports content management with leading German clan mTw during competitive gaming's formative years, and founding and scaling the startup readmore.de (CEO, 2005), which earned two esports awards before its acquisition by publishing house Computec Media AG in 2007. Academic foundation in neuro-informatics (University of Lübeck) and business economics & management (Witten/Herdecke University, IPADE Mexico DF, Masters 2012), followed by games publishing startup experience and a transition into data-driven digital performance marketing. Continuous learning includes fast.ai deep learning (2018), the INRIA scikit-learn MOOC (2021), and Mastering LLMs with Hamel Husain (Maven, 2024). Recognition includes the SEOday 2023 best speaker award for GPT-4 content generation innovation. Contributor to the HuggingFace ecosystem (transformers, datasets, evaluate) and open source frameworks. Current work at Drees & Sommer: building the AI Lab and exploring applications in construction and real estate optimization.
The RookWorld results suggest that:
- Search-free strategic AI is viable with appropriate training data
- Unified architectures can efficiently handle multiple strategic reasoning tasks
- Chain-of-thought training improves both performance and interpretability
- Language model paradigms apply effectively to structured strategic domains
These findings have implications beyond chess for any domain requiring sequential decision-making under uncertainty.