
bulk-chain-shell 0.25.0

Third-party providers hosting ↗️

This project is a bash-shell client for the bulk-chain no-string framework, for reasoning over your data using a predefined CoT prompt schema.

📺 This video showcases application of the Sentiment Analysis Schema ↗️ to LLaMA-3-70B-Instruct hosted by Replicate for reasoning over submitted texts.

Extra Features

  • ✅ Progress caching [for remote LLMs]: withstands exceptions during LLM calls by caching LLM answers with the sqlite3 engine;

Installation

pip install git+https://github.com/nicolay-r/bulk-chain-shell@master

Usage

Interact with an LLM via the command line, with LLM output streaming support. The video below illustrates an example application to sentiment analysis: extracting the author's opinion towards a mentioned object in text.

Interactive Mode

Quick start:

  1. ⬇️ Download the replicate provider for bulk-chain:
wget https://raw.githubusercontent.com/nicolay-r/nlp-thirdgate/refs/heads/master/llm/replicate_104.py
  2. 📜 Set up your reasoning schema thor_cot_schema.json according to the following example ↗️ (an illustrative sketch also follows the command below)
  3. 🚀 Launch demo.py as follows:
python3 -m bulk_chain_shell.demo \
    --schema "test/schema/thor_cot_schema.json" \
    --adapter "dynamic:replicate_104.py:Replicate" \
    %%m \
    --model_name "meta/meta-llama-3-70b-instruct" \
    --api_token "<REPLICATE-API-TOKEN>" \
    --stream
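
For reference, below is an illustrative sketch of what such a schema may look like. The field layout ("name", "schema", "prompt", "out") and the curly-brace placeholders (which refer to input columns or to outputs of earlier steps) follow the bulk-chain examples; treat the linked thor_cot_schema.json as the authoritative version:

{
  "name": "sentiment_cot_sketch",
  "schema": [
    {"prompt": "What is the author's opinion towards '{target}' in the text: {text}", "out": "opinion"},
    {"prompt": "Given the opinion '{opinion}', classify the sentiment towards '{target}' as positive, negative, or neutral.", "out": "label"}
  ]
}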

Inference Mode

NOTE: You have to install the source-iter and tqdm packages, which are actual dependencies of this project.
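
Both are available on PyPI and can be installed in one step:
pip install source-iter tqdm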

  1. ⬇️ Download the replicate provider for bulk-chain:
wget https://raw.githubusercontent.com/nicolay-r/nlp-thirdgate/refs/heads/master/llm/replicate_104.py
  2. 📜 Set up your reasoning schema.json according to the following example ↗️
  3. 🚀 Launch inference using DeepSeek-R1:
python3 -m bulk_chain_shell.infer \
    --src "<PATH-TO-YOUR-CSV-or-JSONL>" \
    --schema "test/schema/default.json" \
    --adapter "replicate_104.py:Replicate" \
    %%m \
    --model_name "deepseek-ai/deepseek-r1" \
    --api_token "<REPLICATE-API-TOKEN>"
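
The --src file is a CSV or JSONL table whose columns (or JSON keys) supply the values for the curly-brace placeholders in your schema. A minimal hypothetical JSONL input, assuming a schema that references a {text} field, could look like:

{"text": "The new update made the app noticeably faster."}
{"text": "Battery life dropped sharply after the update."}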

Embed your LLM

All you have to do is implement the BaseLM class, which includes:

  • __init__ -- for setting up batching-mode support and an (optional) model name;
  • ask(prompt) -- to infer your model with the given prompt.

See examples with models at nlp-thirdgate 🌌.
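
Below is a minimal sketch of such a provider. The import path matches the providers published at nlp-thirdgate (e.g. replicate_104.py), while the constructor signature is an assumption to be checked against your installed bulk-chain version:

# Hypothetical provider that echoes prompts back; replace `ask`
# with a real call to your model or hosting API.
from bulk_chain.core.llm_base import BaseLM

class EchoModel(BaseLM):

    def __init__(self, model_name="echo", **kwargs):
        # `name` is assumed to be accepted by BaseLM; adjust if needed.
        super().__init__(name=model_name, **kwargs)
        self.model_name = model_name

    def ask(self, prompt):
        # This is where your model or remote API gets called.
        return f"[{self.model_name}] {prompt}"

Saved as, say, echo_model.py, such a provider can then be plugged in via the same adapter syntax as above: --adapter "dynamic:echo_model.py:EchoModel".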

