
# ExpertBot

RAG-powered API and ChatBot for serving all LLaMA models on custom documents.

## Prerequisites

1. Create a conda environment from the `environment.yml` file:

   ```shell
   conda env create -f environment.yml
   ```

2. Activate the newly created environment:

   ```shell
   conda activate rag
   ```

## Use with GET request

### Step 0

Define your Hugging Face token as an environment variable:

```shell
export HUGGING_FACE_HUB_TOKEN="YOUR_TOKEN"
```

### Step 1

Run `main.py` with uvicorn to serve the FastAPI app on port 8000:

```shell
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

### Step 2

Open http://localhost:8000/llama3/?query=YOUR_QUESTION in your browser, or use the following command:

```shell
curl -X 'GET' \
  'http://localhost:8000/llama3/?query=YOUR_QUESTION' \
  -H 'accept: application/json'
```

Endpoint available: `GET /llama3/?query=...`
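The same request can be issued from Python using only the standard library. A sketch, assuming the server from Step 1 is running on localhost:8000; the helper names are illustrative, not part of this repo:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8000"

def build_llama3_url(question: str, base_url: str = BASE_URL) -> str:
    # URL-encode the question so spaces and punctuation survive the query string.
    return f"{base_url}/llama3/?" + urllib.parse.urlencode({"query": question})

def ask(question: str) -> dict:
    # Requires the uvicorn server from Step 1 to be running.
    req = urllib.request.Request(
        build_llama3_url(question),
        headers={"accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```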

## Use the UI-based ChatBot

This repo includes a simple Streamlit chat UI powered by the streamlit-chat template.

Complete the steps in the "Use with GET request" section first.

### Step 1

Run the Streamlit UI:

```shell
cd ui
streamlit run app.py
```

### Step 2

Open http://localhost:8501/ to see the ChatBot app.

Optional: set a different backend URL if the API is not running on localhost:8000:

```shell
export RAG_API_BASE_URL="http://your-api-host:8000"
```
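A sketch of how UI code might resolve this variable, falling back to the local default. The variable name matches the export above; the helper name is hypothetical:

```python
import os

DEFAULT_BASE_URL = "http://localhost:8000"

def get_api_base_url() -> str:
    # Fall back to the local FastAPI server when RAG_API_BASE_URL is unset;
    # strip any trailing slash so paths can be appended cleanly.
    return os.environ.get("RAG_API_BASE_URL", DEFAULT_BASE_URL).rstrip("/")
```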
