💡 A Streamlit-based legal chatbot powered by a RAG architecture, built to deliver document-grounded answers that avoid the hallucinations of generic LLMs.
- 🧠 RAG (Retrieval-Augmented Generation) backed memory
- 🗃️ Custom vectorstore using FAISS for fast retrieval
- 🛡️ Avoids hallucinations — answers are based on your uploaded documents
- 🧩 Modular code (`create_memory.py`, `connect_memory.py`, `chatbot.py`)
- ⚡ Instant and interactive UI with Streamlit
- 📁 Clean folder structure with `data/` and `vectorstore/` separation
📁 data/
└── your uploaded PDFs
📁 vectorstore/db_faiss/
└── vector index stored here
📄 create_memory.py
└── Converts PDFs to chunks → embeds them with an embedding model → stores vectors in FAISS
📄 connect_memory.py
└── Loads vectorstore and connects it to LLM
📄 chatbot.py
└── Streamlit UI to chat with your knowledge base
| Feature | LawgicAI | Traditional LLMs |
|---|---|---|
| Hallucination-Free | ✅ | ❌ |
| Domain-Aware (Legal) | ✅ | ❌ |
| Custom Data Injection | ✅ | ❌ |
| Modular Architecture | ✅ | ❌ |
| Streamlit UI | ✅ | ❌ |
- Clone the repository and install dependencies

```bash
git clone https://github.com/yuvraj-kumar-dev/LawgicAI-Chatbot.git
cd LawgicAI-Chatbot
pip install -r requirements.txt
```

- Create the vectorstore

```bash
python create_memory.py
```

- Start the chatbot

```bash
streamlit run chatbot.py
```
```
LawgicAI-Chatbot/
├── data/                  ← Your PDFs go here
├── vectorstore/db_faiss/  ← Vector DB created by FAISS
├── create_memory.py       ← Create memory (RAG setup)
├── connect_memory.py      ← Connect memory to LLM
├── chatbot.py             ← Streamlit interface
├── LICENSE
└── README.md
```
This project is licensed under the MIT License - see the LICENSE file for details.
If you find this useful, give it a ⭐ and feel free to contribute!
Built with ❤️ by Yuvraj Kumar