lalanikarim/ai-chatbot

---
title: Ai Chatbot
emoji: 📊
colorFrom: indigo
colorTo: gray
sdk: streamlit
sdk_version: 1.28.0
app_file: main.py
pinned: false
license: mit
---

# Streamlit + LangChain + llama.cpp w/ Mistral

Run your own AI Chatbot locally on a GPU or even a CPU.

To make that possible, we use the Mistral 7B model.
However, you can use any quantized model that is supported by llama.cpp.

This chatbot lets you define its personality, and it answers questions accordingly.
There is no chat memory in this iteration, so follow-up questions are not supported; the chatbot behaves as a simple question-and-answer bot.
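As a rough illustration of how a personality can be combined with each standalone question, here is a sketch of Mistral's `[INST]` instruction format. The helper name and personality text below are assumptions for illustration, not code from this repo:

```python
# Sketch: wrap a "personality" (system-style instruction) and a user question
# in Mistral's instruction format. Since there is no chat memory, every
# question is formatted independently like this.

def build_prompt(personality: str, question: str) -> str:
    """Combine the personality and a single question into one Mistral prompt."""
    return f"<s>[INST] {personality}\n\n{question} [/INST]"

prompt = build_prompt(
    "You are a concise assistant that answers in one sentence.",
    "What is llama.cpp?",
)
```

Because no prior turns are included in the prompt, the model cannot see earlier questions or answers.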

## TL;DR instructions

  1. Install llama-cpp-python
  2. Install langchain
  3. Install streamlit
  4. Run streamlit
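The three installs above typically come down to a requirements file like the following (package names as listed above; no version pins implied):

```text
llama-cpp-python
langchain
streamlit
```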

## Step-by-step instructions

The setup assumes you have Python already installed and the venv module available.

1. Download the code or clone the repository.
2. Inside the root folder of the repository, initialize a Python virtual environment:
   ```shell
   python -m venv venv
   ```
3. Activate the virtual environment:
   ```shell
   source venv/bin/activate
   ```
4. Install the required packages (langchain, llama-cpp-python, and streamlit):
   ```shell
   pip install -r requirements.txt
   ```
5. Start Streamlit:
   ```shell
   streamlit run main.py
   ```
6. On first run, the quantized Mistral 7B model is downloaded from Hugging Face and cached locally: mistral-7b-instruct-v0.1.Q4_0.gguf
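The download-once-then-cache behaviour in the last step can be sketched as follows. The helper name and cache directory are illustrative assumptions, not the app's actual code:

```python
# Sketch: fetch a model file only if it is not already cached locally.
import os
import urllib.request

def cached_model_path(url: str, cache_dir: str = "models") -> str:
    """Return the local path for a model URL, downloading it only if absent."""
    os.makedirs(cache_dir, exist_ok=True)
    local_path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(local_path):
        # One-time download; subsequent runs reuse the cached file.
        urllib.request.urlretrieve(url, local_path)
    return local_path
```

Subsequent runs skip the download entirely, so only the first start requires network access and the several-gigabyte transfer.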

## Screenshot

Screenshot from 2023-10-23 20-00-09