---
title: Ai Chatbot
emoji: 📊
colorFrom: indigo
colorTo: gray
sdk: streamlit
sdk_version: 1.28.0
app_file: main.py
pinned: false
license: mit
---
Run your own AI chatbot locally on a GPU, or even on a CPU.
To make that possible, we use the Mistral 7B model.
However, you can use any quantized model that is supported by llama.cpp.
This AI chatbot lets you define its personality, and it answers questions accordingly.
There is no chat memory in this iteration, so you won't be able to ask follow-up questions.
The chatbot will essentially behave like a question/answer bot.
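Since there is no memory, each question is answered from a single prompt that combines the configured personality with the question. A minimal sketch of that idea (the function name and prompt format below are illustrative assumptions, not code from this repository):

```python
# Hypothetical sketch: assembling a stateless personality prompt.
# The [INST] wrapper follows the Mistral-instruct convention; no chat
# history is included, so every question stands alone.
def build_prompt(personality: str, question: str) -> str:
    return f"[INST] {personality}\n\nQuestion: {question} [/INST]"

print(build_prompt("You are a helpful, concise assistant.", "What is GGUF?"))
```

In the real app, the resulting string would be passed to the quantized model through llama-cpp-python.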
- Install llama-cpp-python
- Install langchain
- Install streamlit
- Run streamlit
The setup assumes you have `python` already installed and the `venv` module available.
- Download the code or clone the repository.
- Inside the root folder of the repository, initialize a python virtual environment:

```shell
python -m venv venv
```

- Activate the python environment:

```shell
source venv/bin/activate
```

- Install the required packages (`langchain`, `llama-cpp-python`, and `streamlit`):

```shell
pip install -r requirements.txt
```

- Start `streamlit`:

```shell
streamlit run main.py
```
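The `requirements.txt` installed above would list the three packages named in the steps. A plausible version of it (exact pins are assumptions, apart from the Streamlit version declared in the frontmatter):

```text
llama-cpp-python
langchain
streamlit==1.28.0
```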
- The `Mistral7b` quantized model from `huggingface` will be downloaded and cached locally from the following link: `mistral-7b-instruct-v0.1.Q4_0.gguf`