Welcome to the Mini LLM project! This project demonstrates a mini language model (LLM) application built with LangChain and Streamlit. The app lets users interact with a language model through a web interface and forwards their requests to the Gemini API for language processing.
## Features

- Interactive Web Interface: built with Streamlit, providing an easy-to-use interface for interacting with the LLM.
- API Integration: uses the Gemini API for handling language processing tasks.
- Dynamic Responses: generates and displays responses based on user input.
## Tech Stack

- Frontend: Streamlit (for creating the web interface)
- Backend: LangChain (for managing language model interactions)
- API: Gemini (for language processing)
- Version Control: Git
## Getting Started

To set up and run this project locally, follow the steps below.

### Prerequisites

- Python (v3.8+)
- Git
### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/mini-llm.git
   cd mini-llm
   ```
2. Create a virtual environment:

   ```bash
   python -m venv venv
   ```
3. Activate the virtual environment:

   - Windows:

     ```bash
     venv\Scripts\activate
     ```

   - macOS/Linux:

     ```bash
     source venv/bin/activate
     ```
4. Install dependencies (a sample requirements.txt is sketched after these steps):

   ```bash
   pip install -r requirements.txt
   ```
5. Set up environment variables: create a `.env` file in the project root and add the following variable (the sketch at the end of this README shows one way the app can load it):

   ```text
   GEMINI_API_KEY=your_gemini_api_key
   ```
6. Run the Streamlit app:

   ```bash
   streamlit run app.py
   ```
7. View the app: visit `http://localhost:8501` in your browser to interact with the mini LLM.
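The exact dependencies are pinned in requirements.txt. As a rough sketch only, a minimal set for this stack could look like the list below; langchain-google-genai (LangChain's Gemini integration) and python-dotenv (for reading the `.env` file) are assumptions, not confirmed contents of the project's file:

```text
streamlit
langchain
langchain-google-genai
python-dotenv
```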
## Project Structure

```
├── app.py
├── langchain
│   ├── model_manager.py
│   ├── response_generator.py
├── .env
├── requirements.txt
└── README.md
```
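To illustrate how the pieces fit together, here is a minimal sketch of what an app.py for this layout could look like. It is a sketch under stated assumptions, not the project's actual implementation: ChatGoogleGenerativeAI comes from the separate langchain-google-genai integration package, the gemini-1.5-flash model name is a placeholder, and python-dotenv is assumed for loading the `.env` file.

```python
# Minimal sketch of app.py -- illustrative only, not the project's real code.
import os

import streamlit as st
from dotenv import load_dotenv  # assumed: python-dotenv reads the .env file
from langchain_google_genai import ChatGoogleGenerativeAI  # assumed integration package

# Pull GEMINI_API_KEY from the .env file created during setup.
load_dotenv()

st.title("Mini LLM")

# LangChain chat model backed by the Gemini API; the model name is a placeholder.
llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    google_api_key=os.environ["GEMINI_API_KEY"],
)

prompt = st.text_input("Ask the model something:")

if st.button("Generate") and prompt:
    # invoke() returns a chat message; .content holds the generated text.
    response = llm.invoke(prompt)
    st.write(response.content)
```

With the virtual environment active and the API key set, `streamlit run app.py` serves this sketch at `http://localhost:8501`, matching the steps above.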