```bash
pip install minillmlib

# For HuggingFace/local models (beta, not well tested):
pip install "minillmlib[huggingface]"
```
A Python library for interacting with multiple LLM providers (OpenAI, Anthropic, Mistral, HuggingFace, and custom URL endpoints).
Author: Quentin Feuillade--Montixi
```bash
git clone https://github.com/qfeuilla/MiniLLMLib.git
cd MiniLLMLib
pip install -e .  # install in editable mode
```
```python
import os

import minillmlib as mll

# Create a GeneratorInfo for your model/provider
gi = mll.GeneratorInfo(
    model="gpt-4",
    _format="openai",
    api_key=os.getenv("OPENAI_API_KEY"),  # recommended: keep secrets in env vars
)

# Create a chat node (conversation root)
chat = mll.ChatNode(content="Hello!", role="user")

# Synchronous completion
response = chat.complete_one(gi)
print(response.content)

# Or use the asynchronous version (see the sketch below):
# response = await chat.complete_one_async(gi)
```
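The async form shown in the comment above needs an event loop. A minimal sketch of driving `complete_one_async` from a plain script, reusing the `gi` and `chat` objects from the quick start:

```python
import asyncio

async def main():
    # complete_one_async mirrors complete_one but must be awaited
    response = await chat.complete_one_async(gi)
    print(response.content)

asyncio.run(main())
```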
- Unified interface for major LLM providers:
- OpenAI, Anthropic, Mistral, HuggingFace (local), custom URL (e.g. OpenRouter)
- Thread (linear) and loom (tree/branching) conversation modes (a branching sketch follows this list)
- Synchronous & asynchronous API
- Audio completions (OpenAI audio models, beta)
- Flexible parameter/config management via `GeneratorInfo` and `GeneratorCompletionParameters`
- Save/load conversation trees
- Extensible: add new models/providers easily
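Loom mode builds a tree of alternative completions instead of a single linear thread. A minimal sketch of that idea, with one loud assumption: that each `complete_one` call on the same node produces a new child branch. This is not confirmed API; see the Usage Guide for the actual branching semantics.

```python
# Hedged sketch of loom-style branching. Assumption: each complete_one call
# on `root` returns a new child ChatNode, i.e. a new branch under the root.
root = mll.ChatNode(content="Suggest a name for a hiking app.", role="user")

branches = [root.complete_one(gi) for _ in range(3)]  # three candidate branches
for node in branches:
    print(node.content)  # each branch holds one alternative completion
```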
- See the Usage Guide for advanced usage, parameter tables, and branching/loom semantics.
- See the Provider Matrix for supported models and configuration tips (a hedged configuration example appears below).
- See Troubleshooting for common issues and debugging.
- Set API keys as environment variables for security (see the Configuration Guide).
- Run tests with `pytest tests/`
- See Contributing for contribution guidelines.
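The `GeneratorInfo` pattern from the quick start should carry over to other providers. A hedged sketch for Anthropic, assuming by analogy that `_format` accepts provider names other than `"openai"` (the only value confirmed above); check the Provider Matrix for the actual supported values:

```python
import os
import minillmlib as mll

# Hedged sketch: _format="anthropic" is an assumption by analogy with
# _format="openai" from the quick start; verify it in the Provider Matrix.
claude = mll.GeneratorInfo(
    model="claude-3-opus-20240229",
    _format="anthropic",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
)
```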
For more, see the full documentation at minillmlib.readthedocs.io or open an issue on GitHub if you need help.