Easily transform your prototype AI code into production-ready software.
Applying AI is becoming increasingly accessible, but many case studies have shown that these applications are often deployed poorly. This can lead to suboptimal performance and unintended biases. GreatAI helps fix this by allowing you to easily transform your prototype AI code into production-ready software.
```sh
pip install sis-great-ai
```
Create a new file called `demo.py`:

```python
from great_ai import GreatAI

@GreatAI.create
def greeter(name: str) -> str:
    return f"Hello {name}!"
```
Start it by executing `great-ai demo.py`, and find the dashboard at http://localhost:6060.
That's it. Your GreatAI service is nearly ready for production use. Many of the SE4ML best practices are configured and implemented automatically (of course, these can be customised as well).
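To give an intuition for what a wrapping decorator like `GreatAI.create` can add, here is a minimal, purely illustrative sketch (not GreatAI's actual implementation): a `create` decorator that records each prediction's input and output so the call becomes traceable. All names here (`create`, the `traces` attribute) are assumptions for the sake of the example.

```python
import functools
from typing import Any, Callable, Dict, List


def create(func: Callable[..., Any]) -> Callable[..., Dict[str, Any]]:
    """Hypothetical sketch: wrap a prediction function and record a
    trace (input + output) for every call, as a production wrapper might."""
    traces: List[Dict[str, Any]] = []

    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Dict[str, Any]:
        output = func(*args, **kwargs)
        trace = {"input": {"args": args, "kwargs": kwargs}, "output": output}
        traces.append(trace)  # a real service would persist this
        return trace

    wrapper.traces = traces  # expose recorded traces for inspection
    return wrapper


@create
def greeter(name: str) -> str:
    return f"Hello {name}!"


print(greeter("World")["output"])  # Hello World!
```

The point is only that decorating the function leaves your prototype code untouched while the wrapper layers production concerns (logging, tracing, serving) around it.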
Check out the full documentation here.
GreatAI fits between the prototype and deployment phases of your AI development lifecycle. This is highlighted in blue in the diagram. Here, several best practices can be automatically implemented, aiming to achieve the following attributes:
- General: use any Python library without restriction
- Robust: have error-handling and well-tested utilities out-of-the-box
- End-to-end: utilise end-to-end feedback as a built-in, first-class concept
- Automated: focus only on what actually requires your attention
- Trustworthy: deploy models that you and society can confidently trust
There are other existing solutions aiming to facilitate this phase. Amazon SageMaker and Seldon Core provide the most comprehensive suite of features. If you have the opportunity to use them, do that because they're great.
However, research indicates that professionals rarely use them. This may be due to their inherent setup and operational complexity. GreatAI is designed to be as simple to use as possible. Its straightforward, high-level API and sensible default configuration make it easy to start using. Despite its relative simplicity over Seldon Core, it still implements many of the SE4ML best practices, and thus, can meaningfully improve your deployment without requiring prohibitively great effort.
Find `great-ai` on PyPI:

```sh
pip install sis-great-ai
```

Find `great-ai` on DockerHub:

```sh
docker run -p6060:6060 scoutinscience/great-ai
```
Contributions are welcome.

Set up a development environment:

```sh
python3 -m venv --copies .env
source .env/bin/activate
pip install --upgrade flit pip
flit install --symlink
```

Format code:

```sh
scripts/format-python.sh great_ai docs tests
```

Run tests:

```sh
python3 -m pytest --doctest-modules --asyncio-mode=strict .
```

Serve documentation:

```sh
mkdocs serve
```