| Project | Description | Links |
|---|---|---|
| WebLama | Web application generation | GitHub · Docs |
| GetLLM | LLM model management and code generation | GitHub · PyPI · Docs |
| DevLama | Python code generation with Ollama | GitHub · Docs |
| LogLama | Centralized logging and environment management | GitHub · PyPI · Docs |
| APILama | API service for code generation | GitHub · Docs |
| BEXY | Sandbox for executing generated code | GitHub · Docs |
| JSLama | JavaScript code generation | GitHub · NPM · Docs |
| JSBox | JavaScript sandbox for executing code | GitHub · NPM · Docs |
| SheLLama | Shell command generation | GitHub · PyPI · Docs |
Tom Sapletta — DevOps Engineer & Systems Architect
- 💻 15+ years in DevOps, Software Development, and Systems Architecture
- 🏢 Founder & CEO at Telemonit (Portigen - edge computing power solutions)
- 🌍 Based in Germany | Open to remote collaboration
- 📚 Passionate about edge computing, hypermodularization, and automated SDLC
If you find this project useful, please consider supporting it.
A web frontend for the PyLama ecosystem that provides a user interface for interacting with the various PyLama services. WebLama integrates with LogLama as the primary service for centralized logging, environment management, and service orchestration.
```bash
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install the package in development mode
pip install -e .  # Always install in development mode before starting
```
IMPORTANT: Always run `pip install -e .` before starting the project to ensure all dependencies are properly installed and the package is available in development mode.
To build the Docker image, run the following command from the project directory:
```bash
docker build -t weblama .
```
To run the container, mapping port 8084 on the host to port 80 in the container:

```bash
docker run -p 8084:80 weblama
```

Once the container is running, you can access the application at http://localhost:8084.
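To confirm the container is responding, a quick check from the command line (assuming `curl` is available):

```bash
# Fetch the response headers from the running container
curl -I http://localhost:8084
```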
- `server.js` - The main Node.js application file
- `package.json` - Node.js dependencies and project configuration
- `public/index.html` - The HTML file served by the application
- `Dockerfile` - Instructions for building the Docker image (sketched below)
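For orientation, here is a minimal sketch of what such a Dockerfile could look like given the layout above; the base image, npm steps, and copy order are assumptions rather than the project's actual Dockerfile:

```dockerfile
# Hypothetical sketch; the project's real Dockerfile may differ.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package.json ./
RUN npm install

# Copy the server and the static files it serves
COPY server.js ./
COPY public ./public

# The container listens on port 80 (mapped to host port 8084 above)
EXPOSE 80
CMD ["node", "server.js"]
```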
WebLama integrates with LogLama as the primary service in the PyLama ecosystem. This integration provides:
- Centralized Environment Management: Environment variables are loaded from the central `.env` file in the `devlama` directory
- Dependency Management: Dependencies are validated and installed by LogLama
- Service Orchestration: WebLama is started after all backend services by LogLama
- Centralized Logging: All WebLama operations are logged to the central LogLama system
- Structured Logging: Logs include component context for better filtering and analysis
- Health Monitoring: LogLama monitors WebLama service health and availability
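As a purely illustrative example of that centralized environment management, the central `.env` might hold values like the following; the variable names here are invented for this sketch, not taken from the project:

```bash
# Hypothetical excerpt of devlama/.env; variable names are illustrative only
WEBLAMA_PORT=8081   # default web port (see the Makefile section below)
LOG_LEVEL=INFO      # picked up by the centralized LogLama logging
```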
WebLama includes a Makefile to simplify common development tasks:
```bash
# Set up the project (creates a virtual environment and installs dependencies)
make setup

# Run the web server (default port 8081)
make web

# Run the web server on a custom port
make web PORT=8080

# Run tests
make test

# Clean up project (remove __pycache__, etc.)
make clean

# Show all available commands
make help
```
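For reference, a minimal sketch of how a Makefile like this might wire up these targets, including the `PORT` override; the target names come from the list above, while the recipe bodies (including how `server.js` receives the port) are assumptions:

```makefile
# Hypothetical sketch; the project's actual Makefile may differ.
.PHONY: setup web test clean help

# Default port; `make web PORT=8080` overrides it
PORT ?= 8081

setup:  # create a virtual environment and install dependencies
	python -m venv venv
	. venv/bin/activate && pip install -e .

web:  # serve the frontend on $(PORT)
	PORT=$(PORT) node server.js

test:
	npm test

clean:  # remove __pycache__ directories and similar build artifacts
	find . -type d -name '__pycache__' -prune -exec rm -rf {} +

help:
	@echo "Targets: setup, web [PORT=...], test, clean, help"
```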