Professional AI prompt optimization, debugging, and sharing platform
Quick Start • Features • Tech Stack • Deployment Guide • Contribution Guide
AI Prompt Optimization Platform is a professional prompt engineering tool designed to help users optimize AI model prompts, enhancing the effectiveness and accuracy of AI interactions. The platform integrates intelligent optimization algorithms, deep reasoning analysis, visualization debugging tools, and community sharing features, providing comprehensive prompt optimization solutions for AI application developers and content creators.
- Intelligent Optimization: Automatically analyze and optimize prompt structures based on advanced AI algorithms
- Deep Reasoning: Provide multi-dimensional thinking analysis to deeply understand user needs
- Community Sharing: Discover and share high-quality prompt templates, exchange experiences with community users
- Visualization Debugging: Robust debugging environment with real-time preview of prompt effects
- Automatic Structure Analysis: Deeply analyze the semantic structure and logical relationships of prompts
- Multi-Dimensional Optimization: Optimize from multiple dimensions such as clarity, accuracy, and completeness
- Deep Reasoning Mode: Enable AI deep thinking to provide detailed analysis processes
- Real-Time Generation: Stream output of optimization results, view the generation process in real-time
- Template Creation: Save optimized prompts as reusable templates
- Tag Classification: Support multi-tag classification management for easy searching and organizing
- Favorites Feature: Favorite templates for quick access to commonly used prompts
- Usage Statistics: Track template usage and feedback on effectiveness
- Public Sharing: Share high-quality templates with community users
- Popularity Ranking: Display popular templates based on views, likes, etc.
- Search Discovery: Powerful search functionality to quickly find needed templates
- Interactive Features: Social features such as likes, comments, and favorites
- Visual Interface: Intuitive user interface to simplify operations
- Real-Time Preview: Instantly view prompt optimization effects
- History Records: Save optimization history and support version comparison
- Export Functionality: Support exporting optimization results in multiple formats
- Language Switching: Support Chinese and English interface switching
- Real-Time Translation: Instant language switching without page refresh
- Localized Content: Complete localization of all interface elements
- Browser Detection: Automatic language detection based on browser settings
- Framework: .NET 9.0 + ASP.NET Core
- AI Engine: Microsoft Semantic Kernel 1.54.0
- Database: PostgreSQL + Entity Framework Core
- Authentication: JWT Token authentication
- Logging: Serilog structured logging
- API Documentation: Scalar OpenAPI
- Framework: React 19.1.0 + TypeScript
- UI Components: Ant Design 5.25.3
- Routing: React Router DOM 7.6.1
- State Management: Zustand 5.0.5
- Styling: Styled Components 6.1.18
- Build Tool: Vite 6.3.5
- AI Model Integration: OpenAI API compatible interface
- Real-Time Communication: Server-Sent Events (SSE)
- Data Storage: IndexedDB (client-side caching)
- Rich Text Editing: TipTap editor
- Code Highlighting: Prism.js + React Syntax Highlighter
- Internationalization: React i18next for multi-language support
- Docker & Docker Compose
- .NET 9.0 SDK (development environment)
- Node.js 18+ (development environment)
- Clone the project

  ```shell
  git clone https://github.com/AIDotNet/auto-prompt.git
  cd auto-prompt
  ```

- Deploy using Docker Compose

  ```shell
  # Start services
  docker-compose up -d

  # Check service status
  docker-compose ps

  # View logs
  docker-compose logs -f
  ```
- Access the application
  - Frontend: http://localhost:10426
  - API Documentation: http://localhost:10426/scalar/v1
- Backend Development

  ```shell
  cd src/Console.Service
  dotnet restore
  dotnet run
  ```

- Frontend Development

  ```shell
  cd web
  npm install
  npm run dev
  ```
Configure in `src/Console.Service/appsettings.json`:

```json
{
  "GenerateModel": "gpt-4o",
  "ConnectionStrings": {
    "DefaultConnection": "Host=localhost;Database=prompt_db;Username=postgres;Password=your_password"
  },
  "Jwt": {
    "Key": "your_jwt_secret_key",
    "Issuer": "auto-prompt",
    "Audience": "auto-prompt-users"
  }
}
```
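The `Jwt:Key` value should be a long, random secret rather than a guessable string. One way to generate a suitable value (an illustrative suggestion, not a project requirement) is with `openssl`:

```shell
# Generate a random 256-bit secret, base64-encoded, suitable for Jwt:Key
openssl rand -base64 32
```

Paste the output into the `Key` field and keep it out of version control.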
This platform supports configuring custom AI API endpoints that are compatible with the OpenAI API format.
Configure in `src/Console.Service/appsettings.json`:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "OpenAIEndpoint": "https://your-custom-api.com/v1",
  "ConnectionStrings": {
    "Type": "sqlite",
    "Default": "Data Source=/data/ConsoleService.db"
  }
}
```
Alternatively, set the endpoint via an environment variable:

```shell
export OPENAIENDPOINT="https://your-custom-api.com/v1"
```
Create or modify `docker-compose.yaml`:

```yaml
services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    ports:
      - 10426:8080
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=https://your-custom-api.com/v1
      # Optional: configure database type
      - ConnectionStrings:Type=sqlite
      - ConnectionStrings:Default=Data Source=/data/ConsoleService.db
    volumes:
      - ./data:/data
    build:
      context: .
      dockerfile: src/Console.Service/Dockerfile
```
The platform supports the following services compatible with the OpenAI API format:
- OpenAI Official API: `https://api.openai.com/v1`
- Azure OpenAI: `https://your-resource.openai.azure.com/openai/deployments/your-deployment`
- Domestic Proxy Services:
  - `https://api.token-ai.cn/v1` (default)
  - `https://api.deepseek.com/v1`
  - `https://api.moonshot.cn/v1`
- Self-hosted Services:
  - Ollama: `http://localhost:11434/v1`
  - LocalAI: `http://localhost:8080/v1`
  - vLLM: `http://localhost:8000/v1`
Example: OpenAI official API with SQLite storage:

```yaml
version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    container_name: auto-prompt
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=https://api.openai.com/v1
      - ConnectionStrings:Type=sqlite
      - ConnectionStrings:Default=Data Source=/data/ConsoleService.db
    volumes:
      - ./data:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```
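With the SQLite setup, all state lives in the `./data` directory mapped into the container, so backing up a deployment is just archiving that directory. A minimal sketch (the archive filename pattern is an assumption, not project convention):

```shell
# Archive the ./data volume directory used by the compose file.
# mkdir -p is a no-op on a real deployment where ./data already exists.
mkdir -p ./data
tar czf "auto-prompt-backup-$(date +%F).tar.gz" ./data
```

Stop the service first (`docker-compose stop`) if you need a guaranteed-consistent snapshot of the database file.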
Example: custom API endpoint with PostgreSQL:

```yaml
version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    container_name: auto-prompt
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=https://your-custom-api.com/v1
      - ConnectionStrings:Type=postgresql
      - ConnectionStrings:Default=Host=postgres;Database=auto_prompt;Username=postgres;Password=your_password
    depends_on:
      - postgres
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  postgres:
    image: postgres:16-alpine
    container_name: auto-prompt-db
    environment:
      - POSTGRES_DB=auto_prompt
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=your_password
      - TZ=Asia/Shanghai
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
```
Example: self-hosted Ollama:

```yaml
version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    container_name: auto-prompt
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=http://ollama:11434/v1
      - ConnectionStrings:Type=sqlite
      - ConnectionStrings:Default=Data Source=/data/ConsoleService.db
    volumes:
      - ./data:/data
    depends_on:
      - ollama
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0
    restart: unless-stopped
    # Uncomment the following if you have a GPU
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

volumes:
  ollama_data:
```
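If Ollama runs on the host machine rather than inside Compose, `localhost` inside the `console-service` container will not reach it. A common workaround (an assumption, not part of the original setup) is Docker's `host-gateway` alias:

```yaml
# Hypothetical override: point the container at a host-local Ollama
services:
  console-service:
    environment:
      - OpenAIEndpoint=http://host.docker.internal:11434/v1
    extra_hosts:
      - "host.docker.internal:host-gateway"  # required on Linux; built in on Docker Desktop
```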
1. Select a configuration template

   Choose one of the configuration templates above according to your needs and save it as `docker-compose.yaml`.

2. Modify the configuration parameters

   ```yaml
   # Modify the API endpoint
   - OpenAIEndpoint=https://your-api-endpoint.com/v1

   # Modify the database password (if using PostgreSQL)
   - POSTGRES_PASSWORD=your_secure_password
   - ConnectionStrings:Default=Host=postgres;Database=auto_prompt;Username=postgres;Password=your_secure_password
   ```

3. Start the service

   ```shell
   # Start all services
   docker-compose up -d

   # Check the status of the services
   docker-compose ps

   # View logs
   docker-compose logs -f console-service
   ```

4. Verify the deployment

   ```shell
   # Check the health status of the service
   curl http://localhost:10426/health

   # Access the API documentation
   curl http://localhost:10426/scalar/v1
   ```
| Variable Name | Description | Default Value | Example |
|---|---|---|---|
| `OpenAIEndpoint` | AI API endpoint address | `https://api.token-ai.cn/v1` | `https://api.openai.com/v1` |
| `ConnectionStrings:Type` | Database type | `sqlite` | `postgresql`, `sqlite` |
| `ConnectionStrings:Default` | Database connection string | `Data Source=/data/ConsoleService.db` | `Host=postgres;Database=auto_prompt;Username=postgres;Password=password` |
| `TZ` | Time zone setting | `Asia/Shanghai` | `UTC`, `America/New_York` |
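These keys can also be supplied as plain environment variables outside of Compose. ASP.NET Core's configuration provider maps a double underscore (`__`) in a variable name to the `:` separator, since `:` is not portable in shell variable names. A sketch:

```shell
# Equivalents of the ConnectionStrings:* keys as shell exports
export OpenAIEndpoint="https://api.openai.com/v1"
export ConnectionStrings__Type="sqlite"
export ConnectionStrings__Default="Data Source=/data/ConsoleService.db"
```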
- API endpoint connection failure

  ```shell
  # Check if the endpoint is accessible
  curl -I https://your-api-endpoint.com/v1/models

  # Check the container network
  docker-compose exec console-service curl -I http://ollama:11434/v1/models
  ```

- Database connection failure

  ```shell
  # Check the PostgreSQL container status
  docker-compose logs postgres

  # Test the database connection
  docker-compose exec postgres psql -U postgres -d auto_prompt -c "SELECT 1;"
  ```

- Permission issues

  ```shell
  # Ensure correct permissions for the data directory
  sudo chown -R 1000:1000 ./data
  chmod 755 ./data
  ```
```shell
# View application logs
docker-compose logs -f console-service

# View database logs
docker-compose logs -f postgres

# View all service logs
docker-compose logs -f
```
- Resource limits

  ```yaml
  services:
    console-service:
      deploy:
        resources:
          limits:
            memory: 2G
            cpus: '1.0'
          reservations:
            memory: 512M
            cpus: '0.5'
  ```

- Database optimization

  ```yaml
  postgres:
    environment:
      - POSTGRES_SHARED_PRELOAD_LIBRARIES=pg_stat_statements
      - POSTGRES_MAX_CONNECTIONS=200
    command: >
      postgres
      -c shared_preload_libraries=pg_stat_statements
      -c max_connections=200
      -c shared_buffers=256MB
      -c effective_cache_size=1GB
  ```

- Cache configuration

  ```yaml
  services:
    redis:
      image: redis:7-alpine
      container_name: auto-prompt-redis
      ports:
        - "6379:6379"
      volumes:
        - redis_data:/data
      restart: unless-stopped
  ```
```
auto-prompt/
├── src/
│   └── Console.Service/            # Backend services
│       ├── Controllers/            # API controllers
│       ├── Services/               # Business services
│       ├── Entities/               # Data entities
│       ├── Dto/                    # Data transfer objects
│       ├── DbAccess/               # Data access layer
│       ├── plugins/                # AI plugin configuration
│       │   ├── Generate/           # Prompt generation plugins
│       │   ├── DeepReasoning/      # Deep reasoning
│       │   ├── DeepReasoningPrompt/   # Deep reasoning prompts
│       │   └── OptimizeInitialPrompt/ # Initial optimization
│       └── Migrations/             # Database migrations
├── web/                            # Frontend application
│   ├── src/
│   │   ├── components/             # React components
│   │   │   ├── GeneratePrompt/     # Prompt generation components
│   │   │   ├── Workbench/          # Workbench
│   │   │   ├── DashboardPage/      # Dashboard
│   │   │   └── PromptsPage/        # Prompt management
│   │   ├── stores/                 # State management
│   │   ├── api/                    # API interfaces
│   │   ├── styles/                 # Style files
│   │   └── utils/                  # Utility functions
│   ├── public/                     # Static assets
│   └── dist/                       # Build output
├── docker-compose.yaml             # Docker orchestration configuration
└── README.md                       # Project documentation
```
- Enter the prompt to be optimized in the workbench
- Describe specific requirements and expected outcomes
- Choose whether to enable deep reasoning mode
- Click "Generate" to start the optimization process
- View optimization results and reasoning process
- Save optimized prompts as templates
- Add title, description, and tags
- Manage personal templates in "My Prompts"
- Support editing, deleting, and favoriting operations
- Browse popular templates in the prompt plaza
- Use the search function to find specific types of templates
- Like and favorite interesting templates
- Share your high-quality templates with the community
- Click the language switcher button (globe icon) in the top-right corner or sidebar
- Select your preferred language (Chinese/English)
- The interface will switch languages instantly without page refresh
- Your language preference will be saved for future visits
We welcome community contributions! Please follow these steps:
- Fork the project to your GitHub account
- Create a feature branch: `git checkout -b feature/AmazingFeature`
- Commit your changes: `git commit -m 'Add some AmazingFeature'`
- Push the branch: `git push origin feature/AmazingFeature`
- Create a Pull Request
- Follow existing code style and naming conventions
- Add appropriate comments and documentation
- Ensure all tests pass
- Update related documentation
This project is licensed under the LGPL (Lesser General Public License).
- ✅ Commercial Use: Allows deployment and use in commercial environments
- ✅ Distribution: Allows distribution of original code and binaries
- ✅ Modification: Allows modification of source code for personal or internal use
- ❌ Commercial Distribution After Modification: Prohibits commercial distribution of modified source code
- ⚠️ Liability: Use of this software is at the user's own risk
- Direct deployment of this project for commercial use is allowed
- Development of internal tools based on this project is allowed
- Modified source code cannot be redistributed
- Original copyright statements must be retained
For detailed license terms, please refer to the LICENSE file.
Thanks to the following open source projects and technologies:
- Microsoft Semantic Kernel - AI orchestration framework
- Ant Design - React UI component library
- React - Frontend framework
- .NET - Backend framework
- Project Homepage: https://github.com/AIDotNet/auto-prompt
- Issue Reporting: GitHub Issues
- Official Website: https://token-ai.cn
- Technical Support: Submit via GitHub Issues
If this project helps you, please give us a ⭐ Star!

Made with ❤️ by TokenAI