AI Prompt Optimization Platform


Professional AI prompt optimization, debugging, and sharing platform

🚀 Quick Start • 📖 Features • 🛠️ Tech Stack • 📦 Deployment Guide • 🤝 Contribution Guide


📋 Project Overview

AI Prompt Optimization Platform is a professional prompt engineering tool designed to help users optimize AI model prompts, enhancing the effectiveness and accuracy of AI interactions. The platform integrates intelligent optimization algorithms, deep reasoning analysis, visualization debugging tools, and community sharing features, providing comprehensive prompt optimization solutions for AI application developers and content creators.

🎯 Core Values

  • Intelligent Optimization: Automatically analyze and optimize prompt structures based on advanced AI algorithms
  • Deep Reasoning: Provide multi-dimensional thinking analysis to deeply understand user needs
  • Community Sharing: Discover and share high-quality prompt templates, exchange experiences with community users
  • Visualization Debugging: Robust debugging environment with real-time preview of prompt effects

✨ Features

🧠 Intelligent Prompt Optimization

  • Automatic Structure Analysis: Deeply analyze the semantic structure and logical relationships of prompts
  • Multi-Dimensional Optimization: Optimize from multiple dimensions such as clarity, accuracy, and completeness
  • Deep Reasoning Mode: Enable AI deep thinking to provide detailed analysis processes
  • Real-Time Generation: Stream output of optimization results, view the generation process in real-time

📚 Prompt Template Management

  • Template Creation: Save optimized prompts as reusable templates
  • Tag Classification: Support multi-tag classification management for easy searching and organizing
  • Favorites Feature: Favorite templates for quick access to commonly used prompts
  • Usage Statistics: Track template usage and feedback on effectiveness

🌍 Community Sharing Platform

  • Public Sharing: Share high-quality templates with community users
  • Popularity Ranking: Display popular templates based on views, likes, etc.
  • Search Discovery: Powerful search functionality to quickly find needed templates
  • Interactive Features: Social features such as likes, comments, and favorites

🔧 Debugging and Testing Tools

  • Visual Interface: Intuitive user interface to simplify operations
  • Real-Time Preview: Instantly view prompt optimization effects
  • History Records: Save optimization history and support version comparison
  • Export Functionality: Support exporting optimization results in multiple formats
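
Conceptually, exporting a result is just serializing it and handing it to the browser's download mechanism. The following is a minimal TypeScript sketch of that idea using the standard Blob API; the OptimizationResult shape, formats, and file names are illustrative and not taken from the platform's actual implementation.

// Illustrative only: the real export feature may use different formats and fields.
interface OptimizationResult {
  original: string;
  optimized: string;
  reasoning?: string;
}

// Serialize a result and trigger a browser download (JSON or Markdown).
function exportResult(result: OptimizationResult, format: 'json' | 'markdown'): void {
  const content =
    format === 'json'
      ? JSON.stringify(result, null, 2)
      : `# Optimized Prompt\n\n${result.optimized}\n\n## Original\n\n${result.original}\n`;
  const blob = new Blob([content], {
    type: format === 'json' ? 'application/json' : 'text/markdown',
  });
  const url = URL.createObjectURL(blob);
  const link = document.createElement('a');
  link.href = url;
  link.download = format === 'json' ? 'prompt-result.json' : 'prompt-result.md';
  link.click();
  URL.revokeObjectURL(url);
}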

🌐 Multi-Language Support

  • Language Switching: Support Chinese and English interface switching
  • Real-Time Translation: Instant language switching without page refresh
  • Localized Content: Complete localization of all interface elements
  • Browser Detection: Automatic language detection based on browser settings
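
As a rough illustration of how the browser-based language detection described above is commonly wired up with react-i18next, here is a minimal sketch. It assumes the i18next-browser-languagedetector plugin and placeholder translation resources; the project's actual i18n configuration may differ.

// Minimal react-i18next setup (illustrative; not the project's actual configuration).
import i18n from 'i18next';
import { initReactI18next } from 'react-i18next';
import LanguageDetector from 'i18next-browser-languagedetector';

i18n
  .use(LanguageDetector)   // detect the preferred language from the browser
  .use(initReactI18next)   // bind i18next to React components
  .init({
    resources: {
      en: { translation: { generate: 'Generate' } },
      zh: { translation: { generate: '生成' } },
    },
    fallbackLng: 'en',                     // fall back to English if detection fails
    interpolation: { escapeValue: false }, // React already escapes rendered values
  });

export default i18n;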

🛠️ Tech Stack

Backend Technologies

  • Framework: .NET 9.0 + ASP.NET Core
  • AI Engine: Microsoft Semantic Kernel 1.54.0
  • Database: PostgreSQL + Entity Framework Core
  • Authentication: JWT Token authentication
  • Logging: Serilog structured logging
  • API Documentation: Scalar OpenAPI

Frontend Technologies

  • Framework: React 19.1.0 + TypeScript
  • UI Components: Ant Design 5.25.3
  • Routing: React Router DOM 7.6.1
  • State Management: Zustand 5.0.5 (see the sketch after this list)
  • Styling: Styled Components 6.1.18
  • Build Tool: Vite 6.3.5
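
For readers unfamiliar with Zustand, the sketch below shows the kind of lightweight store it provides. The store shape is purely illustrative and is not taken from this codebase.

// Illustrative Zustand store (not the project's actual state shape).
import { create } from 'zustand';

interface PromptState {
  prompt: string;
  deepReasoning: boolean;
  setPrompt: (prompt: string) => void;
  toggleDeepReasoning: () => void;
}

export const usePromptStore = create<PromptState>((set) => ({
  prompt: '',
  deepReasoning: false,
  setPrompt: (prompt) => set({ prompt }),
  toggleDeepReasoning: () => set((state) => ({ deepReasoning: !state.deepReasoning })),
}));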

Core Dependencies

  • AI Model Integration: OpenAI API compatible interface
  • Real-Time Communication: Server-Sent Events (SSE); see the streaming sketch after this list
  • Data Storage: IndexedDB (client-side caching)
  • Rich Text Editing: TipTap editor
  • Code Highlighting: Prism.js + React Syntax Highlighter
  • Internationalization: React i18next for multi-language support
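
To make the SSE-based streaming concrete, here is a minimal TypeScript sketch of reading a server-sent event stream chunk by chunk with fetch. The endpoint path and request payload are hypothetical; they illustrate the pattern rather than the platform's actual API.

// Hypothetical endpoint and payload, shown only to illustrate SSE consumption.
async function streamOptimization(prompt: string): Promise<void> {
  const response = await fetch('/api/v1/prompt/optimize', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`Request failed with status ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by a blank line; data lines start with "data:".
    const events = buffer.split('\n\n');
    buffer = events.pop() ?? '';
    for (const event of events) {
      for (const line of event.split('\n')) {
        if (line.startsWith('data:')) {
          console.log(line.slice(5).trim()); // append the streamed chunk to the UI here
        }
      }
    }
  }
}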

📦 Deployment Guide

Environment Requirements

  • Docker & Docker Compose
  • .NET 9.0 SDK (development environment)
  • Node.js 18+ (development environment)

🚀 Quick Start

  1. Clone the project

git clone https://github.com/AIDotNet/auto-prompt.git
cd auto-prompt

  2. Deploy using Docker Compose

# Start services
docker-compose up -d

# Check service status
docker-compose ps

# View logs
docker-compose logs -f

  3. Access the application

Open http://localhost:10426 in your browser (docker-compose.yaml maps host port 10426 to the service's port 8080).

🔧 Development Environment Setup

  1. Backend Development

cd src/Console.Service
dotnet restore
dotnet run

  2. Frontend Development

cd web
npm install
npm run dev

🌍 Environment Variable Configuration

Configure in src/Console.Service/appsettings.json:

{
  "GenerateModel": "gpt-4o",
  "ConnectionStrings": {
    "DefaultConnection": "Host=localhost;Database=prompt_db;Username=postgres;Password=your_password"
  },
  "Jwt": {
    "Key": "your_jwt_secret_key",
    "Issuer": "auto-prompt",
    "Audience": "auto-prompt-users"
  }
}

🔧 Custom Endpoint Configuration

This platform supports configuring custom AI API endpoints that are compatible with the OpenAI API format.
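
In practice, "OpenAI API compatible" means the service exposes the same request and response shapes under a configurable base URL. The TypeScript sketch below sends a chat-completion request to such an endpoint; the base URL, API key, and model name are placeholders, and the platform's internal calls may differ.

// Placeholder values; any OpenAI-compatible endpoint accepts the same request shape.
const endpoint = 'https://your-custom-api.com/v1';
const apiKey = 'your-api-key';

async function chatCompletion(prompt: string): Promise<string> {
  const response = await fetch(`${endpoint}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o', // e.g. the GenerateModel value shown earlier
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content; // standard OpenAI response shape
}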

Configuration Methods

1. Configuration via Configuration File (Recommended for Production)

Configure in src/Console.Service/appsettings.json:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "OpenAIEndpoint": "https://your-custom-api.com/v1",
  "ConnectionStrings": {
    "Type": "sqlite",
    "Default": "Data Source=/data/ConsoleService.db"
  }
}

2. Configuration via Environment Variables

export OPENAIENDPOINT="https://your-custom-api.com/v1"

3. Docker Compose Environment Variable Configuration

Create or modify docker-compose.yaml:

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    ports:
      - 10426:8080
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=https://your-custom-api.com/v1
      # Optional: Configure database type
      - ConnectionStrings:Type=sqlite
      - ConnectionStrings:Default=Data Source=/data/ConsoleService.db
    volumes:
      - ./data:/data
    build:
      context: .
      dockerfile: src/Console.Service/Dockerfile

Supported API Endpoint Types

The platform supports the following services compatible with the OpenAI API format:

  • OpenAI Official API: https://api.openai.com/v1
  • Azure OpenAI: https://your-resource.openai.azure.com/openai/deployments/your-deployment
  • China-based Proxy Services:
    • https://api.token-ai.cn/v1 (default)
    • https://api.deepseek.com/v1
    • https://api.moonshot.cn/v1
  • Self-hosted Services:
    • Ollama: http://localhost:11434/v1
    • LocalAI: http://localhost:8080/v1
    • vLLM: http://localhost:8000/v1

Complete Docker Compose Configuration Example

Basic Configuration (SQLite Database)

version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    container_name: auto-prompt
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=https://api.openai.com/v1
      - ConnectionStrings:Type=sqlite
      - ConnectionStrings:Default=Data Source=/data/ConsoleService.db
    volumes:
      - ./data:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Advanced Configuration (PostgreSQL Database)

version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    container_name: auto-prompt
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=https://your-custom-api.com/v1
      - ConnectionStrings:Type=postgresql
      - ConnectionStrings:Default=Host=postgres;Database=auto_prompt;Username=postgres;Password=your_password
    depends_on:
      - postgres
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  postgres:
    image: postgres:16-alpine
    container_name: auto-prompt-db
    environment:
      - POSTGRES_DB=auto_prompt
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=your_password
      - TZ=Asia/Shanghai
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:

Local AI Service Configuration (Ollama)

version: '3.8'

services:
  console-service:
    image: registry.cn-shenzhen.aliyuncs.com/tokengo/console
    container_name: auto-prompt
    ports:
      - "10426:8080"
    environment:
      - TZ=Asia/Shanghai
      - OpenAIEndpoint=http://ollama:11434/v1
      - ConnectionStrings:Type=sqlite
      - ConnectionStrings:Default=Data Source=/data/ConsoleService.db
    volumes:
      - ./data:/data
    depends_on:
      - ollama
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0
    restart: unless-stopped
    # Uncomment the following if you have a GPU
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

volumes:
  ollama_data:

Deployment Steps

  1. Select Configuration Template

    Choose one of the configuration templates above according to your needs and save it as docker-compose.yaml.

  2. Modify Configuration Parameters

    # Modify the API endpoint
    - OpenAIEndpoint=https://your-api-endpoint.com/v1
    
    # Modify the database password (if using PostgreSQL)
    - POSTGRES_PASSWORD=your_secure_password
    - ConnectionStrings:Default=Host=postgres;Database=auto_prompt;Username=postgres;Password=your_secure_password
  3. Start the Service

    # Start all services
    docker-compose up -d
    
    # Check the status of the services
    docker-compose ps
    
    # View logs
    docker-compose logs -f console-service
  4. Verify Deployment

    # Check the health status of the service
    curl http://localhost:10426/health
    
    # Access the API documentation
    curl http://localhost:10426/scalar/v1

Environment Variable Descriptions

  • OpenAIEndpoint: AI API endpoint address (default: https://api.token-ai.cn/v1; example: https://api.openai.com/v1)
  • ConnectionStrings:Type: Database type (default: sqlite; examples: postgresql, sqlite)
  • ConnectionStrings:Default: Database connection string (default: Data Source=/data/ConsoleService.db; PostgreSQL example: Host=postgres;Database=auto_prompt;Username=postgres;Password=password)
  • TZ: Time zone setting (default: Asia/Shanghai; examples: UTC, America/New_York)

Troubleshooting

Common Issues
  1. API Endpoint Connection Failure

    # Check if the endpoint is accessible
    curl -I https://your-api-endpoint.com/v1/models
    
    # Check the container network
    docker-compose exec console-service curl -I http://ollama:11434/v1/models
  2. Database Connection Failure

    # Check the PostgreSQL container status
    docker-compose logs postgres
    
    # Test database connection
    docker-compose exec postgres psql -U postgres -d auto_prompt -c "SELECT 1;"
  3. Permission Issues

    # Ensure correct permissions for the data directory
    sudo chown -R 1000:1000 ./data
    chmod 755 ./data
Log Viewing
# View application logs
docker-compose logs -f console-service

# View database logs
docker-compose logs -f postgres

# View all service logs
docker-compose logs -f

Performance Optimization Suggestions

  1. Resource Limitation Configuration

    services:
      console-service:
        deploy:
          resources:
            limits:
              memory: 2G
              cpus: '1.0'
            reservations:
              memory: 512M
              cpus: '0.5'
  2. Database Optimization

    postgres:
      environment:
        - POSTGRES_SHARED_PRELOAD_LIBRARIES=pg_stat_statements
        - POSTGRES_MAX_CONNECTIONS=200
      command: >
        postgres
        -c shared_preload_libraries=pg_stat_statements
        -c max_connections=200
        -c shared_buffers=256MB
        -c effective_cache_size=1GB
  3. Cache Configuration

    services:
      redis:
        image: redis:7-alpine
        container_name: auto-prompt-redis
        ports:
          - "6379:6379"
        volumes:
          - redis_data:/data
        restart: unless-stopped

🏗️ Project Structure

auto-prompt/
├── src/
│   └── Console.Service/          # Backend services
│       ├── Controllers/          # API controllers
│       ├── Services/             # Business services
│       ├── Entities/             # Data entities
│       ├── Dto/                  # Data transfer objects
│       ├── DbAccess/             # Data access layer
│       ├── plugins/              # AI plugin configuration
│       │   └── Generate/         # Prompt generation plugins
│       │       ├── DeepReasoning/           # Deep reasoning
│       │       ├── DeepReasoningPrompt/     # Deep reasoning prompts
│       │       └── OptimizeInitialPrompt/   # Initial optimization
│       └── Migrations/           # Database migrations
├── web/                          # Frontend application
│   ├── src/
│   │   ├── components/           # React components
│   │   │   ├── GeneratePrompt/   # Prompt generation components
│   │   │   ├── Workbench/        # Workbench
│   │   │   ├── DashboardPage/    # Dashboard
│   │   │   └── PromptsPage/      # Prompt management
│   │   ├── stores/               # State management
│   │   ├── api/                  # API interfaces
│   │   ├── styles/               # Style files
│   │   └── utils/                # Utility functions
│   ├── public/                   # Static assets
│   └── dist/                     # Build output
├── docker-compose.yaml           # Docker orchestration configuration
└── README.md                     # Project documentation

🎮 Usage Guide

1. Prompt Optimization

  1. Enter the prompt to be optimized in the workbench
  2. Describe specific requirements and expected outcomes
  3. Choose whether to enable deep reasoning mode
  4. Click "Generate" to start the optimization process
  5. View optimization results and reasoning process

2. Template Management

  1. Save optimized prompts as templates
  2. Add title, description, and tags
  3. Manage personal templates in "My Prompts"
  4. Support editing, deleting, and favoriting operations

3. Community Sharing

  1. Browse popular templates in the prompt plaza
  2. Use the search function to find specific types of templates
  3. Like and favorite interesting templates
  4. Share your high-quality templates with the community

4. Language Switching

  1. Click the language switcher button (🌐) in the top-right corner or sidebar
  2. Select your preferred language (Chinese/English)
  3. The interface will switch languages instantly without page refresh
  4. Your language preference will be saved for future visits

🤝 Contribution Guide

We welcome community contributions! Please follow these steps:

  1. Fork the project to your GitHub account
  2. Create a feature branch: git checkout -b feature/AmazingFeature
  3. Commit your changes: git commit -m 'Add some AmazingFeature'
  4. Push the branch: git push origin feature/AmazingFeature
  5. Create a Pull Request

Development Standards

  • Follow existing code style and naming conventions
  • Add appropriate comments and documentation
  • Ensure all tests pass
  • Update related documentation

📄 Open Source License

This project is licensed under the LGPL (Lesser General Public License).

License Terms

  • ✅ Commercial Use: Allows deployment and use in commercial environments
  • ✅ Distribution: Allows distribution of original code and binaries
  • ✅ Modification: Allows modification of source code for personal or internal use
  • ❌ Commercial Distribution After Modification: Prohibits commercial distribution of modified source code
  • ⚠️ Liability: Use of this software is at the user's own risk

Important Notes

  • Direct deployment of this project for commercial use is allowed
  • Development of internal tools based on this project is allowed
  • Modified source code cannot be redistributed
  • Original copyright statements must be retained

For detailed license terms, please refer to the LICENSE file.

🙏 Acknowledgments

Thanks to the open source projects and technologies listed in the Tech Stack section above.

📞 Contact Us


Star History

Star History Chart

💌 WeChat


If this project helps you, please give us a ⭐ Star!

Made with ❤️ by TokenAI
