LocalSite AI - now with Thinking Model Support!

A modern web application that uses AI to generate HTML, CSS, and JavaScript code based on natural language prompts. Simply describe what you want to build, and the AI will create a complete, self-contained web page for you.

Features

  • AI-Powered Code Generation: Generate complete web pages from text descriptions
  • Live Preview: See your generated code in action with desktop, tablet, and mobile views
  • Code Editing: Edit the generated code directly in the browser
  • Multiple AI Providers: Support for DeepSeek, custom OpenAI-compatible APIs, and local models
  • Responsive Design: Works on desktop and mobile devices
  • Modern UI: Clean, dark-themed interface with a focus on usability

Tech Stack

  • Next.js (React) with TypeScript

Getting Started

Prerequisites

  • Node.js with npm (or yarn) installed

Installation

  1. Clone the repository:

    git clone https://github.com/weise25/LocalSite-ai.git
    cd LocalSite-ai
  2. Install the dependencies:

    npm install
    # or
    yarn install
  3. Rename the .env.example file in the root directory to .env.local and add your API key:

    # Choose one of the following providers:
    
    # DeepSeek API
    DEEPSEEK_API_KEY=your_deepseek_api_key_here
    DEEPSEEK_API_BASE=https://api.deepseek.com/v1
    
    # Custom OpenAI-compatible API
    # OPENAI_COMPATIBLE_API_KEY=your_api_key_here
    # OPENAI_COMPATIBLE_API_BASE=https://api.openai.com/v1
    
    # Default Provider (deepseek, openai_compatible, ollama, lm_studio)
    DEFAULT_PROVIDER=lm_studio
    
  4. Start the development server:

    npm run dev
    # or
    yarn dev
  5. Open http://localhost:3000 in your browser.
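
The installation steps above leave the provider choice to DEFAULT_PROVIDER. As a hedged sketch of how that variable and the *_API_BASE overrides might interact (an illustrative helper, not the app's actual code; the defaults mirror this README's examples, and in a Node context you would pass process.env as the second argument):

```typescript
type Provider = "deepseek" | "openai_compatible" | "ollama" | "lm_studio";

// Defaults mirror the .env.local examples in this README.
const DEFAULT_BASES: Record<Provider, string> = {
  deepseek: "https://api.deepseek.com/v1",
  openai_compatible: "https://api.openai.com/v1",
  ollama: "http://localhost:11434",
  lm_studio: "http://localhost:1234/v1",
};

// Resolve the API base for a provider; an environment override wins.
function resolveApiBase(
  provider: Provider,
  env: Record<string, string | undefined> = {},
): string {
  const overrideKeys: Record<Provider, string> = {
    deepseek: "DEEPSEEK_API_BASE",
    openai_compatible: "OPENAI_COMPATIBLE_API_BASE",
    ollama: "OLLAMA_API_BASE",
    lm_studio: "LM_STUDIO_API_BASE",
  };
  return env[overrideKeys[provider]] ?? DEFAULT_BASES[provider];
}
```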

Supported AI Providers

Local Models

Ollama

  1. Install Ollama on your local machine.
  2. Pull a model like llama2 or codellama.
  3. Start the Ollama server.
  4. Set in your .env.local file:
    OLLAMA_API_BASE=http://localhost:11434
    DEFAULT_PROVIDER=ollama
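
With the server from step 3 running, the setup above can be exercised directly against Ollama's native /api/generate endpoint. This is a hedged sketch rather than code from this repository; the model name matches the codellama example in step 2:

```typescript
// Minimal request shape for Ollama's POST /api/generate endpoint.
interface OllamaGenerateRequest {
  model: string;    // a model pulled in step 2, e.g. "codellama"
  prompt: string;
  stream: boolean;  // false => one JSON response instead of a token stream
}

function buildOllamaRequest(model: string, prompt: string): OllamaGenerateRequest {
  return { model, prompt, stream: false };
}

// Send a prompt and return the completion text.
async function ollamaGenerate(
  prompt: string,
  base = "http://localhost:11434",
): Promise<string> {
  const res = await fetch(`${base}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaRequest("codellama", prompt)),
  });
  const data = await res.json();
  return data.response; // Ollama puts the completion under "response"
}
```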
    

LM Studio

  1. Install LM Studio on your local machine.
  2. Download a model and start the local server.
  3. Set in your .env.local file:
    LM_STUDIO_API_BASE=http://localhost:1234/v1
    DEFAULT_PROVIDER=lm_studio
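
A quick way to confirm the local server from step 2 is reachable is to list its models: LM Studio follows the OpenAI API convention, so a GET request to the /models path under the configured base works. An illustrative helper, not code from this repository:

```typescript
// Build the models URL, tolerating a trailing slash in the configured base.
function modelsUrl(base: string): string {
  return `${base.replace(/\/+$/, "")}/models`;
}

// List model ids from an OpenAI-compatible server such as LM Studio.
async function listLocalModels(base = "http://localhost:1234/v1"): Promise<string[]> {
  const res = await fetch(modelsUrl(base));
  const data = await res.json();
  // OpenAI-compatible servers respond with { data: [{ id: "..." }, ...] }
  return data.data.map((m: { id: string }) => m.id);
}
```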
    

DeepSeek

  1. Visit DeepSeek and create an account or sign in.
  2. Navigate to the API keys section.
  3. Create a new API key and copy it.
  4. Set in your .env.local file:
    DEEPSEEK_API_KEY=your_deepseek_api_key
    DEEPSEEK_API_BASE=https://api.deepseek.com/v1
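
DeepSeek's API follows the OpenAI chat-completions schema, so a generation request body looks like the sketch below. The system prompt here is a hypothetical example (the app's real prompt may differ); deepseek-chat is DeepSeek's documented general-purpose model:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build an OpenAI-style chat-completions request body for DeepSeek.
function buildChatRequest(
  prompt: string,
  model = "deepseek-chat",
): { model: string; messages: ChatMessage[] } {
  return {
    model,
    messages: [
      // Hypothetical system prompt; illustrative only.
      { role: "system", content: "You generate complete, self-contained HTML pages." },
      { role: "user", content: prompt },
    ],
  };
}
```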
    

Custom OpenAI-compatible API

You can use any OpenAI-compatible API:

  1. Obtain an API key from your desired provider (OpenAI, Together AI, Groq, etc.).
  2. Set in your .env.local file:
    OPENAI_COMPATIBLE_API_KEY=your_api_key
    OPENAI_COMPATIBLE_API_BASE=https://api.your-provider.com/v1
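
OpenAI-compatible servers stream completions as Server-Sent Events: one "data: {...}" line per chunk, terminated by "data: [DONE]". A hedged sketch of extracting the text delta from one such line (illustrative, not this repository's code):

```typescript
// Extract the text delta from one SSE line of a streamed chat completion.
// Returns null for non-data lines and for the terminating "[DONE]" marker.
function extractDelta(sseLine: string): string | null {
  if (!sseLine.startsWith("data: ")) return null;
  const payload = sseLine.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  const chunk = JSON.parse(payload);
  // Streamed chat-completion chunks carry text under choices[0].delta.content
  return chunk.choices?.[0]?.delta?.content ?? null;
}
```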
    

Deployment

Deploying on Vercel

Vercel is the recommended platform for hosting your Next.js application:

  1. Create an account on Vercel and connect it to your GitHub account.
  2. Import your repository.
  3. Add the environment variables for your desired provider, e.g.:
    • DEEPSEEK_API_KEY
    • DEEPSEEK_API_BASE
    • DEFAULT_PROVIDER
  4. Click "Deploy".

Other Hosting Options

The application can also be deployed on other platforms that support Next.js, such as Netlify.

Keep in mind that if you host on a cloud platform (Vercel, Netlify, etc.), you cannot use local models through Ollama or LM Studio unless you expose them with a tunneling tool such as ngrok.

Usage

  1. Enter a prompt describing what kind of website you want to create.
  2. Select an AI provider and model from the dropdown menu.
  3. Click "GENERATE".
  4. Wait for the code to be generated.
  5. View the live preview and adjust the viewport (Desktop, Tablet, Mobile).
  6. Toggle edit mode to modify the code if needed.
  7. Copy the code or download it as an HTML file.
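
Step 7's "download it as an HTML file" can be done entirely client-side, for example with a data: URL on a temporary anchor element. This is a sketch under that assumption, not the app's actual implementation:

```typescript
// Encode a generated page as a data: URL the browser can save directly.
function htmlDataUrl(html: string): string {
  return "data:text/html;charset=utf-8," + encodeURIComponent(html);
}

// Browser-only: create a temporary anchor and trigger a save dialog.
// globalThis is used so the module also loads outside a browser.
function downloadHtml(html: string, filename = "generated.html"): void {
  const a = (globalThis as any).document.createElement("a");
  a.href = htmlDataUrl(html);
  a.download = filename; // makes the browser save instead of navigating
  a.click();
}
```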

Roadmap

AI Models and Providers

  • Integration with Ollama for local model execution
  • Support for LM Studio to use local models
  • Predefined provider: DeepSeek
  • Custom OpenAI-compatible API support
  • Support thinking models (Qwen3, DeepCoder, etc.)
  • Adding more predefined providers (Anthropic, Groq, etc.)

Advanced Code Generation

  • Choose between different Frameworks and Libraries (React, Vue, Angular, etc.)
  • File-based code generation (multiple files)
  • Save and load projects
  • Agentic diff-editing capabilities

UI/UX Improvements

  • Dark/Light theme toggle
  • Customizable code editor settings
  • Drag-and-drop interface for UI components
  • History of generated code

Accessibility

  • Transcription and voice input for prompts
  • Open to other ideas; feel free to make suggestions

Desktop App

  • Turning into a cross-platform desktop app (Electron)

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
