A modern web application that uses AI to generate HTML, CSS, and JavaScript code based on natural language prompts. Simply describe what you want to build, and the AI will create a complete, self-contained web page for you.
- AI-Powered Code Generation: Generate complete web pages from text descriptions
- Live Preview: See your generated code in action with desktop, tablet, and mobile views
- Code Editing: Edit the generated code directly in the browser
- Multiple AI Providers: Support for DeepSeek, custom OpenAI-compatible APIs, and local models
- Responsive Design: Works on desktop and mobile devices
- Modern UI: Clean, dark-themed interface with a focus on usability
- Next.js 15 with App Router
- React 19
- Tailwind CSS
- Shadcn UI
- OpenAI SDK (for API compatibility)
- Monaco Editor
- Node.js (version 18.17 or higher)
- npm or yarn
- Ollama or LM Studio installed
- OR an API key from one of the supported providers (see below)
- Clone the repository:

  ```bash
  git clone https://github.com/weise25/LocalSite-ai.git
  cd LocalSite-ai
  ```

- Install the dependencies:

  ```bash
  npm install
  # or
  yarn install
  ```

- Rename the `.env.example` file in the root directory to `.env.local` and add your API key:

  ```env
  # Choose one of the following providers:

  # DeepSeek API
  DEEPSEEK_API_KEY=your_deepseek_api_key_here
  DEEPSEEK_API_BASE=https://api.deepseek.com/v1

  # Custom OpenAI-compatible API
  # OPENAI_COMPATIBLE_API_KEY=your_api_key_here
  # OPENAI_COMPATIBLE_API_BASE=https://api.openai.com/v1

  # Default Provider (deepseek, openai_compatible, ollama, lm_studio)
  DEFAULT_PROVIDER=lm_studio
  ```

- Start the development server:

  ```bash
  npm run dev
  # or
  yarn dev
  ```

- Open http://localhost:3000 in your browser.
- Install Ollama on your local machine.
- Pull a model like `llama2` or `codellama` (see the example commands below).
- Start the Ollama server.
- Set in your `.env.local` file:

  ```env
  OLLAMA_API_BASE=http://localhost:11434
  DEFAULT_PROVIDER=ollama
  ```
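A minimal command sequence for the pull-and-serve steps above, assuming the Ollama CLI is installed and on your PATH (the model name is only an example):

```bash
# Pull a model (codellama is just an example; any model from the Ollama library works)
ollama pull codellama

# Start the Ollama server on the default port 11434
ollama serve
```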
- Install LM Studio on your local machine.
- Download a model and start the local server.
- Set in your `.env.local` file:

  ```env
  LM_STUDIO_API_BASE=http://localhost:1234/v1
  DEFAULT_PROVIDER=lm_studio
  ```
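To confirm the LM Studio server is reachable before starting the app, you can query its OpenAI-compatible models endpoint (a quick sanity check, assuming the default port 1234):

```bash
# Should return a JSON list of the models currently loaded in LM Studio
curl http://localhost:1234/v1/models
```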
- Visit DeepSeek and create an account or sign in.
- Navigate to the API keys section.
- Create a new API key and copy it.
- Set in your `.env.local` file:

  ```env
  DEEPSEEK_API_KEY=your_deepseek_api_key
  DEEPSEEK_API_BASE=https://api.deepseek.com/v1
  ```
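As an optional check that the key works (a sketch assuming the key is exported as `DEEPSEEK_API_KEY` in your shell), you can list the models it can access:

```bash
# An authentication error here means the key or base URL is wrong
curl https://api.deepseek.com/v1/models \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY"
```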
You can use any OpenAI-compatible API:

- Obtain an API key from your desired provider (OpenAI, Together AI, Groq, etc.).
- Set in your `.env.local` file:

  ```env
  OPENAI_COMPATIBLE_API_KEY=your_api_key
  OPENAI_COMPATIBLE_API_BASE=https://api.of.provider.com/v1
  ```
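For example, a hypothetical `.env.local` for Groq (one of the OpenAI-compatible providers mentioned above) might look like the following; double-check the base URL against your provider's documentation:

```env
# Example values only; substitute your own key and your provider's base URL
OPENAI_COMPATIBLE_API_KEY=your_groq_api_key
OPENAI_COMPATIBLE_API_BASE=https://api.groq.com/openai/v1
DEFAULT_PROVIDER=openai_compatible
```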
Vercel is the recommended platform for hosting your Next.js application:
- Create an account on Vercel and connect it to your GitHub account.
- Import your repository.
- Add the environment variables for your desired provider, e.g. `DEEPSEEK_API_KEY`, `DEEPSEEK_API_BASE`, and `DEFAULT_PROVIDER`.
- Click "Deploy".
The application can also be deployed on:
- Netlify
- Cloudflare Pages
- Any platform that supports Next.js applications
Keep in mind that if you host the app on a platform like Vercel or Netlify, you cannot use local models through Ollama or LM Studio unless you expose them with a tunneling tool such as ngrok.
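A rough sketch of that tunneling workaround, assuming a local Ollama instance on the default port 11434 (the forwarding URL below is a hypothetical placeholder for whatever ngrok prints):

```bash
# Expose the local Ollama server to the internet
ngrok http 11434

# On your hosting platform, point the app at the forwarding URL ngrok prints:
# OLLAMA_API_BASE=https://your-ngrok-subdomain.ngrok-free.app
# DEFAULT_PROVIDER=ollama
```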
- Enter a prompt describing what kind of website you want to create.
- Select an AI provider and model from the dropdown menu.
- Click "GENERATE".
- Wait for the code to be generated.
- View the live preview and adjust the viewport (Desktop, Tablet, Mobile).
- Toggle edit mode to modify the code if needed.
- Copy the code or download it as an HTML file.
- Integration with Ollama for local model execution
- Support for LM Studio to use local models
- Predefined provider: DeepSeek
- Custom OpenAI-compatible API support
- Support for thinking models (Qwen3, DeepCoder, etc.)
- Adding more predefined providers (Anthropic, Groq, etc.)
- Choose between different frameworks and libraries (React, Vue, Angular, etc.)
- File-based code generation (multiple files)
- Save and load projects
- Agentic diff-editing capabilities
- Dark/Light theme toggle
- Customizable code editor settings
- Drag-and-drop interface for UI components
- History of generated code
- Transcription and voice input for prompts
- Anything; feel free to make suggestions
- Turning into a cross-platform desktop app (Electron)
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.