Lama-cleaner: Image inpainting tool powered by SOTA AI model

(Demo video: lama-cleaner-0.4.0.mp4)
  • Support multiple model architectures
    1. LaMa
    2. LDM
  • High resolution support
  • Multi-stroke support: press and hold the cmd/ctrl key to enable multi-stroke mode.
  • Zoom & Pan
  • Keep image EXIF data
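Multi-stroke mode accumulates every stroke into a single mask, which is then sent to the model in one pass. A minimal pure-Python sketch of that idea (the `merge_strokes` helper and the pixel-set representation are illustrative assumptions, not the app's actual canvas code):

```python
def merge_strokes(shape, strokes):
    """Union several stroke masks into one binary mask.

    shape: (height, width); strokes: iterable of sets of (row, col) pixels.
    """
    height, width = shape
    mask = [[0] * width for _ in range(height)]
    for stroke in strokes:
        for row, col in stroke:
            mask[row][col] = 1  # any stroke covering a pixel marks it for inpainting
    return mask

# two strokes drawn while holding cmd/ctrl, merged into one request
mask = merge_strokes((3, 3), [{(0, 0), (0, 1)}, {(2, 2)}])
```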

Quick Start

Install requirements: pip3 install -r requirements.txt

Start server with LaMa model

python3 main.py --device=cuda --port=8080 --model=lama

Start server with LDM model

python3 main.py --device=cuda --port=8080 --model=ldm --ldm-steps=50

--ldm-steps: the larger the value, the better the result, but the longer sampling takes

The diffusion model (LDM) is MUCH slower than GAN-based models (a 1080x720 image takes about 8s on an RTX 3090), but it can produce better results than LaMa.
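Because LDM performs one denoising network pass per sampling step, runtime grows roughly linearly with --ldm-steps. A back-of-envelope sketch, where the per-step time is an assumption derived from the 8s / 50-step figure above (actual times depend on GPU and image size):

```python
def estimate_ldm_seconds(steps, seconds_per_step=8 / 50):
    """Rough runtime estimate: one denoising pass per sampling step."""
    return steps * seconds_per_step

# doubling --ldm-steps roughly doubles the wait
print(estimate_ldm_seconds(50))   # ~8 s on a 3090 at 1080x720
print(estimate_ldm_seconds(100))  # ~16 s
```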

Comparison images: photo-1583445095369-9c651e7e5d34 (original), photo-1583445095369-9c651e7e5d34_cleanup_lama (LaMa), photo-1583445095369-9c651e7e5d34_cleanup_ldm (LDM)

Blogs about diffusion models:

Development

Only needed if you plan to modify the frontend and recompile yourself.

Frontend

The frontend code is modified from cleanup.pictures; you can try their great online service here.

  • Install dependencies: cd lama_cleaner/app/ && yarn
  • Start development server: yarn dev
  • Build: yarn build

Docker

Run within a Docker container. Set CACHE_DIR to the path where the models are stored. Optionally add the -d option to the docker run commands below to run the container as a daemon.

Build Docker image

docker build -f Dockerfile -t lamacleaner .

Run Docker (cpu)

docker run -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cpu --port=8080

Run Docker (gpu)

docker run --gpus all -p 8080:8080 -e CACHE_DIR=/app/models -v $(pwd)/models:/app/models -v $(pwd):/app --rm lamacleaner python3 main.py --device=cuda --port=8080

Then open http://localhost:8080
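Besides the web UI, a running server can be driven from a script. The sketch below posts an image and its mask as multipart form data using only the standard library; the /inpaint endpoint and the "image"/"mask" field names are assumptions based on how the web frontend talks to the server, so check them against your version before relying on this:

```python
import urllib.request
import uuid

def build_multipart(files):
    """Encode {field_name: bytes} as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, data in files.items():
        header = (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"; filename="{name}.png"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode()
        parts.append(header + data + b"\r\n")
    body = b"".join(parts) + f"--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def inpaint(image_bytes, mask_bytes, url="http://localhost:8080/inpaint"):
    """POST an image and its mask to a running server; returns the result bytes."""
    body, content_type = build_multipart({"image": image_bytes, "mask": mask_bytes})
    req = urllib.request.Request(url, data=body, headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```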

Like My Work?

Sanster
