As our final LSML project, we decided to create an online AI tool for image correction using inpainting. This problem has been addressed in many Deep Learning works. We took this recent paper by NVIDIA researchers as our base, because they achieved very impressive results and can deservedly be considered state of the art right now.
We thought it would be interesting to apply this tool to face correction, so we used the CelebA dataset for training. But the original dataset contains many low-quality images, which is why we ultimately switched to the CelebA-HQ dataset.
Finally, we faced the challenge of creating a nice web page so that people can use our tool online. You can try it here.
The fastest way to start playing with the demo is (requires Docker):
>>> git clone https://github.com/karfly/inpaint
>>> cd inpaint
>>> docker build -t inpaint_image .
>>> ./run.sh
<visit localhost:8003 in your favourite browser>
Current limitations:
- photos must have a resolution of 256x256
- photos must be similar to ones from CelebA-HQ dataset
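Because of the 256x256 constraint, photos usually need to be resized before they are fed to the demo. A minimal sketch of that preprocessing step with Pillow (the file names here are placeholders, and this script is not part of the repository):

```python
from PIL import Image

# Stand-in for your own photo; in practice you would call Image.open("face.jpg").
img = Image.new("RGB", (1024, 768), color=(200, 150, 120))

# Resize to the 256x256 resolution the demo expects.
resized = img.resize((256, 256), Image.LANCZOS)
print(resized.size)  # (256, 256)
```

Note that plain resizing distorts non-square photos; center-cropping to a square first gives results closer to the CelebA-HQ distribution.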
If you want to explore the project more deeply, here are some notes:
- the project supports only Python 3
- all the dependencies are listed in requirements.txt
- a well-documented interface of the main library (with the original model and the loss used during training) is in inpaint/__init__.py
- to run the app locally, see the example command in app/run.sh and the documentation of the setup_app function in app/app.py
Team:
- Ivan Golovanov - mask generation, research.
- Yury Gorishniy - backend, data manipulation, inpaint loss.
- Vladimir Ivashkin - frontend, backend.
- Karim Iskakov - model training, mask generation, CelebA-HQ generation.