Merge pull request #5 from MasaTate/main
Changed README.md to explain color control usage
# 2022-AdaIN-pytorch

This is an unofficial PyTorch implementation of the paper `Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization`, ICCV 2017 [arxiv](https://arxiv.org/abs/1703.06868). I referred to the [official implementation](https://github.com/xunhuang1995/AdaIN-style) in Torch and used pretrained vgg19 and decoder weights from [naoto0804](https://github.com/naoto0804/pytorch-AdaIN).

## Requirements

Install the requirements with `$ pip install -r requirements.txt`:

- Python 3.7+
- PyTorch 1.10
- Pillow
- TorchVision
- Numpy
- imageio
- tqdm

## Usage

### Training

The encoder is a pretrained vgg19 network; download the [vgg19 weight](https://drive.google.com/file/d/1UcSl-Zn3byEmn15NIPXMf9zaGCKc2gfx/view?usp=sharing). The decoder is trained on the MSCOCO and wikiart datasets. Run the script train.py:

```
$ python train.py --content_dir $CONTENT_DIR --style_dir $STYLE_DIR --cuda
```
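For reference, the AdaIN layer at the heart of training aligns the channel-wise mean and standard deviation of the content features with those of the style features; the decoder is trained to invert the result back to an image. A minimal sketch of that operation (the function name and `eps` are illustrative, not this repo's API):

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization over (N, C, H, W) feature maps:
    normalize the content features per channel, then rescale and shift
    them to the style features' channel-wise statistics."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps  # avoid divide-by-zero
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

The output has (up to `eps`) the style features' per-channel mean and std, which is exactly the target the paper's style loss pushes the decoded image toward.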
### Test Basic Style Transfer

Download the [vgg19 weight](https://drive.google.com/file/d/1UcSl-Zn3byEmn15NIPXMf9zaGCKc2gfx/view?usp=sharing) and [decoder weight](https://drive.google.com/file/d/18JpLtMOapA-vwBz-LRomyTl24A9GwhTF/view?usp=sharing) and place them under the main directory.

To test basic style transfer, run the script test.py. Specify `--content_image` and `--style_image` with image paths, or specify `--content_dir` and `--style_dir` to iterate over all images under those directories. All outputs are saved in `./results/`. Specify `--grid_pth` to collect all outputs in a grid image. Specify `--color_control` to preserve the content image color.

```
$ python test.py --content_image $IMG --style_image $STYLE --cuda
--grid_pth GRID_PTH   Specify a grid image path (default=None) to generate a grid
                      image that contains all style-transferred images
--color_control       Preserve content image color
```
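A basic transfer step presumably encodes both images, applies AdaIN, blends the result with the content features by a strength factor, and decodes. A hedged sketch with stand-in `encoder`/`decoder` modules (names and signature are illustrative, not the script's API):

```python
import torch
import torch.nn as nn

def transfer(encoder: nn.Module, decoder: nn.Module,
             content: torch.Tensor, style: torch.Tensor,
             alpha: float = 1.0, eps: float = 1e-5) -> torch.Tensor:
    """Encode, AdaIN, blend, decode. alpha in [0, 1] scales style
    strength: 0 reproduces the content features, 1 is full transfer."""
    c, s = encoder(content), encoder(style)
    c_mean = c.mean(dim=(2, 3), keepdim=True)
    c_std = c.std(dim=(2, 3), keepdim=True) + eps
    s_mean = s.mean(dim=(2, 3), keepdim=True)
    s_std = s.std(dim=(2, 3), keepdim=True)
    t = s_std * (c - c_mean) / c_std + s_mean   # AdaIN
    t = alpha * t + (1.0 - alpha) * c           # style-strength blend
    return decoder(t)
```

In the paper's setup, `encoder` is vgg19 up to relu4_1 and `decoder` is its trained mirror.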

### Test Image Interpolation Style Transfer

To test style-transfer interpolation, run the script test_interpolate.py. Specify `--style_image` with multiple paths separated by commas. Specify `--interpolation_weights` to interpolate once with the given weights. All outputs are saved in `./results_interpolate/`. Specify `--grid_pth` to interpolate with different built-in weights and provide 4 style images. Specify `--color_control` to preserve the content image color.

```
transfer multiple times with different built-in weights and generate a
grid image that contains all style transferred images. Provide 4 style
images. Do not specify if input interpolation_weights.
--color_control       Preserve content image color
```
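Interpolation between styles can be read as a convex combination of the AdaIN outputs, one per style image, computed before decoding. A self-contained sketch (assuming the weights sum to 1; names are illustrative):

```python
import torch

def adain(c: torch.Tensor, s: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Match the channel-wise mean/std of content features to one style."""
    c_mean = c.mean(dim=(2, 3), keepdim=True)
    c_std = c.std(dim=(2, 3), keepdim=True) + eps
    s_mean = s.mean(dim=(2, 3), keepdim=True)
    s_std = s.std(dim=(2, 3), keepdim=True)
    return s_std * (c - c_mean) / c_std + s_mean

def interpolate_styles(content_feat, style_feats, weights):
    """Weighted sum of per-style AdaIN features; the blended feature map
    is then decoded once to produce the interpolated output."""
    t = torch.zeros_like(content_feat)
    for w, s in zip(weights, style_feats):
        t += w * adain(content_feat, s)
    return t
```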

### Test Video Style Transfer

```
--alpha {Alpha Range}
                      Alpha [0.0, 1.0] controls style transfer level
--cuda                Use CUDA
--color_control       Preserve content image color
```
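Video transfer is, in effect, the image pipeline applied frame by frame (imageio, from the requirements, handles reading and writing). A sketch of the frame loop; `stylize_frame` stands in for the per-image transfer and is an assumption, not the script's API:

```python
import numpy as np

def stylize_video_frames(frames, stylize_frame):
    """Apply a per-frame style-transfer callable to an iterable of
    H x W x 3 uint8 frames (e.g. from imageio.get_reader) and return
    the stylized frames, ready for a writer's append_data calls."""
    out = []
    for frame in frames:
        out.append(np.asarray(stylize_frame(frame), dtype=np.uint8))
    return out
```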

## Examples

### Basic Style Transfer

![Basic style transfer result](https://github.com/media-comp/2022-AdaIN-pytorch/blob/main/results/result.png)

### Different levels of style transfer

![Results for different alpha values](https://github.com/media-comp/2022-AdaIN-pytorch/blob/main/results/results_alpha.png)

### Interpolation Style Transfer

![Interpolation results](https://github.com/media-comp/2022-AdaIN-pytorch/blob/main/results/results_interpolate.png)

### Style Transfer with color control

|||
|---|---|
|w/o color control|w/ color control|
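Following the color-control work of Gatys et al. cited in the references, color preservation is typically implemented by matching the style image's color statistics to the content image before running the transfer, so the output keeps the content palette. A simplified per-channel mean/std sketch (the actual scripts may instead use full-covariance or luminance-only matching):

```python
import torch

def match_color(style_img: torch.Tensor, content_img: torch.Tensor,
                eps: float = 1e-5) -> torch.Tensor:
    """Shift the style image's per-channel RGB mean/std to the content
    image's, for (N, 3, H, W) tensors in [0, 1]; style transfer is then
    run with the color-matched style image."""
    s_mean = style_img.mean(dim=(2, 3), keepdim=True)
    s_std = style_img.std(dim=(2, 3), keepdim=True) + eps
    c_mean = content_img.mean(dim=(2, 3), keepdim=True)
    c_std = content_img.std(dim=(2, 3), keepdim=True)
    out = c_std * (style_img - s_mean) / s_std + c_mean
    return out.clamp(0.0, 1.0)
```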

### Video Style Transfer

Original Video

https://user-images.githubusercontent.com/42717345/163805137-d7ba350b-a42e-4b91-ac2b-4916b1715153.mp4

Style Image

<img src="https://github.com/media-comp/2022-AdaIN-pytorch/blob/main/images/art/picasso_self_portrait.jpg" alt="drawing" width="200"/>

Style Transfer Video

https://user-images.githubusercontent.com/42717345/163805886-a1199a40-6032-4baf-b2d4-30e6e05b3385.mp4

## References

- X. Huang and S. Belongie. "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization." In ICCV, 2017. [arxiv](https://arxiv.org/abs/1703.06868)
- [Original implementation in Torch](https://github.com/xunhuang1995/AdaIN-style)
- [Pretrained weights](https://github.com/naoto0804/pytorch-AdaIN)
- Source URLs of all images collected from the internet: [Image_sources.txt](https://github.com/media-comp/2022-AdaIN-pytorch/blob/main/Image_sources.txt)
- L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. "Controlling Perceptual Factors in Neural Style Transfer." In CVPR, 2017. [arxiv](https://arxiv.org/abs/1611.07865)
- A. Hertzmann. "Algorithms for Rendering in Artistic Styles." PhD thesis, New York University, 2001.