Generative Adversarial Networks (GANs) have become increasingly popular, yet within the GAN literature Western painting dominates while ancient Eastern art remains largely unexplored. Inspired by Alice Xue's SAPGAN, we improve upon it by adding a human-computer interaction stage and develop an interactive multi-modal generation framework that applies a color style to grayscale edge images. Our generative model consists of three GANs: the first, DraftGAN, samples from a latent noise space to generate a grayscale image; the second, TextGAN, takes aesthetic keywords entered by the user and generates a color palette; the third, PaletteGAN, uses this palette to colorize the grayscale image, and the final result is presented to the user on a web page. Experiments show that DTPGAN achieves a recognition rate of 52.5%, compared with only 11% for the RaLSGAN baseline, and that the simulated landscape paintings generated by DTPGAN exhibit good structural completeness and artistic realism. Moreover, by entering text the user can edit the overall color style of the painting, giving the method good versatility and promising application prospects.
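To make the three-stage pipeline concrete, the sketch below shows one possible way the DraftGAN → TextGAN → PaletteGAN flow could be wired together at inference time. This is only a minimal illustration under assumed interfaces: all module internals, layer sizes, tensor shapes, and the text-embedding stand-in are our own assumptions and do not reproduce the actual DTPGAN implementation.

```python
import torch
import torch.nn as nn

# Hypothetical three-stage inference pipeline mirroring the abstract's
# DraftGAN -> TextGAN -> PaletteGAN flow. All internals and shapes below
# are illustrative assumptions, not the paper's actual architecture.

class DraftGAN(nn.Module):
    """Maps a latent noise vector to a 1-channel grayscale draft image."""
    def __init__(self, z_dim=128, img_size=256):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(nn.Linear(z_dim, img_size * img_size), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 1, self.img_size, self.img_size)

class TextGAN(nn.Module):
    """Maps an embedded aesthetic keyword to a small RGB palette (n_colors x 3)."""
    def __init__(self, text_dim=300, n_colors=5):
        super().__init__()
        self.n_colors = n_colors
        self.net = nn.Sequential(
            nn.Linear(text_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_colors * 3),
            nn.Sigmoid(),  # RGB values in [0, 1]
        )

    def forward(self, text_emb):
        return self.net(text_emb).view(-1, self.n_colors, 3)

class PaletteGAN(nn.Module):
    """Colorizes the grayscale draft conditioned on the generated palette."""
    def __init__(self, n_colors=5):
        super().__init__()
        self.net = nn.Conv2d(1 + n_colors * 3, 3, kernel_size=3, padding=1)

    def forward(self, gray, palette):
        b, _, h, w = gray.shape
        # Broadcast the flattened palette over the spatial grid, then concatenate
        # it with the grayscale draft as conditioning channels.
        pal = palette.view(b, -1, 1, 1).expand(b, palette.numel() // b, h, w)
        return torch.tanh(self.net(torch.cat([gray, pal], dim=1)))

# Inference: noise -> grayscale draft, keyword embedding -> palette, then colorize.
draft_gan, text_gan, palette_gan = DraftGAN(), TextGAN(), PaletteGAN()
z = torch.randn(1, 128)          # random latent vector
text_emb = torch.randn(1, 300)   # stand-in for an embedded aesthetic keyword
gray = draft_gan(z)
palette = text_gan(text_emb)
painting = palette_gan(gray, palette)
print(painting.shape)            # torch.Size([1, 3, 256, 256])
```

In this sketch, editing the painting's overall color style amounts to re-running only the TextGAN and PaletteGAN stages with a new keyword embedding while keeping the same grayscale draft, which is consistent with the interactive editing behavior described above.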