GEN_ABILITY_ICON

creates ability icon images through a circular workflow of refinement , using procgen augmented by neural networks .

[image: 00_icon_gen2_compA]

IMAGE DATASET

  • a synthetic image dataset of circular magic ability icons ( a loading sketch follows below )
  • a collection of favored generated ability icons , free to all ( CC0 )
  • DOWNLOAD ICONS | VIEW ICONS
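
a minimal loading sketch for browsing the set locally , assuming the icons were downloaded into ./icons/ as png files :

# browse the downloaded icon set locally
from pathlib import Path
from PIL import Image

icons = [Image.open(p).convert("RGBA") for p in sorted(Path("./icons").glob("*.png"))]
print(len(icons), "icons loaded ,", icons[0].size if icons else "none")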

[image: 00_icon_gen_20220407_comp]

STYLEGAN2ADA CHECKPOINT

[image: 00_icon_gen_stylegan2ada_20221011_comp]

PROCGEN

  • houdini hda tool , z_ITEM_ABILITY_ICON.hda , generates 3d randomized icons of types ( slash , shatter , splatter )
  • the included houdini/GEN_ABILITY_ICON.hip file is set up with PDG TOPs , rendering randomized wedges ( see the hython sketch below )
  • utilizes SideFXLabs hda tools and ZENV hda tools
  • focused on volumetric lighting , metallic material , randomized vertex color

[image: gen_ability_icon_pdg_02]

[image: 00_icon_procgen_comp]
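
a minimal hython sketch of cooking the PDG renders , assuming a houdini install with the ZENV and SideFXLabs tools ; the TOP node path below is hypothetical and depends on the hip file :

# run inside houdini's hython interpreter , not a plain python env
import hou

hou.hipFile.load("houdini/GEN_ABILITY_ICON.hip")
top_node = hou.node("/obj/topnet1/output0")  # hypothetical path to the final TOP node
top_node.cookWorkItems(block=True)  # cook all wedged work items , blocking until renders finish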

GUIDED MUTATION / REMIXING

with an initial set of procgen renders generated , expand the dataset by altering them with various techniques :

[image: 00_icon_gen2_compB]

  • vqgan+clip

[image: 00_icon_gen_stablediffusion_20221005_comp]

  • stable diffusion
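
a minimal img2img sketch using the diffusers library rather than the repo's own script ; the model id , prompt , and strength are assumptions :

# text guided mutation of an init icon via stable diffusion img2img
import torch
from pathlib import Path
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

Path("icons_mutated").mkdir(exist_ok=True)
init = Image.open("icons/icon_0001.png").convert("RGB").resize((512, 512))
out = pipe(prompt="ornate circular magic ability icon , glowing , dark fantasy",
           image=init, strength=0.6, guidance_scale=7.5).images[0]
out.save("icons_mutated/icon_0001_sd.png")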

INSTALL

# clone GEN_ABILITY_ICON
git clone 'https://github.com/CorvaeOboro/gen_ability_icon'
cd gen_ability_icon

# create anaconda env from included environment.yml
conda env create --prefix venv -f environment.yml
conda activate ./venv

# clone STYLEGAN2ADA
git clone 'https://github.com/NVlabs/stylegan2-ada'

# clone VQGAN+CLIP 
git clone 'https://github.com/openai/CLIP'
git clone 'https://github.com/CompVis/taming-transformers'

# download VQGAN checkpoint imagenet 16k
mkdir checkpoints
curl -L -o checkpoints/vqgan_imagenet_f16_16384.yaml -C - 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' #ImageNet 16384
curl -L -o checkpoints/vqgan_imagenet_f16_16384.ckpt -C - 'https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/files/?p=%2Fckpts%2Flast.ckpt&dl=1' #ImageNet 16384

WORKFLOW

  • generate procgen renders from houdini , selecting favored renders
# procgen houdini pdg render , requires houdini and zenv tools
python gen_ability_icon_houdini_render.py
  • mutate those renders via text guided VQGAN+CLIP
# vqgan+clip text2image batch alter from init image set
python gen_ability_icon_vqganclip.py  
python gen_ability_icon_vqganclip.py --input_path="./icons/" --input_prompt_list="prompts_list.txt" 
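
a minimal sketch of the CLIP text guidance at the core of vqgan+clip ; gen_ability_icon_vqganclip.py optimizes VQGAN latent codes , here raw pixels are optimized instead for brevity , and the prompt and step count are assumptions :

# core CLIP guidance loop , optimizing an image toward a text prompt
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # avoid fp16 / fp32 mismatch with the image tensor

with torch.no_grad():
    text = model.encode_text(clip.tokenize(["a circular magic ability icon"]).to(device))
    text = text / text.norm(dim=-1, keepdim=True)

image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    emb = model.encode_image(image)  # CLIP input normalization omitted for brevity
    emb = emb / emb.norm(dim=-1, keepdim=True)
    loss = 1.0 - (emb * text).sum()  # cosine distance to the text prompt
    opt.zero_grad()
    loss.backward()
    opt.step()
    image.data.clamp_(0, 1)  # keep pixels in valid range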
  • combine the renders and mutants via random collaging
# collage from generated icon set
python gen_ability_icon_collage.py
python gen_ability_icon_collage.py --input_path="./icons/" --resolution=256
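
a minimal sketch of the random collaging idea using PIL , not necessarily the repo's exact method ; source path , layer count , and opacity range are assumptions :

# composite random icon layers into a circular collage
import random
from pathlib import Path
from PIL import Image, ImageDraw

paths = list(Path("./icons").glob("*.png"))
size = 256
canvas = Image.new("RGBA", (size, size), (0, 0, 0, 255))

for p in random.sample(paths, k=min(4, len(paths))):
    layer = Image.open(p).convert("RGBA").resize((size, size))
    layer = layer.rotate(random.uniform(0, 360))  # rotated corners stay transparent
    fade = random.uniform(0.4, 0.9)
    layer.putalpha(layer.getchannel("A").point(lambda a: int(a * fade)))  # scale existing alpha
    canvas = Image.alpha_composite(canvas, layer)

mask = Image.new("L", (size, size), 0)
ImageDraw.Draw(mask).ellipse((0, 0, size, size), fill=255)
canvas.putalpha(mask)  # crop to the circular icon silhouette
canvas.save("icon_collage.png")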
  • select the favored icons to create a stylegan2 dataset
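a minimal dataset prep sketch , assuming the favored icons were copied into ./icons_selected/ ; stylegan2-ada's dataset_tool.py ( create_from_images ) would then convert the resulting folder to tfrecords :

# resize selected icons to a uniform power of two resolution for stylegan2 training
from pathlib import Path
from PIL import Image

src, dst = Path("./icons_selected"), Path("./dataset_images")
dst.mkdir(exist_ok=True)
for i, p in enumerate(sorted(src.glob("*.png"))):
    img = Image.open(p).convert("RGB").resize((256, 256))
    img.save(dst / f"icon_{i:05d}.png")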
  • train stylegan2 network , then generate seeds from trained checkpoint
# stylegan2ada generate from trained icon checkpoint
python gen_ability_icon_stylegan2ada_generate.py
  • cultivate the complete dataset through selection and art direction adjustments
  • repeat to expand and refine through additional text guided mutation , retraining , and regenerating

CHANGELIST

  • 20221012 = stylegan2ada checkpoint gen 5 , includes stablediffusion variants
  • 20220801 = stylegan2ada checkpoint gen 4 , including procgen , collage , vqgan+clip variants

THANKS

many thanks to

ACKNOWLEDGEMENTS

@inproceedings{Karras2020ada,
  title     = {Training Generative Adversarial Networks with Limited Data},
  author    = {Tero Karras and Miika Aittala and Janne Hellsten and Samuli Laine and Jaakko Lehtinen and Timo Aila},
  booktitle = {Proc. NeurIPS},
  year      = {2020}
}
@misc{esser2020taming,
  title     = {Taming Transformers for High-Resolution Image Synthesis},
  author    = {Patrick Esser and Robin Rombach and Björn Ommer},
  year      = {2020},
  eprint    = {2012.09841},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
@misc{radford2021learning,
  title     = {Learning Transferable Visual Models From Natural Language Supervision},
  author    = {Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  year      = {2021},
  eprint    = {2103.00020},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}

ATTRIBUTION

CREATIVE COMMONS ZERO

free to all , creative commons CC0 , free to redistribute , no attribution required