
How to take the generated model pt -> and mash it back into stylegan2-ada pkl format for use in other apps. [DRAFTED] #26

Open
johndpope opened this issue Aug 26, 2021 · 3 comments


@johndpope

johndpope commented Aug 26, 2021

So I get the npz file (thanks for your help on the other ticket #24), and I see the new generator saved:
model_MWVZTEZFDDJB_1.pt

I did some inspecting and can see the new generator:
https://gist.github.com/johndpope/c5b77f8cc7d7d008be7f15079a9378bf

I want to spit out an updated ffhq pkl file in the correct shape and format so I can run the new generator in different use cases with other repos.

  with open(paths_config.stylegan2_ada_ffhq, 'rb') as f:
    old_G = pickle.load(f)['G_ema'].cuda()  # this grabs the generator from the ffhq pickle

  with open(f'{paths_config.checkpoints_dir}/model_{model_id}_{image_name}.pt', 'rb') as f_new:
    new_G = torch.load(f_new).cuda()  # and this grabs the updated model_MWVZTEZFDDJB_1.pt
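
Continuing from the snippet above, a quick sanity check (a sketch, assuming both objects are stylegan2-ada-pytorch Generator modules) is to compare the two generators' architectures before wiring anything together:

# Sketch: confirm the PTI-tuned generator matches the FFHQ generator's architecture.
# (Both should be stylegan2-ada-pytorch Generator modules; only the weights differ.)
old_keys = set(old_G.state_dict().keys())
new_keys = set(new_G.state_dict().keys())
print('missing in new_G:', old_keys - new_keys)
print('extra in new_G:', new_keys - old_keys)
print('resolution:', old_G.img_resolution, 'vs', new_G.img_resolution)
print('num_ws:', old_G.num_ws, 'vs', new_G.num_ws)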

UPDATE 1 - thus far I have the hack below, which saves out a pkl.

UPDATE 2 -
I load the new file into stylegan2-ada-pytorch and run approach.py in conjunction with projected_w.npz,
but it's not working well - I wonder if it's because this pickle would need a new discriminator too?

UPDATE 3 -
I think I know how to solve this - I need to load the final pt that is spat out and do the hot wiring - should be fine.

def export_updated_pickle(new_G, model_id):
  print("Exporting large updated pickle based off new generator and ffhq.pkl")
  with open(paths_config.stylegan2_ada_ffhq, 'rb') as f:
    d = pickle.load(f)
    old_G = d['G_ema'].cuda()  # original moving-average generator (torch.nn.Module)
    old_D = d['D'].eval().requires_grad_(False).cpu()  # reuse the original discriminator

  tmp = {}
  tmp['G_ema'] = old_G.eval().requires_grad_(False).cpu()  # copy.deepcopy(new_G).eval().requires_grad_(False).cpu()
  tmp['G'] = new_G.eval().requires_grad_(False).cpu()      # copy.deepcopy(new_G).eval().requires_grad_(False).cpu()
  tmp['D'] = old_D
  tmp['training_set_kwargs'] = None
  tmp['augment_pipe'] = None

  with open(f'{paths_config.checkpoints_dir}/model_{model_id}.pkl', 'wb') as f:
      pickle.dump(tmp, f)

....
At the bottom of the notebook:

print('Displaying PTI inversion')
plot_image_from_w(w_pivot, new_G)
np.savez('projected_w.npz', w=w_pivot.cpu().detach().numpy())
export_updated_pickle(new_G, model_id)
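
Before taking the exported pickle to other repos, a quick way to verify it (a sketch: assumes dnnlib / torch_utils from stylegan2-ada-pytorch are importable so unpickling works, that w_pivot was saved with shape [1, num_ws, 512], and the pickle filename is just an example):

# Sketch: load the exported pickle and re-render the pivot image from projected_w.npz.
import pickle
import numpy as np
import PIL.Image
import torch

with open('model_MWVZTEZFDDJB.pkl', 'rb') as f:        # example filename from export_updated_pickle
    G = pickle.load(f)['G'].cuda()                     # the PTI-tuned generator was stored under 'G'

w = torch.from_numpy(np.load('projected_w.npz')['w']).cuda()
if w.ndim == 2:                                        # broadcast a single 512-d w if that is what was saved
    w = w.unsqueeze(1).repeat(1, G.num_ws, 1)

with torch.no_grad():
    img = G.synthesis(w, noise_mode='const')           # [1, 3, H, W], values roughly in [-1, 1]
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('pti_check.png')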

[Attached images: original, image_from_w, 1_afro, 1_angry, 1_bobcut, 1_bowlcut, 1_mohawk, 1_surprised, 1_trump]

https://drive.google.com/drive/folders/1l6Xvs6EPVyyw0sFowIpN1pd1lJbm56hD?usp=sharing

I get the new pkl / npz files.

I cherry-pick this file into the original stylegan2-ada-pytorch repo (using approach.py from https://github.com/l4rz/stylegan2-clip-approach).

I rename the pkl file to ffhq-pti.pkl and run:
(torch) ➜ stylegan2-ada-pytorch git:(main) ✗ python approach.py --network ffhq-pti.pkl --w projected_w.npz --outdir ffhq-pti --num-steps 100 --text 'squint'

@johndpope johndpope changed the title How to take the generated model pt -> and mash it back into pkl format for use in other apps. How to take the generated model pt -> and mash it back into stylegan2-ada pkl format for use in other apps. [DRAFTED] Aug 27, 2021
@JanFschr

Were you able to achieve this and use the model in StyleCLIP or something else?

@johndpope

johndpope commented Sep 12, 2021

Here is the result - I lose something...
[attached result image: out-c783468d493236b5e34d658635f8a276a3fef5f4]

The problem is this line:
tmp['G_ema'] = old_G.eval().requires_grad_(False).cpu()

https://reposhub.com/python/deep-learning/NVlabs-stylegan2-ada-pytorch.html

The pickle contains three networks. 'G' and 'D' are instantaneous snapshots taken during training, and 'G_ema' represents a moving average of the generator weights over several training steps. The networks are regular instances of torch.nn.Module, with all of their parameters and buffers placed on the CPU at import and gradient computation disabled by default.

Until this can be baked from PTI - not sure this is feasible.
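
That said, one variation on the hack above (a sketch, untested) would be to put the PTI-tuned generator into the 'G_ema' slot as well, since that is the network most downstream tools (e.g. generate.py) load:

# Sketch (untested): same dict as in export_updated_pickle, but with the tuned
# generator in 'G_ema' too; old_D is the original discriminator from the function above.
import copy

tmp = {}
tmp['G'] = copy.deepcopy(new_G).eval().requires_grad_(False).cpu()
tmp['G_ema'] = copy.deepcopy(new_G).eval().requires_grad_(False).cpu()  # tuned weights here too
tmp['D'] = old_D
tmp['training_set_kwargs'] = None
tmp['augment_pipe'] = None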

UPDATE -
I think my code is using the wrong generator - e4e.
I'll have another crack later on.

UPDATE 2 - using the embeddings that are spat out, I successfully run:
python optimization/run_optimization.py --latent_path='/home/jp/Documents/gitWorkspace/PTI/embeddings/barcelona/PTI/personal_image/0.pt'

Prompt: "a person with purple hair."
[attached result image: 00280]
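
For reference, the 0.pt in that path is the per-image pivot latent that PTI writes out; a quick shape check (sketch, path as in the command above) before pointing other tools at it:

# Sketch: inspect the pivot latent PTI saved for this image (assumed to be a plain torch tensor).
import torch

w_pivot = torch.load('/home/jp/Documents/gitWorkspace/PTI/embeddings/barcelona/PTI/personal_image/0.pt')
print(type(w_pivot), getattr(w_pivot, 'shape', None))  # expect something like [1, num_ws, 512]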
