Add ComfyUI Progress Bar for Zero123++, add MVDream output camposes
MrForExample committed May 24, 2024
1 parent 7cf8fcd commit cf770a8
Showing 4 changed files with 29 additions and 8 deletions.
9 changes: 5 additions & 4 deletions README.md
@@ -1,5 +1,7 @@
# ComfyUI-3D-Pack
An extensive node suite that enables ComfyUI to process 3D inputs (Mesh & UV Texture, etc) using cutting edge algorithms (3DGS, NeRF, Differentiable Rendering, SDS/VSD Optimization, etc.)
**Make ComfyUI generate 3D assets as well & as conveniently as it generates images/videos!**
<br>
This is an extensive node suite that enables ComfyUI to process 3D inputs (Mesh & UV Texture, etc.) using cutting edge algorithms (3DGS, NeRF, etc.) and models (InstantMesh, CRM, TripoSR, etc.)

<span style="font-size:1.5em;">
<a href=#currently-support>Features</a> &mdash;
@@ -10,8 +12,6 @@
<a href=#supporters>Supporters</a>
</span>

### Note: this project is still a WIP

## Currently support:
- For use case please check [Example Workflows](./_Example_Workflows/). [**Last update: 23/05/2024**]
- **Note:** you need to put [Example Inputs Files & Folders](_Example_Workflows/_Example_Inputs_Files/) under the `ComfyUI\input` folder in your ComfyUI root directory before you can run the example workflows
@@ -180,7 +180,7 @@ install_windows_portable_win_py311_cu121.bat
```

### Install Method 1: Using Miniconda(Works on Windows & Linux & Mac)
***Note: [In some edge cases Miniconda fails Anaconda could fix the issue](https://github.com/MrForExample/ComfyUI-3D-Pack/issues/49)***
***Note: [In some edge cases Miniconda fails but Anaconda could fix the issue](https://github.com/MrForExample/ComfyUI-3D-Pack/issues/49)***

#### Setup with Miniconda:
First download [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/) (*one of the best ways to manage clean, separate Python environments*)
@@ -230,6 +230,7 @@ pip install -r requirements_post.txt

**Plus:**<br>
- For those who want to run it inside Google Colab, you can check the [install instruction from @lovisdotio](https://github.com/MrForExample/ComfyUI-3D-Pack/issues/13)
- You can find some of the pre-built wheels for Linux here: [remsky/ComfyUI3D-Assorted-Wheels](https://github.com/remsky/ComfyUI3D-Assorted-Wheels)

#### Install and run with docker:

13 changes: 10 additions & 3 deletions nodes.py
@@ -1487,9 +1487,11 @@ def INPUT_TYPES(cls):

RETURN_TYPES = (
"IMAGE",
"ORBIT_CAMPOSES", # [orbit radius, elevation, azimuth, orbit center X, orbit center Y, orbit center Z]
)
RETURN_NAMES = (
"multiview_images",
"orbit_camposes",
)
FUNCTION = "run_mvdream"
CATEGORY = "Comfy3D/Algorithm"
@@ -1521,7 +1523,14 @@ def run_mvdream(
mv_images = mvdream_pipe(prompt, reference_image, generator=generator, negative_prompt=prompt_neg, guidance_scale=mv_guidance_scale, num_inference_steps=num_inference_steps, elevation=elevation)
mv_images = torch.from_numpy(np.stack([mv_images[1], mv_images[2], mv_images[3], mv_images[0]], axis=0)).float() # [4, H, W, 3], float32

return (mv_images, )
azimuths = [0, 90, 180, -90]
elevations = [0, 0, 0, 0]
radius = [4.0] * 4
center = [0.0] * 4

orbit_camposes = [azimuths, elevations, radius, center, center, center]

return (mv_images, orbit_camposes)

class Load_Large_Multiview_Gaussian_Model:

@@ -2246,8 +2255,6 @@ def INPUT_TYPES(s):
@torch.no_grad()
def run_LRM(self, lrm_model, multiview_images, orbit_camera_poses, orbit_camera_fovy, texture_resolution):

multiview_images

images = multiview_images.permute(0, 3, 1, 2).unsqueeze(0).to(DEVICE) # [N, H, W, 3] -> [1, N, 3, H, W]
images = v2.functional.resize(images, 320, interpolation=3, antialias=True).clamp(0, 1)

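The `ORBIT_CAMPOSES` value added above is six parallel lists, one entry per view: azimuths, elevations, radii, and the three orbit-center coordinates. A minimal sketch of how a downstream node might unpack it into per-view poses (the helper name `unpack_orbit_camposes` is hypothetical, not part of the pack):

```python
def unpack_orbit_camposes(orbit_camposes):
    # orbit_camposes layout: [azimuths, elevations, radii, centers_x, centers_y, centers_z]
    azimuths, elevations, radii, cx, cy, cz = orbit_camposes
    return [
        {"azimuth": a, "elevation": e, "radius": r, "center": (x, y, z)}
        for a, e, r, x, y, z in zip(azimuths, elevations, radii, cx, cy, cz)
    ]

# The four MVDream views constructed in run_mvdream above:
orbit_camposes = [
    [0, 90, 180, -90],  # azimuths
    [0, 0, 0, 0],       # elevations
    [4.0] * 4,          # orbit radii
    [0.0] * 4,          # orbit center X
    [0.0] * 4,          # orbit center Y
    [0.0] * 4,          # orbit center Z
]
poses = unpack_orbit_camposes(orbit_camposes)
```

Note that the lists are column-wise (one list per field), not one tuple per camera, so consumers must transpose as shown.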
5 changes: 4 additions & 1 deletion pyproject.toml
@@ -1,6 +1,9 @@
[project]
name = "comfyui-3d-pack"
description = "An extensive node suite that enables ComfyUI to process 3D inputs (Mesh & UV Texture, etc) using cutting edge algorithms (3DGS, NeRF, etc.)\nNOTE: Pre-built python wheels can be download from [a/https://github.com/remsky/ComfyUI3D-Assorted-Wheels](https://github.com/remsky/ComfyUI3D-Assorted-Wheels)"
description = "Make ComfyUI generate 3D assets as well & as conveniently as it generates images/videos!\nThis is an extensive node suite that enables ComfyUI to process 3D inputs (Mesh & UV Texture, etc.) using cutting edge algorithms (3DGS, NeRF, etc.) and models (InstantMesh, CRM, TripoSR, etc.)"
version = "1.0.0"
license = "LICENSE"
dependencies = ["# base", "cmake", "ninja", "# computational libraries", "numpy", "einops", "scipy", "kornia", "opencv-python", "pillow", "roma", "nerfacc>=0.5.3", "PyMCubes", "scikit-learn", "# for use ML models", "diffusers>=0.26.1", "transformers>=4.36.2", "safetensors", "open_clip_torch", "# for training differentiable tensors", "pytorch_msssim", "# for process images & videos", "imageio", "imageio-ffmpeg", "matplotlib", "# for dmtet and mesh import & export", "trimesh", "plyfile", "pygltflib", "xatlas", "pymeshlab", "# configs & extra", "torchtyping", "tqdm", "jaxtyping", "packaging", "OmegaConf", "pyhocon"]
10 changes: 10 additions & 0 deletions zero123plus/pipeline.py
@@ -26,6 +26,11 @@
from diffusers.models.attention_processor import Attention, AttnProcessor, XFormersAttnProcessor, AttnProcessor2_0
from diffusers.utils.import_utils import is_xformers_available

import comfy.utils

def callback_update_comfy_bar(pipe, step_index, timestep, callback_kwargs):
pipe.comfy_pbar.update_absolute(step_index + 1)
return callback_kwargs

def to_rgb_image(maybe_rgba: Image.Image):
if maybe_rgba.mode == 'RGB':
@@ -380,6 +385,9 @@ def __call__(
cak = dict(cond_lat=cond_lat)
if hasattr(self.unet, "controlnet"):
cak['control_depth'] = depth_image

self.comfy_pbar = comfy.utils.ProgressBar(num_inference_steps)

latents: torch.Tensor = super().__call__(
None,
*args,
@@ -391,6 +399,8 @@
output_type='latent',
width=width,
height=height,
callback_on_step_end=callback_update_comfy_bar,
callback_on_step_end_tensor_inputs=[],
**kwargs
).images
latents = unscale_latents(latents)
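The progress-bar hook relies on diffusers' `callback_on_step_end` argument, which the pipeline calls after every denoising step with `(pipe, step_index, timestep, callback_kwargs)`. A standalone sketch of the same pattern, using a stand-in for `comfy.utils.ProgressBar` and an illustrative `FakePipeline` loop in place of the real diffusers pipeline (both are assumptions for demonstration, not ComfyUI or diffusers code):

```python
class ProgressBar:
    """Stand-in for comfy.utils.ProgressBar: tracks absolute progress toward a total."""
    def __init__(self, total):
        self.total = total
        self.current = 0

    def update_absolute(self, value):
        self.current = min(value, self.total)

def callback_update_comfy_bar(pipe, step_index, timestep, callback_kwargs):
    # Same shape as the callback registered in the commit above:
    # advance the bar to the just-completed step, pass kwargs through unchanged.
    pipe.comfy_pbar.update_absolute(step_index + 1)
    return callback_kwargs

class FakePipeline:
    """Illustrative loop mimicking how a diffusers pipeline invokes the step-end callback."""
    def __call__(self, num_inference_steps, callback_on_step_end=None):
        self.comfy_pbar = ProgressBar(num_inference_steps)
        kwargs = {}
        for i, t in enumerate(range(num_inference_steps, 0, -1)):
            # ...one denoising step would run here...
            if callback_on_step_end is not None:
                kwargs = callback_on_step_end(self, i, t, kwargs)
        return self.comfy_pbar.current

pipe = FakePipeline()
steps_done = pipe(num_inference_steps=5, callback_on_step_end=callback_update_comfy_bar)
```

Passing `callback_on_step_end_tensor_inputs=[]`, as the commit does, asks diffusers not to hand any latent tensors to the callback, since the progress bar only needs the step index.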
