
Commit

Merge branch 'release/v0.4'
cheind committed Feb 2, 2021
2 parents bf5ee25 + 5e58154 commit 31a2768
Showing 76 changed files with 2,139 additions and 283 deletions.
5 changes: 2 additions & 3 deletions .gitignore
@@ -103,8 +103,7 @@ venv.bak/
# mypy
.mypy_cache/

tmp/*
examples/datagen/tmp/*
**/tmp/*
*.blend1

**/tmp/*
!__keep__
3 changes: 3 additions & 0 deletions .travis.yml
@@ -1,8 +1,11 @@

dist: xenial
language: python
python:
- 3.7
- 3.8
services:
- xvfb

cache:
pip: true
68 changes: 40 additions & 28 deletions Readme.md
@@ -1,31 +1,55 @@
# blendtorch v0.2
![](https://travis-ci.org/cheind/pytorch-blender.svg?branch=develop)
# blendtorch
[![](https://travis-ci.org/cheind/pytorch-blender.svg?branch=develop)](https://travis-ci.org/cheind/pytorch-blender)

**blendtorch** is a Python framework to seamlessly integrate [Blender](http://blender.org) into [PyTorch](http://pytorch.org) datasets for deep learning from artificial visual data. We utilize Eevee, a new physically based real-time renderer, to synthesize images and annotations in real-time and thus avoid stalling model training in many cases.

Feature summary
- ***Data Streaming***: Stream distributed Blender renderings directly into PyTorch data pipelines in real-time for supervised learning and domain randomization applications. Supports arbitrary pickle-able objects to be send alongside images/videos. Built-in recording capability to replay data without Blender.</br>More info [\[examples/datagen\]](examples/datagen)
- ***Data Streaming***: Stream distributed Blender renderings directly into PyTorch data pipelines in real-time for supervised learning and domain randomization applications. Supports arbitrary pickle-able objects to be sent alongside images/videos. Built-in recording capability to replay data without Blender. Bi-directional communication channels allow Blender simulations to adapt during network training (a minimal PyTorch-side sketch follows after this list). </br>More info [\[examples/datagen\]](examples/datagen), [\[examples/compositor_normals_depth\]](examples/compositor_normals_depth), [\[examples/densityopt\]](examples/densityopt)
- ***OpenAI Gym Support***: Create and run remotely controlled Blender gyms to train reinforcement agents. Blender serves as simulation, visualization, and interactive live manipulation environment.
</br>More info [\[examples/control\]](examples/control)
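
The following sketch illustrates the PyTorch side of data streaming. It is only a minimal outline: `cube.blend`/`cube.blend.py` are placeholders for a scene/script pair such as those in [\[examples/datagen\]](examples/datagen), and the `image` key assumes the Blender-side script publishes images under that name.

```python
from pathlib import Path

from torch.utils import data

import blendtorch.btt as btt


def main():
    launch_args = dict(
        scene=Path(__file__).parent/'cube.blend',      # placeholder scene file
        script=Path(__file__).parent/'cube.blend.py',  # placeholder Blender-side script
        num_instances=2,                               # parallel Blender processes
        named_sockets=['DATA'],
    )
    with btt.BlenderLauncher(**launch_args) as bl:
        addr = bl.launch_info.addresses['DATA']
        ds = btt.RemoteIterableDataset(addr, max_items=16)
        dl = data.DataLoader(ds, batch_size=4, num_workers=0)
        for item in dl:
            print(item['image'].shape)  # images rendered and published by Blender


if __name__ == '__main__':
    main()
```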

The figure below visualizes a single image/label batch received by PyTorch from four parallel Blender instances. Each Blender process repeatedly performs motion simulations of randomized cubes.
The figure below visualizes the basic concept of **blendtorch** used in the context of generating artificial training data for a real-world detection task.

<p align="center">
<img src="etc/result_physics.png" width="500">
</p>
<div align="center">
<img src="etc/blendtorch_intro_v3.svg" width="90%">
</div>

## Getting started
1. Read the installation instructions below
1. To get started with **blendtorch** for training data generation, read [\[examples/datagen\]](examples/datagen).
1. To learn about using **blendtorch** for creating reinforcement learning environments, read [\[examples/control\]](examples/control).

## Cite
The code accompanies our academic work [[1]](https://arxiv.org/abs/1907.01879), [[2]](https://arxiv.org/abs/2010.11696) in the field of machine learning from artificial images. Please consider the following publications when citing **blendtorch**:
```
@inproceedings{robotpose_etfa2019_cheind,
author={Christoph Heindl and Sebastian Zambal and Josef Scharinger},
title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
booktitle={
24th IEEE International Conference on
Emerging Technologies and Factory Automation (ETFA)
},
year={2019}
}
@inproceedings{blendtorch_icpr2020_cheind,
author = {Christoph Heindl and Lukas Brunner and Sebastian Zambal and Josef Scharinger},
title = {BlendTorch: A Real-Time, Adaptive Domain Randomization Library},
booktitle = {
1st Workshop on Industrial Machine Learning
at International Conference on Pattern Recognition (ICPR2020)
},
year = {2020},
}
```

## Installation

**blendtorch** is composed of two distinct sub-packages: `blendtorch.btt` (in [pkg_pytorch](./pkg_pytorch)) and `blendtorch.btb` (in [pkg_blender](./pkg_blender)), providing the PyTorch and Blender views on **blendtorch**.
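
To illustrate the split, here is a rough sketch of the Blender-side view; it runs inside a Blender process started by `btt.BlenderLauncher`, and the `image` key and placeholder array are assumptions for illustration only.

```python
# Runs inside Blender (blendtorch.btb view); the PyTorch process uses blendtorch.btt instead.
import numpy as np

import blendtorch.btb as btb

# Socket addresses and the instance id are injected by the blendtorch launcher.
btargs, _ = btb.parse_blendtorch_args()
pub = btb.DataPublisher(btargs.btsockets['DATA'], btargs.btid)

img = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder for a rendered image
pub.publish(image=img, frameid=0)              # arbitrary pickle-able payload
```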

### Prerequisites
This package has been tested with
- [Blender](https://www.blender.org/) >= 2.83 (Python 3.7)
- [Blender](https://www.blender.org/) >= 2.83/2.91 (Python 3.7)
- [PyTorch](http://pytorch.org) >= 1.50 (Python 3.7/3.8)
running on Windows 10 and Linux.

@@ -39,9 +39,12 @@ git clone https://github.com/cheind/pytorch-blender.git <DST>
### Extend `PATH`
Ensure the Blender executable is in your environment's `PATH`. On Windows this can be accomplished by
```
set PATH=c:\Program Files\Blender Foundation\Blender 2.83;%PATH%
set PATH=c:\Program Files\Blender Foundation\Blender 2.91;%PATH%
```

### Complete Blender settings
Open Blender at least once and complete the initial setup. If this step is skipped, some of the tests (especially those related to reinforcement learning) will fail (Blender 2.91).

### Install **blendtorch** Blender part
```
blender --background --python <DST>/scripts/install_btb.py
Expand All @@ -56,6 +83,7 @@ installs `blendtorch-btt` into the Python environment that you intend to run PyT
```
pip install gym
```

### Developer instructions
This step is optional. If you plan to run the unit tests
```
@@ -79,27 +107,11 @@ python -c "import blendtorch.btt as btt; print(btt.__version__)"
which should print the **blendtorch** version number on success.

## Architecture
Please see [\[examples/datagen\]](examples/datagen) and [examples/control\]](examples/control) for an in-depth architectural discussion.

## Cite
The code accompanies our [academic work](https://arxiv.org/abs/1907.01879) in the field of machine learning from artificial images. When using please cite the following work
```
@inproceedings{robotpose_etfa2019_cheind,
author={Christoph Heindl and Sebastian Zambal and Josef Scharinger},
title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
booktitle={
24th IEEE International Conference on
Emerging Technologies and Factory Automation (ETFA)
},
year={2019},
pages={1536-1539},
doi={10.1109/ETFA.2019.8868243},
isbn={978-1-7281-0303-7},
}
```
Please see [\[examples/datagen\]](examples/datagen) and [\[examples/control\]](examples/control) for an in-depth architectural discussion. Bi-directional communication is explained in [\[examples/densityopt\]](examples/densityopt).

## Runtimes
The following tables show the mean runtimes per batch (8) and per image for a simple Cube scene (640x480xRGBA). See [benchmarks/benchmark.py](./benchmarks/benchmark.py) for details. The timings include rendering, transfer, decoding and batch collating.

The following tables show the mean runtimes per batch (8) and per image for a simple Cube scene (640x480xRGBA). See [benchmarks/benchmark.py](./benchmarks/benchmark.py) for details. The timings include rendering, transfer, decoding and batch collating. Reported timings are for Blender 2.8. Blender 2.9 performs equally well on this scene, but is usually faster for more complex renderings.

| Blender Instances | Runtime sec/batch | Runtime sec/image | Arguments|
|:-:|:-:|:-:|:-:|
25 changes: 22 additions & 3 deletions benchmarks/benchmark.py
@@ -2,17 +2,20 @@
import argparse
from pathlib import Path
import torch.utils.data as data
import matplotlib.pyplot as plt
import numpy as np

from blendtorch import btt

BATCH = 8
INSTANCES = 4
WORKER_INSTANCES = 2
WORKER_INSTANCES = 4
NUM_ITEMS = 512
EXAMPLES_DIR = Path(__file__).parent/'..'/'examples'/'datagen'

def main():
parser = argparse.ArgumentParser()
parser.add_argument('--scene', help='Blender scene name to run', default='cube')
parser.add_argument('scene', help='Blender scene name to run', default='cube')
args = parser.parse_args()

launch_args = dict(
@@ -31,20 +34,36 @@ def main():
time.sleep(5)

t0 = None
tlast = None
imgshape = None

elapsed = []
n = 0
for item in dl:
n += len(item['image'])
if t0 is None: # 1st is warmup
t0 = time.time()
tlast = t0
imgshape = item['image'].shape
n += len(item['image'])
elif n % (50*BATCH) == 0:
t = time.time()
elapsed.append(t - tlast)
tlast = t
print('.', end='')
assert n == NUM_ITEMS

t1 = time.time()
N = NUM_ITEMS - BATCH
B = NUM_ITEMS//BATCH - 1
print(f'Time {(t1-t0)/N:.3f}sec/image, {(t1-t0)/B:.3f}sec/batch, shape {imgshape}')

fig, _ = plt.subplots()
plt.plot(np.arange(len(elapsed)), elapsed)
plt.title('Receive times between 50 consecutive batches')
save_path = EXAMPLES_DIR / 'tmp' / 'batches_elapsed.png'
fig.savefig(str(save_path))
plt.close(fig)
print(f'Figure saved to {save_path}')

if __name__ == '__main__':
main()
1 change: 1 addition & 0 deletions etc/blendtorch_intro_v3.svg
File not shown.
3 changes: 1 addition & 2 deletions etc/export_paths.bat
@@ -1,4 +1,3 @@
@echo off
set PATH=c:\Program Files\Blender Foundation\Blender 2.83;%PATH%
set PYTHONPATH=%~dp0..\pkg_blender;%~dp0..\pkg_pytorch;%PYTHONPATH%
set PATH=c:\Program Files\Blender Foundation\Blender 2.90;%PATH%
@echo on
Binary file removed etc/result.png
Binary file not shown.
23 changes: 23 additions & 0 deletions examples/compositor_normals_depth/Readme.md
@@ -0,0 +1,23 @@
## Compositor Render Support

This directory showcases synthetic data generation using **blendtorch** for supervised machine learning. In particular, we use composite rendering to extract normals and depths from a randomized scene. The scene is composed of a fixed plane and a number of parametric 3D supershapes. Using physics, we drop a random initial constellation of objects onto the plane. Once the objects come to rest (we speed up the physics, so this roughly happens after a single frame), we publish dense camera depth and normal information.

<p align="center">
<img src="etc/normals_depth.png" width="500">
</p>

### Composite rendering
This sample uses the compositor to access different render passes. Unfortunately, Blender (2.9) does not offer a straightforward way to access the results of render passes in memory. Therefore, `btb.CompositeRenderer` requires `FileOutput` nodes for temporary storage of the data. For this purpose, a fast OpenEXR reader, [py-minexr](https://github.com/cheind/py-minexr), was developed and integrated into **blendtorch**.
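
As a rough sketch (assuming the scene's compositor already contains a `FileOutput` node named `Out1` with `Normals` and `Depth` slots, as in the accompanying script), each `CompositeSelection` maps a published key to a node, slot and channel set. In practice the render call is issued from a frame callback, as in the script included with this example.

```python
import blendtorch.btb as btb

btargs, _ = btb.parse_blendtorch_args()  # injected by the blendtorch launcher

render = btb.CompositeRenderer(
    [
        # published key, FileOutput node name, slot name, channels
        btb.CompositeSelection('normals', 'Out1', 'Normals', 'RGB'),
        btb.CompositeSelection('depth', 'Out1', 'Depth', 'V'),
    ],
    btid=btargs.btid,
    camera=btb.Camera(),
)
imgs = render.render()  # {'normals': ..., 'depth': ...}, read back via py-minexr
```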

### Normals
Camera normals are generated by a custom geometry-based material. Since colors must be in the range (0,1) but normals are in (-1,1), a transformation is applied to make them compatible with color ranges. Hence, in PyTorch, apply the following transformation to recover the true normals
```python
true_normals = (normals - 0.5)*np.array([2., 2., -2.]).reshape(1,1,1,-1) # BxHxWx3
```
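
A small round-trip check of this coding is sketched below; the encoding line mirrors what the material is assumed to produce, i.e. the inverse of the transformation above.

```python
import numpy as np

# Round-trip check: encode unit camera normals to color range (0,1),
# then decode with the transformation above; arrays are BxHxWx3.
rng = np.random.default_rng(0)
n = rng.normal(size=(4, 8, 8, 3))
n /= np.linalg.norm(n, axis=-1, keepdims=True)                  # true unit normals
colors = n * np.array([0.5, 0.5, -0.5]) + 0.5                   # assumed material output
true_normals = (colors - 0.5) * np.array([2., 2., -2.]).reshape(1, 1, 1, -1)
assert np.allclose(true_normals, n)
```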

### Run

To recreate these results run [generate.py](./generate.py)
```
python generate.py
```
Binary file not shown.
@@ -0,0 +1,64 @@

import blendtorch.btb as btb
import numpy as np
import bpy

SHAPE = (30, 30)
NSHAPES = 70


def main():
# Update python-path with current blend file directory
btb.add_scene_dir_to_path()
import scene_helpers as scene

def pre_anim(meshes):
# Called before each animation
# Randomize supershapes
for m in meshes:
scene.update_mesh(m, sshape_res=SHAPE)

def post_frame(render, pub, animation):
# After frame
if anim.frameid == 1:
imgs = render.render()
pub.publish(
normals=imgs['normals'],
depth=imgs['depth']
)

# Parse script arguments passed via blendtorch launcher
btargs, _ = btb.parse_blendtorch_args()

# Fetch camera
cam = bpy.context.scene.camera

bpy.context.scene.rigidbody_world.time_scale = 100
bpy.context.scene.rigidbody_world.substeps_per_frame = 300

# Setup supershapes
meshes = scene.prepare(NSHAPES, sshape_res=SHAPE)

# Data source
pub = btb.DataPublisher(btargs.btsockets['DATA'], btargs.btid)

# Setup default image rendering
cam = btb.Camera()
render = btb.CompositeRenderer(
[
btb.CompositeSelection('normals', 'Out1', 'Normals', 'RGB'),
btb.CompositeSelection('depth', 'Out1', 'Depth', 'V'),
],
btid=btargs.btid,
camera=cam,
)

# Setup the animation and run endlessly
anim = btb.AnimationController()
anim.pre_animation.add(pre_anim, meshes)
anim.post_frame.add(post_frame, render, pub, anim)
anim.play(frame_range=(0, 1), num_episodes=-1,
use_offline_render=False, use_physics=True)


main()
48 changes: 48 additions & 0 deletions examples/compositor_normals_depth/generate.py
@@ -0,0 +1,48 @@
from pathlib import Path

import blendtorch.btt as btt
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch.utils import data


def main():
# Define how we want to launch Blender
launch_args = dict(
scene=Path(__file__).parent/'compositor_normals_depth.blend',
script=Path(__file__).parent/'compositor_normals_depth.blend.py',
num_instances=1,
named_sockets=['DATA'],
)

# Launch Blender
with btt.BlenderLauncher(**launch_args) as bl:
# Create remote dataset and limit max length to 4 elements.
addr = bl.launch_info.addresses['DATA']
ds = btt.RemoteIterableDataset(addr, max_items=4)
dl = data.DataLoader(ds, batch_size=4, num_workers=0)

for item in dl:
normals = item['normals']
# Note, normals are color-coded (0..1), to convert back to original
# range (-1..1) use
# true_normals = (normals - 0.5) * \
# torch.tensor([2., 2., -2.]).view(1, 1, 1, -1)
depth = item['depth']
print('Received', normals.shape, depth.shape,
depth.dtype, np.ptp(depth))

fig, axs = plt.subplots(2, 2)
axs = np.asarray(axs).reshape(-1)
for i in range(4):
axs[i].imshow(depth[i, :, :, 0], vmin=1, vmax=2.5)
fig, axs = plt.subplots(2, 2)
axs = np.asarray(axs).reshape(-1)
for i in range(4):
axs[i].imshow(normals[i, :, :])
plt.show()


if __name__ == '__main__':
main()
