This is a PyTorch implementation of Gaussian Mixture Model Convolutional Networks (MoNet) for the tasks of image classification, vertex classification on generic graphs, and dense intrinsic shape correspondence, as described in the paper:
Monti et al, Geometric deep learning on graphs and manifolds using mixture model CNNs (CVPR 2017)
Following the network architectures given in the paper, our implementation produces results comparable to or better than those reported there. Note that for the image classification and shape correspondence tasks we do not use polar pseudo-coordinates, but replace them with relative Cartesian coordinates. This reduces both the computational and memory cost of data preprocessing.
- PyTorch (1.3.0)
- PyTorch Geometric (1.3.0)
MoNet uses a local system of pseudo-coordinates $\mathbf{u}(i, j)$ around each node $i$ to represent its neighborhood, and a family of learnable weighting functions defined w.r.t. $\mathbf{u}$, e.g., Gaussian kernels

$$w_k(\mathbf{u}) = \exp\left(-\tfrac{1}{2}(\mathbf{u} - \boldsymbol{\mu}_k)^{\top} \boldsymbol{\Sigma}_k^{-1} (\mathbf{u} - \boldsymbol{\mu}_k)\right)$$

with learnable mean $\boldsymbol{\mu}_k$ and covariance $\boldsymbol{\Sigma}_k$. The convolution is

$$\mathbf{x}'_i = \frac{1}{|\mathcal{N}(i)|} \sum_{j \in \mathcal{N}(i)} \frac{1}{K} \sum_{k=1}^{K} w_k(\mathbf{u}(i, j)) \, \boldsymbol{\Theta}_k \mathbf{x}_j,$$

where $\boldsymbol{\Theta}_k$ are the learnable filter weights and $\mathbf{x}_j$ is the node feature vector.
We provide an efficient PyTorch implementation of this operator, `GMMConv`, which is accessible from PyTorch Geometric.
python -m image.main
python -m graph.main
python -m correspondence.main
In order to use your own dataset, you can simply create a regular Python list holding `torch_geometric.data.Data` objects and specify the following attributes:

- `data.x`: Node feature matrix with shape `[num_nodes, num_node_features]`
- `data.edge_index`: Graph connectivity in COO format with shape `[2, num_edges]` and type `torch.long`
- `data.edge_attr`: Pseudo-coordinates with shape `[num_edges, pseudo_coordinates_dim]`
- `data.y`: Target to train against
Please cite this paper if you use this code in your own work:
@inproceedings{monti2017geometric,
title={Geometric deep learning on graphs and manifolds using mixture model cnns},
author={Monti, Federico and Boscaini, Davide and Masci, Jonathan and Rodola, Emanuele and Svoboda, Jan and Bronstein, Michael M},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={5115--5124},
year={2017}
}