Recursive neural networks are similar to recurrent neural networks in that both repeatedly apply the same network weights to update internal representations. However, instead of operating over a linear sequence, the topology of the network is determined dynamically for each input example by a specified tree structure (e.g., a parse tree). Figure 1 of Tai, Socher, and Manning (2015) summarizes this nicely:
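The core idea — the same combination weights applied at every internal node of a per-example tree — can be sketched in a few lines of NumPy. This is an illustrative toy, not this repo's API; the tree encoding (nested tuples for internal nodes, integer leaf indices) is an assumption made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                # hidden size (illustrative)
W = rng.standard_normal((2 * D, D))  # ONE set of weights, shared by every node
b = np.zeros(D)

def combine(left, right):
    """Merge two child representations with the shared weights."""
    return np.tanh(np.concatenate([left, right]) @ W + b)

def encode(tree, embeddings):
    """tree: an int (leaf word index) or a (left, right) tuple (internal node).
    The recursion mirrors the tree, so the network topology differs per input."""
    if isinstance(tree, tuple):
        return combine(encode(tree[0], embeddings), encode(tree[1], embeddings))
    return embeddings[tree]

# e.g. the parse ((0, 1), 2) for a three-word sentence
emb = rng.standard_normal((3, D))
h = encode(((0, 1), 2), emb)
print(h.shape)  # (4,)
```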
- Support for (dynamically) passing features to each node of the tree.
- An implementation of Child-Sum Tree-LSTMs from Tai, Socher, and Manning (2015) (`tree_lstm.py`).
- See the minimal `main` in both files for API examples.
- Make your own tree recursive nets! A base class that can be extended by overriding the `_combine_inner` function. See the in-line comments in `recursive_nn.py` for info about inputs/outputs.
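For reference, a single Child-Sum Tree-LSTM update from Tai, Socher, and Manning (2015) can be sketched as below. This is a minimal NumPy rendering of the paper's equations, not the code in `tree_lstm.py`; the parameter layout and function names are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
X, H = 3, 4  # input and hidden sizes (illustrative)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One (W, U, b) triple per gate -- input, forget, output, update --
# shared across all nodes of the tree.
params = {g: (rng.standard_normal((X, H)) * 0.1,
              rng.standard_normal((H, H)) * 0.1,
              np.zeros(H)) for g in "ifou"}

def child_sum_cell(x, child_h, child_c):
    """One Child-Sum Tree-LSTM node update.

    x: (X,) input at this node; child_h, child_c: (K, H) states of K children.
    """
    def gate(g, h):
        W, U, b = params[g]
        return x @ W + h @ U + b

    h_sum = child_h.sum(axis=0)          # sum of child hidden states
    i = sigmoid(gate("i", h_sum))        # input gate
    o = sigmoid(gate("o", h_sum))        # output gate
    u = np.tanh(gate("u", h_sum))        # candidate cell value
    f = sigmoid(gate("f", child_h))      # one forget gate PER child, (K, H)
    c = i * u + (f * child_c).sum(axis=0)
    h = o * np.tanh(c)
    return h, c

x = rng.standard_normal(X)
kids_h = rng.standard_normal((2, H))
kids_c = rng.standard_normal((2, H))
h, c = child_sum_cell(x, kids_h, kids_c)
print(h.shape, c.shape)  # (4,) (4,)
```

The per-child forget gate is the distinguishing feature: each child's memory can be kept or dropped independently, while the other gates see only the sum of child hidden states.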
- I built this on TensorFlow 2.2, but it will probably work with TensorFlow 2.1 or later.
- I have not thoroughly tested this implementation; bugfixes are welcome!
- There may be more efficient and clever ways of implementing this functionality; I mostly wanted to play with the `dynamic=True` functionality in `tf.keras`.
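Passing `dynamic=True` to a `tf.keras.layers.Layer` tells Keras the layer must run eagerly rather than be traced into a graph, which is what makes data-dependent Python control flow (like walking a per-example tree) possible. A minimal sketch, unrelated to this repo's layers (the layer name and loop are invented for illustration):

```python
import tensorflow as tf

class SumRows(tf.keras.layers.Layer):
    """Toy dynamic layer: a Python-side loop stands in for the kind of
    data-dependent control flow (e.g. tree traversal) that cannot be
    traced into a static graph."""

    def __init__(self, **kwargs):
        # dynamic=True forces eager execution of call() on every invocation
        super().__init__(dynamic=True, **kwargs)

    def call(self, inputs):
        total = tf.zeros_like(inputs[0])
        for row in tf.unstack(inputs):  # plain Python loop, never traced
            total = total + row
        return total

out = SumRows()(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
print(out.numpy())  # [4. 6.]
```

The trade-off is speed: a dynamic layer cannot benefit from graph compilation, which is presumably why more efficient (but more involved) implementations exist.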