What is the network analog of a recursive function?

This is an ambitious question from a Wolfram Science Conference: Is there such a thing as a network analog of a recursive function? Maybe a kind of iterative "map-reduce" pattern? If we add interaction to iteration, things become complicated: continuous iteration of a large number of interacting entities can produce very complex results. It would be nice to have a way of seeing the consequences of the myriad interactions that define a complex system. Can we find a counterpart of a recursive function in an iterative network of connected nodes that contains nested propagation loops?
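As a toy illustration of what "nested propagation loops" might mean (the graph, node names, and update rule below are all hypothetical, chosen only for this sketch), a cycle in a directed graph can play the structural role that a self-call plays in a recursive function: after a trip around the loop, a node's output is fed back into its own input.

```python
# Hypothetical sketch: a 3-node propagation loop. Each synchronous step,
# every node recomputes its value from its in-neighbors, so the cycle
# a -> b -> c -> a feeds a node's output back into its own input, the way
# a self-call feeds a recursive function's result back into itself.
edges = {"a": ["b"], "b": ["c"], "c": ["a"]}

def step(values, edges):
    """One synchronous update: each node sums the values of nodes pointing at it."""
    incoming = {n: 0.0 for n in values}
    for src, dsts in edges.items():
        for dst in dsts:
            incoming[dst] += values[src]
    return incoming

values = {"a": 1.0, "b": 0.0, "c": 0.0}
for _ in range(6):          # iteration replaces the recursive call stack
    values = step(values, edges)
print(values)               # the initial impulse circulates around the loop
```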

One of the basic patterns of distributed computation is Map-Reduce: it can be found in Cellular Automata (CA) and Neural Networks (NN). Neurons in a NN collect information through their synapses (reduce) and send the result to other neurons (map). Cells in a CA act similarly: they gather information from their neighbors (reduce), apply a transition rule, and offer the result to their neighbors again (map). Thus, if there is a network analog of a recursive function, then Map-Reduce is certainly an important part of it. What kinds of iterative "map-reduce" patterns exist? Do certain kinds of "map-reduce" patterns result in certain kinds of streams, or even vortices or whirls? Can we formulate a calculus for map-reduce patterns?
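To make the CA half of that description concrete, here is a minimal Python sketch (my own framing, not from the question; Rule 110 is used only as a familiar example): gathering a cell's neighborhood is the reduce-like step, and applying the transition rule and publishing the result back to the neighbors is the map-like step.

```python
# Illustrative sketch of a CA step read as map-reduce. Rule 110 is an
# arbitrary, well-known elementary CA rule chosen for the example.
RULE = 110

def gather(cells, i):
    """Reduce: collect the (left, self, right) neighborhood, wrapping at edges."""
    n = len(cells)
    return (cells[(i - 1) % n], cells[i], cells[(i + 1) % n])

def transition(neigh):
    """Apply the elementary-CA rule to one neighborhood (standard bit encoding)."""
    l, c, r = neigh
    return (RULE >> (l * 4 + c * 2 + r)) & 1

def step(cells):
    """Map: every cell applies the rule to its gathered neighborhood at once."""
    return [transition(gather(cells, i)) for i in range(len(cells))]

cells = [0] * 15 + [1] + [0] * 15   # single live cell in the middle
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```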


I'll take a stab at the question about recursion in neural networks, but I really don't see how map-reduce plays into this at all. I get that neural networks can perform distributed computation and then reduce it to a more local representation, but the term "map-reduce" denotes a very specific brand of this distributed/local piping, mainly associated with Google and Hadoop.

Anyway, the simple answer to your question is that there is no general method for recursion in neural networks; in fact, the closely related and simpler problem of implementing general-purpose role-value bindings in neural networks is still an open question.

The general reason why things like role-binding and recursion are so hard in artificial neural networks (ANNs) is that ANNs are highly interdependent by nature; indeed, that is where most of their computational power comes from. Function calls and variable bindings, by contrast, are sharply delineated operations: what they include is an all-or-nothing affair, and that discreteness is a valuable property in many cases. So implementing one inside the other without sacrificing any computational power is very tricky indeed.
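A toy contrast may help (entirely my own illustration, not from the answer): the same computation written as a recursive function with a clean call boundary, and then unrolled into repeated updates of a shared state vector, which is roughly the form a recurrent network would have to carry it in, with the iteration depth fixed up front.

```python
# Illustrative contrast: the discrete, all-or-nothing structure of a
# recursive call versus the same computation unrolled as repeated updates
# of a shared state, with no call boundary and a depth bounded in advance.
def fact_recursive(n):
    """Each call is a delineated operation with an explicit base case."""
    return 1 if n == 0 else n * fact_recursive(n - 1)

def fact_unrolled(n):
    """State (counter, accumulator) evolves in place; there is no 'return'
    from a sub-call, only the final state after a fixed number of steps."""
    state = (n, 1)
    for _ in range(n):               # depth must be fixed or bounded up front
        k, acc = state
        state = (k - 1, acc * k)
    return state[1]

assert fact_recursive(5) == fact_unrolled(5) == 120
```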

Here is a small sampling of papers that try their hand at partial solutions. Luckily for you, a great many people find this problem very interesting!

Visual Segmentation and the Dynamic Binding Problem: Improving the Robustness of an Artificial Neural Network Plankton Classifier (1993)

A Solution to the Binding Problem for Compositional Connectionism

A (Somewhat) New Solution to the Binding Problem
