Extending Neural Computation Architectures

About this project

Extending Neural Computation Architectures with External Program Stores: Despite recent advances in applying deep learning to program induction, reliably learning anything beyond extremely simple programs with neural networks remains elusive, especially under weak supervision. The central difficulty is inducing a natural computation bias: ensuring that the programs the model learns are the kind a human would write, without making training harder. In this work, we present a model that combines elements of neural program induction and differentiable interpreters, so it can be viewed in two ways: as a simple neural programming architecture augmented with an external program store, in which parts of the program are trained directly in a natural programming language, or as a differentiable natural programming language augmented with a simple neural programming architecture that handles some aspects of the learned program. Our model learns significantly more complex programs than similar methods have managed previously, and is both simpler to understand and easier to train than comparable models in either neural programming or differentiable interpreters.
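To make the differentiable-interpreter side of this idea concrete, here is a minimal sketch, not the project's actual model: a "program store" represented as per-step logits over a small set of primitive operations. Because each step executes a softmax-weighted mixture of primitives rather than a hard choice, the whole execution is differentiable, so the stored program can be trained by gradient descent alongside any neural components. The operation set, program length, and function names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical primitive operations over a scalar machine state.
OPS = [
    lambda s: s + 1.0,   # inc
    lambda s: s - 1.0,   # dec
    lambda s: s * 2.0,   # double
]

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def soft_run(logits, state):
    """Differentiable execution of the stored program.

    `logits` has shape (T, len(OPS)): one row of operation logits per
    program step. Each step replaces the state with the probability-
    weighted mixture of every primitive applied to the current state.
    """
    for step_logits in logits:
        p = softmax(step_logits)
        state = sum(pi * op(state) for pi, op in zip(p, OPS))
    return state

def hard_run(choices, state):
    """Discrete reference execution: follow an explicit opcode list."""
    for c in choices:
        state = OPS[c](state)
    return state
```

As the logits sharpen toward one-hot choices, the soft execution converges to the discrete program; for example, sharp logits selecting `inc` then `double` map an input of 3.0 to approximately 8.0, matching `hard_run([0, 2], 3.0)`. In a trained system the logits (the external program store) would be optimized end to end through `soft_run`, which is the property that lets gradient-based learning reach into the program itself.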
