Theory of Generative Reservoirs (F. Reinhart)
Reservoir computing combines a recurrent neural network with a linear read-out layer. The reservoir network serves as a high-dimensional, temporal feature generator from which input-to-output mappings can be learned efficiently. The reservoir approach is biologically plausible in that it resembles the structure of the cerebellum. Besides the excellent generalization of reservoirs to new data, reservoir computing copes well with online learning on temporally ordered data, e.g. time series. These capabilities are crucial for systems, such as robots, that have to perform in the real world.
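The following is a minimal sketch of this setup under standard echo state network conventions; the parameter names (n_res, the spectral radius 0.9, the ridge term) are illustrative choices, not taken from the project itself. Only the linear read-out is trained; the recurrent reservoir weights stay fixed.

```python
# Minimal echo state network sketch (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Random, fixed reservoir weights, rescaled to spectral radius < 1
# so the reservoir has fading memory of past inputs.
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence; collect the states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)  # high-dimensional temporal features
        states.append(x.copy())
    return np.array(states)

# Only the linear read-out is trained, here by ridge regression.
t = np.linspace(0, 8 * np.pi, 400)
inputs = np.sin(t).reshape(-1, 1)
targets = np.cos(t).reshape(-1, 1)
X = run_reservoir(inputs)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ targets)
predictions = X @ W_out
```

Because training reduces to a linear regression on the reservoir states, the read-out can also be updated online, sample by sample, which is what makes the approach attractive for learning on time series.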
Recently, we showed that bidirectional mappings can be learned simultaneously in a single network and in a generative fashion (Reinhart & Steil, 2008). The key ingredient for generative learning is feedback of the network output into the reservoir. As a first step, this project investigates the effects of output feedback. Furthermore, the kind of encoding generated by reservoir networks is analyzed theoretically. We aim at a deeper understanding of reservoir computing and, grounded in these theoretical insights, at reliable network performance that meets the requirements of advanced robotic applications.
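The sketch below illustrates the output-feedback mechanism in the common echo state network formulation: the read-out y(t) is fed back into the reservoir through a feedback matrix W_fb, so the network can run generatively on its own output in closed loop. All names and values are illustrative assumptions, not the formulation of Reinhart & Steil (2008).

```python
# Closed-loop generation via output feedback (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
n_res, n_out = 100, 1
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_fb = rng.uniform(-0.5, 0.5, (n_res, n_out))
W_out = rng.uniform(-0.1, 0.1, (n_out, n_res))  # would normally be trained

def generate(n_steps, y0=np.zeros(n_out)):
    """Run the network generatively: each state update uses the previous output."""
    x, y = np.zeros(n_res), y0
    outputs = []
    for _ in range(n_steps):
        x = np.tanh(W @ x + W_fb @ y)  # output feedback drives the reservoir
        y = W_out @ x                  # linear read-out
        outputs.append(y.copy())
    return np.array(outputs)

trajectory = generate(200)
```

With feedback in the loop, the output is part of the network's own dynamics rather than a passive read-out, which is precisely why its effects on stability and encoding warrant the theoretical analysis this project undertakes.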