The capabilities of natural neural systems have inspired new generations of
machine learning algorithms as well as neuromorphic very large-scale integrated
(VLSI) circuits capable of fast, low-power information processing.
We present a simple yet faster training algorithm called Zeroth-Order Relaxed Backpropagation (ZORB).
To illustrate the speed-up, we trained a feed-forward neural network with 11 layers on MNIST and observed that ZORB converged 300 times faster than Adam while achieving a comparable error rate, without any hyperparameter tuning.
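For a concrete sense of the gradient-free, closed-form flavour of training that such methods build on, here is a minimal sketch (an illustration with assumed shapes and data, not the authors' exact procedure): a single linear layer is fitted to a target in one shot by least squares via the Moore-Penrose pseudoinverse, with no gradients, learning rates, or tuning involved.

```python
import numpy as np

# Minimal sketch: closed-form, gradient-free fit of one linear layer.
# Shapes and data are illustrative assumptions, not the paper's setup.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))   # layer input  (batch x in_features)
T = rng.standard_normal((256, 10))   # target output for this layer (batch x out_features)

W = np.linalg.pinv(X) @ T            # one-shot least-squares solution, no gradients
print(np.mean((X @ W - T) ** 2))     # residual of the fit
```

If layers can be fitted in closed form like this rather than iterated over many epochs, large speed-ups over iterative optimisers such as Adam become plausible, which is the kind of gain reported above.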
It is widely believed that the backpropagation algorithm is essential for
learning good feature detectors in early layers of artificial neural networks,
so that these detectors are useful for the task
We describe recurrent neural networks (RNNs), which have attracted great
attention for sequential tasks such as handwriting recognition, speech
recognition, and image-to-text. However, compared to gene
The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight-sharing requirement, it does not provide a plausible model of brain function.
Here, in the context of a two-layer network, we derive an algorithm for training a neural network that avoids this problem by not requiring explicit error computation and backpropagation.
Deep learning and quantum computing have achieved dramatic progress in
recent years. The interplay between these two fast-growing fields gives rise to
a new research frontier of quantum machine learning.
Deep Neural Networks are successful but highly computationally expensive
learning systems. One of the main sources of time and energy drains is the
well-known backpropagation (backprop) algorithm, whi
Predictive coding has been offered as a potentially more biologically realistic alternative to backpropagation for training feedforward artificial neural networks on supervised learning tasks.
I discuss some implications of these results for the interpretation of predictive coding and deep neural networks as models of biological learning, and I describe a repository of functions, torch2pc, for performing predictive coding with PyTorch neural network models.
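For readers who want the mechanics, the following is a minimal, generic predictive-coding sketch in PyTorch (illustrative layer widths and names; this is not the torch2pc API): value nodes are relaxed by gradient descent on the summed squared prediction errors while the input and output nodes are clamped, and the weights are then updated from the settled errors, so every update is local to a layer.

```python
import torch

# Generic predictive-coding sketch (assumed sizes; not the torch2pc interface).
torch.manual_seed(0)
sizes = [784, 128, 10]
Ws = [torch.randn(o, i) * 0.05 for i, o in zip(sizes[:-1], sizes[1:])]
for W in Ws:
    W.requires_grad_(True)

def energy(xs):
    # E = 0.5 * sum_l || x_{l+1} - W_l tanh(x_l) ||^2  (summed prediction errors)
    return sum(0.5 * ((xs[l + 1] - torch.tanh(xs[l]) @ W.T) ** 2).sum()
               for l, W in enumerate(Ws))

def pc_step(x_in, target, n_infer=20, lr_x=0.1, lr_w=0.01):
    xs = [x_in]
    for W in Ws:                              # forward pass initialises the value nodes
        xs.append(torch.tanh(xs[-1]) @ W.T)
    xs[-1] = target                           # clamp the output node to the label
    xs = [x.detach().requires_grad_(0 < l < len(Ws)) for l, x in enumerate(xs)]
    hidden = [x for x in xs if x.requires_grad]
    for _ in range(n_infer):                  # relax hidden nodes on the energy
        grads = torch.autograd.grad(energy(xs), hidden)
        with torch.no_grad():
            for x, g in zip(hidden, grads):
                x -= lr_x * g
    gW = torch.autograd.grad(energy(xs), Ws)  # weight update from settled errors
    with torch.no_grad():
        for W, g in zip(Ws, gW):
            W -= lr_w * g

x = torch.randn(32, 784)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (32,)), 10).float()
pc_step(x, y)
```

Under suitable conditions the weight updates obtained this way closely track those of backpropagation, which is the relationship the results above concern.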
We propose proximal backpropagation (ProxProp) as a novel algorithm that
takes implicit instead of explicit gradient steps to update the network
parameters during neural network training. Our algorith
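To make the explicit/implicit distinction concrete (a toy least-squares sketch under assumed shapes, not the ProxProp update itself): an explicit step moves along the gradient evaluated at the current parameters, while an implicit (proximal) step solves a small regularised subproblem, which for a quadratic objective amounts to a linear solve and remains stable for any positive step size.

```python
import numpy as np

# Explicit vs implicit (proximal) gradient step on f(w) = 0.5 * ||A w - b||^2.
# The proximal step solves argmin_z f(z) + ||z - w||^2 / (2 tau), i.e.
# (I + tau A^T A) z = w + tau A^T b.  The explicit step needs
# tau < 2 / lambda_max(A^T A) to stay stable; the implicit step is stable
# for every tau > 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
w = np.zeros(5)
tau = 0.5

explicit = w - tau * A.T @ (A @ w - b)
implicit = np.linalg.solve(np.eye(5) + tau * A.T @ A, w + tau * A.T @ b)
print(explicit)
print(implicit)
```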
Modern problems in AI or in numerical analysis require nonsmooth approaches
with a flexible calculus. We introduce generalized derivatives called
conservative fields for which we develop a calculus an
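As background, one commonly used formulation of this notion (the paper's exact statement may differ): a set-valued map D from R^p into subsets of R^p, with closed graph and nonempty compact values, is a conservative field for a locally Lipschitz function f when, along every absolutely continuous curve gamma on [0, 1], the map t -> f(gamma(t)) is absolutely continuous and a chain rule holds:

```latex
% Chain rule along curves characterising a conservative field D for f
% (background formulation; stated here as an assumption about the standard definition).
\frac{\mathrm{d}}{\mathrm{d}t}\, f(\gamma(t))
  \;=\; \bigl\langle v,\ \dot{\gamma}(t) \bigr\rangle
  \qquad \text{for all } v \in D(\gamma(t)), \ \text{for almost every } t \in [0,1].
```

Roughly speaking, gradients of smooth functions and the outputs of automatic differentiation applied to compositions of elementary functions behave as conservative fields for the composite, which is what a flexible nonsmooth calculus of this kind is meant to capture.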
The on-chip implementation of learning algorithms would speed up the training
of neural networks in crossbar arrays. The circuit-level design and
implementation of the backpropagation algorithm using grad
Deep neural networks are powerful parametric models that can be trained
efficiently using the backpropagation algorithm. Stochastic neural networks
combine the power of large parametric functions with
The brain cortex, which processes visual, auditory, and sensory data, is known
to have many recurrent connections within its layers and from higher to lower
layers. But, in the case of mac
Learning in biological and artificial neural networks is often framed as a
problem in which targeted error signals guide parameter updating for more
optimal network behaviour. Backpropagation of error
Despite being the workhorse of deep learning, the backpropagation algorithm
is no panacea. It enforces sequential layer updates, thus preventing efficient
parallelization of the training process. Furt
Training of autoencoders using the backpropagation algorithm is challenging
for non-differentiable channel models or in an experimental environment where
gradients cannot be computed. In this paper, we
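A common workaround in this situation, sketched below with a hypothetical hard-quantisation channel and illustrative shapes (not necessarily the approach of the paper), is a zeroth-order estimate of the end-to-end gradient: the channel is treated as a black box, and only loss evaluations, not derivatives, are required.

```python
import numpy as np

# Zeroth-order (SPSA-style) training through a non-differentiable black-box channel.
# The channel, encoder, decoder, and sizes below are illustrative assumptions.
rng = np.random.default_rng(0)

def channel(z):
    # Hard 1-bit quantisation plus noise: no useful gradient exists here.
    return np.sign(z) + 0.1 * rng.standard_normal(z.shape)

def end_to_end_loss(theta, x):
    z = np.tanh(x @ theta)          # encoder with parameters theta
    y = channel(z)                  # black box between encoder and decoder
    x_hat = y @ theta.T             # simple tied-weight linear decoder
    return np.mean((x_hat - x) ** 2)

def spsa_grad(theta, x, eps=1e-2):
    # Two loss evaluations give a stochastic estimate of the full gradient.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    l_plus = end_to_end_loss(theta + eps * delta, x)
    l_minus = end_to_end_loss(theta - eps * delta, x)
    return (l_plus - l_minus) / (2 * eps) * delta

theta = 0.1 * rng.standard_normal((8, 4))   # 8-dim input, 4-dim channel code
for _ in range(200):
    x = rng.standard_normal((64, 8))
    theta -= 0.05 * spsa_grad(theta, x)
```

Because the channel is stochastic, the two evaluations see different noise realisations, which inflates the variance of the estimate; averaging several perturbations, or reusing the same noise where the experiment allows it, helps in practice.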
In this paper, a straightforward enhancement learning algorithm based on the
Separation Index (SI) concept is proposed for Convolutional Neural Networks
(CNNs). First, the SI as a supervised complexity
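For reference, here is a minimal sketch of what such an index measures under the common nearest-neighbour formulation (names, sizes, and the exact definition are assumptions; the paper's formulation may differ): the separation index of a labelled feature set is the fraction of samples whose nearest neighbour carries the same label, so it approaches 1 as the classes become well separated in that feature space.

```python
import numpy as np

def separation_index(features, labels):
    # Fraction of samples whose nearest neighbour (Euclidean, excluding self)
    # has the same label; illustrative nearest-neighbour formulation.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = d.argmin(axis=1)
    return float(np.mean(labels[nearest] == labels))

# Two tight, well-separated clusters: the index should be close to 1.0.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 0.1, (50, 16)), rng.normal(3.0, 0.1, (50, 16))])
labels = np.array([0] * 50 + [1] * 50)
print(separation_index(features, labels))
```

An index of this kind can be evaluated on the activations of intermediate layers, which is how a layer-wise supervised complexity measure is typically used.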
The backpropagation that drives the success of deep learning is most likely
different from the learning mechanism of the brain. In this paper, we develop a
biology-inspired learning rule that discover