Keep up with the latest trending papers in computer science, AI, machine learning, and more.

Top Papers on the Backpropagation Algorithm


The Backpropagation Algorithm Implemented on Spiking Neuromorphic Hardware

The capabilities of natural neural systems have inspired new generations of machine learning algorithms as well as neuromorphic very large-scale integrated (VLSI) circuits capable of fast, low-power …
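
For orientation, the textbook backpropagation pass that work like this adapts to spiking hardware can be sketched in plain NumPy. The two-layer network, toy data, and learning rate below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal two-layer network trained by backpropagation.
# Architecture, data, and learning rate are illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))            # 16 toy samples, 4 features
y = rng.normal(size=(16, 1))            # toy regression targets

W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(X):
    h = np.tanh(X @ W1)                 # hidden activations
    return h, h @ W2                    # hidden state and prediction

def mse(pred):
    return float(np.mean((pred - y) ** 2))

initial_loss = mse(forward(X)[1])
for _ in range(200):
    h, pred = forward(X)
    d_pred = 2 * (pred - y) / len(y)    # dL/dpred for the mean-squared error
    dW2 = h.T @ d_pred                  # chain rule into the output weights
    d_h = d_pred @ W2.T * (1 - h ** 2)  # backprop through the tanh nonlinearity
    dW1 = X.T @ d_h                     # chain rule into the input weights
    W2 -= 0.1 * dW2
    W1 -= 0.1 * dW1
final_loss = mse(forward(X)[1])
```

Running this drives the loss down; the challenge such papers address is realizing the same error-driven update with spikes instead of dense matrix arithmetic.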


Zeroth-Order Backpropagation for Deep Neural Networks

ZORB: A Derivative-Free Backpropagation Algorithm for Neural Networks


Categories: Machine Learning · Computer Vision · Neural and Evolutionary Computing · Neurons and Cognition · Stats Machine Learning

Unsupervised Learning by Competing Hidden Units

It is widely believed that the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task …
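
The kind of backpropagation-free alternative this line of work studies can be illustrated with a winner-take-all Hebbian rule, where each input strengthens only its best-matching hidden unit. The clustered toy data and the exact update below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Competitive Hebbian learning: no labels, no error signal, no backprop.
# Each sample moves only the weight vector that already matches it best.
# Data, unit count, and learning rate are illustrative.
rng = np.random.default_rng(1)
centers = np.array([[3.0, 0.0], [-3.0, 0.0]])         # two toy clusters
X = np.concatenate([c + rng.normal(scale=0.3, size=(100, 2)) for c in centers])
rng.shuffle(X)

W = rng.normal(size=(2, 2))                           # two hidden units

for x in X:
    winner = np.argmin(np.sum((W - x) ** 2, axis=1))  # best-matching unit
    W[winner] += 0.1 * (x - W[winner])                # Hebbian pull toward input
```

After one pass the winning units' weight vectors end up near the data clusters, i.e. they have become feature detectors without any error being propagated.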


A Gentle Tutorial of Recurrent Neural Network with Error Backpropagation

We describe recurrent neural networks (RNNs), which have attracted great attention for sequential tasks such as handwriting recognition, speech recognition, and image-to-text. However, compared to …
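
The training procedure such a tutorial builds up to, backpropagation through time (BPTT), can be sketched for a vanilla RNN as follows; the sizes, toy data, and learning rate are illustrative:

```python
import numpy as np

# BPTT for a tiny vanilla RNN: unroll the forward pass over T steps,
# then push the error backwards through every step. Illustrative setup.
rng = np.random.default_rng(2)
T, n_in, n_hid = 5, 3, 4
xs = rng.normal(size=(T, n_in))
target = rng.normal(size=(n_hid,))       # toy target for the final hidden state

Wx = rng.normal(scale=0.5, size=(n_in, n_hid))
Wh = rng.normal(scale=0.5, size=(n_hid, n_hid))

def run():
    hs = [np.zeros(n_hid)]
    for t in range(T):
        hs.append(np.tanh(xs[t] @ Wx + hs[-1] @ Wh))
    return hs

def loss(hs):
    return float(np.sum((hs[-1] - target) ** 2))

initial_loss = loss(run())
for _ in range(500):
    hs = run()
    dWx, dWh = np.zeros_like(Wx), np.zeros_like(Wh)
    d_next = 2 * (hs[-1] - target)             # dL/dh_T at the last step
    for t in reversed(range(T)):
        d_pre = d_next * (1 - hs[t + 1] ** 2)  # back through the tanh
        dWx += np.outer(xs[t], d_pre)
        dWh += np.outer(hs[t], d_pre)
        d_next = Wh @ d_pre                    # carry the error one step back
    Wx -= 0.02 * dWx
    Wh -= 0.02 * dWh
final_loss = loss(run())
```

The backward loop is ordinary backpropagation applied to the unrolled network, which is why long sequences make the gradient shrink or explode through repeated multiplication by `Wh`.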


A Weight-Sharing Backpropagation Algorithm for Training Neural Networks

A biologically plausible neural network for local supervision in cortical microcircuits


Deep quantum neural networks equipped with backpropagation on a superconducting processor

Deep learning and quantum computing have achieved dramatic progress in recent years. The interplay between these two fast-growing fields gives rise to a new research frontier of quantum machine learning …


Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training

Deep Neural Networks are successful but highly computationally expensive learning systems. One of the main sources of time and energy drains is the well-known backpropagation (backprop) algorithm, which …
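
A core ingredient in quantized-gradient schemes of this kind is unbiased stochastic rounding, which snaps gradient entries to a coarse grid without biasing the expected update. The grid size and array shape below are illustrative:

```python
import numpy as np

# Stochastic rounding: round each gradient entry to a multiple of q,
# up or down at random, so the quantized value is unbiased (its
# expectation equals the original entry). q is illustrative.
rng = np.random.default_rng(5)

def stochastic_round(g, q=0.25):
    low = np.floor(g / q) * q            # nearest grid point below
    p_up = (g - low) / q                 # probability of rounding up
    return low + q * (rng.random(g.shape) < p_up)

g = rng.normal(size=(100_000,))
gq = stochastic_round(g)
bias = abs(float(np.mean(gq - g)))       # should be close to zero
```

Every entry of `gq` lies on the 0.25 grid, yet the mean rounding error is near zero, which is what lets training still follow the true gradient in expectation.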


Physical Neural Networks

Deep physical neural networks enabled by a backpropagation algorithm for arbitrary physical systems


Predictive coding and backpropagation for training feedforward artificial neural networks on supervised learning tasks

On the relationship between predictive coding and backpropagation


Proximal Backpropagation

We propose proximal backpropagation (ProxProp) as a novel algorithm that takes implicit instead of explicit gradient steps to update the network parameters during neural network training. Our algorithm …
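
The difference between explicit and implicit gradient steps is easiest to see on a plain quadratic, where an implicit step reduces to solving a linear system. The test problem and step size below are illustrative, not the paper's formulation:

```python
import numpy as np

# f(w) = 0.5 * w^T A w with a badly conditioned A. An explicit step uses
# the gradient at the current point; an implicit (proximal) step uses the
# gradient at the *next* point: w_next = w - eta * A @ w_next, i.e.
# (I + eta * A) w_next = w. Step size chosen to break the explicit method.
A = np.diag([1.0, 100.0])
eta = 0.1
w_explicit = np.array([1.0, 1.0])
w_implicit = np.array([1.0, 1.0])

for _ in range(50):
    w_explicit = w_explicit - eta * (A @ w_explicit)               # forward step
    w_implicit = np.linalg.solve(np.eye(2) + eta * A, w_implicit)  # implicit step
```

The explicit iterate diverges along the stiff direction (|1 - eta * 100| > 1), while the implicit iterate contracts toward the minimum for any positive eta, which is the stability implicit steps buy.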


Conservative set valued fields, automatic differentiation, stochastic gradient method and deep learning

Modern problems in AI or in numerical analysis require nonsmooth approaches with a flexible calculus. We introduce generalized derivatives called conservative fields, for which we develop a calculus and …


Learning in Memristive Neural Network Architectures using Analog Backpropagation Circuits

The on-chip implementation of learning algorithms would speed up the training of neural networks in crossbar arrays. The circuit-level design and implementation of the backpropagation algorithm using …


MuProp: Unbiased Backpropagation for Stochastic Neural Networks

Deep neural networks are powerful parametric models that can be trained efficiently using the backpropagation algorithm. Stochastic neural networks combine the power of large parametric functions with …


Convergence of a New Learning Algorithm for Neural Network

Convergence of a New Learning Algorithm


Implicit recurrent networks: A novel approach to stationary input processing with recurrent neural networks in deep learning

The brain's cortex, which processes visual, auditory, and sensory data, is known to have many recurrent connections within its layers and from higher to lower layers. But in the case of machine …


Constrained Parameter Inference as a Principle for Learning

Learning in biological and artificial neural networks is often framed as a problem in which targeted error signals guide parameter updating for more optimal network behaviour. Backpropagation of error …


Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures

Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. …
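
The mechanism, projecting the output error straight to each layer through a fixed random matrix instead of the transposed forward weights, can be sketched as follows; all sizes, toy data, and the learning rate are illustrative:

```python
import numpy as np

# Direct feedback alignment (DFA): the hidden layer's error signal is
# e @ B with a fixed random B, not e @ W2.T as in backprop, so layer
# updates need no backward weight transport. Illustrative toy setup.
rng = np.random.default_rng(3)
X = rng.normal(size=(32, 6))
y = rng.normal(size=(32, 2))

W1 = rng.normal(scale=0.3, size=(6, 16))
W2 = rng.normal(scale=0.3, size=(16, 2))
B = rng.normal(scale=0.3, size=(2, 16))  # fixed random feedback, never trained

def loss():
    h = np.tanh(X @ W1)
    return float(np.mean((h @ W2 - y) ** 2))

initial_loss = loss()
for _ in range(500):
    h = np.tanh(X @ W1)
    e = 2 * (h @ W2 - y) / y.size        # output error
    dW2 = h.T @ e                        # exact gradient for the top layer
    d_h = (e @ B) * (1 - h ** 2)         # random feedback replaces e @ W2.T
    dW1 = X.T @ d_h
    W2 -= 0.1 * dW2
    W1 -= 0.1 * dW1
final_loss = loss()
```

Training still reduces the loss because the forward weights gradually align with the fixed feedback, which is the "alignment" the method is named for.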


Gradient-free training of autoencoders for non-differentiable communication channels

Training of autoencoders using the backpropagation algorithm is challenging for non-differentiable channel models or in an experimental environment where gradients cannot be computed. In this paper, we …
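
A standard derivative-free workaround in this setting is simultaneous-perturbation (SPSA-style) gradient estimation, which needs only two forward evaluations per step regardless of dimension. The black-box objective below stands in for a non-differentiable channel and is purely illustrative:

```python
import numpy as np

# SPSA: perturb all parameters at once with a random +/-1 vector and
# estimate the gradient from two loss evaluations. Works even when the
# loss (here a stand-in for channel + measurement) is non-differentiable.
rng = np.random.default_rng(4)

def black_box(w):
    # Illustrative non-differentiable objective, minimum at w = 1.
    return float(np.sum(np.abs(w - 1.0)))

w = np.zeros(5)
c, lr = 0.1, 0.02                        # probe size and step size
initial = black_box(w)
for _ in range(500):
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    # Central difference along delta; dividing by delta_i equals
    # multiplying, since each entry is +/-1.
    g_hat = (black_box(w + c * delta) - black_box(w - c * delta)) / (2 * c) * delta
    w -= lr * g_hat
final = black_box(w)
```

The estimate is noisy but unbiased in direction on average, so the loss falls without a single gradient being computed through the channel.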


Learning Enhancement of CNNs via Separation Index Maximizing at the First Convolutional Layer

In this paper, a straightforward enhancement learning algorithm based on the Separation Index (SI) concept is proposed for Convolutional Neural Networks (CNNs). At first, the SI as a supervised complexity …


Activation Learning by Local Competitions

The backpropagation that drives the success of deep learning is most likely different from the learning mechanism of the brain. In this paper, we develop a biology-inspired learning rule that discovers …

