Graph Neural Networks (GNNs) have shown success in learning from graph-structured data containing node/edge feature information, with applications to social networks, recommendation, fraud detection, and other domains.
Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights.
Neural Networks (NNs) are the method of choice for building learning algorithms. Their popularity stems from their empirical success on several challenging learning problems. However, most scholars agree that a rigorous theoretical understanding of why they work so well is still lacking.
In this work, we present a morphological neural network that handles binary inputs and outputs. We propose a construction inspired by convolutional neural networks (CNNs), formulating layers adapted to such images by replacing convolutions with erosions and dilations.
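To make the replacement concrete, below is a minimal NumPy sketch of binary erosion and dilation used as layer-like operations in place of convolution. The function names, padding choices, and 3x3 structuring element are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def dilate(img, se):
        """Binary dilation of a 2D {0,1} image by a {0,1} structuring element."""
        kh, kw = se.shape
        pad = np.pad(img, ((kh // 2,), (kw // 2,)), constant_values=0)
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                patch = pad[i:i + kh, j:j + kw]
                out[i, j] = np.max(patch[se == 1])  # max replaces the dot product
        return out

    def erode(img, se):
        """Binary erosion: min of the image under the structuring element."""
        kh, kw = se.shape
        # pad with foreground so the border is not artificially eroded
        pad = np.pad(img, ((kh // 2,), (kw // 2,)), constant_values=1)
        out = np.zeros_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                patch = pad[i:i + kh, j:j + kw]
                out[i, j] = np.min(patch[se == 1])
        return out

    img = (np.random.rand(8, 8) > 0.5).astype(int)
    se = np.ones((3, 3), dtype=int)          # 3x3 structuring element, analogous to a 3x3 kernel
    opened = dilate(erode(img, se), se)      # composing the two ops, as layers compose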
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a dataset of English sentences drawn from the linguistics literature and labeled for acceptability.
The non-local operation is widely used to model long-range dependencies. However, the redundant computation in this operation leads to prohibitive complexity. In this paper, we present a Representative Graph (RepGraph) layer that dynamically samples a few representative features, greatly reducing this redundancy.
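For reference, here is a minimal sketch of the vanilla non-local operation whose dense pairwise affinity matrix is the source of the quadratic redundancy. The theta/phi/g embeddings follow the standard formulation and are assumptions here, not this paper's reduced variant.

    import numpy as np

    def nonlocal_op(x, w_theta, w_phi, w_g):
        """x: (N, C) flattened feature map; returns (N, d) aggregated features."""
        theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g   # (N, d) embeddings
        logits = theta @ phi.T                            # (N, N): every pair of positions
        attn = np.exp(logits - logits.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)           # row-wise softmax
        return attn @ g                                   # weighted aggregation

    N, C, d = 32 * 32, 32, 16                             # already ~1M pairwise affinities
    rng = np.random.default_rng(0)
    x = rng.standard_normal((N, C))
    y = nonlocal_op(x, *(rng.standard_normal((C, d)) for _ in range(3)))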
We introduce twin neural network (TNN) regression. This method predicts differences between the target values of two different data points rather than the targets themselves. The solution of a traditional regression problem is then obtained by combining the predicted differences with the known target values of the training data.
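A minimal sketch of this inference rule, under stated assumptions: diff_model is any trained regressor with diff_model(x_i, x_j) ~ y_i - y_j, a hypothetical stand-in for the paper's twin network. A new target is reconstructed by anchoring the predicted difference on each training point and averaging.

    import numpy as np

    def tnn_predict(diff_model, x_new, x_train, y_train):
        # one predicted difference per anchor point, then ensemble-average
        diffs = np.array([diff_model(x_new, xj) for xj in x_train])
        return np.mean(y_train + diffs)

    # toy check with the exact difference function of y = 2x
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train
    exact_diff = lambda xi, xj: 2 * xi - 2 * xj
    print(tnn_predict(exact_diff, 0.5, x_train, y_train))  # -> 1.0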
The cost to train machine learning models has been increasing exponentially, making exploration and research into the right features and architectures a costly or intractable endeavor at scale.
Large neural networks are heavily over-parameterized. This is done because it improves training to optimality. However, once the network is trained, many parameters can be zeroed out, or pruned, leaving a much smaller network with little loss in accuracy.
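As an illustration of the pruning step, here is generic one-shot magnitude pruning (a standard technique, not necessarily this paper's criterion): the fraction `sparsity` of weights with the smallest absolute value is set to zero.

    import numpy as np

    def magnitude_prune(w, sparsity=0.9):
        """Return w with the smallest-|w| entries zeroed, plus the binary mask."""
        k = int(sparsity * w.size)
        threshold = np.partition(np.abs(w).ravel(), k)[k]  # k-th smallest magnitude
        mask = (np.abs(w) >= threshold).astype(w.dtype)
        return w * mask, mask

    w = np.random.randn(256, 256)
    w_pruned, mask = magnitude_prune(w, sparsity=0.9)
    print(f"nonzero fraction: {mask.mean():.3f}")          # ~0.1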
In this paper, we develop a new neural network family based on power series expansion, which is proven to achieve better approximation accuracy than existing neural networks. This new family improves expressive power while preserving comparable computational cost, by increasing the degree of the network instead of its depth or width.
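One plausible reading of such a layer, purely illustrative (the paper's exact parameterization may differ): the output is a learned combination of elementwise powers of a shared linear feature, so expressive power grows with the degree K rather than with depth or width.

    import numpy as np

    def power_series_layer(x, W, coeffs):
        """x: (n, d_in); W: (d_in, d_out); coeffs: (K,) series coefficients."""
        z = x @ W                                         # shared linear feature
        return sum(c * z**k for k, c in enumerate(coeffs, start=1))

    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    W = rng.standard_normal((8, 3))
    y = power_series_layer(x, W, coeffs=np.array([1.0, 0.5, 0.1]))  # degree K = 3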
Deep learning has transformed the way we think of software and what it can do. But deep neural networks are fragile and their behaviors are often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks.
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. However, such simplistic priors are unlikely to either accurately reflect our true beliefs about the weight distributions or to give optimal performance.
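Concretely, the isotropic Gaussian prior treats every weight as independent N(0, sigma^2), with a single shared sigma regardless of layer or role; that shared sigma is exactly the simplification at issue. A minimal sketch of its log-density:

    import numpy as np

    def log_isotropic_gaussian_prior(weights, sigma=1.0):
        """log p(w) for w ~ N(0, sigma^2 I), summed over all parameters."""
        w = np.concatenate([p.ravel() for p in weights])
        return (-0.5 * np.sum(w**2) / sigma**2
                - 0.5 * w.size * np.log(2 * np.pi * sigma**2))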
Statistical models are inherently uncertain. Quantifying or at least upper-bounding their uncertainties is vital for safety-critical systems such as autonomous vehicles. While standard neural networks do not report this information, several approaches exist to integrate it into them.
The Common Vulnerabilities and Exposures (CVE) list represents a standard means for sharing publicly known information-security vulnerabilities. One or more CVEs are grouped into Common Weakness Enumeration (CWE) classes.
In this letter, motivated by the question of whether the empirical fitting of data by a neural network can yield the same structure as physical laws, we apply a neural network to a simple quantum-mechanical system.
As deep neural networks (DNNs) grow to solve increasingly complex problems, they are becoming limited by the latency and power consumption of existing digital processors. 'Weight-stationary' analog optical and electronic hardware has been proposed as a faster, more energy-efficient alternative.
Quantization is one of the key techniques used to make Neural Networks (NNs) faster and more energy efficient. However, current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.
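A minimal sketch of uniform affine quantization, showing where the float-to-int and int-to-float conversions enter. The scale/zero-point scheme here is the generic one, not necessarily this paper's.

    import numpy as np

    def quantize(x, n_bits=8):
        qmin, qmax = 0, 2**n_bits - 1
        scale = (x.max() - x.min()) / (qmax - qmin)
        zero_point = int(round(qmin - x.min() / scale))
        q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
        return q, scale, zero_point

    def dequantize(q, scale, zero_point):
        # back to floating point: the conversion whose cost is often hidden
        return scale * (q.astype(np.float32) - zero_point)

    x = np.random.randn(1000).astype(np.float32)
    q, s, z = quantize(x)
    err = np.abs(dequantize(q, s, z) - x).max()   # worst-case quantization error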
Recently there has been much interest in quantifying the robustness of neural network classifiers through adversarial risk metrics. However, for problems where test-time corruptions occur in a probabilistic rather than adversarial manner, such worst-case metrics can be overly conservative.
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data. The layer-wise graph convolution in GNNs is shown to be powerful at capturing graph topology.
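As a concrete instance of layer-wise graph convolution, here is a sketch of the widely used Kipf & Welling GCN form, H' = sigma(D^{-1/2}(A + I)D^{-1/2} H W), used as a generic stand-in rather than this paper's specific layer.

    import numpy as np

    def gcn_layer(A, H, W):
        """A: (n, n) adjacency; H: (n, d_in) node features; W: (d_in, d_out)."""
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(d**-0.5)
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
        return np.maximum(A_norm @ H @ W, 0)      # ReLU

    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    H = np.random.randn(3, 4)
    W = np.random.randn(4, 2)
    H_next = gcn_layer(A, H, W)                   # one layer mixes 1-hop neighbors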