A few-shot generative model should be able to generate data from a
distribution by only observing a limited set of examples. In few-shot learning
the model is trained on data from many sets from different distributions…

Counterfactuals can explain classification decisions of neural networks in a
human-interpretable way. We propose a simple but effective method to generate
such counterfactuals. More specifically, we…

We explore methods of producing adversarial examples on deep generative
models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning
architectures are known to be vulnerable to adversarial perturbations…
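
As a rough illustration of what such an attack can look like, here is a
minimal PyTorch sketch of a latent-space attack on a trained VAE: a small
perturbation is optimized so that the encoder maps the input near a chosen
target's latent code. The encoder interface (returning a mean and
log-variance) and all hyperparameters are assumptions, not the paper's setup.

    import torch

    def latent_attack(encoder, x, x_target, eps=0.1, steps=40, lr=0.01):
        # Assumed interface: encoder(x) -> (mu, logvar); inputs in [0, 1].
        with torch.no_grad():
            z_target, _ = encoder(x_target)          # latent code to imitate
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            z, _ = encoder((x + delta).clamp(0, 1))
            loss = ((z - z_target) ** 2).mean()      # pull latent toward target
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)              # keep perturbation small
        return (x + delta).detach().clamp(0, 1)
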
Counterfactual instances offer human-interpretable insight into the local
behaviour of machine learning models. We propose a general framework to
generate sparse, in-distribution counterfactual model explanations…
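
A minimal sketch of one instantiation of this idea, assuming a differentiable
classifier `model`: gradient descent on the input trades off flipping the
prediction against an L1 closeness term that encourages sparse changes. The
in-distribution term (e.g. supplied by a generative model) is omitted here,
and the names and weights are assumptions.

    import torch
    import torch.nn.functional as F

    def counterfactual(model, x, target_class, lam=0.1, steps=200, lr=0.05):
        # x has a leading batch dimension of 1; model(x) returns class logits.
        x_cf = x.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([x_cf], lr=lr)
        target = torch.tensor([target_class])
        for _ in range(steps):
            loss = (F.cross_entropy(model(x_cf), target)
                    + lam * (x_cf - x).abs().mean())  # L1 term -> sparse edits
            opt.zero_grad(); loss.backward(); opt.step()
        return x_cf.detach()
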
Score-based models generate samples by mapping noise to data (and vice versa)
via a high-dimensional diffusion process. We question whether it is necessary
to run this entire process at high dimensionality…
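
For intuition, here is a toy sketch of the sampling direction of that
process: Euler-Maruyama integration of a reverse-time, variance-exploding-style
SDE, where `score` stands in for a trained score network and the noise
schedule sigma(t) = t is an arbitrary assumption.

    import torch

    def sample(score, shape, sigma=lambda t: t, T=1.0, n_steps=500):
        # Integrate dx = sigma(t)^2 * score(x, t) dt + sigma(t) dw backward
        # in time, starting from pure noise at t = T.
        dt = T / n_steps
        x = sigma(T) * torch.randn(shape)
        for i in range(n_steps, 0, -1):
            t = i * dt
            g = sigma(t)
            x = x + (g ** 2) * score(x, t) * dt              # drift toward data
            x = x + g * (dt ** 0.5) * torch.randn_like(x)    # diffusion term
        return x
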
In recent years fully-parametric fast simulation methods based on generative
models have been proposed for a variety of high-energy physics detectors. By
their nature, the quality of data-driven models…

Users have the right to have their data deleted by third-party learned
systems, as codified by recent legislation such as the General Data Protection
Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

Generative networks are fundamentally different in their aim and methods
compared to CNNs for classification, segmentation, or object detection. They
were not initially meant to be an image analysis…

We propose a new "Poisson flow" generative model (PFGM) that maps a uniform
distribution on a high-dimensional hemisphere into any data distribution. We
interpret the data points as electrical charges…
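
Concretely, for $N$ data points $\mathbf{x}_i$ embedded in $D$ dimensions, the
empirical Poisson field that drives the model takes the Coulomb-like form
below; this is a sketch of the construction, with normalization constants
omitted.

$$
E(\mathbf{x}) \;\propto\; \frac{1}{N}\sum_{i=1}^{N}
\frac{\mathbf{x}-\mathbf{x}_i}{\lVert \mathbf{x}-\mathbf{x}_i \rVert^{D}}
$$

Roughly, sampling then amounts to following these field lines backward from
the far-away hemisphere down to the data support.
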
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only…

Machine learning approaches now achieve impressive generation capabilities in
numerous domains such as image, audio or video. However, most training and
evaluation frameworks revolve around the idea of…

Existing Score-based Generative Models (SGMs) can be categorized into
constrained SGMs (CSGMs) or unconstrained SGMs (USGMs) according to their
parameterization approaches. CSGMs model the probability density functions…
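
One common way to phrase the distinction (a sketch, not necessarily this
paper's exact notation): a CSGM derives its score from a scalar energy, so the
learned vector field is conservative by construction, while a USGM
parameterizes the score directly, with no guarantee it is the gradient of
anything.

$$
s_\theta(\mathbf{x}) = -\nabla_{\mathbf{x}} E_\theta(\mathbf{x})
\quad \text{(constrained: conservative field)},
\qquad
s_\theta(\mathbf{x}) = f_\theta(\mathbf{x})
\quad \text{(unconstrained vector field)}
$$
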
We deal with Bayesian generative and discriminative classifiers. Given a
model distribution $p(x, y)$, with the observation $y$ and the target $x$, one
computes generative classifiers by first considering…
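
With the abstract's convention (target $x$, observation $y$), that first step
is just Bayes' rule applied to the joint model:

$$
p(x \mid y) \;=\; \frac{p(x, y)}{\sum_{x'} p(x', y)}
\;\propto\; p(x)\,p(y \mid x),
\qquad
\hat{x}(y) \;=\; \arg\max_{x}\, p(x)\,p(y \mid x).
$$

A discriminative classifier instead models $p(x \mid y)$ directly, without
committing to a joint distribution over $(x, y)$.
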
Generative models estimate the underlying distribution of a dataset to
generate realistic samples according to that distribution. In this paper, we
present the first membership inference attacks against generative models.
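
A deliberately simplified sketch of one such attack in a black-box GAN
setting (the discriminator interface and the fixed threshold are
assumptions): an overfit discriminator tends to assign higher "realness"
scores to training members, so thresholding its scores yields a membership
guess.

    import torch

    def infer_membership(discriminator, candidates, threshold=0.9):
        # Score each candidate; suspiciously high scores suggest the sample
        # was seen during training (a heuristic, not a guarantee).
        with torch.no_grad():
            scores = discriminator(candidates).squeeze(-1)
        return scores > threshold
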
Diffusion generative models have recently been applied to domains where the
available data can be seen as a discretization of an underlying function, such
as audio signals or time series. However, the…

We present a generative model to synthesize 3D shapes as sets of handles --
lightweight proxies that approximate the original 3D shape -- for applications
in interactive editing, shape parsing, and building compact 3D
representations.

Capsule networks are a neural network architecture specialized for visual
scene recognition. Features and pose information are extracted from a scene and
then dynamically routed through a hierarchy of capsules…
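
The dynamic routing step is concrete enough to sketch. Below is a NumPy
version of routing-by-agreement in the spirit of Sabour et al. (2017), where
u_hat[i, j] is lower capsule i's prediction vector for higher capsule j; the
shapes and iteration count are assumptions.

    import numpy as np

    def squash(v, axis=-1, eps=1e-9):
        # Shrink short vectors toward zero, keep long vectors near unit norm.
        n2 = (v ** 2).sum(axis=axis, keepdims=True)
        return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

    def route(u_hat, n_iters=3):
        # u_hat: (n_in, n_out, d) prediction vectors between capsule layers.
        n_in, n_out, _ = u_hat.shape
        b = np.zeros((n_in, n_out))                   # routing logits
        for _ in range(n_iters):
            c = np.exp(b - b.max(axis=1, keepdims=True))
            c /= c.sum(axis=1, keepdims=True)         # coupling coefficients
            s = (c[..., None] * u_hat).sum(axis=0)    # weighted sum of inputs
            v = squash(s)                             # higher-capsule outputs
            b += (u_hat * v[None]).sum(axis=-1)       # agreement update
        return v
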
The goal of compressed sensing is to estimate a vector from an
underdetermined system of noisy linear measurements, by making use of prior
knowledge on the structure of vectors in the relevant domain.
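
With a sparsity prior, the classic instantiation of this recovery problem,
one standard solver is ISTA (iterative soft-thresholding). The sketch below is
a generic version, with the step size set from the spectral norm of A; the
generative-model line of work this list revolves around instead replaces the
sparsity prior with the range of a trained generator.

    import numpy as np

    def ista(A, y, lam=0.1, n_iters=500):
        # Recover sparse x from y = A @ x + noise, A of shape (m, n), m < n.
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            g = x - step * A.T @ (A @ x - y)          # gradient step on the fit
            x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)
        return x
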
The ability to compare two degenerate probability distributions (i.e. two
probability distributions supported on two distinct low-dimensional manifolds
living in a much higher-dimensional space) is a…