Self-supervised learning has attracted many researchers in recent years owing to its strong performance on representation learning.
In this survey, we review recent self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning.
Machine learning has become successful in solving wireless interference management problems. Different kinds of deep neural networks (DNNs) have been trained to accomplish key tasks such as power control.
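As an illustration of this kind of learned power control (a sketch of ours, not a method from the abstract), the following assumes a toy setting with K interfering links and a small MLP mapping channel gains to per-link powers:

```python
# Minimal sketch (illustrative assumptions): a small MLP that maps the
# K x K channel-gain matrix of K interfering links to per-link transmit
# powers in [0, 1], in the spirit of learning-based power control.
import torch
import torch.nn as nn

K = 5  # number of interfering links (assumed for illustration)

power_net = nn.Sequential(
    nn.Linear(K * K, 64),  # flattened channel-gain matrix as input
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, K),
    nn.Sigmoid(),          # per-link power constrained to [0, 1]
)

channel_gains = torch.rand(1, K * K)  # one random channel realization
powers = power_net(channel_gains)     # predicted power allocation
print(powers)
```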
Self-supervised contrastive learning is a powerful tool for learning visual representations without labels. Prior work has primarily focused on evaluating the recognition accuracy of various pre-training approaches.
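For concreteness, a minimal sketch of the InfoNCE loss that underlies mainstream contrastive pre-training; the one-directional formulation, batch size, embedding dimension, and temperature are illustrative assumptions:

```python
# Minimal InfoNCE sketch: two augmented views of the same image form a
# positive pair, and all other images in the batch act as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```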
Deep learning on graphs has recently achieved remarkable success on a variety of tasks, while such success relies heavily on massive, carefully labeled data. However, precise annotations are generally expensive and time-consuming to obtain.
Noisy labels, resulting from mistakes in manual labeling or from web-based data collection for supervised learning, can cause neural networks to overfit the misleading information and degrade their generalization performance.
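One widely used mitigation, shown here as a hedged sketch rather than the method of any cited work, is the small-loss heuristic: treat the lowest-loss samples in each batch as likely clean and update only on those. The keep ratio below is an assumption:

```python
# Small-loss selection for noisy labels: samples with the smallest
# per-example loss are treated as likely clean and kept for the update.
import torch
import torch.nn.functional as F

def small_loss_selection(logits, labels, keep_ratio=0.7):
    losses = F.cross_entropy(logits, labels, reduction="none")
    num_keep = max(1, int(keep_ratio * labels.size(0)))
    keep_idx = torch.argsort(losses)[:num_keep]  # smallest-loss samples
    return losses[keep_idx].mean()

loss = small_loss_selection(torch.randn(16, 10), torch.randint(0, 10, (16,)))
```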
This paper tackles the problem of semi-supervised learning when the set of labeled samples is limited to a small number of images per class, typically less than 10, a problem that we refer to as barely-supervised learning.
Self-supervised learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods mainly focus on ...
Scene flow is the three-dimensional (3D) motion field of a scene. It provides information about the spatial arrangement and rate of change of objects in dynamic environments. Current learning-based approaches ...
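To make the definition concrete (an example of ours, not from the abstract): given corresponding 3D points at two time steps, per-point scene flow is simply the displacement vector:

```python
# Scene flow as a 3D motion field: given corresponding 3D points at two
# time steps, the flow of each point is its displacement vector.
# The point values are made up for illustration.
import numpy as np

points_t0 = np.array([[0.0, 0.0, 5.0],
                      [1.0, 0.0, 5.0]])  # scene points at time t
points_t1 = np.array([[0.1, 0.0, 4.8],
                      [1.1, 0.0, 4.8]])  # the same points at time t+1

scene_flow = points_t1 - points_t0       # per-point 3D motion vectors
print(scene_flow)  # both points moved by roughly (0.1, 0.0, -0.2)
```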
Supervised and semi-supervised learning methods have traditionally been designed for the closed-world setting, based on the assumption that unlabeled test data contains only classes previously encountered in the labeled training data.
An effective proxy task for kidney segmentation is defined via self-supervised learning.
Evaluation on a publicly available dataset of computed tomography (CT) scans of the abdominal region shows a boost in performance and faster convergence relative to a network trained conventionally from scratch.
The renaissance of artificial neural networks was catalysed by the success of classification models, tagged by the community with the broader term supervised learning. The extraordinary results gave rise to ...
We discuss semi-supervised learning in combination with pretrained language models for data-to-text generation.
Results show that extending the training set of a data-to-text system with pseudo-labels produced by a language model increased text quality scores, whereas the data augmentation approach yielded scores similar to those of the system without training set extension.
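A minimal sketch of the generic pseudo-labeling mechanism referenced above, using a classifier as a stand-in for the data-to-text model; the confidence threshold is an illustrative assumption:

```python
# Pseudo-labeling: the current model labels unlabeled inputs, and only
# confident predictions are added back to the training set.
import torch
import torch.nn.functional as F

def pseudo_label(model, unlabeled_x, threshold=0.9):
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold                # keep only confident predictions
    return unlabeled_x[keep], labels[keep]  # pairs to add to the training set
```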
In this book chapter, we discuss ten basic security and privacy problems for pre-trained encoders in self-supervised learning, including six confidentiality problems, three integrity problems, and one availability problem.
We hope our book chapter will inspire future research on the security and privacy of self-supervised learning.
We study unsupervised data selection for semi-supervised learning (SSL), where large-scale unlabeled data is available and a small subset of the data is budgeted for label acquisition. Existing SSL methods ...
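One common unsupervised selection heuristic, sketched here under the assumption of precomputed embeddings (not necessarily the method of the cited work), is k-center greedy: spend the labeling budget on points that cover diverse regions of the data:

```python
# k-center greedy selection: iteratively pick the point farthest from
# the already-selected set, so the labeling budget covers diverse
# regions of the unlabeled data.
import numpy as np

def k_center_greedy(embeddings, budget):
    selected = [0]  # arbitrary starting point
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(selected) < budget:
        idx = int(np.argmax(dists))  # farthest point from current centers
        selected.append(idx)
        new_d = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)  # distance to nearest center
    return selected

picked = k_center_greedy(np.random.rand(100, 16), budget=10)
```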
Recently, unsupervised adversarial training (AT) has been extensively studied as a way to attain robustness with models trained on unlabeled data. To this end, previous studies have applied existing supervised adversarial training techniques ...
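A minimal FGSM-style sketch of the adversarial-training step: craft a perturbation that increases the training loss, then train on the perturbed input. In the unsupervised setting the loss would be self-supervised; here a generic loss_fn stands in, and epsilon is an assumed budget:

```python
# One fast-gradient-sign (FGSM) ascent step on the input under loss_fn.
import torch

def fgsm_perturb(x, loss_fn, epsilon=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss_fn(x).backward()                          # gradient w.r.t. the input
    return (x + epsilon * x.grad.sign()).detach()  # perturbed input

# Toy usage with a stand-in (label-free) loss.
x_adv = fgsm_perturb(torch.randn(4, 8), lambda x: (x ** 2).sum())
```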
This chapter is an attempt to provide a detailed summary of the different facets of lifelong learning.
It offers a high-level overview of the different approaches and paradigms in lifelong learning, providing an 8-chapter introduction to the field without focusing on specific approaches or benchmarks.
Ensemble methods have proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.
Recent advances in self-supervised learning (SSL) enable leveraging large unlabeled corpora for state-of-the-art few-shot and supervised learning performance.
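A minimal deep-ensemble sketch (ours, with assumed shapes): average the softmax outputs of independently trained members, and use their disagreement as a simple uncertainty signal:

```python
# Deep ensemble: average the softmax predictions of M independently
# trained models; spread across members serves as a rough uncertainty.
import torch
import torch.nn.functional as F

def ensemble_predict(models, x):
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models])  # (M, N, C)
    mean = probs.mean(dim=0)                # ensemble prediction, (N, C)
    uncertainty = probs.var(dim=0).sum(1)   # disagreement per sample, (N,)
    return mean, uncertainty
```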
Recent advances in contrastive learning have inspired diverse applications across various semi-supervised fields. Jointly training supervised learning and unsupervised learning with a shared feature extractor ...
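A hedged sketch of what such joint training can look like with a shared encoder: labeled data contributes a cross-entropy term and two views of unlabeled data a contrastive term. The loss weight, temperature, and architecture are assumptions, not the cited method:

```python
# Joint supervised + contrastive training through a shared encoder.
import torch
import torch.nn.functional as F

def joint_loss(encoder, classifier, x_lab, y_lab, x_u1, x_u2, lam=1.0):
    # Supervised branch: standard cross-entropy on labeled data.
    ce = F.cross_entropy(classifier(encoder(x_lab)), y_lab)
    # Unsupervised branch: InfoNCE between two views of unlabeled data.
    z1 = F.normalize(encoder(x_u1), dim=1)
    z2 = F.normalize(encoder(x_u2), dim=1)
    logits = z1 @ z2.t() / 0.1              # assumed temperature of 0.1
    targets = torch.arange(z1.size(0))      # positives on the diagonal
    contrastive = F.cross_entropy(logits, targets)
    return ce + lam * contrastive
```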
While deep-learning-based methods for visual tracking have achieved substantial progress, these schemes entail large-scale, high-quality annotated data for sufficient training. To eliminate expensive and exhaustive annotation, ...