Rotation is frequently listed as a candidate for data augmentation in
contrastive learning but seldom provides satisfactory improvements. We argue
that this is because the rotated image is always trea
Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables.
Although conditional contrastive learning enables many applications, the conditional sampling procedure can be challenging if we cannot obtain sufficient data pairs for some values of the conditioning variable.
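One plausible way to sketch such a conditional sampling objective is to restrict positives to pairs that share the value of the conditioning variable. The following is an illustration under our own assumptions, not any specific paper's formulation; the function name and the handling of scarce values are hypothetical:

```python
import numpy as np

def conditional_info_nce(embeddings, cond, temperature=0.1):
    """InfoNCE-style loss where positives are pairs sharing the value of a
    discrete conditioning variable `cond`, and all other pairs are negatives.

    embeddings: (N, D) array of representations
    cond:       (N,) array of discrete conditioning values
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                     # (N, N) scaled similarities
    n = len(z)
    eye = np.eye(n, dtype=bool)
    same = cond[:, None] == cond[None, :]           # pairs sharing the variable
    pos_mask = same & ~eye                          # positives: same value, not self
    logits = np.where(eye, -np.inf, sim)            # drop self-similarity
    # row-wise log-softmax (numerically stable)
    m = logits.max(axis=1, keepdims=True)
    log_prob = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    # average log-probability of each anchor's positives
    per_anchor = np.where(pos_mask, log_prob, 0.0).sum(axis=1)
    counts = pos_mask.sum(axis=1)
    has_pos = counts > 0                            # anchors with at least one positive
    return -(per_anchor[has_pos] / counts[has_pos]).mean()
```

Note that anchors whose conditioning value appears only once in the batch contribute no positive pair and are simply skipped here, which mirrors the scarcity problem the abstract describes: for rare values of the conditioning variable, there may be too few pairs to learn from.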
Unsupervised graph representation learning has emerged as a powerful tool for
addressing real-world problems and has achieved great success in the graph
learning domain. Graph contrastive learning is one of th
In this paper, we study contrastive learning from an optimization
perspective, aiming to analyze and address a fundamental issue of existing
contrastive learning methods that either rely on a large ba
Contrastive learning is an effective unsupervised method in graph
representation learning. Recently, the data augmentation based contrastive
learning method has been extended from images to graphs. Ho
Contrastive learning has been widely applied to graph representation
learning, where the view generators play a vital role in generating effective
contrastive samples. Most of the existing contrastive
In this paper, we provide a comprehensive literature review and propose a general contrastive representation learning framework that simplifies and unifies many different contrastive learning methods.
We also provide a taxonomy for each component of contrastive learning in order to summarise it and distinguish it from other forms of machine learning.
We propose explanation-guided augmentations (EGA) and an explanation-guided contrastive learning for sequential recommendation (EC4SRec) model framework to address data sparsity caused by users with few item interactions and items with few user adoptions.
The key idea behind EGA is to utilize explanation method(s) to determine items' importance in a user sequence and derive the positive and negative sequences accordingly.
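A minimal sketch of how importance scores might drive such augmentations follows. This is an assumption-laden illustration of the general idea, not the actual EGA operations; the function name, parameters, and sampling scheme are hypothetical:

```python
import numpy as np

def explanation_guided_views(sequence, importance, keep_ratio=0.5, rng=None):
    """Derive one positive and one negative view of a user sequence from
    per-item importance scores (e.g. produced by an attribution method).

    Positive view: preferentially keeps important items, so the augmented
    sequence should still reflect the user's intent.
    Negative view: preferentially drops important items, yielding a
    sequence that no longer reflects that intent.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(sequence)
    k = max(1, int(round(keep_ratio * n)))
    p = np.asarray(importance, dtype=float)
    p = p / p.sum()
    # keep items with probability proportional to importance ...
    pos_idx = np.sort(rng.choice(n, size=k, replace=False, p=p))
    # ... and inversely proportional for the negative view
    q = 1.0 - p
    q = q / q.sum()
    neg_idx = np.sort(rng.choice(n, size=k, replace=False, p=q))
    # np.sort preserves the original item order within each view
    positive = [sequence[i] for i in pos_idx]
    negative = [sequence[i] for i in neg_idx]
    return positive, negative
```

Both views keep the items in their original order, since order carries the sequential signal that a sequential recommender is trained on.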
Contrastive learning has achieved state-of-the-art performance in various
self-supervised learning tasks and even outperforms its supervised counterpart.
Despite its empirical success, theoretical und
Self-supervised learning has recently shown great potential in vision tasks
via contrastive learning, which aims to discriminate each image, or instance,
in the dataset. However, such instance-level l
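Instance-level discrimination is commonly implemented with an NT-Xent/InfoNCE-style loss in which the two augmented views of the same image are positives and every other image in the batch is a negative. A minimal NumPy sketch of that standard formulation (our illustration, not any specific paper's code):

```python
import numpy as np

def nt_xent(view1, view2, temperature=0.5):
    """NT-Xent loss over a batch of paired augmented views.

    view1, view2: (N, D) arrays; row i of each is an augmentation of image i.
    Sample i's positive is its other view; the remaining 2N - 2 samples
    in the combined batch serve as negatives.
    """
    z = np.concatenate([view1, view2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = len(view1)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    # positive of sample i is its other view: i + n for i < n, i - n otherwise
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # row-wise log-softmax (numerically stable)
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views of each image agree more closely than they do with other images, the loss is low; this per-instance pull/push is exactly the instance-level discrimination the abstract refers to.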
Fine-tuning a pre-trained language model via the contrastive learning
framework with a large amount of unlabeled sentences or labeled sentence pairs
is a common way to obtain high-quality sentence rep