In recent years, Multi-Task Learning (MTL) has attracted much attention due to its
good performance in many applications. However, many existing MTL models cannot
guarantee that their performance on each task is no worse than that of the
single-task counterpart. Though this phenomenon has been empirically observed in
some works, little work aims to handle the resulting problem, which is formally
defined as \textit{negative sharing} in this paper. To achieve safe multi-task
learning where no negative sharing occurs, we propose a Safe Multi-Task Learning (SMTL)
model, which consists of a public encoder shared by all the tasks, private
encoders, gates, and private decoders. Specifically, each task has a private
encoder, a gate, and a private decoder, where the gate learns how to combine the
outputs of the private encoder and the public encoder for the downstream private
decoder. To reduce the storage cost during the inference stage, a lite version
of SMTL is proposed that allows each gate to choose either the public encoder or
the corresponding private encoder. Moreover, we propose a variant of SMTL that
places the gates after the decoders of all tasks. Experiments on several
benchmark datasets demonstrate the effectiveness of the proposed methods.
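For concreteness, the gating mechanism can be sketched as follows (the notation here
is illustrative and not necessarily that of the main text):
\[
h_t = \alpha_t\, f_0(x) + (1 - \alpha_t)\, f_t(x), \qquad \hat{y}_t = g_t(h_t),
\]
where $f_0$ denotes the public encoder, $f_t$ and $g_t$ denote the private encoder
and decoder of task $t$, and $\alpha_t \in [0, 1]$ is produced by the $t$-th gate.
Under this view, the lite version can be seen as restricting $\alpha_t$ to
$\{0, 1\}$, so that only one of the two encoders needs to be stored for task $t$
at inference time.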