CLIPascene: Scene Sketching with Different Types and Levels of Abstraction
In this paper, we present a method for converting a given scene image into a
sketch using different types and multiple levels of abstraction. We distinguish
between two types of abstraction. The first considers the fidelity of the
sketch, varying its representation from a more precise portrayal of the input
to a looser depiction. The second is defined by the visual simplicity of the
sketch, moving from a detailed depiction to a sparse sketch. Using an explicit
disentanglement into two abstraction axes, with multiple levels for each one,
provides users with additional control over selecting the desired sketch based on
their personal goals and preferences. To form a sketch at a given level of
fidelity and simplification, we train two MLP networks. The first network
learns the desired placement of strokes, while the second network learns to
gradually remove strokes from the sketch without harming its recognizability
and semantics. Our approach generates sketches of complex scenes, including those
with complex backgrounds (e.g., natural and urban settings) and subjects (e.g.,
animals and people), while depicting gradual abstractions of the input scene in
terms of fidelity and simplicity.
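To make the two-network setup above concrete, the following PyTorch sketch shows one plausible arrangement: a placement MLP that displaces the control points of each stroke, and a removal MLP that outputs per-stroke keep probabilities. The class names, dimensions, and input/output conventions are illustrative assumptions rather than the paper's implementation; the full method additionally relies on a differentiable rasterizer and CLIP-based losses, which are omitted here.

```python
# Illustrative sketch (not the authors' code): two small MLPs, one predicting
# offsets for each stroke's control points (fidelity axis) and one predicting
# per-stroke keep probabilities (simplicity axis). All sizes are assumptions.
import torch
import torch.nn as nn

class StrokePlacementMLP(nn.Module):
    """Maps initial Bezier control points of each stroke to displaced points."""
    def __init__(self, points_per_stroke: int = 4, hidden: int = 128):
        super().__init__()
        in_dim = points_per_stroke * 2  # (x, y) per control point
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim),
        )

    def forward(self, ctrl_points: torch.Tensor) -> torch.Tensor:
        # ctrl_points: (num_strokes, points_per_stroke, 2)
        n, p, _ = ctrl_points.shape
        offsets = self.net(ctrl_points.reshape(n, p * 2)).reshape(n, p, 2)
        return ctrl_points + offsets

class StrokeRemovalMLP(nn.Module):
    """Predicts a keep-probability per stroke, used to fade strokes out."""
    def __init__(self, num_strokes: int = 64, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_strokes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_strokes), nn.Sigmoid(),
        )

    def forward(self, stroke_widths: torch.Tensor) -> torch.Tensor:
        # stroke_widths: (num_strokes,) current stroke opacities/widths
        return self.net(stroke_widths)  # values in (0, 1): 1 keep, 0 remove

# Usage: displace strokes, then modulate their visibility.
placement, removal = StrokePlacementMLP(), StrokeRemovalMLP(num_strokes=64)
points = torch.rand(64, 4, 2)   # 64 cubic Bezier strokes
widths = torch.ones(64)         # all strokes initially fully visible
new_points = placement(points)
keep_prob = removal(widths)     # multiplied into stroke opacity before rasterizing
```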
Authors
Yael Vinker, Yuval Alaluf, Daniel Cohen-Or, Ariel Shamir