Rolling shutter (RS) acquisition can introduce undesired motion artifacts, known as RS distortions, in the captured image.
Existing single-image RS rectification methods attempt to account for these distortions either with algorithms tailored to a specific class of scenes, which require knowledge of intrinsic camera parameters, or with learning-based frameworks that rely on known ground-truth motion parameters.
We develop new theoretical results on matrix perturbation to shed light on the impact of architecture on the performance of a deep network.
In particular, we explain analytically what deep learning practitioners have long observed empirically: the parameters of some deep architectures (e.g., residual networks, ResNets) are easier to optimize than those of others (e.g., convolutional networks, ConvNets).
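The perturbation viewpoint can be illustrated with a toy numerical sketch (our own illustration, not the paper's construction): a residual block computes a map close to the identity, so the end-to-end linear map of a deep residual stack is a product of small perturbations of the identity and stays well conditioned, whereas a product of generic random layers does not. The function name, depth, width, and scaling `alpha` below are all illustrative choices.

```python
import numpy as np

def end_to_end_cond(residual, depth=20, n=64, alpha=0.1, seed=0):
    """Condition number of the end-to-end linear map of a deep stack.

    residual=False: product of variance-scaled random layers W_L ... W_1
                    (a caricature of a plain convnet/MLP at initialization).
    residual=True:  product of (I + alpha * W_l), i.e. small perturbations
                    of the identity, as in a residual block.
    """
    rng = np.random.default_rng(seed)
    M = np.eye(n)
    for _ in range(depth):
        W = rng.standard_normal((n, n)) / np.sqrt(n)  # 1/sqrt(n) init scaling
        M = (np.eye(n) + alpha * W) @ M if residual else W @ M
    return np.linalg.cond(M)
```

In this toy setting, `end_to_end_cond(True)` comes out orders of magnitude smaller than `end_to_end_cond(False)`, mirroring the empirical observation that residual parameterizations yield better-behaved optimization landscapes.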
Deep image denoisers achieve state-of-the-art results but with a hidden cost.
As recent literature shows, these deep networks can overfit their training distribution, adding inaccurate hallucinations to the output and generalizing poorly to data that deviates from it.