Published In

2022 7th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA)

Document Type

Pre-Print

Publication Date

5-2022

Subjects

Blind motion deblurring

Abstract

In this paper, we introduce an end-to-end generative adversarial network (GAN) based on sparse learning for single-image motion deblurring, which we call SL-CycleGAN. For the first time in image motion deblurring, we propose a sparse ResNet-block that combines sparse convolution layers with a trainable k-winner spatial pooler based on HTM (Hierarchical Temporal Memory), replacing non-linearities such as ReLU in the ResNet-blocks of the SL-CycleGAN generators. Furthermore, we take inspiration from the domain-to-domain translation ability of CycleGAN and show that image deblurring can be made cycle-consistent while achieving the best qualitative results. Finally, we perform extensive qualitative and quantitative experiments on popular image benchmarks and achieve the highest PSNR of 38.087 dB on the GoPro dataset, 5.377 dB better than the most recent deblurring method.
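
For illustration only (this is not the authors' released code), the sketch below shows one way a residual block could replace ReLU with a k-winners activation in the spirit of the HTM spatial pooler mentioned in the abstract; the channel count, kernel size, and winner ratio are assumptions made for the example.

# Illustrative sketch: residual block with a k-winners activation instead of ReLU.
# Sizes and the winner ratio are assumptions, not values from the paper.
import torch
import torch.nn as nn


class KWinners(nn.Module):
    """Keep the k largest activations per sample and zero out the rest."""

    def __init__(self, ratio: float = 0.1):
        super().__init__()
        self.ratio = ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = x.flatten(start_dim=1)                          # (N, C*H*W)
        k = max(1, int(self.ratio * flat.shape[1]))            # winners per sample
        kth = flat.topk(k, dim=1).values[:, -1].unsqueeze(1)   # k-th largest value
        mask = (flat >= kth).float()                           # keep only the winners
        return (flat * mask).view_as(x)


class SparseResBlock(nn.Module):
    """Residual block using k-winners in place of ReLU (a sketch, not the paper's exact block)."""

    def __init__(self, channels: int = 64, ratio: float = 0.1):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.kwinners = KWinners(ratio)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.kwinners(self.conv1(x))   # sparse activation after first conv
        out = self.conv2(out)
        return x + out                       # residual (skip) connection


if __name__ == "__main__":
    block = SparseResBlock(channels=64)
    y = block(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])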

Description

This is the author’s version of a work. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document.

DOI

10.1109/ICCCBDA55098.2022.9778862

Persistent Identifier

https://archives.pdx.edu/ds/psu/37741
