Paper | Dynamic Residual Dense Network for Image Denoising

Published in Sensors, 2019.

Abstract

Deep convolutional neural networks have achieved great performance on various image restoration tasks. Specifically, the residual dense network (RDN) has achieved great results on image noise reduction by cascading multiple residual dense blocks (RDBs) to make full use of hierarchical features. However, the RDN only performs well when denoising a single noise level, and its computational cost grows significantly with the number of RDBs while the denoising quality improves only slightly. To overcome this, we propose the dynamic residual dense network (DRDN), a dynamic network that can selectively skip some RDBs based on the noise amount of the input image. Moreover, the DRDN allows the denoising strength to be adjusted manually to obtain the best outputs, which makes the network more effective for real-world denoising. Our proposed DRDN performs better than the RDN while reducing the computational cost by 40-50%. Furthermore, it surpasses the state-of-the-art CBDNet by 1.34 dB on a real-world noise benchmark.

Conclusion

In this paper, we propose the DRDN model for noise reduction on real-world images. Our proposed DRDN makes full use of the properties of residual connections and deep supervision. We present a method that denoises images with different noise amounts while simultaneously reducing the average computational cost. The core idea of our method is to dynamically change the number of blocks involved in denoising, and thus the denoising strength, via sequential decisions. Moreover, our method can manually adjust the denoising strength of the model without fine-tuning the parameters.

Key Points

  1. The RDN has two limitations: first, it cannot perform blind denoising; second, as the number of RDBs grows, the computational cost increases sharply while the performance gain is marginal.

  2. This paper proposes the dynamic RDN (DRDN), which can selectively skip some RDBs based on the noise level of the input. The DRDN even allows the denoising strength to be adjusted manually.

  3. The DRDN not only outperforms the RDN but also reduces the computational load by 40-50%.

Background

Let us first review the RDN and the RDB. The RDN contains many skip connections at the granularity of RDBs. The RDN authors explain this as giving consecutive RDBs "contiguous memory". However, recalling Gao Huang's academic talks, we know that this kind of local skip-connection design makes cascaded models more robust; in essence it adds redundancy to the network and improves generalization.
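For concreteness, here is a minimal PyTorch sketch of an RDB as described in the original RDN paper: densely connected convolutions inside the block, a 1x1 local-fusion convolution, and a local residual connection. The channel width, growth rate, and layer count below are illustrative defaults, not necessarily the configuration used in this paper.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block: densely connected convs, 1x1 local fusion, local residual."""
    def __init__(self, channels=64, growth=32, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(num_layers)
        )
        # 1x1 conv fuses the concatenated features back down to `channels`
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # local residual learning: fused features plus the block input
        return x + self.fusion(torch.cat(features, dim=1))
```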

Therefore, the authors argue that some RDBs in the RDN may contribute little to a given task and can be selectively skipped.

Most valuable of all, by visualizing the RDN's intermediate outputs, the authors found that adjacent RDBs produce very similar outputs, as shown in the figure:
(Figure: channel-mean visualization of RDB outputs.)
Here, the visualization of each layer is the mean over all channels of that layer.

This suggests that we can use identity mappings (skip connections) to bypass some RDBs, achieving nearly the same performance while reducing computation.

The authors also mention UNet++, which prunes the U-Net based on how much performance degrades on the test set. In contrast, this paper wants the DRDN to select which RDBs to run on its own, according to the difficulty of the task.

DRDN

(Figure: DRDN architecture.)

  • As shown in the figure, the DRDN has the same overall structure as the RDN, but the RDBs inside are replaced with DRDBs.

  • Inside the DRDN, the authors use an LSTM to predict the importance of the next RDB. If the predicted importance is below a preset threshold, that RDB is replaced by a skip connection. The authors call this the gate module (a rough sketch is given after this list).

  • Note that the LSTM runs sequentially across the consecutive DRDBs.

  • In practice, the authors use 20 DRDBs, each with 6 convolutional layers inside. See Section 4 of the paper for the remaining hyperparameters.
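The paper does not include code, so the following is only a rough PyTorch sketch of how the gating could be wired up, reusing the RDB class from the sketch above. The global-average-pooling summary of the feature map, the gate's hidden size, the hard threshold-then-skip decision, and the `.mean()` over the batch are my assumptions about the mechanism described here; the paper's exact gate parameterization and its training-time gradient handling may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDRDN(nn.Module):
    """Sketch of DRDN-style dynamic skipping: an LSTM scores each upcoming block,
    and blocks whose score falls below `threshold` are replaced by an identity mapping."""
    def __init__(self, num_blocks=20, channels=64, hidden=64, threshold=0.5):
        super().__init__()
        self.blocks = nn.ModuleList(RDB(channels) for _ in range(num_blocks))
        self.lstm = nn.LSTMCell(channels, hidden)   # runs across consecutive DRDBs
        self.fc = nn.Linear(hidden, 1)              # produces the importance score v_d
        self.threshold = threshold

    def forward(self, x):
        state = None
        for block in self.blocks:
            # Summarize the current feature map with global average pooling (assumption).
            summary = F.adaptive_avg_pool2d(x, 1).flatten(1)
            state = self.lstm(summary, state)
            importance = torch.sigmoid(self.fc(state[0]))   # S(v_d)
            if importance.mean() >= self.threshold:
                x = block(x)                                # gate open: run the DRDB
            # gate closed: identity mapping, the block's computation is skipped
        return x
```

Adjusting `threshold` at inference time is one natural way to realize the manual denoising-strength control mentioned in the abstract: a lower threshold lets more DRDBs run, giving stronger denoising.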

Training

  • During training, if the LSTM's gate output does not reach the threshold, the corresponding RDB receives zero gradient.

  • Training proceeds in three stages: first, we force all gate outputs to 1 so that the DRDN itself converges; next, we let the gates operate normally so that the gates converge; finally, we add a penalty on the pass rate, forcing some DRDBs to stay inactive. The overall loss function is as follows:

    (Equation: overall loss function; the image is not reproduced here.)

    where \(S\) is the sigmoid function and \(v_d\) is the vector output for the \(d\)-th DRDB (the output of the gate's FC layer). A guessed reconstruction of this loss is given below.
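The image of the loss formula did not survive the repost. Based only on the description above (a reconstruction term plus a penalty on how often the gates open), a plausible form, where the \(\ell_1\) reconstruction term, the averaging over the \(D\) DRDBs, and the weight \(\lambda\) are my assumptions rather than the paper's exact formula, would be:

\[
\mathcal{L} \;=\; \lVert \hat{y} - y \rVert_1 \;+\; \lambda \cdot \frac{1}{D} \sum_{d=1}^{D} S(v_d),
\]

where \(\hat{y}\) is the denoised output and \(y\) the clean target. Under this reading, the second term penalizes the average pass rate \(S(v_d)\) and pushes some DRDBs to stay off; in the first two training stages it is effectively absent (\(\lambda = 0\)) and it is switched on only in the third stage.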

The experiments are omitted here.

Reposted from: https://www.cnblogs.com/RyanXing/p/11617463.html