Administrator: Shanxi Provincial Education Department
Sponsor: Taiyuan University of Technology
Publisher: Ed. Office of Journal of TYUT
Editor-in-Chief: SUN Hongbin
ISSN: 1007-9432
CN: 14-1220/N
Unlike smog, dust distribution is often non-uniform and localized, and it lacks an effective model of light scattering and absorption, which seriously degrades depth estimation methods that rely on image features. To address this issue, a depth estimation method for dust images incorporating visual attention is proposed. In this method, a distribution model of dust in the image is established, and a visual attention network module is designed accordingly to obtain an attention map of the dust region, which guides the depth estimation network to strengthen depth feature extraction in dust regions. In addition, to enhance the depth feature extraction capability for dust images, a multi-scale feature extraction module is designed. These modules are incorporated into a generative adversarial network framework and combined with a loss function designed for dust-image depth estimation, realizing depth estimation from a single dust image. Experimental results on the NYU Depth v2 dataset show that the mean relative error, logarithmic mean error, and root mean square error obtained by this method are 0.189, 0.052, and 0.508, respectively, outperforming current state-of-the-art algorithms on dust images. A comparison experiment on real data also shows that the method has better generalization ability.
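The attention-guided design described in the abstract can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's actual architecture: all module names, channel widths, and the `(1 + attention)` re-weighting scheme are assumptions chosen to show the idea of an attention map gating multi-scale features before depth regression (the adversarial discriminator and the custom loss are omitted).

```python
import torch
import torch.nn as nn

class DustAttention(nn.Module):
    """Hypothetical module: predicts a per-pixel map in [0, 1]
    highlighting dust regions of the input image."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)  # (B, 1, H, W) attention map

class MultiScaleFeatures(nn.Module):
    """Hypothetical multi-scale extractor: parallel convolutions with
    different kernel sizes, concatenated along the channel axis."""
    def __init__(self, in_ch=3, ch=8):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, ch, k, padding=k // 2) for k in (1, 3, 5)
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

class DepthGenerator(nn.Module):
    """Generator sketch: the attention map re-weights multi-scale
    features so dust regions contribute more to depth regression."""
    def __init__(self):
        super().__init__()
        self.att = DustAttention()
        self.feat = MultiScaleFeatures()
        self.head = nn.Conv2d(3 * 8, 1, 3, padding=1)

    def forward(self, x):
        a = self.att(x)             # dust attention map
        f = self.feat(x) * (1 + a)  # emphasize dust-region features
        return self.head(f)         # single-channel depth map

x = torch.randn(1, 3, 64, 64)
depth = DepthGenerator()(x)
print(depth.shape)  # torch.Size([1, 1, 64, 64])
```

In a full GAN setup, this generator's depth map would be scored by a discriminator alongside ground-truth depth, with the abstract's dust-specific loss added to the adversarial objective.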