Search Results (8)

Search Parameters:
Keywords = cloud and snow recognition

17 pages, 6484 KiB  
Article
DSRSS-Net: Improved-Resolution Snow Cover Mapping from FY-4A Satellite Images Using the Dual-Branch Super-Resolution Semantic Segmentation Network
by Xi Kan, Zhengsong Lu, Yonghong Zhang, Linglong Zhu, Kenny Thiam Choy Lim Kam Sian, Jiangeng Wang, Xu Liu, Zhou Zhou and Haixiao Cao
Remote Sens. 2023, 15(18), 4431; https://doi.org/10.3390/rs15184431 - 8 Sep 2023
Cited by 3 | Viewed by 1305
Abstract
The Qinghai–Tibet Plateau is one of the regions with the highest snow accumulation in China. Although the Fengyun-4A (FY-4A) satellite can monitor snow-covered areas in real time, on a wide scale, and at high temporal resolution, its spatial resolution is low. In this study, the Qinghai–Tibet Plateau, which has a harsh climate and few meteorological stations, was selected as the study area. We propose a deep learning model called the Dual-Branch Super-Resolution Semantic Segmentation Network (DSRSS-Net), in which one branch performs super-resolution to obtain high-resolution snow distributions and the other performs semantic segmentation to achieve accurate snow recognition. An edge enhancement module and a coordinated attention mechanism were introduced into the network to improve classification performance and the edge segmentation of cloud versus snow. A multi-task loss, including a feature affinity loss and an edge loss, is used for optimization to recover fine structural information and further improve edge segmentation. The 1 km resolution image obtained by coupling bands 1, 2, and 3; the 2 km resolution image obtained by coupling bands 4, 5, and 6; and the 500 m resolution single-channel image from band 2 were input into the model for training. The accuracy of the model was verified using ground-based meteorological station data. Snow classification accuracy, false detection rate, and total classification accuracy were compared with the MOD10A1 snow product. The results show that, compared with MOD10A1, the snow classification accuracy and the average total accuracy of DSRSS-Net improved by 4.45% and 5.1%, respectively. The proposed method effectively reduces the misidentification of clouds and snow, achieves higher classification accuracy, and effectively improves the spatial resolution of FY-4A satellite snow cover products.
(This article belongs to the Special Issue Monitoring Cold-Region Water Cycles Using Remote Sensing Big Data)
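The multi-task objective described in the abstract combines a segmentation term with super-resolution, feature affinity, and edge terms. The sketch below shows, in PyTorch, one plausible way such terms could be combined; the loss weights, the L1 form of the affinity comparison, and all tensor shapes are illustrative assumptions, not the published DSRSS-Net configuration.

```python
# Minimal sketch of a dual-branch multi-task loss (segmentation +
# super-resolution + feature affinity + edge), assuming both branches emit
# feature maps of the same spatial size. Weights are hypothetical.
import torch
import torch.nn.functional as F

def feature_affinity_loss(f_seg, f_sr):
    """Match pairwise spatial-feature similarity between the two branches."""
    def affinity(f):
        b, c, h, w = f.shape
        f = F.normalize(f.view(b, c, h * w), dim=1)
        return torch.bmm(f.transpose(1, 2), f)   # (B, HW, HW) similarity map
    return F.l1_loss(affinity(f_seg), affinity(f_sr))

def total_loss(seg_logits, seg_labels, sr_out, hr_image,
               edge_pred, edge_labels, f_seg, f_sr,
               w_sr=1.0, w_fa=1.0, w_edge=1.0):   # assumed weights
    l_seg = F.cross_entropy(seg_logits, seg_labels)    # cloud/snow classes
    l_sr = F.l1_loss(sr_out, hr_image)                 # 500 m reference band
    l_fa = feature_affinity_loss(f_seg, f_sr)          # cross-branch affinity
    l_edge = F.binary_cross_entropy_with_logits(edge_pred, edge_labels)
    return l_seg + w_sr * l_sr + w_fa * l_fa + w_edge * l_edge
```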
Figures

Figure 1. Structure of the DSRSS-Net network. The abbreviations used are as follows: DSRSS-Net is the dual-branch super-resolution semantic segmentation network; Conv is the convolutional layer; EEB is the edge enhancement block; SSSR, SISR, FA, and ASPP are semantic-segmentation super-resolution, single-image super-resolution, feature affinity, and atrous spatial pyramid pooling, respectively.
Figure 2. Structures of the EEB, EEB-ResBlock, and EEB-BottleNeck.
Figure 3. Structure of the improved coordinated attention module. Conv is the convolutional layer, followed by the kernel height × width; BN is batch normalization.
Figure 4. Different cloud–snow segmentation effects. The images are FY-4A 500 m resolution images from bands 1, 2, and 5, synthesized from the super-resolution branch output. NDSI is the 500 m resolution snow product MOD10A1, extracted using the NDSI snow thresholding method; GT is the 500 m resolution cloud and snow label.
Figure 5. Comparison between the super-resolution output and the semantic segmentation output.
Figure 6. Comparison of snow accumulation mappings for the Qinghai–Tibet Plateau at 13:00 GMT on 28 November 2021: (a) composite image of the FY-4A 500 m resolution output from bands 1, 2, and 3 from the super-resolution branch; (b) composite image of the FY-4A 500 m resolution output from bands 4, 5, and 6 from the super-resolution branch; (c) FY-4A 500 m resolution imagery extracted using NDSI; (d) cloud and snow classification with MOD10A1 at 500 m resolution; (e) cloud and snow classification of the proposed model at 500 m resolution.
Figure 7. Detection rate comparison for snow classification from January to March 2020: (a) accuracy and false detection rate of the proposed model and the MOD10A1 snow product; (b) total classification accuracy of the proposed model and the MOD10A1 snow product.
Figure 8. Landsat 8 mask verification: the Landsat 8 images are 30 m resolution raw images; Snowmap is the result of snow detection from the Landsat 8 raw images using the Snowmap algorithm; DSRSS-Net is the snow detection result of the model proposed in this paper.
18 pages, 6572 KiB  
Article
Cloud Screening Method in Complex Background Areas Containing Snow and Ice Based on Landsat 9 Images
by Tingting Wu, Qing Liu and Ying Jing
Int. J. Environ. Res. Public Health 2022, 19(20), 13267; https://doi.org/10.3390/ijerph192013267 - 14 Oct 2022
Viewed by 1565
Abstract
The first step in the application of Landsat 9 imagery is cloud screening, and the International Satellite Cloud Climatology Project (ISCCP) has made cloud screening an important part of the World Climate Research Programme. Accurately identifying clouds in remote sensing images whose underlying surface contains snow and ice has long been a challenging step in the cloud screening process. It is therefore necessary to fully exploit the heterogeneous characteristics of clouds and the snow-covered underlying surface, to resolve cloud–snow confusion in snow and ice environments, and to develop cloud screening techniques that are not disturbed by an icy or snowy underlying surface. Accordingly, this paper systematically investigates cloud screening in snow and ice environments. We build a cloud screening algorithm that accounts for two difficulties: snow and ice on the underlying surface easily interfere with cloud recognition, and fixed empirical or statistical thresholds transfer poorly between scenes. On this basis, we establish a dynamic threshold cloud screening algorithm suited to snow and ice environments. The results provide new ideas and perspectives for addressing the underlying-surface interference that affects most existing cloud screening algorithms.
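The abstract contrasts fixed empirical or statistical thresholds with a scene-adaptive ("dynamic") threshold. As a minimal, hedged illustration of the idea, the sketch below derives a per-scene threshold from the image histogram using Otsu's method; the paper's actual dynamic-threshold construction is not given here and may differ substantially.

```python
# Minimal sketch: derive a cloud/background cut per scene instead of using
# one fixed empirical threshold. Otsu's method maximizes between-class
# variance over the reflectance histogram of the current scene.
import numpy as np

def otsu_threshold(band, bins=256):
    """Return the histogram cut that maximizes between-class variance."""
    hist, edges = np.histogram(band[np.isfinite(band)], bins=bins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * edges[:-1])            # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return edges[np.nanargmax(sigma_b)]

# cloud_mask = band > otsu_threshold(band)   # per-scene, not fixed a priori
```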
Figures

Figure 1. Global cloud cover (A,C) vs. snow cover (B,D) in November 2021 and January 2022.
Figure 2. Proposed method framework.
Figure 3. Flow chart of image processing and MTMF-based mapping steps.
Figure 4. Grey-scale comparison images (A–D) of the original panchromatic band grey-scale images and the MTMF-processed images.
Figure 5. Sub-distribution diagram of the mixed monolithic sieving model, where (A–D) correspond to the results of the four grey-scale images in Figure 4, respectively.
Figure 6. Comparison of the proposed method against ANN and Fmask. Sub-figures (A–D) correspond to the four datasets used in this paper. Cropping the region of interest, a false color composite (bands 7, 4, and 2 for red, green, and blue, respectively) is used to help the reader visually differentiate between snow/ice and clouds.
23 pages, 858 KiB  
Article
Satellite Image for Cloud and Snow Recognition Based on Lightweight Feature Map Attention Network
by Chaoyun Yang, Yonghong Zhang, Min Xia, Haifeng Lin, Jia Liu and Yang Li
ISPRS Int. J. Geo-Inf. 2022, 11(7), 390; https://doi.org/10.3390/ijgi11070390 - 12 Jul 2022
Cited by 2 | Viewed by 2226
Abstract
Cloud and snow recognition technology is of great significance in meteorology and is also widely used in remote sensing mapping, aerospace, and other fields. Building on the traditional practice of manually labeling cloud and snow areas, deep learning methods for labeling these areas have gradually been developed to improve recognition accuracy and efficiency. In this paper, from the perspective of designing an efficient and lightweight network, we propose a cloud and snow recognition model based on a lightweight feature map attention network (Lw-fmaNet) that preserves performance and accuracy. The model is adapted from ResNet18 with the aim of reducing network parameters and improving training efficiency; its main components are a shallow feature extraction module, an intrinsic feature mapping module, and a lightweight adaptive attention mechanism. In our experiments, the proposed model reaches an accuracy of 95.02% with a Kappa index of 93.34%, and achieves an average precision of 94.87%, an average recall of 94.79%, and an average F1-score of 94.82% across four classes: no snow and no clouds, thin cloud, thick cloud, and snow cover. Meanwhile, the network has only 5.617M parameters and takes only 2.276 s. Compared with the convolutional neural networks and lightweight networks commonly used for cloud and snow recognition, the proposed lightweight feature map attention network performs better on cloud and snow recognition tasks.
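The figure list below mentions a one-dimensional convolution (K = 5) inside the lightweight adaptive attention mechanism. An ECA-style channel attention block is one minimal reading of that design; the sketch below is an assumption-laden stand-in, not the published Lw-fmaNet module.

```python
# Minimal sketch of a lightweight channel attention block: global average
# pooling followed by a 1D convolution (K = 5) across channels, then a
# sigmoid gate. Kernel size and placement are assumptions.
import torch
import torch.nn as nn

class LightweightChannelAttention(nn.Module):
    def __init__(self, k: int = 5):           # K = 5, per the figure caption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)    # per-channel summary
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                           # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)          # (B, 1, C)
        y = self.conv(y)                           # local cross-channel mixing
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y                               # rescale channels

# attn = LightweightChannelAttention(k=5)
# out = attn(torch.randn(2, 64, 32, 32))          # same shape as input
```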
Figures

Figure 1. Multi-scale fusion attention network.
Figure 2. Detail map of the L3 layer in the lightweight feature attention network.
Figure 3. Schematic diagram of depthwise separable convolution.
Figure 4. Schematic diagram of hybrid depth convolution.
Figure 5. Feature map after the ReLU activation function.
Figure 6. L1 layer feature heat map in ResNet18.
Figure 7. Detailed diagram of the intrinsic feature mapping module.
Figure 8. Lightweight adaptive attention mechanism.
Figure 9. Process diagram before and after the one-dimensional convolution operation (K = 5).
Figure 10. Cloud and snow recognition effect map of different models in the plateau area.
Figure 11. Generalization effect of cloud and snow recognition based on the lightweight model.
Figure 12. Generalization effect of cloud and snow recognition based on the lightweight model.
19 pages, 8841 KiB  
Article
Recent Changes of Glacial Lakes in the High Mountain Asia and Its Potential Controlling Factors Analysis
by Meimei Zhang, Fang Chen, Hang Zhao, Jinxiao Wang and Ning Wang
Remote Sens. 2021, 13(18), 3757; https://doi.org/10.3390/rs13183757 - 19 Sep 2021
Cited by 27 | Viewed by 5295
Abstract
Current glacial lake datasets for the High Mountain Asia (HMA) region still need improvement: their boundaries in the land–water transition zone are not precisely delineated, and some very small glacial lakes are missing because their reflectance mixes with that of the background. In addition, most studies have focused only on changes in the area of a glacial lake as a whole, not on the actual per-pixel changes along its boundary or their potential controlling factors. In this research, we produced more accurate and complete maps of glacial lake extent in the HMA in 2008, 2012, and 2016, at consistent time intervals, using Landsat satellite images and the Google Earth Engine (GEE) cloud computing platform, and further studied the formation, distribution, and dynamics of the glacial lakes. In total, 17,016 and 21,249 glacial lakes were detected in 2008 and 2016, respectively, covering areas of 1420.15 ± 232.76 km2 and 1577.38 ± 288.82 km2; the lakes were mainly located at altitudes between 4400 m and 5600 m. The annual areal expansion rate was approximately 1.38% from 2008 to 2016. To explore the cause of the rapid expansion of individual glacial lakes, we investigated their long-term expansion rates by measuring changes in shoreline positions. The results show that glacial lakes close to glaciers are expanding rapidly, at rates exceeding 20 m/yr from 2008 to 2016. Glacial lakes in the Himalayas showed the highest expansion rate, more than 2 m/yr, followed by the Karakoram Mountains (1.61 m/yr) and the Tianshan Mountains (1.52 m/yr). The accelerating melt of glacier ice and snow caused by global warming is the primary contributor to glacial lake growth. These results may help in understanding detailed lake dynamics and mechanisms, and may also facilitate recognition of the potential hazards associated with glacial lakes in this region.
(This article belongs to the Special Issue Remote Sensing in Glaciology and Cryosphere Research)
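One concrete step in the mapping procedure (Figure 2, below) is finding potential glacial lake extent by applying an MNDWI threshold (≥0.1). Here is a minimal NumPy sketch of just that step, assuming green and SWIR1 reflectance arrays as inputs; the surrounding pipeline (cloud/shadow/snow masking, NLAC shoreline refinement, GEE processing) is not shown.

```python
# Minimal sketch: MNDWI water test with the 0.1 cutoff named in Figure 2.
# Inputs are assumed to be co-registered reflectance arrays.
import numpy as np

def mndwi(green: np.ndarray, swir1: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Modified Normalized Difference Water Index: (G - SWIR1) / (G + SWIR1)."""
    return (green - swir1) / (green + swir1 + eps)

def potential_lake_mask(green: np.ndarray, swir1: np.ndarray,
                        threshold: float = 0.1) -> np.ndarray:
    """Boolean mask of candidate water pixels, per the paper's 0.1 cutoff."""
    return mndwi(green, swir1) >= threshold
```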
Figures

Figure 1. The location of sub-basins of High Mountain Asia (HMA). Glacier outlines are from the Randolph Glacier Inventory (RGI v6.0), the Second Chinese Glacier Inventory (CGI2), and the GAMDAM inventory, and are drawn in sky blue; the positions of the China Meteorological Administration (CMA) stations are indicated by black triangles.
Figure 2. Automated lake mapping procedure (left), including image subsetting; snow, cloud, shadow, and SLC-off pixel detection; global threshold segmentation; generation of an image block for each glacial lake obtained from the Hi-MAG dataset and initial segmentation results; and local glacial lake mapping. Lake mapping procedure for one area in the central Himalayas (right): (a) false color composite (R/G/B = bands 5/4/3) of original TOA data; (b) bad-observation identification including clouds, shadows, and snow; (c) potential glacial lake extent found by applying an MNDWI threshold (≥0.1); (d) image block for each glacial lake; and (e) the final glacial lake shorelines determined using the NLAC model. The image used in this case was acquired by the Landsat-8 OLI on 28 October 2014. The sequence numbers (a–e) in the left-hand diagram correspond with those in the right-hand parts.
Figure 3. Schematic diagram of the Digital Shoreline Analysis System (DSAS). The sample shoreline data are provided by the DSAS software application.
Figure 4. Distribution of glacial lakes in the HMA in 2008, 2012, and 2016. This map was produced using an automated mapping method and Landsat images collected throughout the year. The three zoom-in maps (a–c) show local detail in the central Himalaya (a), Eastern Hindu Kush (b), and Nyainqentanglha (c).
Figure 5. (a) Frequency and areal distribution of glacial lakes by elevation class; (b) expansion rate of glacial lake area within different elevation ranges from 2008 to 2016.
Figure 6. Evolution of different types of glacial lakes: (a–c) proglacial lakes; (d–f) supraglacial lakes; and (g–i) unconnected glacial lakes. Background images were derived from USGS Landsat 8 satellite data for 2016.
Figure 7. Temporal development and sudden drainage of Lake Merzbacher (79.89°E, 42.23°N).
Figure 8. Examples of shoreline erosion and expansion rates measured at 30 m intervals between 2008, 2012, and 2016 for lakes at (a) 94.2667°E, 30.1018°N; (b) 89.1916°E, 28.3325°N; and (c) 94.0917°E, 30.1251°N.
Figure 9. Mean rate of expansion of glacial lakes from 2008 to 2016 across the HMA region.
Figure 10. Spatial variation in (a) air temperature, derived from NCEP Reanalysis data, and (b) precipitation, derived from GPCP data, from 1979 to 2016 in the HMA region. Plus symbols indicate the different sub-basins listed in Table 2.
26 pages, 8970 KiB  
Article
Near-Ultraviolet to Near-Infrared Band Thresholds Cloud Detection Algorithm for TANSAT-CAPI
by Ning Ding, Jianbing Shao, Changxiang Yan, Junqiang Zhang, Yanfeng Qiao, Yun Pan, Jing Yuan, Youzhi Dong and Bo Yu
Remote Sens. 2021, 13(10), 1906; https://doi.org/10.3390/rs13101906 - 13 May 2021
Cited by 9 | Viewed by 2555
Abstract
The cloud and aerosol polarization imaging detector (CAPI) is one of the important payloads on the China Carbon Dioxide Observation Satellite (TANSAT); it performs multispectral polarization detection with accurate on-orbit calibration. Its main function is to identify the interference of clouds and aerosols along the atmospheric detection path and thereby improve the retrieval accuracy of greenhouse gases, so accurately identifying clouds in its remote sensing images is of great significance. However, to meet lightweight design requirements, CAPI is equipped only with channels in the near-ultraviolet to near-infrared bands, and effective cloud recognition is difficult with traditional spectral threshold algorithms designed for the visible to thermal infrared bands. To solve this problem, this paper proposes a cloud detection method based on different threshold tests from the near ultraviolet to the near infrared (NNDT). The algorithm introduces the 0.38 μm band and the ratio of the 0.38 μm band to the 1.64 μm band to separate cloud pixels from clear-sky pixels, exploiting the pronounced difference in near-ultraviolet radiation characteristics between clouds and ground objects and the band ratio's ability to identify clouds over snow. The experimental results show that the cloud recognition hit rate (PODcloud) reaches 0.94 (ocean), 0.98 (vegetation), 0.99 (desert), and 0.86 (polar), meeting the application standard for CAPI data cloud detection. The research shows that the NNDT algorithm removes the need for thermal infrared bands in cloud detection, avoids the dependence on a minimum surface reflectance database embedded in traditional cloud recognition algorithms, and lays a foundation for aerosol and CO2 parameter retrieval.
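The core of the NNDT idea is that snow is bright at 0.38 μm but dark at 1.64 μm, so the 0.38/1.64 ratio is large over snow and near one over cloud. The sketch below illustrates that pair of tests with hypothetical threshold values; the paper's tuned, scene-dependent thresholds are not reproduced here.

```python
# Minimal sketch of the two NNDT-style tests named in the abstract:
# a 0.38 um brightness test plus a 0.38/1.64 um ratio test to reject snow.
# t_bright and t_ratio are assumed values, not the paper's thresholds.
import numpy as np

def nndt_like_cloud_test(r038: np.ndarray, r164: np.ndarray,
                         t_bright: float = 0.3,     # assumed cutoff
                         t_ratio: float = 2.0,      # assumed cutoff
                         eps: float = 1e-9) -> np.ndarray:
    bright = r038 > t_bright            # clouds are bright at 0.38 um
    ratio = r038 / (r164 + eps)
    snow_like = ratio > t_ratio         # snow: bright at 0.38, dark at 1.64
    return bright & ~snow_like          # cloudy = bright but not snow-like
```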
Figures

Graphical abstract

Figure 1. Reflectance curves of cloud and underlying surfaces. The underlying surface spectra come from the ENVI spectral library [35], and the cloud spectrum comes from the Airborne Visible/Infrared Imaging Spectrometer.
Figure 2. CAPI images over Australia on 29 March 2017: (a–c) images of semi-vegetated areas in southern Australia; (d–f) images of desert areas in central Australia; (a,d) 0.38 μm band; (b,e) 0.67 μm band; (c,f) 0.87 μm band.
Figure 3. Flow chart of the NNDT algorithm using CAPI data.
Figure 4. Case over the northwest coast of Australia and the Indian Ocean at 05:36 UTC 26 April 2017 for CAPI and 05:40 UTC 26 April 2017 for MODIS. The "time" is the start time when the imager took the image of this scene. (a) The CAPI false color image; (b) the CAPI cloud flag image derived by applying the NNDT algorithm; (c) the MODIS true-color image overlapping the CAPI scene; (d) the MYD35 cloud flag with only "cloud" and "clear" results.
Figure 5. Same as Figure 4, but over the islands of Indonesia at 05:42 UTC 26 April 2017 for CAPI and 05:45 UTC 26 April 2017 for MODIS.
Figure 6. Same as Figure 4, but over the southwest part of Australia at 05:36 UTC 26 April 2017 for CAPI and 05:35 UTC 26 April 2017 for MODIS.
Figure 7. Same as Figure 4, but over the Sahara Desert at 12:24 UTC 26 April 2017 for CAPI and 12:25 UTC 26 April 2017 for MODIS.
Figure 8. Same as Figure 4, but over the Antarctic continent at 00:30 UTC 1 March 2017 for CAPI and 01:15 UTC 1 March 2017 for MODIS.
Figure 9. Per-pixel comparison of NNDT cloud identification results (NNDT-CLFG) based on SGLI observation data against SGLI-CLFG. The pixels represent: (A) NNDT-CLFG and SGLI-CLFG both cloudy (white); (B) both clear (gray); (C) NNDT-CLFG cloudy and SGLI-CLFG clear (orange); (D) NNDT-CLFG clear and SGLI-CLFG cloudy (blue). Column 1: composite true-color images of SGLI. Column 2: per-pixel comparison of the cloud-screening results obtained by the NNDT algorithm and SGLI-CLFG. Line 1: case over vegetation at T0527 26 April 2018 for SGLI. Line 2: case over desert at T0524 10 March 2018 for SGLI.
Figure 10. Same as Figure 9, but Line 1: case over ocean at T0629 10 March 2018 for SGLI; Line 2: case over snow at T1520 25 December 2018 for SGLI.
Figure 11. POD, FAR, HR, and KSS scores of the NNDT algorithm in four scenes: (a) vegetation, (b) desert, (c) ocean, and (d) polar.
14 pages, 6401 KiB  
Article
Design of Desktop Audiovisual Entertainment System with Deep Learning and Haptic Sensations
by Chien-Hsing Chou, Yu-Sheng Su, Che-Ju Hsu, Kong-Chang Lee and Ping-Hsuan Han
Symmetry 2020, 12(10), 1718; https://doi.org/10.3390/sym12101718 - 19 Oct 2020
Cited by 12 | Viewed by 2671
Abstract
In this study, we designed a four-dimensional (4D) audiovisual entertainment system called Sense. The system comprises a scene recognition system and hardware modules that provide haptic sensations while users watch movies and animations at home. In the scene recognition system, we used Google Cloud Vision to detect common scene elements in a video, such as fire, explosions, wind, and rain, and to further determine whether a scene depicts hot weather, rain, or snow. Additionally, for animated videos, we applied deep learning with a single-shot multibox detector (SSD) to detect whether the video contains fire-related objects. The hardware module provides six types of haptic sensations, arranged line-symmetrically for a better user experience. Based on the object detection results from the scene recognition system, the system generates the corresponding haptic sensations. Sense integrates deep learning, auditory signals, and haptic sensations to deliver an enhanced viewing experience.
(This article belongs to the Special Issue Selected Papers from IIKII 2020 Conferences II)
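A minimal sketch of the scene-recognition step: request labels from Google Cloud Vision for a video frame and map weather- and fire-related labels to haptic commands. The label set, the HAPTIC_MAP entries, and the command strings are hypothetical stand-ins for the authors' hardware interface; only the Cloud Vision client calls follow the real library.

```python
# Minimal sketch: label a frame with Google Cloud Vision, then look up
# haptic commands. Requires GOOGLE_APPLICATION_CREDENTIALS to be configured.
from google.cloud import vision

# Hypothetical label-to-haptic mapping; the paper's mapping is not published here.
HAPTIC_MAP = {"fire": "heat", "explosion": "vibration",
              "rain": "water_spray", "snow": "cold_air", "wind": "fan"}

def labels_for_frame(jpeg_bytes: bytes) -> list[str]:
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=jpeg_bytes))
    return [label.description.lower() for label in response.label_annotations]

def haptic_commands(jpeg_bytes: bytes) -> set[str]:
    """Commands to send to the (hypothetical) 4D hardware controller."""
    return {HAPTIC_MAP[l] for l in labels_for_frame(jpeg_bytes) if l in HAPTIC_MAP}
```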
Figures

Graphical abstract

Figure 1. The system framework of Sense.
Figure 2. (a) The appearance of the 4D sensation device; (b) the experience area of the 4D sensation device.
Figure 3. Module installation.
Figure 4. (a) Sense operating software; (b) control software of the 4D sensation device.
Figure 5. Examples of detected results after applying Google Cloud Vision.
Figure 6. Some animation images from the dataset used in this study.
Figure 7. The actual trial scenarios.
Figure 8. AUC results of some fire-related objects.
Figure 9. Sixty-four test animation images.
Figure 10. (a) The test animation image; (b) the testing result of the proposed SSD-based system; (c) the testing result of Google Cloud Vision.
15 pages, 2174 KiB  
Article
Cloud Detection for Satellite Imagery Using Attention-Based U-Net Convolutional Neural Network
by Yanan Guo, Xiaoqun Cao, Bainian Liu and Mei Gao
Symmetry 2020, 12(6), 1056; https://doi.org/10.3390/sym12061056 - 25 Jun 2020
Cited by 65 | Viewed by 6459
Abstract
Cloud detection is an important and difficult task in the pre-processing of satellite remote sensing data. Traditional cloud detection methods often perform poorly in complex environments or in the presence of various noise disturbances. With the rapid development of artificial intelligence, deep learning methods have achieved great success in fields such as image processing, speech recognition, and autonomous driving. This study proposes a deep learning model for cloud detection, Cloud-AttU, which is based on a U-Net network and incorporates an attention mechanism. The Cloud-AttU model adopts a symmetric encoder-decoder structure and fuses high-level and low-level features through skip connections, so the output contains richer multi-scale information. This symmetric network structure is concise and stable, and significantly enhances image segmentation. Based on the characteristics of cloud detection, the model introduces an attention mechanism that allows it to learn more effective features and to distinguish cloud from non-cloud pixels more accurately. The experimental results show that the proposed method has a significant accuracy advantage over traditional cloud detection methods and also performs well in the presence of snow/ice and other bright non-cloud objects, with strong resistance to disturbance. The Cloud-AttU model achieves excellent results in cloud detection tasks, indicating that this symmetric network architecture has great potential for satellite image processing and deserves further research.
(This article belongs to the Special Issue Symmetry in Artificial Visual Perception and Its Application)
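The attention mechanism added to the U-Net skip connections can be illustrated with the additive attention gate popularized by Attention U-Net. The sketch below is a generic gate of that kind, assuming the gating signal has already been resized to the skip map's resolution; it does not reproduce Cloud-AttU's exact configuration.

```python
# Minimal sketch of an additive attention gate on a U-Net skip connection:
# the decoder signal g weights the encoder skip map x before fusion.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_skip: int, ch_gate: int, ch_inter: int):
        super().__init__()
        self.w_x = nn.Conv2d(ch_skip, ch_inter, kernel_size=1)  # skip features
        self.w_g = nn.Conv2d(ch_gate, ch_inter, kernel_size=1)  # gating signal
        self.psi = nn.Conv2d(ch_inter, 1, kernel_size=1)        # attention logits
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: encoder skip map; g: decoder map, assumed same spatial size as x
        alpha = self.sigmoid(self.psi(self.relu(self.w_x(x) + self.w_g(g))))
        return x * alpha   # suppress non-cloud regions, keep cloud responses
```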
Figures

Figure 1. U-Net architecture diagram modified from the original study [27]. Green/yellow boxes indicate multi-channel feature maps; red arrows indicate 3 × 3 convolution for feature extraction; cyan arrows indicate skip connections for feature fusion; downward orange arrows indicate max pooling for dimension reduction; upward orange arrows indicate up-sampling for dimension recovery.
Figure 2. The structure of the Cloud-AttU model. All orange/white boxes correspond to multi-channel feature maps. Cloud-AttU is equipped with skip connections that adaptively rescale feature maps in the encoding path with weights learned from the correlation with feature maps in the decoding path.
Figure 3. Diagram of the attention gate in Cloud-AttU.
Figure 4. Cloud detection results for different scenes over the Landsat-Cloud dataset [48]. The first row shows the RGB images (top), the second row the ground truths (middle), and the third row the predictions of the Cloud-AttU model (bottom). Yellow indicates cloud; purple indicates no cloud.
Figure 5. Cloud detection results for different scenes over the Landsat-Cloud dataset [48]. The first column is the RGB image (left), the second the ground truth (center left), the third the predictions of the Cloud-Net model (center right), and the fourth the predictions of the Cloud-AttU model (right). Yellow indicates cloud; purple indicates no cloud.
Figure 6. Cloud detection results under the influence of snow- and ice-covered ground over the Landsat-Cloud dataset [48]. Columns as in Figure 5. Yellow indicates the presence of clouds; purple indicates the absence of clouds.
Figure 7. Cloud detection results under the influence of other factors over the Landsat-Cloud dataset [48]. Columns as in Figure 5. Yellow indicates the presence of clouds; purple indicates the absence of clouds.
16 pages, 21936 KiB  
Article
A Novel Approach for Cloud Detection in Scenes with Snow/Ice Using High Resolution Sentinel-2 Images
by Ling Han, Tingting Wu, Qing Liu and Zhiheng Liu
Atmosphere 2019, 10(2), 44; https://doi.org/10.3390/atmos10020044 - 23 Jan 2019
Cited by 5 | Viewed by 4086
Abstract
Distinguishing snow from clouds is difficult in cloud detection because cloud and snow have similar spectral characteristics in the visible wavelength range. This paper presents a novel approach to distinguishing clouds from snow, improving the accuracy of cloud detection and allowing more efficient use of satellite images. First, we selected thick and thin clouds from high resolution Sentinel-2 images and applied a matched filter. Second, the fractal digital number-frequency (DN-N) algorithm was applied to detect clouds associated with anomalies. Third, spatial analyses, particularly spatial overlaying and hotspot analyses, were conducted to eliminate false anomalies. The results indicate that the method effectively detects clouds under various cloud covers over different areas. The resulting cloud detection offers specific advantages over classic methods, especially for satellite images containing snow and brightly colored ground objects whose spectral characteristics are similar to those of clouds.
(This article belongs to the Special Issue Remote Sensing of Clouds)
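The first step, matched filtering, scores each pixel by its whitened projection onto a target cloud spectrum, here assumed to be estimated from the manually selected thick/thin cloud pixels the abstract mentions. Below is a minimal NumPy sketch under those assumptions; band count and inputs are illustrative.

```python
# Minimal sketch of a matched filter (MF) over a multispectral cube:
# whiten by the background covariance and project onto the target spectrum.
import numpy as np

def matched_filter(cube: np.ndarray, target: np.ndarray) -> np.ndarray:
    """cube: (rows, cols, bands); target: (bands,) mean cloud spectrum."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b)
    mu = x.mean(axis=0)                            # background mean
    cov = np.cov(x - mu, rowvar=False)             # background covariance
    cov_inv = np.linalg.pinv(cov)                  # pseudo-inverse for stability
    d = target - mu
    scores = (x - mu) @ cov_inv @ d / (d @ cov_inv @ d)   # MF score per pixel
    return scores.reshape(h, w)                    # ~1 at target, ~0 background
```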
Figures

Figure 1. Proposed method framework.
Figure 2. High resolution Sentinel-2 images of the study area (cropping the region of interest; false color composite to help the reader visually differentiate between snow/ice and clouds [32]; bands 4, 10, and 11 for red, green, and blue, respectively): (a) original image; (b) cloud detection results using the traditional maximum-likelihood method.
Figure 3. Results of the matched filtering (MF) method: (a) the MF result for ROI 1; (b) the MF result for ROI 2.
Figure 4. Fractal schema for ln(DN) versus ln(N) of (a) region of interest (ROI) 1 and (b) ROI 2; Bi (i = 1, 2, 3, ...) represents the fractal dimension of each segment, Ni (i = 1, 2, 3, ...) the pixel size and frequency of a certain DN threshold, and Ti (i = 1, 2, 3, ...) the DN value.
Figure 5. Cloud detection based on the DN-N fractal model: (a) results of the DN-N fractal model for ROI 1; (b) results of the DN-N fractal model for ROI 2.
Figure 6. Output image of the anomaly overlaying between ROI 1 and ROI 2.
Figure 7. Output image of hotspot analysis based on the anomaly overlaying between ROI 1 and ROI 2.
Figure 8. Output image of cloud detection. Red areas represent clouds; a 100 m one-sided buffer zone around patches of thick cloud can include thinner clouds that were not detected previously. (a) Original image; (b) final cloud detection result.
Figure 9. Comparison of the proposed technique against other methods, using imagery in different areas (cropping the regions of interest).
Figure 10. Proposed method results for 25 Sentinel-2 scenes. Composited images with bands 4, 10, and 11 in red, green, and blue, respectively, are shown on the left; cloud masks are shown in white on the right (cropping 6 regions of interest).
Figure 11. Enlarged view of the six indicated areas of detail in Figure 10; (a–f) correspond to the six regions of interest in Figure 10.