Search Results (177)

Search Parameters:
Keywords = azimuth correction

22 pages, 2943 KiB  
Article
Characterization of 77 GHz Radar Backscattering from Sea Surfaces at Low Incidence Angles: Preliminary Results
by Qinghui Xu, Chen Zhao, Zezong Chen, Sitao Wu, Xiao Wang and Lingang Fan
Remote Sens. 2025, 17(1), 116; https://doi.org/10.3390/rs17010116 - 1 Jan 2025
Abstract
Millimeter-wave (MMW) radar can provide measurements of the ocean surface with high temporal and spatial resolution. Topics such as the characterization of the radar echo have attracted widespread attention from researchers. However, most existing studies focus on ocean surface backscatter at low microwave bands, while the sea surface backscattering mechanism in the 77 GHz band remains poorly understood. To address this issue, this paper investigates the scattering mechanism of the ocean surface in the 77 GHz band at small incidence angles. The backscattering coefficient is first simulated by applying the quasi-specular (QS) scattering model and the corrected geometric-optics scattering model (GO4), using two different ocean wave spectrum models (the Hwang spectrum and the Kudryavtsev spectrum). Then, the dependence of the sea surface normalized radar cross section (NRCS) on incidence angle, azimuth angle, and sea state is investigated. Finally, by comparing model simulations with radar-measured data, the 77 GHz scattering characterization of sea surfaces at near-nadir incidence is verified. In addition, experimental results from a wave tank are shown, and the difference in the scattering mechanism between water surfaces and oceans is further discussed. The results are promising for a better understanding of the ocean surface backscattering mechanism in the MMW band, and they provide a new method for fostering the use of radar technologies for real-time ocean observation.
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
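The quasi-specular (geometric-optics) model named in this abstract has a standard closed form for near-nadir backscatter. A minimal sketch, assuming an isotropic Gaussian slope distribution; the nadir reflectivity |R(0)|² = 0.61 and the mean-square slope value are illustrative placeholders, not taken from the paper:

```python
import numpy as np

def nrcs_quasi_specular(theta_deg, mss, fresnel_r0=0.61):
    """Quasi-specular (geometric-optics) NRCS of a rough surface.

    theta_deg  : incidence angle in degrees
    mss        : total mean-square slope of the surface (unitless)
    fresnel_r0 : |R(0)|^2, nadir Fresnel power reflectivity
                 (placeholder value, not from the paper)
    """
    theta = np.radians(theta_deg)
    return (fresnel_r0 / (mss * np.cos(theta) ** 4)) * np.exp(-np.tan(theta) ** 2 / mss)

# NRCS in dB at near-nadir incidence angles for a moderate sea state
for th in (0, 5, 10, 15):
    sigma0_db = 10 * np.log10(nrcs_quasi_specular(th, mss=0.04))
    print(f"{th:2d} deg : {sigma0_db:6.2f} dB")
```

The characteristic near-nadir behavior — a specular peak at 0° that falls off with incidence angle, faster for calmer (smaller-slope) seas — follows directly from the exponential term.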
Figure 1: The geometry for MMW radar observations.
Figure 2: Experimental set-up for sea surface observations. The device in the blue circle is the UAV with the MMW radar. The red circle is the site of the wave buoy.
Figure 3: Experimental set-up in the wave tank. The radar was fixed on the stationary red bridge at a height of about 13.5 m.
Figure 4: The range spectrum of a single-chirp signal for the sea surface observation.
Figure 5: The attitude of the UAV platform.
Figure 6: Experimental set-up for the external calibration. (a) The geometry for the external calibration: the UAV with the MMW radar hovered in the air, and the trihedral corner reflector was placed on the ground. (b) The measurement environment from the view of the camera in the UAV.
Figure 7: The simulated NRCS from two theoretical electromagnetic scattering models versus incidence angle with four sea states (H_s) in the 77 GHz frequency band. (a) Gaussian QS (QS-G) model, upwind direction. (b) Gaussian GO4 (GO4-G) model, upwind direction. (c) QS model, crosswind direction. (d) GO4 model, crosswind direction.
Figure 8: The simulated NRCS from two scattering models versus incidence angle with four sea states (H_s) in the 77 GHz frequency band. (a) QS model, upwind direction. (b) GO4 model, upwind direction. (c) QS model, crosswind direction. (d) GO4 model, crosswind direction.
Figure 9: The NRCS versus H_s at three incidence angles (5°, 10°, and 15°) for two sea spectrum models. (a) H spectrum, upwind directions. (b) K spectrum, upwind directions. (c) H spectrum, crosswind directions. (d) K spectrum, crosswind directions.
Figure 10: The error err as a function of H_s using the K spectrum. Curves with circles and squares represent the results in upwind and crosswind directions.
Figure 11: The simulated NRCS versus azimuth angle with two sea states in the 77 GHz frequency band by the H spectrum and K spectrum, at different incidence angles. (a–c) Results for an H_s of 0.45 m at 5°, 10°, and 15° incidence. (d–f) Results for an H_s of 1.08 m.
Figure 12: The error err as a function of azimuth angle using the H spectrum. The blue solid, green dotted, and red dotted lines represent the results obtained at 5°, 10°, and 15° incidence, respectively.
Figure 13: The obtained NRCS versus incidence angle with six sea states in upwind directions. (a–f) Ocean surfaces with an H_s of 0.33, 0.40, 0.43, 0.48, 0.52, and 0.63 m, respectively. Error bars are ±1 standard deviation.
Figure 14: Comparison of the NRCS (in dB) obtained from the upwind directions in near-nadir radar sea surface observations. The H_s of the six datasets are 0.33, 0.40, 0.43, 0.48, 0.52, and 0.63 m, respectively. Error bars are ±1 standard deviation.
Figure 15: The obtained NRCS from the wave tank observation in the upwind direction. (a) Irregular wave water surfaces. (b) Wave tank experiments with regular waves.
Figure 16: The relative NRCS as a function of θ for the results obtained from water and ocean surfaces in the 77 GHz frequency band. The results from the two observations are shown as the red triangle line and the green rhombus line, respectively. Curves with circles represent the relative difference (in dB) between the results from water and sea surfaces.
Figure 17: Wave profile from regular waves in the wave tank.
20 pages, 7164 KiB  
Article
A Method for Borehole Image Reverse Positioning and Restoration Based on Grayscale Characteristics
by Shuangyuan Chen, Zengqiang Han, Yiteng Wang, Yuyong Jiao, Chao Wang and Jinchao Wang
Appl. Sci. 2025, 15(1), 222; https://doi.org/10.3390/app15010222 - 30 Dec 2024
Abstract
Borehole imaging technology is a critical means for the meticulous measurement of rock mass structures. However, the inherent issue of probe eccentricity significantly compromises the quality of borehole images obtained during testing. This paper proposes a method based on grayscale feature analysis for the reverse positioning of imaging probes and image restoration. The response characteristics of probe eccentricity are analyzed, leading to a grayscale feature model and a reverse-positioning analysis method. By calculating an error matrix from the probe's spatial trajectory, the method corrects grayscale errors caused by probe eccentricity in the images. The azimuthal errors in borehole images caused by probe eccentricity are quantified, establishing a method for correcting image perspective errors based on probe spatial-positioning calibration. Results indicate a significant enhancement in the effectiveness and measurement accuracy of borehole images.
Figure 1: Digital panoramic borehole camera system.
Figure 2: Schematic of panoramic image transformation.
Figure 3: Brightness variation bands in a borehole image.
Figure 4: Imaging principle of the borehole camera. 1—borehole wall, 2—imaging probe, 3—CMOS camera, 4—light source, 5—truncated cone mirror.
Figure 5: Coordinate system of borehole wall and probe.
Figure 6: Algorithm flow of reverse positioning of the borehole imaging probe.
Figure 7: Typical borehole image under probe eccentricity.
Figure 8: Regression analysis curve at depth 24.5 m. (a) Regression before fixing λ; (b) regression after fixing λ.
Figure 9: Estimation of parameter λ and sample mean.
Figure 10: Estimated 3D trajectory of the probe in the borehole.
Figure 11: Result of color space transfer.
Figure 12: Grayscale error caused by probe eccentricity. (a) Probe working centered in the borehole; (b) borehole image with the probe centered; (c) probe working off-center in the borehole; (d) borehole image with the probe eccentrically positioned.
Figure 13: Calculation result of the grayscale offset matrix.
Figure 14: Borehole image after grayscale restoration.
Figure 15: Grayscale histogram of the borehole image. (a) Histogram before restoration; (b) histogram after restoration.
Figure 16: Perspective error caused by probe eccentricity. (a) Probe working centered in the borehole; (b) borehole image with the probe centered; (c) probe working off-center in the borehole; (d) borehole image with the probe eccentrically positioned.
Figure 17: Probe coordinate system and borehole coordinate system.
Figure 18: Calculation result of the perspective offset matrix.
Figure 19: Borehole image after grayscale and perspective restoration.
23 pages, 9152 KiB  
Article
Multi-Band Scattering Characteristics of Miniature Masson Pine Canopy Based on Microwave Anechoic Chamber Measurement
by Kai Du, Yuan Li, Huaguo Huang, Xufeng Mao, Xiulai Xiao and Zhiqu Liu
Sensors 2025, 25(1), 46; https://doi.org/10.3390/s25010046 - 25 Dec 2024
Abstract
Using microwave remote sensing to invert forest parameters requires clear canopy scattering characteristics, which can be investigated intuitively through scattering measurements. However, there are very few ground-based measurements of forest branches, needles, and canopies. In this study, a quantitative analysis of the contributions of canopy branches, needles, and the ground for Masson pine scenes in the C-, X-, and Ku-bands was conducted on a microwave anechoic chamber measurement platform. Four canopy scenes with different densities were constructed by defoliation in the vertical direction, and backscattering data for each scene were collected in the C-, X-, and Ku-bands across eight incidence angles and eight azimuth angles. The results show that in the vertical observation direction, the backscattering energy of the C- and X-bands was predominantly contributed by the ground, whereas the Ku-band signal was more sensitive to the canopy structure. The backscattering energy of each scene was influenced by the incidence angle, particularly in cross-polarization, where backscattering energy increased with larger incidence angles. The scene's backscattering energy was influenced by the scattering and extinction of canopy branches and needles, as well as by ground scattering, resulting in a complex relationship with canopy density. In addition, applying orientation correction to the polarization scattering matrix can mitigate the impact of the incidence angle and reduce the decomposition energy errors in the Freeman–Durden model. To ensure the reliability of forest parameter inversion based on SAR data, greater emphasis should be placed on physical models that account for signal scattering and the extinction process, rather than on empirical models.
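The orientation correction of the polarization scattering matrix mentioned in this abstract is, in the usual polarimetric formulation, a rotation of the 3×3 coherency matrix by the polarization orientation angle. A generic sketch, assuming the standard deorientation condition (the rotation angle zeroes Re(T23) of the rotated matrix); the paper's exact variant may differ:

```python
import numpy as np

def deorient(T):
    """Polarization orientation angle correction of a 3x3 coherency matrix T,
    as commonly applied before Freeman-Durden-type decompositions.

    The rotation angle is chosen so that Re(T23) of the rotated matrix
    vanishes (the standard deorientation condition).
    """
    theta = 0.25 * np.arctan2(2.0 * T[1, 2].real, T[1, 1].real - T[2, 2].real)
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R = np.array([[1, 0, 0], [0, c, s], [0, -s, c]], dtype=complex)
    return R @ T @ R.conj().T, theta

# Demo: rotate a diagonal coherency matrix by a known orientation angle,
# then check that deorientation recovers it (synthetic matrix, not data).
phi = 0.3
c, s = np.cos(2 * phi), np.sin(2 * phi)
Rm = np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=complex)  # rotation by -phi
T0 = np.diag([5.0, 2.0, 1.0]).astype(complex)
T_rot = Rm @ T0 @ Rm.conj().T
T_corr, theta = deorient(T_rot)
print(theta)  # recovered orientation angle
```

After correction the cross-pol term T33 returns to its unrotated value, which is what reduces the decomposition energy errors the abstract refers to.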
Figure 1: (a) Interior view of the microwave characteristic measurement and simulation imaging science experiment platform (LAMP, Deqing, China); (b) geometric diagram of the platform.
Figure 2: (a) The scene with all needles (S1), (b) the first defoliation scene (S2), (c) the second defoliation scene (S3), (d) the scene without needles (S4).
Figure 3: Workflow of this study.
Figure 4: Illustration of the backscatter energy profile and signal locations of canopy and ground.
Figure 5: Statistics of the ground and canopy energy contribution ratios for different canopy structure scenes: (a) scene S1; (b) scene S2; (c) scene S3; (d) scene S4.
Figure 6: Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the C-band: (a) HH polarization; (b) VV polarization; (c) HV polarization; (d) VH polarization.
Figure 7: Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the X-band: (a) HH polarization; (b) VV polarization; (c) HV polarization; (d) VH polarization.
Figure 8: Cumulative backscattering energy distribution curves for various scenes under different polarization modes in the Ku-band: (a) HH polarization; (b) VV polarization; (c) HV polarization; (d) VH polarization.
Figure 9: Variation of backscattering energy with observation incidence angle for scene S1: (a) C-band; (b) X-band; (c) Ku-band.
Figure 10: Variation of backscattering energy with observation incidence angle for scene S1 after de-orientation: (a) C-band; (b) X-band; (c) Ku-band.
Figure 11: Side-looking backscattering energy for different canopy structure scenes of Masson pine: (a–c) C-band at incidence angles of 35°, 45°, and 55°, respectively; (d–f) X-band at the same angles; (g–i) Ku-band at the same angles.
Figure 12: Side-looking backscattering energy for different canopy structure scenes of Masson pine after orientation correction: (a–c) C-band at incidence angles of 35°, 45°, and 55°, respectively; (d–f) X-band at the same angles; (g–i) Ku-band at the same angles.
Figure 13: Decomposition energy error statistics based on different polarization decomposition algorithms: (a–d) energy error distribution under different incidence angles for scenes S1–S4 using the Freeman–Durden model decomposition; (e–h) the same for the Freeman–Durden model combined with orientation correction; (i–l) the same for the modified Freeman–Durden model combined with orientation correction.
Figure 14: The scattering-characteristic energy proportion of each scene obtained by the Freeman–Durden model: (a–c) C-band at incidence angles of 35°, 45°, and 55°, respectively; (d–f) X-band at the same angles; (g–i) Ku-band at the same angles.
Figure 15: The scattering-characteristic energy proportion of each scene obtained by the modified Freeman–Durden model combined with orientation correction: (a–c) C-band at incidence angles of 35°, 45°, and 55°, respectively; (d–f) X-band at the same angles; (g–i) Ku-band at the same angles.
18 pages, 2792 KiB  
Article
Research on Optimization of Target Positioning Error Based on Unmanned Aerial Vehicle Platform
by Yinglei Li, Qingping Hu, Shiyan Sun, Yuxiang Zhou and Wenjian Ying
Appl. Sci. 2024, 14(24), 11935; https://doi.org/10.3390/app142411935 - 20 Dec 2024
Abstract
Achieving precise target localization for UAVs is a complex and frequently discussed problem. To achieve precise spatial localization of targets by UAVs, and to address the premature convergence and tendency to fall into local optima of the original dung beetle algorithm, an error handling method is proposed based on the coordinate transformation of an airborne measurement system and dung beetle optimization with crisscross and 3-sigma-rule optimization (CCDBO). First, the total standard deviation is calculated by integrating the carrier position, attitude angles, pod azimuth and pitch angles, and the given alignment error of the pod's orientation. Subsequently, the Taylor series expansion method is adopted to linearize the approximate coordinate transformation process and simplify the error propagation model. Finally, to further improve positioning accuracy, a target position correction strategy based on the improved dung beetle optimization algorithm is introduced. Simulation and flight experiment results show that this method can significantly reduce the target positioning error of UAVs, improving positioning accuracy by 20.42% on average compared with the original dung beetle algorithm, and it provides strong support for high-precision target observation and identification by UAVs in complex environments.
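The Taylor-series linearization step described in this abstract amounts to combining the input standard deviations through the partial derivatives of the coordinate transformation. A sketch with numerically estimated partials; the `target_xy` geometry below is a deliberately simplified, hypothetical line-of-sight model, not the paper's full transformation chain:

```python
import numpy as np

def propagate_sigma(f, x0, sigmas, eps=1e-6):
    """First-order (Taylor) error propagation through a nonlinear mapping f.

    Each input's partial derivative is estimated by central differences and
    combined as sigma_out^2 = sum_i (df/dx_i)^2 * sigma_i^2, assuming
    independent input errors.
    """
    x0 = np.asarray(x0, dtype=float)
    var = np.zeros_like(np.atleast_1d(f(x0)), dtype=float)
    for i, s in enumerate(sigmas):
        dx = np.zeros_like(x0)
        dx[i] = eps
        grad = (np.atleast_1d(f(x0 + dx)) - np.atleast_1d(f(x0 - dx))) / (2 * eps)
        var += (grad * s) ** 2
    return np.sqrt(var)

# Toy line-of-sight ground intersection: UAV at height h looking down with
# pod azimuth az and pitch angle, target on flat ground (hypothetical).
def target_xy(params):
    h, az, pitch = params
    ground_range = h / np.tan(pitch)
    return np.array([ground_range * np.sin(az), ground_range * np.cos(az)])

sigma_xy = propagate_sigma(target_xy, [100.0, 0.5, 0.8],
                           sigmas=[0.5, np.radians(0.2), np.radians(0.2)])
print(sigma_xy)  # per-axis target position standard deviation [m]
```

The same routine applies unchanged to the full chain of carrier position, attitude, and pod-angle inputs: only `f` and the sigma list change.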
Figure 1: Space rectangular coordinate system.
Figure 2: Geographic coordinate system.
Figure 3: Airframe coordinate system.
Figure 4: CCDBO flowchart.
Figure 5: Average simulation results.
Figure 6: Scatterplot distribution of simulated latitude and longitude of target points: (a) unprocessed data; (b) data processed according to the equal influence principle; (c) data processed by the original dung beetle algorithm; (d) data processed by the improved dung beetle algorithm.
Figure 7: Distribution of target point simulated latitude and longitude histograms: (a) unprocessed data; (b) data processed according to the equal influence principle; (c) data processed by the original dung beetle algorithm; (d) data processed by the improved dung beetle algorithm.
Figure 8: Plot of distance between simulation and true value points.
Figure 9: Center and target points.
Figure 10: Measurement data for target point 1.
Figure 11: Target point localization error.
20 pages, 419 KiB  
Article
Current Density Induced by a Cosmic String in de Sitter Spacetime in the Presence of Two Flat Boundaries
by Wagner Oliveira dos Santos, Herondy F. Santana Mota and Eugênio R. Bezerra de Mello
Universe 2024, 10(11), 428; https://doi.org/10.3390/universe10110428 - 17 Nov 2024
Abstract
In this paper, we investigate the vacuum bosonic current density induced by a magnetic-flux-carrying cosmic string in (D+1)-dimensional de Sitter spacetime, considering the presence of two flat boundaries perpendicular to the string. In this setup, Robin boundary conditions are imposed on the charged quantum scalar field at the boundaries; the particular cases of Dirichlet and Neumann boundary conditions are studied separately. Due to the coupling of the quantum scalar field with the classical gauge field, corresponding to a magnetic flux running along the string's core, a nonzero vacuum expectation value of the current density operator along the azimuthal direction is induced. The two boundaries divide the space into three regions with different vacuum-state properties, so our main objective is to calculate the induced currents in these three regions. To develop this analysis, we calculate the positive-frequency Wightman functions for these regions. Because the vacuum bosonic current in dS space has been investigated before, in this paper we consider only the contributions induced by the boundaries. We show that for each region the azimuthal current densities are odd functions of the magnetic flux along the string. To probe the correctness of our results, we consider particular cases and analyze some asymptotic limits of the model parameters. Graphs are also presented exhibiting the behavior of the current with the relevant physical parameters of the system.
(This article belongs to the Section Field Theory)
Figure 1: The VEV of the azimuthal current density induced by a single plate located at z = a_j, plotted as a function of the proper distance from the string, r_p (top panel), and of the proper distance from the plate, z_p (bottom panel), in units of a. Both plots consider Dirichlet and Neumann boundary conditions and various values of q, and are for D = 3, α_0 = 0.25, ξ = 0, ma = 1.5, and a_j = 0. In the top panel z_p = 0 is fixed; in the bottom one, r_p = 1.
Figure 2: The VEV of the azimuthal current density induced between the plates, plotted as a function of the proper distance from the string, r_p. The top panel takes z_p = 0.2 and the bottom panel z_p = 0.5. Both plots consider Dirichlet and Neumann boundary conditions and different values of q, and are for D = 3, α_0 = 0.25, ξ = 0, ma = 1.5, with the plates at a_1 = 0 and a_2 = 1.
Figure 3: The VEV of the azimuthal current density induced between the plates, plotted as a function of z_p. The top panel assumes r_p = 0.1 and the bottom r_p = 0.5. Both plots assume D = 3, α_0 = 0.25, ξ = 0, ma = 1.5.
18 pages, 5821 KiB  
Article
Simulating Vertical Profiles of Optical Turbulence at the Special Astrophysical Observatory Site
by Artem Y. Shikhovtsev, Sergey A. Potanin, Evgeniy A. Kopylov, Xuan Qian, Lidia A. Bolbasova, Asya V. Panchuk and Pavel G. Kovadlo
Atmosphere 2024, 15(11), 1346; https://doi.org/10.3390/atmos15111346 - 9 Nov 2024
Viewed by 636
Abstract
In this paper, we used meteorological data to simulate vertical profiles of optical turbulence at the Special Astrophysical Observatory (SAO) (Russia, 43°40′19″ N 41°26′23″ E, 2100 m a.s.l.), site of the 6 m Big Telescope Alt-azimuthal. For the first time, the vertical profiles [...] Read more.
In this paper, we used meteorological data to simulate vertical profiles of optical turbulence at the Special Astrophysical Observatory (SAO) (Russia, 43°40′19″ N 41°26′23″ E, 2100 m a.s.l.), site of the 6 m Big Telescope Alt-azimuthal. For the first time, the vertical profiles of optical turbulence are calculated for the SAO using ERA-5 reanalysis data. These profiles are corrected using DIMM measurements as well as estimations of atmospheric boundary layer heights. The method reconstructs the most important features of the shape of the measured profile under clear skies. Atmospheric turbulent layers were identified, and the strength of optical turbulence in these layers was estimated. The model hourly values of seeing corresponding to the obtained vertical profiles range from 0.40 to 3.40 arc sec; the values of the isoplanatic angle vary from 1.00 to 3.00 arc sec (at λ = 500 nm). The calculated median of seeing, 1.21 arc sec, is close to the measured median (1.21 arc sec). Full article
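The seeing and isoplanatic-angle values quoted above follow from height integrals of the refractive-index structure constant <i>C<sub>n</sub><sup>2</sup></i>. As a minimal sketch of that step, assuming the standard Fried-parameter relations and a purely illustrative <i>C<sub>n</sub><sup>2</sup></i> profile (not the paper's ERA-5-derived one; function names are ours):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences in trapz)."""
    return float(0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x)))

def seeing_arcsec(h, cn2, wavelength=500e-9):
    """Seeing from the Fried parameter: r0 = [0.423 k^2 * integral(Cn^2 dh)]^(-3/5),
    seeing = 0.98 * wavelength / r0, converted to arcseconds."""
    k = 2.0 * np.pi / wavelength
    r0 = (0.423 * k**2 * _trapz(cn2, h)) ** (-3.0 / 5.0)
    return np.degrees(0.98 * wavelength / r0) * 3600.0

def isoplanatic_angle_arcsec(h, cn2, wavelength=500e-9):
    """theta0 = [2.914 k^2 * integral(Cn^2 h^(5/3) dh)]^(-3/5), in arcseconds."""
    k = 2.0 * np.pi / wavelength
    theta0 = (2.914 * k**2 * _trapz(cn2 * h ** (5.0 / 3.0), h)) ** (-3.0 / 5.0)
    return np.degrees(theta0) * 3600.0

# Illustrative profile: turbulence decaying exponentially with height above the site
h = np.linspace(10.0, 20000.0, 1000)   # height above the site, m
cn2 = 1e-16 * np.exp(-h / 1500.0)      # Cn^2 in m^(-2/3), made up for the demo
print(seeing_arcsec(h, cn2), isoplanatic_angle_arcsec(h, cn2))
```

Note the different weightings: seeing grows with total integrated turbulence as (∫C<sub>n</sub>²)<sup>3/5</sup>, while the isoplanatic angle is dominated by high-altitude layers through the h<sup>5/3</sup> factor.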
Figure 1
<p>Geographical position of SAO. The figure also shows the sites where measurements have been carried out or are planned. The red box is the region of interest.</p>
Full article ">Figure 2
<p>Differential image motion monitor (DIMM).</p>
Full article ">Figure 3
<p>Nighttime changes of seeing at the SAO during (<b>a</b>) 27–28 August 2024; (<b>b</b>) 28–29 August 2024. For brevity, the parameter seeing is designated as <math display="inline"><semantics> <mi>β</mi> </semantics></math> (Local time). Black lines correspond to variations in total <math display="inline"><semantics> <mi>β</mi> </semantics></math>. The changes of <math display="inline"><semantics> <mi>β</mi> </semantics></math> within the free atmosphere (above 500 m) are shown by red lines.</p>
Full article ">Figure 4
<p>Vertical profiles of the bulk Richardson number at the Divnoe station site. The grey narrow vertical bars show the ranges of changes in the critical Richardson number. The horizontal lines are estimations of atmospheric boundary layer heights. (<b>a</b>) Typical vertical profiles of the Richardson number during daytime (blue line) and nighttime (orange line); <math display="inline"><semantics> <msub> <mi>u</mi> <mo>*</mo> </msub> </semantics></math> = 0 m/s. (<b>b</b>) Vertical profiles of the Richardson number during nighttime. Typical vertical profiles of <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>i</mi> </mrow> </semantics></math> are shown by the green and red lines (<math display="inline"><semantics> <msub> <mi>u</mi> <mo>*</mo> </msub> </semantics></math> = 0 m/s). The profile for a weakly stable surface layer is shown by the blue line. The orange line corresponds to the profile corrected using the surface value of the friction velocity (<math display="inline"><semantics> <msub> <mi>u</mi> <mo>*</mo> </msub> </semantics></math> = 0.1 m/s).</p>
Full article ">Figure 5
<p>Nighttime changes in atmospheric boundary layer height at the Divnoe station site (00 UTC). Blue lines correspond to HBL values estimated from ERA-5 reanalysis data. Heights of BL determined by the threshold value of <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>i</mi> </mrow> </semantics></math> (radiosondes) are shown by orange lines.</p>
Full article ">Figure 6
<p>Nighttime vertical profiles of optical turbulence at the SAO, 27 August 2024. (<b>a</b>) shows the vertical profiles of optical turbulence in terms of <math display="inline"><semantics> <msubsup> <mi>C</mi> <mi>n</mi> <mn>2</mn> </msubsup> </semantics></math>, (<b>b</b>) corresponds to the profiles of dimensionless intensity of optical turbulence.</p>
Full article ">Figure 7
<p>Nighttime vertical profiles of optical turbulence at the SAO, 28–29 August 2024. (<b>a</b>) shows the vertical profiles of optical turbulence in terms of <math display="inline"><semantics> <msubsup> <mi>C</mi> <mi>n</mi> <mn>2</mn> </msubsup> </semantics></math>, (<b>b</b>) corresponds to the profiles of dimensionless intensity of optical turbulence.</p>
Full article ">Figure 8
<p>Vertical profile of optical turbulence at the SAO site, 1 June–15 September 2024. The shading corresponds to the interval between the first and the third quartiles.</p>
Full article ">Figure 9
<p>Histogram of seeing at the SAO, calculated at the wavelength 500 nm, January–December 2023.</p>
Full article ">Figure 10
<p>Histogram of isoplanatic angle at the SAO, calculated at the wavelength 500 nm, January–December 2023.</p>
Full article ">Figure A1
<p>Distributions of <math display="inline"><semantics> <msub> <mi>u</mi> <mo>*</mo> </msub> </semantics></math> in the lower atmospheric layer for 2012–2024 within the SAO region: (<b>a</b>) December–February and (<b>b</b>) June–August.</p>
Full article ">Figure A2
<p>Typical vertical distribution of optical turbulence for medium and high values of seeing obtained from the image scintillation, Terskol Peak Observatory.</p>
Full article ">
15 pages, 3644 KiB  
Article
A Calculation Study on the Escape of Incident Solar Radiation in Buildings with Glazing Facades
by Shunyao Lu, Zhengzhi Wang and Tao Chen
Buildings 2024, 14(11), 3497; https://doi.org/10.3390/buildings14113497 - 31 Oct 2024
Viewed by 523
Abstract
More and more modern buildings are using glass curtain walls as their building envelope. The large area of window leads to a significant increase in solar heat gain, resulting in an increase in the cooling load and energy consumption of the building envelope. [...] Read more.
More and more modern buildings use glass curtain walls as their building envelope. The large window area leads to a significant increase in solar heat gain, and hence in the cooling load and energy consumption of the building envelope. In building cooling load calculations, the solar heat gain coefficient, a thermal performance parameter of windows, is used to calculate the solar radiation heat gain of the windows. Because the window-to-wall ratio of buildings with glazing facades is large, the escape of incident solar radiation cannot be ignored. To calculate the solar radiation escape rate, a dynamic model of the solar radiation escape rate incorporating a solar path tracking model is developed in this research, which enables big-data simulation analysis based on actual meteorological conditions. The model is programmed and simulated in MATLAB R2024a. Five representative cities from different climate regions in China are selected, and the variation of the solar radiation escape rate is analyzed on three time scales: day, month, and year. The influence of building orientation is also calculated and analyzed. The numerical results indicate that the solar radiation escape rate varies with the incident angle of solar radiation over time; the smaller the solar azimuth angle and solar altitude angle, the smaller the escape rate. The latitude of a city has a significant impact on the solar radiation escape rate. The weighted average of the solar radiation escape rate for each city was calculated for both summer and winter. Regardless of the season, the city’s location, and the orientation of the room, the solar radiation escape rate varies from 8.64% to 10.33%, which indicates that the solar radiation escape phenomenon cannot be ignored in glass curtain wall buildings.
The results can be used as reference values of the solar radiation escape rate for correcting the actual solar heat gain of buildings in different climate regions. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
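The dependence on solar altitude and azimuth described above rests on a solar position calculation. A minimal sketch of that sub-step, assuming the textbook spherical-astronomy relations with azimuth measured from south (the declination, hour angle, and function names are ours, not the paper's model):

```python
import math

def solar_position(lat_deg, decl_deg, hour_angle_deg):
    """Solar altitude and azimuth (degrees) from site latitude, solar
    declination, and hour angle (0 at solar noon, positive after noon)."""
    lat, decl, ha = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(ha))
    alt = math.asin(sin_alt)
    # Azimuth from south, positive toward west -- a common convention in
    # building solar-gain calculations.
    cos_az = (sin_alt * math.sin(lat) - math.sin(decl)) / (math.cos(alt) * math.cos(lat))
    az = math.copysign(math.acos(max(-1.0, min(1.0, cos_az))), ha)
    return math.degrees(alt), math.degrees(az)

# Example: ~31.2° N at solar noon on the summer solstice (declination ≈ +23.45°)
print(solar_position(31.2, 23.45, 0.0))
```

At solar noon the azimuth is zero (sun due south) and the altitude reduces to 90° − (latitude − declination), which is a quick sanity check for any implementation of this step.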
Figure 1
<p>Schematic diagram of the escape of incident solar radiation: (1) escape after first reflection; (2) escape after second reflection; (3) escape after third reflection. Notes: The red line is incident solar radiation, the orange line is the reflected solar radiation, and the yellow line is the escape solar radiation. The arrows represent the direction of radiation.</p>
Full article ">Figure 2
<p>Schematic diagram of the irradiation spot formed by direct radiation on indoor walls.</p>
Full article ">Figure 3
<p>Schematic diagram of radiosity of indoor micro surfaces.</p>
Full article ">Figure 4
<p>The incident and escape amounts of solar radiation on 15 July.</p>
Full article ">Figure 5
<p>Solar radiation escape rate Y at various times on the summer solstice.</p>
Full article ">Figure 6
<p>The incident and escape amounts of solar radiation at 12:00 on the 15th of different months.</p>
Full article ">Figure 7
<p>Solar radiation escape rate Y at 12:00 on the 15th of different months.</p>
Full article ">Figure 8
<p>Solar radiation escape rate Y (%) in rooms with different orientations while irradiated by direct solar radiation through the year. (<b>a</b>) The room facing south; (<b>b</b>) The room facing east; (<b>c</b>) The room facing west. Notes: The four vertical blue lines represent the vernal equinox, summer solstice, autumnal equinox, and winter solstice.</p>
Full article ">
33 pages, 14046 KiB  
Article
High-Resolution Collaborative Forward-Looking Imaging Using Distributed MIMO Arrays
by Shipei Shen, Xiaoli Niu, Jundong Guo, Zhaohui Zhang and Song Han
Remote Sens. 2024, 16(21), 3991; https://doi.org/10.3390/rs16213991 - 27 Oct 2024
Viewed by 1184
Abstract
Airborne radar forward-looking imaging holds significant promise for applications such as autonomous navigation, battlefield reconnaissance, and terrain mapping. However, traditional methods are hindered by complex system design, azimuth ambiguity, and low resolution. This paper introduces a distributed array collaborative, forward-looking imaging approach, where [...] Read more.
Airborne radar forward-looking imaging holds significant promise for applications such as autonomous navigation, battlefield reconnaissance, and terrain mapping. However, traditional methods are hindered by complex system design, azimuth ambiguity, and low resolution. This paper introduces a distributed-array collaborative forward-looking imaging approach, in which multiple aircraft carrying linear arrays fly in parallel to achieve coherent imaging. We analyze the characteristics of the signal model and highlight the limitations of conventional algorithms. To address these issues, we propose a high-resolution imaging algorithm that combines an enhanced missing-data iterative adaptive approach with an aperture interpolation technique (MIAA-AIT) for effective signal recovery in distributed arrays. Additionally, a novel reference range cell migration correction (reference RCMC) is employed for precise range–azimuth decoupling. The forward-looking algorithm effectively transforms distributed arrays into a virtual long-aperture array, enabling high-resolution, high signal-to-noise ratio imaging with a single snapshot. Simulations and real data tests demonstrate that our method not only improves resolution but also offers flexible array configurations and robust performance in practical applications. Full article
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
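The virtual long-aperture idea above can be illustrated numerically: the azimuth mainlobe of a uniform linear aperture narrows in proportion to its length, which is the resolution gain that coherently combining distributed subarrays buys. A toy far-field comparison (the millimeter-wave wavelength and aperture sizes are assumptions for the demo, not the paper's system parameters):

```python
import numpy as np

def mainlobe_width_deg(aperture_m, wavelength_m=3.9e-3, n=2048):
    """Half-power beamwidth (deg) of the array factor of a densely sampled
    uniform linear aperture, evaluated over a +/-5 degree azimuth window."""
    theta = np.linspace(-np.pi / 36, np.pi / 36, n)        # scan angles, rad
    x = np.linspace(-aperture_m / 2, aperture_m / 2, 256)  # element positions, m
    phase = 2.0 * np.pi / wavelength_m * np.outer(np.sin(theta), x)
    af = np.abs(np.exp(1j * phase).sum(axis=1))
    af /= af.max()
    half = theta[af >= np.sqrt(0.5)]   # half-power (amplitude 1/sqrt(2)) region
    return np.degrees(half[-1] - half[0])

short, long_ = mainlobe_width_deg(0.5), mainlobe_width_deg(2.0)
print(short, long_)   # the 4x longer aperture gives a ~4x narrower mainlobe
```

This is why filling the inter-array gaps matters: only with the gapped samples coherently recovered does the distributed system behave like the single long aperture measured here.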
Figure 1
<p>Geometric configuration of the system.</p>
Full article ">Figure 2
<p>Analysis of the single-array configuration. (<b>a</b>) Demonstration of equivalent antenna transformation. (<b>b</b>) Configuration of actual array and equivalent virtual array.</p>
Full article ">Figure 3
<p>Analysis of the mismatch between traditional algorithms and distributed imaging models. (<b>a</b>) Azimuth time-domain envelope of echo sampling in distributed arrays. (<b>b</b>) Azimuth spectrum of echo sampling in distributed arrays. (<b>c</b>) Azimuth focusing results using single-array and distributed multi-array configurations.</p>
Full article ">Figure 4
<p>Analysis of the system’s range cell migration. (<b>a</b>) Single-array RCM. (<b>b</b>) Inter-array RCM.</p>
Full article ">Figure 5
<p>Comparison between the proposed RCMC and traditional RCMC.</p>
Full article ">Figure 6
<p>Coherent processing of azimuth gapped signals.</p>
Full article ">Figure 7
<p>Overall workflow of the distributed array collaborative forward-looking imaging.</p>
Full article ">Figure 8
<p>Original reference image.</p>
Full article ">Figure 9
<p>Comparative analysis of RCMC algorithms. (<b>a</b>) Target signals in the range-Doppler domain before RCMC. (<b>b</b>) Target signals in the time domain before RCMC. (<b>c</b>) Target signals in the range-Doppler domain after traditional RCMC. (<b>d</b>) Target signals in the time domain after traditional RCMC. (<b>e</b>) Target signals in the range-Doppler domain after proposed RCMC. (<b>f</b>) Target signals in the time domain after proposed RCMC. (<b>g</b>) Imaging results using traditional RCMC. (<b>h</b>) Imaging results using proposed RCMC.</p>
Full article ">Figure 10
<p>Forward-looking imaging performance analysis of the proposed distributed array coherent processing algorithm. (<b>a</b>) Original reference image. (<b>b</b>) Target envelope formed by ECS using single array. (<b>c</b>) Target envelope formed by ECS-based full aperture algorithm. (<b>d</b>) Distributed array signals with an inter-array spacing of 10 m and an SNR of 25 dB. (<b>e</b>) Azimuth virtual long-aperture signal formed by the proposed algorithm under the corresponding conditions. (<b>f</b>) Target envelope formed by the proposed algorithm under the corresponding conditions. (<b>g</b>) Distributed array signals with an inter-array spacing of 20 m and an SNR of 25 dB. (<b>h</b>) Azimuth virtual long-aperture signal formed by the proposed algorithm under the corresponding conditions. (<b>i</b>) Target envelope formed by the proposed algorithm under the corresponding conditions. (<b>j</b>) Distributed array signals with an inter-array spacing of 10 m and an SNR of 10 dB. (<b>k</b>) Azimuth virtual long-aperture signal formed by the proposed algorithm under the corresponding conditions. (<b>l</b>) Target envelope formed by the proposed algorithm under the corresponding conditions.</p>
Full article ">Figure 11
<p>Comparative analysis of RCMC algorithms. (<b>a</b>) The target envelope based on LPM-AIT with 10 m array spacing. (<b>b</b>) The target envelope based on GAPES with 10 m array spacing. (<b>c</b>) The target envelope based on OMP with 10 m array spacing. (<b>d</b>) The target envelope based on ISTA with 10 m array spacing. (<b>e</b>) Target envelope from ECS algorithm with a 20 m real aperture. (<b>f</b>) The target envelope based on improved MIAA-AIT with 20 m array spacing. (<b>g</b>) The target envelope based on LPM-AIT with 20 m array spacing. (<b>h</b>) The target envelope based on GAPES with 20 m array spacing. (<b>i</b>) The target envelope based on OMP with 20 m array spacing. (<b>j</b>) The target envelope based on ISTA with 20 m array spacing. (<b>k</b>) Target envelope from ECS algorithm with a 40 m real aperture. (<b>l</b>) The target envelope based on improved MIAA-AIT with 20 m array spacing.</p>
Full article ">Figure 12
<p>Comparison of gapped signal recovery capabilities between different algorithms.</p>
Full article ">Figure 13
<p>Simulation results of surface targets using various algorithms. (<b>a</b>) Original image of surface target. (<b>b</b>) Imaging results of surface targets using 20 m aperture radar based on ECS algorithm. (<b>c</b>) Imaging results of surface targets using single-array radar based on ECS algorithm. (<b>d</b>) Imaging results of surface targets using distributed array based on OMP algorithm. (<b>e</b>) Imaging results of surface targets using distributed array based on LPM-AIT algorithm. (<b>f</b>) Imaging results of surface targets using distributed array based on MIAA-AIT algorithm.</p>
Full article ">Figure 14
<p>Comparison of algorithms with measured data. (<b>a</b>) Overall experimental setup photo 1. (<b>b</b>) Overall experimental setup photo 2. (<b>c</b>) Imaging results with 0.5 m synthetic array. (<b>d</b>) Target azimuth envelope imaging results with 0.5 m synthetic array. (<b>e</b>) Imaging results with single cascade radar. (<b>f</b>) Target azimuth envelope imaging results with single cascade radar. (<b>g</b>) Imaging results using distributed array based on OMP algorithm. (<b>h</b>) Target azimuth envelope imaging results using distributed array based on OMP algorithm. (<b>i</b>) Imaging results using distributed array based on ISTA algorithm. (<b>j</b>) Target azimuth envelope imaging results using distributed array based on ISTA algorithm. (<b>k</b>) Imaging results using distributed array based on LPM-AIT algorithm. (<b>l</b>) Target azimuth envelope imaging results using distributed array based on LPM-AIT algorithm. (<b>m</b>) Imaging results using distributed array based on GAPES algorithm. (<b>n</b>) Target azimuth envelope imaging results using distributed array based on GAPES algorithm. (<b>o</b>) Imaging results using distributed array based on MIAA-AIT algorithm. (<b>p</b>) Target azimuth envelope imaging results using distributed array based on MIAA-AIT algorithm.</p>
Full article ">
26 pages, 12380 KiB  
Article
Winch Traction Dynamics for a Carrier-Based Aircraft Under Trajectory Control on a Small Deck in Complex Sea Conditions
by Guofang Nan, Sirui Yang, Yao Li and Yihui Zhou
Aerospace 2024, 11(11), 885; https://doi.org/10.3390/aerospace11110885 - 27 Oct 2024
Viewed by 742
Abstract
When the winch traction system of a carrier-based aircraft works under complex sea conditions, the rope and the tire forces are greatly changed compared with under simple sea conditions, and it poses a potential threat to the safety and stability of the aircraft’s [...] Read more.
When the winch traction system of a carrier-based aircraft operates under complex sea conditions, the rope and tire forces change greatly compared with simple sea conditions, which poses a potential threat to the safety and stability of the aircraft’s traction system. Accurately calculating the rope and tire forces of a carrier-based aircraft’s winch traction under complex sea conditions is a difficult problem. A novel method for dynamic analysis of the whole aircraft-winch-ship system under complex sea conditions is proposed. A multi-frequency excitation is adopted to describe the complex sea conditions, and the influences of the pitching amplitude and the rolling frequency on the traction dynamics of a carrier-based aircraft along a set trajectory are studied. The advantages and disadvantages of a winch traction system with and without trajectory control in complex sea conditions are analyzed. To realize trajectory control of the aircraft, the vector difference between the aircraft’s center of mass and the corresponding position on the predetermined Bessel curve is calculated to obtain the azimuth vector in the aircraft coordinate system. The innovation of this research lies in the modeling of the whole system and in the trajectory control of a carrier-based aircraft’s winch traction system under the complicated sea conditions of multi-frequency excitation. ADAMS (Automatic Dynamic Analysis of Mechanical Systems) is used to verify the correctness of the theoretical calculation of the winch traction.
The results show that the complex sea environment influences the winch traction safety of the aircraft: in the 10–15 s interval of the traction, the rope force amplitude under the multi-frequency excitation of complex sea conditions is 29.5% larger than under single-frequency excitation, while the vertical tire force amplitude is 201.1% larger. This research provides important guidance for the selection of rope and tire models for carrier-based aircraft winch traction in complex sea conditions. Full article
(This article belongs to the Special Issue Advances in Thermal Fluid, Dynamics and Control)
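The trajectory-control step described above, taking the vector difference between the aircraft’s center of mass and a point on the predetermined curve (a cubic Bézier segment in this sketch), can be outlined as follows; the control points, positions, and function names are hypothetical:

```python
import math

def bezier3(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def azimuth_vector(com, target):
    """Unit vector from the aircraft's center of mass toward the curve point:
    the heading command for the traction controller in this sketch."""
    dx, dy = target[0] - com[0], target[1] - com[1]
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm)

# Hypothetical deck path: control points in deck coordinates (m)
ctrl = [(0.0, 0.0), (10.0, 0.0), (15.0, 5.0), (20.0, 10.0)]
target = bezier3(*ctrl, t=0.5)
print(target, azimuth_vector((2.0, 1.0), target))
```

In a full implementation the parameter t would be advanced as the aircraft progresses, and the resulting azimuth vector fed to a speed/heading controller such as the PID loop shown in Figure 7.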
Figure 1
<p>Virtual prototyping model of the tractor–aircraft system [<a href="#B20-aerospace-11-00885" class="html-bibr">20</a>].</p>
Full article ">Figure 2
<p>Schematic diagram of aircraft winch traction.</p>
Full article ">Figure 3
<p>Schematic diagram of the whole system of carrier-based aircraft traction.</p>
Full article ">Figure 4
<p>Cardan angles (the conversion relationships between the different coordinate systems).</p>
Full article ">Figure 5
<p>The mathematical model of a landing gear-tire system in the <span class="html-italic">z</span>-direction.</p>
Full article ">Figure 6
<p>Schematic diagram of wind load.</p>
Full article ">Figure 7
<p>Graph of PID control for aircraft speed.</p>
Full article ">Figure 8
<p>The generated trajectory diagram (Bessel curve).</p>
Full article ">Figure 9
<p>Traveling trajectory diagram of the aircraft with the control (curve) (<b>a</b>) Planar view from <span class="html-italic">z</span> to −<span class="html-italic">z</span>; (<b>b</b>) Three-dimensional view.</p>
Full article ">Figure 10
<p>Traveling trajectory diagram of the aircraft without the control (straight line). (<b>a</b>) Planar view from <span class="html-italic">z</span> to <span class="html-italic">−z</span>; (<b>b</b>) Three-dimensional view.</p>
Full article ">Figure 11
<p>Curves of rope force changing with time under different pitching amplitudes (<span class="html-italic">θ</span><sub>1</sub> = 5°, 2°, 0.8° and 0.1°, <span class="html-italic">φ</span><sub>1</sub> = 5°).</p>
Full article ">Figure 12
<p>Curve of the vertical force of each tire over time (<span class="html-italic">φ</span><sub>1</sub> = 5°, <span class="html-italic">θ</span><sub>1</sub> = 2°).</p>
Full article ">Figure 13
<p>Curve of tire force over time in each direction for tire three (<span class="html-italic">φ</span><sub>1</sub> = 5°; <span class="html-italic">θ</span><sub>1</sub> = 2°).</p>
Full article ">Figure 14
<p>Curve of the vertical force of tire three over time at different pitching amplitudes (<span class="html-italic">θ</span><sub>1</sub> = 5°, 2°, 0.8° and 0.1°; <span class="html-italic">φ</span><sub>1</sub> = 5°).</p>
Full article ">Figure 15
<p>Curve of the force of the front rope over time at different rolling angle frequencies (<span class="html-italic">ω</span><sub>1</sub> = 2π/T<sub>φ1</sub> = 0.93 rad/s, 0.63 rad/s, 0.23 rad/s and 0.1 rad/s; <span class="html-italic">φ</span><sub>1</sub> = 5°, <span class="html-italic">θ</span><sub>1</sub> = 2°).</p>
Full article ">Figure 16
<p>Curve of the vertical force of tire three changing with time at different rolling angle frequencies (<span class="html-italic">ω</span><sub>1</sub> = 2π/T<sub>φ1</sub> = 0.93 rad/s, 0.63 rad/s, 0.23 rad/s and 0.1 rad/s; <span class="html-italic">φ</span><sub>1</sub> = 5°; <span class="html-italic">θ</span><sub>1</sub> = 2°).</p>
Full article ">Figure 17
<p>Curve of the force of the front rope over time at different pitching amplitudes (<span class="html-italic">θ</span><sub>1</sub> = 2°, 0.8°, 0.4° and 0.1°; <span class="html-italic">θ</span><sub>2</sub> = 1°).</p>
Full article ">Figure 18
<p>Curve of the vertical force of tire three over time at different pitching amplitudes (<span class="html-italic">θ</span><sub>1</sub> = 2°, 0.8°, 0.4° and 0.1°; <span class="html-italic">θ</span><sub>2</sub> = 1°).</p>
Full article ">Figure 19
<p>Curve of the force of the front rope over time at different rolling angle frequencies (<span class="html-italic">ω</span><sub>2</sub> = 2π/T<sub>φ2</sub> = 2 rad/s, 1 rad/s, 0.63 rad/s and 0.23 rad/s).</p>
Full article ">Figure 20
<p>Curve of the vertical force of tire three changing with time at different rolling angle frequencies (<span class="html-italic">ω</span><sub>2</sub> = 2π/T<sub>φ2</sub> = 2 rad/s, 1 rad/s, 0.63 rad/s and 0.23 rad/s).</p>
Full article ">Figure 21
<p>Curve of the force of the front rope over time at different heaving amplitudes (<span class="html-italic">z</span><sub>1</sub> = 0.19 m, 0.1 m, 0.05 m, and 0.019 m).</p>
Full article ">Figure 22
<p>Curve of the vertical force of tire three over time at different heaving amplitudes (<span class="html-italic">z</span><sub>1</sub> = 0.19 m, 0.1 m, 0.05 m, and 0.019 m).</p>
Full article ">Figure 23
<p>Curve of the front rope force over time under single-frequency excitation and multi-frequency excitation with trajectory control.</p>
Full article ">Figure 24
<p>Curve of the vertical force of tire three over time under single-frequency excitation and multi-frequency excitation with trajectory control.</p>
Full article ">Figure 25
<p>Curve of the front rope force over time with trajectory control and without trajectory control under multi-frequency excitation.</p>
Full article ">Figure 26
<p>Curve of the vertical force of tire three over time with trajectory control and without trajectory control under multi-frequency excitation.</p>
Full article ">Figure 27
<p>Simulation of five-winch traction of aircraft.</p>
Full article ">Figure 28
<p>Local amplification view of the landing gear in the five-winch traction of the aircraft.</p>
Full article ">Figure 29
<p>Front landing gear model.</p>
Full article ">Figure 30
<p>Rear landing gear model.</p>
Full article ">Figure 31
<p>Comparison of the front rope forces obtained by ADAMS and MATLAB.</p>
Full article ">Figure 32
<p>Comparison of vertical forces for tire three obtained by ADAMS and MATLAB.</p>
Full article ">
22 pages, 12882 KiB  
Article
Automated Cloud Shadow Detection from Satellite Orthoimages with Uncorrected Cloud Relief Displacements
by Hyeonggyu Kim, Wansang Yoon and Taejung Kim
Remote Sens. 2024, 16(21), 3950; https://doi.org/10.3390/rs16213950 - 23 Oct 2024
Viewed by 817
Abstract
Clouds and their shadows significantly affect satellite imagery, resulting in a loss of radiometric information in the shadowed areas. This loss reduces the accuracy of land cover classification and object detection. Among various cloud shadow detection methods, the geometric-based method relies on the [...] Read more.
Clouds and their shadows significantly affect satellite imagery, resulting in a loss of radiometric information in the shadowed areas. This loss reduces the accuracy of land cover classification and object detection. Among various cloud shadow detection methods, the geometric-based method relies on the geometry of the sun and sensor to provide consistent results across diverse environments, ensuring better interpretability and reliability. It is well known that the direction of shadows in raw satellite images depends on the sun’s illumination and sensor viewing direction. Orthoimages are typically corrected for relief displacements caused by oblique sensor viewing, aligning the shadow direction with the sun. However, previous studies lacked an explicit experimental verification of this alignment, particularly for cloud shadows. We observed that this implication may not be realized for cloud shadows, primarily due to the unknown height of clouds. To verify this, we used RapidEye orthoimages acquired at various viewing azimuth and zenith angles and conducted experiments under two different cases: the first where the cloud shadow direction was estimated based only on the sun’s illumination, and the second where both the sun’s illumination and the sensor’s viewing direction were considered. Building on this, we propose an automated approach for cloud shadow detection. Our experiments demonstrated that the second case, which incorporates the sensor’s geometry, yields a cloud shadow direction closer to the true angle. Although the angles in nadir images were similar, the second case in high-oblique images showed a difference of less than 4.0° from the true angle, whereas the first case exhibited a much larger difference, up to 21.3°. The accuracy results revealed that shadow detection using the angle from the second case improved the average F1 score by 0.17 and increased the average detection rate by 7.7% compared to the first case.
This result confirms that, even if the relief displacement of clouds is not corrected in the orthoimages, the proposed method allows for more accurate cloud shadow detection. Our main contributions are in providing quantitative evidence through experiments for the application of sensor geometry and establishing a solid foundation for handling complex scenarios. This approach has the potential to extend to the detection of shadows in high-resolution satellite imagery or UAV images, as well as objects like high-rise buildings. Future research will focus on this. Full article
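As a concrete illustration of the geometry described above, the displacement from a cloud’s apparent position in an orthoimage to its shadow can be sketched in a few lines. This is a minimal sketch under simplifying assumptions (flat terrain, a single cloud height `h`, azimuths measured clockwise from north); the function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def unit(az_deg):
    """East-north unit vector for an azimuth given clockwise from north."""
    a = np.deg2rad(az_deg)
    return np.array([np.sin(a), np.cos(a)])

def shadow_offset(h, sun_az, sun_zen, view_az, view_zen):
    """Offset (east, north) from a cloud's image position to its shadow.

    Combines the anti-sun shadow projection with the cloud's relief
    displacement away from the sensor (zero in the nadir case).
    All angles in degrees, h in metres.
    """
    to_shadow = h * np.tan(np.deg2rad(sun_zen)) * unit(sun_az + 180.0)
    relief = h * np.tan(np.deg2rad(view_zen)) * unit(view_az + 180.0)
    return to_shadow - relief

def shadow_azimuth(offset):
    """Azimuth (deg, clockwise from north) of an east-north offset vector."""
    return np.rad2deg(np.arctan2(offset[0], offset[1])) % 360.0
```

In the nadir case the offset reduces to the pure anti-sun displacement; in an oblique view the sensor term shortens or rotates it, which is the effect behind the direction errors reported when sensor geometry is ignored.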
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Figure 1: A workflow of the cloud shadow detection method.
Figure 2: Illustrations of the location where cloud shadows are projected: (a) a case where cloud shadows are projected based on the height of the clouds; (b) the position of clouds and cloud shadows depicted in the image.
Figure 3: An example of the positions of clouds and cloud shadows in satellite images: (a) before orthorectification; (b) after orthorectification.
Figure 4: Illustrations depicting the cloud relief displacement: (a) a case of a vertical image; (b) a case of a high-oblique image.
Figure 5: A calculation method for the direction vector from a cloud to its cloud shadow in a 3-dimensional coordinate system.
Figure 6: Explanation of the search range of cloud shadow based on cloud height.
Figure 7: Calculation method for cloud object movement in image coordinates using ground coordinates.
Figure 8: Examples explaining the redesign of the spectral threshold by Equation (12): (a) explanation of the NIR-RED ratio values for shadows projected on vegetation and water; (b) reason for changing the NIR threshold.
Figure 9: A decision tree diagram applied in Equations (9)–(12).
Figure 10: An example of noise removal: (a) before noise removal; (b) after noise removal.
Figure 11: Satellite image and reference data used in the experiment: (a) Scene-1 image; (b) Scene-2 image; (c) Scene-3 image (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 12: Verification data for checking the azimuth angle of cloud shadow from cloud collected in Scene-1: (C1–C10) represent verification data 1 to 10 (C for case) from Scene-1 (the orange arrows represent the direction vector T).
Figure 13: The result of comparing the yellow directional vector considering only the geometry of the sun and the green directional vector considering both the sensor and the sun geometry in Scene-1: (C6) a result of case 6 in Scene-1; (C7) a result of case 7 in Scene-1 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 14: Verification data for checking the azimuth angle of cloud shadow from cloud collected in Scene-2: (C1–C10) represent verification data 1 to 10 (C for case) from Scene-2 (the orange arrows represent the direction vector T).
Figure 15: The result of comparing the yellow directional vector considering only the geometry of the sun and the green directional vector considering both the sensor and the sun geometry in Scene-2: (C1) a result of case 1 in Scene-2; (C5) a result of case 5 in Scene-2 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 16: Verification data for checking the azimuth angle of cloud shadow from cloud collected in Scene-3: (C1–C10) represent verification data 1 to 10 (C for case) from Scene-3 (the orange arrows represent the direction vector T).
Figure 17: The result of comparing the yellow directional vector considering only the geometry of the sun and the green directional vector considering both the sensor and the sun geometry in Scene-3: (C5) a result of case 5 in Scene-3; (C6) a result of case 6 in Scene-3 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 18: Intermediate process of cloud shadow detection for searching PCSR in Scene-3: (a) the NIR-R-G composite image; (b–e) the process of shadow detection based on distance using vector C1; (f–i) the same using vector C2 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 19: Post-processing for cloud shadow detection: (a) the reference data; (b) cloud shadow detection before post-processing; (c) cloud shadow detection after post-processing (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 20: Cloud shadow detection results from Scene-1 images (white pixels and black pixels denote clouds and cloud shadows, respectively): (a–c) represent, in sequence, the reference cloud and cloud shadows, the detection results using vector C1, and the detection results using vector C2 from an enlarged image.
Figure 21: Cloud shadow detection results from Scene-2 images (white pixels and black pixels denote clouds and cloud shadows, respectively): (a–c) represent, in sequence, the reference cloud and cloud shadows, the detection results using vector C1, and the detection results using vector C2 from an enlarged image.
Figure 22: Cloud shadow detection results from Scene-3 images (white pixels and black pixels denote clouds and cloud shadows, respectively): (a–c) represent, in sequence, the reference cloud and cloud shadows, the detection results using vector C1, and the detection results using vector C2 from an enlarged image.
23 pages, 2815 KiB  
Article
Simulation Research on the Dual-Electrode Current Excitation Method for Distance Measurements While Drilling
by Xinyu Dou, Xiaoping Yan, Longyu Hu and Huaqing Liang
Appl. Sci. 2024, 14(20), 9584; https://doi.org/10.3390/app14209584 - 21 Oct 2024
Viewed by 713
Abstract
Based on a comprehensive analysis of the existing methods for measuring adjacent well distances, along with their advantages and disadvantages, this study employs theoretical analysis, simulation experiments, and other comprehensive research methods to investigate a distance measurement method based on current excitation. In [...] Read more.
Based on a comprehensive analysis of the existing methods for measuring adjacent well distances, along with their advantages and disadvantages, this study employs theoretical analysis, simulation experiments, and other research methods to investigate a distance measurement method based on current excitation. In response to the need for measuring and controlling the connection of relief wells, a method utilizing dual-electrode current excitation during drilling is proposed. This approach facilitates synchronous excitation measurements while drilling, significantly reducing both time and costs while ensuring safety and efficiency, making it particularly suitable for relief well connection operations that involve safety risks. Firstly, this paper establishes a measurement-while-drilling model corresponding to the excitation mode and derives the calculation formulas for the amplitude attenuation of the target casing current, as well as the induced magnetic field distribution within the formation. Additionally, it provides the calculation methods for determining the target well distance and azimuth direction. Lastly, the impact levels of various key factors are verified through numerical calculations and simulation analyses, which confirm the correctness and effectiveness of this distance measurement method. The findings from this research establish both a core theoretical foundation and a technological basis for the real-time measurement of adjacent well distances during relief well operations. Full article
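The ranging principle — a current on the target casing induces a magnetic field whose strength falls off with distance — can be sketched with the idealized infinite-line-current model. This is only an illustration, not the paper’s full formulation (which accounts for current attenuation along the casing and formation properties); all names and values here are illustrative.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def line_current_field(I_c, d):
    """Magnetic flux density (T) at distance d (m) from a long straight
    conductor carrying current I_c (A): B = mu0 * I / (2 * pi * d)."""
    return MU0 * I_c / (2.0 * math.pi * d)

def ranging_distance(I_c, B):
    """Invert the line-current model to estimate the well distance (m)."""
    return MU0 * I_c / (2.0 * math.pi * B)

def target_direction(B_x, B_y):
    """Azimuth (deg) of the target casing in the tool's cross-axial plane.

    The field of a line current is tangential, i.e., perpendicular to the
    radial direction toward the casing, hence the 90-degree rotation of the
    measured cross-axial field (B_x, B_y).
    """
    return (math.degrees(math.atan2(B_y, B_x)) + 90.0) % 360.0
```

The inversion shows why the casing-current amplitude must be modeled accurately: any error in the assumed `I_c` maps linearly into the estimated distance.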
(This article belongs to the Topic Petroleum and Gas Engineering)
Show Figures

Figure 1: Diagram of the electromagnetic MWD model based on dual-electrode current excitation.
Figure 2: The amplitude distribution of the casing current in the target well.
Figure 3: Magnetic induction intensity on the drilling well axis.
Figure 4: Corresponding curves of the magnetic induction intensity B_u at point H_u and the ranging distance d with different electrode distances.
Figure 5: The amplitudes of I_c and B_c with different ranging distances.
Figure 6: The amplitudes of I_c and B_c with different electrode distances.
Figure 7: The amplitude of the casing current and the intensities of the induced magnetic field on the axis of the drilling well with different electrode lengths.
Figure 8: The amplitude of the casing current and the intensities of the induced magnetic field on the axis of the drilling well at different angles between the two axes in the same plane.
Figure 9: The amplitude of the casing current and the intensities of the induced magnetic field on the axis of the drilling well with different formation resistivities.
Figure 10: The amplitude of the casing current and the intensities of the induced magnetic field on the axis of the drilling well with different mud resistivities.
19 pages, 15677 KiB  
Article
Automatic Correction of Time-Varying Orbit Errors for Single-Baseline Single-Polarization InSAR Data Based on Block Adjustment Model
by Huacan Hu, Haiqiang Fu, Jianjun Zhu, Zhiwei Liu, Kefu Wu, Dong Zeng, Afang Wan and Feng Wang
Remote Sens. 2024, 16(19), 3578; https://doi.org/10.3390/rs16193578 - 26 Sep 2024
Viewed by 875
Abstract
Orbit error is one of the primary error sources of interferometric synthetic aperture radar (InSAR) and differential InSAR (D-InSAR) measurements, arising from inaccurate orbit determination of SAR platforms. Typically, orbit error in the interferogram can be estimated using polynomial models. However, correcting for [...] Read more.
Orbit error is one of the primary error sources of interferometric synthetic aperture radar (InSAR) and differential InSAR (D-InSAR) measurements, arising from inaccurate orbit determination of SAR platforms. Typically, orbit error in the interferogram can be estimated using polynomial models. However, correcting for orbit errors with significant time-varying characteristics presents two main challenges: (1) the complexity and variability of the azimuth time-varying orbit errors make it difficult to accurately model them using a set of polynomial coefficients; (2) existing patch-based polynomial models rely on empirical segmentation and overlook the time-varying characteristics, resulting in residual orbital error phase. To overcome these problems, this study proposes an automated block adjustment framework for estimating time-varying orbit errors, incorporating the following innovations: (1) the differential interferogram is divided into several blocks along the azimuth direction to model orbit error separately; (2) automated segmentation is achieved by extracting morphological features (i.e., peaks and troughs) from the azimuthal profile; (3) a block adjustment method combining control points and connection points is proposed to determine the model coefficients of each block for the orbital error phase estimation. The feasibility of the proposed method was verified by repeat-pass L-band spaceborne and P-band airborne InSAR data, and finally, the InSAR digital elevation model (DEM) was generated for performance evaluation. Compared with the high-precision light detection and ranging (LiDAR) elevation, the root mean square error (RMSE) of InSAR DEM was reduced from 18.27 m to 7.04 m in the spaceborne dataset and from 7.83~14.97 m to 3.36~6.02 m in the airborne dataset. Then, further analysis demonstrated that the proposed method outperforms existing algorithms under single-baseline and single-polarization conditions. 
Moreover, the proposed method is applicable to both spaceborne and airborne InSAR data, demonstrating strong versatility and potential for broader applications. Full article
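The automated segmentation step described above — cutting the azimuth profile at its morphological features (peaks and troughs), then fitting a model per block — can be sketched as follows. The profile is synthetic and the per-block model is a simple 1-D polynomial, a stand-in for the paper’s full block adjustment with control and connection points.

```python
import numpy as np

def turning_points(profile):
    """Indices of interior peaks and troughs of a 1-D profile."""
    d = np.diff(profile)
    sign_change = np.diff(np.sign(d)) != 0
    return np.where(sign_change)[0] + 1

def segment_blocks(profile):
    """Split the azimuth profile at its morphological features, mimicking
    the automated segmentation of the differential interferogram."""
    cuts = np.concatenate(([0], turning_points(profile), [len(profile) - 1]))
    return [(int(cuts[i]), int(cuts[i + 1])) for i in range(len(cuts) - 1)]

def fit_block(az, phase, deg=1):
    """Least-squares polynomial model of one block's orbital error phase."""
    return np.polyfit(az, phase, deg)

# Synthetic profile: two linear ramps meeting at a peak, a caricature of a
# time-varying orbital error phase along azimuth
az = np.arange(200.0)
profile = np.where(az < 100, 0.05 * az, 5.0 - 0.03 * (az - 100))
blocks = segment_blocks(profile)
models = [fit_block(az[s:e + 1], profile[s:e + 1]) for s, e in blocks]
```

Each block gets its own coefficients, so an error phase whose slope changes along azimuth is captured exactly where a single global polynomial would leave residuals.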
Show Figures

Figure 1: Flowchart of the proposed method for estimating the time-varying orbital error phase.
Figure 2: Schematic diagram of differential interferogram division by the proposed method. (a) The differential interferogram; (b) the profile of the time-varying orbital error phase along the azimuth at the dotted line shown in (a); (c) a schematic diagram of different blocks; (d) the overlap area between different blocks and the distribution of control points and connection points.
Figure 3: Geographical location of the test sites: (a) Hunan and (b) Krycklan. The red boxes represent LuTan-1 InSAR data and E-SAR data, the blue box represents airborne LiDAR data, and the yellow dots represent the footprint of ICESat-2 ATL08 elevation.
Figure 4: Orbital error phase analysis of LT-1 InSAR data from the Hunan test site: (a) interferometric coherence; (b) differential interferometric phase; (c,d) profiles of the orbital error phase in the azimuth and range directions, respectively.
Figure 5: Time-varying orbit error phase estimation results: (a) original differential interferometric phase; (b) estimated orbital error phase; (c) differential interferometric phase after removing orbit error.
Figure 6: DEM elevation validation. (a) InSAR DEM from interferogram inversion after removing orbit error; (b) difference between the InSAR DEM and the external DEM; (c) error histogram of the InSAR DEM relative to ICESat-2 elevation.
Figure 7: Analysis of airborne P-band time-varying orbit errors in the Krycklan test site. (a) Airborne SAR intensity map; (b) azimuth profiles of five differential interferograms; (c) azimuth profile of interferometric pair 0101–0103 and extracted peaks and troughs; (d) range profile of interferometric pair 0101–0103.
Figure 8: Results of the proposed method for estimating the airborne time-varying orbital error phase. (a1–a5) Original differential interferometric phases; (b1–b5) estimated orbital error phases; (c1–c5) differential interferometric phases after removing orbit error phases. From left to right, the five interferometric pairs shown in Table 1 are represented.
Figure 9: (a) InSAR DEM estimated by the interferometric pair numbered 0103–0111; (b–f) differences between the InSAR DEM and the LiDAR DTM estimated after correcting orbit errors for the five interferometric pairs in Table 1.
Figure 10: (a–e) Error statistical histograms of the InSAR DEM and LiDAR DTM estimated before and after orbital error correction for the five interferometric pairs in Table 1.
23 pages, 3642 KiB  
Article
A Novel Chirp-Z Transform Algorithm for Multi-Receiver Synthetic Aperture Sonar Based on Range Frequency Division
by Mingqiang Ning, Heping Zhong, Jinsong Tang, Haoran Wu, Jiafeng Zhang, Peng Zhang and Mengbo Ma
Remote Sens. 2024, 16(17), 3265; https://doi.org/10.3390/rs16173265 - 3 Sep 2024
Viewed by 846
Abstract
When a synthetic aperture sonar (SAS) system operates under low-frequency broadband conditions, the azimuth range coupling of the point target reference spectrum (PTRS) is severe, and the high-resolution imaging range is limited. To solve the above issue, we first convert multi-receivers’ signal into [...] Read more.
When a synthetic aperture sonar (SAS) system operates under low-frequency broadband conditions, the azimuth range coupling of the point target reference spectrum (PTRS) is severe, and the high-resolution imaging range is limited. To solve this issue, we first convert the multi-receiver signal into the equivalent monostatic signal and then divide the equivalent monostatic signal into range subblocks and, within each subblock, into range frequency subbands. The azimuth range coupling terms are converted into linear terms based on piece-wise linear approximation (PLA), so that the phase error of the PTRS within each subband is less than π/4. Then, we use the chirp-z transform (CZT) to correct range cell migration (RCM) and obtain low-resolution results for the different subbands. After RCM correction, the subband signals are coherently summed in the range frequency domain to obtain a high-resolution image. Finally, the different subblocks are concatenated in the range time domain to obtain the final result for the whole swath. The processing of different subblocks and different subbands can be implemented in parallel. Computer simulation experiments and field data have verified the superiority of the proposed method over existing methods. Full article
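The CZT at the heart of the RCM correction can be implemented with Bluestein’s algorithm, which evaluates the z-transform on an arbitrary spiral contour using three FFTs. The sketch below is a generic chirp-z transform (it reduces to the DFT on the standard contour), not the authors’ subband processing chain.

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp-z transform via Bluestein's algorithm:
    X[k] = sum_n x[n] * a**(-n) * w**(n*k),  k = 0..m-1.

    Uses nk = (n**2 + k**2 - (k - n)**2) / 2 to turn the sum into a
    convolution, evaluated with zero-padded FFTs.
    """
    n = len(x)
    k = np.arange(max(n, m))
    chirp = w ** (k ** 2 / 2.0)
    nfft = 1 << int(np.ceil(np.log2(n + m - 1)))
    # Pre-multiply the input by the chirp and the a-term
    xp = x * a ** (-np.arange(n)) * chirp[:n]
    # Convolution kernel w**(-j**2/2) for j = -(n-1)..(m-1), laid out circularly
    ichirp = np.zeros(nfft, dtype=complex)
    ichirp[:m] = 1.0 / chirp[:m]
    ichirp[nfft - n + 1:] = 1.0 / chirp[n - 1:0:-1]
    y = np.fft.ifft(np.fft.fft(xp, nfft) * np.fft.fft(ichirp))
    # Post-multiply by the chirp to undo the substitution
    return y[:m] * chirp[:m]
```

Its appeal for RCM correction is that `m`, `w`, and `a` let each subband be resampled onto its own (scaled, shifted) frequency grid at FFT cost, rather than by explicit interpolation.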
(This article belongs to the Special Issue Ocean Remote Sensing Based on Radar, Sonar and Optical Techniques)
Show Figures

Figure 1: Geometric model of the multi-receiver SAS in the side-looking mode.
Figure 2: Schematic diagram in the slant plane.
Figure 3: Range migration curves of different receivers.
Figure 4: Range history errors. (a) Range history error of the PCA method; (b) range history error of the RHS.
Figure 5: Curves of ζ_m(r) changing with r.
Figure 6: Schematic diagram of PLA.
Figure 7: Curves of Δφ_max,m changing with P and Q.
Figure 8: Quick implementation of the CZT.
Figure 9: The flowchart of the proposed method.
Figure 10: Location of simulation point targets.
Figure 11: The imaging results of different methods. (a) The subblock CZT method; (b) the subblock–subband CZT method; (c) BPA.
Figure 12: Cross-sections of the point target P1.
Figure 13: Cross-sections of the point target P2.
Figure 14: Cross-sections of the point target P3.
Figure 15: Imaging results of the field data. (a) The subblock CZT method; (b) the subblock–subband CZT method; (c) BPA.
24 pages, 5746 KiB  
Article
A Novel SAR Imaging Method for GEO Satellite–Ground Bistatic SAR System with Severe Azimuth Spectrum Aliasing and 2-D Spatial Variability
by Jingjing Ti, Zhiyong Suo, Yi Liang, Bingji Zhao and Jiabao Xi
Remote Sens. 2024, 16(15), 2853; https://doi.org/10.3390/rs16152853 - 3 Aug 2024
Viewed by 1133
Abstract
The satellite–ground bistatic configuration, which uses geosynchronous synthetic aperture radar (GEO SAR) for illumination and ground equipment for reception, can achieve wide coverage, high revisit, and continuous illumination of interest areas. Based on the analysis of the signal characteristics of GEO satellite–ground bistatic [...] Read more.
The satellite–ground bistatic configuration, which uses geosynchronous synthetic aperture radar (GEO SAR) for illumination and ground equipment for reception, can achieve wide coverage, high revisit, and continuous illumination of interest areas. Based on the analysis of the signal characteristics of GEO satellite–ground bistatic SAR (GEO SG-BiSAR), it is found that the bistatic echo signal has problems of azimuth spectrum aliasing and 2-D spatial variability. Therefore, to overcome those problems, a novel SAR imaging method for a GEO SG-BiSAR system with severe azimuth spectrum aliasing and 2-D spatial variability is proposed. Firstly, based on the geometric configuration of the GEO SG-BiSAR system, the time-domain and frequency-domain expressions of the signal are derived in detail. Secondly, in order to avoid the increasing cost caused by traditional multi-channel reception technology and the processing burden caused by inter-channel errors, the azimuth deramping is executed to solve the azimuth spectrum aliasing of the signal under the special geometric structure of GEO SG-BiSAR. Thirdly, based on the investigation of azimuth and range spatial variability characteristics of GEO SG-BiSAR in the Range Doppler (RD) domain, the azimuth spatial variability correction strategy is proposed. The signal corrected by the correction strategy has the same migration characteristics as monostatic radar. Therefore, the traditional chirp scaling function (CSF) is also modified to solve the range spatial variability of the signal. Finally, the two-dimensional spectrum of GEO SG-BiSAR with modified chirp scaling processing is derived, followed by the SPECAN operation to obtain the focused SAR image. Furthermore, the completed flowchart is also given to display the main composed parts for GEO SG-BiSAR imaging. Both azimuth spectrum aliasing and 2-D spatial variability are taken into account in the imaging method. 
The simulated data and the real data obtained by the Beidou navigation satellite are used to verify the effectiveness of the proposed method. Full article
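The effect of the azimuth deramping step can be reproduced in a few lines: multiplying an azimuth chirp whose Doppler bandwidth exceeds the sampling rate by the conjugate reference chirp compresses it to a narrowband tone that no longer aliases. The chirp rate and Doppler offset below are illustrative, not system parameters from the paper.

```python
import numpy as np

fs = 1000.0                           # azimuth sampling rate (Hz), illustrative
t = np.arange(-0.5, 0.5, 1.0 / fs)    # slow time (s), N = 1000 samples
k_a = 4000.0                          # assumed azimuth chirp rate (Hz/s)
f_dc = 100.0                          # assumed Doppler centroid offset (Hz)

# Azimuth chirp: instantaneous bandwidth k_a * T = 4000 Hz >> fs, so its
# spectrum is heavily aliased before deramping
s = np.exp(1j * np.pi * k_a * t ** 2) * np.exp(2j * np.pi * f_dc * t)

# Deramping: multiply by the conjugate reference chirp; what remains is a
# pure tone at the Doppler centroid, well inside the sampled band
s_deramped = s * np.exp(-1j * np.pi * k_a * t ** 2)

spectrum = np.abs(np.fft.fft(s_deramped))
```

This is why deramping can replace a multi-channel receiver for resolving azimuth spectrum aliasing: the reference chirp removes the wideband quadratic phase so a single channel suffices, at the cost of knowing the chirp rate from the geometry.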
Show Figures

Graphical abstract

Figure 1: Spatial observation geometric model of the GEO SG-BiSAR system.
Figure 2: RD position model of the GEO SG-BiSAR system.
Figure 3: Target position obtained from the GEO SG-BiSAR RD positioning equations.
Figure 4: Azimuth spatial variation correction process for GEO SG-BiSAR.
Figure 5: Range spatial variation schematic diagram of the GEO SG-BiSAR signal.
Figure 6: Processing flowchart for GEO SG-BiSAR imaging.
Figure 7: Signals before and after azimuth deramping preprocessing. (a) Raw echo data; (b) echo signal in the RD domain; (c) signal after range compression; (d) azimuth deramping preprocessed signal in the time domain; (e) azimuth deramping preprocessed signal in the RD domain; (f) the range compression result of the azimuth deramping preprocessed signal.
Figure 8: The two-dimensional spatial correction results of the azimuth preprocessed signal. (a) The range compression result of the azimuth preprocessed signal; (b) signal after azimuth spatial variation correction; (c) signal after range spatial variation correction; (d) the enlarged result of the red block diagram in (a); (e) the enlarged result of the red block diagram in (b); (f) the enlarged result of the red block diagram in (c).
Figure 9: SAR imaging result of the GEO SG-BiSAR simulation data.
Figure 10: Comparison of imaging results obtained by traditional NLCS, BP, and the proposed method. (a) Imaging result of the NLCS method; (b) imaging result of the BP method; (c) imaging result of the proposed method.
Figure 11: GEO SG-BiSAR navigation satellite experiment. (a) Experimental geometry configuration; (b) optical photos of the roof experiment field.
Figure 12: Preprocessing results of the GEO SG-BiSAR navigation satellite experiment. (a) Capture result of the GEO SG-BiSAR navigation satellite; (b) sky plot of the GEO SG-BiSAR navigation satellite.
Figure 13: Imaging results of the GEO SG-BiSAR navigation satellite experiment. (a) Two-dimensional time-domain SAR signal; (b) the focused image of the repeater signal; (c) azimuth pulse response of a strong scattering point in (b); (d) range pulse response of a strong scattering point in (b).
17 pages, 13009 KiB  
Article
A Near-Vertical Well Attitude Measurement Method with Redundant Accelerometers and MEMS IMU/Magnetometers
by Shaowen Ji, Chunxi Zhang, Shuang Gao and Aoxiang Lian
Appl. Sci. 2024, 14(14), 6138; https://doi.org/10.3390/app14146138 - 15 Jul 2024
Viewed by 3180
Abstract
Vertical drilling is the first stage of petroleum exploitation and directional well technology. The near-vertical attitude at each survey station directly determines the whole direction accuracy of the borehole trajectory. However, the attitude measurement for near-vertical wells has poor azimuth accuracy because the [...] Read more.
Vertical drilling is the first stage of petroleum exploitation and directional well technology. The near-vertical attitude at each survey station directly determines the overall direction accuracy of the borehole trajectory. However, attitude measurement for near-vertical wells suffers from poor azimuth accuracy because the poor signal-to-noise ratio of the radial accelerometers makes it difficult to obtain the correct horizontal attitude, especially the roll angle. In this paper, a novel near-vertical attitude measurement method is proposed to address this issue. Redundant micro-electromechanical system (MEMS) accelerometers are employed to replace the original accelerometers of the MEMS inertial measurement unit (IMU)/magnetometers for calculating horizontal attitude under near-vertical conditions. In addition, a simplified four-position calibration method for the redundant accelerometers is proposed to compensate for the installation and non-orthogonality errors. We found that the redundant accelerometers enhance the signal-to-noise ratio and thereby improve the azimuth accuracy in the near-vertical well section. Compared with the traditional method, the experimental results show that the average azimuth and roll errors are reduced from 34.45° and 27.09° to 5.7° and 0.61°, respectively. The designed configuration scheme is conducive to the miniaturized design and low-cost requirements of wellbore measuring tools. The proposed attitude measurement method can effectively improve the attitude accuracy of near-vertical wells. Full article
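The azimuth observability problem can be seen directly in the standard accelerometer/magnetometer attitude equations, sketched below. These are generic tilt-compensation formulas, not the paper’s redundant-accelerometer scheme, and the axis convention (accelerometer reading +1 g on the z axis when level) is an assumption. Near vertical, the cross-axial accelerometer components shrink toward zero, so roll — and through it azimuth — becomes ill-conditioned, which is the failure mode the redundant accelerometers address.

```python
import numpy as np

def pitch_roll(ax, ay, az):
    """Pitch and roll (rad) from static accelerometer output.

    Convention (assumed): z axis reads +1 g when level. At pitch near 90
    degrees, ay and az both approach zero, so roll = atan2(ay, az) is
    dominated by sensor noise.
    """
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    roll = np.arctan2(ay, az)
    return pitch, roll

def mag_azimuth(mx, my, mz, pitch, roll):
    """Tilt-compensated magnetic azimuth (deg, clockwise from north).

    Projects the body-frame magnetometer reading onto the horizontal plane
    using the pitch/roll estimate, so any roll error propagates into azimuth.
    """
    mxh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
           + mz * np.cos(roll) * np.sin(pitch))
    myh = my * np.cos(roll) - mz * np.sin(roll)
    return np.degrees(np.arctan2(-myh, mxh)) % 360.0
```

Because `mag_azimuth` consumes the roll estimate, the tens-of-degrees roll errors cited for the traditional method translate directly into azimuth errors of a similar order.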
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)
Show Figures

Figure 1: The spatial relationships of the coordinate frames.
Figure 2: Schematic diagram of magnetic field components.
Figure 3: Schematic of attitude calculation based on MEMS-IMU/magnetometers.
Figure 4: Wellbore attitude characteristics in the near-vertical state.
Figure 5: Monte Carlo simulation of attitude errors. (a) The error characteristics of pitch at different roll and pitch angles; (b) the error variation of roll angle at different pitch and roll angles; (c) the error variation of azimuth angle at different roll and pitch angles; (d) the error variation of azimuth angle at different azimuth and pitch angles.
Figure 6: Schematic of the accelerometer redundancy configuration.
Figure 7: Accelerometer relative error and offset angle curves.
Figure 8: Schematic of installation angle and calibration for redundant accelerometers. (a) Installation angle and calibration; (b) four-position calibration method.
Figure 9: Compensation for non-orthogonal redundant accelerometers.
Figure 10: Redundant sensor configuration on a hexahedron structure.
Figure 11: Schematic of the near-vertical algorithm and well section types.
Figure 12: Turntable test of near-vertical wellbore attitude.
Figure 13: Comparison of pitch errors at different attitudes. (a) Pitch error at azimuth 330° and roll 0°; (b) pitch error at azimuth 330° and roll 90°; (c) pitch error at azimuth 150° and roll 90°; (d) pitch error at azimuth 150° and roll 180°.
Figure 14: Comparison of roll errors at different attitudes. (a) Roll error at azimuth 330° and roll 0°; (b) roll error at azimuth 330° and roll 90°; (c) roll error at azimuth 150° and roll 90°; (d) roll error at azimuth 150° and roll 180°.
Figure 15: Comparison of azimuth errors at different attitudes. (a) Azimuth error at azimuth 330° and roll 0°; (b) azimuth error at azimuth 330° and roll 90°; (c) azimuth error at azimuth 150° and roll 90°; (d) azimuth error at azimuth 150° and roll 180°.