Search Results (240)

Search Parameters:
Keywords = hydrophone

16 pages, 6189 KiB  
Article
The Extraction and Validation of Low-Frequency Wind-Generated Noise Source Levels in the Chukchi Plateau
by Zhicheng Li, Yanming Yang, Hongtao Wen, Hongtao Zhou, Hailin Ruan and Yu Zhang
J. Mar. Sci. Eng. 2025, 13(1), 49; https://doi.org/10.3390/jmse13010049 - 31 Dec 2024
Abstract
Low-frequency ocean noise (50–500 Hz) was recorded by a single omnidirectional hydrophone in the open waters of the Chukchi Plateau from 31 August 2021 to 6 September 2021 (local time). After non-wind interference was filtered out, wind-generated noise source levels (NSLs) were extracted from the wind-generated noise. The correlation coefficients between the one-third-octave wind-generated NSLs and sea-surface wind speed exceed 0.84, an improvement of approximately 10% over those between the raw data and the wind speed. For 200–500 Hz, the wind-generated NSLs are highly consistent with Wilson's (1983) estimated curve. The 50–300 Hz results closely match those of Chapman and Cornish (1993) from vertical line array (VLA) measurements. Both findings demonstrate the feasibility of extracting wind-generated NSLs with a single omnidirectional hydrophone in the Chukchi Plateau's open waters. Furthermore, the wind-speed- and frequency-dependence results can be applied to calculate wind-generated NSLs in the Chukchi Plateau. Wind-derived ocean ambient noise data are useful for background correction in underwater target detection, recognition, tracking, and positioning.
(This article belongs to the Section Physical Oceanography)
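The abstract's central quantity is the correlation between one-third-octave noise levels and sea-surface wind speed. A minimal sketch of that computation on synthetic data (the data, band behavior, and 20*log10 slope here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def band_level_wind_correlation(band_levels_db, wind_speed_ms):
    """Pearson correlation between band noise levels (dB) and the
    logarithm of sea-surface wind speed."""
    return np.corrcoef(band_levels_db, np.log10(wind_speed_ms))[0, 1]

# Synthetic illustration: levels rise roughly as 20*log10(wind) plus noise.
rng = np.random.default_rng(0)
wind = rng.uniform(3.0, 15.0, 200)                         # wind speed, m/s
levels = 40.0 + 20.0 * np.log10(wind) + rng.normal(0.0, 1.0, 200)
r = band_level_wind_correlation(levels, wind)
```

Filtering out non-wind interference, as the paper does, would raise this coefficient toward the reported 0.84+ values by removing variance uncorrelated with wind.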
Show Figures

Figure 1: Location of acoustic mooring (74.9940° N, 160.1345° W).
Figure 2: Red line: time series of wind speed during the experimental period; blue line: time series of ice concentration during the experimental period.
Figure 3: Comparison of the estimated ship interference levels received by the USR with the sound levels received by the USR.
Figure 4: Time–frequency spectrogram of acoustic data from 0:00 to 0:02, 9 September 2021.
Figure 5: Wind-generated noise level at 500 Hz versus noise level at 1 kHz; the solid line is a linear regression on the data.
Figure 6: Green line: correlation of wind-generated NSLs with wind speed; yellow line: correlation of raw data with wind speed.
Figure 7: Comparison of the PDFs of the normalized wind-speed logarithm and the normalized wind-generated NSLs within 50–500 Hz.
Figure 8: Blue line: goodness of fit for Equation (9) below 10 knots; green line: goodness of fit for Equation (9) above 10 knots; yellow line: goodness of fit for Equation (8).
Figure 9: Comparison of our wind-generated NSLs with those of Wilson.
Figure 10: Comparison of our wind-generated NSLs with those of Kewley et al.
Figure 11: Comparison of our wind-generated NSLs with those of Chapman and Cornish.
35 pages, 15971 KiB  
Review
MEMS Acoustic Sensors: Charting the Path from Research to Real-World Applications
by Qingyi Wang, Yang Zhang, Sizhe Cheng, Xianyang Wang, Shengjun Wu and Xufeng Liu
Micromachines 2025, 16(1), 43; https://doi.org/10.3390/mi16010043 - 30 Dec 2024
Abstract
MEMS acoustic sensors are physical-quantity sensors, built with MEMS manufacturing technology, for detecting sound waves. They use sensitive structures such as thin films, cantilever beams, or cilia to collect acoustic energy, and rely on various transduction principles to read out the generated strain, thereby obtaining information about the targeted acoustic signal, such as its intensity, direction, and distribution. Owing to their advantages in miniaturization, low power consumption, high precision, high consistency, high repeatability, high reliability, and ease of integration, MEMS acoustic sensors are widely applied in areas such as consumer electronics, industrial perception, military equipment, and health monitoring. Through different sensing mechanisms, they can be used to detect sound energy density, acoustic pressure distribution, and sound wave direction. This article focuses on piezoelectric, piezoresistive, capacitive, and optical MEMS acoustic sensors, showcasing their development in recent years as well as innovations in their structures, processes, and design methods. The review then compares the performance of devices with similar working principles. MEMS acoustic sensors are finding increasingly wide application, both in established areas such as microphones, stethoscopes, hydrophones, and ultrasound imaging, and in cutting-edge fields such as wearable and implantable biomedical devices.
(This article belongs to the Special Issue Recent Advances in Silicon-Based MEMS Sensors and Actuators)
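Sensor performance figures in this review are quoted logarithmically, e.g. a hydrophone sensitivity of −178 dB re 1 V/μPa (cited for an AlN device in Figure 4e). A quick sketch of converting such a figure to linear units; the helper name is ours:

```python
def sensitivity_db_to_v_per_pa(m_db_re_1v_per_upa):
    """Convert hydrophone sensitivity from dB re 1 V/uPa to V/Pa.
    1 Pa = 1e6 uPa, so the V/Pa figure is 1e6 times the V/uPa figure."""
    v_per_upa = 10 ** (m_db_re_1v_per_upa / 20.0)
    return v_per_upa * 1e6

# -178 dB re 1 V/uPa works out to roughly 1.26 mV/Pa.
s = sensitivity_db_to_v_per_pa(-178.0)
```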
Show Figures

Figure 1: Classification of MEMS acoustic sensors based on different working principles.
Figure 2: Piezoelectric MEMS acoustic sensors. (a) Basic working principle and typical multilayer structure of piezoelectric MEMS acoustic sensors. (b) A ZnO MEMS acoustic sensor with an air cavity [29]. (c) Multilayer cantilever design of a piezoelectric MEMS microphone, with AlN as the piezoelectric material and Mo as the electrode material [30].
Figure 3: Piezoelectric MEMS acoustic sensors based on ZnO film. (a) ZnO-based structure for development of a MEMS acoustic sensor [29]. (b–d) Cavity structures with a microtunnel design, vented to the atmosphere as a replacement for traditional acoustic holes. (b) The fabricated cavity and metal electrode structure of a ZnO MEMS acoustic sensor [48]. (c) A ZnO MEMS acoustic sensor for aeroacoustic measurements [50]. (d) A MEMS acoustic sensor with a microtunnel for high-SPL measurement, with less risk of microtunnel blockage [51].
Figure 4: Piezoelectric MEMS acoustic sensors based on AlN. (a) An AlN pMUT exploiting the compatibility between AlN and CMOS processes [53]. (b) An AlN MEMS acoustic sensor aiming for ultra-low working frequency [54]. (c) An AlN MEMS acoustic sensor with an ultra-thin silicon substrate and different structures for low and high working frequencies [56]. (d) An AlN MEMS acoustic sensor with enhanced SNR (67.03 dB at 1 kHz) [22]. (e) An AlN MEMS hydrophone with high sensitivity (−178 dB re 1 V/μPa) and low noise density (52.6 dB at 100 Hz, re μPa/√Hz) [58]. (f) An AlN MEMS wideband (10 Hz to more than 10 kHz) acoustic sensor coated with an organic film (elastic polyurethane) [59].
Figure 5: Piezoelectric MEMS hydrophones. (a) A face-to-face, cross-configuration design of four cantilevers [67]. (b) A single-cantilever-beam design [68].
Figure 6: Wearable acoustic sensors based on the piezoelectric method. (a) An air–silicone composite device for physiological sound detection [69,72]. (b) A MEMS bionic hydrophone for heart-sound sensing [73].
Figure 7: Representative structure and working-principle diagram of a piezoresistive MEMS hydrophone.
Figure 8: Piezoresistive MEMS acoustic sensors. (a) A low-frequency-detectable acoustic sensor using a piezoresistive cantilever [57]. (b) A frequency-specific, highly sensitive acoustic sensor using a piezoresistive cantilever and parallel Helmholtz resonators [81].
Figure 9: Piezoresistive hydrophones with a cilium structure. (a) Traditional cilium design [88]. (b) CCVH: cilia-cluster vector hydrophone [85]. (c) DCVH: dumbbell-shaped ciliary vector hydrophone [86]. (d) HCVH: hollow-cilium-cylinder vector hydrophone [87]. (e) BCVH: beaded-cilia MEMS vector hydrophone [88]. (f) CSCVH: cap-shaped ciliary vector hydrophone [89]. (g) SCVH: sculpture-shape cilium MEMS vector hydrophone [90]. (h) CCCVH: crossed-circle cilium vector hydrophone [91].
Figure 10: Piezoresistive hydrophones with multiple cilium structures. (a,b) FUVH: four-unit MEMS vector hydrophone [93,95]. (c) FUVH with an annulus-shaped structure [94].
Figure 11: Representative structure and working-principle diagram of capacitive MEMS acoustic sensors.
Figure 12: Capacitive MEMS microphones. (a) A low-power digital capacitive MEMS microphone based on a triple-sampling delta-sigma ADC with embedded gain [101]. (b) A wearable capacitive MEMS microphone for cardiac monitoring at the wrist [102]. (c) A capacitive MEMS stethoscope with an anti-stiction dimple array in the diaphragm and backplate for highly reliable heart- or lung-sound detection [105].
Figure 13: Capacitive MEMS microphones with biomimetic designs. (a) A dual-band MEMS directional acoustic sensor for near-resonance operation [110]. (b) A directional-resonant MEMS acoustic sensor and associated acoustic vector sensor [111]. Both (a) and (b) are inspired by the tympana configuration of the parasitic fly Ormia ochracea; the circled numbers in (b) distinguish different structures.
Figure 14: MEMS acoustic sensors based on optical grating interferometers. (a) A grating interferometer formed by a diffraction-grating-integrated backplate and a pressure-sensitive diaphragm [117]. (b) Design of a MEMS optical microphone transducer based on light phase modulation [120]. (c) A grating interferometer with a short-cavity structure and a grating-on-convex-platform structure [118,119].
Figure 15: MEMS acoustic sensors based on the Fabry–Perot method. (a) A typical structure of Fabry–Perot MEMS acoustic sensors. (b) An acoustic sensor based on active fiber Fabry–Perot microcavities [21]. (c) An application to the detection and positioning of partial discharge [122].
Figure 16: Applications of MEMS acoustic sensors in the biomedical field.
17 pages, 4222 KiB  
Article
Design of Deep-Sea Acoustic Vector Sensors for Unmanned Platforms
by Qindong Sun and Lianglong Da
J. Mar. Sci. Eng. 2025, 13(1), 43; https://doi.org/10.3390/jmse13010043 - 30 Dec 2024
Abstract
To meet the critical need for compact, multifunctional acoustic vector sensors on deep-sea unmanned platforms such as acoustic profiling buoys and underwater gliders, we have developed a novel composite resonant acoustic vector sensor capable of large-depth operation. The sensor integrates the sound pressure channel and the vector channel, and uses the conjugate cross-spectrum between them to reduce isotropic noise, enhance the detection of weak ship signals, and compensate for the shortcomings of a single pressure or vector channel. The sensor is certified to function reliably at depths of up to 1500 m, and field sea trials confirm its efficacy in deep-sea deployments, capturing essential marine environmental noise data. Analysis during the sea trials focused on marine ambient noise levels at 65 Hz, 125 Hz, 315 Hz, 400 Hz, and 500 Hz, correlating these with changes in depth. The test results revealed the following: (a) at the same depth, the marine environmental noise level increases as the frequency decreases; (b) at the same frequency, the noise level decreases with increasing depth; (c) under favorable deep-sea conditions, the noise level reaches 55 decibels (dB) at 500 Hz; (d) noise levels tend to increase across frequencies when surface ships are in proximity. These findings underscore the sensor's significant potential for deep-sea acoustic surveillance and exploration.
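The conjugate cross-spectrum idea described above can be sketched as follows: averaging P·V* across time segments keeps spectral lines that are coherent between the pressure and vector channels, while independent isotropic noise averages toward zero. All signals and parameter values here are synthetic assumptions, not trial data:

```python
import numpy as np

def cross_spectrum(p, v, seg=256):
    """Segment-averaged conjugate cross-spectrum between the pressure
    channel p and one vector (particle-velocity) channel v."""
    nseg = len(p) // seg
    acc = np.zeros(seg // 2 + 1, dtype=complex)
    for k in range(nseg):
        P = np.fft.rfft(p[k * seg:(k + 1) * seg])
        V = np.fft.rfft(v[k * seg:(k + 1) * seg])
        acc += P * np.conj(V)          # coherent target line survives averaging
    return acc / nseg

rng = np.random.default_rng(1)
fs, seg, n = 1024.0, 256, 256 * 64
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 100.0 * t)       # weak coherent "ship" line at 100 Hz
p = tone + rng.normal(0.0, 3.0, n)         # pressure channel + noise
v = tone + rng.normal(0.0, 3.0, n)         # vector channel + independent noise
S = np.abs(cross_spectrum(p, v, seg))
peak_hz = np.argmax(S) * fs / seg          # strongest cross-spectral line
```

Even with per-channel noise three times the tone amplitude, the averaged cross-spectrum recovers the 100 Hz line, which is the mechanism the abstract credits for detecting weak ship signals.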
Show Figures

Figure 1: Schematic diagram of the structure of an acoustic vector sensor.
Figure 2: The information-processing flowchart for the vector hydrophone.
Figure 3: Acoustic vector sensor prototype.
Figure 4: Acoustic pressure channel sensitivity-level simulation curve (x-axis: thickness; y-axis: sensitivity level).
Figure 5: Pressure stress nephogram of the piezoceramic circular tube.
Figure 6: Piezoelectric accelerometer structure diagram.
Figure 7: Stress nephogram of the metal pressure-resisting shell.
Figure 8: Simulated section pressure test of the vector hydrophone (x-axis: time; y-axis: pressure).
Figure 9: Sensitivity curves of the vector hydrophone (x-axis: frequency; y-axis: sensitivity).
Figure 10: Sea trial results of the vector hydrophone. (a) x-axis: sea trial time; y-axis: sea trial depth. (b) x-axis: spectrum sound level; y-axis: sea trial depth.
17 pages, 4611 KiB  
Article
Analysis of Deep-Sea Acoustic Ranging Features for Enhancing Measurement Capabilities in the Study of the Marine Environment
by Grigory Dolgikh, Yuri Morgunov, Aleksandr Golov, Aleksandr Burenin and Sergey Shkramada
J. Mar. Sci. Eng. 2024, 12(12), 2365; https://doi.org/10.3390/jmse12122365 - 23 Dec 2024
Abstract
This article explores the use of hydroacoustic methods to measure and monitor climate-induced temperature variations along acoustic paths in the Sea of Japan. It examines effective techniques for the control and positioning of deep-sea autonomous measuring systems (DSAMS) for diverse applications. Theoretical and experimental findings are presented from research conducted in the Sea of Japan in August 2023 along a 144.4 km acoustic route under summer–autumn hydrological conditions, including the aftermath of the powerful typhoon "Khanun". The main hydrological regime characteristics for this period are compared with data obtained in 2022. The study explores the transmission of pulsed pseudorandom signals from a broad shelf into the deep area of the sea, with receptions at depths of 69, 126, 680, and 914 m. An experiment was conducted to receive broadband pulse signals centered at 400 Hz, located 144.4 km from the source of navigation signals (SNS), which is positioned on the shelf at a depth of 30 m in waters 45 m deep. A system of hydrophones, deployed to depths of up to 1000 m, captured the signal data, allowing prolonged recording at fixed depths or during descent. Analysis of the experimentally acquired impulse characteristics revealed a series of ray arrivals lasting approximately 0.5 s, with a peak consistently observed across all depths. Findings from both full-scale and numerical experiments enabled the assessment of impulse characteristics within an acoustic waveguide, the calculation of effective signal propagation speeds at varying depths, and conclusions regarding the viability of tackling control and positioning challenges for DSAMS at depths of up to 1000 m and distances spanning hundreds of kilometers from control stations.
(This article belongs to the Section Physical Oceanography)
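The effective propagation speed mentioned above follows directly from the path length and the travel time of the impulse-response peak. A trivial sketch; the 144.4 km range is from the experiment, but the 98.2 s arrival time is a made-up illustrative value:

```python
def effective_sound_speed(range_km, peak_arrival_s):
    """Effective signal propagation speed (m/s) from path length and
    the travel time of the impulse-response peak."""
    return range_km * 1000.0 / peak_arrival_s

# Hypothetical arrival time for the 144.4 km path; yields ~1470 m/s,
# a plausible deep-water effective speed.
c_eff = effective_sound_speed(144.4, 98.2)
```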
Show Figures

Figure 1: The appearance of autonomous hydrophones in hermetic containers.
Figure 2: A scheme of DSAMS mounting.
Figure 3: Block scheme of the electronic part of the DSAMS.
Figure 4: The outer view of a high-power, low-frequency acoustic source based on a piston-type emitter.
Figure 5: Measured vertical distribution of sound speed at the emission point in 2018 and after typhoon "Khanun" in 2023.
Figure 6: The scheme of the conducted experiments.
Figure 7: Pulse characteristics of the waveguide obtained at depths of 69, 126, 680, and 914 m.
Figure 8: Results of ray tracing using the "RAY" algorithm: (a) VDSS at points along the route; (b) ray pattern of acoustic signal propagation; (c) emission and arrival angles; (d) impulse response.
Figure 9: Results of numerical modeling: (a) vertical distributions of sound speed near the source and at the receiving points (blue, red, and green curves), the vertical distribution of the effective sound speed calculated in the ray approximation (orange dots), and experimentally measured values of the effective sound speed at given depths from the impulse characteristics of the waveguide (black dots); (b) bottom topography and an example ray pattern for an acoustic path when receiving at a depth of 126 m; (c) angular structure of the field at the receiving point (red dots: source angles; blue dots: receiver angles); (d) impulse response of the waveguide at the receiving point at a horizon of 126 m (experimental IR in blue, simulated IR in red).
24 pages, 10305 KiB  
Article
A Hybrid Harmonic Curve Model for Multi-Streamer Hydrophone Positioning in Seismic Exploration
by Kaiwei Sang, Cuilin Kuang, Lingsheng Lv, Heng Liu, Haonan Zhang, Yijun Yang and Baocai Yang
Sensors 2024, 24(24), 8025; https://doi.org/10.3390/s24248025 - 16 Dec 2024
Abstract
Towed streamer positioning is a vital stage in marine seismic exploration, and accurate hydrophone coordinates have a direct and significant influence on the quality and reliability of seismic imaging. Current methods predominantly rely on analytical polynomial models for towed streamer positioning; however, these models often produce significant errors when fitted to streamers with high curvature, particularly in turning scenarios. To address this limitation, this study introduces a novel multi-streamer analytical positioning method that uses a hybrid harmonic function to model the three-dimensional coordinates of streamers. This approach mitigates the substantial modeling errors associated with polynomial models in high-curvature conditions and better captures the dynamic characteristics of streamer fluctuations. First, the mathematical model for the hybrid harmonic function is constructed. Then, the algorithmic implementation of the model is detailed, along with the derivation of the error equation and the multi-sensor fusion solution process. Finally, the validity of the model is verified using both simulated and field data. The results demonstrate that, in the turning scenario without added error, the proposed harmonic model improves simulation accuracy by 35.5% compared to the analytical polynomial model, and by 27.2% when error is introduced. For field data, accuracy improves by 18.1%, underscoring the model's effectiveness in significantly reducing the errors associated with polynomial models in turning scenarios. In straight scenarios, the performance of the harmonic function model is generally comparable to that of the polynomial model.
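The hybrid harmonic idea, replacing a pure polynomial basis with sinusoidal terms so that high-curvature (turning) shapes fit without large residuals, can be sketched as a small least-squares problem. The exact basis, parameterization, and fusion steps in the paper may differ; this form is only illustrative:

```python
import numpy as np

def fit_harmonic_curve(s, y, omega):
    """Least-squares fit of a hybrid harmonic streamer model
    y(s) ~ a0 + a1*s + b*sin(omega*s) + c*cos(omega*s),
    where s is the along-streamer offset."""
    A = np.column_stack([np.ones_like(s), s, np.sin(omega * s), np.cos(omega * s)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

s = np.linspace(0.0, 6000.0, 120)              # offsets along the streamer, m
omega = 2 * np.pi / 6000.0
y_true = 5.0 + 0.002 * s + 40.0 * np.sin(omega * s)   # curved (turning) shape
coef, y_fit = fit_harmonic_curve(s, y_true, omega)
rms = float(np.sqrt(np.mean((y_fit - y_true) ** 2)))  # ~0 for this in-model shape
```

A low-order polynomial in s would leave systematic residuals on such a bowed shape; the sinusoidal terms absorb the curvature directly, which is the motivation stated in the abstract.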
Show Figures

Figure 1: Towed streamer exploration and positioning network.
Figure 2: Hybrid harmonic function positioning model.
Figure 3: Simulated positioning network configuration.
Figure 4: Simulated data trajectory.
Figure 5: Mean positioning deviation over epochs.
Figure 6: Mean positioning deviation along offset in the turning scenario.
Figure 7: Mean positioning deviation along offset in the straight scenario.
Figure 8: Streamer shape in the (a) 3790 s turning and (b) 8000 s straight scenarios.
Figure 9: Mean positioning deviation over epochs.
Figure 10: Mean positioning deviation along offset in the turning scenario.
Figure 11: Mean positioning deviation along offset in the straight scenario.
Figure 12: Streamer shape in the (a) 3790 s turning and (b) 8000 s straight scenarios.
Figure 13: Deviation quartiles over the (a) temporal and (b) spatial dimensions in the turning scenario.
Figure 14: Deviation quartiles over the (a) temporal and (b) spatial dimensions in the straight scenario.
Figure 15: Turning trajectory for field data.
Figure 16: Mean positioning deviation over epochs.
Figure 17: Mean positioning deviation along offset in the turning scenario.
Figure 18: Streamer shape in the (a) 10,000 s turning and (b) 5829.2 s straight scenarios.
Figure 19: Deviation quartiles over the temporal and spatial dimensions in the turning (a,b) and straight (c,d) scenarios.
Figure 20: Mean positioning deviation over epochs.
Figure 21: Mean positioning deviation along offset in the straight scenario.
12 pages, 2546 KiB  
Article
The Characterization of the Alcoholic Fermentation Process in Wine Production Based on Acoustic Emission Analysis
by Angel Sanchez-Roca, Juan-Ignacio Latorre-Biel, Emilio Jiménez-Macías, Juan Carlos Saenz-Díez and Julio Blanco-Fernández
Processes 2024, 12(12), 2797; https://doi.org/10.3390/pr12122797 - 7 Dec 2024
Abstract
This experimental study assessed the viability of using an acoustic emission signal as a monitoring instrument to predict the chemical characteristics of wine throughout the alcoholic fermentation process. The purpose of the study is to acquire the acoustic emission signals generated by CO₂ bubbles in order to calculate the must density and monitor the kinetics of alcoholic fermentation. The kinetics of the process were evaluated in real time using a hydrophone immersed in the liquid within the fermentation tank. Measurements were conducted in multiple fermentation tanks at a winery producing wines under the Rioja Denomination of Origin (D.O.) designation. Acoustic signals were acquired throughout the fermentation process, with a sampling period of five minutes, and stored for subsequent processing. To validate the results, measurements obtained manually in the laboratory by the winemaker were collected during this stage. Signal processing was conducted to extract descriptors from the acoustic signal and evaluate their correlation with the experimental data acquired during the process. The analyses confirm a high linear correlation between the density data obtained from the acoustic analysis and the density data obtained in the laboratory, with determination coefficients exceeding 95%. The acoustic emission signal is a valuable decision-making tool for technicians and winemakers due to its sensitivity in describing variations in kinetics and density during the alcoholic fermentation process.
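The validation step, a linear fit of acoustically derived values against densitometer readings judged by the coefficient of determination, can be sketched as below. The descriptor and density values are synthetic stand-ins, not the winery's data:

```python
import numpy as np

def linear_density_model(descriptor, density):
    """Fit density = k*descriptor + b and report the coefficient of
    determination R^2 of the fit."""
    k, b = np.polyfit(descriptor, density, 1)
    pred = k * descriptor + b
    ss_res = np.sum((density - pred) ** 2)
    ss_tot = np.sum((density - density.mean()) ** 2)
    return k, b, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(2)
# Hypothetical acoustic descriptor falling as fermentation consumes sugars,
# with must density (g/L) tracking it linearly plus measurement noise.
rms_level = np.linspace(1.0, 0.2, 60)
density = 990.0 + 100.0 * rms_level + rng.normal(0.0, 1.0, 60)
k, b, r2 = linear_density_model(rms_level, density)
```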
Show Figures

Figure 1: (a) Type A5 hydrophone and (b) location of the hydrophone inside the tank.
Figure 2: Experimental setup.
Figure 3: Instantaneous acceleration signal acquired by the hydrophones during the tests.
Figure 4: Temperature during the fermentation process in Tank 24.
Figure 5: Filtered signals: (a) Tank 24; (b) Tank 25; (c) Tank 26.
Figure 6: Process dynamic curves obtained from the acoustic signal generated by the hydrophone, together with the densities obtained from the acoustic signal and measured with a densitometer. (a) Tank 24; (b) Tank 25; (c) Tank 26.
Figure 7: Correlation between the density estimated from the acoustic signal and the value measured with a densitometer in the warehouse. (a) Tank 24; (b) Tank 25; (c) Tank 26; (d) All.
22 pages, 6513 KiB  
Article
A Novel Beam-Domain Direction-of-Arrival Tracking Algorithm for an Underwater Target
by Xianghao Hou, Weisi Hua, Yuxuan Chen and Yixin Yang
Remote Sens. 2024, 16(21), 4074; https://doi.org/10.3390/rs16214074 - 31 Oct 2024
Abstract
Underwater direction-of-arrival (DOA) tracking using a hydrophone array is an important research subject in passive sonar signal processing. In this study, a DOA tracking algorithm based on a novel beam-domain signal processing technique is proposed to ensure robust DOA tracking of an underwater target of interest in a low signal-to-noise ratio (SNR) environment. Firstly, a beam-based observation is designed, which innovatively applies beamforming after array-based observation to achieve specific spatial directivity. Next, the proportional–integral–differential (PID)-optimized Olen–Campton beamforming method (PIDBF) is designed for the beamforming process to achieve faster and more stable sidelobe control and enhance the target's SNR. An adaptive dynamic beam window is designed to focus the observation on the more likely observation area. Then, utilizing the extended Kalman filter (EKF) tracking framework, a novel PIDBF-optimized beam-domain DOA tracking algorithm (PIDBF-EKF) is proposed. Finally, simulations under different SNR scenarios and comprehensive analyses verify the superior performance of the proposed DOA tracking approach.
(This article belongs to the Section Ocean Remote Sensing)
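The Kalman-filter tracking stage can be illustrated with a minimal linear filter on a bearing/bearing-rate state. This omits the beamforming and PID sidelobe-control stages entirely, and all noise parameters and trajectories below are assumptions, not the paper's settings:

```python
import numpy as np

def track_bearing(measurements_deg, dt=1.0, q=1e-4, r=4.0):
    """Minimal Kalman tracker for a bearing drifting at a nearly constant
    rate; state is [bearing, bearing_rate]. Stands in for the EKF stage
    of the tracking framework."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                    # state transition
    H = np.array([[1.0, 0.0]])                               # observe bearing only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x = np.array([measurements_deg[0], 0.0])
    P = np.eye(2) * 10.0
    out = []
    for z in measurements_deg:
        x = F @ x                                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                                  # innovation covariance
        K = (P @ H.T) / S                                    # Kalman gain (2x1)
        x = x + (K * (z - H @ x)).ravel()                    # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(3)
true_bearing = 30.0 + 0.1 * np.arange(200)        # deg, slow drift
z = true_bearing + rng.normal(0.0, 2.0, 200)      # noisy DOA measurements
est = track_bearing(z)
rmse = float(np.sqrt(np.mean((est[50:] - true_bearing[50:]) ** 2)))
```

After a burn-in, the tracked bearing error falls well below the 2° measurement noise; in the paper, the beam-domain observation plays the role of supplying those measurements at low SNR.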
Show Figures

Figure 1

Figure 1
<p>Configuration of the ULA-based measurement system.</p>
Full article ">Figure 2
<p>Comparison of beam patterns for Olen series optimization methods at different iteration counts. (<b>a</b>) 1 iteration. (<b>b</b>) 5 iterations. (<b>c</b>) 10 iterations. (<b>d</b>) 20 iterations. (<b>e</b>) 50 iterations. (<b>f</b>) 100 iterations.</p>
Figure 3
<p>MSE and RMSE plots for the Olen series optimization methods. (<b>a</b>) MSE plot for the Olen series optimization methods. (<b>b</b>) RMSE plot for the Olen series optimization methods.</p>
Figure 4
<p>The time required for the Olen series optimization method to achieve an RMSE of 2.</p>
Figure 5
<p>The time required for the Olen series optimization method to achieve an RMSE of 1.</p>
Figure 6
<p>The time required for the Olen series optimization method to achieve an RMSE of 0.5.</p>
Figure 7
<p>Comparison of various methods with an SNR of 0 dB. (<b>a</b>) Comparison of bearing angle tracking result with an SNR of 0 dB. (<b>b</b>) BEEs obtained with an SNR of 0 dB.</p>
Figure 8
<p>Comparison of various methods with an SNR of −10 dB. (<b>a</b>) Comparison of bearing angle tracking result with an SNR of −10 dB. (<b>b</b>) BEEs obtained with an SNR of −10 dB.</p>
Figure 9
<p>Comparison of various methods with an SNR of −20 dB. (<b>a</b>) Comparison of bearing angle tracking result with an SNR of −20 dB. (<b>b</b>) BEEs obtained with an SNR of −20 dB.</p>
Figure 10
<p>Comparison of various methods with an SNR of −30 dB. (<b>a</b>) Comparison of bearing angle tracking result with an SNR of −30 dB. (<b>b</b>) BEEs obtained with an SNR of −30 dB.</p>
Figure 11
<p>Comparison of various methods with the number of beams set to 3. (<b>a</b>) Comparison of bearing angle tracking result with the number of beams set to 3. (<b>b</b>) BEEs obtained with the number of beams set to 3.</p>
Figure 12
<p>Comparison of various methods with the number of beams set to 5. (<b>a</b>) Comparison of bearing angle tracking result with the number of beams set to 5. (<b>b</b>) BEEs obtained with the number of beams set to 5.</p>
Figure 13
<p>Comparison of various methods with the number of beams set to 9. (<b>a</b>) Comparison of bearing angle tracking result with the number of beams set to 9. (<b>b</b>) BEEs obtained with the number of beams set to 9.</p>
Figure 14
<p>Comparison of various methods with the number of beams set to 15. (<b>a</b>) Comparison of bearing angle tracking result with the number of beams set to 15. (<b>b</b>) BEEs obtained with the number of beams set to 15.</p>
Figure 15
<p>Comparison of various methods with the number of beams set to 18. (<b>a</b>) Comparison of bearing angle tracking result with the number of beams set to 18. (<b>b</b>) BEEs obtained with the number of beams set to 18.</p>
Figure 16
<p>Comparison of various methods with the number of beams set to 25. (<b>a</b>) Comparison of bearing angle tracking result with the number of beams set to 25. (<b>b</b>) BEEs obtained with the number of beams set to 25.</p>
11 pages, 3672 KiB  
Article
Aquariums as Research Platforms: Characterizing Fish Sounds in Controlled Settings with Preliminary Insights from the Blackbar Soldierfish Myripristis jacobus
by Javier Almunia, María Fernández-Maquieira and Melvin Flores
J. Zool. Bot. Gard. 2024, 5(4), 630-640; https://doi.org/10.3390/jzbg5040042 - 29 Oct 2024
Viewed by 724
Abstract
This study highlights the potential of aquariums as research platforms for bioacoustic research. Aquariums provide access to a wide variety of fish species, offering unique opportunities to characterize their acoustic features in controlled settings. In particular, we present a preliminary description of the [...] Read more.
This study highlights the potential of aquariums as research platforms for bioacoustic research. Aquariums provide access to a wide variety of fish species, offering unique opportunities to characterize their acoustic features in controlled settings. In particular, we present a preliminary description of the acoustic characteristics of Myripristis jacobus, a soniferous species in the Holocentridae family, within a controlled environment at a zoological facility in the Canary Islands, Spain. Using two HydroMoth 1.0 hydrophones, we recorded vocalizations of the blackbar soldierfish in a glass tank, revealing a pulsed sound type with a peak frequency around 355 Hz (DS 64), offering a more precise characterization than previously available. The vocalizations exhibit two distinct patterns: short sequences with long pulse intervals and fast pulse trains with short inter-pulse intervals. Despite some limitations, this experimental setup highlights the efficacy of cost-effective methodologies in public aquariums for initial bioacoustic research. These findings contribute to the early stages of acoustic characterization of coastal fishes in the western central Atlantic, emphasizing the value of passive acoustic monitoring for ecological assessments and conservation efforts. Moreover, this study opens new avenues for considering the acoustic environment as a crucial factor in the welfare of captive fish, an aspect that has largely been overlooked in aquarium management. Full article
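A peak frequency like the one reported above can be obtained from a recording with a plain magnitude-spectrum peak pick. The sketch below uses a synthetic 355 Hz tone in broadband noise as a stand-in for a real hydrophone clip; the sample rate and signal model are illustrative assumptions, not the study's data.

```python
import numpy as np

def peak_frequency(x, fs):
    """Frequency (Hz) of the largest magnitude-spectrum bin."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

# Stand-in for a recorded call: 1 s of a 355 Hz tone in noise.
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
clip = np.sin(2 * np.pi * 355.0 * t) + 0.3 * rng.standard_normal(fs)
f_peak = peak_frequency(clip, fs)
```

With a 1 s window the bin spacing is 1 Hz, which is ample for the roughly 355 Hz peak described; for pulsed sounds in practice one would window each pulse before transforming.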
Show Figures

Graphical abstract
Figure 1
<p>Diagram of the experimental setup. This figure illustrates the layout of the aquarium, including the positioning of the hydrophones (b,c) and the 360° camera (a) used for video recording during the experiment. The hydrophones were strategically placed in the middle of the tank to minimize directional bias and ensure the best possible sound capture from the fish. The figure highlights the placement of the hydrophones in relation to the fish and the tank walls, which is crucial for understanding the controlled environment and the measures taken to minimize external noise and vibrations.</p>
Figure 2
<p>Spectrogram and oscillogram of a sequence of two pulses. This figure represents an example of the acoustic signals recorded from <span class="html-italic">M. jacobus</span>. The spectrogram shows the frequency distribution over time, while the oscillogram provides a visual representation of the sound wave’s amplitude. The two-pulse sequence depicted here is a common sound pattern produced by the species.</p>
Figure 3
<p>Spectrogram and oscillogram of a pulse train. This figure shows a more complex sound event recorded during the experiment, featuring a pulse train composed of multiple pulses in rapid succession. The figure is essential for understanding the variability in <span class="html-italic">M. jacobus</span> vocalizations, as it demonstrates the occurrence of more rapid pulse sequences, which may be associated with specific behaviors or stress responses.</p>
16 pages, 955 KiB  
Article
Automatically Differentiable Higher-Order Parabolic Equation for Real-Time Underwater Sound Speed Profile Sensing
by Mikhail Lytaev
J. Mar. Sci. Eng. 2024, 12(11), 1925; https://doi.org/10.3390/jmse12111925 - 28 Oct 2024
Viewed by 596
Abstract
This paper is dedicated to the acoustic inversion of the vertical sound speed profiles (SSPs) in the underwater marine environment. The method of automatic differentiation is applied for the first time in this context. Representing the finite-difference Padé approximation of the propagation operator [...] Read more.
This paper is dedicated to the acoustic inversion of the vertical sound speed profiles (SSPs) in the underwater marine environment. The method of automatic differentiation is applied for the first time in this context. Representing the finite-difference Padé approximation of the propagation operator as a computational graph allows for the analytical computation of the gradient with respect to the SSP directly within the numerical scheme. The availability of the gradient, along with the high computational efficiency of the numerical method used, enables rapid inversion of the SSP based on acoustic measurements from a hydrophone array. It is demonstrated that local optimization methods can be effectively used for real-time sound speed inversion. Comparative analysis with existing methods shows the significant superiority of the proposed method in terms of computation speed. Full article
(This article belongs to the Section Ocean Engineering)
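Automatic differentiation yields exact gradients of a numerical model without hand-derived adjoints. The toy sketch below uses forward-mode dual numbers to differentiate a deliberately simple straight-ray travel-time misfit for a two-parameter linear SSP, then runs gradient descent to invert the parameters. The Padé propagation operator of the paper is not reproduced; the forward model, parameterization, and step size here are all hypothetical.

```python
class Dual:
    """Forward-mode AD number: v is the value, d the derivative tangent."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o); return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o); return Dual(self.v - o.v, self.d - o.d)
    def __rsub__(self, o):
        return self._lift(o).__sub__(self)
    def __mul__(self, o):
        o = self._lift(o); return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = self._lift(o)
        return Dual(self.v / o.v, (self.d * o.v - self.v * o.d) / (o.v * o.v))
    def __rtruediv__(self, o):
        return self._lift(o).__truediv__(self)

# Toy forward model: travel times r / c(z) for the linear SSP
# c(z) = 1500 * (1 + a + b * z / 200); a and b are the unknowns.
RANGE_M, DEPTHS = 5000.0, [20.0 * i for i in range(11)]

def misfit(a, b, t_obs):
    res = [RANGE_M / (1500.0 * (1.0 + a + b * z / 200.0)) - t
           for z, t in zip(DEPTHS, t_obs)]
    return sum((e * e for e in res), 0.0)

t_obs = [RANGE_M / (1500.0 * (1.0 + 0.01 + 0.005 * z / 200.0)) for z in DEPTHS]

def grad(a, b):  # two forward passes, one per parameter
    return (misfit(Dual(a, 1.0), Dual(b, 0.0), t_obs).d,
            misfit(Dual(a, 0.0), Dual(b, 1.0), t_obs).d)

# Sanity check the AD gradient against a central finite difference.
h = 1e-6
fd_a = (misfit(h, 0.0, t_obs) - misfit(-h, 0.0, t_obs)) / (2 * h)
ga0, gb0 = grad(0.0, 0.0)

# Gradient descent recovers the profile parameters (true a=0.01, b=0.005).
a, b, lr = 0.0, 0.0, 3e-3
j0 = misfit(a, b, t_obs)
for _ in range(2000):
    ga, gb = grad(a, b)
    a, b = a - lr * ga, b - lr * gb
j_end = misfit(a, b, t_obs)
```

The paper uses the same idea at scale: the finite-difference Padé scheme is expressed as a computational graph, so the gradient with respect to the whole SSP comes out of one differentiated solve, enabling the fast local optimization reported.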
Show Figures

Figure 1
<p>Schematic description of the problem.</p>
Figure 2
<p>Original and inverted SSPs. Each subsequent profile shifted by 1 m/s for clarity.</p>
Figure 3
<p>Original and inverted SSPs at <span class="html-italic">t</span> = 20, 30, 40, 50.</p>
Figure 4
<p>Pointwise difference between the original and inverted SSPs (<math display="inline"><semantics> <mrow> <mi>c</mi> <mfenced open="(" close=")"> <mi>z</mi> </mfenced> <mo>−</mo> <mover accent="true"> <mi>c</mi> <mo stretchy="false">˜</mo> </mover> <mfenced open="(" close=")"> <mi>z</mi> </mfenced> </mrow> </semantics></math>). Each subsequent profile shifted by 1 m/s for clarity.</p>
Figure 5
<p>Error between original and inverted SSPs.</p>
Figure 6
<p>Number of function evaluations and gradient computations at each step.</p>
Figure 7
<p>Two-dimensional distributions of the acoustic pressure (<math display="inline"><semantics> <mrow> <mrow> <mn>20</mn> <mi>log</mi> <mo>|</mo> <mi>ψ</mi> </mrow> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>,</mo> <mi>z</mi> </mfenced> <mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>) at different time points. <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> Hz.</p>
Figure 8
<p>Error between original and inverted SSPs for various values of SNR. <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>=</mo> <mn>200</mn> </mrow> </semantics></math> Hz, range <math display="inline"><semantics> <mrow> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> km.</p>
21 pages, 2399 KiB  
Article
Gridless DOA Estimation Method for Arbitrary Array Geometries Based on Complex-Valued Deep Neural Networks
by Yuan Cao, Tianjun Zhou and Qunfei Zhang
Remote Sens. 2024, 16(19), 3752; https://doi.org/10.3390/rs16193752 - 9 Oct 2024
Viewed by 1160
Abstract
Gridless direction of arrival (DOA) estimation methods have garnered significant attention due to their ability to avoid grid mismatch errors, which can adversely affect the performance of high-resolution DOA estimation algorithms. However, most existing gridless methods are primarily restricted to applications involving uniform [...] Read more.
Gridless direction of arrival (DOA) estimation methods have garnered significant attention due to their ability to avoid grid mismatch errors, which can adversely affect the performance of high-resolution DOA estimation algorithms. However, most existing gridless methods are primarily restricted to applications involving uniform linear arrays or sparse linear arrays. In this paper, we derive the relationship between the element-domain covariance matrix and the angular-domain covariance matrix for arbitrary array geometries by expanding the steering vector using a Fourier series. Then, a deep neural network is designed to reconstruct the angular-domain covariance matrix from the sample covariance matrix and the gridless DOA estimation can be obtained by Root-MUSIC. Simulation results on arbitrary array geometries demonstrate that the proposed method outperforms existing methods like MUSIC, SPICE, and SBL in terms of resolution probability and DOA estimation accuracy, especially when the angular separation between targets is small. Additionally, the proposed method does not require any hyperparameter tuning, is robust to varying snapshot numbers, and has a lower computational complexity. Finally, real hydrophone data from the SWellEx-96 ocean experiment validates the effectiveness of the proposed method in practical underwater acoustic environments. Full article
(This article belongs to the Special Issue Ocean Remote Sensing Based on Radar, Sonar and Optical Techniques)
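The last stage of the pipeline, Root-MUSIC, turns a covariance matrix into gridless angle estimates by rooting the noise-subspace polynomial. A standard numpy sketch for a half-wavelength ULA is below; it is fed a toy sample covariance rather than the network-reconstructed angular-domain covariance of the paper, and the scenario parameters are illustrative.

```python
import numpy as np

def root_music(R, K):
    """Gridless DOA from covariance R of a half-wavelength ULA (angles
    measured from the array axis, steering phase pi*cos(theta))."""
    M = R.shape[0]
    _, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, :M - K]                        # noise subspace
    C = En @ En.conj().T
    # Polynomial coefficients are the summed diagonals of C.
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]       # one of each reciprocal pair
    roots = roots[np.argsort(1.0 - np.abs(roots))[:K]]  # closest to circle
    cos_th = np.angle(roots) / np.pi
    return np.sort(np.degrees(np.arccos(np.clip(cos_th, -1.0, 1.0))))

# Toy data: two unit-power sources at 80 and 100 degrees, 20 dB SNR.
rng = np.random.default_rng(2)
M, L, doas = 10, 200, np.array([80.0, 100.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.cos(np.radians(doas))))
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)
Y = A @ S + N
R = Y @ Y.conj().T / L
est = root_music(R, K=2)
```

The paper's contribution sits upstream of this step: for arbitrary geometries the sample covariance is first mapped by the complex-valued network into an angular-domain covariance on which a Root-MUSIC-style rooting becomes applicable.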
Show Figures

Figure 1
<p>(<b>a</b>) Magnitude of Fourier coefficients at various orders. (<b>b</b>) Relationship between array steering vector error and <span class="html-italic">N</span>.</p>
Figure 2
<p>Angular-domain covariance matrix reconstruction network architecture.</p>
Figure 3
<p>Training loss, validation loss, and learning rate variation with epochs for the following: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math>. (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>7</mn> </mrow> </semantics></math>. (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Figure 4
<p>DOA estimation performance of CDNNs with truncation orders <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>7</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> at different target angular separations. (<b>a</b>) RMSE and (<b>b</b>) RP for <math display="inline"><semantics> <mrow> <mi mathvariant="bold-italic">θ</mi> <mo>=</mo> <mfenced separators="" open="[" close="]"> <mrow> <msup> <mrow> <mn>85</mn> </mrow> <mo>∘</mo> </msup> <mo>,</mo> <msup> <mrow> <mn>95</mn> </mrow> <mo>∘</mo> </msup> </mrow> </mfenced> </mrow> </semantics></math>. (<b>c</b>) RMSE and (<b>d</b>) RP for <math display="inline"><semantics> <mrow> <mi mathvariant="bold-italic">θ</mi> <mo>=</mo> <mfenced separators="" open="[" close="]"> <mrow> <msup> <mrow> <mn>80</mn> </mrow> <mo>∘</mo> </msup> <mo>,</mo> <msup> <mrow> <mn>100</mn> </mrow> <mo>∘</mo> </msup> </mrow> </mfenced> </mrow> </semantics></math>. (<b>e</b>) RMSE and (<b>f</b>) RP for <math display="inline"><semantics> <mrow> <mi mathvariant="bold-italic">θ</mi> <mo>=</mo> <mfenced separators="" open="[" close="]"> <mrow> <msup> <mrow> <mn>70</mn> </mrow> <mo>∘</mo> </msup> <mo>,</mo> <msup> <mrow> <mn>110</mn> </mrow> <mo>∘</mo> </msup> </mrow> </mfenced> </mrow> </semantics></math>.</p>
Figure 5
<p>DOA estimation results. The proposed method, source numbers (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, and (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>. MUSIC, source numbers (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, (<b>e</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, and (<b>f</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>. SPICE, source numbers (<b>g</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, (<b>h</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, and (<b>i</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>. SBL, source numbers (<b>j</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, (<b>k</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, and (<b>l</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>.</p>
Figure 5 Cont.
Figure 6
<p>Relationship between SNR and both RMSE and RP under spatio-temporal Gaussian white noise conditions.(<b>a</b>) RMSE and (<b>b</b>) RP for <math display="inline"><semantics> <mrow> <mi mathvariant="bold-italic">θ</mi> <mo>=</mo> <mfenced separators="" open="[" close="]"> <mrow> <msup> <mrow> <mn>80</mn> </mrow> <mo>∘</mo> </msup> <mo>,</mo> <msup> <mrow> <mn>100</mn> </mrow> <mo>∘</mo> </msup> </mrow> </mfenced> </mrow> </semantics></math>. (<b>c</b>) RMSE and (<b>d</b>) RP for <math display="inline"><semantics> <mrow> <mi mathvariant="bold-italic">θ</mi> <mo>=</mo> <mfenced separators="" open="[" close="]"> <mrow> <msup> <mrow> <mn>70</mn> </mrow> <mo>∘</mo> </msup> <mo>,</mo> <msup> <mrow> <mn>110</mn> </mrow> <mo>∘</mo> </msup> </mrow> </mfenced> </mrow> </semantics></math>.</p>
Figure 7
<p>Relationship between RP and RMSE with respect to angle separation. (<b>a</b>) RMSE. (<b>b</b>) RP.</p>
Figure 8
<p>Algorithm performance under different snapshot conditions. (<b>a</b>) RMSE. (<b>b</b>) RP.</p>
Figure 9
<p>Schematic of the Swellex-96 Event S59 experiment scenario [<a href="#B38-remotesensing-16-03752" class="html-bibr">38</a>].</p>
Figure 10
<p>BTR results using different methods. (<b>a</b>) GPS. (<b>b</b>) CBF. (<b>c</b>) Proposed method. (<b>d</b>) MUSIC. (<b>e</b>) SPICE. (<b>f</b>) SBL.</p>
Figure 10 Cont.
Figure 11
<p>Processing results of Swellex-96 Event S59 data using different methods. (<b>a</b>) RP. (<b>b</b>) RMSE. (<b>c</b>) CPU time.</p>
16 pages, 2194 KiB  
Article
DOA Estimation Method for Vector Hydrophones Based on Sparse Bayesian Learning
by Hongyan Wang, Yanping Bai, Jing Ren, Peng Wang, Ting Xu, Wendong Zhang and Guojun Zhang
Sensors 2024, 24(19), 6439; https://doi.org/10.3390/s24196439 - 4 Oct 2024
Viewed by 799
Abstract
Through extensive literature review, it has been found that sparse Bayesian learning (SBL) is mainly applied to traditional scalar hydrophones and is rarely applied to vector hydrophones. This article proposes a direction of arrival (DOA) estimation method for vector hydrophones based on SBL [...] Read more.
Through extensive literature review, it has been found that sparse Bayesian learning (SBL) is mainly applied to traditional scalar hydrophones and is rarely applied to vector hydrophones. This article proposes a direction of arrival (DOA) estimation method for vector hydrophones based on SBL (Vector-SBL). Firstly, vector hydrophones capture both sound pressure and particle velocity, enabling the acquisition of multidimensional sound field information. Secondly, SBL accurately reconstructs the received vector signal, addressing challenges like low signal-to-noise ratio (SNR), limited snapshots, and coherent sources. Finally, precise DOA estimation is achieved for multiple sources without prior knowledge of their number. Simulation experiments have shown that compared with the OMP, MUSIC, and CBF algorithms, the proposed method exhibits higher DOA estimation accuracy under conditions of low SNR, small snapshots, multiple sources, and coherent sources. Furthermore, it demonstrates superior resolution when dealing with closely spaced signal sources. Full article
(This article belongs to the Section Optical Sensors)
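The SBL machinery itself is compact: alternate between a Gaussian posterior over grid amplitudes and an evidence-style update of per-angle variances, whose peaks mark the DOAs without knowing the source count. The sketch below builds a small vector-hydrophone dictionary (a pressure channel plus two particle-velocity channels per sensor, directivities 1, cos θ, sin θ) on a 1° grid; the array geometry, noise level, and simplified EM-style update are illustrative, not the paper's exact Vector-SBL.

```python
import numpy as np

GRID = np.radians(np.arange(0.0, 181.0))       # 1-degree bearing grid

def vector_steering(m_sensors, theta):
    """ULA of 2-D vector hydrophones: each sensor contributes a pressure
    channel and two velocity channels with directivities (1, cos, sin)."""
    phase = np.exp(1j * np.pi * np.arange(m_sensors) * np.cos(theta))
    return np.kron(phase, np.array([1.0, np.cos(theta), np.sin(theta)]))

def sbl_spectrum(Y, A, sigma2, n_iter=100):
    """EM-style sparse Bayesian learning; returns per-angle power gamma."""
    n_grid, L = A.shape[1], Y.shape[1]
    gamma = np.ones(n_grid)
    AhA = A.conj().T @ A
    for _ in range(n_iter):
        Sigma = np.linalg.inv(AhA / sigma2 + np.diag(1.0 / np.maximum(gamma, 1e-10)))
        Mu = Sigma @ (A.conj().T @ Y) / sigma2
        gamma = np.sum(np.abs(Mu) ** 2, axis=1) / L + np.real(np.diag(Sigma))
    return gamma

# Two sources at 60 and 75 degrees, 12 sensors (36 channels), 50 snapshots.
rng = np.random.default_rng(3)
M, L = 12, 50
A = np.stack([vector_steering(M, th) for th in GRID], axis=1)
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2)
Y = A[:, [60, 75]] @ S
Y = Y + np.sqrt(0.05) * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))
gamma = sbl_spectrum(Y, A, sigma2=0.1)

# Pick the two dominant, well-separated peaks of the gamma spectrum.
i1 = int(np.argmax(gamma))
g2 = gamma.copy()
g2[max(0, i1 - 5):i1 + 6] = 0.0
est = sorted([i1, int(np.argmax(g2))])
```

Because the per-angle variances are learned rather than thresholded, the number of dominant peaks emerges from the data, which is why SBL methods need no prior source count.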
Show Figures

Figure 1
<p>Two-dimensional vector hydrophone array signal receiving model.</p>
Figure 2
<p>RMSE variation curves with the number of snapshots under different signal source conditions. (<b>a</b>) For 1 signal source, (<b>b</b>) 3 signal sources, and (<b>c</b>) 5 signal sources.</p>
Figure 3
<p>RMSE variation curves with SNR under different signal source conditions. (<b>a</b>) For 1 signal source, (<b>b</b>) 3 signal sources, and (<b>c</b>) 5 signal sources.</p>
Figure 4
<p>RMSE variation curves with angular difference under different signal source conditions. (<b>a</b>) For 3 signal sources and (<b>b</b>) 5 signal sources.</p>
Figure 5
<p>Comparison of the performance under coherent source conditions. (<b>a</b>) RMSE variation curves with the number of snapshots, (<b>b</b>) RMSE variation curves with SNR, and (<b>c</b>) RMSE variation curves with angular difference.</p>
Figure 6
<p>Curve of success rate versus signal-to-noise ratio.</p>
Figure 7
<p>Schematic diagram of an experiment conducted at a reservoir in Taiyuan, Shanxi Province.</p>
Figure 8
<p>Measurement results of the Vector-SBL algorithm.</p>
22 pages, 1364 KiB  
Article
Signal Denoising Method Based on EEMD and SSA Processing for MEMS Vector Hydrophones
by Peng Wang, Jie Dong, Lifu Wang and Shuhui Qiao
Micromachines 2024, 15(10), 1183; https://doi.org/10.3390/mi15101183 - 24 Sep 2024
Viewed by 3514
Abstract
The vector hydrophone is playing a more and more prominent role in underwater acoustic engineering, and it is a research hotspot in many countries; however, it also has some shortcomings. For the mixed problem involving received signals in micro-electromechanical system (MEMS) vector hydrophones [...] Read more.
The vector hydrophone plays an increasingly prominent role in underwater acoustic engineering and is a research hotspot in many countries; however, it also has some shortcomings. When the signals received by micro-electromechanical system (MEMS) vector hydrophones are mixed with a large amount of external environmental noise, noise and drift inevitably occur, and the resulting distortion makes further signal detection and recognition difficult. In this study, a new method for denoising MEMS vector hydrophone signals that combines ensemble empirical mode decomposition (EEMD) and singular spectrum analysis (SSA) is proposed to improve the utilization of received signals. First, the main frequency of the noisy signal is identified using a Fourier transform. Then, the noisy signal is decomposed by EEMD to obtain the intrinsic mode function (IMF) components, and the center frequency of each IMF component determines whether it is a noise IMF component, an invalid IMF component, or a pure IMF component. The pure IMF components are retained, while the noise and invalid IMF components are removed. Finally, the signal reconstructed from the retained IMF components is processed by SSA to obtain the denoised signal, which extracts the useful signal and removes the drift; the role of SSA is to effectively separate the trend noise and the periodic vibration noise. Compared to EEMD and SSA applied separately, the proposed EEMD-SSA algorithm has a better denoising effect and can remove drift. EEMD-SSA is then used to process data measured in the Fenhe River in an experiment carried out by the North University of China. The simulation and lake test results show that the proposed EEMD-SSA has practical research value. Full article
(This article belongs to the Special Issue MEMS Sensors and Actuators: Design, Fabrication and Applications)
Show Figures

Figure 1
<p>Flowchart of the EEMD-SSA algorithm.</p>
Figure 2
<p>Original signal of Equation (<a href="#FD8-micromachines-15-01183" class="html-disp-formula">8</a>).</p>
Figure 3
<p>IMF signal and corresponding spectrum of Equation (<a href="#FD8-micromachines-15-01183" class="html-disp-formula">8</a>).</p>
Figure 3 Cont.
Figure 4
<p>The denoising result of EEMD algorithm of Equation (<a href="#FD8-micromachines-15-01183" class="html-disp-formula">8</a>).</p>
Figure 5
<p>The denoising result of SSA algorithm of Equation (<a href="#FD8-micromachines-15-01183" class="html-disp-formula">8</a>).</p>
Figure 5 Cont.
Figure 6
<p>Time domain signals of different methods of Equation (<a href="#FD8-micromachines-15-01183" class="html-disp-formula">8</a>).</p>
Figure 7
<p>Comparison of denoising results of different algorithms of Equation (<a href="#FD8-micromachines-15-01183" class="html-disp-formula">8</a>).</p>
Figure 7 Cont.
Figure 8
<p>Original signal of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with −10 dB.</p>
Figure 9
<p>IMF signal and corresponding spectrum of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with −10 dB.</p>
Figure 10
<p>The denoising result of EEMD algorithm of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with −10 dB.</p>
Figure 11
<p>The denoising result of SSA algorithm of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with −10 dB.</p>
Figure 12
<p>Time-domain signals of different methods of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with −10 dB.</p>
Figure 13
<p>Comparison of denoising results of different algorithms of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with −10 dB.</p>
Figure 14
<p>Original signal of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with 10 dB.</p>
Figure 15
<p>IMF signal and corresponding spectrum of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with 10 dB.</p>
Figure 16
<p>The denoising result of EEMD algorithm of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with 10 dB.</p>
Figure 17
<p>The denoising result of SSA algorithm of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with 10 dB.</p>
Figure 18
<p>Time domain signals of different methods of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with 10 dB.</p>
Figure 19
<p>Comparison of denoising results of different algorithms of Equation (<a href="#FD9-micromachines-15-01183" class="html-disp-formula">9</a>) with 10 dB.</p>
Figure 20
<p>Experimental process.</p>
Figure 21
<p>Measured signal of MEMS hydrophone.</p>
Figure 22
<p>IMF signal and corresponding spectrum of measured signal.</p>
Figure 23
<p>The denoising result of EEMD algorithm of measured signal.</p>
Figure 24
<p>The denoising result of SSA algorithm of measured signal.</p>
Figure 25
<p>Comparison of denoising results of different algorithms of measured signal.</p>
14 pages, 4692 KiB  
Article
Experimental Study of Surface Microtexture Formed by Laser-Induced Cavitation Bubble on 7050 Aluminum Alloy
by Bin Li, Byung-Won Min, Yingxian Ma, Rui Zhou, Hai Gu and Yupeng Cao
Coatings 2024, 14(9), 1230; https://doi.org/10.3390/coatings14091230 - 23 Sep 2024
Viewed by 966
Abstract
In order to study the feasibility of forming microtexture at the surface of 7050 aluminum alloy by laser-induced cavitation bubble, and how the density of microtexture influences its tribological properties, the evolution of the cavitation bubble was captured by a high-speed camera, and [...] Read more.
In order to study the feasibility of forming microtexture at the surface of 7050 aluminum alloy by laser-induced cavitation bubble, and how the density of microtexture influences its tribological properties, the evolution of the cavitation bubble was captured by a high-speed camera, and the underwater acoustic signal of evolution was collected by a fiber optic hydrophone system. This combined approach was used to study the effect of the cavitation bubble on 7050 aluminum alloy. The surface morphology of the microtexture was analyzed by a confocal microscope, and the tribological properties of the microtexture were analyzed by a friction testing machine. Then the feasibility of the preparation process was verified and the optimal density was obtained. The study shows that the microtexture on the surface of a sample is formed by the combined results of the plasma shock wave and the collapse shock wave. When the density of microtexture is less than or equal to 19.63%, the diameters of the micropits range from 478 μm to 578 μm, and the depths of the micropits range from 13.56 μm to 18.25 μm. This shows that the laser-induced cavitation bubble is able to form repeatable microtexture. The friction coefficient of the sample with microtexture is lower than that of the untextured sample, with an average friction coefficient of 0.16. This indicates that the microtexture formed by laser-induced cavitation bubble has a good lubrication effect. The sample with a density of 19.63% is uniform and smooth, having the minimum friction coefficient, with an average friction coefficient of 0.14. This paper provides a new approach for microtexture processing of metal materials. Full article
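For context on the bubble dynamics measured with the high-speed camera and hydrophone, the collapse time of a spherical cavity can be cross-checked against the classical Rayleigh estimate τ ≈ 0.915 R_max √(ρ/Δp). The values below are illustrative physical constants, not the study's measurements.

```python
import math

def rayleigh_collapse_time(r_max, rho=998.0, p_inf=101325.0, p_vap=2330.0):
    """Rayleigh collapse time (s) of a spherical cavity of maximum radius
    r_max (m) in a liquid of density rho; the driving pressure is the
    ambient pressure minus the vapor pressure."""
    return 0.91468 * r_max * math.sqrt(rho / (p_inf - p_vap))

# A 3 mm bubble in water at atmospheric pressure collapses in ~0.28 ms;
# a full growth-collapse cycle is roughly twice that.
tau = rayleigh_collapse_time(3e-3)
```

Comparing such an estimate against the camera frame timing and the collapse-shock arrival on the hydrophone is a quick consistency check on the measured bubble radius.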
Show Figures

Figure 1
<p>Schematic diagram of laser-induced cavitation bubble microtexture platform.</p>
Figure 2
<p>Test equipment: (<b>a</b>) high-speed camera, (<b>b</b>) fiber optic hydrophone, (<b>c</b>) confocal microscope, (<b>d</b>) friction and wear testing machine.</p>
Figure 3
<p>The route of microtexture formed by laser-induced cavitation bubble.</p>
Figure 4
<p>Evolution of laser-induced cavitation bubble with an energy of 400 mJ.</p>
Figure 5
<p>Relationship between size and time of laser-induced cavitation bubble with an energy of 400 mJ.</p>
Figure 6
<p>Underwater acoustic signal of laser-induced cavitation bubble with energy of 400 mJ.</p>
Figure 7
<p>The micropitted three-dimensional morphology of the specimen surface: (<b>a</b>) three-dimensional morphology of the micropit, (<b>b</b>) the morphology of the micropit.</p>
Full article ">Figure 8
<p>Macroscopic morphology of samples with different densities: (<b>a</b>) 0%, (<b>b</b>) 8.72%, (<b>c</b>) 12.56%, (<b>d</b>) 19.63%, (<b>e</b>) 34.88%.</p>
Full article ">Figure 9
<p>Surface 3D morphology of samples with different densities: (<b>a</b>) 8.72%, (<b>b</b>) 12.56%, (<b>c</b>) 19.63%, (<b>d</b>) 34.88%.</p>
Full article ">Figure 10
<p>Two-dimensional cross-section of samples with different densities: (<b>a</b>) 8.72%, (<b>b</b>) 12.56%, (<b>c</b>) 19.63%, (<b>d</b>) 34.88%.</p>
Full article ">Figure 11
<p>Relationship between mechanical properties of sample and density.</p>
Full article ">Figure 12
<p>Friction curves of different microtexture densities. (<b>a</b>) 600S rule of friction coefficient. (<b>b</b>) 50S rule of friction coefficient.</p>
Full article ">
20 pages, 2723 KiB  
Article
Source Range Estimation Using Linear Frequency-Difference Matched Field Processing in a Shallow Water Waveguide
by Penghua Song, Haozhong Wang, Bolin Su, Liang Wang and Wei Gao
Remote Sens. 2024, 16(18), 3529; https://doi.org/10.3390/rs16183529 - 23 Sep 2024
Viewed by 649
Abstract
Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, imperfect knowledge of the actual propagation environment and sidelobes due to modal interference prevent accurate propagation modeling and source localization via MFP. To suppress the sidelobes and improve the method’s robustness, a linear frequency-difference matched field processing (LFDMFP) method for estimating the source range is proposed. A high-order cross-spectrum between the measurement and the replica at two neighboring frequencies is first computed for each hydrophone of the vertical line array. The cost function is then derived from the dual summation (or double integral) of the high-order cross-spectrum over the hydrophone depths and the candidate source positions of the replicas; the range corresponding to its minimum is the optimal estimate. Because of the larger modal interference distances at the difference frequency, LFDMFP efficiently yields a single optimal range within a given range search interval, whereas conventional matched field processing can produce multiple ambiguous peaks. The effectiveness of the presented method was verified using simulations and experiments: the LFDMFP unambiguously estimated the source range in two experimental datasets, with average relative errors of 2.2% and 1.9%. Full article
(This article belongs to the Special Issue Ocean Remote Sensing Based on Radar, Sonar and Optical Techniques)
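The core idea above can be sketched numerically: form the frequency-difference field p(f₁)·p*(f₂) on each hydrophone, do the same for the replicas at every candidate range, and pick the range whose replica difference-field best matches the data. The sketch below uses a simple spherical-wave phase model and a normalized inner-product mismatch as a stand-in for the paper’s dual-summation cost; the geometry, frequencies, and propagation model are illustrative assumptions, not the paper’s normal-mode setup.

```python
import numpy as np

def lfdmfp_cost(p_f1, p_f2, g_f1, g_f2):
    # The frequency-difference field p(f1)*conj(p(f2)) oscillates at
    # the small difference frequency, so its interference pattern has
    # a much longer period in range and far fewer sidelobes.
    d = p_f1 * np.conj(p_f2)              # measured difference field
    r = g_f1 * np.conj(g_f2)              # replica difference fields
    d = d / np.linalg.norm(d)
    r = r / np.linalg.norm(r, axis=1, keepdims=True)
    return 1.0 - np.abs(r @ np.conj(d))   # zero at a perfect match

# Toy replicas: spherical-wave phases on a 15-element vertical array.
depths = np.linspace(10.0, 200.0, 15)     # receiver depths, m
ranges = np.linspace(500.0, 3000.0, 251)  # candidate ranges, m
dist = np.sqrt(ranges[:, None] ** 2 + depths[None, :] ** 2)
k1 = 2.0 * np.pi * 300.0 / 1500.0         # wavenumber at 300 Hz
k2 = 2.0 * np.pi * 310.0 / 1500.0         # wavenumber at 310 Hz
g1, g2 = np.exp(1j * k1 * dist), np.exp(1j * k2 * dist)

true_idx = 120                            # "data" taken from 1700 m
cost = lfdmfp_cost(g1[true_idx], g2[true_idx], g1, g2)
print(ranges[int(np.argmin(cost))])       # 1700.0
```

In this noiseless toy the cost is exactly zero at the true range and strictly positive elsewhere; with the 10 Hz difference frequency the ambiguity surface is smooth enough that a single minimum appears in the search interval, which is the effect the paper exploits.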
Show Figures

Figure 1. Simulation environment and localization results for the LFDMFP and CMFP. (a) The sound speed profile of the simulated environment. (b) The dot-dashed curve is the auto-spectrum of 300 Hz, the dashed curve is the auto-spectrum of 310 Hz, and the solid curve is the cross-spectrum between 300 and 310 Hz. (c) The cost function using Equation (11). (d) The change in the reformulated cost function using Equation (14) with range.
Figure 2. Source depth estimation using Equation (15).
Figure 3. Influence of SNR on the two methods.
Figure 4. Orthogonality of normal modes at six different frequency differences: (a) 300 and 300.01 Hz, (b) 300 and 301 Hz, (c) 300 and 310 Hz, (d) 300 and 350 Hz, (e) 300 and 400 Hz, and (f) 300 and 600 Hz.
Figure 5. Range estimation of the LFDMFP at six different frequency differences: (a) 300 and 300.01 Hz, (b) 300 and 301 Hz, (c) 300 and 310 Hz, (d) 300 and 350 Hz, (e) 300 and 400 Hz, and (f) 300 and 600 Hz.
Figure 6. The influence of the frequency difference on the LFDMFP.
Figure 7. Source trajectory and array position.
Figure 8. Laoshan Bay condition: (a) the sound speed profile of the acoustic propagation experiment and the depth of each receiver of the 15-element VLA labeled by dots; (b) a snapshot of the signals received by the VLA.
Figure 9. Localization results for the LFDMFP and CMFP: (a) the incoherent averaging ambiguity function of the CMFP; (b) the cost function of the LFDMFP at the frequency difference [780 Hz, 786 Hz]; (c) the depth of the source estimated by Equation (15).
Figure 10. Range estimation and error of the LFDMFP at the different frequency differences.
Figure 11. The deployment depth of the VLA and sound speed in the water body in the experimental sea areas: (a) the sound speed profile of the acoustic propagation experiment; (b) the depths of the 32 VLA elements labeled by dots.
Figure 12. Localization results for the LFDMFP and CMFP: (a) the incoherent averaging ambiguity function of the CMFP; (b) the cost function of the LFDMFP at the frequency difference [297 Hz, 301 Hz]; (c) the depth of the source estimated using Equation (15).
Figure 13. Localization results for the LFDMFP and CMFP: (a) the incoherent averaging ambiguity function of the CMFP; (b) the cost function of the LFDMFP at the frequency difference [300 Hz, 306 Hz]; (c) the depth of the source estimated using Equation (15).
Figure 14. Comparison of range estimations and the real range for the CMFP and LFDMFP at different positions.
26 pages, 14062 KiB  
Article
Off-Grid Underwater Acoustic Source Direction-of-Arrival Estimation Method Based on Iterative Empirical Mode Decomposition Interval Threshold
by Chuanxi Xing, Guangzhi Tan and Saimeng Dong
Sensors 2024, 24(17), 5835; https://doi.org/10.3390/s24175835 - 8 Sep 2024
Viewed by 1009
Abstract
To address the problem that hydrophone arrays are disturbed by ocean noise when collecting signals in shallow seas, which reduces the accuracy and resolution of target bearing estimation, a direction-of-arrival (DOA) estimation algorithm based on iterative EMD interval thresholding (EMD-IIT) and off-grid sparse Bayesian learning is proposed. First, the noisy signal acquired by the hydrophone array is denoised by the EMD-IIT algorithm. Second, singular value decomposition is performed on the denoised signal, and an off-grid sparse reconstruction model is established. Finally, the maximum a posteriori probability of the target signal is obtained by the Bayesian learning algorithm, from which the DOA estimate of the target is derived. Simulation analysis and sea trial data show that the algorithm achieves a resolution probability of 100% at an azimuthal separation of 8° between adjacent signal sources, and likewise reaches 100% at a low signal-to-noise ratio of −9 dB. Compared with the conventional MUSIC-like and OGSBI-SVD algorithms, this algorithm effectively suppresses noise interference and provides better localization accuracy, shorter runtime, and greater robustness. Full article
(This article belongs to the Section Remote Sensors)
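The singular-value-decomposition step in the pipeline above is a standard dimension reduction: the M × T snapshot matrix is projected onto its K dominant right singular vectors before the sparse Bayesian solver runs, shrinking the snapshot dimension while keeping the sources' spatial signatures. A minimal sketch (array size, source angles, and noise level are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def svd_reduce(Y, n_sources):
    """Reduce M x T multi-snapshot array data Y to an M x K
    signal-subspace matrix Y_sv = Y @ V_k, where V_k holds the K
    dominant right singular vectors of Y."""
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    return Y @ Vh[:n_sources].conj().T

# Toy data: 8-element half-wavelength ULA, 2 far-field sources.
rng = np.random.default_rng(1)
M, T, K = 8, 200, 2
angles = np.deg2rad([-10.0, 25.0])
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(angles)[None, :])
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = A @ S + N
Y_sv = svd_reduce(Y, K)
print(Y_sv.shape)  # (8, 2): snapshot dimension reduced from 200 to 2
```

Since Y @ v_i = σ_i u_i, the reduced matrix retains almost all of the data energy at this SNR while the subsequent per-snapshot cost of the Bayesian iteration drops from T = 200 columns to K = 2.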
Show Figures

Figure 1. Model diagram of array received signal.
Figure 2. Iteration flow chart of EMD algorithm.
Figure 3. Flowchart of EMD-IIT algorithm.
Figure 4. Flowchart of algorithm for iterative EMD interval thresholding and off-grid sparse Bayesian learning.
Figure 5. The time–frequency spectrum of the original signal.
Figure 6. The time–frequency spectrum of each array when the noisy signal is received: (a) the first to the fourth array; (b) the fifth to the eighth array.
Figure 7. The time–frequency spectrum of each array after EMD-IIT denoising: (a) the first to the fourth array; (b) the fifth to the eighth array.
Figure 8. The spatial power spectrum of three algorithms.
Figure 9. RMSE vs. number of Monte Carlo trials.
Figure 10. RMSE vs. signal-to-noise ratio.
Figure 11. RMSE vs. number of snapshots.
Figure 12. RMSE vs. signal-to-noise ratio under different grid distances.
Figure 13. Spatial power spectrum of compact sound sources.
Figure 14. Discrimination probabilities at different DOA intervals.
Figure 15. Discrimination probabilities at different signal-to-noise ratios.
Figure 16. Runtimes at different grid spacings.
Figure 17. The profile of sound speed.
Figure 18. The transmitted signal.
Figure 19. The time–frequency spectrum of the sound source.
Figure 20. Sea trial deployment diagram.
Figure 21. The time–frequency spectrum of each array at 14:57: (a) the first to the fourth array; (b) the fifth to the eighth array.
Figure 22. The time–frequency spectrum of each array at 16:41: (a) the first to the fourth array; (b) the fifth to the eighth array.
Figure 23. Estimation of the spatial power spectrum at the second position: (a) snapshot count of 512; (b) snapshot count of 1024.
Figure 24. Estimation of the spatial power spectrum at the third position: (a) snapshot count of 512; (b) snapshot count of 1024.