UAVs Revolutionizing Smart City Transportation: Innovations, Challenges, and Potential

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: closed (15 October 2024) | Viewed by 3337

Special Issue Editor


Prof. Dr. Ansar Yasar
Guest Editor

Special Issue Information

Dear Colleagues,

Small unmanned aerial vehicles (UAVs), commonly known as drones, play a crucial role in enhancing the transportation systems of future smart cities. Their integration offers numerous opportunities within the transportation sector, and ongoing efforts continue to uncover novel ways to harness their potential. Most of these efforts revolve around camera-equipped UAVs that gather traffic and driving-behavior data, which are then used for purposes including surveillance, traffic-violation detection, congestion management, signal optimization, and the analysis of vehicle trajectories for accident-risk assessment and other research questions.

This research topic focuses on the latest developments in utilizing UAVs to address traffic and transportation challenges. Specifically, it aims to explore the following questions:

  • How has UAV technology been applied to advance specific transportation-focused research or objectives?
  • What advantages does UAV technology offer compared to traditional methods?
  • What hurdles hinder the widespread adoption of UAV technology for certain objectives, and how can these challenges be overcome?
  • In which key areas can the existing technological limitations of UAVs be addressed to effectively solve transportation issues?

We invite submissions of original research and review articles concerning the use of UAVs in the context of addressing traffic and transportation problems. Potential areas of interest include, but are not limited to:

  • Innovative utilization of UAVs as communication tools (UAV-to-Vehicle) to manage traffic in urban and rural settings.
  • Development of advanced algorithms to extract insights from UAV-derived data for purposes like accident investigation, road safety, and traffic engineering.
  • Integration of UAVs and AI to advance the technology of smart and intelligent transportation systems.
  • Applications of UAVs to enhance the service quality of public transport and transit systems.
  • Studies investigating user acceptance of UAVs and willingness to expand their use in transportation domains.
  • Strategies to overcome barriers, limitations, and regulatory challenges to facilitate broad implementation of UAV applications in transportation.

Prof. Dr. Ansar Yasar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

34 pages, 15986 KiB  
Article
A Comprehensive Framework for Transportation Infrastructure Digitalization: TJYRoad-Net for Enhanced Point Cloud Segmentation
by Zhen Yang, Mingxuan Wang and Shikun Xie
Sensors 2024, 24(22), 7222; https://doi.org/10.3390/s24227222 - 12 Nov 2024
Viewed by 729
Abstract
This research introduces a cutting-edge approach to traffic infrastructure digitization, integrating UAV oblique photography with LiDAR point clouds for high-precision, lightweight 3D road modeling. The proposed method addresses the challenge of accurately capturing the current state of infrastructure while minimizing redundancy and optimizing computational efficiency. A key innovation is the development of the TJYRoad-Net model, which achieves over 85% mIoU segmentation accuracy by including a traffic feature computing (TFC) module composed of three critical components: the Regional Coordinate Encoder (RCE), the Context-Aware Aggregation Unit (CAU), and the Hierarchical Expansion Block. Comparative analysis segments the point clouds into road and non-road categories, achieving centimeter-level registration accuracy with RANSAC and ICP. Two lightweight surface reconstruction techniques are implemented: (1) algorithmic reconstruction, which delivers a 6.3 mm elevation error at 95% confidence in complex intersections, and (2) template matching, which replaces road markings, poles, and vegetation using bounding boxes. These methods ensure accurate results with minimal memory overhead. The optimized 3D models have been successfully applied in driving simulation and traffic flow analysis, providing a practical and scalable solution for real-world infrastructure modeling and analysis. These applications demonstrate the versatility and efficiency of the proposed methods in modern traffic system simulations.
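The coarse-to-fine registration step described in the abstract (RANSAC alignment followed by ICP refinement) is a standard point cloud pipeline. The sketch below illustrates the general idea using the open-source Open3D library; it is not the authors' TJYRoad-Net code, and the voxel size, distance thresholds, and input file names are placeholder assumptions.

```python
import open3d as o3d

def preprocess(pcd, voxel):
    # Downsample and compute FPFH features for coarse feature matching
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

def register(source, target, voxel=0.05):
    src_down, src_fpfh = preprocess(source, voxel)
    tgt_down, tgt_fpfh = preprocess(target, voxel)
    # Coarse alignment: RANSAC over FPFH feature correspondences
    ransac = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Fine alignment: point-to-plane ICP seeded with the RANSAC transform
    icp = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel * 0.4, ransac.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return icp.transformation

# Hypothetical usage with image-derived and laser point clouds:
# source = o3d.io.read_point_cloud("uav_image_cloud.ply")
# target = o3d.io.read_point_cloud("laser_cloud.ply")
# T = register(source, target)
```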
Figures

Figure 1. The technical roadmap of the entire paper.
Figure 2. DJI M300RTK UAV with Zenith P1 gimbal camera.
Figure 3. Dense UAV point cloud of road infrastructure.
Figure 4. Laser point cloud of road infrastructure.
Figure 5. TJYRoad-Net network.
Figure 6. Traditional machine learning versus transfer learning.
Figure 7. Fine-tuning ideas of enhanced TJYRoad-Net.
Figure 8. Image point cloud and laser point cloud.
Figure 9. Semantic segmentation result of laser point cloud.
Figure 10. Semantic segmentation results of image point cloud.
Figure 11. Semantic segmentation results of image point clouds from a road intersection scene, showing Input (original point cloud), Ground Truth (manually annotated labels), and Predicted Value (model output with misclassifications circled).
Figure 12. Comparison of segmentation results across different state-of-the-art methods, with red circles highlighting the segmentation outputs at identical locations for each method.
Figure 13. ICP fine alignment error of pavement point cloud.
Figure 14. Registered results of road surface point clouds.
Figure 15. Process of building façade precision.
Figure 16. ICP fine alignment error of building façade point clouds.
Figure 17. Alignment result of point clouds.
Figure 18. Variation in model error with downsampling voxel size.
Figure 19. Downsampling results of road surface point clouds.
Figure 20. Result of road reconstruction.
Figure 21. Marker triangle network structure based on Poisson reconstruction.
Figure 22. A section of the grid center of mass.
Figure 23. Design of road marking template library.
Figure 24. Marking reconstruction results.
Figure 25. Vegetation reconstruction results.
Figure 26. Real scene of road infrastructure.
Figure 27. Driving simulation data visualization platform.
32 pages, 6284 KiB  
Article
UAV Detection Using Reinforcement Learning
by Arwa AlKhonaini, Tarek Sheltami, Ashraf Mahmoud and Muhammad Imam
Sensors 2024, 24(6), 1870; https://doi.org/10.3390/s24061870 - 14 Mar 2024
Cited by 3 | Viewed by 2059
Abstract
Unmanned Aerial Vehicles (UAVs) have gained significant popularity in both military and civilian applications due to their cost-effectiveness and flexibility. However, the increased utilization of UAVs raises concerns about the risk of illegal data gathering and potential criminal use. As a result, the accurate detection and identification of intruding UAVs have emerged as a critical research concern. Many algorithms have shown their effectiveness in detecting different objects through different approaches, including radio frequency (RF), computer vision (visual), and sound-based detection. This article proposes a novel approach for detecting and identifying intruding UAVs based on their RF signals by using a hierarchical reinforcement learning technique. We train a UAV agent hierarchically with multiple policies using the REINFORCE algorithm with an entropy regularization term to improve the overall accuracy. The research focuses on utilizing extracted features from RF signals to detect intruding UAVs, which contributes to the field of reinforcement learning by investigating a less-explored UAV detection approach. Through extensive evaluation, our findings show the remarkable results of the proposed approach in achieving accurate RF-based detection and identification, with an outstanding detection accuracy of 99.7%. Additionally, our approach demonstrates improved cumulative return performance and reduced loss. The obtained results highlight the effectiveness of the proposed solution in enhancing UAV security and surveillance while advancing the field of UAV detection.
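As context for the abstract above, REINFORCE with an entropy bonus updates a policy by weighting the log-probabilities of sampled actions with the received reward while encouraging exploration. The PyTorch sketch below shows a single level of such a policy, treating classification of an RF-signal feature vector as the action; it is a generic illustration rather than the authors' architecture, and the layer sizes, reward definition, and entropy coefficient are assumptions. In the paper's hierarchical setup, a separate policy of this kind could be trained at each level of the hierarchy (UAV presence, UAV model, flight mode).

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Policy over class labels given an RF-signal feature vector (illustrative sizes)."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, x):
        # Return a categorical action distribution over the candidate classes
        return torch.distributions.Categorical(logits=self.net(x))

def reinforce_step(policy, optimizer, features, labels, entropy_coef=0.01):
    """One REINFORCE update; reward is 1 for a correct prediction, 0 otherwise."""
    dist = policy(features)
    actions = dist.sample()
    rewards = (actions == labels).float()
    # Policy-gradient loss with an entropy bonus to keep the policy exploratory
    loss = -(dist.log_prob(actions) * rewards).mean() - entropy_coef * dist.entropy().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), rewards.mean().item()

# Hypothetical usage: features of shape (batch, n_features), labels of shape (batch,)
# policy = PolicyNet(n_features=2048, n_classes=2)
# optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
# loss, mean_reward = reinforce_step(policy, optimizer, features, labels)
```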
Figures

Figure 1. System model for UAV detection: detection of the emitted RF signals from the communication between the UAV and its controller.
Figure 2. Visual representation of the number of segments in level one and level two of the experiment. (a) First level with two classes (UAV, NO UAV). (b) Second level represents the presence of the UAV with three classes.
Figure 3. The number of segments in the third level of the experiment. The graph consists of 10 classes: the existence of an intruding UAV, its model, and the flying mode.
Figure 4. Normalized RF activities (scaled between 1 and −1). (a) Segment 1, representing no RF activity; (b) segment 3, representing a flying UAV of the AR model.
Figure 5. The average spectrums of the three experiment levels.
Figure 6. Data distribution of the three experiment levels, illustrating the power of each class in the dataset.
Figure 7. Hierarchical reinforcement learning (HRL) architecture.
Figure 8. Overview of the training architecture using the REINFORCE algorithm.
Figure 9. The hierarchical learning approach has two stages: splitting the data into training and testing sets, then a learning stage that uses four levels in the hierarchy to identify the class of a given sample [24].
Figure 10. Convergence of cumulative return, representing gained rewards per episode throughout the training of the agent.
Figure 11. Convergence of cross-entropy loss during the training process.
Figure 12. Convergence of total policy loss during the training process.
Figure 13. Average return per episode for binary classification in the detection task.
Figure 14. Average return per episode for 10-class classification in the identification task.
Figure 15. Performance evaluation of the agent in detection and identification tasks: accuracy, precision, recall, and F1-score metrics.
Figure 16. Comparison of accuracy, recall, and F1 score: proposed HRL vs. hierarchical ML and SDAE-LOF.
Figure 17. Comparison of cross-entropy loss: (a) proposed HRL vs. (b) hierarchical ML and (c) PRPG algorithm with deep RL.
Figure 18. Comparison between the proposed HRL and hierarchical ML [24] to ensure efficient resource management by conserving energy and time.