Automatic Road Crack Detection And Characterization


Avenall Trejo

Aug 4, 2024, 9:26:21 PM


Abstract: Detection is one of the essential abilities of autonomous ground vehicles (AGVs). To navigate safely through any known or unknown environment, an AGV must be able to detect the important elements along its path. Detection applies both on-road and off-road, but the requirements differ substantially between the two environments. The key elements that an AGV must identify in any environment are the drivable pathway and any obstacles around it. Many works have been published addressing the different detection components in various ways. This paper presents a survey of the most recent advancements in AGV detection methods intended specifically for the off-road environment. For this, the literature is divided into three major groups: drivable ground, positive obstacles, and negative obstacles. Each detection portion is further divided into categories based on the technology used (for example, single-sensor or multiple-sensor approaches) and on how the data are analyzed. The paper also summarizes critical findings in detection technology, the challenges associated with detection in the off-road environment, and possible future directions. The authors believe this work will help readers find related literature. Keywords: autonomous ground vehicles; off-road environment; drivable ground; positive obstacles; negative obstacles


A diagram that demonstrates the key ideas utilized in the line-joining algorithm. (a) The angle difference criterion. (b) The endpoint to line segment distance criterion. (c) The search arc criterion, and (d) a demonstration of the line-joining process.
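The angle-difference and distance criteria in the caption above can be sketched as a simple predicate. This is a hypothetical simplification (the function name and thresholds are illustrative, and the search-arc criterion is omitted), not the paper's actual implementation:

```python
import math

def should_join(seg_a, seg_b, max_angle_deg=10.0, max_gap_px=15.0):
    """Decide whether two detected line segments likely belong to the
    same tree stem. Each segment is ((x1, y1), (x2, y2)) in pixels.

    Two criteria (thresholds are illustrative): the segments' orientations
    must differ by less than max_angle_deg, and the gap between their
    closest endpoints must be smaller than max_gap_px.
    """
    def orientation(seg):
        (x1, y1), (x2, y2) = seg
        # Undirected orientation in [0, 180) degrees.
        return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

    diff = abs(orientation(seg_a) - orientation(seg_b))
    diff = min(diff, 180.0 - diff)  # smallest angle between orientations
    if diff >= max_angle_deg:
        return False

    # Smallest endpoint-to-endpoint distance between the two segments.
    gap = min(math.dist(p, q) for p in seg_a for q in seg_b)
    return gap < max_gap_px
```

In a full line-joining pass, segments satisfying the predicate would be merged into one longer segment and the check repeated until no further joins occur.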


A flowchart that demonstrates the use of convolutional neural networks along with fully connected layers for classification of horizontally aligned tree box images as either left, right, or inconclusive (inc).


Depiction of the clustering algorithm utilized to find two distinct clusters of directions in a grid square. The dots represent the endpoints of unit vectors placed on the unit circle. The red and blue dots show the corresponding cluster, and the teal dots represent the medoid of each cluster.
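The two-cluster medoid idea in the caption above can be sketched as a brute-force 2-medoid clustering of unit vectors; the function name and the exhaustive search are illustrative assumptions (fine for the handful of trees in a grid square), not the paper's implementation:

```python
import math
from itertools import combinations

def two_medoid_directions(angles_deg):
    """Split treefall directions (degrees) into two clusters.

    Each direction is placed as a unit vector on the unit circle, and the
    distance between two directions is the chord length between their
    endpoints, which handles wraparound at 0/360 naturally. All medoid
    pairs are tried; the pair minimizing total point-to-nearest-medoid
    distance wins. Returns the two medoid angles.
    """
    pts = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
           for a in angles_deg]
    best = None
    for i, j in combinations(range(len(pts)), 2):
        cost = sum(min(math.dist(p, pts[i]), math.dist(p, pts[j]))
                   for p in pts)
        if best is None or cost < best[0]:
            best = (cost, angles_deg[i], angles_deg[j])
    return best[1], best[2]
```

A production version would use a standard k-medoids (PAM) routine, but the chord-distance representation on the unit circle is the key idea.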


Histograms comparing the angle difference between manual and automated treefall direction vectors for the Alonsa, Manitoba, tornado. Bins of 5° are used, comparing both the median and clustering methods. The 80% mark indicates the value at which 80% of the vectors have been counted.


(left) Manual and (right) automated TrIDA maps for a 4.25 km × 4.25 km section of the Brooks Lake, Ontario, tornado with a 250-m grid size. This section of the Brooks Lake tornado does not overlap with any images used for training or validation of the automated model.


© 2024 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).


The enhanced Fujita (EF) scale is used in several countries, such as Canada, the United States, China, and Japan, to assess the severity of tornadoes. Within this scale, various damage indicators (DIs) are evaluated, each having corresponding degrees of damage (DODs) along with their associated wind speeds. The spectrum of DODs typically spans from the point of minimal discernible damage to the complete devastation of the DI (McDonald and Mehta 2006). The overall EF-scale rating for the damage is then assigned based on the maximum wind speed across all observed damage indicators (Mehta 2013).
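The final rating assignment can be illustrated with a small sketch using the approximate U.S. EF-scale wind speed bounds (3-second gust, mph). This is a deliberate simplification: the real process selects the maximum wind speed over all observed DI/DOD pairs rather than mapping a single raw speed, and the bound values here are approximate.

```python
# Approximate lower bounds (mph, 3-s gust) for EF0..EF5 in the U.S. EF scale.
EF_LOWER_BOUNDS_MPH = [65, 86, 111, 136, 166, 201]

def ef_rating(max_wind_speed_mph):
    """Return the EF number for the maximum estimated wind speed across
    all observed damage indicators (-1 means below EF0)."""
    rating = -1
    for ef_number, lower in enumerate(EF_LOWER_BOUNDS_MPH):
        if max_wind_speed_mph >= lower:
            rating = ef_number
    return rating
```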


All aerial surveys of severe wind damage conducted by researchers result in some level of damage analysis. Typically, at minimum, damage from the wind event is manually outlined, the centerline, path length, and maximum width are determined, and a wind speed and associated EF rating are assigned. Since 2020, for particularly significant or complicated events, especially those involving a mix of tornado and downburst damage, the NTP generates a Treefall Identification and Direction Analysis (TrIDA) map. These analyses are inspired by similar work done by T. T. Fujita in several of his papers (e.g., Fujita and Wakimoto 1981; Fujita 1989). Generating a TrIDA map consists of identifying the areas of treefall damage and the general treefall directions along the damage path. First, all areas of fresh treefall are enclosed by polygons to highlight the damage path. Then, the average treefall directions of groups of trees are noted, spaced as needed to obtain a good understanding of the treefall patterns in the damage. Finally, when applicable, these treefall directions can be used to distinguish between tornadic and downburst damage. Generally speaking, tornadic damage is convergent and closer to the damage centerline, whereas related downburst damage is usually divergent and off the main path of the tornado. Occasionally, the entire area of damage is divergent and caused by one or multiple downbursts, which may not be known until after the analysis is performed. TrIDA maps are useful for separating out potential downburst damage from related tornado paths while also providing some insight into the character of the event and a valuable visual representation of the wind patterns.


In 2017, a method for coarse-to-fine extraction of downed trees from UAV imagery was proposed by Duan et al. (2017). Their extraction method first utilizes a random forest machine learning model (Breiman 2001) to extract a rough mask of the trees, followed by image processing techniques that leverage the linear shape of trees to refine the mask. Once a refined mask is produced, the Hough transform (Hough 1962) is applied to fit lines to the tree stems. However, their dataset was limited to a single event in northeastern Hainan, China, and was taken from a UAV camera at a relatively high altitude of 500 m, producing lower-resolution (10 cm per pixel) imagery. As a result, many of their techniques would require significant manual adjustment if applied to a broader range of tornadic events with higher concentrations of trees, or to different species of trees with more pronounced branches that are resolved in higher-resolution imagery.
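The Hough transform used for stem fitting can be sketched with a toy accumulator: each mask pixel votes for every (rho, theta) line passing through it, and the bin with the most votes gives the dominant line rho = x·cos(theta) + y·sin(theta). The function name, coarse 5° binning, and integer rho rounding below are illustrative assumptions, not Duan et al.'s parameters.

```python
import math

def hough_peak(points, thetas=range(0, 180, 5)):
    """Minimal Hough transform sketch over a set of mask pixels.

    Returns the (rho, theta_deg) bin with the most votes, i.e. the
    best-fitting line rho = x*cos(theta) + y*sin(theta).
    """
    votes = {}
    for x, y in points:
        for t in thetas:
            rad = math.radians(t)
            rho = round(x * math.cos(rad) + y * math.sin(rad))
            votes[(rho, t)] = votes.get((rho, t), 0) + 1
    return max(votes, key=votes.get)
```

For example, ten collinear pixels along the row y = 3 all vote for the bin (rho=3, theta=90°), which wins the accumulator. Practical pipelines would instead call an optimized implementation such as OpenCV's probabilistic Hough transform.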


Most recently, the detection of tree stems from UAV orthomosaics using U-Net convolutional neural networks was demonstrated by Reder et al. (2022). They created various datasets augmented from 454 trees downed in a severe storm northeast of Berlin, Germany. These datasets were then used to train a U-Net model (Ronneberger et al. 2015) to perform semantic segmentation of downed tree stems. Their results are automated and more effective at extracting tree stems than the coarse-to-fine method proposed by Duan et al. (2017). However, their datasets are limited to a single event with a moderate density of trees, and no fitting of lines or extraction of direction is performed.


Given the above, a method for systematic wide-ranging treefall analysis in remote forested areas is needed. The objective of this paper is to describe a machine learning and image processing-based model that can automatically extract fallen trees from large-scale aerial imagery, assess their fall directions, and produce an area-averaged treefall vector map with minimal initial human interaction.


The method developed to produce an automated treefall map model is described in this section and summarized in Fig. 3. First, a treefall segmentation mask is produced similar to that of Reder et al. (2022). Next, instance segmentation is performed using the segmentation mask to extract individual trees, followed by assessing their fall directions. Finally, the treefall directions are sorted into a chosen grid, and the area-averaged directions of trees in each grid square are used to create a treefall vector map.
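The final grid-averaging step above can be sketched as follows; the function name, default grid size, and input format are assumptions for illustration. Directions in each grid square are averaged as unit vectors, so angles on either side of 0°/360° average correctly (e.g., 350° and 30° average to 10°, not 190°):

```python
import math
from collections import defaultdict

def treefall_vector_map(trees, grid_size=250.0):
    """Area-averaged treefall vector map sketch.

    trees: iterable of (x, y, direction_deg) with x, y in metres.
    Each tree is binned into a (col, row) grid square, its direction is
    accumulated as a unit vector, and the mean direction per square is
    recovered with atan2. Returns {(col, row): mean_direction_deg}.
    """
    sums = defaultdict(lambda: [0.0, 0.0])
    for x, y, deg in trees:
        cell = (int(x // grid_size), int(y // grid_size))
        sums[cell][0] += math.cos(math.radians(deg))
        sums[cell][1] += math.sin(math.radians(deg))
    return {cell: math.degrees(math.atan2(sy, sx)) % 360.0
            for cell, (sx, sy) in sums.items()}
```

Each cell's resulting angle can then be drawn as one arrow per grid square to form the automated TrIDA-style vector map.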
