Indexed by: Journal Article
Date of Publication: 2018-12-01
Journal: COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING
Included Journals: SCIE, Scopus
Volume: 33
Issue: 12
Page Number: 1073-1089
ISSN No.: 1093-9687
Key Words: Brick; Convolution; Drying; Historic preservation; Image classification; Neural networks; Pixels; Statistical tests; Surface chemistry; Classification results; Convolutional neural network; Convolutional Neural Networks (CNN); Damage classification; Deep architectures; Historic structures; Professional equipment; Sliding window-based; Damage detection; artificial neural network; historic building; image classification; masonry; pixel
Abstract: Manual inspection (i.e., visual inspection and/or inspection with professional equipment) is currently the predominant approach for identifying and assessing superficial damage to masonry historic structures. However, this method is costly and at times difficult to apply to remote structures or components. Existing convolutional neural network (CNN)-based damage detection methods have not been specifically designed to identify multiple types of damage in masonry historic structures. To overcome these limitations, this article proposes a deep CNN architecture for damage classification in masonry historic structures, using a sliding window-based CNN method to identify and locate four categories of damage (intact, crack, efflorescence, and spall) with an accuracy of 94.3%. This is the first attempt to identify multiple damage types in historic masonry structures using CNN techniques, and it achieves excellent classification results. The data are trained and tested solely on images of the Forbidden City Wall in China; the pixel resolutions of the stretcher brick and header brick images are 480 × 105 and 210 × 105, respectively. Two CNNs (AlexNet and GoogLeNet) are each trained on a small dataset (2,000 images for training, 400 for validation and testing) and a large dataset (20,000 images for training, 4,000 for validation and testing). The performance of the trained model (94.3% accuracy) is examined on five new images with a resolution of 1,860 × 1,260 pixels.
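
The sliding window-based classification step described in the abstract can be illustrated with a short sketch. The following Python/PyTorch code is a minimal illustration under stated assumptions, not the authors' implementation: the checkpoint file name, window size, stride, and preprocessing pipeline are hypothetical; only the four class labels and the AlexNet backbone are taken from the abstract.

```python
# Minimal sketch: slide a fixed-size window over a large wall image and
# classify each patch into one of the four damage categories named in the
# abstract. Checkpoint path, window size, and stride are assumptions.
import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image

CLASSES = ["intact", "crack", "efflorescence", "spall"]  # from the abstract
WINDOW = (105, 210)   # (height, width), matching a header-brick patch
STRIDE = (105, 210)   # hypothetical non-overlapping stride

# AlexNet with its final layer replaced for 4 classes; "masonry_alexnet.pth"
# is a hypothetical trained checkpoint.
model = models.alexnet()
model.classifier[6] = torch.nn.Linear(4096, len(CLASSES))
model.load_state_dict(torch.load("masonry_alexnet.pth", map_location="cpu"))
model.eval()

# AlexNet expects 224x224 inputs, so each window is resized before inference.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_windows(image_path: str):
    """Return (left, top, label) for every window position in the image."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    win_h, win_w = WINDOW
    step_h, step_w = STRIDE
    results = []
    with torch.no_grad():
        for top in range(0, h - win_h + 1, step_h):
            for left in range(0, w - win_w + 1, step_w):
                patch = img.crop((left, top, left + win_w, top + win_h))
                logits = model(preprocess(patch).unsqueeze(0))
                label = CLASSES[logits.argmax(dim=1).item()]
                results.append((left, top, label))
    return results

if __name__ == "__main__":
    # e.g., a 1,860 x 1,260 wall image as described in the abstract
    for left, top, label in classify_windows("wall.jpg"):
        print(f"window at ({left}, {top}): {label}")
```

Because each window is classified independently, the per-patch labels both identify the damage type and localize it to a window position, which is how a patch classifier can serve as a coarse detector over a full-resolution wall image.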