Author(s): Cheng Wei Wu; Hao Che Ho; Po Cheng Chien
Linked Author(s): Hao-Che Ho
Keywords: Rainfall Intensity Measurement; Image Recognition; Audio Recognition; Convolutional Neural Network; Mel Frequency Cepstral Coefficients
Abstract: This study combines image and audio recognition technologies to analyze rainfall intensity, aiming to improve the accuracy and stability of rainfall monitoring. Image recognition provides the spatial distribution and visual characteristics of rainfall but performs poorly in low-visibility or nighttime conditions. Audio recognition, which estimates rainfall intensity by analyzing raindrop sounds, is highly sensitive to light rain and works well in adverse weather and low-light conditions; however, it is susceptible to ambient noise and lacks broad spatial information. Combining the two methods mitigates their individual limitations, strengthens noise resistance, and yields more stable data across various weather conditions. This research introduces a rainfall intensity detection method that uses optical images and acoustic signals, analyzed with deep learning algorithms. Four artificial and five real rainfall events were recorded with a custom-built instrument, producing nine datasets. A camera captured rainfall images, while a microphone recorded the sound of raindrops striking a hard plastic surface as input for the model. The audio data was transformed into Mel Frequency Cepstral Coefficients (MFCC) and, along with synchronized images, fed into Convolutional Neural Network (CNN) models for analysis. The results showed that the model achieved an accuracy of 99.88% during the day and 99.75% at night in artificial rainfall simulator experiments. When applied to real rainfall, the model achieved an accuracy of 99.33% during the day and 70.56% at night. With future IoT integration, this model could support disaster response, flood warnings, and smart city applications, helping to mitigate the societal impacts of climate change.
Year: 2025
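
The abstract's audio pipeline (waveform → MFCC features → CNN input) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sample rate, frame length, hop size, and filterbank sizes below are assumed values, and the synthetic noise signal merely stands in for a real raindrop recording.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Mel-spaced triangular filters mapping FFT bins onto the mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_coeffs=13):
    # 1) Slice the waveform into overlapping Hamming-windowed frames
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # 2) Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3) Log-compressed mel filterbank energies
    log_e = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # 4) DCT-II decorrelates the log energies; keep the first n_coeffs
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs),
                                    (2 * n + 1) / (2.0 * n_filters)))
    return log_e @ basis.T  # shape: (n_frames, n_coeffs)

# One second of synthetic noise as a stand-in for a raindrop recording
rng = np.random.default_rng(0)
features = mfcc(rng.standard_normal(16000))
print(features.shape)  # (98, 13): a 2-D feature map a CNN could consume
```

The resulting (frames × coefficients) matrix is an image-like 2-D array, which is what makes MFCC features a natural input for the CNN models described in the abstract.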