Edge-Computing Video Analytics Solution for Automated Plastic-Bag Contamination Detection: A Case from Remondis
List of Figures

Figure 1. Annotated samples from the RCD.
Figure 2. Conceptualization illustration of the proposed automated plastic-bag contamination detection.
Figure 3. Overview of Faster R-CNN architecture.
Figure 4. Overview of YOLOv4 architecture.
Figure 5. Laboratory hardware setup for the proposed automated plastic-bag contamination detection.
Figure 6. Block diagram representation of the research approach for automated plastic-bag contamination detection.
Figure 7. Training loss curves for the different variants of computer vision object detection models implemented for plastic-bag contamination detection.
Figure 8. Training mAP curves for the different variants of computer vision object detection models implemented for plastic-bag contamination detection.
Figure 9. Training time per epoch for each implemented computer vision object detection model for plastic-bag contamination detection.
Figure 10. NVIDIA Jetson Nano system usage plots for different variants of implemented computer vision object detection models.
Figure 11. NVIDIA Jetson TX2 system usage plots for different variants of implemented computer vision object detection models.
Figure 12. Sample correct predictions by the YOLOv4 with CSPDarkNet_tiny backbone model.
Figure 13. Sample false predictions by the YOLOv4 CSPDarkNet_tiny backbone model.
Abstract
1. Introduction
1. Development of a challenging, utility-oriented waste contamination dataset (the RCD) from the Remondis manual bin-tagging historical records, annotated with plastic-bag contamination bounding boxes;
2. Development, validation, and analysis of a practical edge-computing solution for automated plastic-bag contamination detection in waste collection trucks.
2. Related Work
3. Remondis Contamination Dataset (RCD)
4. Automated Plastic-Bag Contamination Detection System
4.1. Computer Vision Models for Plastic-Bag Contamination Detection
4.1.1. Faster R-CNN
- First, the weights (w) and biases (b) of the network are initialized according to the chosen backbone network;
- A forward-propagation pass is then performed, where the computation applied to the input image depends on the type of layer in the network:
  - For a fully connected layer, the forward computation is y = f(wx + b), where f is the activation function;
  - For a convolutional layer, the forward computation convolves the learned kernels with the input feature map, y = f(w * x + b);
  - For a pooling layer, a dimensionality-reduction operation is applied to the input;
  - For the output layer, a Softmax function predicts the class probabilities, softmax(z_i) = e^{z_i} / Σ_j e^{z_j};
- Based on the loss function, a backpropagation pass is performed, again depending on the type of layer in the network. Backpropagation minimizes the loss via gradient descent, updating the weights and biases of each layer according to the computed gradients. The learning rate plays a vital role in gradient descent and has to be chosen carefully during training; for the Faster R-CNN training, a learning rate of 0.02 was used (a minimal illustrative sketch of this training step is given below).
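To make the forward and backward steps above concrete, the following is a minimal PyTorch sketch of one such training iteration for a toy network containing the layer types listed (convolutional, pooling, fully connected, and a Softmax applied inside the loss), updated by gradient descent with the learning rate of 0.02 quoted above. It is an illustrative sketch only, not the Faster R-CNN implementation used in this work; the layer sizes and the random batch are placeholders.

```python
# Minimal sketch of the forward/backward training step described above.
# NOT the Faster R-CNN used in this work: layer sizes and data are placeholders.
import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # convolutional layer (w * x + b)
        self.pool = nn.MaxPool2d(2)                              # pooling layer (dimension reduction)
        self.fc = nn.Linear(16 * 112 * 112, num_classes)         # fully connected layer (wx + b)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x)
        return self.fc(x.flatten(1))   # raw class scores; Softmax is applied inside the loss

model = ToyBackbone()
criterion = nn.CrossEntropyLoss()                         # Softmax + negative log-likelihood
optimizer = torch.optim.SGD(model.parameters(), lr=0.02)  # learning rate used for Faster R-CNN training

images = torch.randn(4, 3, 224, 224)    # placeholder image batch
labels = torch.randint(0, 2, (4,))      # placeholder class labels

logits = model(images)                  # forward propagation through each layer type
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()                         # backpropagation: gradients of the loss w.r.t. w and b
optimizer.step()                        # gradient-descent update of weights and biases
```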
4.1.2. You Only Look Once version 4 (YOLOv4)
4.2. Hardware Components
- Mitsubishi Analog Camera: Remondis waste collection trucks are already equipped with aluminum-encased Mitsubishi C4010 heavy-duty waterproof analog cameras built specifically for such harsh industrial environments. The camera can operate in low-light, high-vibration conditions, runs on +12 V DC with 150 mA current consumption, and has a maximum operating temperature of +50 °C;
- EasyCap Analog-to-Digital Converter: To convert the analog video coming from the camera into a digital stream for processing, an EasyCap USB 2.0 capture card was used. The capture card is a plug-and-play solution and supports high-resolution NTSC and PAL50 video formats;
- NVIDIA Edge-Computer: The edge-computer is the most important hardware component of the proposed system, performing all the computations related to plastic-bag contamination detection. For the developed prototype, NVIDIA Jetson Nano and NVIDIA Jetson TX2 edge-computers were used; their detailed specifications are presented in Table 2. A short bring-up check for this hardware chain is sketched after this list.
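As a quick bring-up check for the camera, capture card, and Jetson chain, the analog feed delivered through the EasyCap card can be read on the edge-computer as a standard V4L2 device with OpenCV. The snippet below is a hedged sketch: it assumes the capture card enumerates as /dev/video0 and that the camera outputs PAL-resolution video (720 × 576); the device index and format should be adjusted to the actual installation.

```python
# Hedged bring-up check: read frames from the EasyCap capture card on the Jetson.
# Assumes the card enumerates as /dev/video0 and delivers a PAL analog feed.
import cv2

cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 720)   # PAL frame width (assumption)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 576)  # PAL frame height (assumption)

if not cap.isOpened():
    raise RuntimeError("Capture card not found; check the /dev/video* index")

for _ in range(100):                     # grab a short burst of frames
    ok, frame = cap.read()
    if not ok:
        print("Dropped frame from the analog feed")
        continue
    print("Frame received:", frame.shape)  # expect (576, 720, 3) for PAL

cap.release()
```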
4.3. Experimental Design
- In the first experiment, a variety of computer vision object detection models were trained and compared for their performance in detecting plastic-bag contamination;
- In the second experiment, the trained models were exported and deployed on the edge-computing hardware using a DeepStream video analytics application, and their hardware performance was compared to assess suitability as a practical solution (a simplified sketch of the resulting on-truck inference loop is given after this list);
- In the third and final experiment, the edge-computing hardware was deployed on three waste collection trucks, where the functionality of the developed solution was validated and additional data were collected. The collected data were then used to retrain the computer vision models to improve plastic-bag contamination detection performance.
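In the deployed system, the second-experiment inference pipeline runs as an NVIDIA DeepStream application; purely for illustration, the sketch below shows the equivalent application loop in plain Python and OpenCV: capture a frame from the hopper camera, run the detector, and archive the frames in which contamination is found. The detect_contamination function is a hypothetical placeholder for the deployed YOLOv4 or Faster R-CNN model, and the storage path is an assumption.

```python
# Illustrative on-truck application loop (the actual system runs as an NVIDIA
# DeepStream pipeline). detect_contamination() is a hypothetical placeholder
# for the deployed detector and the storage path is an assumption.
import time
from pathlib import Path

import cv2

SAVE_DIR = Path("/media/usb/contamination")  # external USB storage (assumed mount point)
SAVE_DIR.mkdir(parents=True, exist_ok=True)

def detect_contamination(frame):
    """Placeholder: return a list of (x, y, w, h, score) plastic-bag detections."""
    return []

cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detections = detect_contamination(frame)
    if detections:
        # Keep flagged frames for operator review and for later model retraining.
        stamp = time.strftime("%Y%m%d-%H%M%S")
        cv2.imwrite(str(SAVE_DIR / f"contamination_{stamp}.jpg"), frame)

cap.release()
```

Frames archived in this way correspond to the field data that was subsequently annotated and used to retrain the models in the third experiment.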
5. Experimental Protocols and Evaluation Measures
Performance Evaluation Measures
6. System Evaluation
6.1. Software Evaluation
6.1.1. Training Performance
6.1.2. Testing Performance
6.2. Hardware Performance
7. Discussion of the Results
8. Field Data Collection and Model Retraining
9. Cost Analysis
10. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| COAG | Council of Australian Governments |
| NSW | New South Wales |
| AIoT | Artificial Intelligence of Things |
| YOLO | You Only Look Once |
| RCD | Remondis Contamination Dataset |
| SSD | Single Shot Detector |
| RPN | Region Proposal Network |
| CmBN | Cross mini-Batch Normalization |
| CSP | Cross Stage Partial connections |
| SAT | Self-Adversarial Training |
| GPU | Graphics Processing Unit |
| SGD | Stochastic Gradient Descent |
| Adam | Adaptive Moment Estimation |
| FPS | Frames Per Second |
| mAP | Mean Average Precision |
| TP | True Positive |
| FP | False Positive |
| FN | False Negative |
References
- Rene, E.R.; Sethurajan, M.; Ponnusamy, V.K.; Kumar, G.; Dung, T.N.B.; Brindhadevi, K.; Pugazhendhi, A. Electronic waste generation, recycling and resource recovery: Technological perspectives and trends. J. Hazard. Mater. 2021, 416, 125664.
- Singh, O. Forecasting trends in the generation and management of hazardous waste. In Hazardous Waste Management; Elsevier: Amsterdam, The Netherlands, 2022; pp. 465–489.
- Ferdous, W.; Manalo, A.; Siddique, R.; Mendis, P.; Zhuge, Y.; Wong, H.S.; Lokuge, W.; Aravinthan, T.; Schubel, P. Recycling of landfill wastes (tyres, plastics and glass) in construction–A review on global waste generation, performance, application and future opportunities. Resour. Conserv. Recycl. 2021, 173, 105745.
- Guo, W.; Xi, B.; Huang, C.; Li, J.; Tang, Z.; Li, W.; Ma, C.; Wu, W. Solid waste management in China: Policy and driving factors in 2004–2019. Resour. Conserv. Recycl. 2021, 173, 105727.
- Ziouzios, D.; Baras, N.; Balafas, V.; Dasygenis, M.; Stimoniaris, A. Intelligent and Real-Time Detection and Classification Algorithm for Recycled Materials Using Convolutional Neural Networks. Recycling 2022, 7, 9.
- Anshassi, M.; Sackles, H.; Townsend, T.G. A review of LCA assumptions impacting whether landfilling or incineration results in less greenhouse gas emissions. Resour. Conserv. Recycl. 2021, 174, 105810.
- Alabi, O.A.; Ologbonjaye, K.I.; Awosolu, O.; Alalade, O.E. Public and environmental health effects of plastic wastes disposal: A review. J. Toxicol. Risk Assess. 2019, 5, 1–13.
- Vaverková, M.D. Landfill impacts on the environment. Geosciences 2019, 9, 431.
- Zaman, A. Waste Management 4.0: An Application of a Machine Learning Model to Identify and Measure Household Waste Contamination—A Case Study in Australia. Sustainability 2022, 14, 3061.
- Fatimah, Y.A.; Govindan, K.; Murniningsih, R.; Setiawan, A. Industry 4.0 based sustainable circular economy approach for smart waste management system to achieve sustainable development goals: A case study of Indonesia. J. Clean. Prod. 2020, 269, 122263.
- Iyamu, H.; Anda, M.; Ho, G. A review of municipal solid waste management in the BRIC and high-income countries: A thematic framework for low-income countries. Habitat Int. 2020, 95, 102097.
- Mironenko, O.; Mironenko, E. Education against plastic pollution: Current approaches and best practices. In Plastics in the Aquatic Environment-Part II; Springer: Berlin/Heidelberg, Germany, 2020; pp. 67–93.
- Heubach, M. Municipal Solid Waste Contracts: Tools for Reducing Recycling Contamination? Ph.D. Thesis, Evergreen State College, Olympia, WA, USA, 2019.
- Parliament of Australia. Waste Management and Recycling in Australia—Chapter 2; Parliament of Australia: Canberra, Australia, 2018.
- Barthélemy, J.; Verstaevel, N.; Forehead, H.; Perez, P. Edge-computing video analytics for real-time traffic monitoring in a smart city. Sensors 2019, 19, 2048.
- Iqbal, U.; Barthelemy, J.; Li, W.; Perez, P. Automating visual blockage classification of culverts with deep learning. Appl. Sci. 2021, 11, 7561.
- Arshad, B.; Barthelemy, J.; Pilton, E.; Perez, P. Where is my deer?-wildlife tracking and counting via edge-computing and deep learning. In Proceedings of the 2020 IEEE SENSORS, Rotterdam, The Netherlands, 9 December 2020; pp. 1–4.
- Iqbal, U.; Bin Riaz, M.Z.; Barthelemy, J.; Perez, P. Prediction of Hydraulic Blockage at Culverts using Lab Scale Simulated Hydraulic Data. Urban Water J. 2022, 19, 686–699.
- Barthelemy, J.; Amirghasemi, M.; Arshad, B.; Fay, C.; Forehead, H.; Hutchison, N.; Iqbal, U.; Li, Y.; Qian, Y.; Perez, P. Problem-Driven and Technology-Enabled Solutions for Safer Communities: The case of stormwater management in the Illawarra-Shoalhaven region (NSW, Australia). In Handbook of Smart Cities; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–28.
- Rad, M.S.; Kaenel, A.v.; Droux, A.; Tieche, F.; Ouerhani, N.; Ekenel, H.K.; Thiran, J.P. A computer vision system to localize and classify wastes on the streets. In Proceedings of the International Conference on Computer Vision Systems, Shenzhen, China, 10–13 July 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 195–204.
- Ibrahim, K.; Savage, D.A.; Schnirel, A.; Intrevado, P.; Interian, Y. ContamiNet: Detecting contamination in municipal solid waste. arXiv 2019, arXiv:1911.04583.
- Kumar, S.; Yadav, D.; Gupta, H.; Verma, O.P.; Ansari, I.A.; Ahn, C.W. A novel YOLOv3 algorithm-based deep learning approach for waste segregation: Towards smart waste management. Electronics 2020, 10, 14.
- Li, X.; Tian, M.; Kong, S.; Wu, L.; Yu, J. A modified YOLOv3 detection method for vision-based water surface garbage capture robot. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420932715.
- Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Bhardwaj, P.; Sharma, S.; Sarker, I.H. AquaVision: Automating the detection of waste in water bodies using deep transfer learning. Case Stud. Chem. Environ. Eng. 2020, 2, 100026.
- White, G.; Cabrera, C.; Palade, A.; Li, F.; Clarke, S. WasteNet: Waste classification at the edge for smart bins. arXiv 2020, arXiv:2006.05873.
- Kraft, M.; Piechocki, M.; Ptak, B.; Walas, K. Autonomous, onboard vision-based trash and litter detection in low altitude aerial images collected by an unmanned aerial vehicle. Remote Sens. 2021, 13, 965.
- Patel, D.; Patel, F.; Patel, S.; Patel, N.; Shah, D.; Patel, V. Garbage Detection using Advanced Object Detection Techniques. In Proceedings of the 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, 25–27 March 2021; pp. 526–531.
- Chazhoor, A.A.P.; Ho, E.S.; Gao, B.; Woo, W.L. Deep transfer learning benchmark for plastic waste classification. Intell. Robot. 2022, 2, 1–19.
- Olowolayemo, A.; Radzi, N.I.A.; Ismail, N.F. Classifying Plastic Waste Using Deep Convolutional Neural Networks for Efficient Plastic Waste Management. Int. J. Perceptive Cogn. Comput. 2022, 8, 6–15.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Tzutalin, D. LabelImg. GitHub repository, 2015. Available online: https://github.com/tzutalin/labelImg (accessed on 12 August 2022).
Table 1. Summary of related studies on waste detection and classification using computer vision.

| Author | Year | Addressed Problem | Dataset | Proposed Approach | Performance |
|---|---|---|---|---|---|
| Rad et al. [20] | 2017 | Litter classification and detection | Custom dataset (4000 images) | OverFeat-GoogLeNet | Precision of 63.7% |
| Ibrahim et al. [21] | 2019 | Waste contamination detection | ContamiNet (30,000 images) | CNN | AUC of 0.88 |
| Kumar et al. [22] | 2020 | Waste classification and detection | Custom dataset (8000 images) | YOLOv3 | mAP of 95% |
| Li et al. [23] | 2020 | Water surface garbage detection | Custom dataset (1200 images) | YOLOv3 | mAP of 91% |
| Panwar et al. [24] | 2020 | Water waste detection | AquaVision (369 images) | RetinaNet | mAP of 81% |
| White et al. [25] | 2020 | Waste object classification | TrashNet (2500 images) | WasteNet (VGG-16 based) | Prediction accuracy of 97% |
| Kraft et al. [26] | 2021 | Trash object detection | UAVVaste (774 images) | YOLOv4, EfficientDet, SSD | YOLOv4 mAP of 78% |
| Patel et al. [27] | 2021 | Garbage detection | Custom dataset (544 images) | YOLOv5, EfficientDet, RetinaNet, CenterNet | YOLOv5 mAP of 61% |
| Chazhoor et al. [28] | 2022 | Plastic waste classification | WaDaBa (4000 images) | AlexNet, ResNeXt, ResNet, MobileNet, DenseNet | ResNeXt AUC of 94.8% |
| Radzi et al. [29] | 2022 | Plastic waste classification | Custom dataset (2110 images) | ResNet50 | Accuracy of 94% |
| Ziouzios et al. [5] | 2022 | Waste detection and classification | Custom dataset (4000 images) | YOLOv4 | mAP of 92% |
Table 2. Specifications of the NVIDIA Jetson Nano and NVIDIA Jetson TX2 edge-computers.

| Specification | NVIDIA Jetson Nano | NVIDIA Jetson TX2 |
|---|---|---|
| GPU | 128-core NVIDIA Maxwell | 256-core NVIDIA Pascal |
| CPU | Quad-core ARM A57 | Quad-core ARM A57 |
| Memory | 2 GB 64-bit LPDDR4, 25.6 GB/s | 8 GB 128-bit LPDDR4, 59.7 GB/s |
| Storage | 16 GB eMMC 5.1 | 32 GB eMMC 5.1 |
| Dimensions | 69.6 mm × 45 mm | 87 mm × 50 mm |
| Performance | 472 GFLOPS | 1.3 TFLOPS |
| Power | 5/10 W | 7.5/15 W |
| Temperature range | −25 °C to 80 °C | −25 °C to 80 °C |
| Model | Model Size (Unpruned) | Model Size (Pruned) | Trainable Parameters (Unpruned) | Trainable Parameters (Pruned) | Training Time per Epoch (Unpruned) | Training Time per Epoch (Pruned) |
|---|---|---|---|---|---|---|
| Faster R-CNN (DarkNet53 backbone) | 344 MB | 266 MB | 42 M | 33 M | 95 s | 90 s |
| Faster R-CNN (ResNet50 backbone) | 342 MB | 97 MB | 42 M | 24 M | 100 s | 87 s |
| Faster R-CNN (MobileNet backbone) | 45 MB | 24 MB | 5 M | 3 M | 55 s | 47 s |
| YOLOv4 (CSPDarkNet53 backbone) | 594 MB | 402 MB | 49 M | 38 M | 132 s | 120 s |
| YOLOv4 (CSPDarkNet_tiny backbone) | 70 MB | 69 MB | 5 M | 4.9 M | 59 s | 48 s |
Model | Training Loss | mAP | Precision | Recall |
---|---|---|---|---|
Faster R-CNN Models | ||||
MobileNet backbone | 0.1044 | 63% | 0.4569 | 0.7425 |
DarkNet53 backbone | 0.1915 | 61% | 0.1448 | 0.8517 |
ResNet50 backbone | 0.2106 | 63% | 0.2961 | 0.7871 |
YOLOv4 Models | ||||
CSPDarkNet53 backbone | 21.83 | 67% | NA | NA |
CSPDarkNet_tiny backbone | 18.73 | 65% | NA | NA |
Model | mAP |
---|---|
Faster R-CNN (DarkNet53 backbone) | 50% |
Faster R-CNN (ResNet50 backbone) | 64% |
Faster R-CNN (MobileNet backbone) | 59% |
YOLOv4 (CSPDarkNet53 backbone) | 62% |
YOLOv4 (CSPDarkNet_tiny backbone) | 63% |
| Model | FPS | Avg GPU (%) | Avg CPU (%) | Max CPU Temp (°C) | Max GPU Temp (°C) | Avg Power (W) |
|---|---|---|---|---|---|---|
| Jetson Nano 2 GB | | | | | | |
| Faster R-CNN models | | | | | | |
| DarkNet53 | 0.4 | 98.945 | 9.435 | 54.5 | 53 | NA |
| MobileNet | 3.2 | 98.29 | 11.40 | 54.5 | 53 | NA |
| ResNet50 | 0.88 | 98.82 | 10.42 | 54.5 | 52.5 | NA |
| YOLOv4 models | | | | | | |
| CSPDarkNet_tiny | 16.4 | 97.66 | 21.15 | 57 | 55 | NA |
| CSPDarkNet53 | 2.4 | 98.70 | 11.63 | 56.5 | 54.5 | NA |
| Jetson TX2 | | | | | | |
| Faster R-CNN models | | | | | | |
| DarkNet53 | 1.8 | 98.82 | 8.72 | 58 | 66.5 | 16.942 |
| MobileNet | 8.4 | 98.08 | 11.05 | 54.5 | 59 | 13.863 |
| ResNet50 | 2.6 | 98.77 | 9.07 | 58.5 | 65 | 16.131 |
| YOLOv4 models | | | | | | |
| CSPDarkNet_tiny | 24.8 | 58.5 | 16.21 | 50.5 | 53.5 | 10.680 |
| CSPDarkNet53 | 6.6 | 98.60 | 11.23 | 58.5 | 63.5 | 15.719 |
| Metric | Base Model | Retrained Model | Percentage Change |
|---|---|---|---|
| mAP | 58% | 69% | 11% increase |
| False Positives (FP) | 176 | 112 | 36.6% decrease |
| False Negatives (FN) | 239 | 218 | 8.29% decrease |
| True Positives (TP) | 338 | 359 | 6.21% increase |
| Cost Type | Category | Item | Quantity | Purpose | Cost (USD) |
|---|---|---|---|---|---|
| Non-recurring (one-time) cost | Hardware cost | Mitsubishi analog camera | 1 | Capture the visual data from the hopper of the garbage truck. | ≈100 |
| | | Analog-to-digital video converter | 1 | Convert the analog video feed into digital for processing on the edge-computer. | ≈20 |
| | | NVIDIA edge-computer | 1 | Perform the computer vision operations. | ≈2000 |
| | | Power adapter | 1 | Power the edge-computer hardware. | ≈100 |
| | | External USB storage | 1 | Store the contamination-detected images for future analysis. | ≈25 |
| | Software cost | AI model development | NA | Train the AI models to process the raw data and extract waste-contamination-relevant information. | ≈10,000 |
| | | Software implementation | NA | Deploy the trained AI model(s) on the edge-computing hardware. | ≈5000 |
| | Services cost | Installation | NA | Visit potential sites and set up the hardware system. | ≈5000 |
| Recurring cost | Software maintenance | Tuning of computer vision models | NA | Update and optimize the computer vision models and overall software firmware; most of the listed price is the anticipated cost of AI model fine-tuning and performance improvements, priced for twice-a-year updates. | ≈10,000 |
| | Hardware maintenance | Replacement of hardware components | NA | Manage replacement and/or repair of hardware components, including the camera, edge-computer, cables, and USB drive; anticipated component life is 10 years, with the price pro-rated to one year. | ≈225 |
| | Operational cost | Operations and logistics to maintain the hardware | NA | Perform on-site maintenance operations, including labor and logistics; priced for twice-a-year maintenance. | ≈5000 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).