Water Color Identification System for Monitoring Aquaculture Farms
Figure 1. Our own designed color checkerboard (a) and the referenced ColorChecker Passport (b).
Figure 2. Two real scenes in which the color checkerboard is placed.
Figure 3. The proposed system established at the experimental site.
Figure 4. Results of ArUco marker detection, and extraction of the ROI and small color blocks.
Figure 5. Images of a real scene: (a) original capture, and (b) after color correction.
Figure 6. Three classes to be excluded: (a) waterwheel, (b) checkerboard, and (c) foams.
Figure 7. Result of YOLACT-based instance segmentation.
Figure 8. Close-up of an image patch and its distances from the foams.
Figure 9. Fuzzy sets of the antecedent variables: (a) input 1, and (b) input 2.
Figure 10. Fuzzy sets of the consequent variable.
Figure 11. Deep CNN with a ResNet-50 backbone for water color identification.
Figure 12. Results of color correction on site: original images (top) and color-corrected images (bottom).
Figure 13. More results of color correction on site: original images (top) and color-corrected images (bottom).
Figure 14. Distribution of the 24 colors in the ab plane: (a) original vs. ideal colors, and (b) corrected vs. ideal colors.
Figure 15. Results of image segmentation from different scenes at three sites.
Figure 16. Validation accuracy per epoch.
Figure 17. Block diagram of the proposed system implemented at the experimental sites.
Figure 18. Input–output mapping surface of the proposed FIS.
Abstract
1. Introduction
- We designed a color checkerboard based on 24 colors commonly used for color correction, and adopted it to correct the colors of images captured under various lighting conditions in outdoor environments.
- We proposed a scheme for extracting candidate patches from the water regions in an image; these candidate patches are then used to identify the representative color of a pond. The scheme consists of two main steps: semantic segmentation, and fuzzy inference to determine the degree to which a given image patch qualifies as a candidate.
- We implemented a simple color identification model with a deep CNN. The model outputs the probability that the input patch belongs to each of the predefined color categories.
2. The Proposed System and Algorithm
2.1. System Overview
2.2. Main Algorithm of Water Color Identification
2.2.1. Image Color Correction
Algorithm 1: Proposed color correction algorithm (pseudocode)

Input: an image capture I, of width W and height H
Output: the image after color correction

1: p1, p2, p3, p4 ← ∅  // Initialize the four marker positions
2: dict ← ArUco marker dictionary  // Initialize the ArUco marker dictionary
3: (positions, IDs) ← DetectMarkers(I, dict)  // Detect ArUco markers; return their positions and IDs (OpenCV)
4: T ← FindHomography(positions, reference corners)  // Find the transformation matrix (geometric homography) (OpenCV)
5: ROI ← WarpPerspective(I, T)  // Normalize the ROI to a fixed size in pixels (OpenCV)
6: Detect the 24 color blocks in the normalized ROI  // Use OpenCV APIs to detect the color blocks
7: for each color block i in 1 to 24 do:
8:   Calculate its mean color value (Ri, Gi, Bi)
9:   Convert (Ri, Gi, Bi) to the working color space through Equations (1)–(3)
10: end for
11: Calculate the color transformation matrix M through Equations (4) and (6)
12: for every pixel in I do:
13:   Convert its value to the working color space
14:   Compute the color-corrected value by multiplying with M
15:   Transform the corrected value back to the original color model
16: end for
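For concreteness, the following is a minimal Python/OpenCV sketch of steps 1–5 (marker detection, homography estimation, and ROI normalization). The dictionary choice, marker-ID ordering, and ROI size are illustrative assumptions, not necessarily the authors' exact configuration.

```python
import cv2
import numpy as np

# Assumed configuration: dictionary and normalized ROI size are illustrative.
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
ROI_W, ROI_H = 600, 400  # normalized checkerboard ROI size (assumed)

def extract_checkerboard_roi(img):
    """Steps 1-5: detect the four corner markers and rectify the checkerboard."""
    corners, ids, _ = cv2.aruco.detectMarkers(img, ARUCO_DICT)
    if ids is None or len(ids) < 4:
        return None  # checkerboard not (fully) visible
    # Use the first corner of each marker, ordered by marker ID
    # (assumes IDs are placed clockwise starting from the top-left).
    pts = {int(i): c[0][0] for i, c in zip(ids.flatten(), corners)}
    src = np.float32([pts[k] for k in sorted(pts)[:4]])
    dst = np.float32([[0, 0], [ROI_W, 0], [ROI_W, ROI_H], [0, ROI_H]])
    T, _ = cv2.findHomography(src, dst)                 # step 4: homography
    return cv2.warpPerspective(img, T, (ROI_W, ROI_H))  # step 5: normalize ROI
```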
2.2.2. Image Segmentation and Candidate Patch Extraction
- Stage 1: Segmentation of water region.
- Stage 2: Extraction of candidates from image patches.
“The farther the patch is from the foams and the flatter its texture is, the higher the probability that the patch will be selected as a candidate”.
Algorithm 2: Proposed water-only candidate extraction algorithm (pseudocode)

Input: a color-corrected image I, of width W and height H
Output: candidate water-only patches

1: Initialize the parameters (patch size, number of candidates N)
2: n ← 0  // Initialize a counter
3: T ← threshold  // Initialize the threshold
4: while n < N do:
5:   (x, y) ← random pixel in the lower 2/3 part of I
6:   Determine the coordinates of the patch corners from (x, y)
7:   Crop the patch P whose upper-left and lower-right corners they define
8:   for each foam instance i in 1 to K do:
9:     d ← minimum of the distances between P and the foams  // Set the minimum of the distances to be d
10:  end for
11:  Calculate the texture measure of P through Equations (7) and (8)
12:  Feed the input pair (distance, texture) into the proposed FIS
13:  Derive the crisp output u of the FIS
14:  if u > T then do:
15:    Add P to the candidate set
16:    n ← n + 1
17:  end if
18: end while
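A compact Python sketch of this sampling loop is given below. The FIS is passed in as a callable; the patch size, threshold, and the texture measure (a simple stand-in for Equations (7) and (8), which are not reproduced here) are assumptions for illustration.

```python
import cv2
import numpy as np

def extract_candidates(img, foam_centroids, fis, patch=64,
                       n_needed=10, threshold=0.6, max_trials=10000):
    """Sample random patches from the lower 2/3 of a color-corrected frame
    and keep those the FIS scores above a threshold (values illustrative)."""
    rng = np.random.default_rng(0)
    h, w = img.shape[:2]
    candidates = []
    for _ in range(max_trials):
        if len(candidates) == n_needed:
            break
        # Step 5: random upper-left corner restricted to the lower two-thirds.
        y = int(rng.integers(h // 3, h - patch))
        x = int(rng.integers(0, w - patch))
        p = img[y:y + patch, x:x + patch]
        # Steps 8-10: minimum distance from the patch center to any foam blob.
        center = np.array([x + patch / 2.0, y + patch / 2.0])
        d = min(float(np.linalg.norm(center - f)) for f in foam_centroids)
        # Step 11: texture measure; gray-level std is used here as a simple
        # stand-in for Equations (7) and (8).
        texture = float(np.std(cv2.cvtColor(p, cv2.COLOR_BGR2GRAY)))
        # Steps 12-14: fuzzy inference on (distance, texture), then threshold.
        if fis(d, texture) > threshold:
            candidates.append(p)
    return candidates
```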
2.2.3. Color Identification
3. Implementation and Experimental Results
3.1. Results of Color Correction
3.2. Results of Image Segmentation
3.3. Results of Color Identification
4. Discussion
4.1. Representative Color Determination
4.2. Overall System Construction on Site
4.3. Further Discussion on Proposed FIS
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Stankus, A. State of world aquaculture 2020 and regional reviews: FAO webinar series. FAO Aquac. Newsl. 2021, 63, 17–18. [Google Scholar]
- Mitra, S.; Khan, M.A.; Nielsen, R.; Islam, N. Total factor productivity and technical efficiency differences of aquaculture farmers in Bangladesh: Do environmental characteristics matter? J. World Aquac. Soc. 2020, 51, 918–930. [Google Scholar] [CrossRef]
- Alam, F. Measuring technical, allocative and cost efficiency of pangas (Pangasius hypophthalmus: Sauvage 1878) fish farmers of Bangladesh. Aquac. Res. 2011, 42, 1487–1500. [Google Scholar] [CrossRef]
- Khan, A.; Guttormsen, A.; Roll, K.H. Production risk of pangas (Pangasius hypophthalmus) fish farming. Aquac. Econ. Manag. 2018, 22, 192–208. [Google Scholar] [CrossRef]
- Kumar, G.; Engle, C.R. Technological advances that led to growth of shrimp, salmon, and tilapia farming. Rev. Fish. Sci. Aquac. 2016, 24, 136–152. [Google Scholar] [CrossRef]
- Iliyasu, A.; Mohamed, Z.A. Evaluating contextual factors affecting the technical efficiency of freshwater pond culture systems in Peninsular Malaysia: A two-stage DEA approach. Aquac. Rep. 2016, 3, 12–17. [Google Scholar] [CrossRef]
- Zion, B. The use of computer vision technologies in aquaculture—A review. Comput. Electron. Agric. 2012, 88, 125–132. [Google Scholar] [CrossRef]
- Vo, T.T.E.; Ko, H.; Huh, J.H.; Kim, Y. Overview of smart aquaculture system: Focusing on applications of machine learning and computer vision. Electronics 2021, 10, 2882. [Google Scholar] [CrossRef]
- Le, J.; Xu, L. An automated fish counting algorithm in aquaculture based on image processing. In Proceedings of the 2016 International Forum on Mechanical, Control and Automation, Shenzhen, China, 30–31 December 2016. [Google Scholar]
- Liu, L.; Lu, H.; Cao, Z.; Xiao, Y. Counting fish in sonar images. In Proceedings of the 25th IEEE International Conference on Image Processing, Athens, Greece, 7–10 October 2018. [Google Scholar]
- Wang, X.; Yang, W. Water quality monitoring and evaluation using remote sensing techniques in China: A systematic review. Ecosyst. Health Sustain. 2019, 5, 47–56. [Google Scholar] [CrossRef]
- Garcia, R.; Prados, R.; Quintana, J.; Tempelaar, A.; Gracias, N.; Rosen, S.; Vågstøl, H.; Løvall, K. Automatic segmentation of fish using deep learning with application to fish size measurement. ICES J. Mar. Sci. 2020, 77, 1354–1366. [Google Scholar] [CrossRef]
- Nieto-Navarro, J.T.; Zetina-Rejón, M.; Arreguín-Sánchez, F.; Arcos-Huitrón, N.E.; Peña-Messina, E. Length–weight relationship of demersal fish from the eastern coast of the mouth of the Gulf of California. J. Fish. Aquat. Sci. 2010, 5, 494–502. [Google Scholar]
- Zhang, L.; Wang, J.; Duan, Q. Estimation for fish mass using image analysis and neural network. Comput. Electron. Agric. 2020, 173, 105439. [Google Scholar] [CrossRef]
- Fernandes, A.F.A.; Turra, E.M.; de Alvarenga, E.R.; Passafaro, T.L.; Lopes, F.B.; Alves, G.F.O.; Singh, V.; Rosa, G.J.M. Deep Learning image segmentation for extraction of fish body measurements and prediction of body weight and carcass traits in Nile tilapia. Comput. Electron. Agric. 2020, 170, 105274. [Google Scholar] [CrossRef]
- Balaban, M.O.; Ünal Şengör, G.F.; Soriano, M.G.; Ruiz, E.G. Using image analysis to predict the weight of Alaskan salmon of different species. J. Food Sci. 2010, 75, E157–E162. [Google Scholar] [CrossRef] [PubMed]
- Webb, M.A.H.; Van Eenennaam, J.P.; Crossman, J.A.; Chapman, F.A. A practical guide for assigning sex and stage of maturity in sturgeons and paddlefish. J. Appl. Ichthyol. 2019, 35, 169–186. [Google Scholar] [CrossRef]
- Petochi, B.T.; Di Marco, P.; Donadelli, V.; Longobardi, A.; Corsalini, I.; Bertotto, D.; Finoia, M.G.; Marino, G. Sex and reproductive stage identification of sturgeon hybrids (Acipenser naccarii × Acipenser baerii) using different tools: Ultrasounds, histology and sex steroids. J. Appl. Ichthyol. 2011, 27, 637–642. [Google Scholar] [CrossRef]
- Yang, L.; Liu, Y.; Yu, H.; Fang, X.; Song, L.; Li, D.; Chen, Y. Computer vision models in intelligent aquaculture with emphasis on fish detection and behavior analysis: A review. Arch. Comput. Methods Eng. 2020, 28, 2785–2816. [Google Scholar] [CrossRef]
- Tharwat, A.; Hemedan, A.A.; Hassanien, A.E.; Gabel, T. A biometric-based model for fish species classification. Fish. Res. 2018, 204, 324–336. [Google Scholar] [CrossRef]
- Alver, M.O.; Tennøy, T.; Alfredsen, J.A.; Øie, G. Automatic measurement of rotifer Brachionus plicatilis densities in first feeding tanks. Aquac. Eng. 2007, 36, 115–121. [Google Scholar] [CrossRef]
- Mane, A.M.; Pattanaik, S.S.; Jadhav, R.; Jena, A.K. Pond coloration, interpretation and possible measures of rectification for sustainable aquaculture practice. Aquac. Times 2017, 3, 2394–2398. [Google Scholar]
- Alam, M.; Khan, M.; Huq, A.S.M. Technical efficiency in tilapia farming of Bangladesh: A stochastic frontier production approach. Aquac. Int. 2012, 20, 619–634. [Google Scholar] [CrossRef]
- Finlayson, G.; Gong, H.; Fisher, R.B. Color homography: Theory and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 41, 20–33. [Google Scholar] [CrossRef] [PubMed]
- Nomura, K.; Sugimura, D.; Hamamoto, T. Color correction of underwater images based on multi-illuminant estimation with exposure bracketing imaging. In Proceedings of the 2017 IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017. [Google Scholar]
- Hao, S.; Zhou, Y.; Guo, Y. A brief survey on semantic segmentation with deep learning. Neurocomputing 2020, 406, 302–321. [Google Scholar] [CrossRef]
- Thoma, M. A survey of semantic segmentation. arXiv 2016, arXiv:1602.06541. [Google Scholar]
- Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A review on deep learning techniques applied to semantic segmentation. arXiv 2017, arXiv:1704.06857. [Google Scholar]
- Lateef, F.; Ruichek, Y. Survey on semantic segmentation using deep learning techniques. Neurocomputing 2019, 338, 321–348. [Google Scholar] [CrossRef]
- Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93. [Google Scholar] [CrossRef]
- Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact: Real-time instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019. [Google Scholar]
- Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact++: Better real-time instance segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1108–1121. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
- X-Rite. Available online: https://www.xrite.com/categories/calibration-profiling/colorchecker-classic-family/colorchecker-passport-photo-2 (accessed on 29 August 2022).
- Detection of ArUco Markers. Available online: https://docs.opencv.org/4.5.0/d5/dae/tutorial_aruco_detection.html (accessed on 29 August 2022).
- Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Marín-Jiménez, M.J. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016. [Google Scholar]
- Zimmermann, H.J. Fuzzy Set Theory and Its Applications, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 232–239. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
The 24 reference colors of the designed checkerboard (ideal RGB values):

| [R,G,B] | [R,G,B] | [R,G,B] | [R,G,B] |
|---|---|---|---|
| [111,77,71] | [189,149,132] | [93,116,151] | [86,106,67] |
| [118,123,164] | [92,181,170] | [206,121,37] | [50,85,164] |
| [172,75,85] | [88,62,103] | [164,201,100] | [212,149,55] |
| [20,46,123] | [67,140,76] | [162,46,55] | [205,185,0] |
| [194,83,145] | [47,144,170] | [229,225,230] | [201,201,200] |
| [158,161,162] | [118,119,123] | [83,86,90] | [55,58,54] |
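Given these ideal values and the 24 mean colors measured from the detected blocks, the correction matrix of Algorithm 1 (Equations (4) and (6)) can be approximated with an ordinary least-squares fit. The sketch below operates directly in RGB, whereas the paper first converts to another color space, so treat it as an illustration of the idea rather than the exact method.

```python
import numpy as np

# Ideal reference colors from the table above, row by row.
IDEAL = np.array([
    [111, 77, 71], [189, 149, 132], [93, 116, 151], [86, 106, 67],
    [118, 123, 164], [92, 181, 170], [206, 121, 37], [50, 85, 164],
    [172, 75, 85], [88, 62, 103], [164, 201, 100], [212, 149, 55],
    [20, 46, 123], [67, 140, 76], [162, 46, 55], [205, 185, 0],
    [194, 83, 145], [47, 144, 170], [229, 225, 230], [201, 201, 200],
    [158, 161, 162], [118, 119, 123], [83, 86, 90], [55, 58, 54],
], dtype=float)

def fit_correction_matrix(measured):
    """Least-squares 3x3 matrix M such that measured @ M.T ~= IDEAL,
    where `measured` is the (24, 3) array of mean block colors."""
    M, *_ = np.linalg.lstsq(measured, IDEAL, rcond=None)  # (3, 3)
    return M.T

def apply_correction(img, M):
    """Apply M to every pixel (Algorithm 1, steps 12-16)."""
    flat = img.reshape(-1, 3).astype(float)
    corrected = np.clip(flat @ M.T, 0, 255)
    return corrected.reshape(img.shape).astype(np.uint8)
```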
Breakpoints of the antecedent fuzzy sets: input 1 at 0, 25, 50, 75, and 100; input 2 at 0, 45, 90, 135, and 180.
Fuzzy rule table of the proposed FIS (VS = very small, S = small, M = medium, L = large, VL = very large):

|  | VS | S | M | L | VL |
|---|---|---|---|---|---|
| VS | M | L | L | VL | VL |
| S | S | M | L | L | VL |
| M | S | S | M | L | L |
| L | VS | S | S | M | L |
| VL | VS | VS | S | S | M |
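This rule base can be exercised with a small stand-alone implementation. The sketch below uses triangular memberships at the breakpoints listed above, product inference, and weighted-average defuzzification; the output centers on [0, 1], the assignment of inputs to rows versus columns, and the defuzzifier are assumptions, since the paper's exact membership shapes are defined in Figures 9 and 10.

```python
import numpy as np

# Membership centers: inputs taken from the breakpoint lists above; the
# consequent centers on [0, 1] are an assumption for illustration.
CENTERS_1 = [0, 25, 50, 75, 100]    # input 1 (assumed: table rows, VS..VL)
CENTERS_2 = [0, 45, 90, 135, 180]   # input 2 (assumed: table columns, VS..VL)
CENTERS_O = [0.0, 0.25, 0.5, 0.75, 1.0]

# RULES[i][j]: consequent index (0 = VS .. 4 = VL) from the table above.
RULES = [[2, 3, 3, 4, 4],
         [1, 2, 3, 3, 4],
         [1, 1, 2, 3, 3],
         [0, 1, 1, 2, 3],
         [0, 0, 1, 1, 2]]

def mu(x, centers, k):
    """Triangular membership centered at centers[k]; shoulders at the ends."""
    c = centers[k]
    left = centers[k - 1] if k > 0 else None
    right = centers[k + 1] if k < len(centers) - 1 else None
    if (left is None and x <= c) or (right is None and x >= c) or x == c:
        return 1.0
    if left is not None and left < x < c:
        return (x - left) / (c - left)
    if right is not None and c < x < right:
        return (right - x) / (right - c)
    return 0.0

def fis(x1, x2):
    """Product inference over all 25 rules, weighted-average defuzzification."""
    num = den = 0.0
    for i in range(5):
        for j in range(5):
            w = mu(x1, CENTERS_1, i) * mu(x2, CENTERS_2, j)
            num += w * CENTERS_O[RULES[i][j]]
            den += w
    return num / den if den else 0.0

print(round(fis(80, 30), 3))  # example crisp output for (x1, x2) = (80, 30)
```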
| Index | Major Color | Major Code | Sub Color | Sub Code | Full Code |
|---|---|---|---|---|---|
| 1 | Green | 001 | Blue-green | 001 | 001-001 |
| 2 | Green | 001 | Dark green | 002 | 001-002 |
| 3 | Green | 001 | Green | 003 | 001-003 |
| 4 | Green | 001 | Light green | 004 | 001-004 |
| 5 | Brown | 002 | Dark brown | 001 | 002-001 |
| 6 | Brown | 002 | Brown | 002 | 002-002 |
| 7 | Brown | 002 | Light brown | 003 | 002-003 |
| 8 | Red | 003 | Dark red | 001 | 003-001 |
| 9 | Red | 003 | Pink | 002 | 003-002 |
| 10 | Red | 003 | Light red | 003 | 003-003 |
| 11 | Red | 003 | Red | 004 | 003-004 |
| 12 | Yellow | 004 | Tawny | 001 | 004-001 |
| 13 | Yellow | 004 | Dark yellow | 002 | 004-002 |
| 14 | Yellow | 004 | Light yellow | 003 | 004-003 |
| 15 | Yellow | 004 | Yellow | 004 | 004-004 |
| 16 | Dark | 005 | Dark gray | 001 | 005-001 |
| 17 | Dark | 005 | Dark | 002 | 005-002 |
| 18 | Dark | 005 | Gray | 003 | 005-003 |
| 19 | Blue | 006 | Blue | 001 | 006-001 |
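In software, a prediction from the 19-way classifier can be mapped back to these codes with a simple lookup. The helper below is hypothetical, with index 0 corresponding to table row 1.

```python
# Mapping from the 19 class indices to the two-level color codes above.
FULL_CODES = [
    "001-001", "001-002", "001-003", "001-004",  # greens
    "002-001", "002-002", "002-003",             # browns
    "003-001", "003-002", "003-003", "003-004",  # reds
    "004-001", "004-002", "004-003", "004-004",  # yellows
    "005-001", "005-002", "005-003",             # darks/grays
    "006-001",                                   # blue
]

MAJOR_NAMES = {"001": "Green", "002": "Brown", "003": "Red",
               "004": "Yellow", "005": "Dark", "006": "Blue"}

def decode(class_index):
    """Return (major color name, full code) for a predicted class index."""
    code = FULL_CODES[class_index]
    return MAJOR_NAMES[code.split("-")[0]], code
```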
| Model | Backbone | Time (ms) | FPS | AP50 | AP75 |
|---|---|---|---|---|---|
| YOLACT-400 | ResNet-101 | 18.34 | 54.5 | 40.8 | 23.4 |
| YOLACT-550 | ResNet-101 | 24.51 | 40.8 | 47.3 | 29.8 |
| YOLACT-700 | ResNet-101 | 33.77 | 29.6 | 49.3 | 30.6 |
| YOLACT-550++ | ResNet-101 | 28.63 | 34.9 | 52.4 | 34.9 |
| YOLACT-700++ | ResNet-101 | 36.57 | 27.3 | 54.1 | 36.5 |
| YOLACT-700++ | ResNet-200 | 42.02 | 23.8 | 56.7 | 38.4 |
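Whichever YOLACT variant is chosen, its per-instance masks are combined the same way: everything not belonging to the three excluded classes (waterwheel, checkerboard, foams) is treated as water. A small numpy sketch follows; the mask layout and class names are assumptions about the segmentation model's output format.

```python
import numpy as np

EXCLUDED = {"waterwheel", "checkerboard", "foam"}

def water_region_mask(frame_shape, masks, class_names):
    """Water region = whole frame minus all excluded instance masks.
    `masks` is assumed to be an (N, H, W) boolean array, with one class
    name per instance, as produced by a model such as YOLACT."""
    water = np.ones(frame_shape[:2], dtype=bool)
    for m, name in zip(masks, class_names):
        if name in EXCLUDED:
            water &= ~m.astype(bool)
    return water
```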
Feature Extractor: ResNet-50 Encoder

| Layer Name | Kernel Size | Stride | Channels | Repeat Times |
|---|---|---|---|---|
| Convolution | 7×7 | 2 | 3→64 | 1 |
| Pooling | 3×3 | 2 | 64→64 | 1 |
| Res-Block 1 | 1×1, 3×3, 1×1 | 1 | 64→256 | 3 |
| Res-Block 2 | 1×1, 3×3, 1×1 | 1 | 256→512 | 4 |
| Res-Block 3 | 1×1, 3×3, 1×1 | 1 | 512→1024 | 6 |
| Res-Block 4 | 1×1, 3×3, 1×1 | 1 | 1024→2048 | 3 |
Classifier: Fully Connected Neural Network

| Layer Name | Input Dimension | Output Dimension |
|---|---|---|
| FC-1 | 2048 | 1000 |
| FC-2 | 1000 | 100 |
| FC-3 | 100 | 19 |
| Softmax | 19 | 19 |
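The two tables combine into a single network. A PyTorch sketch is given below; torchvision's resnet50 stands in for the encoder, and the ReLU activations between the FC layers and the input resolution are assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn
from torchvision import models

class WaterColorNet(nn.Module):
    """ResNet-50 encoder followed by the FC head from the tables above
    (2048 -> 1000 -> 100 -> 19)."""
    def __init__(self, num_classes=19):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to the global average pool (2048-d features).
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.classifier = nn.Sequential(
            nn.Linear(2048, 1000), nn.ReLU(),
            nn.Linear(1000, 100), nn.ReLU(),
            nn.Linear(100, num_classes),
        )

    def forward(self, x):
        f = self.encoder(x).flatten(1)   # (B, 2048)
        return self.classifier(f)        # logits

logits = WaterColorNet()(torch.randn(1, 3, 224, 224))
probs = torch.softmax(logits, dim=1)     # 19-way class probabilities
```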
Per-class results of the 19-class color identification (row-normalized confusion matrix):

| True Class | Correct Rate | Misclassification Rates |
|---|---|---|
| 1 | 0.99 | 0.01 |
| 2 | 1.00 | – |
| 3 | 0.94 | 0.06 |
| 4 | 1.00 | – |
| 5 | 0.90 | 0.05, 0.05 |
| 6 | 0.94 | 0.05, 0.01 |
| 7 | 0.99 | 0.01 |
| 8 | 0.96 | 0.04 |
| 9 | 1.00 | – |
| 10 | 0.99 | 0.01 |
| 11 | 0.98 | 0.02 |
| 12 | 0.88 | 0.04, 0.08 |
| 13 | 0.91 | 0.08, 0.01 |
| 14 | 1.00 | – |
| 15 | 0.96 | 0.04 |
| 16 | 0.96 | 0.01, 0.02, 0.01 |
| 17 | 1.00 | – |
| 18 | 1.00 | – |
| 19 | 1.00 | – |
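A row-normalized matrix like this one can be reproduced from held-out predictions in a few lines. The sketch below uses scikit-learn and assumes integer class labels 0–18.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def row_normalized_confusion(y_true, y_pred, n_classes=19):
    """Confusion matrix with each row (true class) normalized to sum to 1,
    matching the table above."""
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes)).astype(float)
    return cm / cm.sum(axis=1, keepdims=True)
```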
Examples of misclassified patches:

| Patch Sample | True Class | Predicted Class |
|---|---|---|
| – | Tawny | Dark brown |
| – | Tawny | Dark yellow |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).