A Study on Energy Efficiency of a Distributed Processing Scheme for Image-Based Target Recognition for Internet of Multimedia Things
Figures:
- Computation of the 2D Haar wavelet transformation.
- The scenario of the proposed distributed processing scheme.
- Packet structure used in the scheme.
- Haar-based and GFD-based feature extraction applied to some animal classes from the dataset: (a,b) horse dataset; (c,d) wolf dataset; (e,f) tiger dataset.
- Extracting possible threshold values from a dataset.
- Energy consumption for notification in (a) the GFD-based scheme and (b) the Haar-wavelet-based scheme.
- Comparison of total per-node time and energy consumption in the processing cluster: (a) GFD-based scheme, 64 × 64 pixels, 8 bpp; (b) GFD-based scheme, 128 × 128 pixels, 8 bpp; (c) Haar-wavelet-based scheme, 64 × 64 pixels, 8 bpp; (d) Haar-wavelet-based scheme, 128 × 128 pixels, 8 bpp.
- Energy consumption and elapsed time per sensing cycle: (a) energy consumption per cycle, 64 × 64 pixels, 8 bpp; (b) energy consumption per cycle, 128 × 128 pixels, 8 bpp; (c) elapsed time per cycle, 64 × 64 pixels, 8 bpp; (d) elapsed time per cycle, 128 × 128 pixels, 8 bpp.
- Distribution of the energy consumption load for the first five sensing cycles using (a) 64 × 64 × 8 bpp images and (b) 128 × 128 × 8 bpp images.
- Energy consumption in the camera for the centralized Haar-based recognition scheme.
- Cumulative residual energy in the nodes for the Haar-based scheme with (a) 64 × 64 × 8 bpp images and (b) 128 × 128 × 8 bpp images.
- Cumulative residual energy in the nodes for the GFD-based scheme with (a) 64 × 64 × 8 bpp images and (b) 128 × 128 × 8 bpp images.
- Average elapsed time per 10th sensing cycle in the network.
Abstract
1. Introduction
2. Related Works
2.1. Compression-Based Approaches
2.2. Distributed Processing Approach
2.3. Event-Based Detection Approach
3. Methodology
3.1. Local Event Detection
3.1.1. Background Subtraction
3.1.2. ROI Extraction
3.1.3. Extraction of Feature Vectors
- Haar Wavelet Transformation (a decomposition sketch follows this list)
- Generic Fourier Descriptor (GFD)
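To make the Haar-based feature extraction concrete, the following is a minimal sketch of one level of the 2D Haar wavelet decomposition (cf. the figure "Computation of the 2D Haar wavelet transformation"). It is an illustrative reconstruction, not the authors' implementation; the function name and use of NumPy are assumptions.

```python
import numpy as np

def haar2d_level(img: np.ndarray):
    """One level of the 2D Haar transform: returns (LL, LH, HL, HH) sub-bands.

    Rows and columns are processed with the pairwise average/difference that
    defines the Haar wavelet (scaled by 1/2). Image sides must be even,
    e.g., 64x64 or 128x128 as in the experiments; the cost is O(N*N).
    """
    img = img.astype(np.float64)
    # Row step: average and difference of adjacent column pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column step applied to both row-filtered halves.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

# Example: a 64 x 64, 8 bpp frame reduces to four 32 x 32 sub-bands;
# only LL (the coarse approximation) would be forwarded for further processing.
frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
LL, LH, HL, HH = haar2d_level(frame)
assert LL.shape == (32, 32)
```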
3.1.4. Target Recognition
3.1.5. End-User Notification
- A one-byte message representing the recognition status, i.e., yes/no message;
- The extracted feature set that represents the recognized object;
- The extracted region of interest itself (a matching-and-notification sketch is given below).
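To illustrate the recognition and notification steps, here is a minimal sketch assuming that matching uses the Euclidean distance (ED) against per-class reference signatures with a decision threshold, and that the lightest notification option (the one-byte status message) is chosen. The function and variable names are illustrative assumptions, not the paper's implementation.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(feature_vec, references, threshold):
    """Return the best-matching class name, or None if no reference
    signature lies within the decision threshold (O(N) matching)."""
    best_class, best_dist = None, float("inf")
    for class_name, ref_vec in references.items():
        d = euclidean_distance(feature_vec, ref_vec)
        if d < best_dist:
            best_class, best_dist = class_name, d
    return best_class if best_dist <= threshold else None

def build_notification(match):
    """One-byte yes/no status message; the richer payload options above
    (feature set or ROI) would be appended if the application requires them."""
    return bytes([1 if match else 0])
```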
3.2. Distributed Processing Cluster Design
3.2.1. Network Model
- The network consists of stationary wireless camera sensors, one per processing cluster, and regular static sensor nodes used for processing and communication tasks.
- The camera sensors are placed at predefined locations suitable for target detection.
- The network nodes are densely and randomly deployed to ensure complete coverage of the area.
- The camera node leads processing-cluster initiation and termination. Cooperating nodes are selected mainly on the basis of the highest residual energy level.
- The network will be deployed in a low-dynamic environment, such as the natural habitat of wild animals.
- The camera node broadcasts an [ENERGY_REQUEST] packet to its neighboring nodes to initiate the creation of a processing cluster.
- The processing nodes in the camera's neighborhood then respond with their residual energy using [ENERGY_RESPONSE] packets. The camera keeps a candidate list of the responding nodes ranked by energy level and selects the two nodes with the highest residual energy, P1 and P2, as members of the processing cluster.
- The camera node uses a [JOIN] packet to delegate a specific task to each cooperating node of the cluster.
- The selected cooperating nodes P1 and P2 confirm their participation by sending [FORM] acknowledgment packets; otherwise, the procedure returns to the first step.
- Once the processing cluster is formed, the camera applies the object extraction method to the detected object and sends the result to the first node (P1) for further processing through the [ROI] packet.
- P1 and P2 then cooperate on feature extraction, matching, and notification: P1 performs feature extraction on the ROI received from the camera, and P2 carries out the matching and notification step. Once the object is recognized, P2 notifies the camera.
- At the end of the processing cycle, the camera notifies the end-user if the extracted signature matches a reference signature; otherwise, the detected object is discarded. Finally, the camera terminates the current sensing cycle and starts a new one. (A minimal sketch of this message exchange follows below.)
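The sketch below mirrors the ordering of the packet exchange described above from the camera's point of view. The radio primitives (broadcast, send, receive_all), packet layout, and timeout are placeholders introduced only for illustration; they are not part of the scheme's specification.

```python
# Illustrative camera-side control loop for cluster formation (placeholder
# radio primitives: broadcast(), send(), receive_all(); not the paper's code).

def form_cluster_and_dispatch(radio, roi, timeout_s=0.5):
    # Step 1: ask neighboring nodes for their residual energy.
    radio.broadcast({"type": "ENERGY_REQUEST"})

    # Step 2: collect ENERGY_RESPONSE packets and rank by residual energy.
    responses = radio.receive_all("ENERGY_RESPONSE", timeout=timeout_s)
    candidates = sorted(responses, key=lambda r: r["energy"], reverse=True)
    if len(candidates) < 2:
        return None                      # not enough cooperating nodes
    p1, p2 = candidates[0]["node"], candidates[1]["node"]

    # Step 3: delegate tasks with JOIN packets.
    radio.send(p1, {"type": "JOIN", "task": "FEATURE_EXTRACTION"})
    radio.send(p2, {"type": "JOIN", "task": "MATCHING_AND_NOTIFICATION"})

    # Step 4: wait for FORM acknowledgments; otherwise restart from step 1.
    acks = radio.receive_all("FORM", timeout=timeout_s)
    if {a["node"] for a in acks} != {p1, p2}:
        return None                      # cluster not formed, retry later

    # Step 5: ship the extracted ROI to P1 for feature extraction.
    radio.send(p1, {"type": "ROI", "payload": roi})
    return p1, p2
```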
3.2.2. Energy Consumption Model
4. Results and Discussion
4.1. Experiment Setup and Parameters
4.2. Target Recognition and Performance Analysis
4.3. Energy Consumption Efficiency Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
References
- Alvi, S.A.; Afzal, B.; Shah, G.A.; Atzori, L.; Mahmood, W. Internet of multimedia things: Vision and Challenges. Ad Hoc Netw. 2015, 33, 87–111. [Google Scholar] [CrossRef]
- Yadav, V.; Arora, G. Analysis of Wireless Multimedia Sensor Network. In Proceedings of the 2019 2nd International Conference on Power Energy, Environment and Intelligent Control (PEEIC), Greater Noida, India, 18–19 October 2019; pp. 496–498. [Google Scholar]
- Misra, S.; Reisslein, M.; Xue, G. A survey of multimedia streaming in wireless sensor networks. IEEE Commun. Surv. Tutor. 2008, 10, 18–39. [Google Scholar] [CrossRef]
- Chiwariro, R. Quality of service-aware routing protocols in wireless multimedia sensor networks: Survey. Int. J. Inf. Technol. 2020, 14, 789–800. [Google Scholar] [CrossRef]
- Banerjee, R.; Bit, S.D. An energy-efficient image compression scheme for wireless multimedia sensor network using curve fitting technique. Wirel. Netw. 2019, 25, 167–183. [Google Scholar] [CrossRef]
- Latreche, B.; Saadi, S.; Kious, M.; Benziane, A. A novel hybrid image fusion method based on integer lifting wavelet and discrete cosine transformer for visual sensor networks. Multimed. Tools Appl. 2019, 78, 10865–10887. [Google Scholar] [CrossRef]
- Alhilal, M.S.; Soudani, A.; Al-Dhelaan, A. Image-based object identification for efficient event-driven sensing in wireless multimedia sensor networks. Int. J. Distrib. Sens. Netw. 2015, 11, 850869. [Google Scholar] [CrossRef]
- Al-Quayed, F.; Soudani, A.; Al-Ahmadi, S. Design of a New Lightweight Accurate Grid-Based Localization Method for WASN. IEEE Access 2021, 9, 42660–42673. [Google Scholar] [CrossRef]
- Keyes, L.; Winstanley, A.C. Fourier descriptors as a general classification tool for topographic shapes. In Proceedings of the Irish Machine Vision and Image Processing Conference, Dublin, Ireland, 8–9 September 1999; pp. 193–203. [Google Scholar]
- Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. In Fundamental Papers in Wavelet Theory; Princeton University Press: Princeton, NJ, USA, 2009; pp. 494–513. [Google Scholar]
- Gao, X.; Niyato, D.; Yang, K.; An, J. Cooperative Scheme for Backscatter-Aided Passive Relay Communications in Wireless-Powered D2D Networks. IEEE Internet Things J. 2022, 9, 152–164. [Google Scholar] [CrossRef]
- Gazis, V. A Survey of Standards for Machine-to-Machine and the Internet of Things. IEEE Commun. Surv. Tutor. 2017, 19, 482–511. [Google Scholar] [CrossRef]
- Kaddachi, M.L.; Soudani, A.; Lecuire, V.; Torki, K.; Makkaoui, L.; Moureaux, J.-M. Low power hardware-based image compression solution for wireless camera sensor networks. Comput. Stand. Interfaces 2012, 34, 14–23. [Google Scholar] [CrossRef]
- Akyildiz, I.F.; Su, W.; Sankarasubramaniam, Y.; Cayirci, E. Wireless sensor networks: A survey. Comput. Netw. 2002, 38, 393–422. [Google Scholar] [CrossRef]
- Sharif, A.; Potdar, V.; Chang, E. Wireless multimedia sensor network technology: A survey. In Proceedings of the 2009 7th IEEE International Conference on Industrial Informatics, Cardiff, UK, 23–26 June 2009; pp. 606–613. [Google Scholar]
- Makkaoui, L.; Lecuire, V.; Moureaux, J.-M. Fast zonal DCT-based image compression for Wireless Camera Sensor Networks. In Proceedings of the IPTA 2010, 2010 2nd International Conference on Image Processing Theory, Tools and Applications, Paris, France, 7–10 July 2010; pp. 126–129. [Google Scholar]
- Wang, Y.-C.; Hsieh, Y.-Y.; Tseng, Y.-C. Multiresolution spatial and temporal coding in a wireless sensor network for long-term monitoring applications. IEEE Trans. Comput. 2009, 58, 827–838. [Google Scholar] [CrossRef]
- Ganesan, D.; Greenstein, B.; Estrin, D.; Heidemann, J.; Govindan, R. Multiresolution storage and search in sensor networks. ACM Trans. Storage TOS 2005, 1, 277–315. [Google Scholar] [CrossRef]
- Kouadria, N.; Mechouek, K.; Harize, S.; Doghmane, N. Region-of-interest based image compression using the discrete Tchebichef transform in wireless visual sensor networks. Comput. Electr. Eng. 2019, 73, 194–208. [Google Scholar] [CrossRef]
- Wu, H.; Abouzeid, A.A. Energy efficient distributed JPEG2000 image compression in multihop wireless networks. In Proceedings of the 2004 4th Workshop on Applications and Services in Wireless Networks, ASWN 2004, Boston, MA, USA, 9–11 August 2004; pp. 152–160. [Google Scholar] [CrossRef]
- Wu, H.; Abouzeid, A.A. Energy efficient distributed image compression in resource-constrained multihop wireless networks. Comput. Commun. 2005, 28, 1658–1668. [Google Scholar] [CrossRef]
- Xu, Y.; Qi, H.; Kuruganti, P.T. Distributed computing paradigms for collaborative processing in sensor networks. In Proceedings of the GLOBECOM ‘03. IEEE Global Telecommunications Conference, IEEE, San Francisco, CA, USA, 1–5 December 2003; Volume 6, pp. 3531–3535. [Google Scholar]
- Qi, X.; Wei, P.; Liu, L.; Xie, M.; Cai, G. Wireless sensor networks energy effectively distributed target detection. Int. J. Distrib. Sens. Netw. 2014, 10, 763918. [Google Scholar] [CrossRef]
- Lin, Q.; Yang, J.; Zhang, B.; Wang, R.; Ye, N.; Yan, M. Distributed face recognition in wireless sensor networks. Int. J. Distrib. Sens. Netw. 2014, 10, 175864. [Google Scholar] [CrossRef]
- Zam, A.; Khayyambashi, M.R.; Bohlooli, A. Energy-efficient face detection and recognition scheme for wireless visual sensor networks. Appl. Soft Comput. 2020, 89, 106014. [Google Scholar] [CrossRef]
- Koyuncu, M.; Yazici, A.; Civelek, M.; Cosar, A.; Sert, M. Visual and auditory data fusion for energy-efficient and improved object recognition in wireless multimedia sensor networks. IEEE Sens. J. 2018, 19, 1839–1849. [Google Scholar] [CrossRef]
- Zuo, Z.; Lu, Q.; Luo, W. A two-hop clustered image transmission scheme for maximizing network lifetime in wireless multimedia sensor networks. Comput. Commun. 2012, 35, 100–108. [Google Scholar] [CrossRef]
- Chefi, A.; Soudani, A.; Sicard, G. Hardware compression scheme based on low complexity arithmetic encoding for low power image transmission over WSNs. AEU Int. J. Electron. Commun. 2014, 68, 193–200. [Google Scholar] [CrossRef]
- Zam, A.; Khayyambashi, M.R.; Bohlooli, A. Energy-aware strategy for collaborative target-detection in wireless multimedia sensor network. Multimed. Tools Appl. 2019, 78, 18921–18941. [Google Scholar] [CrossRef]
- Alhilal, M. Design of Low-Power Scheme for Image-Based Object Identification in WMSN. Ph.D. Thesis, King Saud University, Riyadh, Saudi Arabia, 2016. [Google Scholar]
- Alsabhan, M.; Soudani, A. Target Recognition Approach for Efficient Sensing in Wireless Multimedia Sensor Networks. In Proceedings of the 7th International Conference on Sensor Networks (SENSORNETS 2018), Funchal, Portugal, 22–24 January 2018; pp. 91–98. [Google Scholar]
- Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522. [Google Scholar] [CrossRef]
- Vasuhi, S.; Fathima, A.A.; Shanmugam, S.A.; Vaidehi, V. Object detection and tracking in secured area with wireless and multimedia sensor network. In Proceedings of the International Conference on Networked Digital Technologies, Dubai, United Arab Emirates, 24–26 April 2012; pp. 356–367. [Google Scholar]
- Wang, H.; Peng, D.; Wang, W.; Sharif, H.; Wegiel, J.; Nguyen, D.; Bowne, R.; Backhaus, C. Artificial immune system-based image pattern recognition in energy efficient wireless multimedia sensor networks. In Proceedings of the MILCOM 2008, 2008 IEEE Military Communications Conference, IEEE, San Diego, CA, USA, 16–19 November 2008; pp. 1–7. [Google Scholar]
- Aghdasi, H.S.; Yousefi, S. Enhancing lifetime of visual sensor networks with a preprocessing-based multi-face detection method. Wirel. Netw. 2018, 24, 1939–1951. [Google Scholar] [CrossRef]
- Nikolakopoulos, G.; Stavrou, P.; Tsitsipis, D.; Kandris, D.; Tzes, A.; Theocharis, T. A dual scheme for compression and restoration of sequentially transmitted images over Wireless Sensor Networks. Ad Hoc Netw. 2013, 11, 410–426. [Google Scholar] [CrossRef]
- Piccardi, M. Background subtraction techniques: A review. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), The Hague, The Netherlands, 10–13 October 2004; Volume 4, pp. 3099–3104. [Google Scholar]
- Pham, C. Communication performances of IEEE 802.15.4 wireless sensor motes for data-intensive applications: A comparison of WaspMote, Arduino MEGA, TelosB, MicaZ and iMote2 for image surveillance. J. Netw. Comput. Appl. 2014, 46, 48–59. [Google Scholar] [CrossRef]
- Rehman, Y.A.U.; Tariq, M.; Sato, T. A novel energy efficient object detection and image transmission approach for wireless multimedia sensor networks. IEEE Sens. J. 2016, 16, 5942–5949. [Google Scholar] [CrossRef]
- Adams, R.; Bischof, L. Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 641–647. [Google Scholar] [CrossRef]
- Kang, W.-X.; Yang, Q.-Q.; Liang, R.-P. The comparative research on image segmentation algorithms. In Proceedings of the 2009 First International Workshop on Education Technology and Computer Science, Wuhan, China, 7–8 March 2009; Volume 2, pp. 703–707. [Google Scholar]
- Chapa, J.O.; Rao, R.M. Optimal matched wavelet construction and its application to image pattern recognition. Wavelet Appl. II 1995, 2491, 518–529. [Google Scholar]
- Tsai, D.-M.; Chiang, C.-H. Rotation-invariant pattern matching using wavelet decomposition. Pattern Recognit. Lett. 2002, 23, 191–201. [Google Scholar] [CrossRef]
- Alwakeel, M.; Shaaban, Z. Face recognition based on Haar wavelet transform and principal component analysis via Levenberg-Marquardt backpropagation neural network. Eur. J. Sci. Res. 2010, 42, 25–31. [Google Scholar]
- Talukder, K.H.; Harada, K. Haar wavelet-based approach for image compression and quality assessment of compressed image. arXiv 2010, arXiv:1010.4084. [Google Scholar]
- Flusser, J.; Zitova, B.; Suk, T. Moments and Moment Invariants in Pattern Recognition; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
- Tang, Y.Y.; Li, B.F.; Ma, H.; Liu, J.; Leung, C.H.; Suen, C.Y. A novel approach to optical character recognition based on ring-projection-wavelet-fractal signatures. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Volume 2, pp. 325–329. [Google Scholar]
- Zhang, D.; Lu, G. Generic fourier descriptor for shape-based image retrieval. In Proceedings of the IEEE International Conference on Multimedia and Expo, ICME’02, Lausanne, Switzerland, 26–29 August 2002; Volume 1, pp. 425–428. [Google Scholar]
- Zhang, D.; Lu, G. Shape-based image retrieval using generic Fourier descriptor. Signal Process. Image Commun. 2002, 17, 825–848. [Google Scholar] [CrossRef]
- Larsson, F.; Felsberg, M.; Forssén, P. Patch Contour Matching by Correlating Fourier Descriptors. In Proceedings of the 2009 Digital Image Computing: Techniques and Applications, Melbourne, VIC, Australia, 1–3 December 2009. [Google Scholar]
- Sajjanhar, U.; Mitra, P. Distributive energy efficient adaptive clustering protocol for wireless sensor networks. In Proceedings of the 2007 International Conference on Mobile Data Management, Mannheim, Germany, 1 May 2007; pp. 326–330. [Google Scholar]
- Heinzelman, W.R.; Chandrakasan, A.; Balakrishnan, H. Energy-efficient communication protocol for wireless microsensor networks. In Proceedings of the 33rd Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2000; p. 10. [Google Scholar]
- Radhika, M.; Sivakumar, P. Video Traffic Analysis over LEACH-GA routing protocol in a WSN. Procedia Comput. Sci. 2019, 165, 701–707. [Google Scholar] [CrossRef]
- Titzer, B.L.; Lee, D.K.; Palsberg, J. Avrora: Scalable sensor network simulation with precise timing. In Proceedings of the IPSN 2005, Fourth International Symposium on Information Processing in Sensor Networks, Boise, ID, USA, 15 April 2005; pp. 477–482. [Google Scholar]
- Chang, S.-F.; Sikora, T.; Puri, A. Overview of the MPEG-7 standard. IEEE Trans. Circuits Syst. Video Technol. 2001, 11, 688–695. [Google Scholar] [CrossRef]
- Yadav, R.B.; Nishchal, N.K.; Gupta, A.K.; Rastogi, V.K. Retrieval and classification of shape-based objects using Fourier, generic Fourier, and wavelet-Fourier descriptors technique: A comparative study. Opt. Lasers Eng. 2007, 45, 695–708. [Google Scholar] [CrossRef]
- Yang, M.; Kpalma, K.; Ronsin, J. A survey of shape feature extraction techniques. Pattern Recognit. 2008, 15, 43–90. [Google Scholar]
- Muñoz, A.; Fernández-Gago, C.; López-Villa, R. A Test Environment for Wireless Hacking in Domestic IoT Scenarios. Mob. Netw. Appl. 2022, 2022, 1–10. [Google Scholar] [CrossRef]
Task | Computational Complexity
---|---
Background subtraction | O(N·N)
ROI extraction | O(N·N)
Feature extraction based on the 2D Haar wavelet transform | O(N·N)
GFD feature extraction based on the 2D Fourier transform | O(N·N·N)
Matching using ED | O(N)
Parameter | Value
---|---
Mote series | Mica2
Processor / radio | ATmega128L; 868/916 MHz radio band
Measurement flash | 512 KB
Program flash memory | 128 KB
Radio data rate | 38.4 kbaud (20/40 kbps)
Network communication model | Based on signal strength
Initial energy | 100 mJ
Electronics energy per bit, E_elec (TX, RX) | 5 × 10⁻⁵ mJ/bit
Transmit amplifier energy, free space (E_fs) | 1 × 10⁻⁸ mJ/bit/m²
Transmit amplifier energy, multipath (E_mp) | 1.3 × 10⁻¹² mJ/bit/m⁴
Data aggregation energy (E_DA) | 5 × 10⁻⁹ mJ/bit/signal
Crossover distance, d0 = sqrt(E_fs/E_mp) | ≈87.7 m
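The radio constants in this table correspond to the first-order radio energy model used in LEACH-style analyses (Heinzelman et al.): transmitting k bits over distance d costs k·E_elec + k·E_fs·d² below the crossover distance d0 and k·E_elec + k·E_mp·d⁴ above it, while receiving costs k·E_elec. The following is a minimal sketch with the tabulated constants; it is an illustration of the model, not the authors' simulator code.

```python
# First-order radio energy model implied by the parameter table
# (E_elec in mJ/bit, E_fs in mJ/bit/m^2, E_mp in mJ/bit/m^4).
E_ELEC = 5e-5      # electronics energy per bit, TX or RX
E_FS   = 1e-8      # free-space amplifier energy
E_MP   = 1.3e-12   # multipath amplifier energy
D0     = (E_FS / E_MP) ** 0.5   # crossover distance, ~87.7 m

def tx_energy(bits: int, distance_m: float) -> float:
    """Energy (mJ) to transmit `bits` over `distance_m` metres."""
    if distance_m < D0:
        return bits * (E_ELEC + E_FS * distance_m ** 2)
    return bits * (E_ELEC + E_MP * distance_m ** 4)

def rx_energy(bits: int) -> float:
    """Energy (mJ) to receive `bits`."""
    return bits * E_ELEC

# Example: a full 64 x 64 x 8 bpp image sent over 30 m costs about
# tx_energy(64 * 64 * 8, 30) ~= 1.93 mJ, plus ~1.64 mJ at the receiver.
```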
Threshold Value | Classification Performance | Horse Train | Horse Test | Wolf Train | Wolf Test | Deer Train | Deer Test | Elephant Train | Elephant Test | Rhino Train | Rhino Test | Tiger Train | Tiger Test
---|---|---|---|---|---|---|---|---|---|---|---|---|---
7 | Precision % | 94.4 | 94.4 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 95.7 | 97.7
7 | Recall % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
5 | Precision % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
5 | Recall % | 94.1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
3 | Precision % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
3 | Recall % | 94.1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
2 | Precision % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
2 | Recall % | 94.1 | 95 | 97 | 97 | 100 | 100 | 95 | 95 | 100 | 100 | 95.5 | 97.5
0 | Precision % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
0 | Recall % | 94.1 | 95 | 94.6 | 94.6 | 100 | 100 | 94.1 | 94.1 | 100 | 100 | 95.5 | 97.5
Threshold Value | Classification Performance | Horse Train | Horse Test | Wolf Train | Wolf Test | Deer Train | Deer Test | Elephant Train | Elephant Test | Rhino Train | Rhino Test | Tiger Train | Tiger Test
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.26 | Precision % | 46.8 | 50 | 34.5 | 48 | 100 | 100 | 53.5 | 60 | 33 | 47 | 46.2 | 50
0.26 | Recall % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
0.19 | Precision % | 88.3 | 90 | 82.9 | 90 | 100 | 100 | 97.1 | 100 | 73.9 | 80 | 97.1 | 100
0.19 | Recall % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
0.17 | Precision % | 94.9 | 97 | 98.5 | 100 | 100 | 100 | 100 | 100 | 95.7 | 98 | 100 | 100
0.17 | Recall % | 94.1 | 95 | 100 | 100 | 95.5 | 98 | 100 | 100 | 100 | 100 | 100 | 100
0.165 | Precision % | 94.9 | 98 | 100 | 100 | 100 | 100 | 100 | 100 | 95.7 | 98 | 100 | 100
0.165 | Recall % | 94.1 | 95 | 100 | 100 | 95.5 | 97 | 100 | 100 | 100 | 100 | 95.5 | 97
0.155 | Precision % | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
0.155 | Recall % | 94.1 | 95 | 100 | 100 | 95.5 | 97 | 100 | 100 | 100 | 100 | 95.5 | 97
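For the two tables above, candidate thresholds are extracted from the matching distances observed on the training set, and each candidate is evaluated through its per-class precision and recall. The sketch below shows one way such a per-class evaluation could be computed; the data layout (parallel lists of distances and labels) is an assumption introduced for illustration, not the authors' evaluation script.

```python
def precision_recall(distances, labels, target_class, threshold):
    """Precision/recall for one class at a given decision threshold.

    `distances[i]` is the matching distance of sample i to the reference
    signature of `target_class`; `labels[i]` is its true class. A sample is
    predicted as `target_class` when its distance is <= threshold.
    """
    tp = fp = fn = 0
    for d, label in zip(distances, labels):
        predicted = d <= threshold
        actual = label == target_class
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif not predicted and actual:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```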
Sub-Task | Time (s), 64 × 64 px 8 bpp | Energy (%), 64 × 64 px 8 bpp | Time (s), 128 × 128 px 8 bpp | Energy (%), 128 × 128 px 8 bpp
---|---|---|---|---
Image decomposition using wavelet decomposition | 0.037 | 0.84 | 0.074 | 1.64
Object extraction | 0.07 | 1.825 | 0.07 | 1.825
Transform ring projection (TRP) | 0.058 | 1.33 | 0.058 | 1.33
Matching using ED | 0.0112 | 0.0256 | 0.0112 | 0.0256
Total | 0.277 | 4.02 | 0.213 | 4.86
Cluster-Forming Task | Camera Time (s) | Camera Energy (%) | Single Neighbor Node Time (s) | Single Neighbor Node Energy (%)
---|---|---|---|---
Neighborhood Energy Request | 0.000025 | 0.005 | 0 | 0.006
Neighbor Energy Responses | 0 | 0.055 | 0.000025 | 0.005
Cluster Forming and Acknowledgment | 0.00005 | 0.12 | 0.000025 | 0.011
Total | 0.015 | 0.18 | 0.00005 | 0.021
Sub-Task | Camera Time (s), 64 × 64 × 8 bpp | Camera Energy (%), 64 × 64 × 8 bpp | P1 Time (s), 64 × 64 × 8 bpp | P1 Energy (%), 64 × 64 × 8 bpp | Camera Time (s), 128 × 128 × 8 bpp | Camera Energy (%), 128 × 128 × 8 bpp | P1 Time (s), 128 × 128 × 8 bpp | P1 Energy (%), 128 × 128 × 8 bpp
---|---|---|---|---|---|---|---|---
Apply Haar wavelet transformation | 0.037 | 0.84 | - | - | 0.074 | 1.64 | - | -
Camera sends low-band coefficients to P1 | 0.000025 | 0.4 | 0 | 0.45 | 0.000025 | 0.4 | 0 | 0.45
P1 extracts and normalizes the object | - | - | 0.131 | 2.94 | - | - | 0.131 | 2.94
Total | 0.037025 | 1.24 | 0.131 | 3.39 | 0.074025 | 2.04 | 0.131 | 3.39
Sub-Task | Camera Time (s), 64 × 64 × 8 bpp | Camera Energy (%), 64 × 64 × 8 bpp | P1 Time (s), 64 × 64 × 8 bpp | P1 Energy (%), 64 × 64 × 8 bpp | Camera Time (s), 128 × 128 × 8 bpp | Camera Energy (%), 128 × 128 × 8 bpp | P1 Time (s), 128 × 128 × 8 bpp | P1 Energy (%), 128 × 128 × 8 bpp
---|---|---|---|---|---|---|---|---
Camera extracts ROI | 0.118 | 2.25 | - | - | 0.39 | 9.06 | - | -
Camera sends ROI to P1 | 0.000025 | 0.134 | 0 | 0.146 | 0.000025 | 0.453 | 0 | 0.496
P1 extracts the GFD vectors | - | - | 0.131 | 2.94 | - | - | 0.131 | 2.94
Total | 0.118025 | 2.384 | 0.131 | 3.086 | 0.390025 | 9.513 | 0.131 | 3.436
Sub-Task | P1 Time (s) | P1 Energy (%) | P2 Time (s) | P2 Energy (%)
---|---|---|---|---
P1 normalizes the object | 0.07 | 1.825 | - | -
P1 sends coefficients to P2 | 0.000025 | 0.05121024 | 0 | 0.056
P2 transforms 2D vectors using ring projection | - | - | 0.058 | 1.33
P2 matches the vectors to a reference | - | - | 0.0112 | 0.0256
Total | 0.070025 | 1.87621 | 0.0692 | 1.4116
Sub-Task | P1 Time (s) | P1 Energy (%) | P2 Time (s) | P2 Energy (%)
---|---|---|---|---
P1 sends GFD vectors to P2 | 0.000025 | 0.056 | 0 | 0.062
P2 matches the vectors to a reference | - | - | 0.21 | 4.8
Total | 0.000025 | 0.056 | 0.21 | 4.862
Related Work | Processing Model | Scheme Based On | Implementation Approach | Per-Node Energy Consumption in Comparison to the Presented Solution |
---|---|---|---|---|
[30] | Local | Local event-based detection based on centroid distance and histogram algorithms | Software | Higher energy consumption |
[31] | Local | Local event-based detection based on GFD | Software | Higher energy consumption |
[5] | Distributed | Curve fitting technique | Software | Higher energy consumption |
[19] | Local | Discrete Tchebichef transform (DTT) | Software | Higher energy consumption |
[27] | Distributed | Distributed compression | Software | Higher energy consumption |
[28] | Local | Object extraction scheme | Hardware | Lower energy but with a very high implementation cost |
[36] | Local | Quad-tree decomposition | Software | Higher energy consumption |
[25] | Distributed | Face-detection algorithm using discriminative vectors | Software | Higher energy consumption |
GFD | Distributed | Distributed event-based detection and recognition using GFD | Software | Higher energy consumption |
Presented Approach | Distributed | Distributed event-based detection and recognition using Haar wavelet | Software | -- |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).