Research article · Open access
DOI: 10.1145/3674805.3686697

M-score: An Empirically Derived Software Modularity Metric

Published: 24 October 2024

Abstract

Background: Software practitioners need reliable metrics to monitor software evolution, compare projects, and understand modularity variations. This is crucial for assessing architectural improvement or decay. Existing popular metrics offer little help, especially in systems with implicitly connected but seemingly isolated files.
Aim: Our objective is to explore why and how state-of-the-art modularity measures fail to serve as effective metrics, and to devise a new metric that more accurately captures complexity changes and is less distorted by size or by isolated files.
Methods: We analyzed metric scores for 1,220 releases across 37 projects to identify the root causes of their shortcomings. This led to the creation of M-score, a new software modularity metric that combines the strengths of existing metrics while addressing their flaws. M-score rewards small, independent modules, penalizes increased coupling, and treats isolated modules and files consistently.
Results: Our evaluation revealed that M-score outperformed other modularity metrics in terms of stability, particularly with respect to isolated files, because it captures coupling density and module independence. It also correlated well with maintenance effort, as indicated by historical maintainability measures: the higher the M-score, the more likely it is that maintenance tasks can be accomplished independently and in parallel.
Conclusions: Our research identifies the shortcomings of current metrics in accurately depicting software complexity and proposes M-score, a new metric with superior stability and better reflection of complexity and maintenance effort, making it a promising metric for software architectural assessments, comparison, and monitoring.
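The abstract's point about isolated files can be made concrete with a density-style coupling measure: whether isolated files enter the denominator changes the score, which is one way a metric can be distorted by files that have no dependencies. The sketch below is purely illustrative — it is not the M-score formula (which is defined in the paper itself), and the function name and graph representation are assumptions for the example.

```python
def coupling_density(deps, files):
    """Fraction of possible undirected file pairs that are coupled.

    deps:  iterable of (src, dst) file-level dependency edges.
    files: all files in the system, including isolated ones, so that
           an isolated file enlarges the denominator and lowers density.
    """
    n = len(set(files))
    if n < 2:
        return 0.0
    # Collapse a->b and b->a into one undirected pair; ignore self-loops.
    edges = {frozenset(e) for e in deps if e[0] != e[1]}
    return len(edges) / (n * (n - 1) / 2)

# An isolated file "c" lowers density even though it adds no edges:
coupling_density([("a", "b")], ["a", "b"])        # 1.0
coupling_density([("a", "b")], ["a", "b", "c"])   # ~0.33
```

A metric built directly on such a density would drift whenever isolated files are added or removed; the abstract claims M-score is designed to treat isolated modules and files consistently, avoiding exactly this instability.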


    Published In

    ESEM '24: Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
    October 2024
    633 pages
    ISBN:9798400710476
    DOI:10.1145/3674805
    This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. Software Evolution
    2. Software Metrics
    3. Software Modularity

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ESEM '24

    Acceptance Rates

    Overall Acceptance Rate: 130 of 594 submissions, 22%

