DOI: 10.1145/3482909.3482912
Research article

An Analysis of Automated Code Inspection Tools for PHP Available on Github Marketplace

Published: 12 October 2021

Abstract

Code inspection is a validation process widely used to improve software quality. To streamline this process and reduce the possibility of human error, thereby improving the reliability of inspection results, specialized automated code inspection tools can be used. This article therefore analyzes code inspection tools for the PHP programming language that are freely available on the GitHub Marketplace. To this end, the GLPI system was chosen as the inspection target, and four code inspection tools were selected out of the twenty-eight available. The selection criteria were defined to match the profile of the system under inspection and to exclude tools with limitations on their inspection output. To classify the results, the Common Weakness Enumeration (CWE) was used: a list of software and hardware weaknesses maintained with contributions from numerous renowned companies, such as Microsoft, Apple, and IBM. The inspection uncovered more than ten thousand flaws, spread across thirty-four different CWEs; from these, we analyzed the individual feedback of each tool, as each had unique advantages and disadvantages.
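The classification step described in the abstract — mapping each tool's findings onto CWE entries and tallying them — can be sketched as follows. The tool names and findings below are invented for illustration; they are not results from the study, and real inspection tools emit reports in tool-specific formats that would first need to be parsed into a comparable shape.

```python
from collections import Counter

# Hypothetical findings as (tool, CWE identifier) pairs, e.g. as parsed
# from each tool's inspection report. These values are illustrative only.
findings = [
    ("tool_a", "CWE-79"),
    ("tool_a", "CWE-89"),
    ("tool_b", "CWE-79"),
    ("tool_b", "CWE-476"),
]

# How many findings fall under each CWE, aggregated across all tools.
by_cwe = Counter(cwe for _, cwe in findings)

# How many findings each individual tool reported.
by_tool = Counter(tool for tool, _ in findings)

# CWEs ordered from most to least frequent, and the per-tool totals.
print(by_cwe.most_common())
print(dict(by_tool))
```

Aggregating onto a shared taxonomy like the CWE is what makes findings from heterogeneous tools comparable at all, since each tool otherwise uses its own rule names and severity scales.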


Cited By

  • (2025) On the suitability of hugging face hub for empirical studies. Empirical Software Engineering 30, 2. DOI: 10.1007/s10664-024-10608-8. Online publication date: 18-Jan-2025.


Published In

cover image ACM Other conferences
SAST '21: Proceedings of the 6th Brazilian Symposium on Systematic and Automated Software Testing
September 2021
63 pages
ISBN:9781450385039
DOI:10.1145/3482909

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Automated code review
  2. Code review
  3. common weakness enumeration

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SAST'21

Acceptance Rates

Overall Acceptance Rate 45 of 92 submissions, 49%


