DOI: 10.1145/3021460.3021491
ISEC Conference Proceedings · Extended abstract

Software Quality Predictive Modeling: An Effective Assessment of Experimental Data

Published: 05 February 2017

Abstract

A major problem faced by software project managers is developing good-quality software products within tight schedule and budget constraints [1]. Predictive modeling, in the context of software engineering, relates to the construction of models for estimating software quality attributes such as defect-proneness, maintainability and effort, among others. For developing such models, software metrics act as predictor variables, as they capture various design characteristics of software such as coupling, cohesion, inheritance and polymorphism. A number of statistical and machine learning techniques are available for developing predictive models.
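As a toy illustration (not from the paper), the idea of using software metrics as predictor variables for defect-proneness can be sketched in plain Python. The metric values and labels below are hypothetical, and the nearest-centroid classifier is a deliberately minimal stand-in for the statistical and machine learning techniques the tutorial covers:

```python
# Toy sketch: object-oriented metrics (CBO, LCOM, WMC) as predictor
# variables for a defect-proneness model. All data are hypothetical.
import math

# Each row: ((coupling CBO, lack-of-cohesion LCOM, complexity WMC), label)
# label 1 = defect-prone class, label 0 = clean class
train = [
    ((2, 3, 5), 0), ((3, 4, 6), 0), ((1, 2, 4), 0),
    ((9, 12, 15), 1), ((8, 11, 14), 1), ((10, 13, 16), 1),
]

def centroid(rows):
    """Mean metric vector of a set of classes."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(3))

defective = centroid([x for x, y in train if y == 1])
clean = centroid([x for x, y in train if y == 0])

def predict(x):
    """Nearest-centroid classifier: assign the label of the closer centroid."""
    return 1 if math.dist(x, defective) < math.dist(x, clean) else 0

print(predict((9, 10, 14)))  # near the defect-prone centroid -> 1
print(predict((2, 2, 5)))    # near the clean centroid -> 0
```

In practice a real study would use a mined metrics data set and a proper learner, but the shape of the problem — metric vectors in, a quality attribute out — is the same.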
However, empirical studies that develop successful predictive models cannot be conducted effectively unless a proper research methodology and its steps are followed. This work introduces a stepwise procedure for the efficient application of various techniques to predictive modeling. A number of research issues that are important to address while conducting empirical studies, such as data collection, the validation method, the use of statistical tests and the choice of an effective performance evaluator, are also discussed with the help of an example.
The tutorial presents an overview of the research process and methodology followed in empirical research [2]. All steps needed to perform an effective empirical study are described. The tutorial demonstrates the research methodology with the help of an example based on a defect prediction data set.
In this work we focus on various research issues that are stated below:
RQ1: Which repositories are available for extracting software engineering data?
RQ2: What type of data pre-processing and feature selection techniques should be used before developing predictive models?
RQ3: Which possible tools are freely available for mining and analysis of data for developing software quality predictive models?
RQ4: Which techniques are available for developing software quality predictive models?
RQ5: Which metrics should be used to evaluate the performance of software quality predictive models?
RQ6: Which statistical tests can be effectively used for hypothesis testing using search-based techniques?
RQ7: How can we effectively use search-based techniques for predictive modeling?
RQ8: What are possible fitness functions while using search-based techniques for predictive modeling?
RQ9: How would researchers account for the stochastic nature of search-based techniques?
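Touching on RQ7–RQ9: search-based techniques are stochastic, so a common way to account for this is to repeat each search over several independent random seeds and report a robust summary such as the median fitness. The fitness function and random search below are hypothetical stand-ins, not any specific technique from the tutorial:

```python
# Sketch: accounting for the stochastic nature of search-based techniques
# by repeating the search over distinct seeds. Fitness is a hypothetical
# stand-in (maximised at w = 0.7).
import random
import statistics

def fitness(w):
    """Hypothetical fitness of a candidate model parameter w."""
    return -(w - 0.7) ** 2

def random_search(seed, iters=200):
    """One independent stochastic search run with its own RNG."""
    rng = random.Random(seed)
    best_f = float("-inf")
    for _ in range(iters):
        best_f = max(best_f, fitness(rng.random()))
    return best_f

# Independent runs over distinct seeds, summarised by the median
results = [random_search(seed) for seed in range(10)]
print(f"median fitness over 10 seeds: {statistics.median(results):.4f}")
```

Pairing such multi-run results with a non-parametric statistical test (RQ6) is the usual way to compare two stochastic techniques fairly.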
The reasons for the relevance of this study are manifold. Empirical validation of OO metrics is a critical research area today, with a large number of academicians and research practitioners working to predict software quality attributes in the early phases of software development. Thus, we explore the various steps involved in the development of an effective software quality predictive model using a modeling technique with an example data set. Performing successful empirical studies in software engineering is important for the following reasons:
• To identify defective classes at the initial phases of software development so that more resources can be allocated to these classes to remove errors.
• To analyze the metrics that are important for predicting software quality attributes and to use them as quality benchmarks, so that the software process can be standardized to deliver effective products.
• To efficiently plan testing, walkthroughs, reviews and inspection activities so that limited resources can be properly planned to provide good quality software.
• To use and adapt different techniques (statistical, machine learning & search-based) in predicting software quality attributes.
• To analyze existing trends for software quality predictive modeling and suggest future directions for researchers.
• To document the research methodology so that effective replicated studies can be performed with ease.

References

[1]
Malhotra, R. 2015. A systematic review of machine learning techniques for software fault prediction. Appl. Soft Comput. 27 (February 2015), 504–518. DOI: https://doi.org/10.1016/j.asoc.2014.11.023
[2]
Malhotra, R. 2015. Empirical Research in Software Engineering: Concepts, Analysis and Applications. CRC Press, UK.

Cited By

• (2022) An analysis of learners' programming skills through data mining. Education and Information Technologies 27(8), 11615–11633. DOI: 10.1007/s10639-022-11079-4. Online publication date: 18 May 2022.
• (2020) Toward Understanding Students' Learning Performance in an Object-Oriented Programming Course: The Perspective of Program Quality. IEEE Access 8, 37505–37517. DOI: 10.1109/ACCESS.2020.2973470. Online publication date: 2020.

Published In

ISEC '17: Proceedings of the 10th Innovations in Software Engineering Conference
February 2017, 235 pages
ISBN: 9781450348560
DOI: 10.1145/3021460
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

In-Cooperation: iSOFT

Publisher: Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. Empirical Validation
      2. Object-oriented metrics
      3. Search-based techniques
      4. Software quality predictive modeling

      Qualifiers

      • Extended-abstract
      • Research
      • Refereed limited


Acceptance Rates

ISEC '17 paper acceptance rate: 25 of 81 submissions (31%); overall acceptance rate: 76 of 315 submissions (24%).

