Abstract
When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts of many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of the information provided. This study is a formal experiment investigating whether structured abstracts are more complete and easier to understand than non-structured abstracts for papers that describe software engineering experiments. We constructed structured versions of the abstracts for a random selection of 25 papers describing software engineering experiments. Each of the 64 participants was presented with two abstracts, one in its original unstructured form and one in a structured form, and was asked to assess each for clarity (measured on a scale of 1 to 10) and completeness (measured with an 18-item questionnaire). Based on a regression analysis that adjusted for participant, abstract, type of abstract seen first, knowledge of structured abstracts, software engineering role, and preference for conventional or structured abstracts, the use of structured abstracts increased the completeness score by 6.65 (SE 0.37, p < 0.001) and the clarity score by 2.98 (SE 0.23, p < 0.001). Fifty-seven participants reported their preferences: 40 (70%) preferred structured abstracts, 13 (23%) had no preference, and four (7%) preferred conventional abstracts. Many conventional software engineering abstracts omit important information. Our study is consistent with studies from other disciplines and confirms that structured abstracts can improve both information content and readability. Although care must be taken to develop appropriate structures for different types of article, we recommend that software engineering journals and conferences adopt structured abstracts.
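The abstract describes a regression that adjusts for participant, abstract, presentation order, prior knowledge, role, and preference. As an illustration only, the sketch below shows how such a model could be fitted in Python with statsmodels; it is not the authors' analysis script, and the data file and all column names are our own assumptions.

```python
# A minimal sketch of the kind of adjusted regression described above.
# 'judgements.csv' and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per judgement: a participant scoring one abstract in one form.
df = pd.read_csv("judgements.csv")

# Completeness modelled with categorical adjustments for participant, abstract,
# form seen first, prior knowledge of structured abstracts, role and preference.
model = smf.ols(
    "completeness ~ C(structured) + C(participant) + C(abstract)"
    " + C(seen_first) + C(knowledge) + C(role) + C(preference)",
    data=df,
).fit()

# The coefficient on C(structured) estimates the effect of structuring.
print(model.summary())
```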
Notes
The protocol was reviewed by the team as it was developed, and was then subjected to a more detailed review at a meeting with James Hartley present as expert advisor. We also took advice from the information services (libraries) at Keele and Durham to the effect that using and restructuring the abstracts in this manner did not infringe copyright in any way.
Acknowledgements
This work was supported by an award from the U.K.’s Engineering and Physical Sciences Research Council (EPSRC). The authors would also like to thank Dag Sjøberg for providing a random selection of papers to use in this study; all those who helped by participating in the study as ‘judges’; John Bailey who organised the data collection; and Professor Jim Hartley of Keele University for his advice and guidance.
Appendices
Appendix 1 Procedures for Rewriting of Abstracts into Structured Form
The process to be followed was made as prescriptive as possible, so that all of the editors followed the same procedures. A full description is provided in the study protocol; here we give the basic outline and describe some of the conventions employed.
A.1 The Rewriting Process
The process was organised as the following sequence of steps (these have been left in ‘directive’ form).
1. First complete a paper copy of the evaluation form. Then rewrite the material from the existing abstract into a structured form as completely as possible. Keep a copy of this initial rewrite for later use in counting words under different headings. Each heading should begin on a new line, but please do not use white space between headings; the abstract should be a continuous sequence of text.

2. Where the entries for headings are incomplete, seek additional material from the paper. If you do so, please keep a note of:

(a) where in the paper you found the necessary information;

(b) what information was still missing at the end of this process.

Rewrite the abstract using the additional material. This version should then be checked by the designated team member, and any suggested changes should be agreed, edited and recorded.

3. If the original authors respond with suggested changes, the abstract may need to be further revised. Note that only material available in the original paper should be included, and a record should be kept of what is done about each of the suggested changes.
As general guidelines, editors were asked to limit each heading to no more than two sentences (with the possible exception of the Results heading), to try to keep to an overall limit of 300 words, and to reuse the original wording wherever possible.
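As an illustration only (not part of the study protocol), a simple conformance check for these guidelines might look like the following sketch; the function and the example headings are our own.

```python
# A hedged sketch of checking the editors' guidelines: at most two sentences
# per heading (Results excepted) and roughly 300 words overall.
import re

def check_limits(sections: dict[str, str], word_limit: int = 300) -> list[str]:
    """Return warnings for headings or abstracts exceeding the guidelines."""
    warnings = []
    total_words = 0
    for heading, text in sections.items():
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        total_words += len(text.split())
        if heading != "Results" and len(sentences) > 2:
            warnings.append(f"{heading}: {len(sentences)} sentences (limit 2)")
    if total_words > word_limit:
        warnings.append(f"abstract has {total_words} words (limit {word_limit})")
    return warnings

# Example: a Background section with three sentences triggers a warning.
print(check_limits({"Background": "One. Two. Three.", "Results": "R1. R2. R3."}))
```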
A.2 Organisation of Material
With multiple editors involved we needed to use a common file naming convention for all of the files involved, as well as a common means of documenting the changes we made. The basic structure that we employed for this was:
<first-author>-<year><year-no>-<index>
So, an example filename might be:
kitchenham-2002c-0.doc
Subsequent versions of the file then replaced the index value of ‘0’ with the following values:
- 1: to designate the first structured version produced in Step 1. Any sentences omitted were stored in a file with '1x' as the index value.
- 2a: for the edited abstract, using material from the paper if necessary.
- 2b: for the version agreed after internal review.
- 3: as revised after feedback from the original authors.
We also kept a note of where in the paper any material was extracted to augment the information in the existing abstract.
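To make the convention concrete, the following sketch generates the version file names described above; apart from the 'kitchenham-2002c-0.doc' example, the values shown are hypothetical.

```python
# A minimal sketch of the file naming convention for abstract versions.
def abstract_filename(first_author: str, year: int, year_no: str, index: str) -> str:
    """Build a name such as 'kitchenham-2002c-0.doc'."""
    return f"{first_author}-{year}{year_no}-{index}.doc"

# Indices follow the rewriting steps: 0 original, 1 first structured version
# (1x for omitted sentences), 2a with material from the paper, 2b after
# internal review, 3 after author feedback.
for index in ["0", "1", "1x", "2a", "2b", "3"]:
    print(abstract_filename("kitchenham", 2002, "c", index))
```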
Appendix 2 Completeness Questions used in Evaluating Abstracts
The questions used to judge each abstract are listed below; a sketch of one way the answers could be turned into a completeness score follows the list.
1. Is the rationale for the study reported?
2. Is the aim/purpose of the study reported?
3. Is a hypothesis (or hypotheses) provided?
4. Is there any indication of where this study took place? (E.g. in industry or academia, what the application domain was, etc.)
5. Is the number of participants reported?
6. Are the types of participants (e.g. students) reported?
7. Is any information about the experience of the participants reported?
8. Is the skill level of the participants described?
9. Is there any description of how the study was performed?
10. Does it report how the participants were allocated to different tasks or conditions?
11. Is the way that the data was collected reported?
12. Is there any description of the form of analysis performed?
13. Are the main results summarised in the abstract?
14. Are actual numbers from the results presented in the abstract?
15. Is any statistical information provided about the results?
16. Are any conclusions drawn?
17. Are any limitations of the study identified?
18. Is there any discussion of required future research?
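As a sketch only, the checklist could be scored by counting 'yes' answers, as below; the item keys are our own shorthand, and the paper's actual scoring rule may have differed (for example, by allowing partial credit).

```python
# A hedged sketch: completeness as the number of 'yes' answers over the
# 18 checklist items. Item keys are shorthand for the questions above.
ITEMS = [
    "rationale", "aim", "hypothesis", "setting", "n_participants",
    "participant_types", "experience", "skill_level", "procedure",
    "allocation", "data_collection", "analysis", "main_results",
    "actual_numbers", "statistics", "conclusions", "limitations",
    "future_research",
]

def completeness_score(answers: dict[str, bool]) -> int:
    """Sum of 'yes' answers across the 18 items (range 0-18)."""
    return sum(1 for item in ITEMS if answers.get(item, False))

# Example: an abstract judged to report only aim, procedure and conclusions.
print(completeness_score({"aim": True, "procedure": True, "conclusions": True}))  # 3
```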
Appendix 3 Demographic and Qualitative Data Questions
The following questions provided the ‘third page’ seen by participants after they had completed their judging of the two abstracts; a sketch of one way the responses could be recorded follows the list.
1. Did you have any knowledge about structured abstracts before taking part in this study? (yes/no)

If your answer was ‘yes’, then please indicate the nature of your knowledge:

(a) Heard about them, but not seen them before: (yes/no)

(b) Read papers about their use: (yes/no)

(c) Read papers with structured abstracts: (yes/no)

(d) Created structured abstracts for your own papers: (yes/no)

2. Please report up to three things that you like about structured abstracts (if there is nothing that you like, please leave blank).

3. Please report up to three things that you dislike about structured abstracts (if there is nothing that you dislike, please leave blank).

4. Overall, do you prefer structured or conventional abstracts?

(a) Prefer structured abstracts

(b) Prefer conventional abstracts

(c) No preference

5. Please indicate which description fits you best:

(a) Full-time researcher

(b) Practitioner

(c) Post-graduate Research Student

(d) Post-graduate Student

(e) Undergraduate

(f) Other (please specify)

6. Please indicate years of experience of software engineering research or practice.

7. Any other comments?
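For illustration only, a response to this page could be recorded with a structure like the following; all field names are our own, not taken from the study materials.

```python
# A hedged sketch of a record for the demographic and preference data.
from dataclasses import dataclass, field

@dataclass
class ParticipantResponse:
    knew_structured_abstracts: bool
    knowledge_sources: list[str] = field(default_factory=list)  # items (a)-(d) of question 1
    likes: list[str] = field(default_factory=list)              # up to three items
    dislikes: list[str] = field(default_factory=list)           # up to three items
    preference: str = "no preference"  # "structured" | "conventional" | "no preference"
    role: str = "other"                # options (a)-(f) of question 5
    years_experience: int = 0
    comments: str = ""
```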