Toward a web-based multi-criteria decision support system for the layered evaluation of interactive adaptive systems

  • Long Paper
  • Universal Access in the Information Society

Abstract

The choice of suitable evaluation methods for the layered evaluation of Interactive Adaptive Systems (IAS) requires the consideration of several factors, which leads to a multi-criteria decision analysis problem. This paper proposes a first step toward a Web-based Multi-Criteria Decision Support System (MCDSS). Our proposal is based on a recent multi-criteria decision method, ELECTRE TRI-B-H (ELECTRE: Elimination and Choice Translating Reality), to guide the layered evaluation of IAS. The goal is to support the choice of appropriate evaluation methods for individual layers, taking into account the constraints of both the layered evaluation as a whole and the individual layers. The appropriateness of each evaluation method is analyzed for the layered evaluation in general and for each layer in particular. The ELECTRE TRI-B-H method makes it possible to decompose a decision problem into intermediate sub-problems through a hierarchy model and to sort the alternative evaluation methods at the different levels of this hierarchy. A case study of an adaptive hypermedia system is presented, in which the ELECTRE TRI-B-H method is applied to select the most suitable evaluation methods for each of its layers while taking the layered evaluation context into account. The promising results show the feasibility of the proposed approach and open various research perspectives.


Notes

  1. The decision-maker is the actor who is being aided.

  2. It is also called the ELECTRE TRI method.

  3. In the contextual design method, users are observed in their workplace in order to discover how they go about their work and in what environment.

  4. The Wizard-of-Oz method can be applied in association with the usability test method in the design phase. In this method, a designer plays the role of the system [21].

  5. Ten heuristics are proposed in [37], namely: (1) visibility of system status, (2) match between system and the real world, (3) user control, (4) consistency and standards, (5) error prevention, (6) recognition rather than recall, (7) flexibility and efficiency of use, (8) aesthetic and minimalist design, (9) help users recognize, diagnose and recover from errors, and (10) help and documentation.

  6. Also known as usability testing or user test.

  7. Adhesion is the acronym for "Aide à la Décision Humaine pour l'Evaluation des Systèmes Interactifs et leur validation" (translated as: Human decision-making aid for the evaluation of interactive systems and their validation).

  8. In the MCDA field, very few hierarchical methods have been proposed to deal with sorting problems using a hierarchy model of criteria, such as the Multi-criteria Spider-gram Cumulative Surface Area (MCSA score) [40]. However, most of these hierarchical MCDA methods consider the criteria at a single level despite their natural organization. ELECTRE TRI-B-H is an effective MCDA method for sorting alternatives following a hierarchical structure of criteria.

  9. Four types of decision problems have been introduced in [47], namely (1) choice (P.α), where the problem is to identify the most desirable solution(s); (2) ranking (P.γ), where the objective is to rank solutions from best to worst; (3) sorting (P.β), where the aim is to assign each alternative to one of a pre-defined set of classes; and (4) description (P.δ), where the aim is to describe solutions in terms of their main distinguishing characteristics. In this research paper, we focus on the sorting problem in the field of IAS evaluation: the goal is to assign each alternative evaluation method to a category representing its appropriateness level for the layered evaluation context in general and the individual layers in particular (an illustrative sketch of such a sorting rule is given after these notes).

  10. The category improvement value is a measure of performance used to compare the profile limits and the alternatives once they have been assigned to ordered categories.

  11. The focus group method can be applied in the specification and design phases. When the focus group is applied in the specification phase, the participants are told what types of input the target layer would have, and they are asked to discuss how the layer should produce its output. In the design phase, participants are shown the layer's input and output data. In this way, different evaluations are possible for the focus group on the "specification phase," "design phase," and "layer's output data" criteria. For this reason, we propose to substitute this method with two versions: focus group specification and focus group design. Focus group specification has a performance of "produced" for the "layer's output data" criterion and a performance of "applicable" for the "specification phase" criterion. Focus group design has a performance of "given" for the "layer's output data" criterion and a performance of "applicable" for the "design phase" criterion.

  12. The scale can be nominal, ordinal, interval, or ratio according to Stevens' typology of data [50].

  13. We recall that these layers refer to the ones identified by Paramythis et al. [43] (ID layer: Interpretation of the collected Data, MW: Modeling the current state of the “World,” DA: Deciding upon Adaptation, AA: Applying Adaptation). More details can be found in Sect. 2.1.

  14. The main goal of AHP is to rank a finite number of alternatives in terms of a finite set of criteria. The interested reader can find more details about this MCDA method in [48] (its core weighting step is sketched after these notes).

  15. It is essential to note that the Knowledge-Based Management Subsystem (KBMS) is an optional component. It is not developed in this research and is left as a research perspective.

  16. Two variants of using the proposed web-based MCDSS are possible. The first is to use it without registration; in this case, the preliminary results are not stored. This variant is preferable for small evaluation projects with a short list of alternatives (evaluation methods and evaluation sub-attributes) and decision criteria. The second variant consists of creating a DM account, which allows the preliminary results to be stored (see the last sketch after these notes).

References

  1. Almeida-Dias, J.: Multiple criteria decision aiding for sorting problems: concepts, methodologies, and applications. PhD dissertation, Universidade Técnica de Lisboa and Université Paris-Dauphine, Lisbon and Paris (2011)

  2. Antona, M., Ntoa, S., Adami, I., Stephanidis, C.: User Requirements Elicitation for Universal Access. In: Stephanidis, C. (ed.) The Universal Access Handbook, pp. 1–14. CRC Press (2009)


  3. Brusilovsky, P., Karagiannidis, C., Sampson, D.: Layered evaluation of adaptive learning systems. Int. J. Contin. Eng. Edu. Lifelong Learn., Special Issue on Hybrid recommender 14(4–5), 402–421 (2004)


  4. Brusilovsky, P., Karagiannidis, C., Sampson, D.: The benefits of layered evaluation of adaptive applications and services. In: 1st Workshop on Empirical Evaluation of Adaptive Systems at UM2001, pp. 1–8. Sonthofen, Germany (2001)


  5. Carmagnola, F., Cena, F., Console, L., Cortassa, O., Gena, C., Goy, A., Torre, I., Toso, A., Vernero, F.: Tag based user modeling for social multi-device adaptive guides. User Model. User-Adap. Inter. 18(5), 497–538 (2008)


  6. Del Vasto-Terrientes, L., Valls, A., Zielniewicz, P., Borràs, J.: A hierarchical multi-criteria sorting approach for recommender systems. J. Intell. Inf. Syst. 46(2), 313–346 (2015)


  7. Del Vasto-Terrientes, L.: Hierarchical outranking methods for multi-criteria decision aiding. PhD dissertation, Universitat Rovira i Virgili, Department of Computer Science and Mathematics. (2015b)

  8. Dhouib, A., Trabelsi, A., Kolski, C., Neji, M.: An approach for the selection of evaluation methods for interactive adaptive systems using analytic hierarchy process. In: IEEE Tenth International Conference on Research Challenges in Information Science, Grenoble, France, pp. 1–10 (2016a)

  9. Dhouib, A., Trabelsi, A., Kolski, C., Neji, M.: A classification and comparison of usability evaluation methods for interactive adaptive systems. In: 9th International Conference on Human System Interactions, Portsmouth, UK, pp. 246–251 (2016b)

  10. Dhouib, A., Trabelsi, A., Kolski, C., Neji, M.: Towards the layered evaluation of interactive adaptive systems using ELECTRE TRI method. In: Proceedings of the 12th International Conference on Software Technologies (ICSOFT 2017), pp. 163–170. Scitepress, Madrid (2017a)

  11. Dhouib, A., Trabelsi, A., Kolski, C., Neji, M.: EvalCHOICE: a decision support approach for the usability evaluation of interactive adaptive systems. In: Zanni-Merk, C., Frydman, C., Toro, C., Hicks, Y., Howlett, R.J., Jain, L.C. (eds.) Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 21st International Conference, KES-2017, Marseille, France, pp. 864–873. Elsevier Procedia Computer Science (2017b)

  12. Dhouib, A., Trabelsi, A., Kolski, C., Neji, M.: Prioritizing the Usability Criteria of Adaptive User Interfaces of Information Systems based on ISO/IEC 25040 Standard. Ingénierie des Systèmes d’Information 22(4), 107–128 (2017)


  13. Dhouib, A., Assila, A., Trabelsi, A., Kolski, C., Neji, M.: Factors affecting the choice of usability evaluation methods for interactive adaptive systems. In: 7th International Working Conference on Human-Centered Software Engineering (HCSE 2018), Nice, France, pp. 270–282 (2018)

  14. Dumas, J.S., Redish, J.C.: A Practical Guide To Usability Testing. Ablex Publishing Corp, Norwood, N.J (1999)


  15. Figueira, J.R., Greco, S., Ehrgott, M.: Multiple criteria decision analysis: State of the art surveys. Springer, Boston (2005)


  16. Fernandez, E., Navarro, J., Duarte, A., Core, I.G.: A decision support system for regional competitiveness analysis based on multi-criteria sorting. Decis. Support Syst. 54(3), 1417–1426 (2013)


  17. Ferre, X., Bevan, N.: Usability Planner: a tool to support the process of selecting usability methods. In: Human-Computer Interaction – INTERACT 2011, pp. 652–655. Springer (2011)

  18. Gena, C.: Methods and techniques for the evaluation of user-adaptive systems. Knowl Eng Rev 20, 1–37 (2005)


  19. Gena, C., Weibelzahl, S.: Usability engineering for the adaptive web. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) The Adaptive Web: Methods and Strategies of Web Personalization, pp. 720–762. Springer, Berlin (2007)


  20. Greenbaum, T.L.: The Handbook of Focus Group Research, 2nd edn. Lexington Books, New York, NY (1998)


  21. Gould, J., Conti, J., Hovanyecz, T.: Composing letters with a simulated listening typewriter. In: 1st ACM Conference on Human Factors in Computer Systems (CHI), pp. 367–370. ACM, Gaithersburg, MD (1982)


  22. Höök, K.: Steps to take before intelligent user interfaces become real. Interact. Comput. 12(4), 409–426 (2000)


  23. ISO/TR 16982: Usability methods supporting human-centred design (2002)

  24. Jameson, A.: Adaptive interfaces and agents. In: Jacko, J.A., Sears, A. (eds.) The human-computer interaction handbook: Fundamentals, evolving technologies and emerging applications, pp. 305–330. Erlbaum, Mahwah, NJ (2003)


  25. Jameson, A.: User modeling meets usability goals. In: 10th International Conference on User Modeling Edinburgh UK LNAI, pp. 1–3. Springer, Berlin (2005)


  26. Jannach, D., Ludewig, M., Lerche, L.: Session-based item recommendation in e-commerce: on short-term intents, reminders, trends and discounts. User Model. User-Adap. Inter. 27(3–5), 351–392 (2017)


  27. Karagiannidis, C., Sampson, D.: Layered evaluation of adaptive applications and services. In: 1st International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Trento, Italy. LNCS, vol. 1892, pp. 343–346. Springer, Berlin. (2000)

  28. Kolski, C.: Human-computer interactions in transport. ISTE Ltd and John Wiley & Sons Inc., Hoboken (2011)


  29. Keates, S.: When universal access is not quite universal enough: case studies and lessons to be learned. Univ. Access Inf. Soc. 19, 133–144 (2020)


  30. Magoulas, G.D., Chen, S.Y., Papanikolaou, K.A.: Integrating layered and heuristic evaluation for adaptive learning environments. In: 2nd Workshop on Empirical Evaluation of Adaptive Systems at UM2003, pp. 5–14. Johnstown, PA (2003)


  31. Manouselis, N., Karagiannidis, C., Sampson, D.G.: Layered evaluation for data discovery and recommendation systems: an initial set of principles. IEEE 14th International Conference on Advanced Learning Technologies, pp. 518–519. (2014)

  32. Mahatody, T., Sagar, M., Kolski, C.: State of the art on the cognitive walkthrough method, its variants and evolutions. Int. J. Human-Comput. Interaction 26(8), 741–785 (2010)


  33. Masthoff, J.: The user as wizard: a method for early involvement in the design and evaluation of adaptive systems. In: 5thWorkshop on User-Centred Design and Evaluation of Adaptive Systems at AH’06, pp. 460–469. Dublin, Ireland (2006)


  34. Mulwa, C., Wade, V.: A web-based evaluation framework for supporting novice and expert evaluators of adaptive e-learning systems. In: International Conference on E-Technologies and Business on the Web (EBW2013), Thailand, pp. 62–67 (2013)

  35. Miñón, R., Paternò, F., Arrue, M., et al.: Integrating adaptation rules for people with special needs in model-based UI development process. Univ. Access Inf. Soc. 15, 153–168 (2016)


  36. Marucci, L., Paterno, F.: Design and evaluation of an adaptive virtual guide for Web applications. Univ. Access Inf. Soc. 1, 163–176 (2002)


  37. Nielsen, J.: Heuristic evaluation. In: Nielsen, J., Mack, R.L. (eds.) Usability Inspection Methods, pp. 25–64. John Wiley & Sons, New York (1994)


  38. Ngo The, A., Mousseau, V.: Using assignment examples to infer category limits for the ELECTRE TRI method. Journal of Multi-Criteria Decision Analysis 11(1), 29–43 (2002)


  39. Nendjo Ella, A., Kolski, C., Wawak, F., Jacques, C., Yim, P.: An approach of computer-aided choice of UI evaluation criteria and methods. In: Vanderdonckt, J., Puerta, A. (eds.) Computer-Aided Design of User Interfaces II, pp. 319–328. Kluwer Academic Publishers, Dordrecht (1999)

  40. Nzila, C., Dewulf, J., Spanjers, H., Tuigong, D., Kiriamiti, H., Van Langenhove, H.: Multi criteria sustainability assessment of biogas production in Kenya. Appl. Energy 93, 496–506 (2012)


  41. O’Malley, C.E., Draper, S.W., Riley, M.S.: Constructive interaction: A method for studying Human–Computer-human interaction. In: 1st International Conference on Human-Computer Interaction, pp. 269–274. Honolulu, HI (1984)


  42. Paramythis, A., Totter, A., Stephanidis, C.: A modular approach to the evaluation of adaptive user interfaces. In: 1stWorkshop on Empirical Evaluation of Adaptive Systems at UM2001, pp. 9–24. Sonthofen, Germany (2001)


  43. Paramythis, A., Weibelzahl, S., Masthoff, J.: Layered evaluation of interactive adaptive systems: framework and formative methods. User Model. User-Adap. Inter. 20, 383–453 (2010)


  44. Philipp, P., Eck, R., Peinsipp-Byma, E.: Decision support in context of a standard-based usability evaluation. In: Annual Global Online Conference on Information and Computer Technology, pp. 36–40 (2015)

  45. Primož, K., Matjaž, D., Danijela, M.: Adequateness of usability evaluation methods regarding adaptivity. In: Simpozijum o računarskim naukama i informacionim tehnologijama (Symposium on Computer Science and Information Technologies), Kopaonik, Serbia (2007)


  46. Quintal, C., Macías, A.J.: Measuring and improving the quality of development processes based on usability and accessibility. Univ. Access Inf. Soc. 20(2), 203–221 (2021)


  47. Roy, B.: Multicriteria Methodology for Decision Aiding. Kluwer, Dordrecht (1996)


  48. Saaty, T.L.: Decision making with the analytic hierarchy process. Int. J. Serv. Sci. 1, 83 (2008). https://doi.org/10.1504/IJSSCI.2008.017590


  49. Staikopoulos, A., Conlan, O.: Towards comparative evaluations of user-adaptive software systems. In: UMAP 2016 Extended Proceedings, vol. 1618 (2016)

  50. Stevens, S.S.: On the theory of scales of measurement. Science 103, 677–680 (1946)


  51. Stephanidis, C.: The Universal Access Handbook. CRC Press (2017)


  52. Taktak, H., Moussa, F.: A service-oriented application creation process in ubiquitous environments: Travel assistant mobile application. Int. J. Pervasive Comput. Commun. 13(3), 300–330 (2017)


  53. Takougang, M., Aimé, S., Pirlo, M., Yonkeu, S., Some, B.: Assessing the Response to Land Degradation Risk: The Case of the Loulouka Catchment Basin in Burkina Faso. In: Evaluation and Decision Models with Multiple Criteria: Case Studies, pp. 341–399. Springer-Verlag, Berlin Heidelberg (2015)


  54. Weibelzahl, S., Lauer, C.U.: Framework for the evaluation of adaptive CBR-systems. In: Vollrath, I., et al. (eds.) Experience Management as Reuse of Knowledge: Proceedings of the 9th German Workshop on Case Based Reasoning, pp. 254–263. Baden-Baden, Germany (2001)

  55. Weibelzahl, S.: Evaluation of adaptive systems. In: 8th International Conference on User Modeling, pp. 292–294. LNCS, Springer, Berlin (2001)


  56. Weibelzahl, S.: Problems and pitfalls in evaluating adaptive systems. In: 4th Workshop on the Evaluation of Adaptive Systems at UM’05, pp. 57–66. Edinburgh, UK (2005)


  57. Wharton, C., Rieman, J., Lewis, C., Polson, P.: The cognitive walkthrough method: A practitioner’s guide. In: Nielsen, J., Mack, R.L. (eds.) Usability Inspection Methods, pp. 105–141. John Wiley & Sons, New York (1994)


  58. Yigitbas, E., Jovanovikj, I., Josifovska, K., Sauer, S., Engels, G.: On-the-fly usability evaluation of mobile adaptive UIs through instant user feedback. In: IFIP Conference on Human-Computer Interaction, pp. 563–567 (2019)


Acknowledgements

We would like to thank the anonymous reviewers for their careful reading of our manuscript and their insightful comments and suggestions.

Author information

Corresponding author

Correspondence to Amira Dhouib.

Ethics declarations

Conflicts of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Ethical approval

Ethics approval was not required for this paper. The case study has been simulated.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

The defined rules for the intermediate criteria related to the layer criteria (layer 2, layer 3, layer 4, and layer 5)

For layer 2, two rules are defined, based on the evaluation settings of layer 2: produced input data, given output data, medium temporal resources, and high budget.

  • Rule 2.1 If (evaluation-attribute-L2 = moderately covered L2 or highly covered L2) and (layer 2's input data = produced) and (layer 2's output data = given) and (L2 temporal resources = medium or low), then Layer 2 = L2 suitable evaluation methods

  • Rule 2.2 If (evaluation-attribute-L2 = moderately covered L2 or not covered L2) and ((layer 2's input data = given) or (layer 2's output data = produced) or (L2 temporal resources = high)), then Layer 2 = L2 unsuitable evaluation methods

For layer 3, two rules are defined, based on the evaluation settings of layer 3: produced input data, given output data, medium temporal resources, and high budget.

  • Rule 3.1 If (evaluation-attribute-L3 = moderately covered L3 or highly covered L3) and (layer 3's input data = produced) and (layer 3's output data = given) and (L3 temporal resources = medium or low), then Layer 3 = L3 suitable evaluation methods

  • Rule 3.2 If (evaluation-attribute-L3 = moderately covered L3 or not covered L3) and ((layer 3's input data = given) or (layer 3's output data = produced) or (L3 temporal resources = high)), then Layer 3 = L3 unsuitable evaluation methods

For layer 4, two rules are defined, based on the evaluation settings of layer 4: given input data, given output data, medium temporal resources, and high budget.

  • Rule 4.1 If (evaluation-attribute-L4 = moderately covered L4 or highly covered L4) and (layer 4's input data = given) and (layer 4's output data = given) and (L4 temporal resources = low or medium), then Layer 4 = L4 suitable evaluation methods

  • Rule 4.2 If (evaluation-attribute-L4 = moderately covered L4 or not covered L4) and ((layer 4's input data = produced) or (layer 4's output data = produced) or (L4 temporal resources = high)), then Layer 4 = L4 unsuitable evaluation methods

For layer 5, two rules are defined, based on the evaluation settings of layer 5: given input data, given output data, high temporal resources, and high budget. The rules for all layers translate directly into executable predicates, as shown in the sketch after this list.

  • Rule 5.1 If (evaluation-attribute-L5 = moderately covered L5 or highly covered L5) and (layer 5's input data = given) and (layer 5's output data = given), then Layer 5 = L5 suitable evaluation methods

  • Rule 5.2 If (evaluation-attribute-L5 = moderately covered L5 or not covered L5) and ((layer 5's input data = produced) or (layer 5's output data = produced)), then Layer 5 = L5 unsuitable evaluation methods

Appendix B

Robustness analysis of the ELECTRE TRI-B-H method

After the assignment of the alternative evaluation methods to their respective categories, a robustness analysis should be carried out. The goal of this analysis is to check how the variations introduced in the parameters of the intermediate criteria influence the assignment of alternatives to the categories at the root node. This analysis is essential because the values of these parameters are determined imprecisely. Two scenarios with different parameter configurations are considered in this study, as shown in Table 13.

Table 13 Robustness configuration parameters

The analysis focuses on variations in the veto thresholds of the intermediate criteria. The other parameters (i.e., the preference and indifference thresholds) are left untouched, since the scales of the decision criteria in our case have few levels (at most three). In the first scenario, the veto threshold is activated as soon as some decision criterion is not fulfilled; this means that the assignments at the intermediate criteria must fully satisfy the defined rules. This scenario is the one applied to obtain the results presented in Table 12. In the second scenario, the veto threshold is slightly increased, so that the veto cannot be applied until a difference of more than two units is reached.

Figure 9 presents the number of alternatives assigned to each category at the root node under the two scenarios.

Fig. 9 Number of alternative evaluation methods assigned to each category by applying the two scenarios

From this figure, we can see that the assignment of alternatives to the categories at the root is consistent with the profile limits defined through the rules presented in Table 8.


About this article


Cite this article

Dhouib, A., Kolski, C. & Neji, M. Toward a web-based multi-criteria decision support system for the layered evaluation of interactive adaptive systems. Univ Access Inf Soc 22, 415–443 (2023). https://doi.org/10.1007/s10209-021-00850-y
