Raoudha Chebil, Wided Lejouad-Chaari, Stefano A. Cerri. Performance Evaluation of E-Collaboration. Nik Bessis; Piet Kommers; Pedro Isaías (Eds.). IADIS'10: International Conference on Collaborative Technologies, Jul 2010, Fribourg, Germany. pp. 163-167, 2010. <http://www.collaborativetech-conf.org/>

HAL Id: lirmm-00557744
http://hal-lirmm.ccsd.cnrs.fr/lirmm-00557744
Submitted on 19 Jan 2011

PERFORMANCE EVALUATION OF E-COLLABORATION

Raoudha CHEBIL, Wided LEJOUAD CHAARI
Laboratoire d'Ingénierie Informatique Intelligente (LI3) - ISG Tunis
Ecole Nationale des Sciences de l'Informatique - Université de la Manouba
Campus de la Manouba, 2010 Manouba, Tunisie

Stefano A. CERRI
Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM)
Univ. Montpellier 2 & CNRS - 161, Rue Ada - F-34095 Montpellier, France

ABSTRACT

The current global dimension of human exchanges in any domain (work, commerce, learning, entertainment...) is accompanied by technologies that enhance synchronous and asynchronous communication, thus facilitating both collaboration and competition: the two driving forces of progress through the ages. Collaboration can take place essentially in asynchronous mode, through e-mails and exchanges of files and information, or in synchronous mode, by organizing meetings where collaborators communicate directly.
Geographical and temporal distance may be overcome by several ICT (Information and Communication Technologies) solutions, usually grouped under the label of e-collaboration. This concept rests on a high number of interactions that can be classified into three types: Computer to Computer Interaction (1), Collaborator to Computer Interaction (2) and Collaborator to Collaborator Interaction (3). Consequently, performance evaluation of e-collaboration has to be considered as the separate evaluation of each of the three types of interaction. This view leads to a focus on three main aspects: the first is the system (efficiency), the second is the interface (ergonomics), the third is the collaborator's behavior during collaboration and its influence on the outcome of the joint effort (effectiveness). Three evaluation layers thus emerge. In this paper, we propose an evaluation method appropriate to each layer, so that future developments, applying the new evaluation method and exploiting its results in actual settings, may improve efficiency, ergonomics and effectiveness of e-collaboration separately and in a complementary way.

KEYWORDS

E-collaboration, Performance Evaluation, Efficiency, Ergonomics, Effectiveness.

1. INTRODUCTION

Electronic collaboration (or e-collaboration) can be defined as "the collaboration among individuals engaged in a common task using electronic technologies" [4]. Two centuries ago, collaboration was possible only between persons in the same place at the same time; inventions then followed, and a primitive form of e-collaboration appeared, exploiting the telegraph, then the telephone and, in the 1980s, mainframes. Despite these developments, e-collaboration remained quite difficult. With the advent of e-mail, e-collaboration was remarkably favored. Subsequently, other technologies were developed, such as Group Decision Support Systems.
The Web, in particular its technologies allowing users to communicate both by "reading and writing", tremendously accelerated the emergence of social networks of many kinds, where "easy" bidirectional communication by the "casual" user permits quite sophisticated forms of e-collaboration. The concept of e-collaboration has revolutionized many domains, like e-commerce and e-learning; its improvement and dissemination are therefore of great interest and may benefit any application domain. Surprisingly, however, state-of-the-art work on e-collaboration performance evaluation and improvement still presents several limits and is not yet based on widely accepted criteria. This fact will, in our opinion, negatively affect the evolution of the concept. As a solution to this problem, we propose here an e-collaboration performance evaluation method. This paper is organized as follows. In section 2, we position the reader in the context by summarizing most of the existing work on e-collaboration. In section 3, we detail the proposed performance evaluation solution by explaining first the new interaction view behind the three proposed aspects to evaluate (efficiency, ergonomics and effectiveness) and second, the evaluation method for each. In section 4, we discuss the validation procedure of the suggested method.

2. E-COLLABORATION STATE OF THE ART

The state of the art of e-collaboration is quite rich and the existing works can be classified into several categories according to the problem type. The first category consists of the conception and development of collaborative platforms providing increasingly useful services, like Agora [6] and AGrIP [7]. The second category focuses on the most suitable technologies for improving and refining the services offered by collaborative platforms.
Two particular technologies were studied by the majority of these research works and, at the same time, exploited in some concrete collaboration developments [3]: Grid and Agent technologies. The third category of works deals with performance evaluation of e-collaboration. This concept has no general definition; it is characterized by its strong dependence on the constraints of the studied domain. In general, technical evaluations are based on aspects dealing with the performance of the software, like computing time and accuracy of results: these measures cannot be applied straightforwardly in collaborative contexts, because they do not adopt a holistic view of the socio-technical system (the system and the humans) and cannot predict its future evolution. To obtain a realistic and useful evaluation, many other factors should be considered, like the objective of the e-collaboration, and the actual data and resources (what is traditionally called the pragmatic context)¹. This strong dependence of e-collaboration on its context makes the evaluation of its performance rather difficult, and the identification of general performance evaluation solutions far from evident. In the literature [2], there are different types of evaluations: feasibility evaluation, which is based on cost; iterative evaluation, which aims to improve collaborative platforms; comparative evaluation, which compares systems; and appropriateness evaluation, which determines whether a system is appropriate to a given organization's process. In e-collaboration performance evaluation work, there are no widely known standard evaluation methods. The most used performance evaluation approach is top-down; it consists in "identifying useful metrics from goals" [8]. Many methods are based on it, like Quality Function Deployment (QFD), Software Quality Metrics (SQM) and Goal/Question/Metric (GQM). Also, many works on new collaborative platforms speak about performance but do not mention how they evaluate it.
In our opinion, this is due to the lack of standard and well-known e-collaboration performance evaluation methods. We consider such methods a key task in the development and maintenance of any software; their absence can negatively affect the evolution, the reliability and even the life cycle of the whole promising concept of e-collaboration. For these reasons, we propose our evaluation method.

3. A VIEW ON PERFORMANCE EVALUATION

3.1 Interaction view

In order to evaluate e-collaboration, let us begin by analyzing and describing its properties in time. In general, an e-collaboration environment is supported by a distributed system, is composed of human collaborators, and disposes of software and hardware resources. It is characterized by one or more objectives and involves, to reach them, a certain number of exchanges between collaborators. A successful e-collaboration is supposed to provide the most adequate conditions for the achievement of all needed exchanges. In fact, to communicate with collaborator B, collaborator A needs to interact with his computer, which in its turn needs to interact with the recipient's computer.¹

¹ One of the reasons underlying the emergence of a "Web Science" is exactly this: on the future Web, technologies (infrastructures and applications) will not be fruitfully conceived, deployed and exploited unless a very accurate empirical (scientific) study has been associated that analyses the use of those technologies by societies of humans. The profound conceptual shift from the classical "application context" to the future "requirement elicitation, evaluation and exploitation scenario of use" therefore becomes evident (http://webscience.org/home.html). The same "paradigm shift" is claimed by most of the scientists currently engaged in Service Oriented Computing.
From this description, three types of interactions can be identified during an e-collaboration session, as shown by figure 1: Computer to Computer Interaction, Collaborator to Computer Interaction, Collaborator to Collaborator Interaction.

Figure 1. Interaction diagram

As e-collaboration is based on the overlap of these different types of interactions, its evaluation can be considered with respect to the evaluation of each type of these interactions. The evaluation of Computer to Computer Interaction judges the system's performance, i.e. e-collaboration's efficiency. The evaluation of Computer to Collaborator Interaction judges the interface of the platform, i.e. the ergonomic aspects, and finally the evaluation of Collaborator to Collaborator Interaction judges the user's behavior during collaboration and its influence on the global outcomes, i.e. e-collaboration's effectiveness. This view will permit us to consider e-collaboration's evaluation as the analysis of these superposed layers. Our contribution will not consist in proposing a new evaluation method for each layer, but in investigating the most adequate method for each one, in the combination needed to account for the previously explained superposition with respect to the studied contexts (scenarios of use).

3.2 Evaluation method

3.2.1 Efficiency evaluation

In the literature [1], the main performance evaluation techniques are analytical modeling, simulation and measuring. The first technique consists in representing the system by an abstract mathematical model. The analysis of this model permits extraction of the system's performance parameters. This technique allows rapid implementation and gives precise results. But its application to complex systems requires the assumption of some mathematical hypotheses and approximations that may affect the fidelity of the system representation.
The second technique consists in implementing a software model permitting imitation, in a simple manner, of the system's evolution. It is interesting when the studied system is under construction, inaccessible or too complex to be handled directly. But it does not always guarantee a faithful representation of the real system. The third technique consists in measuring certain characteristics of the system and analyzing the obtained results. These measures are taken by specific instruments or realized by the system itself. The advantage of this technique is the precision of its results. However, the task of measuring could degrade the system's functioning. To obtain a reliable evaluation, we have to choose the technique representing reality in the most faithful manner, namely the measuring technique. Consequently, the presented efficiency evaluation will be based on it, and we have to identify the significant measures to capture. We estimate that this layer must guarantee rapidity of communication and integrity of transferred data. To evaluate these two criteria, we propose to carry out some statistics on communication time and on the rate of losses having occurred during the collaboration. As shown in Table 1, we distinguish synchronous and asynchronous modes.

Table 1. Efficiency measures

Criterion: Communication
- Synchronous mode: average response time to a synchronous request: (Σk TRk) / Ns, where TRk is the response time to the synchronous request k and Ns is the number of satisfied synchronous requests.
- Asynchronous mode: average response time to an asynchronous request: (Σk TTk) / Nas, where TTk is the response time to the asynchronous request k and Nas is the number of transferred asynchronous requests.

Criterion: Losses
- Synchronous mode: percentage of unsatisfied synchronous requests (having no response): Nns / N1, where Nns is the number of unsatisfied synchronous requests and N1 is the total number of synchronous requests (Nns = N1 - Ns).
- Asynchronous mode: percentage of lost asynchronous requests (not transferred): Np / N2, where Np is the number of lost asynchronous requests and N2 is the total number of asynchronous requests (Np = N2 - Nas).

After the evaluation, the obtained results have to be interpreted by comparing them to expected values. Since the reliability of the evaluation depends heavily on this interpretation, these values have to be rigorously chosen. The analysis of several series of experiments has to be carried out to fix these particular values.

3.2.2 Ergonomic evaluation

To evaluate ergonomics, many methods exist in the literature [5]. They can be divided into two categories: analytical and empirical. Analytical methods consist in the simulation of task executions without involving the user, while empirical methods observe users' behavior during their interaction. Each of these two categories implements diverse techniques: GOMS (Goals, Operators, Methods and Selection rules), cognitive walkthrough and heuristic evaluation for analytical methods; and interviews, questionnaires and measurement (of the time required to execute a task, the accuracy of results and the number of errors) for empirical methods. Since this layer concerns Computer to Collaborator Interaction, its evaluation should be oriented to user behavior. So, we adopt the empirical techniques and we propose the following plan to the evaluator:

Before the beginning of the e-collaboration work:
1. Designate a collaboration member mastering all the session details (objectives, constraints, members' profiles...) to give precise and correct responses when asked in the following steps and also in the effectiveness evaluation. This member will be named the collaboration leader.
2. Determine the global and intermediate objectives of the collaboration by interacting with the collaboration leader.
3.
According to the recovered information, identify the important tasks that have to be carried out to reach the collaboration objectives.

During the collaborative session:
4. Test the collaborators' capacity to execute the tasks identified in step 3. For this purpose, we propose two measures estimated as the most significant in this context: the time spent to launch a task and the number of errors committed before launching a task. The obtained values are interpreted by comparing them to theoretical values fixed by the evaluator.

After achieving the collaborative work:
5. Retrieve the collaborators' positive and negative remarks about the system interface.
6. Generate an evaluation report summarizing the detected failures of the evaluated interface as well as its positive aspects.

3.2.3 Effectiveness evaluation

In general, the success of an e-collaboration is related to the adequacy between the envisaged objectives and the ones actually attained. This adequacy depends on the collaborators' behavior and their efficacy in accomplishing the work in question. The evaluation process is as follows:

Before the beginning of the collaboration work:
1. Identify the e-collaboration constraints by interacting with the collaboration leader. These constraints can consist, for example, in dependencies between different collaboration steps or between distinct collaborators. Their non-compliance could be the cause of unsatisfactory results.
2. Select the events to be captured according to the constraints stated in the previous step. The evaluation system is intended to offer the possibility to capture different types of events, such as connections and disconnections of collaborators, the profile of each collaborator, the software resources used and the exchanges carried out during the collaboration session.

After achieving the collaborative session:
3. Verify whether the global and intermediate objectives were attained, through a questionnaire sent to the collaboration leader.
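As an illustration of the efficiency layer, the measures of Table 1 can be computed from a session log of requests. The sketch below is ours, not part of the proposed method: the log format and field names (mode, response_time) are illustrative assumptions about what the measuring instruments of section 3.2.1 would record.

```python
# Sketch: computing the Table 1 efficiency measures from a hypothetical
# request log. A request whose response_time is None received no response
# (unsatisfied if synchronous, lost if asynchronous).

def efficiency_measures(requests):
    """requests: list of dicts with keys 'mode' ('sync' or 'async') and
    'response_time' (seconds, or None if no response was ever received)."""
    sync = [r for r in requests if r["mode"] == "sync"]
    asyn = [r for r in requests if r["mode"] == "async"]

    # Response times of satisfied / transferred requests only.
    sync_ok = [r["response_time"] for r in sync if r["response_time"] is not None]
    asyn_ok = [r["response_time"] for r in asyn if r["response_time"] is not None]

    return {
        # Average response times: (sum of TRk)/Ns and (sum of TTk)/Nas.
        "avg_sync_response": sum(sync_ok) / len(sync_ok) if sync_ok else None,
        "avg_async_response": sum(asyn_ok) / len(asyn_ok) if asyn_ok else None,
        # Loss percentages: Nns/N1 and Np/N2, with Nns = N1 - Ns, Np = N2 - Nas.
        "pct_sync_unsatisfied": 100.0 * (len(sync) - len(sync_ok)) / len(sync) if sync else 0.0,
        "pct_async_lost": 100.0 * (len(asyn) - len(asyn_ok)) / len(asyn) if asyn else 0.0,
    }

# Hypothetical session: 3 synchronous requests (one unsatisfied) and
# 2 asynchronous requests (one lost).
log = [
    {"mode": "sync", "response_time": 0.2},
    {"mode": "sync", "response_time": 0.4},
    {"mode": "sync", "response_time": None},
    {"mode": "async", "response_time": 1.0},
    {"mode": "async", "response_time": None},
]
measures = efficiency_measures(log)
```

Comparing the obtained values against the expected thresholds fixed by the evaluator, as proposed in section 3.2.1, would then yield the efficiency verdict for the session.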
4. DISCUSSION AND CONCLUSION

As explained in section 2, related works on e-collaboration present several missing conventions, standards and methods, and even failures, especially in the performance evaluation of the socio-technical system consisting of machines and humans engaged in distant collaboration for jointly performing complex tasks. The conception of the presented evaluation method was motivated by the lack of clear guidelines in the literature and by the conviction of the importance of validated criteria. Our contribution started with a new vision of the e-collaboration concept; then a new evaluation method was proposed, composed of three evaluation layers: efficiency, ergonomics and effectiveness. As much work has been done on efficiency and ergonomics evaluation, we were able, after some reading, to choose an evaluation method for each of these aspects. The third aspect, reflecting the performance of the collaborators' behavior, is specific to e-collaboration: there is no work discussing its evaluation in the literature. So we proposed a new procedure to evaluate it. The overall method is thus composed of the three proposed evaluation procedures. The described evaluation does not stop at judging performance but also detects and explains the origins of problems, enabling a more targeted improvement of the evaluated e-collaboration environment. In order to be put into practice, this contribution has to be validated on a number of different collaboration scenarios, each significant for a class of applications. This validation is intended to ensure that the application of the proposed evaluation method correctly reflects the collaborators' satisfaction and permits the detection of potential collaboration problems. The interpretation process can also be adjusted through many series of experiments.

REFERENCES

Book
[1] Jain, R., 1991. The Art of Computer Systems Performance Analysis. John Wiley and Sons Publishers, England.

Journal
[2] Damianos, L. et al, 1999.
Evaluation for Collaborative Systems. In ACM Computing Surveys, Vol. 31, No. 2, pp. 1526.
[3] Jonquet, C. et al, 2008. Agent-Grid Integration Language. In International Journal on Multi-Agent and Grid Systems, Vol. 4, No. 2, pp. 167-211.
[4] Kock, N. and Nosek, J., 2005. Expanding the boundaries of e-collaboration. In IEEE Transactions on Professional Communication, Vol. 48, No. 1, pp. 1-9.

Conference paper or contributed volume
[5] Doubleday, A. et al, 1997. A Comparison of Usability Techniques for Evaluating Design. Proceedings of the 2nd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. Amsterdam, The Netherlands, pp. 101-110.
[6] Dugénie, P. et al, 2008. Agora UCS, Ubiquitous Collaborative Space. Intelligent Tutoring Systems, Volume 5091 of Lecture Notes in Computer Science. Heidelberg, Germany, pp. 696-698.
[7] Liu, J. and Shi, Z., 2007. Distributed System Integration in Agent Grid Collaborative Environment. Proceedings of the IEEE International Conference on Integration Technology. Shenzhen, China, pp. 373-378.
[8] Steves, M. and Scholtz, J., 2005. A Framework for Evaluating Collaborative Systems in the Real World. Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05). Hawaii, USA, pp. 29-37.