Task Models and Diagrams For User Interface Design
Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
University of Dortmund, Germany
Madhu Sudan
Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Moshe Y. Vardi
Rice University, Houston, TX, USA
Gerhard Weikum
Max-Planck Institute of Computer Science, Saarbruecken, Germany
Marco Winckler, Hilary Johnson, Philippe Palanque (Eds.)
Volume Editors
Marco Winckler
Philippe Palanque
Université Paul Sabatier (Toulouse 3)
Toulouse, France
E-mail: {winckler,palanque}@irit.fr
Hilary Johnson
University of Bath, UK
E-mail: H.Johnson@bath.ac.uk
CR Subject Classification (1998): H.5.2, H.5, D.2, D.3, F.3, I.6, K.6
ISSN 0302-9743
ISBN-10 3-540-77221-9 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-77221-7 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
Springer is a part of Springer Science+Business Media
springer.com
© Springer-Verlag Berlin Heidelberg 2007
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper SPIN: 12201389 06/3180 543210
Preface
Task analysis and modelling have existed for many years, initially for training
purposes but latterly for providing a principled approach to improving the
usability of existing and proposed interactive systems. There have been many
successes, along with critical appraisal of the utility of task analysis. The community
remains strong, active and enthusiastic. Over the years we have developed a
plethora of theoretical approaches, models and techniques. These differ in terms
of what is modelled, the nature of the representations and notations used, their
scalability, the ease with which they can be applied with good effect, and the
ease with which they can direct the design of systems to support task execution.
Task models and associated diagrams that represent task knowledge and
behavior are in demand now as much as they ever were. Good design is fundamental,
appreciated by users, sells, and improves the quality of our daily
lives, and good system design means supporting users and their interaction
with technology. Technology is changing: we now have mobile and pervasive
systems, and yet we still need to analyze the goals and tasks undertaken using
these systems. The nature of the tasks might be different (shorter in duration,
overlapping, needing to be performed more quickly, rooted in communication
and entertainment), but it is still important to understand, model and support
user goals.
The proceedings give a flavor of the issues facing task modelling at this moment
in time. A primary aim of Tamodia as a conference series is to educate,
to promote and exchange existing ideas and problem solutions, and to generate
new ideas and associated research programmes. As in previous years, the scope
of the papers is broad. This year we were very privileged that the invited talk
on Modelling Activity Switching was given by Stephen Payne, from Manchester
Business School. Other highlights of the conference included sessions on
Workflow-Based Systems; Task Patterns; Task Models for Non-standard Applications;
Model-Driven Engineering; Task-Based Evaluation and Testing; and
Extending Task Models.
A rigorous refereeing process was applied to the papers, and the standard
of the accepted papers is high, representing a good cross-section of academic
research and, to a lesser extent, industrial research. We are grateful to the authors
for submitting their papers to Tamodia and to the many people who took part in
refereeing, including the Programme Committee members. These contributions
have made the conference series a success.
These proceedings are a valuable information resource for researchers and
industry members alike who are interested in applying task analysis and modelling
techniques to an ever-widening range of domains and problems. The reported
research is diverse and gives some indication of the new directions in
which task analysis theories, methods, techniques and tools are progressing.
Additionally, there are several new challenging opportunities for the use of task
modelling in the future, and we are sure that the Tamodia conference series will
be at the forefront in promoting research in these new areas.
General Chair
Philippe Palanque, University Paul Sabatier (Toulouse 3), France
Program Chairs
Hilary Johnson, University of Bath, UK
Marco Winckler, University Paul Sabatier (Toulouse 3), France
Program Committee
Sandrine Balbo, University of Melbourne, Australia
Eric Barboni, University Paul Sabatier (Toulouse 3), France
Rémi Bastide, University Toulouse 1, France
Birgit Bomsdorf, University of Hagen, Germany
Gaëlle Calvary, University of Grenoble I, France
Gilbert Cockton, University of Sunderland, UK
Karin Coninx, Hasselt University, Belgium
Maria-Francesca Costabile, Università di Bari, Italy
Anke Dittmar, University of Rostock, Germany
Alan Dix, Lancaster University, UK
Peter Forbrig, University of Rostock, Germany
Elizabeth Furtado, UNIFOR, Brazil
Nick Graham, Queen's University, Canada
Hilary Johnson, University of Bath, UK
John Karat, IBM T.J. Watson Research Center, USA
María-Dolores Lozano, Universidad de Castilla-La Mancha, Spain
Kris Luyten, Hasselt University, Belgium
Mieke Massink, CNR-ISTI, Italy
David Navarre, University Toulouse 1, France
Jeffrey Nichols, Carnegie Mellon University, USA
Philippe Palanque, University Paul Sabatier (Toulouse 3), France
Thomas Pederson, University of Umeå, Sweden
Fabio Paternò, ISTI-CNR, Italy
Costin Pribeanu, ICI Bucuresti, Romania
Matthias Rauterberg, Eindhoven University of Technology, The Netherlands
Carmen Santoro, ISTI-CNR, Italy
Corina Sas, Lancaster University, UK
Dominique Scapin, INRIA, France
Kevin Schneider, University of Saskatchewan, Canada
Model-Driven Engineering
Articulating Interaction and Task Models for the Design of Advanced
Interactive Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Syrine Charfi, Emmanuel Dubois, and Rémi Bastide
Task Patterns
Defining Task Oriented Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Gregory Bourguin, Arnaud Lewandowski, and Jean-Claude Tarby
Modelling Activity Switching (Invited Talk)
Stephen Payne
Abstract. How do people decide what to do when? Why is it that people often
give up one task to begin another, only to resume the first later? In this talk I
will briefly review some experiments on how people allocate their time
adaptively across multiple texts and multiple tasks. I will then focus on how
strategies for adaptive time allocation can be modelled. The model I develop
derives from heuristic accounts of animal foraging behaviour. In the course of
the talk I will review recent arguments by Roberts and Pashler suggesting that
the standard criterion of fitting models to experimental data is too lax, even
though the model I am considering has only two free parameters and even
though its output is being fitted simultaneously to several quantitative
dependent variables. Focussing instead on whether the model can predict the
data leads to a more complicated but more interesting model. This model
suggests that people orient to their activities in terms of either goal
accomplishment or currency accumulation, and may switch between these
orientations. To understand human activities and in particular the decisions that
people make to continue or switch activities, we need to understand not only
goal-subgoal hierarchies but also moment-by-moment gain curves.
Agile Development of Workflow Applications with
Interpreted Task Models
M. Stolze et al.
1 Introduction
Workflow applications are used in enterprise settings to increase visibility, efficiency
and compliance of important business processes. They help to realize efficiency
potentials through the elimination of transport and wait times between process
activities and provide a detailed level of control over the assignment of work to
process participants [9]. Examples of such processes are the tracking of candidates,
tracking of benefit changes, order tracking, prospect follow-up and generation and
review of quotes and proposals. In these examples, important enterprise data
(employee, customer, and contract) are worked on by multiple people in predefined
roles and steps. The Workflow Management Coalition (WfMC) defines workflow as
"the automation of a business process during which documents, information, or tasks
are passed from one participant to another for action, according to a set of procedural
rules" [8]. These rules (i.e., workflow task models) define the organizational units,
roles, and activities as well as data, events, and tools that comprise the workflow [7].
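To make this definition concrete, the following minimal sketch renders the named elements as data types. It is purely illustrative: the type and field names are our own assumptions, not taken from the WfMC documents or from the system described in this paper.

```java
import java.util.List;

// Illustrative data model for the elements a workflow task model names:
// roles, activities, routing rules, and the process they belong to.
// All names are hypothetical.
record Role(String name) {}
record Activity(String name, Role performedBy) {}
record Transition(Activity from, Activity to, String condition) {}

record WorkflowDefinition(String processName,
                          List<Role> roles,
                          List<Activity> activities,
                          List<Transition> routingRules) {}
```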
Interactive workflow applications coordinate the tasks of human actors. Data is
frequently represented as forms that are passed from one participant to another.
Development of interactive workflow applications usually follows one of two
approaches. The first approach uses a workflow system that provides a high-level
workflow modeling language. Here, system development involves mainly the description
of the desired workflows in the provided modeling language. Using techniques
from model-driven development, the workflow model is then used to generate the
application code of the running system. Examples of commercial systems supporting
such an approach are FileNet (www.filenet.com), VDoc (www.vdocprocess.com) and
[...]
pages of the bread-crumb links. CUICs can also access global web-application data
from the application database and the file system to determine their appearance and
behavior. Once defined, CUICs appear in the palette of GUI components in the visual
UI builder that is part of LCD. From the palette, the CUICs can be dragged and
dropped onto pages just like any other (standard) UI component.
CUICs can be stored together with template pages in template applications. These
template applications are later loaded and adapted for rapid application development.
The ease of development makes LCD particularly useful for the development of
situational applications by occasional developers who focus on solving business
problems.
We used these features of LCD to create a specialized template application for the
development of interactive workflow applications. The template includes template
data, a template XML task definition, and template pages and CUICs. The template
pages reference the data and task definition. Occasional developers create a workflow
application by (1) adapting the template data objects, (2) adapting the template user
interface pages, and (3) adapting the template workflow task definition that defines
the layout and behavior of the workflow-specific user interface components.
Fig. 1. High Level Overview of the Hiring Process. The diagram shows the creation and
processing of a single Job Description and a single associated Job Application by the different
roles involved in the process.
Below we will discuss the adaptation of template pages and the workflow task
definition for a concrete example application that supports a hiring process. The
application supports the publication of job descriptions and the collection and
evaluation of job applications that are sent in response to the published job
descriptions. Figure 1 provides a high-level view of the process. It describes the
processing of a single job description and a single associated job application by the
different roles involved in the process. The roles defined in this application are:
Applicants: Applicants can browse all published job descriptions. They can read
the public information of the job descriptions, but they cannot see, for example, the
responsible manager or director. Applicants can add and draft a job application as a
response to a job description. They can browse and edit the job applications that
they created. They can also submit their job application. Applicants cannot edit the
main part of a submitted job application. After submission they can only add
individually logged notes to a job application.
Managers: Managers can add and draft job descriptions. They can submit job
descriptions for publication. Managers can browse their own job descriptions and
all job applications that have been submitted in response to the job descriptions
that they published. Managers can edit comments of a job application. These
comments are visible to directors, but not to applicants.
Directors: Directors can browse all job descriptions submitted for publication.
They can decide either to return a job description for additional drafting or to
publish it. Directors can also browse all job applications in their area and decide
on the invitation of an applicant for an interview.
In this example we see that different roles have different rights to browse, read,
edit, and add forms data in different situations. For example, applicants can only
edit the main part of a job application before it is submitted. Thus, the page presenting
data of a submitted job application to an applicant should not provide the information
in editable fields. Similarly, the button that lets applicants submit their application
should no longer be displayed for a job application that has already been
submitted.
Figure 2 shows an example Welcome page of the hiring portal application.
The page is created from the unmodified template page by interpreting the
application-specific workflow task model (Figure 3). The workflow task model
explicitly specifies, for each role, the permitted BREAD (Browse, Read, Edit, Add,
Delete) operations, the data objects available to that role, and the states in which each
operation is available (canBrowse, canRead, etc.). Furthermore, its canSubmit
clauses define which role can initiate a state transition on which object in which state.
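As a rough illustration of how such clauses might be interpreted at runtime, consider the following sketch of a keyed permission lookup. The class, method, and key encoding are invented for illustration and are not the actual LCD or template API; the real system interprets an XML task definition.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical runtime check in the spirit of the canBrowse/canRead/
// canEdit/canAdd/canDelete/canSubmit clauses: an operation is allowed
// for a role on a data object only in the states the task model lists.
class TaskModelInterpreter {
    // dataObject -> set of "operation:role:state" keys permitted by the model
    private final Map<String, Set<String>> clauses;

    TaskModelInterpreter(Map<String, Set<String>> clauses) {
        this.clauses = clauses;
    }

    boolean isAllowed(String operation, String role, String dataObject, String state) {
        Set<String> allowed = clauses.get(dataObject);
        return allowed != null && allowed.contains(operation + ":" + role + ":" + state);
    }
}

// Matching the behaviour described in the text:
// isAllowed("edit", "Applicant", "JobApplication", "DRAFTING")  -> true
// isAllowed("edit", "Applicant", "JobApplication", "SUBMITTED") -> false
```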
The template Welcome page assembles two workflow-specific CUICs. Figure 4
provides the detailed XML definition of the Welcome page and Figure 5 the
definition of the swfRoleHomePageLinks component. Currently there is no visual
editor for the XML task definition, so occasional developers need to edit the XML
task definition directly. Changes to pages, in contrast, can be performed in the
graphical page editor. Thus, occasional developers do not need to directly edit the
page code provided in Figure 4 and Figure 5.
Fig. 2. Manager Welcome Page: On her Welcome Page, manager1 sees a general description
of the application and the tasks associated with the different roles. As manager1 is in the group
of managers, she is provided with links to the Applicant Home Page (available to every user)
and a link to the Manager Home Page (available only to members of the group managers).
Fig. 3. Workflow task model (extract) for the Hiring Portal workflow application
Fig. 4. Welcome Page XML: The Welcome page assembles two CUICs, the
swfWelcomePageHeader and the swfRoleHomePageLinks. The top-level task description text
is directly retrieved from the workflow definition.
Fig. 5. swfRoleHomePageLinks Custom UI Control XML. The UI control lays out the block of
links to the role-specific home pages. It retrieves the list of defined roles from the workflow
task model. For each role it creates a panel with a link UI component that has as text the
printName of the role (with "Home Page" appended) and as target page the homePage attribute
specified in the workflow task definition. The JavaScript function isCurrentUserInGroup is
called to determine whether the link should be made available to the user.
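The caption fully determines the control's logic, which can be paraphrased as follows. This is a schematic Java rendering under assumed types (RoleDef, Link); the real control is defined in XML with the JavaScript isCurrentUserInGroup check.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Paraphrase of the control's logic: one home-page link per role, shown
// only when the current user is in the role's group.
record RoleDef(String printName, String homePage, String group) {}
record Link(String text, String targetPage) {}

class RoleHomePageLinks {
    List<Link> render(List<RoleDef> roles, Set<String> groupsOfCurrentUser) {
        List<Link> links = new ArrayList<>();
        for (RoleDef role : roles) {
            if (groupsOfCurrentUser.contains(role.group())) { // cf. isCurrentUserInGroup
                links.add(new Link(role.printName() + " Home Page", role.homePage()));
            }
        }
        return links;
    }
}
```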
[Figure: overall page structure of the application. A Login page leads to the Welcome
Page, which links to the role-specific Data Detail pages (Manager, Applicant, and Admin
JobDetail pages).]
Fig. 8. Data Detail Page for an Applicant of a Job Application in State DRAFTING: The data
fields are editable as specified in the workflow definition, and the button for submitting the data
is enabled and shows the next state
Fig. 9. Data Detail Page for an Applicant of a Job Application in State SUBMITTED: the data
fields are no longer editable, and the Submit button is no longer visible.
3 Development Process
Occasional developers use the following steps to create a new workflow application:
1. Import a new (empty) template application.
2. Create the new data structures and associated data views.
3. Create the XML process definition.
4. Create the role-specific home pages.
5. Create the role-specific data detail pages.
6. Deploy and test the solution.
All of these tasks, with the exception of the editing of the XML process definition,
are supported by visual editors. LCD provides a visual XML Schema editor and a
visual data view editor to support the tasks in Step 2. In Step 4 the LCD visual page
editor is used to copy and adapt the template home page. The data views defined in
Step 2 are placed on the page and the page role is specified (cf. Figure 10). The visual
page editor can also be used to create the data detail pages in Step 5 (cf.
Figure 11). Finally, the application is deployed to a remote portal server by using the
LCD application deployment utility. The application is then ready for testing and use
once users and their group membership have been defined for the portal.
Fig. 10. Creating the Applicant Home Page using the LCD Visual Page Editor
Fig. 11. Creating the Director Job Application Data Detail Page using the LCD Visual Page
Editor
References
1. Brambilla, M.: Generation of WebML Web Application Models from Business Process
Specifications. In: 6th International Conference on Web Engineering (ICWE 2006), pp.
85–86. ACM Press, New York (2006)
2. Cachero, C., Gómez, J.: Advanced Conceptual Modeling of Web Applications: Embedding
Operation Interfaces in Navigation Design. JISBD 2002, 235–248 (2002)
3. Ceri, S., Fraternali, P., Bongio, A.: Web Modeling Language (WebML): a modeling
language for designing Web sites. Computer Networks 33(1-6), 137–157 (2000)
4. Koehler, J., Hauser, R., Kapoor, S., Wu, F.Y., Kumaran, S.: A Model-Driven Transformation
Method. EDOC 2003, 186–197 (2003)
5. Limbourg, Q., Vanderdonckt, J.: Addressing the Mapping Problem in User Interface Design
with UsiXML. In: Proceedings of TAMODIA 2004, pp. 155–163. ACM Press, New York
(2004)
6. Mellor, S.J., Scott, K., Uhl, A., Weise, D.: MDA Distilled: Principles of Model-Driven
Architecture. Addison Wesley, Reading (2004)
7. Stary, C.: TADEUS: Seamless Development of Task-Based and User-Oriented Interfaces.
IEEE Transactions on Systems, Man, and Cybernetics 30, 509–525 (2000)
8. WfMC: Workflow Management Coalition Terminology & Glossary, WFMC-TC-1011,
Issue 2.0 (June 1996)
9. zur Muehlen, M.: Organizational management in workflow applications. Information
Technology and Management 5(3), 271–291 (2004)
MDA Applied: A Task-Model Driven Tool Chain
for Multimodal Applications
M. Heinrich et al.
1 Introduction
During the last decade, the capabilities of end-user devices have evolved
remarkably in terms of multimodality and context-awareness. Nevertheless, the
richness of the resulting applications raises development costs, especially in terms
of user interface adaptation.
The EMODE project [1] addresses this issue. Since Model Driven Architecture
(MDA) also tackles the productivity problem [2], the EMODE tool chain
consistently follows the MDA approach.
In contrast to existing projects also proposing an MDA-compliant approach
[3], the EMODE project goes beyond the current exploitation of model-to-model
transformations and tool integration.
Combining essential tools (model editors, model repository, and transformation
engine) in a dedicated design-time environment accompanied by a runtime
environment drives the tool integration. Furthermore, model-to-model transformations
using the MDA mapping language QVT [5] accelerate the modelling
process. Both assets are the foundation of a cost-efficient development lifecycle.
This work was supported in part by the German Ministry of Education and
Research (BMBF). The project executing organization is the German Aerospace Center
(DLR).
2 Related Work
Despite the fact that the MDA approach promises a number of benefits [2],
MDA-compliant tool chains supporting the development of multimodal, context-aware
applications are rare.
Focusing on multimodal, context-aware user interface development, various
research projects are centred on UsiXML [6]. Since UsiXML is widely accepted,
a growing set of tools is available. In particular, the CAMELEON [7] and the
SALAMANDRE [8] projects made major contributions.
Figure 1 illustrates the variety of tools the UsiXML tool chain consists of.
Editing different models on various abstraction levels and initiating transformations
requires distinct tools which are not part of an integrating environment.
The missing integration hinders a seamless modelling workflow.
While UsiXML and EMODE provide task-centric tool chains for developing
multimodal applications, Rousseau et al. focus on tool support for the creation of
behavioural models [9]. The behavioural model, expressed by a set of election
rules, represents an algorithm to determine when to use which modality. However,
this approach does not provide capabilities to generate user interfaces.
Another model-based approach, the DynaMo-AID project [10], is devoted to
the development of context-aware user interfaces. The DynaMo-AID design-time
environment provides editors for different models and a model-to-model
transformation mechanism. While from a conceptual point of view the MDA principles
are satisfied, the tool support is currently considered a limited prototype [10].
3 EMODE Methodology
The EMODE methodology, an approach to modelling multimodal, adaptive user
interfaces, is based on the MDA. It comprises a number of modelling and
development phases, artifacts, transformations, and a conceptual architecture.
MDA encourages the (iterative) refinement of models from an abstract to a
platform-specific model [14,15]. EMODE follows the same approach by letting
the developer specify models at different levels of abstraction: very abstract
at the beginning (e.g. the goal model) and more concrete at the end (e.g. the UI
model for a specific modality). Among others, the models include the goal, concepts,
context, and user interface models and the (for EMODE central) task model (see Section 3.1).
As in MDA, the modelling phase in EMODE is followed by a code generation
step that produces the application code.
The metamodel for the different models used in EMODE is specified using
the Meta-Object Facility (MOF) [4]. This facilitates the use of MOF-based tools,
especially model-to-model transformations to support the modelling process, as
elaborated in Section 3.2. The EMODE tools presented in Section 4 are
implemented on top of the MOF-compliant metamodel and tightly integrated to
support seamless development, despite crossing different levels of abstraction.
Some of the models are directly connected through mappings. For example, the
task model has relations to the Abstract User Interface (AUI) model, describing
the application's user interface, and the Functional Core Adapter (FCA) model,
which represents the connection to the application's logic. Furthermore, model-
to-code transformations have been implemented, generating the concrete user
interfaces, controller logic, and method stubs to integrate the application logic.
The different phases, models, and transformations are depicted in Figure 2.
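As a rough illustration of these inter-model mappings, the sketch below records task-to-interactor and task-to-FCA associations that a later model-to-code step could walk. All names are our assumptions; EMODE's actual metamodels are MOF-based and considerably richer.

```java
import java.util.HashMap;
import java.util.Map;

// Schematic mapping layer between the task model and the AUI/FCA models.
// Identifiers are illustrative placeholders only.
class ModelMappings {
    private final Map<String, String> taskToInteractor = new HashMap<>();
    private final Map<String, String> taskToFcaCall = new HashMap<>();

    void mapInteractionTask(String taskId, String auiInteractorId) {
        taskToInteractor.put(taskId, auiInteractorId);
    }

    void mapSystemTask(String taskId, String fcaOperation) {
        taskToFcaCall.put(taskId, fcaOperation);
    }

    // A model-to-code transformation could walk these mappings to emit
    // concrete UI code, controller logic, and method stubs.
    String interactorFor(String taskId) { return taskToInteractor.get(taskId); }
    String fcaCallFor(String taskId)    { return taskToFcaCall.get(taskId); }
}
```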
The EMODE conceptual architecture describes the components involved in
the development process as well as in runtime support. It has been implemented
in the form of the EMODE tool chain, which comprises the modelling
infrastructure and runtime components offering services such as modality handling
and processing of contextual information.
4 Tool Chain
The EMODE tool chain is divided into a design-time environment and a runtime
environment. The design-time environment is an integrated modelling environment
which supports the developer in modelling the application. Generation and
adaptation of code is the last step in using the design-time environment. Afterwards,
the completed application is deployed into the runtime environment.
All editors are based on the Graphical Editing Framework (GEF) [11] and
thus provide a common look and feel. The main difference between the individual
editors is the set of model-specific tools they provide. The task editor, for example,
supports the developer in connecting tasks to each other via task edges and
allows model-to-model transformations to AUI or FCA to be started. The AUI
editor, on the other hand, lets the developer specify interactors and supports a
refinement of the user interface, depending on the set of available modalities.
To seamlessly integrate working at the junction between different models,
editors can reference model elements from other models. For example, the task
editor allows mappings between the task and FCA models to be specified by letting
the developer associate system tasks with FCA calls. This support is depicted in
Figure 4, which shows four central editors of the tool chain. The entire development
workflow is controlled from within the modelling environment.
MOF Repository. All models produced by the EMODE editors are stored
in an external model repository. This repository and the Java-based editors
are connected via a MOF-to-CORBA-IDL binding, as proposed by the MOF
Specification [4].
Fig. 4. Models and their support by the design-time environment. The developer is
supported through all stages of the development workflow: from abstract (goal) models
to platform-specific code.
This increases flexibility and removes the need to produce complex code from
the model.
5 Example Application
This section illustrates the EMODE approach by example. To this end, an
application in the area of plant maintenance is modelled using the EMODE
design-time environment. The final deployment of the modelled application in
the EMODE runtime environment yields a fully executable application.
The objective of the application is to support the plant maintenance staff of
a large company. Since the plant maintenance staff is responsible for tending
to occurring problems as quickly as possible, the desired benefit of a multimodal,
context-aware application would be an increased efficiency of maintenance order
processing.
In the plant maintenance scenario the following IT infrastructure is assumed:
A Product Lifecycle Management (PLM) system that acts as a central
server where all maintenance orders are entered.
To outline the modelling process, we describe the steps required to create the
application.
The central Task Model captures the flow of the application and represents the
starting point of the modelling process (the optional goal model will not be
regarded). Figure 5 defines the flow of the example application.
Starting at the initial node, the control flow defines the execution order of the
various tasks. In this example, the PLM system receives a new maintenance order
that is transmitted to a mobile device assigned to a maintenance employee.
To map the diversity of tasks, system and interaction tasks are introduced. While
system tasks require some kind of computation (e.g. the transmission of a maintenance
order), interaction tasks demand end-user interactions (e.g. reading
the incoming maintenance order). After receiving the maintenance order, a staff
member can decide whether to accept or to reject the maintenance order. In the
case of acceptance, the responsible staff member receives detailed instructions
supporting the maintenance task. Note that the system task retrieves data concerning
the current noise level from a context service (represented by the event
consumer) in order to judge whether instructions are delivered via voice or via a
graphical user interface. Afterwards the control flow reaches the final node and
the application terminates.
Acknowledgement
Contributions from the EMODE partner IKV++ Technologies are gratefully
acknowledged.
References
1. Enabling Model Transformation-Based Cost Efficient Adaptive Multi-modal User
Interfaces (EMODE) project (2007), http://www.emode-projekt.de
2. Bast, W., Kleppe, A., Warmer, J.: MDA Explained - The Model Driven Architecture:
Practice and Promise. Addison-Wesley, Reading (2003)
3. Vanderdonckt, J.: A MDA-Compliant Environment for Developing User Interfaces
of Information Systems. In: Proceedings of the 17th Conference on Advanced
Information Systems Engineering (2005)
4. Meta Object Facility (MOF) Specification (2002),
http://www.omg.org/docs/formal/02-04-03.pdf
5. MOF QVT Final Adopted Specification (2005),
http://www.omg.org/docs/ptc/05-11-01.pdf
6. Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L., Florins, M., Trevisan,
D.: USIXML: A User Interface Description Language for Context-Sensitive User
Interfaces. In: Proceedings of the ACM AVI 2004 Workshop (2004)
7. The CAMELEON Project (2004), http://giove.cnuce.cnr.it/cameleon.html
8. The SALAMANDRE Project (2005),
http://www.isys.ucl.ac.be/bchi/research/salamandre.htm
9. Rousseau, C., Bellik, Y., Vernier, F.: Multimodal Output Specification/Simulation
Platform. In: Proceedings of the 7th International Conference on Multimodal
Interfaces (2005)
10. Clerckx, T., Luyten, K., Coninx, K.: DynaMo-AID: a Design Process and a Runtime
Architecture for Dynamic Model-Based User Interface Development. Engineering
Human Computer Interaction and Interactive Systems (2005)
11. Graphical Editing Framework (2007), http://www.eclipse.org/gef/
12. Gamma, E., Helm, R., Johnson, R.E., Vlissides, J.: Design Patterns - Elements of
Reusable Object-Oriented Software. Addison-Wesley, Reading (1995)
13. Java Emitter Templates (JET) Tutorial (2005),
http://www.eclipse.org/modeling/emf/docs/2.x/tutorials/jet1/jet_tutorial1_emf2.0.html
14. Miller, J., Mukerji, J.: MDA Guide Version 1.0.1 (2003),
http://www.omg.org/docs/omg/03-06-01.pdf
15. Koch, T., Uhl, A., Weise, D.: Model Driven Architecture (2002),
http://www.omg.org/docs/ormsc/02-01-04.pdf
16. Paternò, F., Mancini, C., Meniconi, S.: ConcurTaskTrees - A Diagrammatic Notation
for Specifying Task Models. In: Proceedings of the International Conference
on Human-Computer Interaction (1997)
17. Object Management Group: Unified Modeling Language - Superstructure (2004)
18. Sottet, J., Calvary, G., Favre, J., Coutaz, J., Demeure, A.: Towards Mapping and
Model Transformation for Consistency of Plastic User Interfaces. ACM Conference
on Computer Human Interaction (2006)
19. Puerta, A., Eisenstein, J.: Towards a general computational framework for model-
based interface development systems. In: Proceedings of the 4th International
Conference on Intelligent User Interfaces (1999)
20. Burmeister, R., Pohl, C., Bublitz, S., Hugues, P.: SNOW: A Multimodal Approach
for Mobile Maintenance Applications. In: Proceedings of the 15th IEEE International
Workshops on Enabling Technologies: Infrastructure for Collaborative
Enterprises (2006)
Extending a Dialog Model with Contextual
Knowledge
L. Vanacken et al.
1 Introduction
When developing interactive computer applications, a lot of time is spent
designing and implementing the user interface. This is particularly true for 3D
multimodal interfaces for Virtual Environments (VEs). The process of creating
or selecting interaction techniques for such interfaces is not straightforward. One
possible approach is model-based user interface design as described in [1,2,3].
First, in model-based user interface design, the tasks that the user can perform
in the application and the tasks that the computer must execute accordingly
are modelled in the task model, for example using the ConcurTaskTrees notation [4],
which orders these tasks in a hierarchical tree with time dependencies. Next,
this model is used to define the interaction between the user and the system.
An example of such a task is selecting an object in a virtual world. This is
one of the basic tasks in 3D multimodal user interfaces. In order to specify
this interaction, several high-level notations have been introduced: NiMMiT [5],
ICO [6], Interaction Object Graphs [7], InTml [8], ICon [9] and CHASM [10].
Besides being useful for discussion and for giving insights into the interaction,
these models can also be interpreted at runtime such that the interaction can be
prototyped. In all these notations we need to assign which devices/modalities
should be used during interaction and which events of these devices are used, for
example the choice between a spacemouse and voice input. In this paper we use
this flexibility with respect to interaction metaphors and devices as an example
to explain our approach to modelling context at the dialog level.
No consensus definition of context exists [11]. In this work we will look at context
as influenced by different factors: user, environment, services, and platform, as
defined in the CoDAMoS context ontology [12]. Dey's definition of context [11]
states that context is only relevant when it has an influence on the user's task.
Regarding the influence of context on the interaction with the system, we can
distinguish several distinct levels [13]. Two of these are important for the remainder
of this paper:
Task Level: context influences the tasks that are enabled in a certain state
of the user interface. A change of context may imply a change of active tasks.
Dialog Level: context influences which state is currently active in the dialog
model. Thus, dialog-level influence of context may cause a transition to
another state of the user interface.
When the assignment of a device/modality is static, the interaction description
has to be changed for any situation in the interaction technique where the
user might want to switch input devices/modalities. In order to make
this switching more dynamic, section 3 introduces a combined approach that
benefits from both task-level and dialog-level context influence.
The validation of our approach is presented through a case study in
section 4. The case study contains some crates that can be positioned by the
user. The user can navigate through the environment and select, move or rotate
these crates. How the interaction with the environment occurs depends on the
setup the user is in. While sitting at a desktop computer, interaction is done
by means of keyboard and mouse input, but when the user stands in front of a
large projection screen he uses a tracking glove in combination with voice input
to manipulate the scene.
State Transitions: Finally, when a task chain has been executed completely,
a state transition moves the diagram into the next state. A choice between
multiple state transitions is also possible, based upon the value of a certain
label.
2.2 Example
By means of figure 1 we give a brief overview of how the NiMMiT notation
should be interpreted. The start-state of this diagram responds to four different
events (called EVENT1 to EVENT4). When EVENT1 or EVENT3 is
fired, Taskchain1 will be invoked. Taskchain2, however, will only be invoked
if EVENT2 and EVENT4 occur at the same time, which is defined by the
melting pot principle [14].
When a task chain is invoked, all tasks within the chain are executed one
after the other (from top to bottom), using each other's output when necessary.
The output of a task can be stored in a label in order to be used by a task in
another task chain. In the example, the evaluation of Taskchain1 will trigger
the execution of Task1 and Task3, of which the last task, Task3, results in a
boolean value that will be stored in the label OutputT3.
When all tasks in the chain are successfully executed, the next state is determined
based on the exitlabel of the task chain. In Taskchain1 no exitlabel is
defined, so we return to the Start-state, waiting for new events to be fired.
As indicated, Taskchain2 will only be executed if EVENT2 and EVENT4
are fired simultaneously. During the execution of Taskchain2, the output of
Task3 (stored in the label OutputT3) is used as input for Task2, which
again results in a boolean value (stored in label OutputT2). Since OutputT2 is
used as the exitlabel for this task chain, the result of Task2 will determine the next
state of execution: if the value in OutputT2 is false, the next state will again
be the Start-state; if, however, the result of Task2 is true, the End-state is
reached and the execution of the interaction finishes.
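The execution semantics of this walkthrough (events invoke task chains, tasks pass values through labels, and an exit label selects the next state) can be condensed into a small interpreter. The following Java sketch is our own schematic reading of the notation; among other things it ignores the melting pot combination of simultaneous events.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of NiMMiT-style execution: an event fired in the current
// state invokes a task chain; tasks read/write shared labels; the chain's
// exit label (if any) picks the next state. Purely illustrative.
class NimmitSketch {
    interface Task { void run(Map<String, Object> labels); }

    record TaskChain(List<Task> tasks, String exitLabel,
                     Map<Boolean, String> nextStateByValue, String defaultNext) {}

    private final Map<String, Map<String, TaskChain>> transitions = new HashMap<>();
    private final Map<String, Object> labels = new HashMap<>();
    private String state = "Start";

    void on(String state, String event, TaskChain chain) {
        transitions.computeIfAbsent(state, s -> new HashMap<>()).put(event, chain);
    }

    void fire(String event) {
        TaskChain chain = transitions.getOrDefault(state, Map.of()).get(event);
        if (chain == null) return;                  // event not handled in this state
        chain.tasks().forEach(t -> t.run(labels));  // execute top to bottom
        if (chain.exitLabel() == null) {
            state = chain.defaultNext();            // e.g. back to "Start"
        } else {
            Boolean v = (Boolean) labels.get(chain.exitLabel());
            state = chain.nextStateByValue().get(v); // label value picks the state
        }
    }
}
```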
Fig. 2. (a) Combining modality constraints with the decision task notation (task model
and dialog model). (b) Merged dialog models.
or more modality categories per task and relate the selected categories with a
CARE relation [14]. This enables a runtime selection of a suitable modality with
respect to the available interaction techniques surrounding the user at a certain
moment in time.
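Such a runtime selection might, in the simplest case, look like the following sketch. It treats only an Equivalence-style relation, where any single available modality from an allowed category suffices; the names and this reduction of the CARE properties are our own simplifications.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Simplified runtime modality selection: a task lists allowed modality
// categories; the first available concrete modality from those categories
// is chosen. Only the CARE Equivalence case is sketched here.
class ModalitySelector {
    Optional<String> select(List<String> allowedCategories,
                            Map<String, List<String>> categoryMembers,
                            Set<String> availableModalities) {
        for (String category : allowedCategories) {
            for (String modality : categoryMembers.getOrDefault(category, List.of())) {
                if (availableModalities.contains(modality)) {
                    return Optional.of(modality);  // e.g. "speech" on the wall setup
                }
            }
        }
        return Optional.empty();                   // no suitable modality right now
    }
}
```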
However, this is not enough considering the scope of this paper. We would
like to take into account more information than the devices populating the VE
(application context) to select the appropriate modality. For example, in our case
study (which will be presented in more detail in section 4) we have two different
setups (external context) in which we would like to interact with the VE, and
both setups require different modalities/devices to be used.
One way to overcome this problem is to use the approach we discussed
in section 3.1. This is illustrated in figure 2(a). In this example, task t2 is divided
into two distinct tasks t2a and t2b. In this way the designer can attach distinct
constraints, m1 and m2, to the two tasks. As a result, at runtime the task that
will be active is chosen with respect to the context status (as shown in the
corresponding dialog model in figure 2(a)).
The approach described above works well when just a few tasks require a
context-aware selection of the appropriate modality. However, when many leaf
tasks require a context-aware modality, a lot of dialog models are generated and
used to describe the same interaction flow. Suppose a task model has n leaf
tasks where a context-aware selection of the appropriate modality is desired, and
each task is divided into two tasks by means of a decision task. When the dialog
models are extracted from the task specification, all possible context statuses
are taken into account, resulting in 2^n dialog models. Clearly, n does not need
to be very high before the number of dialog models becomes impractical. This is
because the actual purpose of the decision task is to specify different tasks in
different context statuses. In the scope of this paper, by contrast, the tasks remain
the same, and for this situation we propose context at the dialog level as a more
efficient approach. Note that this way of working can still be combined
with context modelling at the task level. This is for instance useful when really
different interaction metaphors are offered to the user that do not rely on highly
similar task chains. Usually such interaction metaphors are represented as leaf
nodes in the task tree, and are modelled with separate NiMMiT diagrams.
A solution to the above-mentioned problem of an exploding number
of dialog models is to combine the approach of making a distinction between
tasks at the task level with the approach of taking care of context at the dialog
level. In previous work [13] we showed how transitions in the dialog model can be
executed by a change of context information. A combination of the two distinct
approaches of context influence at the two levels can be seen as follows. Instead
of having two distinct dialog models, we can merge the two together, and make
a distinction only where a difference is made by a context status. This is
illustrated in figure 2(b). The two states containing t1 are merged into one state
in the same dialog model, but a choice is made as to which state will be reached by
means of the context status. In this way the decision at the task level is modelled
at the dialog level. In the next section we introduce this concept in the NiMMiT
notation.
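In code, the merged model amounts to a transition whose target state is chosen by the current context status instead of being duplicated across dialog models. The following is a minimal sketch with invented names:

```java
import java.util.Map;

// Sketch of a context-guarded transition in a merged dialog model:
// instead of one dialog model per context status, a single transition
// consults the current status to choose the successor state.
class MergedDialogModel {
    private final Map<String, String> targetByContext; // status -> next state
    private String currentState;

    MergedDialogModel(String start, Map<String, String> targetByContext) {
        this.currentState = start;
        this.targetByContext = targetByContext;
    }

    void onTaskCompleted(String contextStatus) {
        // e.g. "desktop" -> a state listening for keyboard/mouse events,
        //      "wall"    -> a state listening for glove/speech events
        currentState = targetByContext.getOrDefault(contextStatus, currentState);
    }

    String state() { return currentState; }
}
```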
Fig. 3. The context view of an abstract NiMMiT diagram; EVENT1 and EVENT3
were added to a specific context
4 Case Study
4.1 Setup
As mentioned earlier, we illustrate our approach through a case study in
which a simple scene can be manipulated. In the constructed VE it is possible
to select, move and rotate some crates on a plane. To validate our context
integration we created two setups for this application, between which the user can
switch at runtime. On one side we have a desktop environment in which the user
can interact by means of a keyboard and a mouse, and on the other side we have
a large wall projection in which interaction is done using a tracking glove and
voice input. The complete setup is depicted in figure 4. For a movie on the case
study and our approach, see
http://research.edm.uhasselt.be/lvanacken/Tamodia07/Tamodia07.wmv.
Fig. 4. Setup of the case study: a wall projection combined with a tracked glove and
speech, and a desktop setup with mouse and keyboard
4.2 Creation
The scene modelling application has been created with a more recent version of
CoGenIVE, a tool supporting the model-based design process depicted in figure 5.
For an overview of CoGenIVE and the supported design process we refer to [1,20].
The process starts with the creation of a ConcurTaskTree (CTT) describing the
different tasks that are available within the application (figure 6). In this case,
some initialisation is done in the Load-task and consequently the World Mode-task
becomes enabled. In this task, the user can navigate through the world and
manipulate (select, move and rotate) the objects within the environment.
The next step is to define the leaf-tasks of the CTT. The application tasks can
be mapped onto system tasks, but we have experienced that user interaction
is better expressed by means of a NiMMiT diagram. Since only the selection
and manipulation tasks are context-sensitive in this case study, we will focus on
these tasks in the remainder of the paper. More specifically, we will use the
Select-task to illustrate our approach. The NiMMiT diagram of the Select-task will
be briefly explained in the remainder of this section, and the next section will
clarify how context is integrated into the notation.
As shown in the diagrams in Figure 7, the Start-state responds to two events
(KEYBOARD.MOVE and KEYBOARD.BUTTON_PRESSED.0) in the desktop
setup (Figure 7(a)) and to two events (GLOVE.MOVE and SPEECH.SELECT)
in the wall setup (Figure 7(b)). The bottom part of both diagrams is the same
and can be seen in figure 8.
When either the keyboard or the glove fires a MOVE event, the right-hand
task chain is invoked and all tasks within the chain are executed: first the
UnhighlightObjects-task is executed, then the newly collided crates are detected,
and finally the HighlightObjects-task highlights the found objects
and stores these objects in the selected-label. When the chain has been fully
evaluated, the diagram returns to the Start-state.
In order to select the highlighted objects, the left-hand task chain should be
executed. To this end, the user should press a key on the keyboard (in the desktop
setup) or issue the speech command (in the wall setup). Once the SelectObjects-task
is executed, the diagram reaches the End-state and the interaction technique
finishes.
Fig. 8. NiMMiT Diagram of the Select Interaction with the context arrows
[...]
the task chains keep the same structure in different contexts. We augmented our
own high-level notation NiMMiT with contextual knowledge and illustrated our
approach using a case study in which a simple scene can be manipulated; for
a movie about this work, see
http://research.edm.uhasselt.be/lvanacken/Tamodia07/Tamodia07.wmv.
We learned that our approach is simple and effective and allows designers
to use the same interaction descriptions in different contexts.
In the future we plan to apply our approach to other context factors such
as the user. The user profile (and possibly also user actions) can indicate which
modalities are appropriate in certain interaction descriptions expressed in
(extended) NiMMiT diagrams.
Acknowledgments
Part of the research at EDM is funded by ERDF (European Regional Development
Fund), the Flemish Government and the Flemish Interdisciplinary Institute
for Broadband Technology (IBBT). Both the VR-DeMo (Virtual Reality: Conceptual
Descriptions and Models for the Realization of Virtual Environments)
project (IWT 030248) and the CoDAMoS (Context-Driven Adaptation of Mobile
Services) project (IWT 030320) are directly funded by the IWT, a Flemish subsidy
organization. The authors would like to thank Tim Tutenel for his valuable
contributions in the development of CoGenIVE.
References
1. Cuppens, E., Raymaekers, C., Coninx, K.: A model-based design process for interactive
virtual environments. In: DSVIS 2005. Proceedings of the 12th International
Workshop on Design, Specification and Verification of Interactive Systems,
Newcastle upon Tyne, UK, pp. 225–236 (2005)
2. Willans, J., Harrison, M.: A toolset supported approach for designing and testing
virtual environment interaction techniques. International Journal of Human-
Computer Studies 55, 145–165 (2001)
3. Kulas, C., Sandor, C., Klinker, G.: Towards a development methodology for augmented
reality user interfaces. In: Proc. of the International Workshop Exploring
the Design and Engineering of Mixed Reality Systems - MIXER 2004. CEUR
Workshop Proceedings, Funchal, Madeira (2004)
4. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications.
Springer, Heidelberg (1999)
5. Vanacken, D., De Boeck, J., Raymaekers, C., Coninx, K.: NiMMiT: A notation
for modeling multimodal interaction techniques. In: GRAPP 2006. Proceedings of
the International Conference on CG Theory and Applications, Setúbal, Portugal
(2006)
6. Navarre, D., Palanque, P., Bastide, R., Schyn, A., Winckler, M., Nedel, L., Freitas,
C.: A formal description of multimodal interaction techniques for immersive virtual
reality applications. In: Proceedings of the Tenth IFIP TC13 International Conference
on Human-Computer Interaction, Rome, Italy (2005)
7. Carr, D.: Interaction object graphs: An executable graphical notation for specifying
user interfaces. In: Formal Methods for Computer-Human Interaction, pp. 141–156.
Springer, Heidelberg (1997)
8. Figueroa, P., Green, M., Hoover, H.J.: InTml: A description language for VR applications.
In: Proceedings of Web3D 2002, Arizona, USA, pp. 53–58 (2002)
9. Dragicevic, P., Fekete, J.D.: Support for input adaptability in the ICON toolkit.
In: ICMI 2004. Proceedings of the 6th International Conference on Multimodal
Interfaces, State College, PA, USA, pp. 212–219 (2004)
10. Wingrave, C., Bowman, D.: CHASM: Bridging description and implementation of
3D interfaces. In: New Directions in 3D User Interfaces Workshop at IEEE Virtual
Reality, Bonn, Germany (2005)
11. Dey, A.K.: Providing Architectural Support for Building Context-Aware Applications.
PhD thesis, College of Computing, Georgia Institute of Technology (2000)
12. Preuveneers, D., Van den Bergh, J., Wagelaar, D., Georges, A., Rigole, P., Clerckx,
T., Berbers, Y., Coninx, K., Jonckers, V., Bosschere, K.D.: Towards an Extensible
Context Ontology for Ambient Intelligence. In: Markopoulos, P., Eggen, B., Aarts,
E., Crowley, J.L. (eds.) EUSAI 2004. LNCS, vol. 3295, pp. 148–159. Springer,
Heidelberg (2004)
13. Clerckx, T., Van den Bergh, J., Coninx, K.: Modeling Multi-Level Context Influence
on the User Interface, pp. 57–61. IEEE Computer Society, Los Alamitos (2006)
14. Coutaz, J., Nigay, L., Salber, D., Blandford, A., May, J., Young, R.M.: Four easy
pieces for assessing the usability of multimodal interaction: the CARE properties.
In: IFIP Conference Proceedings, pp. 115–120. Chapman & Hall, Sydney, Australia
(1995)
15. Pribeanu, C., Limbourg, Q., Vanderdonckt, J.: Task Modelling for Context-
Sensitive User Interfaces. In: Johnson, C. (ed.) Interactive Systems: Design, Specification,
and Verification, pp. 60–76 (2001)
16. Van den Bergh, J., Coninx, K.: Contextual ConcurTaskTrees: Integrating dynamic
contexts in task-based design. In: PerCom Workshops, pp. 13–17. IEEE Computer
Society, Los Alamitos (2004)
17. Paternò, F., Santoro, C.: One model, many interfaces. In: Kolski, C., Vanderdonckt,
J. (eds.) Computer-Aided Design of User Interfaces III, vol. 3, pp. 143–154. Kluwer
Academic, Dordrecht (2002)
18. Clerckx, T., Luyten, K., Coninx, K.: DynaMo-AID: A Design Process and a Runtime
Architecture for Dynamic Model-Based User Interface Development. In:
Bastide, R., Palanque, P., Roth, J. (eds.) Engineering Human Computer Interaction
and Interactive Systems. LNCS, vol. 3425, pp. 77–95. Springer, Heidelberg
(2005)
19. Clerckx, T., Vandervelpen, C., Coninx, K.: Task-Based Design and Runtime Support
for Multimodal User Interface Distribution. In: Engineering Interactive Systems
2007: EHCI/HCSE/DSV-IS (2007)
20. De Boeck, J., González Calleros, J.M., Coninx, K., Vanderdonckt, J.: Open issues
for the development of 3D multimodal applications from an MDE perspective. In:
MDDAUI Workshop 2006, Genova, Italy (2006)
21. Clerckx, T., Luyten, K., Coninx, K.: Generating Context-Sensitive Multiple Device
Interfaces from Design, pp. 281–294. Kluwer, Dordrecht (2004)
Practical Extensions for Task Models
Daniel Sinnig1, Maik Wurdel2, Peter Forbrig2, Patrice Chalin1, and Ferhat Khendek1
1 Faculty of Engineering and Computer Science,
Concordia University, Montreal, Quebec, Canada
{d_sinnig, chalin, khendek}@encs.concordia.ca
2 Department of Computer Science,
University of Rostock, Germany
{maik.wurdel, pforbrig}@informatik.uni-rostock.de
1 Introduction
In the domain of human-computer interaction (HCI), task analysis is an effective
requirements elicitation device, as it helps to gain an understanding of how people
currently work. According to Johnson, the role of task analysis is to provide an
idealized, normative model of the tasks users carry out to achieve goals [1].
In recent years, with the advent of model-based UI development [2-5], task models
are not only used as analysis models; they are used as a specification of the
envisioned user interface as well. Based on a task model specification, more concrete
design specifications (e.g. dialog model [5], presentation model [2]) are successively
derived until the implementation level has been reached. Within such a model-based
development lifecycle, purely idealised task models, as proposed by Johnson, are
insufficient, since human errors and system errors are not taken into account. Instead,
task specifications including failure and error cases are needed in order to obtain a
complete specification of the user interface.
Unfortunately, the construction of task specifications remains a challenging and
cumbersome activity [6]. Based on our experience of working with task models,
we discovered that the current operator set is not sufficient to effectively describe task
specifications. For example, CTT, one of the most popular task modelling
notations, does not have an operator defining the premature termination of a scenario
(whether due to human or system error). Error handling with the traditional
operator set results in an explosion of complexity, which diminishes the readability of
the task model [6]. Moreover, from a structural point of view, task models are defined
as monolithic task trees. Such an approach does not scale well for applications of
medium and large size.
In order to overcome these shortcomings, we propose a set of practical extensions
for task models. The extensions are categorized in three different dimensions: (1)
extensions to the operator set, (2) structural extensions, and (3) extensions in support
of cooperative task models. The first directly addresses the problem of creating a
complete task specification of the UI by introducing additional temporal operators,
namely stop, instance iteration, non-deterministic choice, and deterministic choice. In
the second set of extensions we propose structural enhancements for task models: a
task model is no longer defined as a monolithic task tree but in a modular fashion,
where a task tree may include references to other subordinate task trees. Moreover,
we define a specialization relation between task models and propose a high-level
notation called Task Model Diagram. The third dimension addresses the creation of
task models for cooperative applications (e.g. multi-user smart environments). In
particular, we define the concept of a cooperative task model, within which the
execution of a task of one model may enable or disable the execution of a task in a
different task model.
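To make the proposed operator set concrete, the sketch below models task expressions with the four new operators as an algebraic data type in Java. This is our reading of the proposal, not the authors' formalization; the intended semantics is only hinted at in the comments.

```java
// Sketch of task expressions extended with the proposed operators.
// One subtype per construct; semantics is only hinted at in comments.
sealed interface TaskExpr permits Action, Stop, InstanceIteration,
                                  NonDetChoice, DetChoice {}

record Action(String name) implements TaskExpr {}

// stop: unary; prematurely terminates the current scenario
// (e.g. on a human or system error), regardless of pending siblings.
record Stop(TaskExpr body) implements TaskExpr {}

// instance iteration: unary; multiple concurrent instances of the body.
record InstanceIteration(TaskExpr body) implements TaskExpr {}

// choice resolved internally by the system: the user cannot tell
// in advance which branch will be offered.
record NonDetChoice(TaskExpr left, TaskExpr right) implements TaskExpr {}

// choice made deliberately by the user between two offered branches.
record DetChoice(TaskExpr left, TaskExpr right) implements TaskExpr {}
```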
The structure of the remainder of this paper is as follows. Section 2 briefly reviews
the task modelling notation CTT and presents relevant related work. In Sections 3 and
4 we propose extensions to the operator set and structural enhancements, respectively.
Section 5 presents a new concept for collaborative task models. Finally, in Section 6,
we draw conclusions and provide an outlook on future research.
operators together with their interpretation can be found in [10]. We note that most
binary operators (except for suspend/resume) have similar (yet not semantically
identical) counterparts in LOTOS [12].
In order to support the specification of collaborative (multi-user) interactive
systems, CTT has been extended to CCTT (Collaborative ConcurTaskTrees) [11]. A
CCTT specification consists of multiple task trees. One task tree acts as a
coordinator and specifies the collaboration and global interaction between the involved
user roles. The individual tasks of each user role are, furthermore, specified by
separate task trees which contain special activity nodes called connection tasks.
Nodes of this type exhibit temporal dependencies to connection tasks of other task
trees. These temporal dependencies are described in the coordination task model. In
this paper we further extend CCTT by taking into account that a role is typically
fulfilled by several users. For each user we create a copy (instance) of the
corresponding role task model. At runtime the various instances of the task model are
executed concurrently. Synchronization points between instances are specified in
TCL (task constraint language). A coordinator task model, as specified in CCTT, is
not needed.
In recent years various attempts have been made to extend the CTT notation. In [13, 14]
Klug and Dittmar propose additional modelling constructs, namely input/output ports
and object dependencies, respectively. Luyten [15] introduces a new node type
(the decision node) which allows task models to be augmented with context-of-use
dependencies. Forbrig et al. [16] propose a mechanism which allows the definition of
temporal relationships between arbitrary tasks of a task tree; this is in contrast to
CTT, where temporal relationships are limited to sibling tasks only.
In order to overcome CTT's inability to specify task failures and error cases,
Bastide and Basnyat introduce the concept of error patterns [6]. In this paper, we
tackle the same limitation but instead of using error patterns, we define a new
temporal operator stop which denotes a premature termination of the current scenario.
In order to define a consistency relation between use cases and task models, Sinnig et
al. [17] suggest that a distinction be made between choices (of two tasks) that happen
non-deterministically vs. deterministically from the user's point of view. In this work,
we introduce a corresponding temporal operator for both kinds of choice.
In the next three sections, we present our proposed extensions to task models. The
extensions are organized into the following categories: extensions to the operator set,
structural enhancements, and extensions in support of cooperative task modelling.
Task models were originally introduced as analysis artefacts, describing how a user
achieves a goal. As such, task models can be seen as idealised descriptions of how the
user accomplishes involved tasks; failure of task execution, errors and their
consequences were not directly taken into account. Fig. 1 depicts such an idealised
description of a login task included in a secure mail system. The task model is
idealised as the possibility of login failure is not specified.
In recent years, with the advent of model-based UI development, task models have
not only been used as analysis models, but also as a requirements specification of the
user interface. In model-based UI development frameworks, the task specification
typically serves as a starting point for the derivation of more concrete design models
such as the dialog and the presentation model [2-5]. Task models used in such a
context must not only capture the case of successful task completion but must also
cope with failure and error scenarios.
Evidently a purely idealised modelling approach is suitable at the analysis phase,
but it is incomplete at the design level, since possible interactions between the user
and the system are not captured. In this paper, we argue that with the current set of
CTT operators the creation of non-idealised task models is impractical. Workarounds
are cumbersome and require a high degree of duplication of tasks and sub-tasks.
In what follows, we propose a set of additional temporal operators that ease the
modelling of design-task models. Specifically, we present two unary operators (stop
and instance iteration) and two binary operators (deterministic choice and non-
deterministic choice).
As noted above, CTT provides no operator denoting the premature termination of a
scenario. The only workaround for this shortcoming is to (artificially) create a high-
level choice between the scenarios that terminate prematurely and the scenarios that
terminate normally.
We therefore propose the introduction of a unary Stop operator. It signifies the
unsuccessful termination of a task. A task flagged with the Stop operator cannot
enable any tasks. The execution of a Stop task inevitably leaves the super-ordinate
tasks incomplete which eventually leads to the premature termination of a scenario.
Syntactically, stop is represented by a STOP sign hovering above the affected task.
Fig. 2 illustrates a non-idealised task model of the Secure Mail Client. It is more
detailed than the idealised task model of Fig. 1. In particular, the tasks Provide
Authentication and Provide Feedback have been refined or modified. The former
takes into account that the user may Cancel the Login task, whereas the latter takes
into account that the login task can fail. Both cases lead to the premature termination
of a scenario as the subsequent tasks Provide Feedback and Use Mail Client
respectively will never become enabled.
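To make the intended semantics concrete, the following sketch (our own Python illustration; the execution engine is an assumption, not part of CTT) shows how a Stop-flagged task cuts a scenario short so that no subsequent task becomes enabled:

    class Task:
        def __init__(self, name, stop=False):
            self.name = name
            self.stop = stop              # flagged with the unary Stop operator

    def run_scenario(sequence):
        # Execute tasks in order; a Stop task ends the scenario prematurely,
        # so no subsequent task can become enabled.
        executed = []
        for task in sequence:
            executed.append(task.name)
            if task.stop:
                return executed, "terminated prematurely"
        return executed, "completed normally"

    login    = Task("Provide Authentication")
    cancel   = Task("Cancel", stop=True)      # carries the STOP sign in Fig. 2
    feedback = Task("Provide Feedback")

    print(run_scenario([login, feedback]))    # normal completion
    print(run_scenario([cancel, feedback]))   # Provide Feedback never enabled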
In model-based UI development, task models capture the behaviour of the UI. The
system is viewed at a level of abstraction which focuses on input-output interactions
and omits internal system operations. These internal system operations are irrelevant
for UI design. Opting for such a level of abstraction may lead to apparent non-
determinism in the task-model specification. For example, in Fig. 2, the execution of
the Provide Authentication task may lead to two different system states. In one state
the system provides Success Feedback, whereas in the other the system provides
Failure Feedback. Since internal system states are not part of the model, the choice
between the two alternatives, Failure Feedback and Success Feedback, is made
internally by the system. The user does not participate in the decision making and
views the choice as non-deterministic. In contrast to this, a choice between tasks that
are explicitly offered to the user (e.g. submitting or cancelling the login) is made by
the user and is therefore deterministic from his or her point of view.
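The difference can be pictured operationally. In the following sketch (ours, not part of the notation), []N is resolved by internal system state the user cannot observe, while []D is resolved by an explicit user decision; the state and task names are illustrative assumptions:

    def choice_N(alternatives, internal_state):
        # Non-deterministic choice: internal state decides; the user
        # merely observes the outcome.
        return alternatives[0] if internal_state == "auth_ok" else alternatives[1]

    def choice_D(alternatives, user_pick):
        # Deterministic choice: the user participates in the decision
        # and selects the branch.
        return alternatives[user_pick]

    feedback  = choice_N(["Success Feedback", "Failure Feedback"],
                         internal_state="auth_ok")
    next_task = choice_D(["Submit Mail", "Dismiss Mail"], user_pick=0)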
The unary CTT Iteration operator (*) specifies that a task may be re-executed after
completion. The constraint of task completion before another iteration takes place
proves to be too rigid for certain tasks. For illustration purposes let us consider the
example of writing e-mails using a mail client. Fig. 3 depicts that in order to send an
e-mail the user has to sequentially perform a number of sub-tasks. After he decides to
write an e-mail the system displays the input form and the user can compose the e-
mail. Finally the user either submits the e-mail or dismisses it. Furthermore, following
the paradigm of modern mail clients, the user is allowed to write several e-mails
concurrently; i.e. he may interrupt the composition of the current e-mail in order to
start with a new e-mail. In other words, another instance of the Send Mail task may
be executed before the execution of the current instance has terminated.
In CTT it is not possible to directly specify such a form of instance iteration. Due
to its frequent applicability (e.g. writing and reading mails, managing waiting
calls, browsing websites, etc.), we therefore propose the definition of a new unary
operator, Instance Iteration.
Definition: (Instance Iteration). The unary operator Instance Iteration (#) is defined
as follows: A# = [A ||| A#].
The behaviour of the operator is optional and is specified as the concurrent execution
of the operand task and a recursive application of the instance iteration. In
Fig. 3, the tasks Send Mail and Read Mail are defined using the Instance
Iteration operator. For the sake of conciseness, only the Send Mail task has been
expanded. As an execution example, let us assume that there are two instances of the
Send Mail task that are performed concurrently. Then we can extract the following
possible trace of sub-tasks: <<Select Compose Mail(1), Display Input Form(1), Select
Compose Mail(2), Display Input Form(2), Compose Mail(1), Compose Mail(2), Dismiss
Mail(1), Submit Mail(2)>>. We have used superscripts to distinguish between the tasks
of the two iterations.
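A minimal sketch of this trace semantics (the bookkeeping is ours; the superscripts of the text become explicit instance identifiers, and the final sub-task may be either Submit Mail or Dismiss Mail):

    SEND_MAIL = ["Select Compose Mail", "Display Input Form",
                 "Compose Mail", "Terminate Mail"]   # Submit or Dismiss

    def valid_interleaving(trace):
        # Per instance, sub-tasks must occur in order; different
        # instances may interleave freely (the ||| in A# = [A ||| A#]).
        progress = {}                        # instance id -> steps completed
        for task, instance in trace:
            expected = SEND_MAIL[progress.get(instance, 0)]
            if task != expected and not (expected == "Terminate Mail" and
                                         task in ("Submit Mail", "Dismiss Mail")):
                return False
            progress[instance] = progress.get(instance, 0) + 1
        return True

    trace = [("Select Compose Mail", 1), ("Display Input Form", 1),
             ("Select Compose Mail", 2), ("Display Input Form", 2),
             ("Compose Mail", 1), ("Compose Mail", 2),
             ("Dismiss Mail", 1), ("Submit Mail", 2)]
    assert valid_interleaving(trace)         # the trace from the text is legal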
4 Structural Enhancements
In this section we propose two structural operators and a high-level notation for task
models. Both result from a research project, which had as its goal the cross-
pollination of use-case models and task models. In particular, we define modular task
models and a specialization relationship between task models. The former was found
useful in reducing the complexity of task models, whereas the latter helps ensure
consistency across multiple UIs. Finally, we introduce the graphical notation Task
Model Diagram which can be used to visualize the high-level structure of task
models.
With modular task models, the global task model is constructed in a series of steps.
First, the designer identifies a set of user-goal tasks that directly
address a goal of the user. Next, task models for each of these user-goal tasks are
specified. Note that this step could be carried out concurrently by a team of UI
designers. Finally, the various user-goal task models are unified within a single global
task model.
With the advent of ubiquitous and mobile devices there has been a shift towards the
development of multiple user interfaces. That is, the same application can be accessed
through different user interfaces supporting different devices (e.g. laptops, desktops,
palmtops, mobile phones, etc.). In such a context it is important to ensure consistency
between the various interfaces. Consistency can be achieved on different levels
ranging from the way tasks are supported by the system to a consistent presentation
and Look & Feel across the different UIs. One way to accomplish the former is to
develop the underlying task models of the various UIs based on a common coarse-
grained task description.
For that purpose we propose a specialization relation between task models. It links
a sub task model to its super task model such that the former is a specialization of the
latter. The specialization is possible in two different ways: (1) structural refinement:
i.e. breaking previously atomic tasks into sub-tasks; (2) behavioural refinement: i.e.
restricting the set of possible scenarios.
Table 2. Valid task type refinements

  Task type in super task model | Valid task type(s) in sub task model
  ------------------------------+------------------------------------------
  abstract                      | abstract, interaction, user, application
  interaction                   | interaction
  user                          | user
  application                   | application
Structural Refinement. The sub task model may contain more information than its
super task model. This can be achieved by further refining the action tasks (tasks at
the leaf level) of the super task model. The specialization is deemed valid if the type
refinements of Table 2 are preserved. In essence, while abstract tasks can be arbitrarily
refined, interaction, user and application tasks must only be refined by subtasks of the
same type.
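The rule of Table 2 is small enough to state directly in code; a sketch (the encoding is our own):

    VALID_REFINEMENTS = {
        "abstract":    {"abstract", "interaction", "user", "application"},
        "interaction": {"interaction"},
        "user":        {"user"},
        "application": {"application"},
    }

    def valid_structural_refinement(super_type, sub_types):
        # Every subtask type must be permitted for the super task's type.
        return all(t in VALID_REFINEMENTS[super_type] for t in sub_types)

    assert valid_structural_refinement("abstract", ["user", "application"])
    assert not valid_structural_refinement("user", ["interaction"])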
Behavioural Refinement. We define behavioural refinement in such a way that a sub
task model does not allow more scenarios than the original task specification. The sub
task model may even further restrict the set of scenarios. Such a specialization can be
achieved by applying one or many of the following five restrictions:
1. A deterministic choice ([]D) is restricted to either alternative. (Note that a non-
deterministic choice ([]N) cannot be further restricted.)
2. An optional task [T] becomes obligatory or is removed. (Note that the optional
operator can be defined as [T] = T []D ε, where the symbol ε is a
placeholder for the empty task.)
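The essence of behavioural refinement can be captured as scenario-set inclusion. A minimal sketch, assuming scenarios have already been extracted as sequences of task names at the super model's level of abstraction:

    def behavioural_refinement(sub_scenarios, super_scenarios):
        # The sub model must not allow any scenario the super model forbids.
        return set(sub_scenarios) <= set(super_scenarios)

    super_s = {("Login", "Use Mail Client"), ("Login", "Cancel")}
    sub_s   = {("Login", "Use Mail Client")}   # []D restricted to one branch
    assert behavioural_refinement(sub_s, super_s)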
[Figure: Task Model Diagram of the Secure Mail Client, showing «include» relationships between task models.]
2) In contrast to the task model of Fig. 2, Login is factored out of the Secure Mail Client
model (for the sake of increased modularity).
Within the software lifecycle, task models related by virtue of specialization are
typically built top-down; i.e., first the generic task models are created and
then further refined. This corresponds to our assumption of Section 4.2 that
specialization can be used to ensure consistency in terms of supported tasks across
multiple user interfaces. As a side-effect, we also envision that the specialization
relation between task models will contribute to the creation of task-model libraries,
where recorded task models can be further refined to rapidly and conveniently create
specialized task models.
In a cooperative setting, a role is typically fulfilled by several actors, whose
individual activities in turn are captured in different instance task models. In essence,
we define a cooperative task model as a tuple consisting of a set of roles, a set of task
specifications (one for each role), a set of actors (where each actor belongs to a
certain role) and a set of global constraints. Similar to CCTT, our extension requires
the creation of a separate task model for each role involved in the interaction. The
role task models for the aforementioned ubiquitous meeting setting are portrayed in
Table 3.
Next, and in contrast to CCTT, we create, for each actor, an individual copy
(instance) of the respective role task model. We denote this process of assigning a
task model to an actor as instantiation of a role-task model. It is important to note
that our approach is based on the assumption that, in limited and well-defined
domains, the behaviour of an actor can be approximated through its role.

[Table 3: Role task models for the roles Chairman, Presenter, and Listener.]
In a non-cooperative model the enabled tasks after an execution trace are easily
determined by examining the temporal relationships defined within the model. Within
a collaborative task model, however, the global constraints additionally have to be
taken into account. A task T is defined to be enabled, if the following holds: T is
enabled according to the local temporal relationships and T is enabled by virtue of the
global constraints. At this point it is important to note that the semantics of
collaborative task models allows for the possibility of deadlocks due to conflicting
constraints. Intuitively a total deadlock occurs when all enabled tasks of all task
model instances are blocked by TCL constraints. A deadlock is partial, if only a
subset of the instance task model is affected. For example, a global constraint that
makes use of the Choice operator causes a deadlock if the operand tasks are
obligatory in the corresponding instance task models.
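A sketch of these definitions (the model encoding and the two predicates are our own simplification; the TCL constraints are abstracted as a predicate):

    def total_deadlock(instances, locally_enabled, tcl_allows):
        # instances: dict mapping an instance id to its task list.
        # A task is enabled iff it is locally enabled AND allowed by the
        # global TCL constraints; a total deadlock means every locally
        # enabled task of every instance is blocked by a constraint.
        blocked = 0
        for inst, tasks in instances.items():
            for task in tasks:
                if locally_enabled(inst, task):
                    if tcl_allows(inst, task):
                        return False          # at least one task can still run
                    blocked += 1
        return blocked > 0

    meeting = {"chair-1": ["Open Meeting"], "presenter-1": ["Give Talk"]}
    local = lambda inst, task: True
    tcl   = lambda inst, task: False          # conflicting constraints block all
    assert total_deadlock(meeting, local, tcl)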
Finally, we would like to mention that our approach is based on the assumption that
the behaviour of an actor can be approximated through its role. In such a case we
argue that modelling and simulating smart environments by using cooperative task
models is highly beneficial for the development of proactive assistance. On the one
hand, cooperative task modelling helps establish a thorough understanding of the
requirements of the envisioned system. On the other hand, the cooperative task
specification can serve as input for the derivation of probabilistic models, such as
Dynamic Bayesian Networks, which are widely used in the research field of proactive
assistance in ambient environments [20].
References
[1] Johnson, P.: Human Computer Interaction: Psychology, Task Analysis and Software
Engineering. McGraw-Hill, London (1992)
[2] Berti, S., Correani, F., Mori, G., Paternò, F., Santoro, C.: TERESA: A Transformation-
based Environment for Designing and Developing Multi-Device Interfaces. In:
CHI 2004 Extended Abstracts, Vienna, Austria, pp. 793–794 (2004)
[3] Molina, P., Trætteberg, H.: Analysis & Design of Model-based User Interfaces. In:
Proceedings of CADUI 2004, Funchal, Portugal, pp. 211–222 (2004)
[4] Paternò, F., Santoro, C.: One Model, Many Interfaces. In: Proceedings of CADUI 2002,
Valenciennes, France (2002)
[5] Sinnig, D., Forbrig, P., Seffah, A.: Patterns in Model-Based Development. In: Workshop
'Software and Usability Cross-Pollination: The Role of Usability Patterns',
Switzerland (2003)
[6] Bastide, R., Basnyat, S.: Error Patterns: Systematic Investigation of Deviations in Task
Models. In: Coninx, K., Luyten, K., Schneider, K.A. (eds.) TAMODIA 2006. LNCS,
vol. 4385, pp. 109–122. Springer, Heidelberg (2007)
[7] Card, S., Moran, T.P., Newell, A.: The Psychology of Human-Computer Interaction.
Lawrence Erlbaum Associates, Hillsdale (1983)
[8] van der Veer, G., Lenting, B., Bergevoet, B.: GTA: Groupware Task Analysis - Modeling
Complexity. Acta Psychologica 91, 297–332 (1996)
[9] Annett, J., Duncan, K.D.: Task Analysis and Training Design. Journal of Occupational
Psychology 41, 211–221 (1967)
[10] Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer,
Heidelberg (2000)
[11] Mori, G., Paternò, F., Santoro, C.: CTTE: Support for Developing and Analyzing Task
Models for Interactive System Design. IEEE Trans. Softw. Eng. 28, 797–813 (2002)
[12] ISO 8807: Information Processing Systems - Open Systems Interconnection - LOTOS - A
Formal Description Technique Based on the Temporal Ordering of Observational
Behaviour (1988)
[13] Klug, T., Kangasharju, J.: Executable Task Models. In: Proceedings of the International
Workshop on Task Models and Diagrams (TAMODIA 2005), Gdansk, Poland,
pp. 119–122 (2005)
[14] Dittmar, A., Forbrig, P., Heftberger, S., Stary, C.: Support for Task Modeling - A
Constructive Exploration. In: Bastide, R., Palanque, P., Roth, J. (eds.) Engineering
Human Computer Interaction and Interactive Systems. LNCS, vol. 3425, pp. 59–76.
Springer, Heidelberg (2005)
[15] Luyten, K.: Dynamic User Interface Generation for Mobile and Embedded Systems with
Model-Based User Interface Development. PhD Thesis, University of Limburg (2004)
[16] Forbrig, P., Dittmar, A., Müller, A.: Adaptive Task Modelling: From Formal Models to
XML Representations. In: Multiple User Interfaces, pp. 169–192 (2004)
[17] Sinnig, D., Chalin, P., Khendek, F.: Consistency between Task Models and Use Cases. In:
Proceedings DSV-IS 2007, Salamanca, Spain (2007)
[18] Cockburn, A.: Writing Effective Use Cases. Addison-Wesley, Boston (2001)
[19] Patterson, D., Liao, L., Fox, D., Kautz, H.: Inferring High-level Behavior from Low-
Level Sensors. In: Dey, A.K., Schmidt, A., McCarthy, J.F. (eds.) UbiComp 2003. LNCS,
vol. 2864, Springer, Heidelberg (2003)
[20] Franklin, D., Budzik, J., Hammond, K.: Plan-based Interfaces: Keeping Track of User
Tasks and Acting to Cooperate. In: Proceedings of IUI 2002, pp. 79–86. ACM,
New York (2002)
[21] OMG: Unified Modeling Language: Superstructure (last updated 2004; accessed
June 2007), available from http://www.omg.org/docs/formal/05-07-04.pdf
Towards Developing Task-Based Models of Creativity
1 Introduction
There are a number of reasons why it is important to develop user interfaces and
systems that both enhance, and provide informed and principled support for creativity,
and creative processes. One reason is wealth creation. Both industry and governments
are conscious of the need for creativity and innovation in our daily lives, and
creative products sell. A further reason is that creative problem solutions and the
resulting products typically involve pleasure and surprise, humour and fun, and
thereby improve the quality of life. Moreover, studying the nature of creativity
underpins a primary motivation to generate more creative artefacts. In order to
achieve this goal, it is necessary to identify theories, methods and tools across
disciplines to provide both a multidisciplinary and interdisciplinary perspective. If
this goal were to be achieved then this would serve as a potential leading edge for
design, providing that technological support for creativity is well designed. Thus the
role for HCI is paramount.
There is an opportunity to go beyond the current situation by exploiting the best
aspects of research in the different disciplines, but there are also obstacles to
advancing the state of the art. Different foci, concepts, semantics and language
(terms, labels, and connotations) make working across disciplines a challenge.
Additionally, modelling creativity is complex due to the assumed 'magical' or
'creation out of nothing' characteristic that is frequently referred to in communication
about creativity, but which actually may not be reflected in either reality, or in the
activities engaged in.
Creativity has a long history of being modelled; therefore, making sense of the
status of the different models can only be a long-term goal. It would indeed provide a
significant new contribution to the literature to be able to relate the different models
with respect to the phenomena modelled, and the uses to which the resulting models
are put. The focus of the research we are currently undertaking is to understand the
different models that exist within and across disciplines, relate those models by
identifying commonalities and differences, and finally, identify and overcome what
might constitute gaps in the research.
Consequently, we are pursuing both long-term and short-term goals. The long-term
research goals are to:
i) identify the different cognitive and behavioural structures, mechanisms and
processes of creativity for both individuals and groups across a range of tasks, within
different contexts and with different resources, opportunities and constraints;
ii) establish a way of conceptualizing or framing the different models and the
mappings between them; this means developing an analytical structure which
represents the cognitive and behavioural constituents outlined in i);
iii) consider how current models of creativity relate to the proposed analytical
structure;
iv) investigate the role task models might play in providing further understanding
and explanation of creative activities; and,
v) investigate the role task models might play in informing creativity support tool
design.
Our short-term goals for this paper, given space constraints, are to outline selective
models of creativity, distinguish between them, and briefly consider how they can be
related. Additionally, the inherent problems experienced in both engaging in this
activity, and in applying some of the models, will be outlined. We will then consider the
potential benefits, and the role that task-based models with a theoretical underpinning (in this
case, Task Knowledge Structures [8]) might play in advancing the state of the art.
The paper is structured as follows: Section 2 selectively reviews models of
creativity; Section 3 makes distinctions between various models and outlines
problems with modelling in general, in this area. This section also discusses the
problems in relating the different models discussed in the paper, and in their
application. Section 4 presents a proposal for moving research forward, and the role
task-based models of creative tasks might play in this endeavour. Section 5 concludes
the paper.
We are not able to review all approaches in this paper; a more comprehensive review can be
found in [6]. Consequently, in this section our goal is to briefly refer to a number of
models of creativity and creative processes which are well-known, acknowledged and
frequently cited.
The creative process has a long history of being modelled as a series of stages. In
1926, Wallas [17] outlined a model incorporating four creative stages: preparation,
incubation, illumination, and verification. The preparation stage involves
understanding the problem, and searching for solutions through exploration of
conceptual spaces (see also [2]). The preparation stage is considered to involve hard
work and is followed by a more relaxed stage of incubation where people filter
information from conscious awareness to the subconscious to be used for creative
insight. Creative insight is thought to come in the illumination stage. Tentative
solutions which evolved during the illumination stage are then subjected to a
verification phase that involves testing, elaborating and developing.
Whilst Wallas's model is still widely cited, the stages have frequently been
modified. Kneller [9] introduced a stage before preparation called 'first insight'. Yet
other researchers describe the creative process as a generative brainstorming stage
followed by an evaluative focusing stage ([4]; [5]).
The stages however are neither as separate nor sequential as some creative process
models seem to suggest, but are interdependent and iterative. For example, idea
evaluation frequently leads to reformulation of the initial problem, making the process
cyclical [5]. Similarly, [10] defines creativity as a cycle of re-representations used for
conceptual exploration.
One often-cited model is provided by Amabile [1], who proposed a staged account of
the individual creative process, comprising problem or task identification, preparation,
response generation, and response validation.
This model has provided the basis for research in other disciplines, contexts and
spheres of influence.
There are also creative process models within HCI that build upon previous
models, such as that of [17]. Shneiderman [15], for example, offers a four-phase
creativity framework (genex) that builds on past models but also departs from them.
Its four phases are:
Collect: Learn from previous works stored in libraries, the web etc;
Relate: Consult with peers and mentors at early, middle and late stages;
Create: Explore, compose, and evaluate possible solutions;
Donate: Disseminate the results.
The four phases are not intended as purely sequential, but as iterative. Shneiderman
[15] then proposes eight activities, occurring during the genex phases, that could be
supported by computer tools.
Fig. 1. Genex phases and the eight iterative activities typically occurring within them
This is clearly not an exhaustive list and is an initial step in considering how group
reflection on creative tasks could be supported. These features necessarily need to be
elaborated to incorporate CSCW and HCI requirements for groupware.
In this section we have very briefly referred to models of creativity and creative
processes from psychological, HCI and architectural/design perspectives. These
instances demonstrate the difficulty of relating and making correspondences
between the different approaches and models. However, it is necessary to establish
these relationships and mappings given the divergent sets of requirements generated
for supporting creative processes. The next section describes distinctions between a
selection of the models referred to in this section, and notes the problems in applying
the models.
For the purposes of this paper, the models can be primarily distinguished by the
different levels of abstraction, and generality, of the phenomena modelled. Some
high-level models describe general psychological capabilities such as reflection
[13,14], re-representation and hypothesis testing [10], not limited to creativity. Other
models describe rather more specific aspects of creative stages, such as the activities
of reviewing alternatives [1], whilst yet others describe more general creative
processes.
There are also a number of issues that have arisen as a result of our application of
existing creative process models. In this paper we will highlight problems experienced
in applying models to the results of a small-scale study that involved two dissimilar
tasks in two distinct domains, with different types of support [3]. As a consequence of
this research, it is clear that three issues of importance need to be addressed by
creative process modellers.
The first issue is concerned with devising a means to perceive and interpret,
through whatever means, the creative activities and thought processes of study
participants. This is not just an issue for creativity and HCI, but also for the
behavioural sciences, and consequently there is a relevant literature with the
possibility of exploitation.
The second issue is to understand and map the different levels of abstraction within
and between models. For example, there is a difference between the generic 'Consult'
phase of [15], the more cognitive phase of 'thinking by free association', and
the 'Response Generation' phase of [1]. Currently, differences in level of
abstraction are not attended to, but they are important when considering tool support.
The third issue relates to mapping observed or reported activities and processes to
model stages. Shneiderman [15] has made a very real effort to achieve this in his
genex framework.
However, there are a number of sub-issues related to any mapping process.
In addition, there are mappings between these layers, which we have yet to fully
develop theoretically, and identify empirically within our existing range of case
studies. This will include both top down and bottom up research activities. It is
important to do this in order that we can produce more than rich descriptions of
behaviour. Explicitly, we are claiming that stipulating the nature of the mappings
allows us to generate prescriptions, predictions and explanations that currently do not
exist. Once the mappings are fully developed, by observing aspects of behaviour we
will be able to traverse through the layers of the analytical structure, enabling us to
identify goals and furnish rationales for implicit and explicit behaviour, and also make
assumptions about the cognitive activities needing to be supported for goals to be
accomplished. Eventually, this will provide a well-informed and principled means to
develop creativity support tools.
Attempting to fit current creative process models within this analytical structure is
the next step. However, this is a complex undertaking because some models only
have representation at one layer whilst others clearly exist at two or three, but not all,
layers. Moreover, some modellers emphasize either higher or lower level layers that
clearly have implicit relationships, often not postulated, with other layers.
Taking specific examples, reflection, as in Schön's [12,13] model, fits within the
cognitive layer, whilst the activities of designing, and possibly seeing-as and seeing-
that, within this model fit within the activity layer. Boden's [2] work on conceptual
spaces fits within the cognitive layer, but there are clear implications for behaviour.
Amabile's [1] model of developing alternatives fits within the activity layer but has
implications for problem solving and reflection at the cognitive level, and so on.
Finally, Shneiderman's [15] Collect, Relate, Create, Donate creative phases exist at
a number of levels. Collect and Create are activities within the activity layer with
appropriate behaviour at the behavioural layer. The Relate and Donate phases include
modelling at the task and behaviour levels. However, it is clear that these two phases
also have some social, cultural function within the field or domain, which needs to be
taken into account in the next version of the analytical structure that will also
accommodate collaborative creativity.
An initial and possibly cursory analysis of the fit of models within the analytical
structure has suggested to us the paucity of research explicitly discussing purposes,
needs, and goals and how these might be fulfilled. Specifically, as discussed in
section 3, we have found very few attempts to model the causal links between
activities and behaviour that demonstrate priming, cueing, or following-on
relationships. One possible conclusion is that the task layer for most creative models
either does not exist, is not specified in enough detail, or the mappings between this
and the other layers are not sufficiently defined to be able to derive any conclusions
about what creative performers intended to do, what they will do next, and why.
Basically we do not know the derivation of activities and/or behaviours.
A model at the task layer needs to capture when, how and why
activities occur, the nature of the causal relationships, and the enabling and resultant
states.
One solution is to construct task-based models of the creative process and then
objectively assess whether they provide any explanatory purchase; we turn to this in
the next section.
In the previous section we argued that one possible benefit to constructing task-based
models is the potential ability to understand when, how and why activities occur, the
nature of the causal relationships, and the enabling and resultant states.
A further benefit is that task models have been used effectively in the past for
representing task knowledge and execution, generating requirements and design solutions
for everyday simple and complex tasks supported by technology. Consequently, we
believe developing task models will play a pivotal role in informing the design of
computer-based creativity support tools.
Finally, there is a role for task-based models not only in informing design, but also
in exploiting the existing theoretical underpinnings. TKS is one of many task
modelling approaches benefiting from a theoretical foundation. One question to
address is how it relates to the analytical structure.
The analytical structure's cognitive layer consists of psychological structures,
mechanisms and processes. For instance, in the case of memory, the cognitive level
consists of knowledge structures and processes associated with learning through
experience and undertaking tasks and activities. These processes include acquiring,
modifying, categorizing and re-structuring, and retrieving knowledge. This
knowledge is represented in either Fundamental Knowledge Structures (FKS, see [7])
or Task Knowledge Structures (TKS, see Figure 2).
FKS represent fundamental psychological knowledge, abilities, and processes that
are general, high-level and occur across all tasks and behaviour. These include for
example, collaboration, communication and explanation; hypothesizing and problem
solving; representation, re-representation, reflection and evaluation; decision-making
and risk assessment, and so on. They are fundamental in the sense that they are
necessary for the successful functioning of humans in their everyday lives. TKS by
contrast represent lower level, task-specific knowledge structures, abilities and
processes that relate to specific tasks, such as designing posters, or writing poems.
In the case of creativity, a subset of appropriate FKS knowledge would be recruited
in order to problem solve, reflect on solutions, and make decisions about which
solution(s) to pursue. In collaborative creativity, the FKS for collaboration [7] would
also be instantiated.
At the task layer, the TKS would represent the following knowledge:
i) categorization of task artefacts;
ii) structure in tasks: central/important, high-priority and typical concepts and
activities;
iii) causal relationships between task objects leading to cueing, priming or follow-
on task behaviour, and supporting principles of categorical structuring and
procedural dependency;
iv) roles; goals; plans within different contexts; current, enabling, conditional, and
desired states; strategies; procedures; actions and objects.
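As a purely illustrative sketch, the elements i)–iv) above might be recorded in a data structure along the following lines (the field names are our own invention, not a published TKS schema):

    from dataclasses import dataclass, field

    @dataclass
    class TKS:
        role: str
        goal: str
        plans: list = field(default_factory=list)          # plans per context
        objects: dict = field(default_factory=dict)        # categorised artefacts
        procedures: list = field(default_factory=list)     # action-object pairs
        causal_links: list = field(default_factory=list)   # (object, cues, object)
        central_elements: set = field(default_factory=set) # must be preserved

    poster_tks = TKS(role="artist", goal="design poster",
                     objects={"inspiration": ["snippets", "photos", "videos"]},
                     central_elements={"visual theme"})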
[Fig. 2: A Task Knowledge Structure (TKS), showing its goal(s), role(s) within/between TKS, and objects.]
It is likely that each of the above elements exists in creative tasks as it does in other
everyday simple and complex tasks. As an example related to i) above, in recent
funded research, artists in an artists' forum recounted the role that categorization, in
the form of snippets, photographs, videos, etc., plays in creative inspiration. Therefore,
supporting the categorization processes of organizing, storing and re-organising this
material effectively has implications for generating creative ideas and solutions, thus
facilitating creative insight and inspiration.
Again, in the case of creativity, there are likely to be central and typical elements of
creative artefacts that need to be preserved and which dictate how the task is
structured and organised.
The activity layer would comprise the different task procedures and the
action-object couplings for individual creativity, together with the collaboration
mechanics from [11] if the task is collaborative. Finally, the behavioural layer
would include low-level behaviours such as typing, drawing and so on.
5 Conclusion
In this paper we have referred to selective creative process models, and made attempts
to relate and apply the models. Theoretical and empirical issues related to modelling
creative tasks, such that we are in a position to move beyond description to
explanation of activities, have been discussed.
In section 4.1 we outlined an initial analytical structure that represents different
levels of abstraction, and provides a means to relate different models.
Finally, we briefly consider aspects of TKS that might constitute a task-based
model of creative tasks. A future research agenda includes further development of the
analytical structure and its application.
Acknowledgements
We are grateful to the participants of the workshops undertaken as part of the
creativity in design cluster.
We are also grateful to AHRC/EPSRC who funded the Designing for the 21st
century: Enhancing and Supporting Group Creativity in Design research cluster.
References
1. Amabile, T.M.: The Social Psychology of Creativity. Springer, New York (1983)
2. Boden, M.A.: The Creative Mind: Myths and Mechanisms. Weidenfeld and Nicolson,
London (1990)
3. Carruthers, L.: Modelling creativity. Tech Report. University of Bath (2004)
4. Dartnall, T.: Artificial Intelligence and Creativity: An Introduction. Artificial Intelligence
and the Simulation of Behaviour Quarterly 85 (1993)
5. Dennett, D.: Brainstorms: Philosophical Essays on Mind and Psychology. Harvester Press
(1978)
6. Johnson, H., Carruthers, L.: Supporting creative and reflective processes. International
Journal of Human Computer Studies 64(10), 998–1030 (2006)
7. Johnson, H., Hyde, J.K.: Towards modelling individual and collaborative construction of
jigsaws using Task Knowledge Structures (TKS). ACM Transactions on Computer-Human
Interaction 10(4), 339–387 (2003)
8. Johnson, H., Johnson, P.: Task Knowledge Structures: Psychological basis and integration
into system design. Acta Psychologica 78, 3–26 (1991)
9. Kneller, G.F.: The Art and Science of Creativity. Holt, Rinehart and Winston, New York
(1965)
10. Oxman, R.: Design by re-representation: a model of visual reasoning in design. Design
Studies 18, 329–347 (1997)
11. Pinelle, D., Gutwin, C.: Group Task Analysis for Groupware Usability Evaluations. In:
Proc. IEEE WetIce 2001, IEEE Computer Society Press, Los Alamitos (2001)
12. Schön, D.: The Reflective Practitioner: How Professionals Think in Action. Basic Books,
New York (1983)
13. Schön, D.: Designing as reflective conversation with the materials of a design situation.
Knowledge-Based Systems 5(3) (1992)
14. Shneiderman, B.: Codex, memex, genex: The pursuit of transformational technologies. Int.
J. Hum.-Comput. Interact. 10(2), 87–106 (1998)
15. Shneiderman, B.: User Interfaces for Supporting Innovation. ACM Trans. on Computer-
Human Interaction 7(1), 114–138 (2000)
16. Sternberg, R.J., Lubart, T., Kaufman, J.C., Pretz, J.E.: Creativity. In: Cambridge Handbook
of Thinking and Reasoning, pp. 351–369 (2005)
17. Wallas, G.: The Art of Thought. Harcourt Brace, New York (1926)
Articulating Interaction and Task Models for the Design
of Advanced Interactive Systems
LIIHS IRIT
118 Route de Narbonne, F-31062 Toulouse cedex 9, France
{charfi, emmanuel.dubois, bastide}@irit.fr
1 Introduction
We initially chose a notation for each model: K-MAD for task modelling and ASUR for
mixed interaction modelling.
In this paper, we present an illustrative case study, RAPACE, an interactive
prototype intended to be exhibited in a museum of natural history. Then, we position
our work with respect to the different design steps of Mixed Interactive Systems (MIS),
briefly introduce the two notations we selected and, finally, present a first set of
articulation rules between K-MAD [2] and ASUR [3].
Fig. 1. Projection of the cladogram on the vertical stand, projection of results on the horizontal
stand and pictures of animals ready to be used on the left
3.1 Strategy
Our articulation of a task model and a mixed interaction model is based on the
K-MAD and ASUR notations. The task model describes the activity at a higher level
than the interaction model, since it does not describe the interaction itself.
K-MAD (Kernel of Model for Activity Description) is centred on the task unit,
which can be described according to two aspects, the decomposition and the body.
1) The decomposition of a task unit of a given level gives rise to several unit
tasks of lower level. The decomposition offers operators for synchronization and for
temporal and auxiliary scheduling.
In our example, the task 'compare animals' is composed of four subtasks: (1)
'indicate comparison', during which the user locates the animal of reference and starts
the comparison; (2) 'process information', during which the system identifies the
animals and their common criteria; (3) 'return results', during which the system
returns the results of the identification; and (4) 'observe criteria'. Here the temporal
scheduling of the decomposed task is described as sequential.
2) The body supports the characterisation of the task and consists of the core, the
conditions and the state of the world:
- The core gathers a set of textual attributes such as name, number, priority, goal,
etc.: for example, the task 'insert' is number 2, the task 'insert manually' is
number 2.1 and the task 'insert automatically' is number 2.2.
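A minimal sketch of such a task unit (our own simplification; K-MADe's actual schema is richer and also covers conditions and the state of the world):

    from dataclasses import dataclass, field

    @dataclass
    class TaskUnit:
        # body / core: textual attributes
        name: str
        number: str                  # e.g. "2", "2.1", "2.2"
        goal: str = ""
        priority: int = 0
        # decomposition: lower-level unit tasks plus a scheduling operator
        operator: str = "SEQ"        # temporal scheduling, e.g. sequential
        subtasks: list = field(default_factory=list)

    compare = TaskUnit("compare animals", "1", operator="SEQ", subtasks=[
        TaskUnit("indicate comparison", "1.1"),
        TaskUnit("process information", "1.2"),
        TaskUnit("return results", "1.3"),
        TaskUnit("observe criteria", "1.4"),
    ])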
These components are not autonomous and need to communicate during the task
realisation. Such communication is modelled with ASUR relationships.
2) ASUR Relationships. We identified three different types of ASUR
relationships.
- Data exchange (A → B) means that component B may perceive information
rendered by component A. In our example, the user observes the animal of
reference (Robject → U) and the data displayed by the video projector (Aout → U). The
camera localizes the animal of reference and the animal of comparison (Robject → Ain,
Rtool → Ain), and transmits their positions to the system to identify the animals (Sinfo)
and search for common criteria (Sinfo). After processing the data, the results are sent
to the video projector (Sinfo → Aout).
- Trigger (A Δ B) is always linked to a data exchange (C → D): the data transfer
from C to D will only occur when a specific spatial condition is reached between
A and B. The relationship (Ain → Sinfo criteria) occurs when the animal of
reference is close to the animal of comparison (Robject Δ Rtool). The relation between
the trigger and the data exchange that is triggered is only specified as a trigger
property, and is not graphically represented on the ASUR diagram.
- Physical proximity (A==B) denotes the physical link that exists between two
entities. No such link is used in this model.
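A sketch of the trigger semantics just described (class names and encoding are ours): a trigger watches a spatial condition between two components and, when it holds, releases the data exchange it guards.

    class DataExchange:
        def __init__(self, src, dst):
            self.src, self.dst = src, dst    # component A renders, B perceives

    class Trigger:
        def __init__(self, a, b, exchange, condition):
            self.a, self.b = a, b            # spatially related components
            self.exchange = exchange         # the C -> D transfer it guards
            self.condition = condition       # e.g. a proximity predicate

        def fire_if_ready(self):
            # The guarded data transfer occurs only when the spatial
            # condition between a and b is reached.
            return self.exchange if self.condition(self.a, self.b) else None

    close = lambda a, b: True                # stand-in spatial predicate
    criteria = DataExchange("Ain", "Sinfo criteria")
    t = Trigger("Robject", "Rtool", criteria, close)
    assert t.fire_if_ready() is criteria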
3) ASUR Characteristics. Additional characteristics are used to refine this
modelling:
- Location and perception/action sense indicate where the user has to focus to get
the information and through which human sense it is perceivable: the perception and
action senses used with the animal of reference are visual perception and physical
action, respectively. The location is the place where data is projected.
- Dimension (1D, 2D, 3D) and point of view refine the description of information
transfer.
Articulating Interaction and Task Models 77
ASUR has its own development environment, Guide-Me [9], which is used
to model the examples in this paper.
Other mixed interaction notations exist. The TAC paradigm [10] and the MCRpd [11]
architecture describe the elements required in Tangible User Interfaces. Close to
ASUR, some notations support the exploration of Mixed Interactive Systems design
space [12], [13]: they are based on the identification of artefacts, entities,
characteristics and tools relevant to a mixed interactive system. More recent works in
mixed interactive systems try to link design and implementation steps by projecting
scenarios on software architecture models [14][15] or combining Petri Nets and
DWARF components [16].
A concise presentation of the notation can be given through its metamodel (Fig. 6).
An ASUR model is composed of components and relationships to describe a task. A
component can be either a computer system - Sinfo, Stool, Sobject - or a real entity - Rtool,
Robject -, or an adaptor - Ain, Aout - or a user. Components are connected by
relationships. A relationship can be either a data exchange, a representation, a real
association or a trigger.
The reasons why we chose ASUR are multiple. First, it allows the physical
and digital entities involved in the task to be identified. The ASUR notation completes
this representation with a detailed description of the role and nature of the entities
involved in the interaction, and by identifying a predefined set of types of ASUR
components and relationships. ASUR also contributes to the ergonomic analysis of
the system by expressing properties through combinations of characteristics of the
components and/or ASUR relationships, such as the perceptual or action sense, the
place of perception, and the language used.
Our study of the two metamodels emphasized articulatory elements between the
metamodels: some elements of the metamodel refer to the same concept and
constitute direct links of the articulation, while others are specific to each metamodel.
It arises from the analysis of the metamodels that three design elements are
commonly expressed by K-MAD and ASUR: the concepts of task, object and user.
L1: The subset of the metamodel gathering task, event and task-group in K-MAD
(Fig. 4 left) represents the unit task independently of the tree. This subset refers to the
same concept as the element task of the ASUR metamodel (Fig. 6 left). K-MAD
describes the activity of the user in a procedural way, while ASUR describes the
interaction of the user with the system for a given task. So K-MAD conveys a global
vision of the task while ASUR adopts an atomic vision. An ASUR task thus
corresponds either to a K-MAD leaf or to a K-MAD aggregate of subtasks.
L2: The subset of the metamodel gathering the objects in K-MAD (object,
attribute, concrete object, concrete attribute and object-group; Fig. 4 top right) refers
to the same concept as the elements Real Entity and Computer System of the ASUR
metamodel (Fig. 6 right). Indeed, K-MAD objects are the domain objects used in the
task. Thus, the objects can be physical or digital depending on the conceptual choices.
However, the objects are strongly categorised and described formally in ASUR,
while K-MAD describes the same objects of the world in a textual way.
L3: The subset in the K-MAD metamodel gathering user and actor (Fig. 4
bottom right) refers to the same concept as the element user of the ASUR
metamodel (Fig. 6 right). A user is always required in an ASUR model, since the
notation describes an interactive task, while in K-MAD a user is only present in user
and interactive tasks. Furthermore, only one user is present in an ASUR model, while
several actors might be used in a K-MAD model.
L4: To a lower extent, we also identify a fourth common element: the subset in
the K-MAD metamodel gathering expression, precondition, postcondition and iteration
(Fig. 4 right) refers to the same concept as the elements constraint and trigger of
the ASUR metamodel (Fig. 6 bottom). The concept carried by these elements is the
expression of constraints on tasks.
These links constitute a first set of articulation rules that will be useful to study the
coherence between a K-MAD model and an associated ASUR model.
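As an illustration of how such rules could be checked mechanically, here is a sketch of a coherence check derived from links L1–L3 (the model encodings are our own and far simpler than the real metamodels):

    def coherent(kmad, asur):
        # L1: the ASUR task must be a K-MAD leaf or an aggregate of subtasks
        task_ok = (asur["task"] in kmad["leaves"]
                   or asur["task"] in kmad["aggregates"])
        # L2: every ASUR Real Entity / Computer System must appear as a
        #     K-MAD domain object
        objects_ok = set(asur["entities"]) <= set(kmad["objects"])
        # L3: ASUR has exactly one user; K-MAD must provide at least one actor
        user_ok = len(kmad["actors"]) >= 1
        return task_ok and objects_ok and user_ok

    kmad = {"leaves": {"locate animal"}, "aggregates": {"compare animals"},
            "objects": {"animal of reference", "animal of comparison"},
            "actors": {"visitor"}}
    asur = {"task": "compare animals",
            "entities": {"animal of reference", "animal of comparison"}}
    assert coherent(kmad, asur)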
There remain, however, elements of the metamodels specific to each metamodel:
- The performer is specific to the K-MAD notation (Fig. 4 bottom left): the performer
does not need to be characterized in an ASUR model since, with ASUR, the task
described is always interactive.
- The attributes operator and name of the task are specific to K-MAD (Fig. 4). The
operator specifies the synchronisation of the tasks; this concept is not used in
ASUR, which describes an atomic task. The name is likewise not characterised
in ASUR.
- The adaptor is an ASUR component (Fig. 6 top right) but is not described as an
object in K-MAD, because the goal of K-MAD is not to describe the task at the
device level.
- The relationships in ASUR (Fig. 6 left) represent the relations between ASUR
components; such relationships are not represented in K-MAD.
The elements specific to each metamodel will influence the establishment of the
additional rules. We present these rules in the following section.
As already mentioned, K-MAD and ASUR do not share the same design goal and are
at different levels of abstraction. Thus, two essential questions arise:
- When does the transition from a task model to an interaction model occur, i.e. at
which level of modelling should a designer move from K-MAD to ASUR?
- How can the transition from a task model to an interaction model be achieved, i.e.
which links can be drawn between elements of K-MAD and ASUR models to
facilitate this transition and articulate the two models?
In this section, we first present the rules identifying the level of transition, and then
the rules of articulation between the subsets or elements of the metamodels
(italicised in the rest of this section).
To illustrate the rules, we consider the K-MAD interactive subtask 'compare animals'
(Fig. 3). The equivalent ASUR model is presented in Fig. 5. The K-MAD model of the
task 'compare animals', that is to say the subtree starting from this task, involves only
one user (the visitor of the museum using this interactive exhibit) and one object of the
task (the physical picture representing the animal of reference). According to rules R1
and R2, we can refine the K-MAD model with a unique ASUR description.
According to rule R4, the equivalents of the objects 'animal of reference' and
'animal of comparison' in K-MAD (Fig. 3) can be physical and/or digital objects
in ASUR (Real Entity and/or Computer System): in our case the equivalents are,
respectively, the Robject animal of reference, the Rtool animal of comparison, and the
Sinfo components animal of reference, animal of comparison and criteria in ASUR (Fig. 5).
According to rule R3, the equivalent of the name of the K-MAD task 'locate
animal of reference' is the name and/or the meaning of the relationships in ASUR: in
our case the equivalent is the name 'locate' of the data exchange relationship between
the Robject animal of reference and the user. The equivalents of the names of the K-MAD
subtasks 'identify animals' and 'search criteria' are the names of the data exchange
relationships 'identify' and 'search' (Fig. 5 left).
We then consider the K-MAD subtask 'indicate comparison'. The task tree
describes the activity of the user at an abstract level, so the subtask 'start comparison'
has an unknown performer. This subtask can be interactive, or performed by the
system or by a user. At the interaction design level, we chose to describe this task as a
user task. According to rule R5, the equivalent of the user performer in K-MAD
is the user component in ASUR (Fig. 5 right) (R5.1); and the equivalent of the sensori-
motor modality in the K-MAD subtasks 'locate animal' and 'start comparison' is the
value of the characteristic perception/action sense in ASUR: in our example, the
equivalent is respectively the visual sense as the value of the perception sense and
physical action as the value of the action sense relating to the Robject (R5.2). Choosing
a system performer for this task would mean that the system is a demonstrator rather
than an interactive exhibit. As mentioned by R6.1, an adaptor to display the selected
animal would therefore be required.
We now consider the subtask 'process information'. According to rule R6, an
adaptor is necessary in the ASUR model since the physical object animal of reference
is used in the K-MAD system task (R6.1): in our example, the video projector.
Finally, we consider the K-MAD subtask 'return results'. According to rule
R7, the equivalent of the interactive K-MAD task is an ASUR model containing a
user and an adaptor (R7.1): in our example, the ASUR model contains the user and the
video projector as Aout.
The equivalences expressed previously show that the equivalents of the subtasks of
the K-MAD task 'compare animals' are partial ASUR models (R3). For example,
the equivalent of the K-MAD interactive task 'return results' is the partial ASUR
model composed of: Sinfo {animal of reference, animal of comparison, criteria} → Aout
→ user (Fig. 5 right).
References
1. Milgram, P., Kishino, F.: A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions
on Information Systems E77-D(12), 1321–1329 (1994)
2. Scapin, D.L.: K-MADe, COST294-MAUSE 3rd International Workshop, Review, Report
and Refine Usability Evaluation Methods (R3 UEMs), Athens (March 5, 2007)
3. Dubois, E., Gray, P.D., Nigay, L.: ASUR++: a Design Notation for Mobile Mixed
Systems. Interacting with Computers 15(3), 497–520 (2003)
4. Ashlock, P.D.: The uses of cladistics. Annual Review of Ecology and Systematics 5,
81–99 (1974)
5. Kato, H., Billinghurst, M.: Marker Tracking and HMD Calibration for a
Video-Based Augmented Reality Conferencing System. In: Proceedings of the 2nd IEEE
and ACM International Workshop on Augmented Reality, p. 85 (1999)
6. Dubois, E., Gauffre, G., Bach, C., Salembier, P.: Participatory Design Meets Mixed
Reality Design Models. In: CADUI 2006. Conference Proceedings of Computer-Assisted
Design of User Interfaces. Information Systems Series, pp. 71–84. Springer, Heidelberg
(2006)
7. Baron, M., Lucquiaud, V., Autard, D., Scapin, D.L.: K-MADe: un environnement pour
le noyau du modèle de description de l'activité. In: Proceedings of the 18th Conference
of the Association Francophone d'Interaction Homme-Machine, IHM 2006 (2006)
8. Paternò, F., Mancini, C., Meniconi, S.: ConcurTaskTrees: A Diagrammatic Notation for
Specifying Task Models. In: Proceedings of the IFIP TC13 International Conference on
Human-Computer Interaction, pp. 362–369 (1997)
9. Viala, J., Dubois, E., Gray, P.D.: GUIDE-ME: graphical user interface for the design of
mixed interactive environments based on the ASUR notation. In: UbiMob 2004.
Proceedings of the 1st French-speaking conference on Mobility and Ubiquity Computing
(2004)
10. Shaer, O., Leland, N., Calvillo-Gamez, E.H., Jacob, R.J.K.: The TAC paradigm: specifying
tangible user interfaces. Personal and Ubiquitous Computing, 359–369 (2004)
11. Ishii, H., Ullmer, B.: Emerging Frameworks for Tangible User Interfaces. IBM Systems
Journal 39(3/4), 915–931 (2000)
12. Trevisan, D.G., Vanderdonckt, J., Macq, B.: Conceptualising mixed spaces of interaction
for designing continuous interaction. Virtual Reality 8(2), 83–95 (2005)
13. Coutrix, C., Nigay, L.: Mixed Reality: A Model of Mixed Interaction. In: Proceedings of
AVI 2006, pp. 45–53. ACM Press, New York (2006)
14. Delotte, O., David, B., Chalon, R.: Task Modelling for Capillary Collaborative Systems
based on Scenarios. In: Proceedings of TAMODIA 2004, pp. 25–31. ACM Press, New
York (2004)
15. Renevier, P., Nigay, L., Bouchet, J., Pasqualetti, L.: Generic interaction techniques for
mobile collaborative mixed systems. In: Proceedings of CADUI 2004, pp. 307–320. ACM,
New York (2004)
16. Hilliges, O., Sandor, C., Klinker, G.: Interaction Management for Ubiquitous Augmented
Reality User Interfaces. Diploma Thesis, Technische Universität München (2005)
A Survey of Model Driven Engineering Tools for User
Interface Design
1 Introduction
Model-based approaches aim at helping developers understand user needs and design
solutions in an effective way. In the HCI domain, models can be declarative in order
to describe the future interactive system, but also generative to (semi-) automate the
code generation. Even if the quality of the generated interfaces can be disappointing [22],
models remain valuable for their declarative power. As a matter of fact, interactive
systems are more and more complex: they can use everyday life objects to propose
tangible interfaces; they can couple the virtual and the physical worlds in augmented
reality systems; they can adapt themselves to the user context, etc. They are
increasingly difficult to design. So new models have appeared to represent augmented reality
systems [11, 27] or the user context (with a user model, a platform model and an
environment model [28]).
In terms of tools, the HCI community uses different tools to support the design of
interactive systems, e.g. CTTE [21], GUIDE-ME [32], K-MADe [4], and Teresa [5].
These tools mainly give support to model editing for task models (CTTE, Teresa and
K-MADe) or specific models such as ASUR models (GUIDE-ME). In addition, some
of them [33, 4] allow model simulation. However, many other operations are possible on models, in particular to increase their generative power.
Model management aims at providing techniques and tools for dealing with models in more automated ways. It has been studied independently for years by several research communities in the context of databases, document management and software engineering. Nowadays a unifying approach is emerging: model-driven engineering (MDE) [14]. At the origin of the movement, the Object Management Group proposed the Model Driven Architecture (MDA) for object-oriented technologies, but this dependence on a technology and the absence of clear concept definitions led to a more general approach, MDE, in which any kind of model can be taken into account. MDE is thus spreading quickly, in particular in the HCI domain, as can be seen from the recurring workshop "Model Driven Development of Advanced User Interfaces" held at MoDELS, one of the main conferences on MDE.
Based on related work on MDE for HCI, this paper tries to identify the actual design needs of HCI with respect to MDE and proposes a survey of MDE tools for HCI. Our goal is not to identify the best tool for HCI design, but to find criteria that can help HCI designers choose an MDE tool.
The paper is organized as follows. Section 2 provides the basic definitions of MDE concepts. Section 3 describes existing HCI work related to MDE. Section 4 surveys MDE tools for HCI in terms of metamodeling, model transformation and other operations. Finally, conclusions are presented.
2 MDE Concepts
MDE is a recent paradigm in which code is no longer considered the central element of software: code is just one element, a model produced by merging different modeling elements. In MDE, everything can be considered a model. Minsky [20] states that "To an observer B, an object M* is a model of an object M to the extent that B can use M* to answer questions that interest him about M." This definition shows that a model is an object intended to represent a particular behavior, dependent on a particular disciplinary context. In the context of MDE, the interesting models are those that can be formalized in order to make them productive. Some authors integrate this restriction directly into the definition of the notion of model: "a model is a description of (part of) a system written in a well-defined language" [18]. This definition makes an explicit reference to the notion of a well-defined language. In MDE, such a language is described by a meta-model: a specification model that defines the language for expressing a model. It defines the concepts that can be used in the models that conform to it. In this way, a meta-model allows designers to specify their own domain-specific languages. Models and meta-models are the first main concept in MDE.
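As an illustration of a meta-model defining the concepts usable in models, the sketch below builds a tiny meta-model programmatically with the Eclipse Modeling Framework (EMF), the Java implementation of ECore discussed later in this survey. The Task concept and its name attribute are hypothetical examples of ours, not part of any notation covered here.

import org.eclipse.emf.ecore.*;

public class TinyMetamodel {
    public static EPackage build() {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // A meta-model defines the concepts usable in conforming models:
        // here, a "Task" concept with one string-valued "name" attribute.
        EClass task = f.createEClass();
        task.setName("Task");
        EAttribute name = f.createEAttribute();
        name.setName("name");
        name.setEType(EcorePackage.Literals.ESTRING);
        task.getEStructuralFeatures().add(name);

        // The package groups the concepts and identifies the language.
        EPackage pkg = f.createEPackage();
        pkg.setName("tasks");
        pkg.setNsPrefix("tasks");
        pkg.setNsURI("http://example.org/tasks");
        pkg.getEClassifiers().add(task);
        return pkg;
    }
}

A model conforming to this meta-model is then simply a set of Task instances; EMF can serialize both the meta-model and its models in XMI, which matters for the interoperability discussion in section 4.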
The second main concept is the transformation: a transformation takes one or more models as input and produces a target, which can be code, test cases, graphical models, etc. The goal of transformations is twofold: on the one hand, they capture know-how; on the other hand, they make it possible to automate that know-how. Transformations thus provide the generative power of models.
There are several kinds of generation. Classically, code is generated from given models; in reverse engineering, conversely, models are produced from the code. There are also many examples of translating one model into another, such as the generation of UML models from formal specifications. In MDE, all these operations on models are considered transformations. This is one of the key ideas of MDE: it makes it possible to treat all generative operations in the same manner.
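To make the classical case concrete, here is a minimal model-to-code sketch under the same EMF assumptions as above: it walks the hypothetical Task meta-model and emits a skeletal Java class per concept. Real MDE tools use template or transformation languages for this, but the principle is the same.

import org.eclipse.emf.ecore.*;

public class TinyGenerator {
    // Emit a Java class skeleton for one meta-model concept:
    // the class name plus one private field per attribute.
    static String generate(EClass c) {
        StringBuilder sb = new StringBuilder("public class " + c.getName() + " {\n");
        for (EAttribute a : c.getEAttributes()) {
            sb.append("    private ").append(a.getEType().getName()) // e.g. EString
              .append(" ").append(a.getName()).append(";\n");
        }
        return sb.append("}\n").toString();
    }

    public static void main(String[] args) {
        EPackage pkg = TinyMetamodel.build();
        for (EClassifier c : pkg.getEClassifiers()) {
            if (c instanceof EClass) System.out.println(generate((EClass) c));
        }
    }
}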
A difficulty remains in finding a language to express the transformations. Many kinds of transformation languages exist: graphical languages like TrML1; XSLT-based2 XML languages; languages based on a programming language (for instance, JMI3 expresses Java-like transformations); ad hoc languages like MOLA [17] and MTL [33]; and finally languages based on the OMG standard QVT4. The QVT principles have been implemented in several languages, of which ATL (the ATLAS Transformation Language [1]) is currently the most widely used.
MDE is not limited to model transformations. [9] argues that transformations are not sufficient to exploit the generative power of models and proposes another operation, called model weaving. Model weaving [9, 10] is an operation on models that specifies different kinds of links between model elements. To explain model weaving, let us consider the simple library information system described in [10]. In this context, a transformation of a relational database schema R1 into its equivalent XML representation X1 is proposed (Fig. 1). A model weaving operation is specified to capture the links between the two schemas together with all the semantically relevant information.
These links are represented in the R1_X1 mapping illustrated in Fig. 1. In this example, both schemas represent the same information, but with distinct data structures. For instance, whereas subjects have a Name in R1, they are called Descr in X1. The equality between these elements can be represented by Equals links in the weaving. Moreover, the structure of both schemas must also be taken into account: foreign key constraints and nested elements are represented by FK and Nested links, respectively.
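Very schematically, a weaving model can be pictured as a set of typed links between elements of the two schemas. The plain-Java sketch below is only an illustration of the idea: the link kinds follow the example of [10], the Name/Descr pair comes from the text above, and the remaining element names are invented for illustration.

import java.util.List;

public class WeavingSketch {
    // The kinds of weaving links used in the library example of [10].
    enum LinkKind { EQUALS, FK, NESTED }

    // One link between an element of schema R1 and an element of schema X1.
    record WeavingLink(LinkKind kind, String r1Element, String x1Element) {}

    public static void main(String[] args) {
        // A fragment of the R1_X1 mapping: subjects' Name in R1 corresponds
        // to Descr in X1; structural information is captured by FK and Nested
        // links (the last two element names are hypothetical).
        List<WeavingLink> r1_x1 = List.of(
            new WeavingLink(LinkKind.EQUALS, "Subject.Name", "Subject.Descr"),
            new WeavingLink(LinkKind.FK, "Book.SubjectId", "Subject.Id"),
            new WeavingLink(LinkKind.NESTED, "Subject", "Books/Subject"));
        r1_x1.forEach(System.out::println);
    }
}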
This example shows that a weaving is specific to a domain: the weaving relationships, e.g., Equals or Nested, depend on the concepts of the models being manipulated. Thus a weaving, like any model, must conform to a meta-model. Transformations can afterwards be defined from the mapping.
Model management is not limited to model transformation or weaving. Other kinds of operations can be applied to models: models can be simulated, consistency can be checked between them, etc. Although these operations are important to make models more
1 TrML. Transformation modelling language, http://www2.lifl.fr/west/trml/
2 W3C. World Wide Web Consortium, http://www.w3.org/TR/2007/REC-xslt20-20070123/
3 JMI. Java Metadata Interface, http://java.sun.com/products/jmi/
4 Query/View/Transformation. OMG Specification, http://www.omg.org/docs/ptc/05-11-01.pdf
useful, they are generally not presented as part of MDE, since MDE concentrates on the generative power of models. Note, however, that it is important that MDE tools can easily be connected to other tools providing such operations on models.
The use of MDE and meta-models is not limited to the adaptation of the user interface to its context. Other domains of HCI also define meta-models, for specific notations such as ASUR, a graphical notation for augmented reality systems [12], or for specific tools as in [16].
All these meta-models are independent, but they are instances of the same meta-meta-model (i.e., the MOF). They are defined from scratch rather than as extensions of well-known meta-models. Another approach is to extend an existing meta-model: in particular, UML offers profiles to extend the UML meta-model to a specific domain. Meta-models defined as UML profiles take advantage of the already existing semantics of UML and must conform to them. For instance, such extensions have been proposed for HCI through UMLi [25] and for context-sensitive user interfaces [31].
The study of this existing work leads us to conclude that user interface design needs MDE tools that support domain-specific meta-models and models. Unlike in software engineering (SE), there is no consensus on the models for HCI; moreover, several different notations have been proposed for task modeling alone, so the HCI domain must manage several meta-models for task models. This diversity creates the need for MDE tools that let designers create their own meta-models or modify existing ones.
Finally, if designers want to create links between HCI and SE models, all the meta-models must be instances of the same meta-meta-model. As the SE and MDE communities use the MOF as the reference meta-meta-model, it is important that the HCI domain conforms to this practice: HCI meta-models must be instances of the MOF and be represented by UML class diagrams.
More than weaving, transformation operations represent the heart of MDE. Section 2.2 showed that there are several kinds of transformations and that many languages have been proposed to express them. In this section, we study how the HCI community uses transformations for user interface design.
In the current implementation of HHCS, the mappings between the task model, the workspace and the CUI are expressed in ATL; an example is illustrated in Fig. 4. The first rule generates a workspace from a task: it creates a space for every task and assigns it the name of the task. The second rule transforms a binary operator into a chain; it considers only the operator "Or" and is written in two parts: the first selects the binary operators of type "Or"; the second describes the access given by the space representing the mother task to the spaces representing its two daughters.
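The actual rules of Fig. 4 are written in ATL; for readers unfamiliar with ATL syntax, the first rule amounts to the following plain-Java sketch, where Task and Space are hypothetical stand-ins for the corresponding model elements.

import java.util.List;

public class TaskToWorkspaceSketch {
    record Task(String name) {}
    record Space(String name) {}

    // Analogue of the first ATL rule described above: create one Space
    // per Task and assign it the name of the task.
    static List<Space> toWorkspace(List<Task> tasks) {
        return tasks.stream().map(t -> new Space(t.name())).toList();
    }
}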
The form of the resulting model is also important to know in order to determine the future operations that can be realized on it.
Both at the commercial and research levels, several MDE tools are either available or under development. These tools are designed as frameworks [2] or as plug-ins [1]. Several classifications [13, 26] and tool comparisons [30] have been proposed. However, no existing classification assesses the functional criteria that we defined for our needs, in particular regarding the specific models used in the HCI domain.
Table 1 shows the list of tools that we considered in carrying out our survey. The list is restricted to MDE tools that could be used in the HCI domain, i.e., whose manipulated models are not limited to UML models.
These tools are studied according to the needs listed in the previous sections. These needs are general to the HCI domain; any HCI designer must refine them to choose an MDE tool. We therefore do not intend to find the best tool, but rather to provide relevant information for choosing an MDE tool. We present our survey in terms of
the important MDE concepts: models and meta-models, operations on models, and other functionalities.
Regarding models and meta-models, the HCI community needs tools that do not only handle UML models but also specific models. Since our list is limited to tools of this kind, any tool in the list can be suitable for HCI in terms of model and meta-model support. Nevertheless, to refine our comparison, we introduce a criterion on the way models and meta-models are expressed: they can be represented either textually or graphically. We also note whether constraints can be added to complete models and meta-models. Constraints are written in OCL, the constraint language for UML.
From the previous table, we would recommend that a user interface designer choose a tool allowing a graphical expression of models and meta-models, because graphical representations are easier for non-specialists to use than textual ones.
implementations of QVT differ, and compatibility between tools is not guaranteed. We also showed in section 3.3 that XSLT and ATL are currently the only two languages used by the HCI community. To support the creation of transformation libraries for HCI, the tools ADT and UMLX, which support ATL and XSLT respectively, should therefore be preferred in the HCI domain. Moreover, ATL is already widely used in the SE domain, so ATL is a good candidate to facilitate links between HCI and SE models.
Moreover, it is important to identify the form (text or model) of the generated models in order to identify which kinds of tools can manipulate them. In Table 3, the word "Text" is used when the result of a transformation is textual; generally the result is code written in a programming language (Java, C, C++, Cobol, Fortran, VB.NET, etc.) that can be compiled or interpreted. The term XMI is used when the result of the transformation is a model in XMI form (XML Metadata Interchange), which can be loaded into many design tools. Here again ADT and UMLX (along with other tools) have an advantage, as they provide both the XMI and the textual format.
Considering model operations, two tools are good candidates for the HCI domain: ADT (based on ATL), the solution for work in the SE spirit, and UMLX, more adapted to work with web technologies.
Tool            Transformation language     G/T expression   Generated model (XMI / Text)   Weaving
ACCELEO         QVT, JMI                    T                -   / Yes                      -
AndroMDA        ATL, MofScript              T                Yes / Yes                      -
ADT             ATL                         T                Yes / Yes                      Yes
AToM3           Multi-formalism (Python)    G                Yes / -                        -
DSL Tools       XML notation                T                Yes / Yes                      -
Kermeta         QVT                         T                -   / Yes                      -
ModFact         QVT                         T                -   / Yes                      -
Merlin          QVT, JET                    T                -   / Yes                      -
MDA Workbench   QVT                         T                -   / Yes                      -
MOFLON          JMI                         G                -   / Yes                      -
OptimalJ        QVT                         T                -   / Yes                      -
QVT Partners    QVT                         T                Yes / Yes                      -
SmartQVT        QVT                         T                Yes / Yes                      -
UMLX            XSLT, QVT                   T                Yes / Yes                      -
The studied MDE tools offer good solutions for meta-modeling and transformations. But one may want to reuse models, meta-models or transformations in another tool, so it is very important to know a tool's capacity to interoperate with other tools.
In sections 3.1 and 3.3, we noted the importance of the exchange format for models and meta-models and for bridging the gap with the SE domain. A large part of the tools are centred on the MOF specification, so they can cover the modelling needs of different domains, and especially of HCI. Several implementation formats have been proposed for the MOF: ECore, MDR (Metadata Repository), KM3 (Kernel Meta-
Meta Model), DSL (Domain Specific Language) and CWM (Common Warehouse Meta-model). Nevertheless, DSL does not conform to the MOF implementation; that is why KM3 was created: KM3 is a specialized language for specifying meta-models and is used as a bridge between MOF and DSL. The most widely used format is ECore, a simplified version of the MOF. Moreover, MDE tools provide many libraries of predefined models and meta-models in ECore. Choosing an ECore-compliant tool is therefore important to guarantee the development and exchange of reusable models and meta-models.
Regarding model transformation, XMI is proposed for transformations but is not so widely chosen; many tools, in particular QVT tools, prefer textual transformations. In terms of interoperability, Eclipse provides de facto methods for storing and retrieving models based on XMI, and the great majority of MDE tools are based on Eclipse and can interoperate with other Eclipse tools.
Finally, what matters most in the HCI domain is the interoperability of MDE tools with existing HCI design tools. Generally, HCI design tools do not have a known meta-model, but the models produced with them can be saved in an XML format. Interoperability between MDE and HCI design tools can then be guaranteed by transforming every XML file into an ECore-compatible format, so that it can be loaded by the MDE tools that support this format. A longer-term solution is for HCI tools to adopt the MDE standards and to provide import and export mechanisms based on the XMI format.
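In practice, loading an XMI file into an ECore-based tool takes a few lines of standard EMF code, as sketched below; the file name is a placeholder, and in a standalone run the model's EPackage must also be registered beforehand.

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class LoadXmi {
    public static void main(String[] args) {
        // Register the XMI format for files with the *.xmi extension.
        ResourceSet rs = new ResourceSetImpl();
        rs.getResourceFactoryRegistry().getExtensionToFactoryMap()
          .put("xmi", new XMIResourceFactoryImpl());

        // Load a (hypothetical) model exported by an HCI design tool.
        Resource r = rs.getResource(URI.createFileURI("taskmodel.xmi"), true);
        for (EObject root : r.getContents()) {
            System.out.println("Loaded: " + root.eClass().getName());
        }
    }
}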
Tool            Repository: metamodeling                Repository: model transformation   Constraints   Interoperability with other tools
ACCELEO         DSL, MDR, ECORE                         -                                  XMI           Eclipse, NetBeans
AndroMDA        MOF, DSL                                -                                  XMI           Eclipse
ADT             DSL, KM3, MDR, ECORE                    Text (ATL)                         XMI           Eclipse, NetBeans
AToM3           Proprietary graphical multi-formalism   -                                  -             -
DSL Tools       DSL                                     Proprietary XML notation / XMI     -             Eclipse, NetBeans
Kermeta         ECORE                                   Text (QVT)                         XMI           Eclipse
ModFact         ECORE                                   XMI                                XMI           Eclipse
Merlin          ECORE                                   Text (QVT)                         XMI           Eclipse
MDA Workbench   ECORE                                   XMI                                XMI           Eclipse
MOFLON          ECORE                                   -                                  XMI           Eclipse
OptimalJ        CWM, ECORE                              XMI                                XMI           Eclipse
QVT Partners    ECORE                                   Text (QVT)                         XMI           Eclipse
SmartQVT        ECORE                                   Text (QVT)                         XMI           Eclipse
UMLX            ECORE                                   XMI, XSLT                          XMI, XSLT     Eclipse
5 Conclusion
The goal of this paper is to survey MDE tools in order to help the HCI community choose an MDE tool. Considering existing work in the HCI domain, we believe the HCI domain shows a clear need for the MDE approach and
tools. First, considering models and meta-models, HCI designers use many domain-specific models, such as task models and ASUR models, that conform to specific meta-models. Transformation models and weaving models are also needed in the HCI domain; in particular, model weaving has been used for the notion of mapping, where a user interface is described as a graph of models and mappings both at design time and at run time. Moreover, transformations make it possible not only to generate code from models but also to produce new models from existing ones. Two types of transformations are thus needed: those that generate code (more generally, a text file that can be compiled or interpreted) and those that generate models (more generally, a structured file that can be manipulated by design tools).
Based on these needs, we surveyed several existing MDE tools. Several conclusions can be drawn from this comparison. In terms of modeling, a large part of the tools are centered on the MOF and allow domain-specific models to be expressed. In terms of transformations, there is no single standard language, so it is important to know which languages a tool manipulates and whether they are graphical or textual. Moreover, it is important to know the format (text or model) of the generated models in order to identify the kinds of tools that can then manipulate them. Our conclusion is that MDE is able to answer the specific needs of the HCI community in terms of models. Nevertheless, the HCI community has to adopt the standards that MDE now uses. We hope this comparison will be useful to any HCI designer who wants to select an MDE tool based on functional needs in terms of graphical (or textual) expression of domain-specific models, model transformation, model weaving, and interoperability with specific HCI tools.
References
1. Allilaire, F., Idrissi, T.: ADT: Eclipse Development Tools for ATL. In: Proceedings of the 2nd European Workshop on Model Driven Architecture (MDA) with an Emphasis on Methodologies and Transformations (EWMDA-2), Canterbury, UK, pp. 171-178. Computing Laboratory, University of Kent (September 2004)
2. Amelunxen, C., Königs, A., Rötschke, T., Schürr, A.: MOFLON: A Standard-Compliant Metamodeling Framework with Graph Transformations. In: Rensink, A., Warmer, J. (eds.) Model Driven Architecture - Foundations and Applications: 2nd European Conference. LNCS, vol. 4066, pp. 361-375. Springer, Heidelberg (2006)
3. Bandelloni, R., Paternò, F., Santoro, C.: Reverse Engineering Cross-Modal User Interfaces for Ubiquitous Environments. In: EIS 2007. Proceedings of the Engineering Interactive Systems Conference. LNCS, Springer, Heidelberg (to appear, 2007)
4. Baron, M., Lucquiaud, V., Autard, D., Scapin, D.: K-MADe: un environnement pour le noyau du modèle de description de l'activité. In: Proceedings of the 18th French-Speaking Conference on Human-Computer Interaction (IHM 2006), pp. 287-288. ACM Press, New York (2006)
5. Berti, S., Correani, F., Mori, G., Paternò, F., Santoro, C.: TERESA: A Transformation-Based Environment for Designing Multi-Device Interactive Applications. In: CHI 2004 Extended Abstracts on Human Factors in Computing Systems, pp. 793-794. ACM Press, New York (2004)
6. Boedcher, A., Mukasa, K., Zuehlke, D.: Capturing Common and Variable Design Aspects for Ubiquitous Computing with MB-UID. In: Proceedings of the International Workshop on Model Driven Development of Advanced User Interfaces (MDDAUI 2005) organized at MoDELS 2005, Jamaica, October 2005. CEUR Workshop Proceedings, vol. 159 (2005)
7. Botterweck, G.: A Model-Driven Approach to the Engineering of Multiple User Interfaces. In: Kühne, T. (ed.) MoDELS 2006. LNCS, vol. 4364, pp. 106-115. Springer, Heidelberg (2007)
8. Brüning, J., Dittmar, A., Forbrig, P., Reichart, D.: Getting SW Engineers on Board: Task Modelling with Activity Diagrams. In: EIS 2007. Proceedings of the Engineering Interactive Systems Conference. LNCS, Springer, Heidelberg (to appear)
9. Didonet Del Fabro, M., Bézivin, J., Jouault, F., Breton, E., Gueltas, G.: AMW: A Generic Model Weaver. In: Gérard, S., Favre, J.-M., Muller, P.-A., Blanc, X. (eds.) Proceedings of the 1ère Journée sur l'Ingénierie Dirigée par les Modèles (IDM 2005), Paris, France, pp. 105-114 (2005)
10. Didonet Del Fabro, M., Jouault, F.: Model Transformation and Weaving in the AMMA Platform. In: Lämmel, R., Saraiva, J., Visser, J. (eds.) GTTSE 2005. LNCS, vol. 4143, pp. 71-77. Springer, Heidelberg (2006)
11. Dubois, E., Gray, P.D., Nigay, L.: ASUR++: A Design Notation for Mobile Mixed Systems. Interacting with Computers 15, 497-520 (2003)
12. Dupuy-Chessa, S., Dubois, E.: Requirements and Impacts of Model Driven Engineering on Mixed Systems Design. In: Gérard, S., Favre, J.-M., Muller, P.-A., Blanc, X. (eds.) Proceedings of the 1ère Journée sur l'Ingénierie Dirigée par les Modèles (IDM 2005), Paris, France, pp. 43-54 (2005)
13. Eclipse Modeling Project. Official site (February 2007), http://www.eclipse.org/modeling/
14. Favre, J.-M.: Towards a Basic Theory to Model Driven Engineering. In: 3rd UML Workshop in Software Model Engineering (WISME 2004), joint event with UML 2004 (October 2004), available online at http://www.metamodel.com/wisme-2004/papers.html
15. Foley, J., Sukaviriya, N.: History, Results, and Bibliography of the User Interface Design Environment (UIDE), an Early Model-Based System for User Interface Design and Development. In: Paternò, F. (ed.) Interactive Systems: Design, Specification, Verification, pp. 3-14. Springer, Heidelberg (1994)
16. Ian Bull, R., Favre, J.-M.: Visualization in the Context of Model Driven Engineering. In: Proceedings of the International Workshop on Model Driven Development of Advanced User Interfaces (MDDAUI 2005) organized at MoDELS 2005, Jamaica (October 2005)
17. Kalnins, A., Barzdins, J., Celms, E.: Model Transformation Language MOLA. In: Proceedings of Model-Driven Architecture: Foundations and Applications (MDAFA 2004), Linköping, Sweden, June 10-11, pp. 14-28 (2004)
18. Kleppe, A., Warmer, J., Bast, W.: MDA Explained: The Model-Driven Architecture: Practice and Promise, p. 192. Addison-Wesley, Reading (2003)
19. Mens, T., Van Gorp, P.: A Taxonomy of Model Transformation. Electronic Notes in Theoretical Computer Science 152, 125-142 (2006)
20. Minsky, M.: Matter, Minds, and Models. In: Proceedings of the International Federation of Information Processing Congress, New York, USA, vol. 1, pp. 45-49 (1965)
21. Mori, G., Paternò, F., Santoro, C.: CTTE: Support for Developing and Analyzing Task Models for Interactive Systems Design. IEEE Transactions on Software Engineering 28(8), 797-813 (2002)
22. Myers, B., Hudson, S.E., Pausch, R.: Past, Present, and Future of User Interface Software Tools. ACM Transactions on Computer-Human Interaction 7(1), 3-28 (2000)
23. Nóbrega, L., Nunes, N.J., Coelho, H.: Mapping ConcurTaskTrees into UML 2. In: Gilroy, S.W., Harrison, M.D. (eds.) Interactive Systems. LNCS, vol. 3941, pp. 237-248. Springer, Heidelberg (2006)
24. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Heidelberg (1999)
25. Pinheiro da Silva, P., Paton, N.: User Interface Modeling in UMLi. IEEE Software 20(4), 62-69 (2003)
26. Planet MDE. Official site (September 2007), http://planet-mde.org/index.php?option=com_xcombuilder&cat=Tool&Itemid=47
27. Shaer, O., Leland, N., Calvillo, E.H., Jacob, R.J.K.: The TAC Paradigm: Specifying Tangible User Interfaces. Personal and Ubiquitous Computing 8(5), 359-369 (2004)
28. Sottet, J.-S., Calvary, G., Favre, J.-M., Coutaz, J., Demeure, A., Balme, L.: Towards Model Driven Engineering of Plastic User Interfaces. In: Satellite Proceedings of the ACM/IEEE 8th International Conference on Model Driven Engineering Languages and Systems (MoDELS/UML 2005). LNCS, pp. 191-200. Springer, Heidelberg (2005)
29. Sottet, J.-S., Calvary, G., Coutaz, J., Favre, J.-M.: A Model-Driven Engineering Approach for the Usability of Plastic User Interfaces. In: EIS 2007. Proceedings of the Engineering Interactive Systems Conference. LNCS, Springer, Heidelberg (to appear)
30. Tariq, N., Akhter, N.: Comparison of Model Driven Architecture (MDA) Based Tools. Thesis, Karolinska University Hospital, Stockholm, Sweden, p. 74 (June 2005)
31. Van den Bergh, J., Coninx, K.: Using UML 2.0 and Profiles for Modeling Context-Sensitive User Interfaces. In: Proceedings of the International Workshop on Model Driven Development of Advanced User Interfaces (MDDAUI 2005) organized at MoDELS 2005. CEUR Workshop Proceedings, vol. 159, Jamaica (October 2005)
32. Viala, J., Dubois, E., Gray, P.: GUIDE-ME: Environnement Graphique de Manipulation de la Notation ASUR. In: Canals, G., Giboin, A., Nigay, L., Pinna, A.-M., Tigli, J.-Y. (eds.) ACM Proceedings of the French Conference Mobilité et Ubiquité 2004, Nice, France, pp. 74-78 (June 2004)
33. Vojtisek, D., Jézéquel, J.-M.: MTL and Umlaut NG: Engine and Framework for Model Transformation. ERCIM News 58, Special Issue on Automated Software Engineering, 42-45 (2004)
From Task to Dialog Model in the UML
1 Introduction
The ConcurTaskTrees notation (CTT) [12] is one of the most popular notations for hierarchical task modeling used in academia for model-based design of user interfaces. Since the Unified Modeling Language (UML) [11] is one of the most established modeling notations for software models, several approaches have been presented to integrate the ConcurTaskTrees notation into UML.
Nunes and e Cunha [10] made a mapping to UML class diagrams as part of the Wisdom notation. They mapped each task in the task model to a UML class. The relations between parent and child tasks are represented using aggregation relationships, while the relations between siblings are represented using constraints. All task categories are represented using the same task symbol.
Nobrega et al. [9] present a different approach, which emphasizes the fact that tasks in the ConcurTaskTrees notation represent activities. Therefore, they show tasks using the UML notation for actions. They also propose new symbols for the temporal operators of the ConcurTaskTrees. All the changes they proposed were made to integrate the ConcurTaskTrees notation both visually and semantically into the UML.
In earlier work [16] we opted to extend the class diagram to represent the CTT, but to keep the appearance closer to the original. This resulted in some notable differences with the approach presented in [10]: the relations between tasks are represented by stereotyped associations, and each task category keeps
its original symbol (and properties), thus keeping the look of the model closer to the original CTT specification.
A close relationship of the CTT with UML state machines has not yet been established. It has, however, been shown that hierarchical statecharts of this type can effectively be used to describe [3] and generate user interfaces [1,14]. In this work, we extend the state of the art by presenting a semantic mapping of the dynamic aspects of the task model and exploiting this mapping for a compact, powerful dialog modeling notation using UML state machines.

The mapping between CTT and UML state machines is established by giving a behavioral specification for a task using UML state machines; it is discussed in section 4 after a short introduction of both notations. This specification is then used in section 5 to express the behavioral semantics of all temporal operators. These specifications are used to derive a dialog model from a CTT model, and a UML stereotype is used to reduce the visual complexity of the model. The paper concludes with a discussion of related work and conclusions.
2 ConcurTaskTrees

3 UML State Machines

UML state machines are an object-based variant of Harel statecharts [2]. Both have the advantage over other forms of statecharts and state transition networks that they support concurrent states: when a UML state machine is executed, it can be in multiple states at a given moment in time. Furthermore, Harel statecharts as well as UML state machines allow hierarchical composition of states.
Fig. 2 gives an overview of the relevant symbols in this notation. The initial pseudostate and the final state respectively mark the start and the end of a composite state or state machine. An exit point can be used to mark an alternative end point (e.g., due to abnormal behavior). A fork symbol can be used to specify that a single state is followed by two or more concurrent states; a join allows the opposite. A choice pseudostate can be used to specify multiple alternative next states. Finally, a small black dot (not shown in Fig. 2) is the symbol for a junction, which allows transitions (displayed as arrows) to be merged or split. For a detailed discussion of UML state machines we refer to the UML Superstructure specification [11].
Fig. 2. UML state machine symbols: (a) initial pseudostate, (b) exit point, (c) final state, (d) state with specification of behavior on entry, during, and on exit of the state, (e) composite state with two regions specifying concurrent behavior, (f) fork/join, (g) choice pseudostate
UML state machines have no direct formal mapping, although partial mappings to stochastic Petri nets have already been specified for UML state machines with the UML real-time stereotype extensions applied [15].
4 Tasks in UML
As mentioned in the introduction, different representations of the CTT in UML have been proposed. Fig. 3 shows the CTT task model of Fig. 1 using the Wisdom notation.
Fig. 4 shows the state machine that can be associated with a task T1. Whenever the state T1 is activated (T1 being the name of the task), the task is considered active. The inactive states are depicted in Fig. 4 for completeness; they will not be depicted in further diagrams. On entry and exit of the states T1 and Executing, an activity is specified. These activities broadcast an event, which can trigger state changes for other tasks. All these events have an attribute that specifies the source of the event. On entry of the state T1, an event activated is broadcast, indicating that T1 is active. When the actual execution of a task starts, an event started is broadcast. When an event stopped is sent, the task is no longer executing, but execution might be resumed. The event ended indicates that the task has become inactive.
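As an executable paraphrase of this lifecycle (not part of the authors' notation), the following Java sketch captures the states and the four broadcast events, each carrying its source task; the state names follow Fig. 4 and Table 1, and the exact transitions are our own reading. Other tasks would subscribe to the broadcast to trigger their own transitions, which is how the temporal operators below are realized.

import java.util.function.BiConsumer;

public class TaskLifecycle {
    enum State { INACTIVE, ACTIVE, EXECUTING, SLEEPING }

    private final String name;  // e.g. "T1", the source attached to every event
    private State state = State.INACTIVE;
    private final BiConsumer<String, String> broadcast; // (event, source)

    TaskLifecycle(String name, BiConsumer<String, String> broadcast) {
        this.name = name;
        this.broadcast = broadcast;
    }

    void activate() { state = State.ACTIVE;    broadcast.accept("activated", name); }
    void start()    { state = State.EXECUTING; broadcast.accept("started", name); }
    void stop()     { state = State.SLEEPING;  broadcast.accept("stopped", name); } // may resume
    void end()      { state = State.INACTIVE;  broadcast.accept("ended", name); }
}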
The exact meaning of these states depends on the task category and on the platform and context in which the interaction takes place. Table 1 shows a possible mapping of the states of an interaction task to a concrete context: desktop interaction using a multi-window desktop such as MS Windows or MacOS. The states of an application task presenting data to the user can be described in a similar manner; when the application task is not directly represented on the screen, the states might be mapped to the states of the thread or process that executes the task. Giving a concrete description of the states for a user task in general is not as straightforward, although it should be easy to do on a case-by-case basis for user tasks that involve physical activity.
concurrency. The concurrency operator (|||) expresses that two tasks can be executed in any order and can interrupt each other. It has a straightforward mapping to UML state machines when the state machine definition of Fig. 4 is used: Fig. 5(a) shows that two parallel tasks can be represented by embedding each task representation in a separate region of a complex state.
Table 1. States of task execution for an interaction task on a PC using a graphical user interface

state       description
active      the window containing the user interface controls associated with the task is shown on the screen
sleeping    the user interface controls associated with the task are disabled or not visible
executing   the user interface controls associated with the task are enabled and visible
order independence. When the order independence operator (|=|) is used between two sibling tasks T1 and T2, these tasks can be executed in any order but not concurrently. This means that when T1 is executing, T2 cannot start execution, and vice versa; when one of the tasks is completed, the other can start executing. Fig. 5(b) also clearly shows that the two tasks cannot interrupt each other.
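Outside the state machine notation, this constraint can be illustrated (under our own, hypothetical encoding) by serializing the executing phases of the two tasks with a shared permit: either may start first, but never while its sibling is executing.

import java.util.concurrent.Semaphore;

public class OrderIndependenceSketch {
    // At most one task may be in its executing state at a time (|=|),
    // but the order in which the permit is acquired is free.
    private static final Semaphore executing = new Semaphore(1);

    static void run(String task) throws InterruptedException {
        executing.acquire();      // blocks while the sibling task is executing
        try {
            System.out.println(task + " executing");
        } finally {
            executing.release();  // completion lets the sibling start
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(() -> { try { run("T1"); } catch (InterruptedException e) { } });
        Thread t2 = new Thread(() -> { try { run("T2"); } catch (InterruptedException e) { } });
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}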
enabling. The main property of the enabling operator is that its operand tasks are executed one after the other. In terms of the proposed state machine for tasks, this means that there is only a constraint on the order of the executing states. Fig. 7 thus shows two different state machines that satisfy this constraint. Fig. 7(a) corresponds to the situation where only the tasks that belong to the same enabled task set (ETS) are presented in the user interface. An ETS is a set of tasks that are logically enabled to start their performance during the same period of time [12]. Fig. 7(b) corresponds to a situation where two ETSs are merged into a single task set; [13] proposes some heuristics for when such a merge can be desirable. Note that, unlike many dialog models, Fig. 7(b) still shows that T2 cannot be executed until T1 is finished. This ensures that the dialog model is consistent with the task model, even when ETSs are merged.

Fig. 6. Suspend/Resume

Fig. 7. Enabling
deactivation. The deactivation operator ([>) can be used to let one task interrupt the execution of another task and prevent its further execution. Fig. 8 shows what this means in terms of our UML state machine representation. T1 [> T2 means that when T2 ends execution, T1 immediately becomes inactive. Note that both diagrams in Fig. 8 produce the described effect. The approach in Fig. 8(a) can be extended to work when T2 has subtasks (although the first subtask of T2 should be used in this case to be compliant with the CTT specification), while the approach in Fig. 8(b) cannot, but offers a simpler syntax instead.

Fig. 8. Deactivation
choice. The choice operator ([]) offers the option to choose between two tasks, of which only one may be completed. As soon as one of the tasks starts execution, the other task becomes inactive; this prevents more than one task from executing at the same time. This type of choice is called a deterministic choice in [4]. The same article also describes a non-deterministic choice: this latter type allows only one task to complete, but the other options do not become inactive until one of the tasks has been completed (see Fig. 9(b)).
Fig. 9. Choice
task iteration. The two possibilities offered by the CTT notation for expressing iterating tasks can be expressed as shown in Fig. 10. Both diagrams using UML state machines clearly show the semantics of the iteration operators in the CTT: the repeatable task has to be completed before another iteration of the task can start.
The previous section showed that it is possible to combine the UML state machine descriptions of two (or more) sibling tasks. To create an effective dialog model, however, a complete task model has to be converted into a UML state machine. In this section, we demonstrate by example that it is possible to do so for the task model in Fig. 1.
Fig. 13. Simplified notation of Fig. 11 using the stereotype <<task>>
To apply the profile to a UML state machine, the stereotype should be applied to all instances of the metaclass State, i.e., all states in the diagram.
Seven tagged values are defined within the stereotype, relating to the different temporal operators: executeAfter specifies the task after the completion of which the current task can start executing; it is used in case an overlap in the active state of the two tasks is desired, as specified in Fig. 7(b). alternativeTo allows an alternative task to be specified; the collection of tasks specified by this tagged value contains one or more tasks when the corresponding task in the CTT is an operand of the choice operator. concurrentAlternative is set to true in case of non-deterministic choice. disables is a non-empty collection of
tasks when the corresponding task in the CTT is the right operand of a deactivation operator and the mapping of Fig. 8(a) is chosen. nonInterruptable is true when the corresponding task in the CTT is an operand of the order independence operator. optional is true when the operator optional is applied to the corresponding task in the CTT. repeatable is set to true when the corresponding task is repeatable. repetitionCount can be used to set the number of repetitions.
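Collected in one place, the tagged values listed above amount to the following record-like structure; this is merely our Java restatement for overview, not a metamodel published by the authors.

import java.util.List;

// The tagged values of the <<task>> stereotype, as described above.
// An empty or null value means the tagged value is not set for a state.
public record TaskStereotype(
        String executeAfter,           // task whose completion enables this one (Fig. 7(b))
        List<String> alternativeTo,    // operands of a choice operator
        boolean concurrentAlternative, // true for non-deterministic choice
        List<String> disables,         // right operand of deactivation, mapping of Fig. 8(a)
        boolean nonInterruptable,      // operand of order independence
        boolean optional,              // the CTT optional operator is applied
        boolean repeatable,            // the task may iterate
        Integer repetitionCount) {}    // optional bound on the number of repetitions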
Fig. 13 shows the simplified version of the state machine in Fig. 11. This diagram is clearly more readable, because the added complexity of the substates and associated transitions of the task state active is removed from the diagram. Since all states have the stereotype <<task>> applied to them, this stereotype is not shown in the diagram. For those states that have a tagged value containing one or more values, the name of the tagged value as well as the values are shown below the state name between parentheses. For states whose corresponding task is optional, nonInterruptable or repeatable, only the name of the tagged value is shown; note that there is no such task in this example.
Taking into account the concrete semantics for the task states presented in section 4 and Table 1, we can consider the diagram in Fig. 13 to be a high-level dialog model. A complex state can correspond to a single dialog or window or a part thereof; Fig. 13 can thus describe the dynamic composition of a single window.
7 Related Work
Several approaches to defining the semantics of the temporal operators of the CTT can be found in the literature. Some provide an informal definition of the temporal operators, such as Mori et al. [7], who also present an algorithm to transform the CTT into a set of enabled task sets (ETSs). Mori et al. [8] also propose an abstract user interface model that contains a dialog model description; this notation is based on task sets and transition tasks.

A more formal definition of the CTT is given by Luyten et al. [6], who use these definitions to define an alternative transformation algorithm from the CTT to a set of ETSs. They do not give semantics for the temporal operators, except that two of them cause transitions: the enabling and deactivation operators.

Neither of the aforementioned approaches supports nested states, which means that merging task sets creates inconsistencies between the task model and the abstract user interface model.
Nobrega et al. [9] provide a mapping of the CTT to UML 2.0. They define the semantics of most of the operators by defining a mapping to UML 2.0 activity diagrams. In contrast to this work, they do not provide a definition for the suspend/resume operator. They do propose an extension to UML with a hierarchical task notation, which reuses as many UML symbols as possible for the newly introduced concepts. This notation is, however, not used to derive further specifications, such as a dialog or abstract user interface model.
Elkoutbi et al. [1] propose a semi-automated approach to derive interactive prototypes from scenarios specified using UML use cases, class diagrams and collaboration diagrams.
8 Conclusion
In this paper we proposed a general description of the task execution cycle using UML state machines and described the influence of the temporal operators on this description. An example that combined the states for a complete task model into one state machine demonstrated the complexity of the notation for larger compositions. We therefore proposed an abbreviated notation using a small UML profile. This profile adds extra semantics to the states, which can be used to generate the complete specification. The support for nested states offers enhanced expressiveness over other solutions such as state transition networks.

The usage of UML enables proven transformation tools to be applied to the models to generate dialog models at different levels of abstraction, adapted to different contexts of use. Further exploration of this route is planned as future work. Building on the work of [15] would make it possible to exploit the formal work done on Petri nets.
References
1. Elkoutbi, M., Khriss, I., Keller, R.: Automated Prototyping of User Interfaces Based on UML Scenarios. Automated Software Engineering 13(1), 5-40 (2006)
2. Harel, D.: Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming 8(3), 231-274 (1987)
3. Horrocks, I.: Constructing the User Interface with Statecharts. Addison-Wesley Professional (1999)
4. Limbourg, Q.: Multi-Path Development of User Interfaces. PhD thesis, Université catholique de Louvain (2004)
5. Logrippo, L., Faci, M., Haj-Hussein, M.: An Introduction to LOTOS: Learning by Examples. Computer Networks and ISDN Systems 23(5), 325-342 (1991)
6. Luyten, K., Clerckx, T., Coninx, K.: Derivation of a Dialog Model from a Task Model by Activity Chain Extraction. In: Jorge, J.A., Jardim Nunes, N., Falcão e Cunha, J. (eds.) DSV-IS 2003. LNCS, vol. 2844, pp. 203-217. Springer, Heidelberg (2003)
7. Mori, G., Paternò, F., Santoro, C.: CTTE: Support for Developing and Analyzing Task Models for Interactive System Design. IEEE Transactions on Software Engineering 28(8), 797-813 (2002)
8. Mori, G., Paternò, F., Santoro, C.: Design and Development of Multidevice User Interfaces through Multiple Logical Descriptions. IEEE Transactions on Software Engineering 30(8), 507-520 (2004)
9. Nóbrega, L., Nunes, N.J., Coelho, H.: Mapping ConcurTaskTrees into UML 2. In: Gilroy, S.W., Harrison, M.D. (eds.) Interactive Systems. LNCS, vol. 3941, Springer, Heidelberg (2006)
10. Nunes, N.J., Cunha, J.F.e.: Towards a UML Profile for Interaction Design: The Wisdom Approach. In: Evans, A., Kent, S., Selic, B. (eds.) UML 2000. LNCS, vol. 1939, pp. 101-116. Springer, Heidelberg (2000)
11. Object Management Group: UML 2.0 Superstructure Specification (October 8, 2004)
12. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Heidelberg (2000)
13. Paternò, F., Santoro, C.: One Model, Many Interfaces. In: Kolski, C., Vanderdonckt, J. (eds.) CADUI 2002, vol. 3, pp. 143-154. Kluwer Academic, Dordrecht (2002)
14. Sauer, S., Durksen, M., Gebel, A., Hannwacker, D.: GuiBuilder - A Tool for Model-Driven Development of Multimedia User Interfaces. In: MoDELS 2006. LNCS, vol. 214, Springer, Heidelberg (2006)
15. Trowitzsch, J., Zimmermann, A.: Using UML State Machines and Petri Nets for the Quantitative Investigation of ETCS. In: valuetools 2006: Proceedings of the 1st International Conference on Performance Evaluation Methodologies and Tools, p. 34. ACM Press, New York (2006)
16. Van den Bergh, J.: High-Level User Interface Models for Model-Driven Design of Context-Sensitive Interactive Applications. PhD thesis, Hasselt University (transnationale Universiteit Limburg) (October 2006)
Towards Method Engineering of
Model-Driven User Interface Development
1 Introduction
more empirically, because there is still resistance to the application of usability methodologies in software organizations [26]: resource constraints and lack of knowledge about usability are the factors that most influence professionals. A formal UID method, however, requires efficiency to be integrated into software development organizations. Model-based UID comes as a solution to improve efficiency by reusing models and reducing development efforts, among other benefits [3].
To make model-based UID methods applicable in the competitive reality of software development organizations, they need to be explicitly defined, with the possibility of easy adaptation when constraints pertaining to specific projects must be considered [27,33]. Software organizations and their projects have specific characteristics that require methods to be tailored; for instance, the skills and number of professionals affect how a method can be applied. UID is a creative process in which professionals need flexibility in their work to address the growing complexity of interactive systems. A rigid method is therefore no longer desirable, and there is a need to support method definition and adaptation. Given the reality of software organizations and the need to tailor methods for specific projects, the possibility of reusing pre-defined method specifications helps achieve efficiency.

Considering this scenario, our main research question is: how can method engineers define a model-based (or model-driven) UID method appropriate for the reality of the software organization and its projects?
This research work aims to support the efficient application of model-based UID methods by providing flexibility in their definition. Considering that existing methods are diffused and applied in different projects around the world, the knowledge and experience thus acquired cannot be taken for granted. It is therefore not the intention of this work to define a method, nor to compare existing methods, because we consider that the most appropriate method is one adapted to the problem domain or context of the project, a view investigated since the early 1990s [17,27].

Concerning a possible automation of this support, it is important to address issues related to the creation and maintenance of a method base, with propagation of changes in method specifications; how the model editors are integrated with the method tool; collaboration between professionals in the creation of models; the automatic or semi-automatic generation of UIs; coordination of the use of tools; change management of models; and support for the coordination of cooperative work. Solutions for these issues are appropriately addressed by technology for process automation, which allows methods to be executed. Such technology, however, requires many details that are not the focus of this work, but are the subject of other ongoing work.
This paper compares some existing solutions for the definition of methods and points out some shortcomings when considering model-based UID. In the upcoming sections, it proposes an approach for defining a model-based UID method by analyzing goals and activities, and it concludes by presenting the expected advantages and future work.
2 Related Work
A survey of Model-Based User Interface Development Environments (MB-UIDEs) [16] showed that most of them provide a methodology for UI generation. These environments, however, support the execution of the methodology by automating some
steps to generate a running UI or a specification of the UI; and even though some favor concurrent work or different sequencing possibilities, they do not allow the methodology to be adapted to the context of the project.
Many MB-UIDEs follow a formalized method [6,28,32], but their supporting tools do not provide facilities to change the sequence of the method activities, thus restricting the possibilities of adapting the method. Fig. 1 depicts the level of method flexibility of MB-UIDEs over time: the oldest systems, in the early 90s, had no method at all, except perhaps the one induced by the software; older systems like TRIDENT [5] have very limited method flexibility, since the method is completely coupled to the software and no tailoring is possible; TEALLACH [16] offers some flexibility, since the design can start from any of the task, domain and presentation models and evolve to the other models depending on the project; Cameleon-compliant software [10] is much more numerous today ([14,17,22,28,30] among others) and provides some adaptation of the method it relies on.
Fig. 1. Method flexibility of MB-UIDEs over time (none to high): CT-UIMS (c. 1990), TRIDENT, TEALLACH, and Cameleon-compliant tools (c. 2005)
The TEALLACH design process [16] aims to give designers the flexibility lacking in existing environments by providing a variety of routes through the process; from a single entry point, the designer/developer can select any model to design independently or to associate with other models. Even though this is a flexible approach to designing UIs, it still falls short of complete flexibility, because it restricts the sequence in which models can be manipulated. Its flexibility does not extend to the entire set of activities, roles, tools and artifacts; for example, a software organization aiming to apply such a method is limited to the set of models and activities implemented in the environment. In the following, we present an overview of our assessment of model-based methodologies according to three main criteria:
Explicitness. Most methodologies have some kind of method definition, but not all aspects are explicitly defined, such as the association of roles, activities, models, and tools. For instance, some define the lifecycle as a sequence of transformations between models [32]; some associate activities with the creation of models but without associating the role responsible for executing them [6]; while others have the methodology implemented in the environment but not explicitly defined. Most of
them do not mention tools in the lifecycle, because their proposal is an environment to support the lifecycle.
Flexibility. The methodologies that are part of an MB-UIDE are not flexible enough [16], although TEALLACH comes as a solution to fulfill this need. Even though it provides a flexible approach, from the point of view of software development organizations flexibility has a broader sense: the ability to change any aspect of the method and to integrate with any existing process and tool.
Reuse. Some methodologies in MB-UIDEs have a set of activities to be performed, among which there is usually a subset of non-mandatory activities that can be executed or not, depending on the project's needs. But the idea of reuse is to offer a larger set of activities, providing a wider range of possibilities for different types of projects, from which activities can be selected for the method as necessary. This type of strategy is not common in MB-UIDEs, since the methodology is usually composed of a small set of activities targeted at a specific goal, such as the use of patterns [28].
For application in real projects, existing approaches and their environments require organizations to start from scratch with the methodology available in the environment. To enhance the effect of methods, we need to adapt existing methods or create a new one that fits the characteristics of each new project [27].

In response to this demand, the term method engineering has been introduced as "the engineering discipline to design, construct and adapt methods, techniques and tools for the development of information systems" [7,8].
As an effort to address the demand for flexible methods, there are several proposals to automate method engineering; one of them, Computer-Aided Method Engineering (CAME), supports building project-specific methods [27]. CAME has two types of tools: the first is a method editor that creates a method, and the second is a generator of model editors based on the method meta-model to support the created method. This approach of generating CASE tools from the method description decreases the possibilities of applying the newly created method with external tools, which are nowadays widely accepted for modeling software systems, as proposed in [17]. This work does not mention how the proposal applies to projects in which the software organization has already standardized a set of tools.
MetaEdit+ offers a CAME environment that allows method specification, integration, management, and maintenance [33]. It focuses on reuse and maintenance aspects of methodology specifications, and provides five strategies for when requirement changes may affect both the generated models and the methodology. One detected drawback is that there is still no feature to support reuse in building relationships between methodologies. We believe that during method specification it is essential to allow integration with other methodologies, because software organizations already applying a method may want to accommodate new techniques rather than start from scratch with a brand new method.
Decamerone [19] provides a way to adapt and integrate methods stored in a method base. Mentor [29] provides patterns for method engineers to easily design a method. An important aspect is that the generated methods and/or model editors are aimed at information system development, such as database systems; such editors do not address the complexity and creativity necessary for model-based UID.
After analyzing these approaches, their major weaknesses are that MB-UIDEs focus on a specific and rather inflexible methodology, and that CAME
tools, even though they provide explicitness, flexibility and reuse, focus only on system development, leaving aside usability concerns and therefore not fully addressing the definition of model-based UID methods. MB-UIDEs do not allow the definition or adaptation of a method according to the characteristics of the organization and project, which makes it difficult to introduce certain activities that support model-based UID, such as version control. CAME tools are limited to software engineering models and method fragments, and since they use a product meta-model to generate model editors, they could profit from a meta-model for UI models. There is therefore a need for interaction between MB-UIDEs and method engineering environments.
In this paper, our goal is to suggest a model-based user interface method engineering approach that can address issues related to method engineering for model-based UID. We investigate the model-based UID activities performed by designers and other usability team members, to envision how the usability goals specified by stakeholders at the beginning of a project affect the way the usability team works. In other words, we seek to demonstrate the relationship between model-based UID method activities and the desired usability goals, and how this association helps outline a method that best suits the context of the project.
3 UID Activities
Considering the evolution of MB-UIDEs and their methodologies over time, the increase in flexibility is noticeable, as presented in Fig. 1. The Cameleon Reference Framework [10] brings a solution that supports the realization of multiple types of development paths within a single framework. This framework structures a set of models that support current user interaction challenges: five models distributed over four levels of abstraction that express the UID life cycle for different contexts of use. These levels of abstraction are aligned with the model-driven approach, which aims to reduce both the amount of developer effort and the complexity of the models used [18].
The UsiXML language [22] was created as an XML extension to describe UIs for multiple contexts of use, such as graphical, auditory and vocal user interfaces, virtual reality, and multimodal user interfaces. As a language explicitly based on the Cameleon Reference Framework, it adopts four development steps: 1) Task & Concepts, 2) Abstract User Interface (AUI), 3) Concrete User Interface (CUI), and 4) Final UI. The first step produces the task model, domain model and context model; the second step produces the AUI; and the third step produces the CUI. The language does not consider the Final UI, as the framework does. The UsiXML methodology is structured as presented in Fig. 2 [30].
The UsiXML language will be used to exemplify our proposal in the following
sections, since it provides the necessary support to represent models in a structured
form and it supports the flexibility provided by the Cameleon Framework.
There is a suite of tools, automated techniques, and a framework to support the
creation of models, and there is also an ongoing effort to define a detailed model-based
UID method. In what follows, we explain how we intend to define such a method and
how to integrate it with a software development process.
In this section, we describe the main theoretical concepts considered as the foundation
of our proposal: model-based UID method engineering.
The proposed structure is based on the definition of method content from the Soft-
ware Process Engineering Metamodel (SPEM), a meta-model for defining software
development processes [25]. Considering that SPEM is limited to the minimal ele-
ments necessary to define any software and systems development process, without
adding specific features for particular development domains or disciplines [25], we
aim to add specific elements for UID. The main goal is to make usability a central
point not only for UI designers, but even before they come into action during software
development processes, by making usability a concern for method engineers as well.
Fig. 3 depicts a class diagram with the most relevant elements for the definition of
a model-based UID method. This proposal does not yet address the organization of
method activities in a process lifecycle, nor does it consider method enactment (or
execution); it shall evolve progressively to do so. The proposal extends the basic
elements of a method engineering notation by associating usability goals with
activities, as presented in the next sub-section. In general, a method is defined by
describing Activities, which are selected for a Project based on Usability Goals.
Activities are performed by Roles and act upon Work Products, using Tools to
manage the work products, which can be UI Models.
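To make these relationships concrete, the following is a minimal sketch, in Java, of how the elements of Fig. 3 could be encoded. All class and field names are our own illustrative assumptions; this is not part of SPEM or of the authors' tooling.

```java
import java.util.List;

// Illustrative sketch of the meta-model elements of Fig. 3 (names are ours).
class UsabilityGoal { String description; }

class WorkProduct { String name; }              // asset used, produced or updated by activities

class UIModel extends WorkProduct { }           // e.g. task model, AUI model, CUI model

class Tool {                                    // a tool manages one or more kinds of work products
    String name;
    List<Class<? extends WorkProduct>> managedKinds;
}

class Role {                                    // competencies needed to perform activities
    String name;
    List<String> competencies;
}

class Activity {                                // work performed by roles upon work products
    String name;
    List<UsabilityGoal> goals;                  // one activity may serve several usability goals
    List<Role> performers;
    List<WorkProduct> inputs, outputs;
    Tool tool;
}

class Project {                                 // activities are selected for a project based on its goals
    String name;
    List<UsabilityGoal> goals;
    List<Activity> activities;
}
```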
Usability Goals should be established early in the project to drive professionals
into focusing their UID efforts, and to serve as precise resources against which they
can evaluate their work. Usability goals can shorten the UID lifecycle, as stated in the
Usability Engineering Lifecycle [23]. That methodology establishes usability goals in
the requirements analysis phase and uses them to assess UIs during usability
evaluation. In our work, usability goals have yet another purpose: they are used to
identify the activities that are appropriate for a specific project. The impact that
usability goals can have on method definition is to keep method engineers (as well as
project managers) aligned with usability from the beginning until the end of the
project, so that all stakeholders see the importance of checking whether or not such
goals were accomplished in the end.
Projects are composed of activities that are performed to develop a system. Activi-
ties represent the work that is performed by roles when acting upon work products
and using a tool. Roles define the set of competencies that professionals must have to
fill such a role, performing activities and being responsible for work products. Work
Products are assets or artifacts that are used, produced or updated during the
execution of activities using a tool; they can be inputs or outputs of activities
performed by roles. For a model-based UID method, the main work products are UI
models. Tools support the execution of activities by managing work products; that is,
a tool can manage one or more kinds of work products.
Activities can also be supported by other kinds of implementation besides tools,
when it is necessary to provide functionality that does not need a tool or that must be
available in more than one tool. In such cases, and considering the current technology
for process automation, we propose the use of web services.
In general, web services give access to functionality via the web using a set of
open standards that make the interaction independent of implementation aspects, such
as the operating system platform and the programming language used [12]. This
technology promotes a high level of cohesion and a low level of coupling, which
facilitates assembling services to compose a method. The Business Process Execution
Language (BPEL) [4] was defined by a consortium of industry vendors to support
such service assembly. It has reached good maturity and is supported by the main
architectures available in the market, such as JEE and .NET.
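As an illustration of this idea, the sketch below exposes a single UID activity as a web service using the JAX-WS API (bundled with Java SE 6; a separate dependency on newer JDKs). The service name, operation and URL are hypothetical, chosen only to show the shape of such an endpoint; they are not part of the authors' infrastructure.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical endpoint exposing one method activity as a web service,
// so that a BPEL-orchestrated method definition could invoke it.
@WebService
public class CreateTaskModelService {

    // Placeholder operation: in a real setting this would trigger the
    // "Create task model" activity and return a reference to the UI model.
    public String createTaskModel(String projectId) {
        return "task-model-for-" + projectId;
    }

    public static void main(String[] args) {
        // Publish the service locally; a BPEL engine could then compose it
        // with other activity services to enact (part of) a method.
        Endpoint.publish("http://localhost:8080/createTaskModel",
                         new CreateTaskModelService());
    }
}
```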
An activity can be associated with one or more usability goals, as is the case for
the UID activity "Create task model". This does not mean that, once the position
and ordering of this activity have been defined, it has to be repeated twice for the
different goals to be accomplished; rather, it means that if a project needs to achieve
both goals, one execution of this activity addresses both of them.
Depending on the usability goals, activities can be selected independently of each
other, as is the case for the activities "Create task model" and "Create context of
use model", each with its own specific goal. But when a usability goal triggers
more than one activity, their order of execution is clearly specified because one activ-
ity has a direct impact on the other; this is the case when the activity "Create
context of use model" must be executed before the activity "Create task model" for
the usability goal "Adapt the user interaction according to users' personal characteristics".
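A minimal sketch of this goal-driven selection logic is given below; for brevity it uses plain strings for goal and activity names. The deduplication and the per-goal ordering are the two behaviours described above; the class and method names are our own assumptions.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;

// Sketch: derive a project's activity list from its usability goals.
// Each goal maps to an ordered list of activity names; the order inside a
// goal's list encodes dependencies such as "Create context of use model"
// before "Create task model". A LinkedHashSet preserves that order while
// ensuring an activity shared by several goals is scheduled only once.
class ActivitySelector {

    static List<String> selectActivities(List<String> projectGoals,
                                         Map<String, List<String>> goalToActivities) {
        LinkedHashSet<String> selected = new LinkedHashSet<>();
        for (String goal : projectGoals) {
            selected.addAll(goalToActivities.getOrDefault(goal, Collections.emptyList()));
        }
        return new ArrayList<>(selected);
    }
}
```

For instance, mapping the goal "Adapt the user interaction according to users' personal characteristics" to the pair ("Create context of use model", "Create task model") keeps that order in the result, and a second goal that also requires "Create task model" does not schedule it twice.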
In cases where stakeholders state that they want some kind of automation in UID to
achieve more productivity, certain activities can be selected depending on the goal.
For instance, the activity "Transform task and domain models into AUI model" is ap-
propriate when various devices are considered, and the activity "Transform AUI
model into CUI model" also improves designers' productivity, since they receive UIs
with the necessary objects as a starting point from which to work on the look-and-feel.
The activity "Transform task and domain models into CUI model" is useful when one
specific device is targeted.
UID activities that are commonly used may already be included in software devel-
opment processes, such as defining a style guide, prototyping, and usability evaluation,
among others. But in cases where such activities are not yet part of the organizational
software process, usability goals must be considered in order to apply these activi-
ties correctly. It is our intention to further extend the list in Table 1 with usability
goals associated with such activities.
Tool support can be very useful for method engineers' productivity when defining
or customizing methods. Deciding which activities are the most appropriate for
specific projects requires knowledge and experience, but tools that help maintain an
easily accessible base of experiences and lessons learned can add value to their work.
Therefore, in addition to the strategy presented in the previous section, we selected
the Business Process Modeling Notation (BPMN) as a standard with available tools
to support method engineers.
BPMN was proposed for the representation of organizational processes [24], and
we propose to use BPMN in method definition because: i) it has become a standard
for process modeling; ii) there are many tools available in the market implementing
it; iii) it is intended as a human-readable layer that hides the complexity of designing
transactional business processes; and iv) BPMN can be transformed into BPEL to be
automated using web services, as described at the end of Section 3.1.
There are many tools available that implement BPMN, which provide the neces-
sary support for method engineers following a common structure, as in the tool pre-
sented in Fig. 4. But, after assessing model-based UID methods, we noticed the need
to use method engineering techniques to improve method definition. Therefore, we
have analyzed the alignment of BPMN with a software engineering notation, more
specifically with SPEM. The alignment and complementarity are confirmed by the
SPEM documentation [25]: "SPEM 2.0 does not aim to be a generic process modeling
language, nor does it even provide its own behavior modeling concepts. SPEM 2.0
focuses on providing the additional information structures that you need for processes
modeled with UML 2.0 Activities or BPMN/BPDM to describe an actual development
process." Using a process modeling tool to define a method, we followed three steps,
as pointed out in Fig. 4:
1. Definition of activities: we have defined a list of activities for a model-based UID
method based on the Cameleon Framework.
2. Association of BPMN and SPEM: we have associated BPMN elements with
SPEM elements to give meaning to, and make use of, business process elements in
the method engineering domain.
3. Reuse of activities: drag and drop activities from the pre-defined list (on the left
of the tool) and reuse them when defining the method for a specific project, in the
desired or recommended order.
The method defined on the right side of the tool in Fig. 4 is clearly related to the
concepts defined in Fig. 3. For example, the Role "Usability Expert" performs the Ac-
tivity "Create AUI" and acts upon (by creating) the Work Product, which in this case
is a UI Model ("AUI Model"), by using the Tool IdealXML. To complete the picture,
this activity is present in this method because the stakeholders stated the Usability
Goal "Design for many devices", which is directly associated with the activity
"Create AUI".
After analyzing which activities are important to achieve certain usability goals and
selecting the appropriate ones, it becomes easier to define a method. We must fur-
thermore be able to define methods that are applicable in software development
projects and that also provide support for model-based UID. In the following, we
demonstrate an example of integrating model-based UID activities into a software
development process.
4 Integration of Methods
In an attempt to make UID methods really effective in real projects, there have been
various efforts to bridge the gap between software engineering and HCI. Some pro-
posals focus on user involvement [15], on how to help software engineers execute us-
ability techniques [13], on addressing usability issues using architectural patterns
[20], others are product-oriented and adapt an object-oriented notation to support HCI
techniques [11], but all aim at making usability techniques applicable in real-life soft-
ware development projects.
The technique of defining project-specific methods from parts of existing methods is
called method assembly [8], and it can produce a powerful new method. Using this
technique, we integrate the best from both domains: activities from a widely accepted
commercial software development process, the Rational Unified Process (RUP) [21],
and activities for creating UI models. Works such as [9] demonstrate that integration
with the RUP can make model-driven methods in general more accessible to a wider
audience of software engineers.
While some HCI methods have specific and unique structures, like the Usability
Engineering Lifecycle [23], many proposals that integrate SE and HCI are based on
the RUP structure: the integration of development activities with usability techniques
[13] is based on the RUP process structure, and the UCD approach [15] creates a
new discipline for usability design within the RUP.
Consider an example of the integration of a model-based UID method and a software
development process. Picture a software organization that already has a well-deployed
software development process, such as the RUP, and wants to focus on UID. For
instance, when the organization already has a standard way to perform tests, reviews,
and control of change requests, but wants to improve its way of working with models,
a smooth integration is possible. In Fig. 5, we present activities related to model-based
UID (create context of use model and create AUI) and SE activities (review
requirements, review the design, and submit change request).
Our proposal supports this integration scenario through the association of goals
with activities that can be appropriately allocated in the method. For instance, if
a new project aims at designing UIs for many devices, the activity "Create AUI" is
included in the organizational software process to accomplish this usability goal, as
specified in Table 1. In addition, the method engineer might also need support in de-
fining the sequence of the activities; therefore, a proposed model-based UID method
that integrates UID activities and RUP activities could be provided as a source of guid-
ance, which is a subject for future work.
5 Conclusion
The main goals we intend to achieve with our proposal of model-based UID method
engineering are to help method engineers create methods more efficiently and to
make model-based UID methods applicable in the competitive reality of software
development companies.
Method engineers can define a model-based UID method appropriate to the reality
of the software organization and its projects using an activity-based strategy. This
strategy is founded on usability goals and brings together two different domains:
method engineering and UID methods. In other words, when method engineers rely
on usability goals to define a method, they also profit from clearly specified goals
that must be accomplished after each activity is concluded.
Our ongoing and future work is aimed at extending this proposal to address the
organization and sequencing of UID activities in a process lifecycle, such as the organi-
zation of activities into phases and disciplines; providing guidance for the integration of
UID and software engineering activities; defining activities related to UID, but not
necessarily to model-based design, and associating them with usability goals; and pro-
posing a solution to execute the method and a strategy for model traceability [1].
References
1. Aizenbud-Reshef, N., Nolan, B.T., Rubin, J., Shaham-Gafni, Y.: Model traceability. IBM Systems Journal 45(3), 515–526 (2006)
2. Ayed, M.B., Ralyté, J., Rolland, C.: Constructing the Lyee method with a method engineering approach. Knowledge-Based Systems 17(7-8), 239–248 (2004)
3. Barclay, P.J., Griffiths, T., McKirdy, J., Kennedy, J.B., Cooper, R., Paton, N.W., Gray, P.: Teallach - a flexible user-interface development environment for object database applications. Journal of Visual Language and Computing 14(1), 47–77 (2003)
4. BEA Systems, IBM Corporation, Microsoft Corporation, SAP AG, Siebel Systems: Business Process Execution Language for Web Services, V1.1 (May 2003)
5. Bodart, F., Hennebert, A.-M., Leheureux, J.-M., Vanderdonckt, J.: Computer-Aided Window Identification in Trident. In: Nordbyn, K., Helmersen, P.H., Gilmore, D.J., Arnesen, S.A. (eds.) Proc. of 5th IFIP TC 13 Int. Conf. on Human-Computer Interaction Interact 1995, Lillehammer, July 1995, pp. 331–336. Chapman & Hall, London (1995)
6. Botterweck, G., Hampe, J.F.: Capturing the Requirements for Multiple User Interfaces. In: Proc. of 11th Australian Workshop on Requirements Engineering AWRE 2006, Adelaide, December 9, 2006. Univ. of South Australia (2006)
7. Brinkkemper, S.: Method engineering: Engineering of information systems development methods and tools. Information and Software Technology 38(4), 275–280 (1996)
8. Brinkkemper, S., Saeki, M., Harmsen, F.: Meta-Modelling Based Assembly Techniques for Situational Method Engineering. Information Systems 24(3), 209–228 (1999)
9. Brown, A.W., Iyengar, S., Johnston, S.: A Rational approach to model-driven development. IBM Systems Journal 45(3), 463–480 (2006)
10. Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L., Vanderdonckt, J.: A Unifying Reference Framework for Multi-Target User Interfaces. Interacting with Computers 15(3), 289–308 (2003)
11. Costa, D., Nóbrega, L., Nunes, N.: An MDA Approach for Generating Web Interfaces with UML ConcurTaskTrees and Canonical Abstract Prototypes. In: Proc. of 5th Int. Workshop on Task Models and Diagrams for User Interface Design Tamodia 2006. LNCS, vol. 4385, pp. 95–102. Springer, Heidelberg (2006)
12. Fensel, D., Lausen, H., Polleres, A., Bruijn, J., Stollberg, M., Roman, D., Domingue, J.: Enabling Semantic Web Services - The Web Service Modeling Ontology. Springer, Berlin (2007)
13. Ferré, X., Juristo, N., Moreno, A.M.: Framework for Integrating Usability Practices into the Software Process. In: PROFES 2005, Proc. of 6th Int. Conf. on Product Focused Software Process Improvement, Oulu, June 13-18, 2005. LNCS, vol. 3547, pp. 202–215. Springer, Heidelberg (2005)
14. Furtado, E., Furtado, J.J.V., Silva, W.B., Rodrigues, D.W.T., Taddeo, L.S., Limbourg, Q., Vanderdonckt, J.: An Ontology-Based Method for Universal Design of User Interfaces. In: Seffah, A., Radhakrishnan, T., Canals, G. (eds.) Proc. of Workshop on Multiple User Interfaces over the Internet: Engineering and Applications Trends MUI 2001, Lille, September 10, 2001
15. Göransson, B., Gulliksen, J., Boivie, I.: The usability design process - integrating user-centered systems design in the software development process. Software Process: Improvement and Practice 8(2), 111–131 (2003)
16. Griffiths, T., Barclay, P.J., McKirdy, J., Paton, N.W., Gray, P.D., Kennedy, J.B., Cooper, R., Goble, C.A., West, A., Smyth, M.: Teallach: A Model-Based User Interface Development Environment for Object Databases. In: Proc. of UIDIS 1999, pp. 86–96. IEEE Computer Society Press, Los Alamitos (1999)
17. Grundy, J.C., Venable, J.R.: Towards an integrated environment for method engineering. In: Proc. of IFIP WG 8.1 Conf. on Method Engineering, pp. 45–62. Chapman and Hall, Sydney, Australia (1996)
18. Hailpern, B., Tarr, P.: Model-driven development: The good, the bad, and the ugly. IBM Systems Journal 45(3), 451–461 (2006)
19. Harmsen, F.: Situational Method Engineering. Moret Ernst & Young Management Consultants (1997)
20. Juristo, N., López, M., Moreno, A.M., Sánchez-Segura, M.I.: Improving software usability through architectural patterns. In: ICSE Workshop on SE-HCI 2003, pp. 12–19 (2003)
21. Kruchten, Ph.: The Rational Unified Process - An Introduction. Addison-Wesley, New Jersey (2000)
22. Limbourg, Q., Vanderdonckt, J.: UsiXML: A User Interface Description Language Supporting Multiple Levels of Independence. In: Matera, M., Comai, S. (eds.) Engineering Advanced Web Applications, pp. 325–338. Rinton Press, Paramus (2004)
23. Mayhew, D.: The Usability Engineering Lifecycle - A Practitioner's Handbook for User Interface Design. Morgan Kaufmann Publishers, San Francisco (1999)
24. OMG: Business Process Modeling Notation Specification, V1.0 (February 2006)
25. OMG: Software Process Engineering Metamodel Specification, V2.0 (February 2007)
26. Rosenbaum, S., Rohn, J.A., Humburg, J.: A toolkit for strategic usability: Results from workshops, panels and surveys. In: Proc. of ACM Conf. on Human Factors in Computing Systems CHI 2000, pp. 337–344. ACM Press, New York (2000)
27. Saeki, M.: CAME: The first step to automated software engineering. In: Proc. of the OOPSLA 2003 Workshop on Process Engineering for Object-Oriented and Component-Based Development, pp. 7–18 (2003)
28. Sinnig, D., Gaffar, A., Reichart, D., Seffah, A., Forbrig, P.: Patterns in Model-Based Engineering. In: Proc. of CADUI 2004, pp. 195–208. Kluwer Academic Publishers, Dordrecht (2004)
29. Si-Said, S., Rolland, C., Grosz, G.: MENTOR: A Computer Aided Requirements Engineering Environment. In: Constantopoulos, P., Vassiliou, Y., Mylopoulos, J. (eds.) CAiSE 1996. LNCS, vol. 1080, pp. 22–43. Springer, Heidelberg (1996)
30. Vanderdonckt, J.: A MDA-Compliant Environment for Developing User Interfaces of Information Systems. In: Pastor, Ó., Falcão e Cunha, J. (eds.) CAiSE 2005. LNCS, vol. 3520, pp. 16–31. Springer, Heidelberg (2005)
31. Visual Paradigm: Business Process Visual Architect. Available at: http://www.visual-paradigm.com/product/bpva/
32. Wolff, A., Forbrig, P., Dittmar, A., Reichart, D.: Linking GUI elements to tasks: supporting an evolutionary design process. In: Proc. of TAMODIA 2005, pp. 27–34. ACM Press, New York (2005)
33. Zhang, Z., Lyytinen, K.: A Framework for Component Reuse in a Metamodelling-Based Software Development. Requirements Engineering 6(2), 116–131 (2001)
Modeling Group Artifact Adoption for Awareness in
Activity-Focused Co-located Meetings
1 Introduction
Advances in technology over the last twenty years have enabled work groups to
become increasingly geographically distributed. However, this is not the way that
many small organizations choose to work. Work groups whose members usually work
alone or in sub-groups, but hold co-located meetings to schedule their individual
tasks, discuss and progress group objectives, and build group knowledge, are a
common pattern in reality [21], and the laboratory-based study reported in this paper
has been designed to emulate this work pattern. In this paper, we focus on the weekly
co-located meetings of the groups in our study.
The development of groupware that effectively supports work groups is always
limited by how well groups are understood and, consequently, how well they can be
modeled to support system design. A better understanding of group activities would
also provide a better basis to determine requirements for collaborative systems [14]. In
this paper we report on an empirical study of groups that has led to the development of a
taskwork support model that can be used to aid group awareness.
Awareness has previously been taken to mean group members' sensitivity to each
other's behavior whilst engaged in their own activities [10], although it is sometimes
used to describe awareness of more specific elements of group work, such as
collaboration [17] or workspace [9]. In this paper we show that awareness of the task
is as important as awareness of the group when complex tasks are attempted.
Stahl [23] suggests that knowledge can be viewed as a type of artifact in group
work. Dealing with knowledge in this way presents us with some new challenges.
For example, something physical like a mobile phone would generally be identified as
a single artifact, and two phones as two artifacts, but with intangibles such as
knowledge it is harder to identify this boundary. It is also important to note that there
is a hierarchical nature to knowledge, where some knowledge artifacts exist at a meta-
level to groups of others, providing such things as organizational information about
them. Practically, however, group knowledge is a resource that is used to inform
other activities. In the model reported in this paper, the development of group-owned
knowledge artifacts supports the understanding of the set task and its sub-division into
well-bounded, clearly understood sub-tasks.
Artifacts are adopted into a group through negotiation, a concept that has also been
extended to include knowledge and information [24]. Olson and Olson [20] saw this
process as one of clarification, and split clarification activities according to whether
the group was clarifying issues, goals or other activities. The negotiation process can
lead to the adaptation of artifacts as well as their adoption [7], and this process leads
to a difference between the artifact proposed by an individual and what is
finally used by the group. The nature of this adaptation depends upon the physical
adaptability of the artifact; if a tangible artifact is not easily adaptable, a group can
adapt their understanding of it instead, so that novel uses develop as group emergent
knowledge. Rittel and Webber [22] claim that in "wicked" problems, or those that are
essentially unique or ill defined, re-bounding the issues is an essential part of the
negotiation process. In this paper we also consider the reverse influence: how
re-bounding the task affects the adoption of knowledge artifacts.
The principal difference between the groups was that one was asked to support
their survey and produce a poster using only pen and paper, whereas the other group
was asked to maintain their group records and produce the poster on a computer.
Members of both groups had individual diaries, in which they were asked to record
the work that they undertook during the week between meetings, as well as any
communication with other group members relating to the survey. There were no
restrictions placed on the groups as to how they communicated among themselves
between meetings.
At the beginning of each meeting, the room was always laid out in the same way
for both groups, including the distribution of resources. There was a central table
around which the chairs were initially placed; the other resources were distributed
around the room, with the group record (notepad or laptop) on a desk at one end of
the room and the resources to make a poster (desktop computer, or
pens/paper/scissors/glue) on a desk at the other end of the room.
The layout of the room gave the group members three distinct areas in which they
could work. In the middle of the room they had their meeting area, and at the two
ends they had resource areas. The purpose of defining these spaces was to observe
how the group divided its members according to the sub-tasks they wanted to work on
at any given time.
There was no restriction placed upon group members as to whether, when or by
what means they could communicate between the fixed meetings. If they felt that they
required extra meetings, this was allowed too. In fact, only one extra meeting
was requested, by one group, during the last week of the study, when they preferred
to split their work on poster production over two days, into planning and output
sessions.
Normally, communication between meetings was limited to e-mails or unplanned
face-to-face contact (i.e., bumping into each other on campus). Group members
recorded these interactions in their individual diaries and copies of e-mails were
forwarded to the researchers. Video recordings were made of all the scheduled co-
located meetings, using two fixed cameras and additional cameras or computer output
capture, as appropriate, to capture a quad mixed image.
We encoded the verbal and non-verbal communication of group members in the
co-located meetings using SYMLOG, a system for the multiple-level observation of
groups devised by Bales and Cohen [2]. The system enables an observer to construct
messages that describe group behavior. One feature of SYMLOG is that it separates
the behavior of the group members towards the target of each interaction from their
behavior towards the subject of that interaction, which we have used to analyze
interactions specific to taskwork and task development.
In making this coding, we discovered an interesting recurring pattern in the
encoded meetings: specific periods of activity-focused interaction. We identified
these by analyzing the communication instances in which the group's task was the
subject of the interaction and the target was one, some or all of the other group
members. The recurring pattern was one where a group member had a brief period of
clear understanding about part of the task, which they communicated to one or more
other group members. Whenever this type of interaction was observed, the group
made significant progress towards task completion.
Fig. 1. The negotiation process for group artifact adoption, showing how the artifact's in-group
ownership shifts between the individual and group levels
Artifacts were adopted (or rejected) by the groups through negotiation, followed by
a sign-off. The negotiation process (Figure 1) begins with an artifact being
introduced to the group by one of its members. At this point, the introducer can be
considered the sponsor of the artifact, and the discussion begins with them making a
case for it. Whether the artifact is tangible or not, the sponsor's case will be linked
to how the artifact progresses a sub-task and how it fits with the overall understanding
of the main goal at that time.
How well the proposal meets the needs of the group depends on the common
ground [6] that the group members can draw upon to understand a shared perspective.
So, in early group meetings these negotiation processes drive the group towards
shared understanding, which is itself the negotiation and adoption of group
knowledge. Later, these knowledge artifacts help establish group norms as part of the
group members' shared history [8], which limit the appropriation of further artifacts
to within defined boundaries.
The negotiation process that leads to the group deciding whether or not to adopt the
artifact can also lead to the generation of further knowledge artifacts, which are also,
implicitly or explicitly, proposed and considered for adoption. This multi-threading is
partly responsible for the difficulty that groups have in seeing this process as they
perform it. Once group norms begin to be established, the negotiation processes
become quicker and more focused, because fewer concurrent negotiations are
required to reach a point of common understanding and make a decision.
When the group makes a positive decision to adopt an artifact, the individual has to
relinquish control of it. It is no longer theirs to shape in terms of content or use,
without reference to the group. By contrast, if an artifact is not adopted by the group,
then it is returned to the individual. Often the same artifacts, tangible or otherwise,
are re-presented to the group at other times, when the proposer thinks that something
has changed in the task understanding to justify another attempt.
We have analyzed the data to produce a taskwork support model (Figure 2), which
explains the behaviors and activities that take place in low-level group work. It can
be used by designers to help support the interactions that co-located groups use to
understand and complete tasks. Tasks are frequently carried out with various levels
of interleaving and interruption [15], and the task of structuring a group's work is no
exception. The model restructures this complexity into a series of recurring states, so
that it can be better understood. Each state in the model represents a key phase of
group interaction, through which the group gradually understands and completes
their original unstructured, complex task.
The periods of activity-focused interaction that we observed progress the group
within a particular state and make it necessary for them to shift states, as shown by
the arrows in the model, when their taskwork needs to develop in a different context.
The model identifies six key phases within group taskwork that need to be
supported. Each of these can be supported by awareness of a group's artifact
adoption, and of how adopted artifacts in turn drive activity-focused interaction.
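As a rough sketch of how a groupware designer might encode this model, the enum below lists the six phases described in the remainder of this section, together with some of the state shifts the text reports. The identifiers are our own labels (in particular, SUPPORT_SUBTASKS stands for the phase concerned with managing the artifact repository), and the transition set is a partial reading of the paper's observations, not a reproduction of Figure 2.

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of the taskwork support model as a state machine (labels are ours).
enum TaskworkState {
    UNDERSTAND_TASK,     // pool what members know; skills become knowledge artifacts
    BOUND_TASK,          // define or re-define the boundary of the main task
    SUPPORT_SUBTASKS,    // tie existing artifacts to sub-tasks as available resources
    DEVELOP_SUBTASKS,    // redefine or sub-divide sub-tasks as needs change
    DISTRIBUTE_WORK,     // allocate sub-tasks (and their artifacts) to members
    COMPLETE_SUBTASKS;   // negotiate group sign-off of attempted sub-tasks

    // Example shifts reported in the text: supporting a sub-task can lead to
    // distributing work, to sign-off, or back to developing sub-tasks;
    // developing sub-tasks was observed to precede re-bounding the task;
    // a failed sign-off forces the group to re-bound the task.
    Set<TaskworkState> nextStates() {
        switch (this) {
            case UNDERSTAND_TASK:  return EnumSet.of(BOUND_TASK, SUPPORT_SUBTASKS);
            case BOUND_TASK:       return EnumSet.of(UNDERSTAND_TASK, SUPPORT_SUBTASKS);
            case SUPPORT_SUBTASKS: return EnumSet.of(DISTRIBUTE_WORK, COMPLETE_SUBTASKS, DEVELOP_SUBTASKS);
            case DEVELOP_SUBTASKS: return EnumSet.of(BOUND_TASK);
            case DISTRIBUTE_WORK:  return EnumSet.of(COMPLETE_SUBTASKS);
            case COMPLETE_SUBTASKS: return EnumSet.of(BOUND_TASK, DEVELOP_SUBTASKS);
        }
        return EnumSet.noneOf(TaskworkState.class);
    }
}
```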
Understanding the task. This is usually the first problem a new group needs to face,
where a complex task needs to be assessed and group members contribute what they
think they understand about it. For the flora and fauna survey, both groups first tried
to identify skills that they had within the group that might help them progress the task.
In terms of artifact negotiation and adoption, the acceptance that someone has a
potentially useful skill becomes a group knowledge artifact. The negotiation process
involves not only a group acceptance that one of their members has a particular skill,
but also that it is relevant and useful to the task and so their perceived understanding
of the task increases.
At some point the group members become aware that their understanding of the
task has increased to a level where they need to use the new understanding. This is
the point at which they shift state with a period of activity-focused interaction, with
one or more group members deliberately changing the focus of the group to identify
sub-tasks or to consider the main task boundary.
This phase was continually revisited in the flora and fauna survey as individual and
group knowledge increased, providing new insights into the original requirement.
Because none of the participants were experts in flora and fauna, they were forced to
continually revise what they knew about extrapolating their observations to the rest of
the environment. For example, there is a period early on in the second meeting of one
group where a group member, STA, uses his report on his sub-task progress to
question the detail that the group is looking for.
STA: "One question I have is how detailed do we go on bugs?"
The nature of this communication shows how the speaker's interaction with the
team and task can have different concurrent moods. To the group, he is submissive
but friendly: he is genuinely seeking their opinion and his tone suggests that he
appreciates their input. At the same time, however, the speaker is demonstrating
control over the task: he doesn't know how to overcome his problem, which is why
he is asking the group, but he has developed a clearer understanding of what the
problem is, and so is taking personal control of the task development by asking the
question.
The impact of this statement on the task development is that the group now has to
define part of the task more closely and think about how this affects sub-tasks that
they have already identified, as well as potential new ones. It also begins a
knowledge artifact adoption cycle. Although it isn't fully formed, the knowledge
artifact proposed by STA is an entity containing the group's understanding of their
requirement with respect to insects.
Bounding the task. In order to limit and focus the work, group members will try to
define or redefine the boundary of the task. Such a definition requires the approval of
other group members and changes in the boundary definition can lead to a reappraisal
of outstanding sub-tasks.
Again, the shared understanding of the task boundary is a knowledge artifact that is
proposed, negotiated and then accepted into the group's domain. If the perceived
boundary of the task changes, then the next group state will be to focus back on
understanding the task within the new domain. Previously accepted knowledge
artifacts may then need to be modified by the group. This is an example of the
task-artifact cycle [5] working at the micro level.
In the flora and fauna survey, one of the biggest problems each group had to
overcome was deciding what was possible within the four-week survey period. In
particular, they had to resolve the competing pressures of breadth versus depth in the
survey. The following dialogue comes from one of these discussions:
TIC: "Common things we can deal with, but obviously there's going to be like
a thousand types of plant."
TIH: "I think we should aim at the big things, and not worry about the little
details."
Fig. 2. The Taskwork Support Model, showing the interactions required to understand and
complete a complex, unstructured task
Although this example shows a more negative attitude towards the task, it still
exhibits awareness of what is required to progress it. TIC has identified a specific
problem with the granularity of the data that they are trying to gather and, in voicing
this issue, is encouraging his teammates to re-evaluate their plans for data gathering.
This was negotiated within the groups several times, but each time they would reach
a point at which someone decided they had the correct balance and proposed this to
the group. Once accepted, this naturally led the group members to reconsider what
they now thought the task meant, what they understood and what was still missing.
When the group is operating in this state, it needs to manage its repository of
artifacts so that they support the sub-tasks as available resources. The negotiation
process in the group is aimed at defining meta-level knowledge artifacts that tie
existing artifacts, tangible or otherwise, into a package that supports a low-level
goal.
The conversation in this example shows the difficulty that groups have in framing
their existing knowledge in a way that is suitably structured for the way they decide to
split tasks. In order that some sub-tasks can be performed by individuals or sub-
groups, the group has to work very hard so that the correct group knowledge is
explicitly tied to the correct sub-task, in a way the whole group agrees upon.
We observed that the outcome or breakdown of this negotiation process could
move the group to three other states. If the negotiation process led to agreement that
the group had a fully supported sub-task, then usually at some point there would be a
phase of activity-focused interaction that led the group to move to the state where
they negotiated the allocation of work instead. Occasionally, however, someone
would identify that the group's knowledge development had given the group
sufficient resources to complete some sub-task, and then the activity-focused
interaction would shift the group's state to negotiating sign-off for completed
sub-tasks.
At other times, the negotiation of sub-tasks led to the creation of knowledge
artifacts that group members identified as important in developing existing sub-tasks
and then the new knowledge would be used to shift the group into the state of
developing existing sub-tasks.
When a group has co-located meetings as part of primarily distributed work, as in
the flora and fauna study, this state is critical to the success of the meeting. Group
members leave with a schedule of tasks and a personal mandate to use a subset of the
group's artifacts to try to progress or complete those tasks before the next meeting.
Developing sub-tasks. As the group develops its understanding of the main task, they
may need to redefine sub-tasks because their needs have changed, or they may see
more complexity in a sub-task that shows it needs to be further sub-divided or
modified.
In the flora and fauna study, this state was always shown to be a precursor to
re-bounding the main task. During the negotiation of how sub-tasks should be defined,
a group member always noticed that the newly created knowledge artifacts had
challenged their existing understanding of the boundary of the task. In our particular
study, we often observed that this was triggered by discussions of extra complexity
that had been identified during data gathering between meetings.
In the following dialogue, the group is challenged by MAT to define more clearly
what their output is going to be. This is an example of how clear activity focus can be
generated by group members challenging each other to improve on their ideas.
MAT's original question is not itself clearly activity focused (he had no particular
insight), but it forced the team to collaborate in defining their approach to the
problem more clearly.
MAT: "Have we any thought at all on how we're going to present this? If we
have any idea now, it might save us hassle further down the line."
ADA: "The way I'd imagined was that we'd draw a map on it, with little lines
coming off, but that might be incredibly busy, so we might have to get selective with
the pictures."
The discussion continues between MAT and ADA, but then DUN says:
DUN: "I thought we were going to do areas, the areas that we identified as being
similar."
This is controlled by ADA, who shows that the two ideas are the same.
ADA: "But that would be an elaboration of the map idea, yeah?"
From the progression of this sub-task, the group are now able to re-evaluate what
they have been doing individually, and how this now fits into the overall picture. If
the sub-task itself is sufficiently complex (it may only be defined as an area of work
the group knows it needs to address), then this state becomes a new iteration of the
whole taskwork support model, but at a lower level.
This example clearly shows the negotiation process for the adoption of knowledge
into a group. ADA starts with a very clear idea of what he believes the group needs
and proposes it, but the other group members go to great trouble to modify the idea,
until what is finally adopted has been jointly constructed as part of a collaborative
exercise.
Distributing work between group members. Early in a group's development,
members find it easier to identify sub-tasks that suit their own skills and
competencies, and then volunteer to complete them. As group members gain a
greater awareness of each other's skills and competencies, they become more able to
suggest work for other people, or shared work.
Group collaboration requires the group members to take responsibility for parts of
the shared work [11]. In the flora and fauna study, group members negotiated
individual responsibility from the shared pool of identified sub-tasks. Combined with
this was the return to individual responsibility for the artifacts previously associated
with each sub-task. This cycle of knowledge responsibility is important when it
comes to trying to complete sub-tasks. Group members take knowledge that the
group has agreed to be usable for a sub-task, attempt the sub-task, and then re-present
the knowledge to the group in a revised manner. The negotiation of acceptance
of this revision is effectively the group deciding whether or not to sign off the
sub-task as complete. If they are unable to do this, then the group will have to
re-bound the task again, as they clearly have not all understood the goal of the
sub-task in the same way.
In describing the development of the sub-tasks, we discussed a three-way
discussion between group members as they tried to identify and define zones on a
map that would be a suitable sub-division of the survey. However, it was the fourth
member of the group who waited for this discussion to resolve itself before joining in
with an attempt to divide the surveying of these zones among the group.
MAT: "I was going to say, if we're doing it in that way, then it might make sense,
seeing how I've done woodland here (points to map), then I might as well do the
woodland there, there and there (more pointing), because then we don't duplicate
stuff."
This encourages ADA to explain the areas he has looked at, and so what he thinks
he is more suited to. This interaction leads to a period where a feeling of clear
understanding of the task is less apparent. The group is working with the newly
formed idea of zones, and so they are trying to feel for the best way to use it. They
begin to rely on other group members more, rather than trying to force through their
own fully formed ideas.
The group members will try to complete the sub-tasks allocated to them with the
artifacts that the group has negotiated to be fit for that purpose. Once the individual
owner of a sub-task has made this attempt, they will need to present this to the group,
so that acceptance or rejection of the completion can be negotiated.
Completing sub-tasks. For a sub-task to be completed, the work needs to be approved
by the whole group in the form of a sign-off. If a sub-task is not signed off by the
group, then group members will have difficulty integrating that piece of work into
the overall work towards completing their main task, forcing the group to re-evaluate
what the main task boundary should be.
In the flora and fauna surveys, group members often proposed this sign-off by
sharing information that they had collected individually during the week. Because
individual information capture is goal-oriented [3], the proposer has a particular
purpose in collecting it and presenting it to the group. However, in the negotiation
process group members might see a wider scope for the information, or see that it
affects the overall understanding of the task boundary. Individuals presenting new
knowledge to the group can quickly drive the group from low-level sub-task
discussion to high-level main task discussion, because other group members see
different things and make different links with the new knowledge artifact. This is
another example of an artifact being modified at a low level by the task.
An example of this from the observed data came when a group member had taken
some photos and had somebody else identify the fauna in the photos for him. He
tries to get the group to accept that this data is complete, but another group member
refuses to accept it. The discussion continues for about four minutes without being
resolved, so in this case the appropriate sign-off has not been made, finishing with:
PET: "I think we've just hit the conflict that this survey was made to encounter,
which was depth or breadth."
TIH: "I'm not asking for depth. I'm asking for accuracy."
The discussion does lead to the group then discussing what is good and bad about
this data, which then feeds back into their own sub-tasks and their understanding of
the overall problem.
to capture and structure knowledge, few use this knowledge to tailor the KM system to
the group. Mandviwalla and Olfman [19] found that one of the key requirements of
groupware was that it should be adjustable to the group's context and, while this has
been addressed at a high level, the model presented in this paper shows how
lower-level group interactions can be structured as useful knowledge artifacts. Malone
et al. [18] introduced the idea of radical tailorability, where users can easily see and
modify the reasoning processes of their support systems, as well as the data captured
within them. This is the approach needed to develop the next generation of groupware
that deals with interactions at a much lower level than those in existence today.
Additionally, the research area of computer-supported collaborative learning
(CSCL) has provided insights into many of the issues facing task-oriented work
groups [23], but the generalisability of these findings is often undersold. Learning is
just as important outside the domain of formal education and all group development is
tightly coupled with learning within the group.
The observations reported here, and the conclusions drawn from them, all relate to
synchronous co-located groups and how groupware might better support them. In
further work we will look to establish the generalizability of these findings, including
how well they model distributed and asynchronous interactions.
References
1. Adair, J.: Effective Teambuilding. Gower (1986)
2. Bales, R.F., Cohen, S.P.: SYMLOG: A system for the multiple level observation of groups. Free Press, New York (1979)
3. Brown, B.A.T., Sellen, A.J., O'Hara, K.P.: A Diary Study of Information Capture in Working Life. In: Proceedings of the International Conference on Computer-Human Interaction (CHI) (2000)
4. Card, S.K., Moran, T.P., Newell, A.: The Psychology of Human-Computer Interaction. LEA, Hillsdale, NJ (1983)
5. Carroll, J.M., Kellogg, W.A., Rosson, M.B.: The task-artifact cycle. In: Carroll, J.M. (ed.) Designing Interaction: Psychology at the Human-Computer Interface, pp. 74–102. Cambridge University Press, New York (1991)
6. Clark, H.H.: Using Language. Cambridge University Press, Cambridge (1996)
7. Dourish, P.: The Appropriation of Interactive Technologies: Some Lessons from Placeless Documents. Journal of Computer Supported Cooperative Work 12, 465–490 (2003)
8. Feldman, D.C.: The Development and Enforcement of Group Norms. Academy of Management Review 9(1), 47–53 (1984)
9. Gutwin, C., Greenberg, S.: A Descriptive Framework of Workspace Awareness for Real-Time Groupware. Journal of Computer Supported Cooperative Work 11, 411–446 (2002)
10. Heath, C., Svensson, M.S., Hindmarsh, J., Luff, P.: Configuring Awareness. Journal of Computer Supported Cooperative Work 11, 317–347 (2002)
Abstract. This paper claims that the design and construction of safety-critical
interactive systems require both a task-centred approach, to efficiently support
operators' goals and activities, and a system-centred approach, to increase the
dependability of the system. The approach presented here is a model-based
approach integrating task and system models. This integration is done at the
model level (in a similar way as in [13]) and at the tool level, exploiting the PetShop
environment [3] for the system side and AMBOSS [1] for the task side. The
tool-level integration is described through three different protocols, each of them
having advantages and limitations. The model-based approaches are introduced
through a case study in the field of command and control systems. The
application, called AGENDA, allows operators to define and organize work plans
for satellite ground systems.
Keywords: Model-based design, Task modelling, Dialog modelling, Scenario-based
simulation.
1 Introduction
Model-based approaches have been identified for a long time now as a means of
dealing with the intrinsic complexity of interactive systems [18]. Models are used to
organize and store various types of information according to the area of interest of the
designer. User models [4] capture information about user capabilities, knowledge or
beliefs, for instance. Context models aim at capturing information about the various
contexts in which a given interactive system can be used [8]. Such models are more
and more important when dealing with interactive systems that can be used on the
move, i.e., confronting users with radically different environmental constraints. Other
models, like domain models and behavioural models, are not specific to interactive
systems and thus are not addressed in this paper, but approaches like UML [5] are
dedicated to the model-based design of the non-interactive aspects of software.
Research work in the field of HCI has been trying to extend UML to support the
interactive aspects of software (like [14]) through various means, such as the inclusion
of usability aspects in RUP (the development process associated with UML) or via the
extension capabilities of UML, like stereotypes [15].
This paper focuses on two models of primary importance for interactive systems
design: task models and system models. Task models gather information related to
users' goals and activities, while system models provide a complete description of
system behaviour. As far as interactive systems are concerned, such a description
must make explicit all the possible states of the system and, for each state, which
actions are available to the user at the interface. On the rendering side, the system
model must describe, for each state change, how this change is presented to the user.
As the system model describes the actions available to the user, and as the task model
describes the actions that have to be performed by the user in order to reach a goal,
these two models provide two different views on the same elements.
For these reasons, this paper focuses on the possible articulations of task models
and system models. This integration is done at the model level (in a similar way as in
[13]) as well as at the tool level, exploiting the PetShop environment [3] for the
system side and AMBOSS [1] for the task side. Other approaches such as [7] [20]
provide a similar view on the complementarity of task and system descriptions, even
though they don't address the modelling aspects directly. Other research works,
instead of using the complementarity of models, propose the generation of one model
from another, as in [19] and [10], where the authors generate the system model from
the task model, or in [9], where the authors do the opposite. The tool-level integration
is described through three different protocols, each of them having advantages and
limitations (Section 3 of the paper). The model-based approaches are introduced
through a case study (Section 2) in the field of command and control systems. The
application, called AGENDA, allows operators to define and organize work plans for
satellite ground systems.
2 Case Study
The work presented in this paper is partly based on a study, from both the task point
of view and the system point of view, of the interactive application called AGENDA,
used in the field of command and control for space-ground systems. AGENDA is a
tool that allows an operator from Satellite Control Planning Facilities (SCPF), such as
those for SPOT4 or HELIOS1, to monitor the sequence of basic tasks performed by
one or more satellites.
In the following paragraphs, we use terms from SCPF activities that are
explained hereafter:
An operating task is called a Procedure.
A sequence of tasks is called a Chain.
A working plan is called a PGT and is a set of chains that may evolve in
parallel.
Due to space constraints, in the following parts of the paper we use only a very small
sub-part of the specification of the AGENDA to illustrate our approach, even though
the work was done on most of the AGENDA application. This part of the application
is based on a simple task which consists in providing a list of conditioning procedures
for one procedure. A PGT may be seen as a workflow where the basic tasks are
procedures, and the possible execution of these procedures may depend on the
correct execution of previous procedures. The AGENDA adds a constraint to these
conditioning procedures by fixing their maximum number at five.
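To make this constraint concrete, here is a minimal sketch, in Java, of a Petri-net-like fragment in the spirit of the ICO models discussed later: a transition for adding a conditioning procedure stays fireable only while fewer than five have been added. The class and method names are ours, purely for illustration; this is not part of PetShop or of the ICO notation.

```java
// Minimal Petri-net-style sketch of the AGENDA constraint that a procedure
// may have at most five conditioning procedures (illustrative only).
class ConditioningNet {
    // Place holding "slots" for conditioning procedures; initially 5 tokens.
    private int freeSlots = 5;
    // Place accumulating the conditioning procedures added so far.
    private int conditioning = 0;

    // Transition "addConditioningProcedure": enabled while a token remains
    // in the freeSlots place, i.e. while fewer than five have been added.
    boolean addConditioningProcedure() {
        if (freeSlots == 0) {
            return false;              // transition not enabled: limit reached
        }
        freeSlots--;                   // consume a token from freeSlots
        conditioning++;                // produce a token in conditioning
        return true;
    }

    public static void main(String[] args) {
        ConditioningNet net = new ConditioningNet();
        for (int i = 1; i <= 6; i++) {
            System.out.println("add #" + i + ": " + net.addConditioningProcedure());
        }
        // Prints true five times, then false: the sixth firing is refused.
    }
}
```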
For this sub-part of the AGENDA, the following two sections first present the
related task model, recalling the basics of the Amboss approach, and then present
the system model using the ICO notation.
3 Two Approaches
This section presents the two approaches used in the work presented in this paper. The
choice of these two notations and tools is the result of the cooperation between two
groups (from the Université Paul Sabatier and from the University of Paderborn),
where both groups were looking for notations with which a synergistic cooperation
would be possible. The work presented here should be adaptable to other task-centred
and system-centred approaches.
There are various approaches that aim to specify tasks. They differ in aspects such as
the type of formalism they use, the type of knowledge they capture, and how they
support the design and development of interactive systems. In this paper we consider
task models that have been represented using the Amboss notation. Amboss [1] is a
free tool developed at the University of Paderborn supporting hierarchical task
modelling.
In Amboss, tasks are described at different abstraction levels in a hierarchical
manner, represented graphically in a tree-like format (see Fig. 2 for an example of
both the notation and the tool). Amboss provides a set of temporal relations between
the tasks:
- sequential: the subtasks have to execute in a fixed sequence;
- serial: the subtasks have to execute in an arbitrary (unsystematic) sequence;
- parallel: the subtasks can start and end in any relation to each other;
- simultaneous: the subtasks start in an arbitrary sequence, with the constraint that
there must be a moment when all tasks are running simultaneously before any task
can end;
- alternative: just one, arbitrarily selected, subtask is executed;
- optional: one subtask, or none at all, is executed.
These are almost the same temporal relations as can be found in TOMBOLA [21]. A
task node without any subtasks is automatically noted as an atomic task.
The software provides distinct additional views of a task model, which can be used
for inspecting particular attributes of the tasks. For example, if an analyst wants to
observe what kind of objects (for example, procedures from our case study) are
manipulated in the system by particular tasks, he can switch to the object view, look
over the model and analyse the dependencies between tasks and objects. The tool
allows editing as well as directly manipulating the task structure in an easy and
intuitive way.
One of the challenges related to modelling socio-technical systems is to bring
communication and its parameters into a model. In the model, communication is
depicted with white ovals between the tasks.
Amboss allows a precise description of communication, with parameters describing
the physical conditions, using options with respect to the medium of communication,
the form of the message as well as the type of transfer. For example, if a message is
critical for the system, the user can mark the message with a red envelope. In addition,
the user is able to describe what type of feedback is required in a particular
communication process and whether a communication is controlled by a protocol.
Both parameters secure the communication process; additionally, control objects can
be applied to protect information.
The main purpose during the development of Amboss was to provide a
hierarchical task modelling environment that supports developing and analyzing task
models in safety-critical domains. For modelling tasks in such an environment, the
model needs to be enhanced with adequate parameters. Amboss allows specifying
parameters like barriers protecting human life or computer systems, risk factors
estimating the risk, and timing information describing the time frame of tasks.
Additionally, the user is able to describe what kind of object is associated with a
particular task and what kind of access (read or write) the task performs. Furthermore,
it is possible to describe the actors related to a task.
By using these parameters it is possible to describe a task model in more detail and
to get a good overview of the tasks. In order to mark a task as critical, the user can
change the colour of the task to red. A task modified this way can easily be found in
a model.
Similar to other modelling approaches [11], Amboss is able to simulate a task
model. The simulator is depicted in Fig. 4 and shows the user exactly what
happens in the task environment at a particular moment.
A finished task model can be simulated taking into account the task hierarchy, the
temporal relations providing the task execution order, and the communication flow
showing messages including their parameters. Additionally, during the simulation the
user is able to observe the activation and deactivation of barriers, so he can see whether
a necessary barrier is active or inactive while a critical task is executed. For analysing
and reusing different threads of simulation, it is possible to save scenarios in an XML
file.
System modelling is done using the ICO formalism, whose development environment
is called PetShop. Both are presented through the case study. The ICO formalism is
the continuation of early work on dialogue modelling using high-level Petri nets [1].
This section recalls the main features of an ICO specification and illustrates them
using the case study. The ICO formalism is a formal description technique dedicated
to the specification of interactive systems [2]. It uses concepts borrowed from the
object-oriented approach (dynamic instantiation, classification, encapsulation,
inheritance, client/server relationship) to describe the structural or static aspects of
systems, and uses high-level Petri nets [6] to describe their dynamic or behavioural
aspects.
ICOs are dedicated to the modelling and the implementation of event-driven
interfaces, using several communicating objects to model the system, where both
behaviour of objects and communication protocol between objects are described by
the Petri net dialect called Cooperative Objects (CO) [1].
In the ICO formalism, an object is an entity featuring four components: a
cooperative object which describes the behaviour of the object, a presentation part,
and two functions (the activation function and the rendering function) that make the
link between the cooperative object and the presentation part.
Behaviour: Fig. 4 presents the behaviour of the case study. The detailed description
of this behaviour is partly out of the scope of this paper, but to summarize it, the Petri
net may receive events when a procedure is added to (or removed from) the set of
conditioning procedures. When it is an addition, the behaviour asks the functional
core to check whether the procedure is valid as a conditioning procedure or not. The
place availableSlots initially contains 5 tokens; every time a procedure is added, a
token is removed from this place, and every time a procedure is removed, a token is
added. When empty, this place disables the transition askForAdding (which leads to
the popup of the procedure selection window), so that the constraint of a maximum of
5 conditioning procedures is respected.
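To make the token-counting constraint concrete, the following minimal Java sketch mimics the behaviour of the place availableSlots and the transition askForAdding from Fig. 4 (this is an illustration of the mechanism, not the actual PetShop/ICO implementation):

```java
// Minimal sketch of the availableSlots mechanism: a counting place
// guards the askForAdding transition.
public class ConditioningProcedures {
    private static final int MAX_SLOTS = 5;   // constraint fixed by AGENDA
    private int availableSlots = MAX_SLOTS;   // initial marking: 5 tokens

    // Corresponds to the transition askForAdding: only enabled while
    // the place availableSlots is not empty.
    public boolean askForAdding() {
        return availableSlots > 0;
    }

    public void addProcedure() {
        if (!askForAdding()) throw new IllegalStateException("no slot left");
        availableSlots--;   // a token is removed when a procedure is added
    }

    public void removeProcedure() {
        availableSlots++;   // a token is put back when a procedure is removed
    }
}
```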
Presentation part: The presentation of an object states its external appearance. This
presentation is a structured set of widgets organized in a set of windows. Even if the
method used to render (description and/or code) is out of the scope of an ICO
specification, it is possible for it to be handled by an ICO in the following way: the
presentation part is viewed as a set of rendering methods (in order to render state
changes and the availability of event handlers) and a set of user events, embedded in a
software interface, in the same language as for the CO interface description.
The presentation part is made up of a set of widgets that are used both for rendering
information and for providing the user with means to interact with the interactive
system.
The layout of the presentation part (Fig. 5) is out of the scope of the ICO
specification, but this presentation part is seen as a collection of rendering methods and
ways to provide events as shown in Fig. 6.
Activation function: The user actions on the system (inputs) only take place through
widgets. Each user action on a widget may trigger one of the CO event handlers. The
relation between user services and widgets is fully stated by the activation function,
which associates each event from the presentation part with the event handler to be
triggered and with the rendering method representing the activation or the
deactivation.
Fig. 7 presents the activation function related to the case study. Each line of this
table links one of the events from the presentation part (listed by the enumeration in
Fig. 6) to an event handler from the behaviour. For instance, when the user selects a
procedure in the list, the presentation part triggers the event select, which finally
leads to the firing of the event handler selectProcedure. And when the event handler
becomes available (or not), the activation rendering method setSelectionEnabled is
called with parameters describing it as available (or not).
Rendering function: The system rendering to the user (outputs) aims at presenting to
the user the state changes that occur in the system. The rendering function
maintains the consistency between the internal state of the system and its external
appearance by reflecting system state changes.
Fig. 8 presents the rendering function related to the case study. Each line links a
change of the behaviour state to the call of a rendering method of the presentation
part. For instance, when a token enters the place ConditioningProcedures (i.e., a
procedure has been added), the rendering method showConditioningProcedures is
called with the marking of the place as a parameter.
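As an illustration (not the PetShop data structures), the two functions can be thought of as simple lookup tables; the following Java sketch shows the case-study entries named in the text, while the "add" event and its pairing with askForAdding are our assumption:

```java
// Hypothetical sketch of the activation and rendering functions of
// Figs. 7 and 8 as lookup tables.
import java.util.Map;

public class CaseStudyFunctions {
    // Activation function: presentation event -> event handler to fire.
    static final Map<String, String> ACTIVATION = Map.of(
        "select", "selectProcedure",  // user selects a procedure in the list
        "add",    "askForAdding"      // assumed pairing, for illustration only
    );

    // Rendering function: (place, net event) -> rendering method to call.
    static final Map<String, String> RENDERING = Map.of(
        "ConditioningProcedures:token_enter", "showConditioningProcedures"
    );
}
```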
4 Integration Protocols
The integration framework we have followed takes full advantage of the specific tools
that we have developed, initially in a separate manner. One advantage of this
separation is that it allows independent modification of the tools, provided that the
interchange format remains the same.
We have previously investigated the relationship between task and system models.
For instance, in [16] we proposed a transformation mechanism for translating UAN
task descriptions into Petri nets and then checking whether this Petri net description
was compatible with the system modelling, also done using Petri nets. In [17] we
presented the use of CTT for abstract task modelling and of high-level Petri nets for
low-level task modelling. In that paper the low-level task model was used to
evaluate the complexity of the tasks to be performed, by means of performance
evaluation techniques available in Petri net theory.
In [13] we proposed a synergistic use of the tools CTTE and PetShop through the
exchange of scenarios (provided as files) from CTTE to PetShop. The two notations
model slightly different aspects: as CTT is a notation for task modelling whereas ICO
is a notation for specifying concurrent systems, an automatic conversion from one
notation to the other would have been difficult. We preferred a different solution that
is easier to implement and corresponds better to the practice of user interface
designers. Indeed, designers often use scenarios for many purposes and to move
among the various phases of the design cycle. Scenarios can thus be considered a key
element in comparing design solutions from different viewpoints.
The main gap the user of this framework had to face was the length of the
iterations: producing a scenario (i.e., building it and saving it as a file), testing
it on the system model (i.e., loading both the task model and the scenario within the
system-dedicated tool), changing the scenario and/or the task model, and so on.
The work presented in this paper is based on the work done in [13] and is basically
an investigation of how to bridge this gap, by first presenting the basic bricks for the
integration of the two tools, then by presenting a solution to the gap presented above,
and finally by presenting a prospective reflection on a stronger integration.
As our main interest in this paper is to show that it is possible to make task modelling
and system modelling cooperate, we present in this section features from each
notation and its associated environment as basic tools for the integration framework.
Amboss. As described above, the Amboss environment provides a set of tools for
engineering task models. For the purpose of integration we only use the interactive
tool for editing the tasks and the simulation tool, which allows scenario construction
from the task models. Thus the two main outputs are a set of task models and a set of
scenarios. These two sets are exploited in the following way:
From the task specification, a set of human and system tasks is extracted,
providing a set of manipulations that can be performed by the user on the
system and of outputs from the system to the user.
While building a scenario, Amboss notifies the evolution of this scenario, as
Amboss provides an API that allows receiving data from the simulator.
For the case study, the interesting tasks are the leaves of the task tree.
ICO. Amongst the features of the ICO environment (PetShop) presented above, the
one that is used for the integration is the tool for editing the system model, which also
allows executing the system model.
From this specification we extract the activation and rendering functions, which may
be seen as the set of inputs and outputs of the system model.
From the case study, we use each line of the activation and rendering functions
presented in Fig. 7 and Fig. 8.
As in [13], the integration protocol is made up of two phases: the definition of the
correspondence between the two models, and the execution of the system model
controlled by a scenario provided by the Amboss simulator.
Correspondence between models. The principle of editing the correspondences
between the two models is to put together user input tasks (from the task model) with
system inputs (from the system model), and system outputs (from the system model)
with system output tasks (from the task model). The correspondence may show
inconsistencies between the task and system models. The correspondence edition
process may be seen as presented in Fig. 9, where each tool provides the
correspondence editor with an API in order to notify it each time modifications are
made on either the task model or the system model.
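A hypothetical Java sketch of such a correspondence table is given below; the task names are taken from the case study, while the class and method names are our own illustration of the editing principle:

```java
// Hypothetical correspondence table between task-model tasks and
// system-model adapters (not the actual editor implementation).
import java.util.HashMap;
import java.util.Map;

public class CorrespondenceEditor {
    // user input task -> activation adapter (system input)
    final Map<String, String> inputCorrespondence = new HashMap<>();
    // rendering adapter (system output) -> system output task
    final Map<String, String> outputCorrespondence = new HashMap<>();

    public CorrespondenceEditor() {
        inputCorrespondence.put("Select procedure", "selectProcedure");
        outputCorrespondence.put("showConditioningProcedures",
                                 "Observe conditioning procedures");
    }

    // A weak correspondence is a task with no counterpart in the system model.
    public boolean isWeak(String taskName) {
        return !inputCorrespondence.containsKey(taskName)
            && !outputCorrespondence.containsValue(taskName);
    }
}
```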
The description of the correspondence edition is illustrated hereafter using the case
study (see Fig. 10).
While observing Fig. 10, we may see that there are two weak correspondences:
The task check procedure validity does not find any corresponding feedback
within the system. This may be a problem because it means that the system does
not validate the selected procedure and does not provide any feedback for it. A
solution may be to add a new rendering adapter to the rendering function (and a
new rendering method to the presentation part), such as:

ObCS node name      ObCS event     Rendering method
InvalidProcedures   token_enter    showInvalidProcedure
The top part of the figure is the correspondence edition part, presented above. The
bottom part of the figure presents the architecture of the execution part. The principle
is the following one:
Through an API, the Amboss simulator notifies the Simulation controller of the
evolution of the current scenario (it notifies whether a task begins or ends).
Through another API, the Simulation controller fires the corresponding
activation adapter (according to the correspondence provided by the
Correspondence editor), simulating a user event.
Fig. 11. Protocol 1 for the tools integration: correspondences link tasks and adapters;
notifications from the Amboss simulator lead the simulation controller to fire the
corresponding adapters in the PetShop ICO interpreter
As a scenario may be seen as a sequence of tasks, and as we are able to put an input
task and an activation adapter into correspondence, it is now possible to convert the
scenarios into a sequence of firings of event handlers in the ICO specification.
An ICO specification can be executed in the ICO environment and behaves
according to the high-level Petri net describing its behaviour. As the Amboss
scenarios can be converted into a sequence of firings of event handlers, they can
directly be used to drive the execution of the ICO specification.
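The following Java sketch illustrates this execution scheme; the listener and interpreter interfaces are assumptions made for the illustration and do not reproduce the actual Amboss and PetShop APIs:

```java
// Hypothetical simulation controller of protocol 1: it receives task
// notifications from the Amboss simulator and fires the corresponding
// activation adapter in the ICO interpreter.
import java.util.Map;

interface IcoInterpreter {
    void fireEventHandler(String handlerName);  // e.g. "selectProcedure"
}

public class SimulationController {
    private final Map<String, String> correspondence; // input task -> adapter
    private final IcoInterpreter interpreter;

    public SimulationController(Map<String, String> correspondence,
                                IcoInterpreter interpreter) {
        this.correspondence = correspondence;
        this.interpreter = interpreter;
    }

    // Called (through the Amboss API) each time a task begins in the scenario.
    public void taskBegins(String taskName) {
        String adapter = correspondence.get(taskName);
        if (adapter != null) {
            interpreter.fireEventHandler(adapter); // simulates the user event
        }
    }
}
```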
The gap identified in the introduction of this section finds a solution, as the direct
link between the Amboss simulator and PetShop allows a designer to make the two
models co-evolve, removing the activity that consisted in creating scenario files and
in fully separating the two tools.
We have started prospective work on coupling these two tools in a more
synergistic way, introducing a communication from the ICO environment to Amboss.
The architecture of this second integration protocol is presented in Fig. 12. The
correspondence edition (the top part of the figure) remains the same as the one
described in the previous section. Even if only input aspects are addressed by the first
integration protocol, the correspondence edition links both input and output tasks
from the task model with input and output adapters from the system model. The
bottom part of the figure proposes a more integrated way to make the two tools
communicate. The communication from Amboss to PetShop remains the same
(Amboss notifies the Simulation Controller, which then asks for the firing of the
corresponding adapters in the system model). The addition is the following one:
Fig. 12. Protocol 2 for the tools integration
The advantage of this integration protocol is a real co-evolution of the two models,
as the execution of each tool impacts the execution of the other. This integration
protocol still provides the designer with shorter iterations in the task and system
modelling process, in the same way as the previous protocol. But this protocol may
also be an improvement for the final user. The principle would be to use the execution
of the system to point out where the user is in the task model. The advantage is to use
the task model as an input for providing the user with partly automated contextual
help, in two phases:
As the system model execution points out the corresponding task in the task
model, it is easy to provide the corresponding task description and the
attached help.
Knowing the task on which the user works, it is possible to extract from the task
model the possible scenarios which start with this task, and then to play them on
the system model as a demonstration of what it is possible to do in the current
context.
5 Conclusion
This paper addressed the issue of integrating task models and system models within a
single framework. It claims that modelling approaches for these two critical
components of the design of interactive systems provide valuable means for
managing the complexity of interactive systems. The paper presented, on a case
study, the information that is conveyed by a task model and the information
embedded in a system model.
Beyond this modelling level, the paper also presents different ways of relating
these two modelling approaches. It presents three protocols that have been identified
and describes the advantages and limitations of each of them.
Finally, the paper presents how one of these protocols has been implemented
through the coupling of two modelling environments: Amboss for the edition and
simulation of task models, and PetShop for the edition and simulation of system
models.
The work presented in this paper belongs to a longer-term research programme
targeting the design of resilient interactive systems using model-based approaches.
Future work targets exploiting these two models to support the usability evaluation
of interactive systems. Indeed, task models provide a unique view of the goals and of
the sequences of actions the users have to perform in order to reach such goals, while
system models provide a unique view of the inner behaviour of the system.
References
1. Amboss, http://wwwcs.uni-paderborn.de/cs/ag-szwillus/lehre/ws05_06/PG/PGAMBOSS
2. Bastide, R., Palanque, P., Le, D.-H., Muñoz, J.: Integrating Rendering Specifications into a
Formalism for the Design of Interactive Systems. In: DSV-IS 1998, Abingdon, UK.
Springer, Heidelberg (1998)
3. Bastide, R., Navarre, D., Palanque, P.: A Model-Based Tool for Interactive Prototyping of
Highly Interactive Applications. In: Tool demonstration, CHI 2002, Minneapolis, USA
(2002)
4. Blandford, A., Butterworth, R., Curzon, P.: Models of interactive systems: a case study on
Programmable User Modelling. International Journal of Human-Computer Studies 60(2),
165–216 (2004)
5. Booch, G., Rumbaugh, J., Jacobson, I.: The UML Reference Manual. Addison-Wesley,
Reading
6. Genrich, H.J.: Predicate/Transition Nets. In: Jensen, K., Rozenberg, G. (eds.) High-Level
Petri Nets: Theory and Application, pp. 3–43. Springer, Heidelberg (1991)
7. Green, T.R.G., Benyon, D.R.: The skull beneath the skin: Entity-relationship modelling of
Information Artefacts. International Journal of Human-Computer Studies (1996)
8. Jameson, A.: Modelling both the Context and the User. Personal Ubiquitous Comput. 5(1),
29–33 (2001)
9. Lu, S., Paris, C., Vander Linden, K.: Towards the automatic generation of task models
from object oriented diagrams. In: Chatty, S., Dewan, P. (eds.) Engineering for Human-
Computer Interaction. Kluwer Academic Publishers, Boston (1999)
10. Lu, S., Paris, C., Vander Linden, K., Colineau, N.: Generating UML Diagrams from Task
Models. In: Proceedings of CHINZ 2003, the Fourth Annual International Conference of the
New Zealand Chapter of the ACM's SIGCHI, Dunedin, New Zealand (July 3-4, 2003)
11. Mori, G., Paternò, F., Santoro, C.: CTTE: Support for Developing and Analyzing Task
Models for Interactive System Design. IEEE Trans. Softw. Eng. 28(8), 797–813 (2002)
12. Navarre, D., Palanque, P., Bastide, R., Sy, O.: Structuring Interactive Systems
Specifications for Executability and Prototypability. In: 7th Eurographics Workshop on
DSV-IS 2000, Limerick, Ireland. LNCS
13. Navarre, D., Palanque, P., Bastide, R., Paternò, F., Santoro, C.: A Tool Suite for Integrating
Task and System Models through Scenarios. In: 8th Eurographics Workshop on Design,
Specification and Verification of Interactive Systems, DSV-IS 2001. LNCS, vol. 2220, pp.
88–113. Springer, Heidelberg (2001)
14. Nunes, N.J., Cunha, J.F.: Towards a UML Profile for Interaction Design: The Wisdom
Approach. In: Evans, A., Kent, S. (eds.) Proceedings of the Unified Modeling Language
Conference, UML 2000. LNCS, vol. 1939, pp. 100–116. Springer, Heidelberg (2000)
15. Nunes, N.J., Cunha, J.F.: Wisdom – A UML Based Architecture for Interactive Systems.
In: Palanque, P., Paternò, F. (eds.) DSV-IS 2000. LNCS, vol. 1946, pp. 191–205. Springer,
Heidelberg (2001)
16. Palanque, P., Bastide, R., Sengès, V.: Validating Interactive System Design Through
the Verification of Formal Task and System Models. In: EHCI 1995, 6th IFIP Conference
on Engineering for Human-Computer Interaction, Grand Targhee Resort, Wyoming, USA,
August 14-18. Chapman & Hall (1995)
17. Palanque, P., Bastide, R., Paternò, F.: Formal Specification as a Tool for the
Objective Assessment of Safety-Critical Interactive Systems. In: Interact 1997, 6th IFIP
TC13 Conference on Human-Computer Interaction, Sydney, Australia, July 14-18, 1997,
pp. 323–330. Chapman & Hall (1997)
18. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer,
Heidelberg (1999)
19. Paternò, F., Breedvelt-Schouten, I., de Koning, N.: Deriving Presentations from Task
Models. In: Proceedings EHCI 1998, Crete. Kluwer (1998)
20. Sawyer, J.T., Minsk, B., Bisantz, A.M.: Coupling User Models and System Models: A
Modeling Framework for Fault Diagnosis in Complex Systems. Interacting with
Computers (1996)
21. Uhr, H.: TOMBOLA: Simulation and User-Specific Presentation of Executable Task
Models. In: Human-Computer Interaction: Theory and Practice (Part I), Proceedings of
HCI International 2003, pp. 263–267. Lawrence Erlbaum Associates, Mahwah, NJ (2003)
Remote Evaluation of Mobile Applications
1 Introduction
In remote usability evaluation, evaluators and users are separated in space and possibly
in time during the evaluation [1]. This type of evaluation is becoming increasingly
important because of the number of advantages it offers. Indeed, it allows the collection
of detailed information on actual user behaviour in real contexts of use, which is
especially useful in contexts in which it is not possible (or convenient) to have an
evaluator directly observing or recording the session. In addition, the fact that the users
carry out the evaluation in their familiar environments contributes to obtaining more
natural user behaviour.
In order to have a complete picture of what users did during the session and derive
consequent conclusions about the usability of the application, it is crucial for the
evaluators to reconstruct not only the interactions that users carried out during the
session, but also the contextual conditions that might have affected the user
interaction itself. Indeed, if such conditions are not completely known, the evaluators
might draw incorrect conclusions about the usability of the considered application.
This problem becomes even more difficult to address when dealing with mobile
applications. Indeed, while for desktop applications the lack of co-presence between
users and evaluators can be compensated to some extent by equipping the test
environment with devices such as web cams, mobile applications require different
solutions that are able to flexibly support evaluation in different contexts without
being too obtrusive on the user side. When dealing with remote applications for
mobile devices, there are some additional problems that make it more difficult to
gather data to remotely evaluate such applications. For instance, we have to take into
account the more limited capability of mobile devices, which imposes constraints on
the kinds of techniques to be used for tracking user interactions. In addition, there is
the further problem of detecting the environmental conditions in which the session
takes place.
In this paper we discuss a novel extension of a tool able to remotely process
multimodal information on users interacting with desktop applications. The new
solution adds the possibility of tracking and evaluating user interfaces of mobile
applications as well, including the detection of environmental conditions that might
affect user interaction (e.g., noise in the surrounding environment, battery
consumption, etc.). The new tool, MultiDevice RemUsine, is able to identify where
user interactions deviate from those envisioned by the system design and represented
in the related task model. In addition, we have also improved the graphical
representations that are provided to the evaluators for visualizing the gathered data.
Indeed, especially when dealing with a large amount of information, it is very
important to use effective representations highlighting relevant information so as to
enable evaluators to better identify potential issues and where they occur.
The structure of the paper is the following: in the next section we discuss related
work, and then we introduce our general approach. Next, we present the main
features of the additional component (Mobile Logger) we developed for supporting
the detection of user interactions with mobile applications and of the environmental
conditions that might affect the performance of the users' activity. We then discuss
the issue of more effective visualisation techniques for representing the data that
have been collected regarding the user activity. Lastly, we conclude with some
remarks and indications for future work.
2 Related Work
3 General Approach
Our approach is mainly based on a comparison of planned user behaviour and actual
user behaviour [13]. Information about the planned logical behaviour of the user is
contained in a (previously developed) task model, which describes how the tasks
should be performed according to the current design and implementation. The task
model can be built in various ways. It can be the result of an interdisciplinary
discussion involving end users, designers, application domain experts, and
developers. There are also reverse-engineering techniques able to automatically build
the system task model of Web pages starting from their implementation.
The data about the actual user behaviour are provided by the other modules (e.g., the
logging tools), which are supposed to be available within the client environment. An
overview of the general approach is given in Figure 1. A logging tool, which
depends on the type of application considered, stores various user- or system-
generated events during the user session. In addition, other sources of information
regarding the user behaviour can be considered, such as web cams showing the
actual user behaviour and facial expressions, or eye-trackers detecting where the user
is looking.
As for the expected user behaviour, CTT [14] task models are used to describe it,
through their graphical representation of the hierarchical logical structure of the
potential activities along with the specification of temporal and semantic relations
among tasks. It is worth pointing out that, with the CTT notation, the designer can
easily specify different sequences of paths that correspond to the accomplishment of
the same high-level task: this is possible thanks to the various temporal operators
available in the CTT notation, which also include, for instance, the specification of
concurrent, multitask activities, or of activities that interrupt other ones. On the one
hand, it is quite easy for the designer to specify even a complex behaviour in a
compact manner; on the other hand, the behaviour of such operators is automatically
mapped by the underlying engine into all the corresponding possible paths of
behaviour.
In order to enable an automatic analysis of the actual user behaviour, identified by
the sequences of actions in the logs, against the possible expected behaviours
described by the task model, there is a preparation phase. In this phase the possible
log actions are associated with the corresponding basic tasks (the leaves of the task
model). Once this association is created, it can be exploited for analysing all the
possible user sessions without further effort. In this way, the tool is able to detect
whether the sequence of basic tasks performed violates some temporal or logical
relation in the model. If this occurs, it can mean either that there is something unclear
about how to accomplish the tasks, or that the task model is too rigid and not able
to consider possible ways to achieve user goals. Thus, by comparing the planned
behaviours (described within the task model) with the information coming from log
files, MultiDevice RemUsine is able to offer the evaluators useful hints about
problematic parts of the considered application. In this regard, it is worth pointing out
that the tool is able to discriminate to what extent a behaviour deviates from the
expected one (for instance, whether some additional useless tasks have been
performed without preventing the user from completing the main target task, in
comparison with other cases in which the deviation led to unsuccessful paths).
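As an illustration of the preparation and comparison phases, the following Java sketch (names and data structures are our assumptions) maps logged actions to basic tasks and checks the resulting sequence against the expected behaviours derived from the task model:

```java
// Hypothetical sketch: log actions are mapped to basic tasks (the leaves of
// the task model), and the task sequence is compared against the expected
// sequences derived from the temporal operators.
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;

public class LogTaskComparison {
    private final Map<String, String> actionToBasicTask; // preparation phase
    private final Set<List<String>> expectedSequences;   // from the task model

    public LogTaskComparison(Map<String, String> actionToBasicTask,
                             Set<List<String>> expectedSequences) {
        this.actionToBasicTask = actionToBasicTask;
        this.expectedSequences = expectedSequences;
    }

    // Maps logged actions to basic tasks (ignoring unmapped actions) and
    // checks whether the resulting sequence is one of the expected behaviours.
    public boolean matchesExpectedBehaviour(List<String> loggedActions) {
        List<String> basicTasks = loggedActions.stream()
                .map(actionToBasicTask::get)
                .filter(Objects::nonNull)
                .collect(Collectors.toList());
        return expectedSequences.contains(basicTasks);
    }
}
```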
4 Mobile Logging
With mobile interaction there are some contextual events that should be considered,
since they can have an impact on the user's activity. Among the relevant events that
might be taken into consideration are the noise of the environment, its lightness,
the location of the user, the signal strength of the network, as well as other conditions
related to the mobile device, e.g., the residual capacity of the battery.
When we deal with usability evaluation in which stationary devices are used, the
contextual conditions under which the interaction occurs, including the location of
the user, remain unchanged over the experiment session. In this case, the information
about user interaction might be sufficient. This is no longer valid when we consider
interaction with mobile devices, since the interaction occurs in an environment that
can change considerably, not only between two different executions but also within
the same execution. Thus, it is important to acquire comprehensive and detailed
information about the different aspects of the context of use in which the interaction
currently occurs, since they might have an impact on the user's activity.
Each of these variables, as well as combinations of them, can affect the user
interaction, and in our tool we developed a separate module for detecting to what
extent each of these components affects the user interaction. Currently, we
consider aspects connected with the current position of the user (the position itself,
together with the noise and lightness of the surrounding environment at that
position), together with other variables that are more closely connected to the
objective state of the device itself.
The task of tracking the activity of the user is carried out by a procedure that is
executed by the application under evaluation, which uses libraries included in the
operating system to detect events, and which exploits an inter-process communication
model based on exchanges of messages. The execution of an interactive graphical
application is driven by events, notified in Windows systems by means of messages.
Indeed, WindowsCE, like other Windows systems, is an operating system based on
the push mechanism: every application has to be coded to react to the notifications
(namely, messages) that are received from the operating system. Each window has a
window procedure that defines the behaviour of the component.
Therefore, it is theoretically possible to derive all the information associated with
the interaction from such messages. However, it is worth noting that not all messages
received by the window procedure of a window/component are useful to reconstruct
the user interaction. Indeed, there are messages that do not directly regard the user
interaction: for instance, the WM_PAINT message forces the refreshing of the display
of a certain component, but it is not triggered by the user. As a consequence, only a
subset of the messages is considered for our purposes. This set includes, for instance:
WM_LBUTTONDOWN, a message that is received by every component as soon
as a click event is detected on it, and WM_KEYDOWN, a message that is sent to the
component that currently has the focus as soon as the user presses a key on the
keyboard.
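The filtering described above can be pictured with the following Java sketch, which uses the standard Win32/WindowsCE message codes; the class itself is our illustration, not the Mobile Logger code:

```java
// Illustrative filter: only interaction-relevant messages are logged,
// while e.g. WM_PAINT is ignored. The constants are the standard
// Win32/WindowsCE message codes.
import java.util.Set;

public class MessageFilter {
    static final int WM_PAINT       = 0x000F; // display refresh: not logged
    static final int WM_KEYDOWN     = 0x0100; // key pressed on the keyboard
    static final int WM_LBUTTONDOWN = 0x0201; // click/tap on a component

    private static final Set<Integer> LOGGED =
            Set.of(WM_KEYDOWN, WM_LBUTTONDOWN);

    public static boolean isUserInteraction(int message) {
        return LOGGED.contains(message);
    }
}
```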
The functionality to track and save all the interactions of the user with the system
is not centrally delegated to a single module but instead distributed over multiple
modules that track the activity of the user according to a specific aspect:
NoiseMod: the module that tracks possible conditions that might interfere
with the user activity on the audio channel. In order to track the
conditions on the audio channel, this module samples the audio at regular
intervals of time. Depending on the samplings recorded, the value to be
recorded in the log file is calculated.
PowerMod: the module that monitors the battery consumption of the
device. The values are saved as they are provided by the system, without
performing any calculation on them.
LightMod: the component that is in charge of tracking conditions that
might interfere with the visual channel, for instance variations in the
brightness of the surrounding environment.
SignalMod: some applications might depend on the availability of a
communication network and on the strength of its signal. In these cases,
the task of recording the strength of the signal is delegated to this
module.
PositionMod: some applications might be affected by the current
position of the user. In this case, this module tracks the location of the
user and how it changes over time.
These modules have been identified taking into account the possibilities opened up
by the sensing technologies of current mobile devices. The modules for gathering
environmental data are dynamically loaded only if a logging session is started and the
activation of these specific components is requested. They record events using a
sampling algorithm that is able to adapt the frequency at which samples are taken.
Therefore, the sampling is not carried out at fixed time intervals. It starts by
setting an initial interval of time in which events are acquired. Then, it proceeds in the
following way: if no variation has been registered in the last interval of time, the
interval of time to be considered for the next acquisition becomes larger; otherwise it
decreases (following an exponential law). This choice is based on the consideration
that using a fixed sampling interval might not be a good solution. For instance, if the
environmental condition changes much more slowly than the sampling frequency, a
more flexible algorithm can avoid the activation of useless event detection during
some intervals of time, reducing battery consumption, which is not an irrelevant
aspect for mobile devices.
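A minimal sketch of this adaptive policy is given below, in Java for uniformity with the other examples; the bounds and the growth factor are illustrative assumptions, while the resolution parameter corresponds to thresholds such as the 3 dB noise variation mentioned later:

```java
// Sketch of the adaptive sampling policy: the interval grows exponentially
// while nothing changes and shrinks when a variation above the resolution
// threshold is registered.
public class AdaptiveSampler {
    private long intervalMs;
    private final long minMs, maxMs;
    private final double factor;      // exponential growth/decay factor
    private final double resolution;  // e.g. 3 dB for noise, 1% for battery
    private double lastValue = Double.NaN;

    public AdaptiveSampler(long initialMs, long minMs, long maxMs,
                           double factor, double resolution) {
        this.intervalMs = initialMs;
        this.minMs = minMs;
        this.maxMs = maxMs;
        this.factor = factor;
        this.resolution = resolution;
    }

    // Returns the delay before the next acquisition, given the sampled value.
    public long nextInterval(double value) {
        boolean varied = !Double.isNaN(lastValue)
                && Math.abs(value - lastValue) >= resolution;
        lastValue = value;
        intervalMs = varied
                ? Math.max(minMs, (long) (intervalMs / factor))  // sample faster
                : Math.min(maxMs, (long) (intervalMs * factor)); // back off
        return intervalMs;
    }
}
```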
The tool receives notification messages from the application under test, and delivers
XML-based log files in which the events are saved according to a specific structure
that is detailed later in this section.
Mobile Logger therefore communicates with MultiDevice RemUsine through
the log files: in such files the logging tool records the detected events, and from such
log files MultiDevice RemUsine gets the information needed to reconstruct the user's
activity.
The log file is an XML-based file, structured into two main parts: a header and the
list of events that have been registered by the logger. The header contains
information related to the entire evaluation session: for instance, the username of the
tester, the time interval spent performing the test, the application tested, the list of
contextual aspects that have been registered and the related sampling parameters.
The events are recorded according to the following structure: (time of the event,
type of event, value), and they are categorised into different classes: contextual
events (all the events that have been registered as a consequence of a contextual
condition); intention events (used to signal that the user has changed the target task,
which has to be explicitly indicated); system events (the events generated by the
system in reply to a user's action); and interaction events, further specialised into
different categories such as click, focus, select, check, scroll, and edit.
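A hypothetical sketch of an event entry following this structure is shown below (the element and attribute names are our assumptions, not the exact Mobile Logger schema):

```java
// Illustrative representation of one log entry: (time, type of event, value),
// with the event falling into one of the four classes described above.
record LogEvent(long timestampMs, EventClass eventClass,
                String type, String value) {

    enum EventClass { CONTEXTUAL, INTENTION, SYSTEM, INTERACTION }

    // Serializes the entry as a hypothetical XML element of the log file.
    String toXml() {
        return String.format(
            "<event time=\"%d\" class=\"%s\" type=\"%s\" value=\"%s\"/>",
            timestampMs, eventClass, type, value);
    }
}
```

For example, `new LogEvent(1500, LogEvent.EventClass.CONTEXTUAL, "noise", "43dB").toXml()` would yield one contextual-event line of the log.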
As an example, we can consider an application of the tool focusing on the use of
information regarding noise and battery power. In this case, the tested application was
a museum guide available on a PDA device. When the tool is activated it appears as
shown in Figure 3(a): the user is supposed to fill in identification information, then
specify the aspects of the environment s/he is interested in considering (Figure 3(b)),
and also specify the target task (intention) s/he wants to achieve, by selecting it from a
list of high-level tasks supported by the application (Figure 3(c)).
Fig. 3. The logging tool: (a) when it starts; (b) after selecting the environmental conditions of
interest; (c) after selecting the intention
Then, after selecting the Start button, the log file regarding the user's activity is
created. Figure 4 shows an excerpt of the log file indicating the detection of noise
with an initial frequency of 500 ms and an increment factor of 50 ms. In addition,
only variations of at least 3 dB with respect to the previously detected value are
tracked. For the battery level, the temporal parameters used are similar, except that
the resolution is 1 percentage point.
During the session evaluated, the user is supposed to interact with the application.
Figure 5 shows an excerpt of the log file highlighting some events that have been
registered and referring to the abovementioned scenario. From top to bottom, we have
highlighted two environmental data regarding battery and noise; then, we have the
notification of a system event (the loading of a window in the application), lastly, we
have the notification of the selection of the target task (intention) made by the user.
Fig. 6. Representing the evaluation results in the previous version of the tool (top) and in the
new version (bottom)
The new version of the tool offers graphical visualisations of the data gathered, which
can be managed more easily by the evaluators (an example is shown in the bottom
part of Figure 6), thereby representing a step forward with respect to the previous
visualisation technique. The new graphical representations are described in further
detail in the next sections.
One of the most important points to bear in mind when deciding on the technique to
use for representing evaluation data is that the representation should make it easy to
identify the parts of the application where users encounter problems. Therefore, the
information represented is effective insofar as it is able to highlight such problems,
and consequently enables the evaluator to draw conclusions about the usability of the
application. One relevant aspect for effectively reconstructing a user session is
presenting the data according to their evolution over time. In addition, evaluators
should be able to easily carry out comparisons between the behaviour of different
users; therefore, the use of graphical representations (rather than, e.g., lengthy text-
based descriptions) can also provide the evaluators with an overview of the collected
data and allow them to compare data on different users.
Such considerations led to the type of representation we have investigated for
representing usage data: timelines. In particular, we identified three types of
timelines:
Simple Timeline: a linear representation of the events that have been recorded;
State Timeline: an extension of the first one, enriched with information about the
current state of the target task, which is represented through different colours
associated with disabled, enabled or active;
Deviation Timeline: a representation of the recording over three different parallel
levels, in which squared elements indicate the level of deviation from a sort of
ideal path.
In particular, we developed a number of panels in which not only whole sessions but
also segments of them are represented, both in relation to a single user and to groups
of users. Figure 7 shows an example of both representations (the white circles identify
the temporal occurrence of basic tasks whose performance is detected by an analysis
of the log file), each one associated with a different user. The lines contained within
the State Timeline identify the evolution of the state of the target task that has been
selected: disabled, enabled, active, which are represented in different colours. In the
Deviation Timeline (see Figure 7), each square represents a degree of deviation from
the ideal path that the designer assumed would be followed.
As Figure 7 shows, the evaluators can select the preferred type of representation
and specify whether they are interested in visualising data associated with a whole
session or with single tasks. The two solutions are basically similar, but the second
one is especially useful when the evaluator wishes to perform comparisons, because
the selection of a single task provides information independent of absolute times. In
this way, a target task explicitly selected by a user after a certain period of time from
the start of the session will be perfectly lined up with one from a different user that
started exactly at the beginning of the session. Within the timelines it is possible to
identify not only when a task occurred, but also the type of task, through the use of a
particular colour (see Figure 8).
The knowledge about the different contexts in which the user session evolved is
relevant for deriving whether any condition might have interfered with the user's
activity; it is therefore important for completely reconstructing the conditions in
which the experiment took place. Contexts that are relevant for the evaluation might
physically correspond to a certain place and situation, but they might also be
associated with the variation of some specific aspects (for example, noise or light)
even if the user is still in the same position. Accordingly, two basic manners of
defining contexts can be considered. On the one hand, there is the possibility to
explicitly list the contexts that are judged relevant and to define each of them in terms
of the various contextual dimensions we are interested in. For instance, it might be
the case that we are interested in only two specific contexts: one characterised by
high levels of noise, light and network connectivity (such as the office), and another
characterised by a low level of noise, a medium level of light and a low level of
network connectivity, which might be the home. On the other hand, we might wish to
specify just the variations that determine a change of context, e.g., the variation of a
certain parameter beyond a specific threshold value or percentage. For instance, we
might want to investigate the impact on the usage of the application whenever a
reduction or increase of 30% in light is registered in the environment.
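The two manners can be sketched as follows in Java (the context names, levels and the 30% rule follow the examples above; the types themselves are our illustration):

```java
// Illustrative sketch of the two ways of defining contexts: (a) explicitly
// enumerated contexts, and (b) a variation rule signalling a context change.
public class ContextDefinitions {
    enum Level { LOW, MEDIUM, HIGH }

    // (a) An explicitly listed context, defined by its contextual dimensions.
    record Context(String name, Level noise, Level light, Level connectivity) {}

    static final Context OFFICE =
            new Context("office", Level.HIGH, Level.HIGH, Level.HIGH);
    static final Context HOME =
            new Context("home", Level.LOW, Level.MEDIUM, Level.LOW);

    // (b) A relative-variation rule: the context changes when the parameter
    // varies by more than the given fraction (0.30 = 30%).
    static boolean contextChanged(double previous, double current,
                                  double fraction) {
        return previous != 0
            && Math.abs(current - previous) / Math.abs(previous) > fraction;
    }
}
```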
Fig. 9. Possibility of selecting task performance information related to a specific context of use
Once the different contexts have been identified, various aspects can be analysed
by the evaluator. For instance, if the achievement of a certain goal is obtained by
carrying out the related activities partially in one environment and partially in other
environments, it might be interesting to see how the performance of a certain task
evolves. In other cases, it might be interesting for the evaluator to carry out an
analysis that takes into account a specific context and to understand the evolution of
the sessions in that specific context. For instance, it might be useful to understand the
amount of time the user has spent in a specific environment and the number of tasks
that have been completed in such an environment. In Figure 9, the time spent
carrying out a certain task in a specific context is visualised: the evaluator can select
the specific context ("Home" in Figure 9) and the tool then shows how much time
was needed by the user in order to carry out the tasks in that specific context.
References
1. Hartson, R.H., Castillo, J.C., Kelso, J.T., Neale, W.C.: Remote Evaluation: The Network
as an Extension of the Usability Laboratory. In: CHI 1996, pp. 228–235 (1996)
2. Ivory, M.Y., Hearst, M.A.: The state of the art in automating usability evaluation of user
interfaces. ACM Computing Surveys 33(4), 470–516 (2001)
3. Tullis, T., Fleischman, S., McNulty, M., Cianchette, C., Bergel, M.: An Empirical
Comparison of Lab and Remote Usability Testing of Web Sites. Usability Professionals
Conference, Pennsylvania (2002)
4. Lister, M.: Streaming Format Software for Usability Testing. In: Proceedings ACM CHI
2003, Extended Abstracts, pp. 632–633 (2003)
5. Tennent, P., Chalmers, M.: Recording and Understanding Mobile People and Mobile
Technology, E-social science (2005)
6. Andreasen, M., Nielsen, H., Schrøder, S., Stage, J.: What happened to remote usability
testing? An empirical study of three methods. In: CHI 2007, pp. 1405–1414 (2007)
7. Waterson, S., Landay, J.A., Matthews, T.: In the Lab and Out in the Wild: Remote Web
Usability Testing for Mobile Devices. In: CHI 2002, Minneapolis, Minnesota, USA,
pp. 796–797 (April 2002)
8. Stoica, A., Fiotakis, G., Simarro Cabrera, J., Frutos, H.M., Avouris, N., Dimitriadis, Y.:
Usability evaluation of handheld devices: A case study for a museum application. In:
Bozanis, P., Houstis, E.N. (eds.) PCI 2005. LNCS, vol. 3746, Springer, Heidelberg (2005)
9. Avouris, N., Komis, V., Margaritis, M., Fiotakis, G.: An environment for studying
collaborative learning activities. Journal (2004)
10. Serrano, M., Nigay, L., Demumieux, R., Descos, J., Losquin, P.: Multimodal interaction
on mobile phones: Development and evaluation using ACICARE. In: Proceedings of the
8th Conference on Human-Computer Interaction with Mobile Devices and Services,
Mobile HCI 2006, Helsinki, Finland, pp. 129–136 (September 12-15, 2006)
11. Waterson, S., Hong, J., Sohn, T., Heer, J., Matthews, T., Landay, J.: What Did They Do?
Understanding Clickstreams with the WebQuilt Visualization System. In: Proceedings of
the ACM International Working Conference on Advanced Visual Interfaces, Trento, Italy
(September 12-15, 2002)
12. Maly, I., Slavik, P.: Towards Visual Analysis of Usability Test Logs Using Task Models.
In: Coninx, K., Luyten, K., Schneider, K.A. (eds.) TAMODIA 2006. LNCS, vol. 4385,
pp. 24–38. Springer, Heidelberg (2007)
13. Paganelli, L., Paternò, F.: Tools for Remote Usability Evaluation of Web Applications
through Browser Logs and Task Models. Behavior Research Methods, Instruments, and
Computers 35(3), 369–378 (2003)
14. Paternò, F.: Model-based design and evaluation of interactive applications. Springer,
Heidelberg (1999)
Defining Task Oriented Components
Abstract. For many years, tailorability has been identified as a very important
property of system design, in order to take care of users' emerging needs towards
their working environments. At the same time, component-based approaches have
been revealed as an interesting solution for tailorability, allowing the dynamic
integration of components into global environments supporting specific tasks.
However, component technologies still face some drawbacks, mainly due to a
semantic problem. In order to overcome these shortcomings, we propose in this paper
a new solution that merges task models, from the HCI research field, with existing
component models. It particularly consists of a new design approach, the Task
Oriented (TO) approach, supported by STOrM, a tool dedicated to the creation and
manipulation of Task Oriented Components (TOCs).
Keywords: Component, integration, task, modeler.
1 Introduction
For the past years, building on their multidisciplinary experience, many different
research fields related to the HCI (Human Computer Interaction) research domain
have demonstrated that tailorability is a key concept that has to be taken into account
while designing software applications [27]. At the same time, following the growth
of the Internet, the need for global environments supporting complex and possibly
cooperative activities has been identified. Along this track, CSCW (Computer
Supported Cooperative Work) researchers have shown that component-based
approaches could support tailorability in global environments [15].
As defined by Szyperski and Pfister [25], components can be deployed
independently and are subject to composition by third parties. The fact that a
component should be designed for use by third parties implicitly raises questions
about its integration in the environment where it will be used: this integration should
not be realized by its developers but by its users.
We have been working for many years in the CSCW research domain, creating
groupware systems like CooLDev (Cooperative Layer supporting software
Development) [11]. This work led us to identify a strong issue in existing component-
based technologies regarding their integration means. In this paper, we will first show
that this issue is closely linked to a semantic loss. The second part will then propose a
new approach addressing this issue.
message in the chat tool, thus warning the community about the changes, and that it
will dynamically modify the users' rights regarding other tools/components, thus for
example allowing the testers to evaluate and annotate the new software version.
Finally, it is important to remember that platform tailorability is a strongly required
property [27][16][28]. Tailorability implies that the components supporting a
particular activity do not know each other a priori: in other words, the chat and the
CVS have not been created by the same developers and do not know that they will be
involved in such a synergy. At the same time, CooLDev cannot anticipate the needs
of its future users and thus cannot know in advance which components it will
eventually integrate. The main issue is then to propose the means to make these
components open enough and well designed enough to be easily and finely integrated,
in order to support the mechanisms that we have just presented.
method that effectively registers the user in a server channel should have been called
first. Of course, in the chat example, one can imagine that someone integrating the
component and discovering connect and sendMessage will certainly understand
that authentication has to be performed before sending messages. This can be
explained by the fact that this corresponds to a well-known and stereotyped task (a
kind of pattern), which also explains why we have chosen this example in this paper.
However, comprehension is less evident when considering the changeUserInfo
method. One can imagine that if the user's data, like her/his nickname or icon, are
stored on her/his own computer, it is possible to call changeUserInfo without a
preliminary connection step. On the other hand, one can imagine that if the data are
centralized on the server, the connect method has to be called first. From the
component user's point of view, the ambiguity exists, and the question cannot be
solved without proceeding to fastidious tests, or without exploring the component
implementation, if available. More generally, considering more complex and less
stereotyped examples, with components proposing method names that make sense for
their designers but not necessarily for their users, it appears that the comprehension of
components allowing their appropriate integration is still a strong issue.
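The ambiguity can be pictured with the following illustrative Java interface (our reconstruction of the discussed methods, not code from the paper): nothing in the signatures tells the integrator in which order the methods must be called.

```java
// Illustrative integration interface of the chat component: the use logic
// (connect first? changeUserInfo before or after connect?) is invisible here.
public interface ChatComponent {
    void connect(String nickname, String server);   // must this come first?
    void sendMessage(String channel, String text);  // presumably after connect
    void changeUserInfo(String nickname, byte[] icon); // order is ambiguous
}
```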
For all these reasons, usually only very motivated developers are capable of really
integrating most of the components emerging from the Internet because, by studying
existing source code, they have to mentally reconstruct almost all the functioning
mechanisms of the tool they need to integrate. This issue limits reuse to very
specialized users and reveals a strong drawback of the existing component-based
technologies with regard to the expectations they have generated in trying to support
the creation of tailorable environments.
This analysis leads us to think that the difficulties encountered in component
integration mainly come from a semantic loss in their documentation. In fact, the
interested research community has already noticed this semantic lack, and some work
proposing new solutions is already in progress [10][14]. However, considering the
existing models, even computer scientists have difficulties in integrating external
components to create their applications. This explains why these new propositions are
still directed at experienced developers. In our own work regarding tailorability [4],
we have shown that facilitating fine and dynamic component integration would also
be very valuable for less experienced users. However, one point seems important for
guiding our work: these users, computer scientists or not, are not necessarily familiar
with the involved technology, but they are all guided by the task they need to
perform.
task-oriented approaches, even if several propositions have tried to palliate this lack
[5][18][20][23]. Unfortunately, we can only observe the lack of concrete results in
recent software development teams' tools.
Thus, it appears that the tasks model used in the preliminary phases of the design
process is progressively diluted in the implementation and is not explicitly accessible
anymore in the delivered component. This mainly explains why, as we mentioned
before, integrators have to go into the component's code to try and extract its
functioning and especially its use logic [22]. In other words, the integrator has to
almost completely and mentally reconstruct the underlying tasks model in order to
make it explicit again.
This is why we propose to make better use of the components' tasks models, which
can be seen as a kind of missing link disappearing between the design phase and the
produced code. We call this new software component type Task Oriented Component
(TOC). As shown in Fig. 1 (lower part), the basic principle behind a TOC is that it
contains its classical OO documentation and is explicitly augmented with the tasks
model describing its use logic. In this approach, some parts of the functional code
(component methods) are linked to the tasks model they come from, thus allowing
TOC contextualization from a higher abstraction level.
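As an illustration of this principle (not the STOrM implementation), one can imagine the link between functional code and tasks model being carried by a simple annotation:

```java
// Hypothetical illustration of the TOC principle of Fig. 1: methods of the
// functional code are explicitly linked to the tasks of the delivered tasks
// model. The @Task annotation and the task names are our own sketch.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Task {
    String value();  // name of the task in the delivered tasks model
}

class ChatToc {
    @Task("Connect to the server")
    public void connect(String nickname, String server) { /* ... */ }

    @Task("Send a message")
    public void sendMessage(String channel, String text) { /* ... */ }
}
```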
Moreover, we can notice that tasks models, when created, already serve as shared
objects facilitating better communication between the different actors (including
end-users) involved in the complex design process. Thus, the TOC approach should
also serve as a better support for collaboration between these actors.
4 Creating TOCs
Fig. 3. Linking tasks model and implementation code through the integration wrapper
It is important to notice that, with this augmentation, the designers clearly indicate
that connect is the first method to be called when the chat is instantiated.
This class provides the entry points that will be involved in the future TOC integration. Considering the augmented tasks model of the chat introduced earlier, the ChatIntegrationWrapper class has been generated and is shown in Fig. 5. A Javadoc embryo is also generated, benefiting from the information (e.g. annotations) available in the task associated with each method. This skeleton is a basis that the development team only has to complement while implementing the method bodies and the Javadoc from its point of view.
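As an illustration, such a generated integration wrapper might resemble the following hedged Java sketch; the actual ChatIntegrationWrapper of Fig. 5 is not reproduced here, and everything beyond the connect entry point named in the text is assumed. It reuses the hypothetical Chat class and @Task annotation from the previous sketch.

```java
/**
 * Sketch of a generated integration wrapper with its Javadoc embryo.
 */
public class ChatIntegrationWrapper {

    private final Chat chat = new Chat();

    /**
     * Connect a user to the chat server.
     * Generated from the "Connect" task: the augmented tasks model marks
     * this as the first method to call when the chat is instantiated.
     */
    public void connect(String userName) {
        // Body to be complemented by the development team.
        chat.connect(userName);
    }
}
```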
5 Using TOCs
Even if the classical OO introspection mechanism is still available, the TOC provides a new viewpoint over its integration methods because its tasks model is delivered with the component. This way, opening a TOC in STOrM (or any future compatible tool) provides a new type of introspection that helps in discovering the integration methods, not through a simple list, but through the task that is supported by the component. This new viewpoint alleviates the semantic problems exposed before.
Considering the chat example (cf. Fig. 6), it is now possible to easily study its functioning and to quickly discover the functions that its designers judged to be key elements. Each integration possibility offered by the component corresponds to an augmented task that is contextualized in the frame of the global task supported by the component. The integration methods are linked to these tasks. Thanks to this, the ambiguity introduced by the changeUserInfo method described before disappears, because the augmented tasks model, with its task transition semantics, clearly indicates that the connection task has to be realized first. The integrator does not need to know where data are stored: the tasks model is clear enough.
Finally, we underline that, going further in displaying the different aspects of the TOC, STOrM can simultaneously display the component's augmented tasks model and each method's Javadoc. This contextualizes classical information coming from a lower abstraction level, like comments about a method's parameters, in the frame of the task's higher abstraction level.
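A minimal sketch of such task-based introspection, assuming the hypothetical @Task annotation introduced earlier, could look as follows; the TocIntrospector class is illustrative and not part of STOrM.

```java
// Discovering integration methods through the delivered tasks model
// instead of a flat method list.
import java.lang.reflect.Method;

class TocIntrospector {

    /** Print the tasks of the component and the methods linked to them. */
    static void describe(Class<?> componentClass) {
        for (Method m : componentClass.getDeclaredMethods()) {
            Task t = m.getAnnotation(Task.class);
            if (t != null) {
                System.out.printf("task %s -> method %s : %s%n",
                        t.id(), m.getName(), t.description());
            }
        }
    }
}
```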
6 Collateral Development
In this paper, due to lack of space, we have limited our demonstration to the component integration issue. However, we would also like to briefly underline another point closely linked to our approach. Another issue with component-based technologies concerns the means to verify how a component effectively supports the task it has been designed for. More precisely, we are interested in finding solutions that help to generate and analyze component use traces.
From this point of view, our work uses Aspect Oriented Programming (AOP) [8] to generate traces [26] reflecting the component's use: its execution involves method calls that support the interactions between the user and the component, and tracing these methods helps to analyze the task performed by a user with the component. This technique offers several benefits that we will not describe here. However, without our TOC approach, one drawback is that the ergonomist wanting to trace a component has to browse the code to identify and select the methods to be traced. Due to the semantic loss described in this paper, this work can hardly be realized, since an ergonomist is usually not a computer scientist and the available implementation methods do not easily correspond to the user tasks he or she needs to trace.
This further explains how the TOC approach also tries to take care of the ergonomist's needs, and how this method should amplify and favor a balanced cooperation: in the first step of the method, while creating the tasks model, the ergonomist also describes the tasks he wants to trace later. While augmenting this tasks model in step 2, the computer scientist then also defines key methods for tracing. These methods are not necessarily the integration methods constituting the wrapper, but correspond to the core implementation code of the component, i.e. the component's classes that may be involved in the wrapper's implementation as described in Fig. 3. Using a tool like STOrM, the corresponding skeleton can be generated and implemented according to the augmented tasks model. Thus, thanks to the TOC technology, an ergonomist can more easily create aspects that will generate the expected use traces. He no longer has to laboriously browse the component's code; he just has to identify the tasks he wants to trace, which corresponds to his or her abstraction level. Since the tasks model is directly connected to the corresponding methods, STOrM can help in generating the aspects over the (implicitly) selected methods.
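For illustration, a trace aspect that a tool like STOrM might generate could resemble the following AspectJ sketch; the aspect name and the hard-coded pointcut on the connect method of the earlier hypothetical Chat class are assumptions for the example.

```aspectj
// Hedged sketch of a generated use-trace aspect over a task-linked method.
public aspect UseTraceAspect {

    pointcut tracedTask() : execution(void Chat.connect(String));

    before() : tracedTask() {
        System.out.println("[trace] task Connect started: "
                + thisJoinPoint.getSignature());
    }

    after() : tracedTask() {
        System.out.println("[trace] task Connect finished");
    }
}
```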
7 Conclusion
As demonstrated by many years of multidisciplinary research in software design, tailorability has to be an intrinsic property of new interactive systems in order to take into account the inevitable emergence of new user needs. It has been shown that
References
1. Augustin, L., Bressler, D., Smith, G.: Accelerating software development through collaboration. In: 24th International Conference on Software Engineering, pp. 559–563 (2002)
2. Baron, M., Lucquiaud, V., Autard, D., Scapin, D.L.: K-MADe: Un environnement pour le noyau du modèle de description de l'activité. In: 18ème Conf. Francophone sur l'Interaction Homme-Machine IHM 2006, pp. 287–288 (2006)
3. Booth, D., Liu, C.K.: Web Services Description Language (WSDL) Version 2.0 (2006), available at: http://www.w3.org/TR/wsdl20-primer
4. Bourguin, G., Derycke, A., Tarby, J.C.: Beyond the Interface: Co-evolution Inside Interactive Systems - A Proposal Founded on Activity Theory. In: Blandford, Vanderdonckt, Gray (eds.) Interaction without Frontiers, Proc. of HCI 2001, People and Computers, pp. 297–310. Springer, Heidelberg (2001)
5. Bruins, A.: The Value of Task Analysis in Interaction Design. In: Task to Dialogue: Task-
Based User Interface Design Workshop, CHI 1998 (1998)
6. Clerckx, T., Luyten, K., Coninx, K.: DynaMo-AID: a Design Process and a Runtime Architecture for Dynamic Model-Based User Interface Development. In: 9th IFIP Working Conference on Engineering for Human-Computer Interaction. Pre-Proceedings, Hamburg, Germany, July 11-13, pp. 142–160 (2004)
7. Dougiamas, M.: Moodle: open-source software for producing internet-based courses
(2001), Available at: http://moodle.com
8. Filman, R., Elrad, T., Clarke, S., Aksit, M.: Aspect-oriented software development. Addi-
son-Wesley, Reading (2004)
9. Heineman, G.T., Councill, W.T.: Component-based software engineering: putting the
pieces together. Addison-Wesley Longman Publishing Co., Inc, Boston, MA (2001)
10. Kiniry, J.R.: Semantic Component Composition. In: Cardelli, L. (ed.) ECOOP 2003.
LNCS, vol. 2743, Springer, Heidelberg (2003)
11. Lewandowski, A., Bourguin, G.: Supporting Collaboration in Software Development Activities. In: 10th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2006), May 3-5, 2006, vol. 1, pp. 381–387. IEEE Press, Los Alamitos (2006)
12. Lu, S., Paris, C., Vander Linden, K., Colineau, N.: Generating UML Diagrams from Task Models. In: CHINZ 2003, Dunedin, New Zealand (2003)
13. Maes, P.: Concepts and experiments in computational reflection. In: Object-oriented programming systems, languages and applications, New York, USA, pp. 147–155 (1987)
14. Medjahed, B., Bouguettaya, A., Elmagarmid, A.K.: Composing Web services on the Semantic Web. The International Journal on Very Large Data Bases 12(4), 333–351 (2003)
15. Mørch, A.: Three levels of end-user tailoring: customization, integration, and extension. In: Method and Tools for Tailoring Object-Oriented Applications: An Evolving Artifacts Approach. PhD thesis, Dept. of Informatics, University of Oslo, pp. 41–51 (1997)
16. Mørch, A.I., Stevens, G., Won, M., Klann, M., Dittrich, Y., Wulf, V.: Component-based technologies for end-user development. Communications of the ACM 47(9), 59–62 (2004)
17. Mori, G., Paternò, F., Santoro, C.: CTTE: Support for Developing and Analysing Task Models for Interactive System Design. IEEE Transactions on Software Engineering 28(8), 797–813 (2002)
18. Nunes, N., Cunha, J.F.e.: Towards a UML profile for interaction design: the Wisdom approach. In: Evans, A., Kent, S., Selic, B. (eds.) UML 2000. LNCS, vol. 1939, Springer, Heidelberg (2000)
19. Object Technology International, Inc.: Eclipse Platform Technical Overview (2003),
Available at: http://www.eclipse.org
20. Pinheiro da Silva, P.: Object Modelling of Interactive Systems: The UMLi Approach. PhD
Thesis, University of Manchester, United Kingdom (2002)
21. Reichart, D., Forbrig, P., Dittmar, A.: Task models as basis for requirements engineering and software execution. In: [24], pp. 51–58 (2004)
22. Richard, J.F.: Logique de fonctionnement et logique d'utilisation. Rapport de recherche INRIA no. 202 (avril 1983)
23. Scogings, C., Phillips, C.: Linking Task and Dialogue Modeling: Toward an Integrated Software Engineering Method. In: Diaper, D., Stanton, N. (eds.) Handbook of Task Analysis for Human-Computer Interaction, pp. 551–566. Lawrence Erlbaum Associates, Mahwah, NJ (2004)
24. Slavík, P., Palanque, P.: Proceedings of the 3rd Int. Workshop on Task Models and Diagrams for User Interface Design - TAMODIA 2004. ACM, New York (2004)
25. Szyperski, C., Pfister, C.: Workshop on component-oriented programming, summary. In:
Cointe, P. (ed.) ECOOP 1996. LNCS, vol. 1098, Springer, Heidelberg (1996)
26. Tarby, J.C., Ezzedine, H., Rouillard, J., Tran, C.D., Laporte, P., Kolski, C.: Traces using aspect oriented programming and interactive agent-based architecture for early usability evaluation: Basic principles and comparison. In: HCI International, Beijing, P.R. China. LNCS, vol. 4550, pp. 632–641. Springer, Heidelberg (2007)
27. Vicente, K.J.: HCI in the global knowledge-based economy: Designing to support worker adaptation. ACM Transactions on Computer-Human Interaction 7(2), 263–280 (2000)
28. Won, M., Stiemerling, O., Wulf, V.: Component-Based Approaches To Tailorable Systems. In: Lieberman, H., Paternò, F., Wulf, V. (eds.) End-User Development. Human-Computer Interaction Series, pp. 115–142. Kluwer Academic, Dordrecht (2005)
Patterns in Task-Based Modeling of User Interfaces
1 Introduction
User interfaces face increasing challenges. They convey the output of the underlying application and the input from application users, and hence have to cope with the complexity of both sides [6], as Fig. 1 illustrates.
On the one hand, the increasing complexity of software applications in general influences user interface design, since the application functionality has to be accessed via the user interface.
On the other hand, the user interface needs to accommodate different types of users, ranging from computer novices to computer experts. Additionally, a wide spectrum of new devices beside the desktop PC, like PDAs or mobile phones, has caused a growing demand for device-spanning applications in recent years. However, running an application on different devices has often meant developing a user interface for each of the devices. Finally, the different and dynamically changing environments in which the applications are used have to be considered.
Model-based user interface development has gained momentum in recent years, and various development approaches have been suggested. However, they often differ in the underlying models, the modeling method and the modeling notation used to describe the models. Despite the benefits of model-based user interface development, creating the models and linking them to each other is still a time-consuming activity. Furthermore, the approaches lack an advanced concept of reuse.
To avoid these disadvantages, patterns may be employed. Patterns describe recurring solutions in a generic form, so that they are applicable in different contexts while adapting the solution to the given situation. Since the solution has to be specified only once, when creating the pattern, patterns provide an advanced concept of reuse. Furthermore, patterns are suitable for reducing the complexity of model-based approaches, because they provide a more aggregated perspective on the development.
In this paper an approach integrating model-based and pattern-driven user interface development is introduced. In the next chapter related work is discussed. Following that, in the third chapter, a framework is introduced that describes the general idea of how model-based approaches for user interface development can be extended with patterns. Afterwards, in the fourth chapter, it is shown by example how to implement the framework in order to derive a concrete pattern-driven model-based approach. The capability of the derived approach is illustrated by a case study in the fifth chapter. Finally, the concepts introduced in this paper are summarized and future avenues are outlined.
2 Related Work
The idea of model-based development of user interfaces is to describe the user interface by a set of models, each of which specifies a certain aspect of the user interface. For this purpose various model-based approaches have been suggested in recent years. Usually each approach contains a modeling method that describes which models have to be created in order to specify the final user interface, and a modeling language that is used to specify the single models.
The left box in Fig. 2 highlights three model-based approaches as examples. The Mobi-D (Model-Based Interface Designer, [13]) approach includes a modeling method that is based on a user-task, a domain, a user, a dialog and a presentation model. Furthermore, XIML (Extensible Interface Markup Language, [23]) is included in the approach in order to specify the single models. The One Model, Many Interfaces [12] approach suggests task, abstract and concrete models for user interface modeling. It aims to support the development of multimodal user interfaces. For the purpose of specifying the single models it contains the Teresa XML notation [16]. The UsiXML (User Interface Extensible Markup Language, [9]) approach is structured according to the Cameleon Unifying Reference Framework [5], which attempts to characterize the process of developing user interfaces applicable in different contexts of use. The UsiXML approach contains, among others, task, domain, context, abstract and concrete user interface models.
Patterns, pattern languages and a process of pattern application were first proposed by Christopher Alexander in the domain of urban architecture [1, 2]. According to Alexander, a pattern "describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without doing it the same way twice" [2].
The pattern concept was quickly adapted to other domains. First references to Alexander's work in user-interface-related papers were published in User Centered System Design [10]. However, interest in pattern languages for interaction design has gained momentum only in recent years [3]. As outlined on the right-hand side of Fig. 2, user interface patterns have been suggested in the form of pattern collections and pattern languages. Van Duyne et al. [19] focus on patterns that describe solutions in customer-centered web design, closely following Alexander's format of pattern representation. Tidwell [17] defines a pattern language that may be employed in user interface design for desktop applications as well. While emphasizing how and why usability is improved by employing their patterns, van Welie et al. [21] focus on providing user-centered solutions for user interface design. The patterns in the mentioned languages are mainly described in a textual or graphical notation.
Patterns in the context of user interface modeling are a rather new and rarely examined research field. Such patterns encapsulate solutions for the creation of the single user interface models. In [11] Paternò suggests task and architectural patterns. The task patterns capture a high-level description of recurring activities performed while interacting with the application. For describing the patterns a textual pattern notation is used. Additionally, the task structure of the task patterns is described using the CTT (ConcurTaskTrees, [11]) notation. In line with the task patterns, the architectural patterns describe recurring system components used to support interaction with the user independent of the implementation language. Some task patterns based on the suggested approach are introduced in [4].
Sinnig [15] proposes a framework for employing patterns in model-based user interface design. The framework includes a set of models, a model-based user interface development method for constructing these models, and a set of user interface patterns that can be used in this construction. The patterns are described in a uniform textual and graphical notation. For the task patterns, TPML (Task Pattern Markup Language), a machine-readable notation, was proposed.
The framework is built around abstract components, which are specified when the framework is implemented. In what follows, the framework's phases, its main elements and the abstract components are discussed in more detail.
During the pattern selection phase the designer selects an appropriate user interface pattern that shall be applied to the user interface models. The pattern is chosen from the pattern repository, which contains the available patterns of the pattern language. Within the pattern language the patterns are hierarchically structured into patterns and sub-patterns. Each pattern in the pattern language is specified according to a specific pattern notation. A user interface modeling pattern in the framework contains a model fragment that describes the pattern solution. Compared to concrete user interface models, the model fragment is generic in order to be applicable in various contexts and to allow the pattern to be instantiated in multiple ways.
The generic parts of the selected pattern are concretized during the pattern instantiation phase. This interactive process results in a pattern instance derived from the original pattern. Since all generic pattern parts are concretized, the resulting pattern instance does not differ in its structure from a concrete user interface model.
Eventually the pattern instance is integrated into the user interface models. This is done in the pattern integration phase. First the model fragment of the created pattern instance is integrated into the corresponding model. Next the model elements contributed by the pattern instance are linked with the existing elements of the user interface model. As a result one coherent model is obtained.
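The three phases can be summarized in a small sketch. The Java interfaces below are illustrative assumptions that merely make the selection-instantiation-integration flow explicit; they do not reproduce the framework's actual API.

```java
// Sketch of the three framework phases as a minimal pipeline.
import java.util.Map;

interface PatternRepository {
    UIPattern select(String patternName);             // pattern selection
}

interface UIPattern {
    // Pattern instantiation: the designer's decisions concretize the generic parts.
    PatternInstance instantiate(Map<String, String> variableBindings);
}

interface PatternInstance {
    void integrateInto(UIModel model);                // pattern integration
}

interface UIModel { }

class PatternApplication {
    static void apply(PatternRepository repo, UIModel model,
                      Map<String, String> designDecisions) {
        UIPattern pattern = repo.select("Form Pattern");
        PatternInstance instance = pattern.instantiate(designDecisions);
        // Links the contributed elements with the existing model elements.
        instance.integrateInto(model);
    }
}
```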
The pattern languages suggested so far describe the single patterns mainly in a textual and graphical form. This information helps the developer to determine whether a specific pattern is applicable in a concrete design situation. In the following, this kind of information is referred to as contextual information. However, the patterns usually do not contain machine-readable information about the solutions they capture. Thus implementing the solutions is usually left to the developer. Including this information in the pattern would enable computer support for the entire pattern application process. Such machine-readable information is referred to as implementational information in the following.
In the context of the pattern application framework, UsiPXML (User Interface Pattern Extensible Markup Language) has been developed. It allows describing both contextual and implementational information for a pattern. The composition of UsiPXML is illustrated in Fig. 5.
The contextual information in UsiPXML is structured according to the format suggested by PLML (Pattern Language Markup Language, [8]). PLML was an output of the CHI 2003 (Conference on Human Factors in Computing Systems) workshop with the goal of defining a common structure for patterns. Up to that point most authors used their own format for describing their patterns. PLML contains common elements, like for instance the pattern name, the problem and the solution, that can be found in most of the patterns suggested so far.
As mentioned before, user interface modeling patterns describe the solution in the form of model fragments. In order to specify these fragments in a machine-readable way, UsiPXML is based on UsiXML (User Interface Extensible Markup Language, [18]). Since UsiXML is suitable for user interface model specification, it is also suitable for describing the pattern solution captured in the form of a model fragment. However, by definition patterns describe the solution in a generic way, so that it can be applied in different contexts. In order to describe the solution in a machine-readable but generic way, UsiXML has been extended with pattern-specific components, as outlined in the lower part of Fig. 5. These components are structure attributes, variable declarations and assignments, and pattern references and interfaces. These extensions will be introduced in the following using an illustrative example. A more detailed description of the extensions can be found in [14].
Fig. 6 (a) shows the UsiPXML structure of the Form Pattern [15]. It can be employed in situations where the user shall enter a set of related values. Fig. 6 (b) shows a possible instance generated from the pattern. A first concept that can be found in UsiPXML is structure attributes. Structure attributes are used to assign the number of allowed occurrences to elements contained in the pattern structure. The minimum and maximum number of allowed occurrences of an element is indicated in brackets behind the element. For instance, the element Box: Single Input (0, unbound) in the middle of Fig. 6 (a) is allowed to occur arbitrarily often in the final pattern instance. The concrete number of occurrences of the element is determined by the designer during the pattern instantiation. In the instance shown in Fig. 6 (b) the single input element occurs five times.
Furthermore, variables can be defined within patterns. They serve as placeholders for concrete values. During the pattern instantiation the designer is prompted to assign values to all variables that occur within the selected pattern. The Form Pattern contains, for instance, a variable introductionText that allows specifying an introduction text, which is displayed at the top of the form as shown in Fig. 6 (b). Variables are evaluated by assignment elements in the pattern. This evaluation returns a value that is assigned to attributes of pattern elements. To summarize, variables represent the design decisions of the user interface designer. Assignments evaluate these decisions and, according to this evaluation, adapt the structure of the pattern solution.
A last concept that can be found in UsiPXML is pattern references and pattern interfaces. Pattern references allow employing sub-patterns in order to refine a pattern solution. As shown in the lower part of Fig. 6 (b), the Form Pattern refers to the Unambiguous Format Pattern [15, 20] as a sub-pattern. The purpose of the Unambiguous Format Pattern is to provide a single input element depending on the type of information that is entered in this input. Therefore the type of information that shall be entered is passed to the sub-pattern via its pattern interface. The sub-pattern evaluates this information and provides the appropriate input element.
In summary, UsiPXML allows describing the contextual and implementational information of a pattern. The implementational information describes the pattern solution in a machine-readable, generic way.
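As an illustration of these extensions, the following Java sketch models the pattern-specific concepts described above; all class and field names are assumptions made for this example, not the actual UsiPXML schema.

```java
// Illustrative model of structure attributes (min/max occurrences),
// variables and sub-pattern references in a generic pattern element.
import java.util.ArrayList;
import java.util.List;

class PatternElement {
    String name;            // e.g. "Box: Single Input"
    int minOccurs;          // structure attribute, e.g. 0
    int maxOccurs;          // structure attribute; Integer.MAX_VALUE for "unbound"
    List<PatternElement> children = new ArrayList<>();
    String subPatternRef;   // e.g. "Unambiguous Format Pattern", or null
    String variableRef;     // variable evaluated by an assignment, e.g. "introductionText"

    PatternElement(String name, int minOccurs, int maxOccurs) {
        this.name = name;
        this.minOccurs = minOccurs;
        this.maxOccurs = maxOccurs;
    }
}
```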
Fig. 6. UsiPXML structure (a) of the "Form Pattern" and a pattern instance generated from the
pattern (b)
A last component that has to be specified in order to implement the pattern application framework is the pattern language. It contains the available patterns and the relations among them. For this purpose a set of patterns in UsiPXML format has been developed and integrated into the User Interface Modeling Pattern Language. Fig. 7 shows that the language is divided into four pattern classes.
Task patterns describe recurring tasks in a generic manner. A set of task patterns has already been specified by Sinnig [15] in the TPML (Task Pattern Markup Language) pattern notation. Some of these patterns have been transformed to the UsiPXML pattern notation and integrated into the pattern language, as displayed in the top left of Fig. 7. Dialog patterns describe recurring navigational structures of user interfaces. They are employed in the creation of the dialog model.
Layout patterns capture recurring solutions for the layout of user interface elements. Examples are the positioning of elements or the setting of layout attributes like size, color or font. Presentation patterns describe recurring concrete user interface structures. These may be groups of concrete user interface elements or, in more specific patterns, single user interface elements. In the next chapter some patterns will be introduced using an illustrative example. The entire pattern language can be found in [14].
Task modeling starts with creating an initial task model, as outlined in Fig. 8. It shows that the user has to authenticate himself before he can access the main functionality. After having accessed the main functionality he can concurrently perform the Manage Service Schedule, Find Documentation and Assemble Maintenance Jobs tasks.
For the further refinement of the initial task model, task patterns are employed. As an example, the application of the Login task pattern [15] for refining the Authenticate task is shown. The Login Pattern is applicable when the user needs to identify himself in order to access secured data or perform authorized operations. The Login Pattern, as outlined in Fig. 9 (a), employs the Multi Value Input Form Pattern [11, 15] as a sub-pattern.
The Multi Value Input Form Pattern can be used when the user has to provide a set of related values. In the context of the Login Pattern it is employed to specify which coordinates have to be provided to authenticate the user. Fig. 9 (b) shows the pattern instance obtained by instantiating the Login Pattern and its sub-pattern for the Maintenance Support System application. In the next step this pattern instance is integrated into the initial task model, replacing the Authenticate task. In a similar way the other tasks may be refined by applying appropriate task patterns. This shall not be discussed here any further.
Fig. 9. "Login Pattern" (a) and pattern instance (b)
A dialog pattern that can be employed in the creation of the application's dialog model is the Clear Entry Point Pattern [17]. It suggests a navigational structure where, starting from an entry view, transitions to all main sub-views are provided. The user can thus easily get an overview of the provided content and navigate to the desired sub-view. Fig. 10 (a) shows the UsiPXML structure of the Clear Entry Point Pattern.
In the creation of the dialog model for the Maintenance Support System application, the Clear Entry Point Pattern is employed to design a Main View from which the user can navigate to a Manage Schedule View, a Find Documentation View and an Assemble Maintenance Jobs View. The resulting instance is shown in Fig. 10 (b).
Fig. 10. "Clear Entry Point Pattern" (a) and instance (b)
Presentation and layout patterns are employed to refine the abstract user interface model as described in 4.1. In order to keep the example simple, the focus is on refining the Assemble Maintenance Jobs window, which was automatically derived from the corresponding view.
Within the Assemble Maintenance Jobs window the technician can select single jobs, retrieve detailed information or send this information to the PDA. Therefore the window is split into two panes. One pane contains a list of maintenance jobs and the second provides the interaction elements needed to perform the actions described above.
To split the window into different panes, the Split Pane layout pattern is employed. The structure of the pattern is outlined in Fig. 11 (a). The first variable declaration/assignment pair allows determining the orientation of the single panes. The second pair allows setting the size of each single pane. The instance may contain an arbitrary number of such panes. Fig. 11 (b) shows two possible instances of the pattern. The instance that has been instantiated to split the content of the Assemble Maintenance Jobs window is similar to the instance on the left-hand side of Fig. 11 (b).
Fig. 11. "Split Pane Pattern" (a) and instances (b)
The Table presentation pattern is employed to fill the left pane provided by the Split Pane Pattern instance. It may be used when multiple records of similar structure have to be listed in a user interface. In the context of the maintenance support system it is applied to display the maintenance job information.
In order to fill the right pane provided by the Split Pane Pattern instance, the Button Group Pattern is instantiated. The pattern provides a group of buttons that may be employed to access related functionality. During instantiation of the pattern, the number, orientation and labeling of the buttons can be determined. Fig. 12 shows the resulting content of the Assemble Maintenance Jobs window.
support for the pattern application process. Furthermore, the User Interface Modeling Pattern Language was introduced, which contains the patterns that can be applied to the models. Finally, the capability of the derived pattern-driven model-based approach was demonstrated with a case study of developing the user interface of a Maintenance Support System application.
For the further development of the proposed pattern-driven model-based user interface development approach, additional patterns have to be specified in order to provide the designer with a sufficient set of available patterns for each design situation. Additionally, it has to be examined how the application of patterns on one modeling level influences the application of patterns on other modeling levels. For instance, the application of the Multi Value Input Form Pattern at the task level tends to imply the Form Pattern on the presentation level. Prospective patterns could integrate both pattern solutions. Eventually, a way to identify already applied patterns in models in the context of reengineering has to be examined.
References
1. Alexander, C.: The Timeless Way of Building. Oxford University Press, New York (1979)
2. Alexander, C., Ishikawa, S., Silverstein, M., Jacobson, M., Fiksdahl-King, I., Angel, S.: A
Pattern Language. Oxford University Press, New York (1977)
3. Borchers, J., Thomas, J.: Patterns: What's In It For HCI? In: Proceedings of Conference on Human Factors in Computing (CHI) 2001, Seattle (2001)
4. Breedvelt, I., Paternò, F., Severiins, C.: Reusable Structures in Task Models. In: Proceedings of Design, Specification, Verification of Interactive Systems 1997, Springer, Heidelberg (1997)
5. Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L., Vanderdonckt, J.: A Unifying Reference Framework for Multi-Target User Interfaces. Interacting with Computers 15(3), 289–308 (2003)
6. da Silva, P.: User Interface Declarative Models and Development Environments: A Survey. In: Palanque, P., Paternò, F. (eds.) DSV-IS 2000. LNCS, vol. 1946, pp. 207–226. Springer, Heidelberg (2001)
7. Eclipse: An open development platform. Internet resource, accessed January 2007, http://www.eclipse.org
8. Fincher, S.: CHI 2003 Workshop Report - Perspectives on HCI Patterns: Concepts and Tools (introducing PLML). Interfaces 56, 26–28 (2003)
9. Limbourg, Q., Vanderdonckt, J., Michotte, B., Laurent, B., Murielle, F., Trevisan, D.:
USIXML: A User Interface Description Language for Context-Sensitive User Interfaces.
In: Proceedings of ACM AVI 2004 Workshop Developing User Interfaces with XML:
Advances on User Interface Description Languages, Gallipoli (2004)
10. Norman, D.A., Draper, S.W. (eds.): User Centered System Design: New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates, Hillsdale, New Jersey (1986)
11. Paternò, F.: Model-based design and evaluation of interactive applications. Springer, Berlin (2000)
12. Paternò, F., Santoro, C.: One Model, Many Interfaces. In: Proceedings of CADUI 2002, Valenciennes, France (2002)
13. Puerta, A., Eisenstein, J.: Towards a General Computational Framework for Model-Based Interface Development Systems. In: Proceedings of IUI 1999: International Conference on Intelligent User Interfaces, Los Angeles (1999)
Towards Activity Representations for Describing Task Dynamics
1 Introduction
Task analysis and task modelling are well-known techniques in HCI. They are mainly used for designing and evaluating user interfaces. Basically, a task is considered as "an activity undertaken by one or more agents to achieve a certain change of state in a given domain". It is assumed that task knowledge is "represented in a person's memory... which is assumed to be activated during task execution" [20]. It is furthermore assumed that the underlying mental activity of work can be elaborated, analysed, and represented as cognitive task models. The comparison in [24] reveals that most existing task analysis approaches like HTA (Hierarchical Task Analysis, [2]), GOMS (Goals, Operators, Methods, Selection rules, [6]), TKS (Task Knowledge Structures, [20]), and CTT (ConcurTaskTrees, [27]) characterise tasks in terms of goals, actions, operations, task domain objects, roles etc. Although there are differences in the use of basic concepts and the level of detail, task structures are decomposed hierarchically and temporal dependencies between sub-tasks are described.
levels of abstraction. However, one activity representation reflects not only one single task but a state of merging of multiple tasks at a certain level of detail.
Humans never develop complete plans containing all possible alternatives of how to achieve a certain goal, but elaborate them to a great deal on demand. Plans are seen as means that people use to (try to) direct their behaviour. However, human actions imply constant refining of plans as well as re-planning and on-the-fly planning activities. We believe that this is supported by maintaining and operating on activity representations with different grades of stability. In this paper, a description of models is assumed which is grounded in existing approaches. We still focus on goal-oriented or intentional behaviour which is controlled by feedback. However, by proposing activity representations as an open system we emphasise the interplay between mental and physical actions that enables humans to adapt to unforeseen changes of the environment (to be "in harmony"). We focus on the following aspects of multiple and collaborative tasks: task redefinition (Sect. 2.2), task grouping and polymotivated actions (Sect. 2.3), activity spaces (Sect. 2.4), and goal elaboration and abstraction (Sect. 2.5). We further explore the interplay between externalised and internalised task descriptions. We discuss the higher-order property of activity representations and the interplay between habits and learning (Sect. 3). Finally, we show how an enriched task understanding can influence the understanding of interaction design (Sect. 4).
domains (see Fig. 2a). They consider tasks as means by which the work system changes the application domain. Goals are "desired future states of the application domain that the work system should achieve by the tasks it carries out" (in [11]).
Fig. 2. a) General model of work (in [11]), b) Task redefinition according to [19]
In this sense, task models like those in Fig. 1 are external representations of normative task knowledge. This is not a problem as such. Human behaviour is determined by division of labour, existing artifacts and norms. It becomes problematic if norms are drawn from dogmatic visions of life. One can use a task description to convey domain knowledge. However, one can also use it to gain control over others by imposing one's orders on them (e.g. [33]). But this is a general problem with any artifact. Humans are always responsible for what they create and how they use it. In our example, we could imagine that an assistant whom we call Paul got the task descriptions in Fig. 1 from his professor. Now he has to internalise these assignments to be able to accomplish his work. This is an active process called task redefinition.
In the example, let us assume that the following points reflect Paul's understanding of and attitudes concerning the task of supervising projects.
- If the first version of the SRS (Software Requirements Specification) of a project group is okay, they don't need to supply a revised version.
- Project goals cannot fully be explained at the beginning. They have to be developed over time. Their elaboration depends strongly on available tools and skills.
- Project groups have to solve their problems by themselves.
The model in Fig. 3 describes Paul's redefinition as a snapshot. It does not explain why and how Paul developed his task understanding. Furthermore, one can debate whether it is an expressive description of the above-mentioned points, but it is one. And it illustrates the following points.
- Temporal relations are modified. For example, sub-task P23 is optional, P1 is iterative and is performed at any time before P3.
- Sub-tasks are discarded (e.g. P5).
- Sub-tasks are refined (e.g. P1).
- The task hierarchy is re-structured. For example, P4 is now a sub-task of P1.
Now imagine we ask Paul why he doesn't help students who have trouble working together. He answers: "I know the professor expects me to do that but it makes me sick to mediate between people." Note that although Paul acts upon his redefined model, the underlying assignment is still there and influences his acting. He might feel a tension between it and his redefinition, caused by the image of himself and the belief that he has to fulfill his professor's assignments.
Tasks have to be accomplished by actions of individuals who, typically, work
in several domains and groups. Their collaboration is, among other things,
associated with properties like completeness or consistency (see e.g. [8]). Even if complete and consistent descriptions were possible, this would not necessarily be useful. For example, the absence of gaps in plans and a too detailed description of action sequences can interfere with the ability of a person or a group to cope with interruptions. Second, an activity representation at one level of abstraction describes fragments of knowledge about several tasks and how to interleave or merge them in a hopefully effective, efficient, and sustainable way.
In consequence, the description of a single task is spread over several representations. We suggest that activity representations at a lower level of abstraction are more ephemeral. They help people to organise their day-to-day activities. Note that they are still explicit representations that people hold when planning and executing tasks. Certainly, low-level activities can have patterns that are very persistent, like habits or proceduralised action sequences. The creation of lower-level models is supported by more stable, higher-level representations reflecting the recognition of recurrent structures in the world [3] (goals, values, beliefs, but also activity rhythms as mentioned in [36], and domain knowledge in general).
Fig. 5. Prof. Smith and Paul: Activity representations at different levels of abstraction
Let us assume that Paul realised this Monday morning, while reviewing the SRS, that the students didn't really understand statecharts. Although he has prepared a tutorial about class diagrams (point 9 in Fig. 5), he decides to discuss statecharts again. He puts away the paper from yesterday and starts to read a paper about statecharts & task modelling instead. Maybe he can also find some examples for his tutorial this afternoon... This is a typical example of interrupting the actual activity and of re-planning as a response to unexpected changes in the environment. Here, Paul's assumptions about the skills of his students have changed. (Note that the reading of the statecharts paper is also polymotivated: for literature research and for preparing the tutorial.)
Re-planning and on-the-fly planning are supported by the concept of activity representations rather than by single task models. First, it is easier to give up actual plans, and then to use more stable representations to create new plans or modify actual ones. Second, it allows inconsistencies between representations (e.g., point 2 in Fig. 5). This supports an acting which is guided but not fully controlled by norms. Third, it is easier to add (or remove) an activity representation. It can reflect a more fine-grained interleaving of multiple tasks (elaboration), as necessary e.g. to cope with interruptions. It can also abstract from non-relevant aspects with respect to a single task.
One of the reviewers of a previous attempt to explain our task modelling ideas criticised that this approach is "based on the perspective that the interaction is completely structured and structurable". Of course, we do not believe that human acting is completely structured and structurable (a description of their actions probably is, somehow). However, people plan and reflect on their behaviour (anticipatory reflection). They think in and by action. Mental representations are a resource for, but also a result of, human acting. By doing actions again and again, habits are developed and mental models are fine-tuned.
The authors of [15] criticise that task models consider objects of a task environment always as second class. However, task activities are not only centered around the creation of artifacts; they can only be accomplished by interacting with them. The role of triggers in timing and pacing a task is, for example, well explored in [15]. In the literature, a distinction is often made between physical artifacts, which are important in the sequencing, triggering and closure of tasks, and cognitive artifacts as "physical objects made by humans for the purpose of aiding, enhancing, or improving cognition" (Spillers in [36]). Kirsh coined the term activity space to emphasise that not the environment itself is important but how people deliberately alter it according to their goals (in [32]).
The question of when things in their environment really become artifacts for humans remains open. In our example, it is probably not only the fact that this paper about statecharts & task modelling is lying on his desk that lets Paul re-plan this Monday in the way described in Sect. 2.3. There must be internal counterparts to such external clues that let them work as artifacts. Paul must have internalised the assignment of reading research papers. The concept of functional organs in activity theory might give an explanation of this phenomenon. They are created by individuals through the combination of both internal and external resources. "Functional organs combine natural human capabilities with artifacts to allow the individual to attain goals that could not be attained otherwise" [22]. Human eyes in combination with eyeglasses are an example.
Fig. 6. Prof. Smith reduces the cost of his mental operations by task delegation
of the environment, in particular of one's own and other people's habits. Prof. Smith might know that he usually forgets the time and that his secretary as a reminder is a safer trigger to interrupt his morning work and to leave for the lecture. He might also know that his assistants are more engaged in supervising projects if they can propose some themes (see Fig. 4). Or that most students (in his culture) only do their homework if they get some points. But of course everything changes, and so do human habits. Maybe lists with points and marks for homework will not be necessary some day ;) To summarise, there is a constant learning and a constant adaptation of mental models to actual situations.
Fig. 7. Prof. Smith: Two elaboration steps to shape the goal understanding
to the environment, see Sect. 2.4). On the one hand, this may result in polymotivated actions, and then in routines or habits as recurrent, often unconscious patterns of behaviour which are honed in such a way that "the most minimal of actions [often shared between two or more people] has a wealth of significance and well understood mutual accountabilities" [34]. On the other hand, it may result in more stable goals and values as well as in deeper domain knowledge.
In the last section, the idea of activity representations was introduced and integrated into existing work. It was argued that humans develop activity representations by collaborating and, generally, by acting in the world. Deep knowledge (fine tuning of models) can only be acquired by doing actions over and over again, by reflecting on processes and results, by empathising with collaborators and so on. It was also argued that tasks belonging to certain work systems and application domains (Fig. 2) are typically represented by several activity representations at different levels of abstraction. Higher-level representations rather reflect single tasks. They are more stable and are developed over a longer period of time. Lower-level, more ephemeral representations rather describe an interleaving of fragments of multiple tasks which seems to fit a concrete situation.
To summarize, activity representations are seen as mental configurations humans develop in the hope that they evoke adequate mental or physical responses when confronted with certain cues in a situation. This argumentation is in line with Rorty, who says that knowledge "[is not] a matter of getting reality right, but rather... a matter of acquiring habits of action for coping with reality" (cited in [22]). It is also in line with Hacker, who speaks of "Wissensinseln" (islands of knowledge) [18]. Activity representations are such islands. They are constantly evolving, neither complete nor consistent. On the contrary, inconsistencies between different representations are seen as important in order to cope with actual situations. Humans constantly make exceptions to rules.
In [26], Naur points out that "[d]escribing people in terms of knowledge or mental models has the consequence that the dynamic of thought, the way thoughts develop, tend to be ignored. In particular the all-pervading importance of habit on all human activity is lost from sight." This view on thoughts as results of habitual thought processes might be better supported by activity representations. Though they are still mental models, the related higher-order approach does not only emphasise task structures as such (as in traditional task analysis) but also their development and use. Generally, a combination of an activity representation and a cue triggers a certain habit. However, activity representations do not only trigger physical behaviour in combination with physical cues. They can also serve as cues themselves for other representations to trigger a certain mental behaviour. They may guide elaboration steps like sequencing, refinement, merging to create polymotivated actions, interleaving to support task grouping in multiple tasks, or creation of assignments to support task delegation. They may guide abstraction activities like selection, generalisation (e.g. of temporal
a user. We think that activity representations can help to elaborate and refine this approach. In particular, the idea of goal elaboration and abstraction and of polymotivated actions could serve as a foundation for developing tools to help users organize their activities.
Acknowledgements. The first author would like to thank Jorgen Dahlke for discussing task grouping and the participants of RE course 23054 2006 for inspiring the example. We are particularly grateful to an anonymous reviewer for their feedback.
References
1. Anderson, B.: Work, Ethnography and System Design. In: Kent, A., Williams, J.G.
(eds.) The Encyclopedia of Microcomputers, vol. 20, Marcel Dekker (1997)
2. Annett, J., Duncan, K.D.: Task analysis and training design. Occupational Psy-
chology 41 (1967)
² In [17], Fallman points out that sketching is not only useful for communicating with other designers and stakeholders. It is "not simply an externalization of ideas already in the designer's mind, but on the contrary a way of shaping new ideas". Sketching supports the idea of design as a dialogue, a reflective conversation.
26. Naur, P.: CHI and human thinking. In: Proceedings of NordiCHI 2000 (2000)
27. Paternò, F., Mancini, C., Meniconi, S.: ConcurTaskTrees: A notation for specifying task models. In: INTERACT 1997 (1997)
28. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Heidelberg (2000)
29. Paternò, F., Santoro, C.: One Model, Many Interfaces. In: Proc. of the Fourth International Conference on Computer-Aided Design of User Interfaces, Kluwer Academic Publishers, Dordrecht (2002)
30. Payne, S.J.: Users' Mental Models: The Very Ideas. In: [7]
31. Randall, D., Hughes, J., Shapiro, D.: Steps towards a partnership: Ethnography and system design. In: Jirotka, M., Goguen, J. (eds.) Requirements Engineering: Social and Technical Issues, Academic Press, San Diego, CA (1994)
32. Spillers, F.: Task Analysis Through Cognitive Analysis. In: [10]
33. Suchman, L.: Do categories have politics? The language/action perspective reconsidered. Computer-Supported Cooperative Work (CSCW) 2 (1994)
34. Tolmie, P., Pycock, J., Diggins, T., MacLean, A., Karsenty, A.: Unremarkable computing. In: Proceedings of CHI 2002 (2002)
35. Wild, P.J., Johnson, P., Johnson, H.: Understanding Task Grouping Strategies. In: Proc. of HCI 2003: Designing for Society, pp. 3–20. Springer, Heidelberg (2003)
36. Workshop on the Temporal Aspects of Tasks. HCI 2003, http://www.cs.bath.ac.uk/hci/TICKS/temporalaspects.html
A Framework for Light-Weight Composition and
Management of Ad-Hoc Business Processes
1 Introduction
The amount of unstructured, knowledge-intensive processes in organizations is increasing. Conventional workflows do not provide sufficient flexibility to reflect the nature of such processes and to provide adequate support for their optimization [3], [18]. Therefore the need arises to elaborate more flexible approaches, able to represent and manage underspecified, highly dynamic user tasks. This is accompanied by an increasing demand to facilitate effective Knowledge Management (KM) in organizations, which could increase the efficiency of business users engaged in non-routine tasks and enable them to better shape their everyday work through the application of shared best practices [12], [20].
2 Related Work
Software support for unstructured, knowledge-intensive processes has been the focus of extensive research in recent years. The reuse of emerging task hierarchies within a global enterprise infrastructure is often described as one of the major possibilities for supporting such processes. Riss et al. [17] discuss the challenges for next-generation business process management, suggesting the generation, recognition and application of reusable task patterns and process patterns as an alternative to static workflows. The task pattern technique is further considered by Grebner et al. [9], who describe basic directions for the utilization of task-based approaches to support users engaged in intensive, unstructured knowledge work. Within the presented paper, a task is generally referred to as a self-contained unit of work, which can be refined through an arbitrary number of sub-tasks and aims to achieve a certain business goal. Thereby the focus is set on high-level tasks, representing single steps in ad-hoc business processes, and the notion of task patterns presented in the above studies is used. In the literature, task patterns are also discussed with regard to reusable structures for task models in the field of interactive systems design [8], [14], [15]. However, such observations focus on low-level interactive activities, like e.g. searching, browsing or providing generic system input, and are beyond the scope of the presented paper.
A comprehensive approach addressing the gap between completely ad-hoc processes, which are in the focus of Computer Supported Cooperative Work (CSCW), and rigid, predefined business processes, which are well supported by conventional workflow solutions, is provided by Bernstein [7]. This approach provides a contextual basis for situated improvisation by enabling the delivery of process models, process fragments, and past cases for tasks, and by providing shared, distributed-accessible, hierarchical to-do lists, where different process participants can access and enrich task resources and information. An extended state-of-the-art study in the area of flexible workflows and task management, and a further approach for integrating ad-hoc and routine work, is presented by Jorgensen [13]. He reveals major issues concerning business process flexibility and how it can be facilitated through interactive process models.
3 Solution Approach
This study is based on intra-organizational knowledge sources accumulating customer requirements, as well as on dedicated site visits and interviews at companies representing predominantly small and medium enterprises from various industries (automotive, software, textile), and builds on state-of-the-art research in the areas of task management, flexible workflows and CSCW.
Unstructured, knowledge-intensive processes are generally executed through "situated actions" [4]. Within this paper we assume that tasks can be executed, cancelled or completed without meeting any pre- or post-conditions. Thereby the process flow is determined solely by the sequence of task execution, which is decided by the end users according to their current work situation. This raises various flexibility and KM issues related to the overall process structure and context information. Van der Aalst et al. [1] discuss business process flexibility by stating that "Workflows are case-based, i.e., every piece of work is executed for a specific case: an order, an insurance claim, a tax declaration, etc. The objective of a workflow management system is to handle these cases (by executing tasks) as efficiently and effectively as possible." The authors further describe that a task is executed through specific resources, which might be e.g. a tool or an employee, and suggest three basic dimensions of a workflow: the case, the process and the resource dimension. Human activities thereby comprise cases, which are handled with corresponding tasks using the appropriate resources. The unpredictability of human activities hence implies deviations in case handling and dynamic adaptations of tasks and resources. The framework presented herewith focuses on intrinsic flexibility and KM aspects considering these basic issues.
As a concrete solution approach, the framework suggests tracking user actions, which are executed on the personal workspace level in a common user working environment, and unobtrusively (implicitly) replicating task data to central enterprise repositories. This process is further referred to as externalization. Externalized task structures and the accompanying data of different users are integrated in the repositories into overall enterprise processes. Furthermore, logically unconnected tasks from different processes and users can be associated in the central repositories based on different criteria to provide advanced KM. Concretely, the CTM prototype uses Microsoft Outlook as an office integration environment, exploiting the fact that tasks and email are provided in the same office application. Web services are used to track user actions, executed in the CTM Outlook Add-In, on a CTM back-end application, which is based on the Java Platform Enterprise Edition and deployed on a JBoss server. The tracked data is persisted in a MySQL database (DB), which provides the repository functionality for the basic framework entities. The CTM prototype is not explicitly discussed in this paper, as the focus here is on generic flexibility and KM concepts for supporting dynamic resource and task adaptations and for handling case deviations in unstructured, knowledge-intensive processes. These concepts are implemented through the basic framework entities described in the following sections. Certain CTM functionalities are mentioned as clarifications of the discussed concepts.
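To make the externalization step more concrete, the following minimal Java sketch shows how tracked actions might be persisted on the back-end side. All names (TrackedTaskAction, TaskRepository, externalize) are illustrative assumptions and not the actual CTM API.

    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical record of a user action on a task, as it might be reported
    // by the office integration environment to the back-end via a web service.
    record TrackedTaskAction(String taskId, String userEmail,
                             String actionType, Instant timestamp) {}

    // Minimal in-memory stand-in for the central task repository.
    class TaskRepository {
        private final List<TrackedTaskAction> log = new ArrayList<>();

        // Unobtrusive replication: every tracked action is persisted, so that
        // overall enterprise processes can later be reconstructed from the log.
        void externalize(TrackedTaskAction action) {
            log.add(action);
        }

        List<TrackedTaskAction> actionsForTask(String taskId) {
            return log.stream().filter(a -> a.taskId().equals(taskId)).toList();
        }
    }

In the actual prototype this role is played by the MySQL repository behind the web-service layer; the sketch only fixes the idea of an append-only action log on which the repository functionality is built.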
4 Artifacts
An artifact refers to a file, e.g. a text document, a spreadsheet or an executable file,
which is associated (attached) to a task. Artifacts generally represent resources (cf.
Sec. 3), which are used or generated during task execution. The presented framework
provides three basic types of task artifacts, which are described in Fig. 1. The depicted entities are drawn identically in all figures in the paper.
An EMA is an artifact the content of which is managed by a user or a user group outside of the scope of a user task. An EMA can be, e.g., a document that is being elaborated by multiple users as part of a concrete process. Collaborative authoring techniques are known in the literature (cf. [11]) and are not discussed in this paper. Another type of EMA could be a document that is provided as a template by a company department and is used in various processes throughout an enterprise. Such a template could be, e.g., an employment contract template provided by a Human Resources (HR) department. The user or user group managing the artifact content, e.g. HR employee(s), is referred to as external artifact manager(s) (see Fig. 1). The latter can edit the artifact content in their workspaces and submit a consolidated EMA version to a globally accessible artifact repository. This repository can be, e.g., file-system or DB based, and should be able to maintain the artifact history. References to an EMA can be added in user tasks. Within the presented framework an EMA reference in a task stores a unique identifier and a version number of the EMA. Changes to an EMA increase its version on the repository and trigger notifications to all referring tasks. An owner of such a task can thereby switch the reference to the updated version or preserve the current reference.
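One conceivable reading of this versioning and notification scheme is sketched below in Java; the class names (EmaRegistry, EmaReference) are invented for illustration and make no claim to match the CTM implementation.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.BiConsumer;

    // Sketch of EMA version handling: each submit increases the version and
    // notifies registered tasks, whose owners may switch or keep their version.
    class EmaRegistry {
        private final Map<String, Integer> currentVersion = new HashMap<>();
        private final Map<String, BiConsumer<String, Integer>> taskListeners = new HashMap<>();

        void registerTask(String taskId, BiConsumer<String, Integer> onNewVersion) {
            taskListeners.put(taskId, onNewVersion);
        }

        void submitNewVersion(String emaId) {
            int version = currentVersion.merge(emaId, 1, Integer::sum);
            // Simplified: notifies all registered tasks; a real implementation
            // would notify only the tasks actually referring to this EMA.
            taskListeners.values().forEach(notify -> notify.accept(emaId, version));
        }
    }

    // An EMA reference in a task: identifier plus the version the task points to.
    class EmaReference {
        final String emaId;
        int version;                                   // kept until the owner decides
        EmaReference(String emaId, int version) { this.emaId = emaId; this.version = version; }
        void switchTo(int newVersion) { version = newVersion; }  // or simply not called
    }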
In the CTM prototype, the artifact repository is realized through an artifact table in the MySQL DB, containing paths to the actual artifact files on the server file system. Users are able to view different artifacts and artifact versions and to submit an EMA through an Artifact Repository Explorer component, which is part of the CTM Outlook Add-In. This component also enables setting references to an EMA in a user-defined task and handling EMA references upon notifications.
Fig. 1. User-defined tasks (gray ellipses with a black outline) reside with their sub task hierar-
chies in the workspaces (top layer denoted on the right) of users (U1 and U2). A group of exter-
nal artifact managers (G1) edit an Externally-Managed Artifact (gray circle with a black, dot-
ted outline), in the following referred to as EMA, in their local workspaces and submit it to a
central artifact repository (bottom layer denoted on the right). An EMA reference (a white cir-
cle with a black, dotted outline) can be set in a user-defined task (A2). An artifact, emerging as a
common user attachment to a task is either explicitly protected as Locally-Managed, Non-
Externalized Artifact (black circle in A1.1) or implicitly replicated to a central artifact repository
as Externalized Artifact (a gray circle with a black outline), in the following referred to as EA.
The task preserves a local EA representation (white circles with a black outline in A1.1 and B2),
comprising a local copy of the attachment and a reference to the EA in the repository.
Externalization happens in an unobtrusive manner, without additional user effort, by tracking user actions on tasks in the local workspace and replicating task attachments to a central artifact repository. Tasks are themselves replicated to a task repository (cf. Sec. 6). During artifact externalization a single artifact copy, identified in a unique manner, should be created in the artifact repository for artifacts with the same name and the same content. As a consequence, a one-to-many relation can be created from a single EA to the multiple tasks which use it. In Fig. 1, task A1.1 and task B2 use the same EA in two independent processes. Furthermore, queries with different criteria can be executed on the central repositories to retrieve similar artifacts and the referencing tasks. Externalization hence enables unobtrusive detection of recurring tasks and recognition of global optimization possibilities based on the usage of similar resources in dispersed, independent processes.
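One conceivable way to guarantee a single EA per name/content combination is to key the repository on a digest over both, as in the following illustrative Java sketch (the paper does not prescribe this mechanism):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HashMap;
    import java.util.HexFormat;
    import java.util.Map;

    // One EA per (name, content) pair: identical attachments from different
    // tasks resolve to the same repository entry, yielding the one-to-many
    // relation from a single EA to all tasks that use it.
    class EaRepository {
        private final Map<String, String> eaIdsByDigest = new HashMap<>();

        String externalize(String fileName, byte[] content) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(fileName.getBytes(StandardCharsets.UTF_8));
            String digest = HexFormat.of().formatHex(md.digest(content));
            // Reuse the existing EA if this exact artifact was already externalized.
            return eaIdsByDigest.computeIfAbsent(digest, d -> "EA-" + d.substring(0, 12));
        }
    }

With such a design, the queries mentioned above reduce to lookups over the digest-to-task relation.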
The second consequence of task externalization is that, in case of extraction of a Task Pattern (TP) (cf. Sec. 7) from a user-defined task containing an artifact, the resulting TP document can contain only a short reference to the EA in the repository. This avoids any explicit encoding of binary content, which could result in an increased TP document size, and further provides a system-dependent representation of artifacts within reusable task structures. Consequently, artifacts will not be provided outside of the system context and the appropriate artifact access policy. When a TP is reapplied for reuse, artifact content can be retrieved upon request from the central artifact repository, based on the unique identifier and according to the repository access policy.
The access policy for artifacts in the artifact repository might not satisfy the privacy needs of end users in different business domains and occupation areas. The framework hence provides the possibility to store artifacts in a local, non-externalized manner (see task A1.1 in Fig. 1). Tasks using this kind of representation, however, do not benefit from the unobtrusive KM and data protection enabled through EA and the extended flexibility provided through EMA.
5 Human Actors
The framework uses a light-weight representation of human actors associated with tasks. In related literature human actors are considered resources for tasks (cf. [1], [13]). The representation of human actors within the framework mainly serves to store knowledge about the person who has expertise related to a given task. This knowledge is important for unstructured, ad-hoc work. Ribak et al. state, for example, that employees' key asset is their network of contacts and the people they can approach for advice or help [16].
To avoid the need of introducing domain-specific roles, which may harm the generic character of the framework, two basic roles for human actors are currently provided: owner and recipient. The owner of a task is the person whose to-do list contains the task, i.e. who is or was responsible for the task execution. If a task owner decides to delegate a task to another person, recipient information is additionally stored in the owner's task. The recipient is a person who has received a task through a delegation from another system user. We generally suggest that delegations are handled by creating a copy of the requester task at the recipient's site. The recipient task generated through this holds the same context information and artifacts as the requester task and can be further adapted by the recipient. A requester task hence contains two human actor representations: an owner, referring to the requester, and a recipient, describing the recipient of the task delegation. The recipient task holds a single human actor: an owner referring to the recipient. On the lowest level, human actors in both roles are represented through an email address and a human-readable name. In the current CTM implementation such representations are stored within the user-defined tasks, where an owner is always set when a task is inserted in a personal to-do list and a recipient is set when a delegation is triggered. User data is also replicated to a central user repository during task tracking. In CTM this repository is a user DB table.
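A minimal Java sketch of the two roles and of delegation by copying follows; Actor, Task and delegateTo are invented names used only to restate the mechanism, not the CTM data model.

    // Hypothetical minimal representation of the two actor roles and of task
    // delegation by copying: the requester task records owner and recipient,
    // while the generated recipient task holds a single owner.
    record Actor(String email, String name) {}

    class Task {
        final String subject;
        Actor owner;              // set when the task enters a to-do list
        Actor recipient;          // set only on the requester side of a delegation

        Task(String subject, Actor owner) { this.subject = subject; this.owner = owner; }

        // Delegation creates a copy of the requester task at the recipient's site,
        // carrying over the same context; the copy can then be adapted freely.
        Task delegateTo(Actor newOwner) {
            this.recipient = newOwner;
            return new Task(this.subject, newOwner);
        }
    }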
6 Tasks
Within the presented framework, enterprise processes emerge as dynamic, user-defined task hierarchies, where tasks are represented through system objects described through attributes such as subject, description, owner, due date and status. Artifacts and human actors are associated with tasks as described in the above sections. The framework enables the association of tasks according to the collaborative flow in human-centric processes, as well as the association of tasks of logically independent processes for KM purposes.
Fig. 2. Individual task hierarchies of different users (U1 - U4) are contained in the users' personal
workspaces (dotted-line areas). In collaborative processes tasks are distributed between users
through delegations (dotted line arrows), which enable interconnection of personal task hierar-
chies to an overall Task Delegation Graph (TDG).
Task delegations thus interconnect personal task hierarchies to a Task Delegation Graph (TDG), which emerges through the evolution of user-defined task hierarchies beyond personal workspaces (see Fig. 2). The purpose of a TDG is to integrate individual task hierarchies into a complete, end-to-end process structure. Within the presented framework, a TDG is generated on a shared enterprise repository by tracking user actions which are executed on tasks in the personal user workspace.
The individual task hierarchies of multiple users are integrated through tracking of
email exchange for task delegation. As a consequence, task hierarchies, defined by end
users in the personal workspace, are replicated with all context, artifact and human
actor information and connected to overall enterprise processes in a central task re-
pository. The structure of these processes is determined by the adaptations of the
individual task hierarchies (to-do lists) within the local workspace of each process
participant, and by the collaborative flow for task delegation. No further process
modeling or definition of rules is required.
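Conceptually, the tracked data can be held as a graph with two kinds of edges, as in this illustrative Java sketch (the real repository is the relational DB described above):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of a Task Delegation Graph: nodes are replicated tasks, sub-task
    // edges come from the local hierarchies, and delegation edges from tracked
    // email exchange between users. Recording both suffices to reconstruct an
    // end-to-end process structure without any explicit process modeling.
    class TaskDelegationGraph {
        private final Map<String, List<String>> children = new HashMap<>();    // sub-task edges
        private final Map<String, List<String>> delegations = new HashMap<>(); // cross-user edges

        void addSubTask(String parentTaskId, String childTaskId) {
            children.computeIfAbsent(parentTaskId, k -> new ArrayList<>()).add(childTaskId);
        }

        void addDelegation(String requesterTaskId, String recipientTaskId) {
            delegations.computeIfAbsent(requesterTaskId, k -> new ArrayList<>()).add(recipientTaskId);
        }
    }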
While a TDG connects task hierarchies with respect to process flow, tasks may also be related in a process-independent manner for KM purposes. Such relations are enabled through EMT (see Fig. 3). Similarly to an EMA, an EMT is managed according to specific expertise by one or more users, in the following referred to as external task manager(s). While an EMA enables the reuse of resources, an EMT addresses the reuse of others' process knowledge for the elaboration of individual tasks. Two major types of EMT can be distinguished. The first type comprises tasks which are part of concrete processes. Referencing such a task in a task from another process results in cross-process references which allow peering of related (parallel) tasks. An EMT of the second type represents a recommendation of best practice. Such a task can be created, e.g., by a Quality Assurance (QA) department in a software company to describe routines which need to be executed by developers prior to code submissions. This task will represent a certain organizational policy and will need to be used by all developers for the organization of their personal tasks.
An EMT is generally provided in a shared task repository. The tracking of tasks used for the generation of the TDG has the consequence that all system users are implicitly external task managers of their own tasks. Therefore the task tracking repository is also an EMT repository. An EMT in the repository can contain further references to other EMT, which provides recurring task flexibility. An EMT can contain artifacts of all presented types (cf. Sec. 4); however, only EMA and EA will be externally accessible. Note that the artifacts contained in tasks in the local workspaces of users U1 and U2 in Fig. 3 also have references to artifacts in the artifact repository (see Fig. 2); these are not shown for simplicity.
When an EMT reference task is declared in a local user workspace, it may be synchronized with the repository to copy the complete EMT structure and context information locally. In Fig. 3 no subtasks are given for A2 and B2.1 for simplicity. When an EMT is updated or removed, notifications are sent to all owners of reference tasks. An owner can accordingly update a changed reference task, remove it, or release the EMT reference and preserve the currently used local copy. The latter operation corresponds to an apply-pattern operation (cf. Sec. 7) and will generate the corresponding ancestor/descendant references.
Fig. 3. An Externally-Managed Task (a gray ellipse with a black, dotted outline; see Q and R), in the following referred to as EMT, is defined and edited in the workspace(s) of one or more external task managers (G1 on the left and G2 on the right) and submitted for reuse in a central task repository (middle layer denoted on the right). Users, including external task managers, may reuse an EMT through an EMT reference task (a white ellipse with a black, dotted outline; see A2 and B2.1). A reference chain ends with an EMT without further references.
Fig. 4. A task P with a sub task hierarchy is created by user Ux and exported as a Task Pattern
(TP) in XML format. The task P from the TP is applied in the workspaces of users U1 and U2
respectively on tasks A and B. Thereby the complete content and structure of P is applied to
tasks A and B and replicated in the central task, artifact and user repositories. The replicated
structures and the artifact and user repositories are not explicitly represented for simplicity rea-
sons. The initial TP structure under task B has been changed by user U2. To enable tracing and
evaluation of such deviations ancestor/descendant relationships are set when the TP is applied.
The task repository is hence implicitly also a TP (case) repository. The current CTM implementation enables search, extraction and editing of TP in a Task Pattern Explorer/Editor component. A TP can be exported in XML format. The TP format represents the generic task model for the framework (cf. Sec. 7.2). All entities contained in the TP structure preserve their type: EMA and EA are represented through unique system identifiers and may not contain any binary content. An EMT can be included only through a unique task identifier, without explicit task structure and context information.
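The reference-only representation could look as follows; the element layout in this Java sketch is invented for illustration, the actual TP format being defined by the XML schema of Fig. 5.

    // Illustrative sketch: when a Task Pattern is exported, EA and EMA entries
    // are written as identifier references only, never as embedded binary data,
    // so the document stays small and artifacts remain under the repository
    // access policy.
    class TaskPatternExporter {
        String artifactElement(String kind, String artifactId, int version) {
            // e.g. <artifact type="EA" ref="EA-4f2a" version="3"/> (hypothetical layout)
            return "<artifact type=\"" + kind + "\" ref=\"" + artifactId
                 + "\" version=\"" + version + "\"/>";
        }
    }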
Changes in reusable best practice might often be required to adapt it to the current work situation. In Fig. 4, task A has preserved the same structure as P. However, user U2 has changed the initial TP structure of task B. The framework provides advanced KM techniques which help to evaluate deviating solutions for similar cases. This is accomplished through ancestor/descendant relationships, which emerge when a TP is applied.
Ancestor references are set iteratively for all tasks in a task hierarchy. In
Fig. 4 an ancestor reference to task P (P) is set in tasks A and B, an ancestor reference
Fig. 5. The XML schema definition provides the format for Task Pattern (TP) description documents and gives thereby an implementation and an overview of the task model used in the framework. Each complex type is described with its elements, which are given with their name, followed by an occurrence indicator ([1..1] required; [0..1] optional; [0..*] zero or more; [1..*] one or more) and a content type. While simple types like String and base64Binary provide implementation specifics, the presented complex types depict the basic model building blocks, which refer to the framework entities described in the previous sections: artifact, user (human actor) and task.
mining capabilities, and enables end users to proactively tailor underspecified busi-
ness processes.
The next steps in our research will include the evaluation of the framework through
the CTM prototype by conducting user tests with real end users from partner compa-
nies. The prototype and the generic framework may then be extended according to the
received user feedback and its detailed analysis.
Acknowledgments. The work this paper is based on was financially supported by the German Federal Ministry of Education and Research (BMBF, project
EUDISMES, number 01 IS E03 C). We thank all participants in the customer studies
for their time and cooperation.
References
1. Aalst, W.M.P.v.d., Basten, T., Verbeek, H.M.W., Verkoulen, P.A.C., Verhoeve, M.: Adaptive Workflow: On the interplay between flexibility and support. In: Proceedings of the First International Conference on Enterprise Information Systems, Setubal, Portugal, pp. 353–360 (1999)
2. Aalst, W.M.P.v.d., Weske, M., Grünbauer, D.: Case Handling: A New Paradigm for Business Process Support. Data and Knowledge Engineering 53(2), 129–162 (2005)
3. Abbott, K.R., Sarin, S.K.: Experiences with Workflow Management: Issues for the Next Generation. In: Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 113–120. ACM Press, New York (1994)
4. Bardram, J.E.: Plans as Situated Action: An Activity Theory Approach to Workflow Systems. In: Proceedings of the European Conference on Computer Supported Cooperative Work, Lancaster, UK, pp. 17–32 (1997)
5. Bellotti, V., Dalal, B., Good, N., Flynn, P., Bobrow, D.G., Ducheneaut, N.: What a To-Do: Studies of Task Management towards the Design of a Personal Task List Manager. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 735–742. ACM Press, New York (2004)
6. Bellotti, V., Ducheneaut, N., Howard, M., Smith, I.: Taking Email to Task: The Design and Evaluation of a Task Management Centered Email Tool. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Ft. Lauderdale, Florida, USA, pp. 345–352. ACM Press, New York (2003)
7. Bernstein, A.: How Can Cooperative Work Tools Support Dynamic Group Processes? Bridging the Specificity Frontier. In: Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 279–288. ACM Press, New York (2000)
8. Gaffar, A., Sinnig, D., Seffah, A., Forbrig, P.: Modeling patterns for task models. In: Proceedings of the 3rd Annual Conference on Task Models and Diagrams, pp. 99–104. ACM Press, New York (2004)
9. Grebner, O., Ong, E., Riss, U., Brunzel, M., Bernardi, A., Roth-Berghofer, T.: Task Management Model (last visited September 01, 2007), http://nepomuk.semanticdesktop.org/xwiki/bin/view/Main1/D3-1
10. Gruen, D., Rohall, S.L., Minassian, S., Kerr, B., Moody, P., Stachel, B., Wattenberg, M., Wilcox, E.: Lessons from the ReMail Prototypes. In: Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 152–161. ACM Press, New York (2004)
11. Holz, H., Rostanin, O., Dengel, A., Suzuki, T., Maeda, K., Kanasaki, K.: Task-based process know-how reuse and proactive information delivery in TaskNavigator. In: Proceedings of the 15th ACM International Conference on Information and Knowledge Management, pp. 522–531. ACM Press, New York (2006)
12. Jennex, M.E., Olfman, L., Addo, T.B.A.: The Need for an Organizational Knowledge Management Strategy. In: HICSS 2003. Proceedings of the 36th Annual Hawaii International Conference on System Sciences, p. 117.1. IEEE Computer Society, Washington (2003)
13. Jorgensen, H.D.: Interactive Process Models. Ph.D. Thesis, Norwegian University of Science and Technology, Trondheim, Norway (2004)
14. Palanque, P., Basnyat, S.: Task Patterns for Taking into Account in an Efficient and Systematic Way Both Standard and Abnormal User Behaviour. In: IFIP 13.5 Working Conference on Human Error, Safety and Systems Development, Toulouse, France, pp. 109–130 (2004)
15. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Heidelberg (2000)
16. Ribak, A., Jacovi, M., Soroka, V.: "Ask Before You Search": Peer Support and Community Building with ReachOut. In: Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 126–135. ACM Press, New York (2002)
17. Riss, U., Rickayzen, A., Maus, H., van der Aalst, W.M.P.: Challenges for Business Process and Task Management. Journal of Universal Knowledge Management 0(2), 77–100 (2005)
18. Schwarz, S., Abecker, A., Maus, H., Sintek, M.: Anforderungen an die Workflow-Unterstützung für wissensintensive Geschäftsprozesse. In: Proceedings of the 1st Conference for Professional Knowledge Management (WM 2001), Baden-Baden, Germany (2001)
19. Siu, N., Iverson, L., Tang, A.: Going with the Flow: Email Awareness and Task Management. In: Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 441–450. ACM Press, New York (2006)
20. Wiig, K.M.: People-focused knowledge management: How effective decision making leads to corporate success. Elsevier Butterworth-Heinemann (2004)
Model-Based User Interface Design in the Context of Workflow Models
1 Introduction
Enterprise Resource Planning (ERP) systems are off-the-shelf business applications providing a tightly integrated solution to organizations' information system needs [27]. ERP benefits include best-practice business processes, real-time access to information and shared practices across the entire enterprise. One important characteristic of ERP systems is that they are pre-built software packages designed to meet the general needs of a business sector instead of the unique requirements of a particular organization [1]. To be able to deliver such huge software packages, ERP vendors use different business process models in their overall description of the system to describe the supported processes and organizational structures together with the structure of data and objects [13]. The reference models are founded upon what the vendor considers to be industrial best practice, that is, the most efficient way the business processes should be structured [5]. SAP uses Event Process Chain (EPC) models to document the system's functionality [12], while Microsoft uses the Business Process Modeling Notation (BPMN) to describe the business domain. These are descriptive models documenting the existing software (in contrast to prescriptive models that are used as a specification of what to create) [15].
In this article we use models and information collected from a large company
developing ERP systems and show how prescriptive task models can be connected to descriptive workflow models. The company currently runs a project
where the ERP system's functionality is modeled using workflow modeling. The intention is to use the models as documentation in implementation projects. In addition, there is an interest in investigating how these models can be reused in other contexts. We want to show how they can take advantage of model-based user interface design (MBUID) to allow flexible role-centered composition of user interfaces in the context of the workflow models. Role-based access and portal solutions are considered the answer to the severe usability problems identified in ERP systems [7].
A challenge with role-based systems is how to keep the number of roles at a manageable level. When new functionality is added, should this result in the creation of a new role? A single person typically fulfills several roles, and the combination of roles users have differs among companies. Flexibility in creating user interfaces (UI) for various combinations of roles is therefore important. We will explore a systematic way to define what needs to be included in the UI for one particular user based on her participation in the workflow process. The workflow model defines what tasks need to be fulfilled and their possible ordering; hence the workflow model is suitable as a frame for creating task models. A task model typically focuses on modeling the work of an individual user.
A short introduction to task and workflow modeling is given in section 2, where we also discuss how MBUID and workflow models, by virtue of coming from different research traditions, have differences in concepts, focus and pragmatics. Our work takes advantage of existing modeling languages proven useful in one context, and proposes how they can be combined to add value in an industrial context. Section 3 describes relevant aspects of the ERP vendor organization and presents our approach through a practical example. In section 4 we explain how to make use of pattern structures to compose role-oriented user interfaces, so that the highly detailed, executable dialogue models can be wrapped into less detailed components that are easier to work with. We have discussed our approach with the user interface developers in the company and report some of their first-hand comments. Finally, in section 5 we conclude and give some notes on future work.
We will give a short introduction to task modeling and explain how task models relate to other models used in MBUID. Workflow modeling is then introduced before the relationship between task modeling and workflow modeling is discussed. Based on our discussion we argue for the choice of modeling languages used in the case study.
Task modeling is often used first in the analysis phase to understand and communicate the problem domain (resulting in a descriptive model), and later on as a prescriptive task model for the system to be designed (as, e.g., in the DUTCH method using GTA [33]). Examples of task modeling languages are Méthode Analytique de Description des Tâches (MAD) [26], Task Knowledge Structure (TKS) [11], GroupWare Task Analysis (GTA) [32] and ConcurTaskTrees (CTT) [19], which all support designers by hierarchically decomposing tasks, defining the objects manipulated and the role responsible for performing the task.
The vast number of task modeling notations results in semantic and syntactic differences, which are discussed by, e.g., [14] and [35]. Based on their analysis a uniform task model is created which includes concepts like task and goal hierarchies, operators that express temporal constraints between tasks, some role concept to deal with co-operative aspects, and objects with possible actions.
Task models are considered one of the viewpoints in the model-based community [20]. Viewpoints are related to both the abstraction level and the focus of the model: is the level of detail high, and is the focus on the task or on the UI? Models with different viewpoints are:
1. The task model and object model represent the highest level of abstraction and their focus is on users' goals, tasks and which objects are manipulated (the object model is often referred to as a domain model).
2. The second layer is the abstract user interface describing the structure and behaviour of the user interface [29].
3. The third level involves building a concrete user interface specification defining the platform-dependent look and feel of the interface.
4. The fourth level is the final user interface, which is the running interface implemented in a specific software environment.
Model-based user interface design (MBUID) processes often start with a task-related model that is evolved through an incremental approach into the final UI [4]. In each of the transformation phases the designer has the possibility to manually change the generated artefact, and the modification is preserved when regenerating the UI.
The concept of tasks is very similar to that of processes (in a workflow); the difference is mainly one of scope and focus. Processes typically relate directly to organizational goals, while tasks focus on the goals and actions of individual users playing a role. Hence, a task model may be seen as a refinement of a process model in the context of a specific user role [28].
Many workflow modeling languages have a formal semantics built on Petri nets [25]. A Petri net is a directed graph with a mathematical formalism facilitating visual modeling on the one hand and formal analysis, verification and validation of the model on the other. An example of such a language is Yet Another Workflow Language (YAWL) [31]. Informal workflow modeling languages include the Event-Process Chain (EPC) [12], the Action Port Model (APM) [2] and the Business Process Modeling Notation (BPMN) [8]. BPMN is defined by the Object Management Group and offers a rich notation for workflow modeling. The notation supports decomposition of processes into sub-processes and tasks. A task is an atomic activity and cannot be decomposed further. A task can usually be performed by an end-user and/or an application [8].
factors [21] and are used with the aim of increasing the usability of computerized systems. This naturally leads on to the second difference:

Difference in focus: In workflow models the focus is on how to reach organizational goals. In task modeling the focus is on the goals of individual users. It is important to be aware of the difference between organizational goals and individual goals and how they are related, as they might not be aligned [34].

Differences in concepts: Section 2 pointed to the mixture in concept definition between task models, and the same mixture is present if we consider concepts across task models and workflow models. As [21] point out, a concept defined in a task model can be used in a workflow model with a different meaning. This creates a pragmatic problem across modeling languages.

When a combination of workflow models and task models is considered, these differences must be taken into consideration. In the next section we choose which modeling languages we will use in our case study.
The User and Organizational Model has the individuals and their organizational relationships as its focus. The users are described using Personas [3], [24]. A Persona is an archetype of an actual user, and included in the Persona description is information stating what roles a Persona can take and what tasks he or she is responsible for. The numerous Personas are grouped into departments, and each department is illustrated by organizational charts.

The Business Process Model has a supply chain perspective and is decomposed into the activities involved in the business process. The processes are grouped together and placed within departmental borders, showing which department is responsible for which processes. The business process shown in figure 2 is one of seven business processes grouped under the Operations department.
The two model representations describe the same world, but from different perspectives. The information used in this case study is based on this generic model together with documentation that was provided by two other projects. ProERP had a project that decomposed the business process model into BPMN diagrams, and the uppermost diagram in figure 3 is from that project. In addition, documentation from a user interface development project led by the UI design team is used.

When new functionality is designed, the Personas that should participate are identified and used as leading actors when developing scenarios [22] describing the functionality. Detailed information concerning the business domain and what is required for the new functionality is provided by domain experts participating in the design project. The UI design specification consists of sketches of the user interface drawn with a drawing tool and supplemented by textual descriptions of the interaction. For usability evaluation, PowerPoint slides are used.
The process steps in figure 2 are further decomposed into BPMN diagrams showing which activities are carried out to execute the process. The topmost diagram in figure 3 shows one of the decompositions under Manage purchase requisitions and orders. Sara (who is a Purchaser) first creates a purchase order (PO). The PO is then transferred to the stack of outgoing, awaiting POs, and she can choose when she wants to communicate the PO by sending it to the supplier. The supplier must send an order acknowledgment before a pre-defined time has elapsed; otherwise a rule triggers and the PO is put into the stack of POs awaiting order acknowledgment. A reminder should then be sent to the supplier.
Each of the boxes in the BPMN model is a task suitable for one person and can be considered the highest level in a task model hierarchy modeled in a task modeling language. The decomposition of the Create PO task is shown in the task structure in the lower part of figure 3. The rounded rectangles are tasks, with an identifier and a name in the top compartment. The lower compartment is optional and contains the resources necessary for the task and the actor performing the task (shown in the parent task). A middle compartment can be added with a task description, but we have not used this compartment in this figure. The resources that are sent between tasks are flow resources triggering the execution of the following task (e.g. PO and Product). The circle enclosing the arrow means that the tasks need to be executed in a fixed sequence. To create a purchase order, Sara has to find the products, add them to a requirement list and then generate a purchase order from the requirement list. How to accomplish a task is a question of design and requires domain knowledge and knowledge about constraints in the software. In the current user interface design process, the designers in ProERP create scenario descriptions and sketches of the user interface to describe the functionality.
The task model describes what to do, but does not include the "how to do it" knowledge describing how users accomplish a task in the UI using interaction objects, state and data flow specifications. This is information typically specified in a dialogue model. Dialogue models are suitable for representing abstract interaction tasks such as selecting an element from a set of elements or pushing a button to trigger some functionality. We have developed dialogue models for each of the leaf-node tasks of the Create PO task tree in figure 3, based on the UI designer's description of how the UI should look and behave. As it seems to be essential for the user interface designer to have complete control of the design process, we have not pursued a more formal derivation of the dialogue model from the task model as done, e.g., by [16].
Fig. 3. Workflow model showing the process in which a PO is manually created, and a task model showing the decomposition of one of the tasks
The abstract user interface model is drawn using the DiaModl notation, and figure 4 shows the dialogue model for the task 1.1 Find Product. Interactors are drawn as rectangles with a name describing their functionality. Attached to the interactor's left side are gates, which define user input (pointing outwards) and output to the user (pointing inwards). The free-floating triangle is a computation with functionality as indicated by its description (match). The edges between elements are connections and define the flow of data. The Product object is from the domain model. To find a product, the user first searches for the product by typing the product number. For each digit the user types, a match function filters the product list and highlights the first product that matches. Some of the attributes of the supplier and product objects are displayed to the user.
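The described match computation corresponds roughly to an incremental prefix filter over the product list; the following Java sketch is an approximation of that behaviour (with invented names), not the DiaModl semantics.

    import java.util.List;
    import java.util.Optional;

    // Sketch of the "match" computation in the Find Product dialogue: each
    // typed digit narrows the product list, and the first matching product
    // is highlighted (here: returned).
    class ProductMatcher {
        private final List<String> productNumbers;

        ProductMatcher(List<String> productNumbers) { this.productNumbers = productNumbers; }

        Optional<String> firstMatch(String typedPrefix) {
            return productNumbers.stream()
                    .filter(p -> p.startsWith(typedPrefix))
                    .findFirst();
        }

        List<String> filtered(String typedPrefix) {
            return productNumbers.stream().filter(p -> p.startsWith(typedPrefix)).toList();
        }
    }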
The dialogue model shows an abstract model of the interaction, which can be used as a specification for the concrete implementation of the UI. The interactor's input/output signature determines a set of concrete interactor objects (e.g. a set of standard widgets) that can replace the abstract user interface component. The lower model in figure 4 shows how the abstract interactors are replaced with concrete ones matching their input/output signature.
When creating a homepage for a specific user with a different role composition than the ones in the pre-defined Persona descriptions, the BPMN diagram can be used as a starting point for creating task models. Using task models, the necessary steps for solving the BPMN tasks identified in the workflow model can be modeled in a user-centric way. Our experience from the case study indicates that each of the BPMN tasks is a candidate for being the top level of a task structure accessible directly from the employee's personal homepage.
Since the low-level tasks encapsulate a dialogue structure, a task-oriented user interface can be created by assembling the dialogue fragments for the required set of tasks. As noted by [18], modeling the user interface of an interactive system in sufficient detail to be run soon becomes an overwhelming task, and an abstraction mechanism is required to get the big picture of the system. To reduce this complexity we suggest using task model components as patterns for how standard tasks can be solved. Patterns give a generic solution to a problem and should be adapted to the specific problem [19]. Composing a UI will then consist of defining which tasks are needed, plugging together the dialogue fragments and making possible adaptations to the standard structure. For example, if a specific user needs to search for a product using the supplier name instead of the product number, as the abstract dialogue model in figure 4 prescribes, it is possible to edit the model to support search on supplier name by adding the supplier object as a resource in the interactor.
In large software development organizations like ProERP, UI designers and developers work in different, dispersed teams. The designer wants to be in charge of the UI design, but as paper prototypes "do not fly" in ProERP, they need to spend much time drawing the UI and implementing the interaction using
Fig. 4. Dialogue model of the task 1.1 Find Product, the domain model and a concrete UI specification of the abstract dialogue model
In an ERP domain many of the same or similar tasks are performed by different people having different subsets of roles within an organization. We have proposed an approach where models from the field of model-based user interface design are used in the context of workflow models to allow role-centric composition of ERP system UIs. As the suggested UI components are defined using an executable modeling notation, they can be edited, thereby allowing tailoring of the UI. Typical cases where editing would be relevant are when a user should be allowed to take shortcuts compared to what is considered the standard process (e.g. create a purchase order without getting a requisition from the manager).

In the suggested approach the transition from a task model to a dialogue structure is a matter of design decisions by the UI designer. We do not provide design support for determining a useful mapping from the task model to the abstract user interface model as done in the methodology proposed by [23].
They provide a decision tree for selecting an abstract interaction object fitting
the task. We need to consider whether such support would be appreciated by the
UI designers in the ERP domain. Also, the appropriate size of the UI components
needs to be investigated further.
References
1. Brehm, L., Heinzl, A., Markus, M.L.: Tailoring ERP systems: A spectrum of choices and their implications. In: Proceedings of the 34th Hawaii International Conference on System Sciences (2001)
2. Carlsen, S.: Conceptual Modeling and Composition of Flexible Workflow Models. PhD thesis, Norwegian University of Science and Technology (1997)
3. Cooper, A.: The inmates are running the asylum: Why high-tech products drive us crazy and how to restore the sanity. Sams Publishing (1999)
4. Cuppens, E., Raymaekers, C., Coninx, K.: A model-based design process for interactive virtual environments. In: Gilroy, S.W., Harrison, M.D. (eds.) Interactive Systems. LNCS, vol. 3941, pp. 225–236. Springer, Heidelberg (2006)
5. Curran, T.A., Ladd, A.: SAP R/3 Business Blueprint: Understanding Enterprise Supply Chain Management. Prentice-Hall, Englewood Cliffs (2000)
6. Diaper, D., Sanger, C.: Tasks for and tasks in human-computer interaction. In: Interacting with Computers, vol. 18, pp. 117–138. Elsevier B.V., Amsterdam (2006)
7. Gilbert, A.: Business apps get bad marks in usability (2003), accessed at: http://news.com.com/2100-1017-980648.html on 8/12/2006
8. Object Management Group: Business process modeling notation specification, final adopted specification dtc/06-02-01 (2006)
9. Hammer, M.: The OA mirage. Datamation 30, 36–46 (1984)
10. Hollingsworth, D.: Workflow Management Coalition - The workflow reference model. Document Number TC00-1003 (January 1995)
11. Johnson, P., Johnson, H., Waddington, R., Shouls, A.: Task-related knowledge structures: Analysis, modeling and application. In: People and Computers IV, pp. 35–62 (1988)
12. Keller, G., Teufel, T.: SAP R/3 Process-Oriented Implementation. Addison-Wesley, Reading (1998)
13. Klaus, H., Rosemann, M., Gable, G.G.: What is ERP? Information Systems Frontiers 2(2), 141–162 (2000)
14. Limbourg, Q., Vanderdonckt, J.: Comparing task models for user interface design. In: Diaper, D., Stanton, N. (eds.) The Handbook of Task Analysis for Human-Computer Interaction, vol. 6, pp. 135–154. Lawrence Erlbaum, Mahwah (2003), http://citeseer.ist.psu.edu/limbourg03comparing.html
15. Ludewig, J.: Models in software engineering - an introduction. In: Software and Systems Modeling, vol. 2, pp. 5–14. Springer, Heidelberg (2003)
16. Luyten, K., Clerckx, T., Coninx, K., Vanderdonckt, J.: Derivation of a dialog model from a task model by activity chain extraction. In: Jorge, J.A., Jardim Nunes, N., Falcão e Cunha, J. (eds.) DSV-IS 2003. LNCS, vol. 2844, pp. 203–217. Springer, Heidelberg (2003)
17. Marshak, R.T.: Workflow: Applying automation to group processes. In: Coleman, D. (ed.) Groupware - Collaborative Strategies for Corporate LANs and Intranets, pp. 143–181. Prentice Hall PTR, Englewood Cliffs (1997)
18. Paquette, D., Schneider, K.: Interaction templates for constructing user interfaces from task models. In: Jacob, R.J.K., Limbourg, Q., Vanderdonckt, J. (eds.) Computer-Aided Design of User Interfaces IV, pp. 223–234. Springer, Heidelberg (2004)
19. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Heidelberg (2000)
20. Paternò, F.: Model-based tools for pervasive usability. Interacting with Computers, 1–25 (2004)
21. Pontico, F., Farenc, C., Winckler, M.: Model-based support for specifying eService eGovernment applications. In: Coninx, K., Luyten, K., Schneider, K.A. (eds.) TAMODIA 2006. LNCS, vol. 4385. Springer, Heidelberg (2007)
22. Preece, J., Rogers, Y., Sharp, H.: Interaction Design: Beyond human-computer interaction. John Wiley & Sons, Chichester (2002)
23. Pribeanu, C., Vanderdonckt, J.: A methodological approach to task-based design of user interfaces. Studies in Informatics and Control 11, 145–158 (2002)
24. Pruitt, J., Grudin, J.: Personas: practice and theory. In: DUX 2003: Proceedings of the 2003 Conference on Designing for User Experiences, pp. 1–15. ACM Press, New York (2003)
25. Salimifard, K., Wright, M.: Petri net-based modelling of workflow systems: An overview. European Journal of Operational Research 134, 664–676 (2001)
26. Scapin, D., Pierret-Golbreich, C.: Towards a method for task description: MAD. In: Work With Display Units (WWU 1989) (1989)
27. Shehab, E.M., Sharp, M.W., Supramaniam, L., Spedding, T.A.: Enterprise resource planning: An integrative review. Business Process Management Journal 10(4), 359–386 (2004)
28. Trætteberg, H.: Modeling work: Workflow and task modeling. In: CADUI 1999 (1999)
29. Trætteberg, H.: Model-based User Interface Design. PhD thesis, Norwegian University of Science and Technology (2002)
30. Trætteberg, H.: A hybrid tool for user interface modeling and prototyping. In: Computer-Aided Design of User Interfaces V. Springer Science+Business Media B.V. (2006)
31. van der Aalst, W.M.P., ter Hofstede, A.H.M.: YAWL: Yet another workflow language (revised version). Information Systems 30, 245–275 (2005)
32. van der Veer, G.C., Lenting, B.F., Bergevoet, B.A.J.: GTA: Groupware task analysis - modeling complexity. Acta Psychologica 91, 297–322 (1996)
33. van der Veer, G., van Welie, M.: Task based groupware design: putting theory into practice. In: Proceedings of DIS 2000 (August 2000)
34. van Welie, M.: Task-based User Interface Design. PhD thesis, Vrije Universiteit (2001)
35. van Welie, M., van der Veer, G.C., Eliëns, A.: An ontology for task world models. In: Proc. Int'l Eurographics Workshop on Design, Specification, and Verification of Interactive Systems (DSV-IS 1998), pp. 57–70 (1998)
The WebTaskModel Approach to Web Process Modelling
Birgit Bomsdorf
Abstract. Task modelling has been entering the development process of web applications. However, modelling web processes from a usage-centred perspective is still challenging due to the strong differences between traditional interactive systems and state-of-the-art web applications. This paper proposes the WebTaskModel approach, by which task model concepts are adapted for the purpose of modelling interactive web applications. The main difference to existing task models is the introduction and build-time usage of a generic task lifecycle. Hereby the descriptions of exceptions and error cases of task performance (caused by, e.g., the stateless protocol or browser interactions) are on the one hand appended to the task while, on the other hand, being clearly separated from it.
1 Introduction
Current solutions (platforms, protocols, frameworks, etc.) are well suited for the development of traditional web sites, but cause problems in realizing state-of-the-art web applications. Modelling the special requirements of web processes is still a critical point. State-based task sequences, for example, have to be implemented on top of the stateless HTTP protocol. A further problem results from interactions enabled by web browsers, which allow the user to backtrack to an earlier sub-task of a sequence, bookmark an interaction and come back to it later to finalize the task. This situation comes along with an increasing occurrence and importance of processes in web applications in general. Task modelling has been entering the development process to face these problems. Both Web Engineering (WE) and Human-Computer Interaction (HCI) contribute to this, but with different emphasis on various aspects in each community due to the respective origins and backgrounds.
Traditional interactive systems (desktop applications) and web sites of the first days (content-driven web sites) are quite different. Their characteristics are contrasted in Table 1, whereby the focus is on usage-related aspects. Web applications are in between the two, as shown by the grey fields marking their key features. A single application, however, might cover a feature with different intensity. For example, when a customer visits an online book store, information about books and relations between them may be in the foreground. Once the customer wants to check out, the activities to be performed are dominating. From the user's point of view, however, the distinction between content-oriented interactions (accessing the information space)
different perspectives taken during modelling of the processes to point out where our
approach fits in. Task model concepts cannot be applied straightforwardly due to the
differences between traditional user interfaces and web applications. The second part
of section 2 introduces basic concepts by means of an example. Section 3 goes on
with the WTM presentation, detailing the description of behavioural aspects. Afterwards, section 4 depicts the connections to role, object and context models. This is
followed by the introduction of a first WTM simulation tool in section 5.
Since the build-time usage of task state machines leads from time to time to misunderstandings, we first reflect on different modelling perspectives. Web applications, as considered here, are characterized by three kinds of processes:

Usage-oriented processes represent, from the perspective of the users, how they perform tasks and activities by means of the web application.

Domain-oriented processes result from the domain and the purpose of the web application. Examples of such processes are business processes in e-commerce or didactical design in e-learning. The process specification reflects the viewpoint of the site owner and his goals.

System-based processes are specified from the perspective of the developer aiming at implementation. The description is conditioned by, e.g., business logic and system-internal control information. This group of processes also includes the models of web services, which are specified by business processes as well.
Both WE and HCI provide answers on how to model such web processes, but with different emphasis on the usage perspective and different utilization of the resulting specifications in subsequent design steps. The inclusion of process specifications in existing modelling approaches leads to the adoption and adaptation of different models, whereby business process models (OO-H and UWE [9], OOHDM [14]), workflow models (WebML [6]) or user task models (WSDM [7], CTT [11]) are most commonly utilized. In principle they provide similar concepts, but their usage in existing approaches differs.
As a rule of thumb, task models concern mainly usage-oriented processes, whereas business process models and workflows are used more to cover the domain- and system-oriented perspectives. Generally, process/workflow models focus more on responsibilities and task assignment, whereas the task concept relates more to user goals. Control structures used in process/workflow models are basically adopted from programming languages, whereas constructors in task models are more influenced by the domain and the users. The prevalent focus in modelling differs as well. Task models put the decomposition structure into the foreground, which is typically denoted by means of a hierarchical tree notation. Process models lay the primary focus on the sequencing information, formulated in most cases in terms of UML activity diagrams. Each model is dedicated to one perspective or a mixture. All perspectives and their relations are needed (separation of concerns). The WebTaskModel (WTM) describes
web processes from the perspective of using a web application, whereby it provides clear interfaces to link tasks with additional process descriptions.
WTM enhances our previous work on task-based modelling [5]. Since that approach was from the beginning similar, but not identical, to other task models, it cannot make use of existing tools and notations, such as CTTE [11]. In our current work we extend the modelling concepts to account more appropriately for characteristics of interactive web applications. In contrast to other approaches to task modelling, we do not assume the developer to describe the complete application by means of a connected task model; instead, task modelling can be applied whenever usage-centred aspects are under investigation. In cases where aspects of the information space (objects and their semantic relations) dominate the modelling focus, the well-known models and notations (such as UML and Entity-Relationship diagrams) are applied. The resulting specification consists of several task models, interrelated and linked to data-centric model parts. Since a first navigation structure is derived from this [3], neither the task nor the content structure dominates the entire web user interface, but only those parts where appropriate.
As an example of task modelling, Figure 1 shows parts of a model of an online travel agency.1 As usual, the task hierarchy, showing the decomposition of a task into its subtasks, and different task types are modelled. In the specification of high-level usage behaviour we distinguish cooperation tasks, denoting pieces of work that are performed by the user in conjunction with the web application; user tasks, denoting the user parts of the cooperation that are thus performed without system intervention; and system tasks, defining pure system parts. Abstract tasks, similarly to CTT, are compound tasks whose subtasks belong to different task categories.
Figure 1 depicts three separate task models specifying the login/logout procedure,
the booking of a flight and a hotel, and the single-task model get tourist information.
We define no dependency between these models to allow a user to switch between the
tasks, e.g., to perform the login process at every point within the booking process. At
this modelling stage, all isolated task models are conceptually related by the web ap-
plication (here Flight Application). The position in the final site and thus inclusion of
the related interaction elements into pages depends on the navigation and page design.
The number of task executions is constrained by cardinalities of the form (min,max), whereby no label indicates mandatory performance, i.e., card=(1,1). The
task perform login process is marked with (1..*) to denote that the user can repeat it
as often as he wants. Labels indicating min=0 define optional tasks (in the example
alter shipping data and alter payment data). Additionally, the label T is used to define
transactional tasks, i.e., task sequences that have to be performed completely success-
fully, or not at all (payment in the example).
The order of task execution is given by temporal relations, which are assigned con-
ceptually to the parent task so that the same temporal relation is valid for all of the
subtasks. Relations typically used in task models are sequential task execution,
1 The representations are used here to explain the concepts, but not to introduce a new notation.
parallel task execution, and selection of one task from a set of alternative tasks. Fur-
ther relations are described in [11] and [5]. In the notation used in Figure 1, temporal
relations are denoted by abbreviations. The tasks find a flight, choose a hotel and
payment have to be performed strictly one after the other (denoted by Seq) in the
specified order (denoted by ).
Tasks of an arbitrary sequence, such as provide departure and provide arrival or alter shipping data and alter payment data, are performed one after the other in any order (denoted by ASeq), so that at any point in time only one of the tasks is under execution. SeqB is an extension we made to describe a task ordering that often exists in web applications: going back systematically to an already executed task of a sequence. Hereby, the results of that task, or of the complete sequence from that task onwards, are rolled back, and the tasks can be performed again. In the example, the user is guided through the tasks of payment. Before he accepts the conditions or confirms, he is allowed to go back to re-perform alter data and accept conditions, respectively. Since validate data is a system task, the user cannot step back to it, but it is performed automatically after each execution of alter data. Guided tours as traditionally implemented in web sites provide similar behaviour, but the effect is different: visitors are guided through an information space and can leave the tour at any arbitrary point without any effect on the information space or domain model.
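To make the intended semantics of Seq, ASeq and SeqB concrete, here is a minimal Python sketch; it is our own illustration, not part of WTM or its tooling, and all class, method and task names are invented:

    # Illustrative interpreter for the Seq, ASeq and SeqB orderings described
    # above (rollback of object state is abstracted into the 'done' set).
    class OrderedTasks:
        def __init__(self, relation, tasks):
            self.relation = relation      # "Seq", "ASeq" or "SeqB"
            self.tasks = tasks            # subtask names in declared order
            self.done = set()             # completed subtasks

        def enabled(self):
            """Subtasks that may be started next."""
            if self.relation in ("Seq", "SeqB"):
                pending = [t for t in self.tasks if t not in self.done]
                return pending[:1]        # strictly one after the other
            return [t for t in self.tasks if t not in self.done]  # ASeq

        def complete(self, task):
            self.done.add(task)

        def step_back(self, task):
            """SeqB only: revisit an already executed task; the results of
            that task and of all later tasks are rolled back."""
            assert self.relation == "SeqB" and task in self.done
            for t in self.tasks[self.tasks.index(task):]:
                self.done.discard(t)

    payment = OrderedTasks("SeqB", ["alter data", "validate data",
                                    "accept conditions", "confirm"])

A guided tour, by contrast, would navigate back without touching the set of completed tasks or any domain state, which is exactly the difference pointed out above.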
The example shows some extensions made by WTM; further extensions are pre-
sented together with the task state machine, which is used as an explicit build-time
concept.
[Diagram not reproduced: the generic task state machine. A task moves from initiated to running (Start), can be suspended and resumed (Suspend, Resume) or restarted (Restart), and reaches one of the end states skipped (Skip), completed (End) or terminated (Abort). The events timeout, navigate_out and navigate_in are valid in all states.]

State        Meaning
initiated    if all preconditions are fulfilled, the task can be started
skipped      the task is omitted
running      denotes the actual performance of the task and of its subtasks, if applicable
completed    marks a successful task execution
suspended    the task is interrupted
terminated   indicates an abort
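The life cycle can be rendered as a small state machine. The following Python sketch uses the state and event names above; the dispatch mechanics, and the source states of Restart, are our simplification:

    # Sketch of the generic task life cycle (state and event names from the
    # model; the exact source states of Restart are simplified here).
    TRANSITIONS = {
        ("initiated", "Start"):   "running",
        ("initiated", "Skip"):    "skipped",
        ("running",   "Suspend"): "suspended",
        ("suspended", "Resume"):  "running",
        ("running",   "Restart"): "initiated",
        ("suspended", "Restart"): "initiated",
        ("running",   "End"):     "completed",
        ("running",   "Abort"):   "terminated",
    }
    END_STATES = {"skipped", "completed", "terminated"}
    GLOBAL_EVENTS = {"timeout", "navigate_out", "navigate_in"}

    class TaskInstance:
        def __init__(self, name):
            self.name, self.state = name, "initiated"

        def handle(self, event):
            if self.state in END_STATES:
                return                    # end states accept no events
            if event in GLOBAL_EVENTS:
                return                    # reaction is application-specific
            self.state = TRANSITIONS.get((self.state, event), self.state)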
We applied our extended task model in small projects (in industry as well as in student projects) before implementing an editor. The experience shows that the models are more structured and concise in those cases where the developers could make use of the task state machine directly. Although we do not regard this as a representative evaluation, it motivated us to re-design our first editor conception. As a result, the main task structure is modelled as before by means of a hierarchical tree notation, but additional behaviour can be assigned explicitly to states and transitions.
States and transitions can be extended by additional behaviours, which are specified
by triggers and actions. The actions of a behaviour may affect tasks, objects, roles
and/or conditions as well as context information.
The specific events Start, End, Skip, Restart, Suspend, Resume and Abort can be used, on the one hand, to represent internal system events influencing task execution. Hereby we realize the coordination of the usage-oriented processes with the domain-oriented and system-based processes. On the other hand, the specific task events can be used to represent events resulting from user interactions. Within the runtime system the task control layer is complemented by a dialog layer that controls the user-system dialog and forwards interaction events to the task control component [2, 4]. At build time the developer has to specify which interactions should match a task event.²
² This extension is also useful in the context of traditional interactive applications.
The screen fragments in Figure 3 show two possible implementations of the same task description; the screenshots were taken after the execution of the first task. On the right-hand side a solution with two selection lists (drop-down boxes) is shown.
The lower list is customized according to the current selection in the From list. On the left-hand side the tasks are implemented by means of an interactive map presenting all supported airports. Each time the mouse cursor is positioned over a city name, all existing connections are visualized by means of lines. On pressing the left mouse button they are fixed (as shown by the screenshot of the example), and the user can perform the subsequent task by a mouse click on the destination. We could take either of these solutions to implement the dialog for the example task enter flight details, whereby the subtasks are to be performed in strict sequential order. The realization of a strictly sequential dialog would impose a restriction, but would not violate the predefined ASeq access specification. In the resulting web user interface, different interaction possibilities are provided for performing the same task. Thus, the respective events are bound to the same task events; e.g., both the selection from the list and the mouse click on the map are bound to the End event of provide arrival.
Events resulting from interactions and task events can be mapped arbitrarily to each other. Although all specific events can be linked to interactions, Start, End and Skip are particularly significant in designing the web user interface, since they signal the need for interaction elements. However, there is no need to define an explicit interaction for each task event. For example, a task is often skipped simply by performing another one. The optional subtasks of alter data (Figure 1) might, for example, be skipped by accepting the conditions, i.e., by an interaction assigned to another task.
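As a simple illustration of such a binding, assume a dialog layer that maps concrete interaction events onto task events. All identifiers below are invented; the paper does not prescribe a concrete representation:

    # Both UI implementations of 'provide arrival' raise the same task event.
    bindings = {
        # (ui element, ui event)  : (task, task event)
        ("arrival_list", "select"): ("provide arrival", "End"),
        ("airport_map",  "click"):  ("provide arrival", "End"),
    }

    def forward(ui_element, ui_event, send_task_event):
        """Dialog layer: look up the interaction and forward the matching
        task event to the task control component."""
        target = bindings.get((ui_element, ui_event))
        if target:
            send_task_event(*target)

    forward("airport_map", "click", lambda task, event: print(task, event))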
The global events timeout, navigate_out and navigate_in are generated from the outside and are valid in all states but the end states (skipped, completed and terminated). The timeout event is a pure system event, introduced here to deal with the occurrence of session timeouts. In contrast to the user interfaces of traditional applications, the client/browser provides additional navigation interactions (e.g., the Back button and the navigation history). The WebTaskModel provides the events navigate_in and navigate_out to deal explicitly with unexpected user navigations by which the user leaves or steps into a predefined ordering of task execution. Such user behaviour, as well as session timeouts, has to be considered at the level of task modelling, since it may significantly impact the predefined processes. Online shops are often cited in this context, e.g., the Orbitz bug (an actual bug in the flight-reservation program of Orbitz.com) as reported in [10].
First of all, the relevance of a global event for a specific task has to be decided: should anything happen at all, or should the global event cause no special behaviour of the task? If it is relevant, the impact on further task executions and on the results reached so far has to be fixed: should the task be aborted, be interrupted, or should nothing occur? Should modifications on objects remain, or is a rollback to be performed? The reaction in each case is in general a matter of application design (examples are given below). Furthermore, as in the case of a specific task event, the related trigger has to be defined.
In our example we want to treat a navigate_out occurring while the task select flight is running as an interruption, which is formulated by
select flight.running.navigate_out → send Suspend to task select flight
The specification does not describe from which user interaction the navigate_out results. For example, it may be generated because the user starts to browse through the tourist information:
get tourist information.running.on_entry → send navigate_out to task select flight
In general, the specification of how to handle an event is decoupled from its occurrence. The reactions are described locally in the context of each task. A navigate_out, however, cannot be detected in all cases (due to the HTTP protocol). The user may navigate to an external web site, leaving the current task process open. At a predefined point in time the web application server will terminate the user session, whereby the timeout event is generated. We could make use of this event to formulate the same behaviour as defined for navigate_out:
select flight.running.timeout → send Suspend to task select flight
However, if the user is not logged in, we do not know how to assign to him the data collected so far. So we model a system task handle timeout that differentiates the cases:
select flight.running.timeout → send Start to task handle timeout
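The rules quoted above follow an event-condition-action pattern: a trigger of the form task.state.event, followed by an action. A purely illustrative Python evaluation of such rules (the dispatcher itself is our own sketch):

    # Trigger/action rules in the paper's task.state.event form.
    rules = [
        (("select flight", "running", "navigate_out"),
         ("Suspend", "select flight")),
        (("select flight", "running", "timeout"),
         ("Start", "handle timeout")),
        (("get tourist information", "running", "on_entry"),
         ("navigate_out", "select flight")),
    ]

    def dispatch(task, state, event, send):
        """Fire every rule matching task.state.event; 'send' delivers the
        resulting event to the target task."""
        for (t, s, e), (action, target) in rules:
            if (t, s, e) == (task, state, event):
                send(target, action)

    dispatch("select flight", "running", "timeout",
             lambda task, event: print("send", event, "to task", task))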
Another irregular navigation transition is given by requesting a page assigned to a task whose previous mandatory tasks were not carried out. All in all, there are diverse causes of, and different ways of detecting, navigations beyond the predefined task sequencing. They can be specified at the task level as well as at the user interface level, e.g., supported by the JStateMachine framework that handles allowed and forbidden UI state transitions [1]. As shown by these few examples, the task life cycle model can be used in a flexible way to describe complex behaviour of high-level tasks. The events timeout, navigate_out and navigate_in are used only if they impact high-level behaviour. If, for example, two tasks are represented and accessible, respectively, by the same page, it is rather useless to attach reactions to the navigate_out and navigate_in events. Furthermore, a simply structured task, e.g., the entry of a word, does not require control by means of all states of the generic life cycle. Our experience so far shows that navigate_out specifications in particular are not defined very often at the task abstraction layer, but where they are, they are effective in keeping the web application behaviour consistent over all web pages presenting the same task.
In the example, the customer should be able to choose the departure from a set and
afterwards the destination depending on the selected departure. Thus sets for the two
selection tasks are provided (see Figure 4). The connections between tasks and task
objects are denoted by involves relations, which are defined by the specifications of
object-actions (see above). Additional information, such as constraints or derivation
from the domain objects, is attached informally. In the example, in the underlying
database we would provide a relation between departure and arrival entities, based on
which the second set can be dynamically derived and inserted into a web page.
Properties of task objects and task object types, respectively, are described by means
of attributes while their life cycles are specified by means of a finite state machine.
Hereby, only those aspects are modelled that are relevant to the user while performing
a task and interacting with the system, respectively. Later on in the process, we apply
UML diagrams by which the model can be described in a very detailed way and with
ongoing refinement the object-actions are replaced by, e.g., method invocations.
Role and context models are linked in the same way: roles as well as context objects are described by attributes and by state machines specifying role and context changes, respectively. Role change in web applications is often more dynamic than in traditional desktop applications. For example, a user requesting the start page of the online travel agency is unknown. In this role he is allowed to choose a flight and a hotel. The task can be finalized only if he is logged in, adopting the role of a registered user. Role changes resulting from task execution, as well as the determination of a task space resulting from a role change, occur more often than in traditional interactive applications. In addition, contextual changes have to be handled likewise. In WTM, these interplays are modelled by the actions resulting from state transitions and by events triggered by elements of the task, role, object, and context models.
Long-term dependencies are modelled by conditions. Task execution mostly depends also on business rules, context information and the results of tasks performed so far. These dependencies are specified by pre- and post-conditions. A pre-condition is a logical expression that has to hold true before starting the task; once the task is started, the condition is no longer checked. A post-condition is a logical expression that has to be satisfied for completing the task. In contrast to pre- and post-conditions, temporal relations decide on the ordering of task execution. Once a task may be performed according to the sequencing information, the conditions can be evaluated to determine whether an execution is actually enabled and may be finalized, respectively. In WTM, the structuring and composition of conditions are separated from their usage in task performance (as well as from roles, objects, and context). A condition may be relevant for more than one task, possibly assigned to one task as a pre-condition while being a post-condition of another. Since conditions are formulated separately from tasks, objects, and roles, they can be attached flexibly to the respective model.
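A minimal sketch of this separation, assuming boolean conditions and invented task names (e.g., a logged_in condition that is a post-condition of the login task and a pre-condition of confirming):

    # Conditions are modelled separately and then attached to tasks.
    conditions = {"logged_in": False}

    attachments = {
        # task                  : (pre-condition, post-condition)
        "perform login process": (None, "logged_in"),
        "confirm":               ("logged_in", None),
    }

    def may_start(task):
        """A pre-condition must hold before the task starts; once started,
        it is no longer checked."""
        pre, _ = attachments.get(task, (None, None))
        return pre is None or conditions[pre]

    def may_complete(task):
        """A post-condition must be satisfied for the task to complete."""
        _, post = attachments.get(task, (None, None))
        return post is None or conditions[post]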
³ Again, we are not going to propose a new notation; the nice ellipses will be replaced.
[Figure not reproduced: screenshot of the simulator; the labelled areas (ii), (iii), (v), (vii), (viii) and (ix) are referred to in the text below.]
The simulator provides not only the simulation of the tasks, their behaviours and interdependencies, but also task performance in conjunction with the other model specifications. The object area (v) on the right-hand side shows the task objects and their manipulations during task execution. Similarly, the role area (vi) depicts all roles and their specified states, allowing investigation of role changes resulting from task execution as well as of the disabling and enabling of tasks because of role changes. The context area (vii) represents the context specification, which is empty in this example. As in the case of the role, task, and object models, effects on context settings can be tested, as well as the reactions to context changes.
The condition area (viii) presents the conditions and their value changes resulting from modifications occurring in the role, task, object and context models. All in all, the mutual dependencies of all the models can be investigated. In addition, the special case of a session timeout can be tested: a respective event is sent to all tasks when the Session Timeout button is activated (ix).
6 Conclusion
The task model enhancements presented in this paper aim particularly at developing
web applications but are applicable to traditional interactive systems as well. The
main extension introduced by the WebTaskModel is the explicit description of task
performance by means of state information at build time. Application specific rules
can be added to the generic task behaviour and are used at run time as part of the control information. Hereby, system functionality required for handling the interruptions can also be invoked in the context of a task.
WSDM [7] and OOWS [12] are modelling approaches that also put a strong emphasis on the user-centred view; both describe the user tasks by means of a task model (the CTT notation [11], in a slightly modified version). In OOWS the task model is used for requirements elicitation and for specification concerning system operations, which is similar to our previous work [13]. The WebTaskModel approach, in contrast, is similar to WSDM, where task models are part of the conceptual modelling and are thus used more formally in deriving the domain objects and the navigation model. Other web modelling approaches, e.g., UWE [9], claim their activity diagrams to be user-oriented, but mostly cover the system perspective.
In our approach we make use of less detailed task object descriptions, which is different from Web Engineering practice but often applied in HCI. As in our work in [13], the modifications of task objects are described by means of state-transition diagrams. In that work, however, task models are only used as an informal input to derive the object transitions. In the WebTaskModel approach we retain the task model and bind it to the objects by means of conditions and events that guard the object transitions. With the ongoing refinement of the objects, which are then described by means of UML, this binding is replaced with method invocations. This is more flexible and allows using both the task and the object model as formal input for the subsequent navigation and interaction design [3]. First experience showed that developers tend to favour only one of the object types at a time, depending on their background.
The general objective of our work is to provide modelling and runtime support for multiple user interfaces. One of the steps in this direction is the presented link to context models, which will be refined in follow-up work. The WebTaskModel is used at build time to generically define the task- and domain-specific behaviour of the site. The tasks can be refined down to the dialog level, e.g., as done in WSDM. Alternatively, the dialog can be described by a separate dialog model, as introduced for instance in [16]. Currently, we investigate both directions. In [2] a refined WebTaskModel is combined with so-called Abstract Dialog Units. The resulting models are transformed into a runtime system, whereby the task state machines become part of the controller [4]. All in all, we make multiple uses of runtime task models: as a small-scale workflow system within an e-learning application, as a generic extension of the application architecture, and within the simulation of task models. Hereby, modelling the task-related behaviour has been gaining importance. In our work on the WebTaskModel so far we have concentrated mainly on its concepts and their application in projects. First proof-of-concept editors and simulation tools have been implemented as part of Bachelor theses and are currently being developed further.
Acknowledgements
The author would like to thank Sebastian Schuth for implementing the first simulation
tool and also the reviewers (particularly Reviewer 2) for their valuable comments
about this paper.
References
1. Anderson, D., O'Byrne, B.: Lean Interaction Design and Implementation: Using Statecharts with Feature Driven Development. In: Proceedings of the 2nd International Conference on Usage-Centered Design - ForUse 2003 (2003)
2. Betermieux, S., Bomsdorf, B.: Finalizing Dialog Models at Runtime. In: 7th International Conference on Web Engineering - ICWE 2007. LNCS, vol. 4607, pp. 137-151. Springer, Heidelberg (2007)
3. Bomsdorf, B.: Modelling Interactive Web Applications: From Usage Modelling towards Navigation Models. In: Proceedings of the 6th International Workshop on Web-Oriented Software Technologies - IWWOST 2007, pp. 194-208 (2007)
4. Bomsdorf, B.: First Steps Towards Task-Related Web User Interface. In: Proceedings of the 4th International Conference on Computer-Aided Design of User Interfaces - CADUI 2002, pp. 349-356. Kluwer, Dordrecht (2002)
5. Bomsdorf, B.: A Coherent and Integrative Modelling Framework for Task-Based Development of Interactive Systems (in German). PhD Thesis, Heinz-Nixdorf-Institut/Universität Paderborn (1999), http://pi1.fernuni-hagen.de/bomsdorf
6. Brambilla, M., Ceri, S., Fraternali, P., Manolescu, I.: Process Modeling in Web Applications. ACM Transactions on Software Engineering and Methodology (TOSEM) (2006)
7. De Troyer, O., Casteleyn, S.: Modeling Complex Processes for Web Applications using WSDM. In: Proceedings of the International Workshop on Web-Oriented Software Technologies (IWWOST 2003) (2003)
8. Klug, T., Kangasharju, J.: Executable Task Models. In: 4th International Workshop on Task Models and Diagrams for User Interface Design - TAMODIA 2005, pp. 119-122 (2005)
9. Koch, N., Kraus, A., Cachero, C., Meliá, S.: Integration of Business Processes in Web Application Models. Journal of Web Engineering 3(1), 22-49 (2004)
10. Licata, D.R., Krishnamurthi, S.: Verifying Interactive Web Programs. In: Proceedings of the IEEE International Conference on Automated Software Engineering, pp. 164-173. IEEE Computer Society Press, Los Alamitos (2004)
11. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Berlin (1999)
12. Ruiz, M., Pelechano, V., Pastor, Ó.: Designing Web Services for Supporting User Tasks: A Model Driven Approach. In: Proceedings of the International Workshop on Conceptual Modeling of Service-Oriented Software Systems - CoSS 2006, pp. 193-202 (2006)
13. Szwillus, G., Bomsdorf, B.: Models for Task-Object-Based Web Site Management. In: Forbrig, P., Limbourg, Q., Urban, B., Vanderdonckt, J. (eds.) DSV-IS 2002. LNCS, vol. 2545, pp. 267-281. Springer, Heidelberg (2002)
14. Schmid, H.A., Rossi, G.: Designing Business Processes in E-commerce Applications. In: Bauknecht, K., Tjoa, A.M., Quirchmayr, G. (eds.) EC-Web 2002. LNCS, vol. 2455, pp. 353-362. Springer, Heidelberg (2002)
15. Vilain, P., Schwabe, D.: Improving the Web Application Design Process with UIDs. In: 2nd International Workshop on Web-Oriented Software Technology (2002)
16. Winckler, M., Vanderdonckt, J.: Towards a User-Centered Design of Web Applications based on a Task Model. In: International Workshop on Web-Oriented Software Technologies - IWWOST 2005 (2005)
Exploring Usability Needs by
Human-Computer Interaction Patterns
Abstract. Covering quality aspects such as usability throughout the software development life cycle is challenging. These "-ilities" are generally difficult to grasp and usually lack appropriate quantifiability, which would ease their systematic consideration. We propose a pattern-based development method supporting the identification of usability requirements and their proper specification. By taking usability principles from Human-Computer Interaction (HCI) design patterns and incorporating them into patterns for software analysis (problem frames), we obtain a new kind of pattern applicable to requirements engineering: HCIFrames. They are used for exploring the usability needs of a given problem situation.
Patterns for developing software have been popular for quite some time. They support the reuse of development knowledge which has proven of value, and can assist developers in building software efficiently. A common approach is using design patterns, which represent best-practice solutions for recurrent, but also varying, design problems. They were originally introduced in architecture by Alexander et al. [1] and first transferred to the software domain by Beck and Cunningham [2]. Gamma et al. [7] developed a pattern catalog for Software Engineering (SE). Recently, design patterns have attracted high interest in Human-Computer Interaction (HCI), where various pattern collections for different purposes exist, e.g., web design [14], user interfaces [13], groupware applications [11], navigational design [10], or collections of general HCI design patterns [3].
From the SE point of view there are two drawbacks of HCI design patterns that we take into account. Firstly, many HCI design patterns are still represented merely by graphics such as screenshots and a corresponding text passage containing their natural-language description, even though approaches for formalizing them exist [6]. This meets the philosophy of providing patterns understandable by laymen, but constrains their methodical deployment in the software development life cycle. Secondly, there are many synonymous patterns in diverse collections, where the pattern authors use different names for their design patterns and describe them in different ways.
From the HCI point of view, common SE approaches neglect to address quality aspects, or so-called non-functional software properties [4]. Primarily, they concentrate on describing the functionality of a program, although quality aspects such as usability should be built systematically into software from the beginning. As different views of the term usability exist [15], our understanding of usability refers to ISO 9241-110:2006 [9].
By means of patterns we start integrating Software Engineering and Usability Engineering activities. Design patterns are used for solving problems, but they do not suffice to describe development problems themselves. Therefore, we consider another kind of pattern, which is used for characterizing the problems that should be solved, namely the problem frames approach by Jackson [8]. A problem frame is a pattern for structuring a simple problem situation. It classifies the problem without determining how to solve it. In this article, we continue our prior work [16], where we introduced HCIFrames, which are patterns especially considering usability problems. In Section 2, we extend and detail our method by extracting usability principles, referring to ISO 9241-110:2006, from given HCI design patterns. These usability principles are incorporated into problem frames, which we introduce in Section 3, by deriving usability concerns for them. In Section 4, a basic problem frame is extended by these usability concerns to obtain an HCIFrame. Section 5 concludes our results and gives a prospect of future work.
Smith and Williams note that "a pattern is a realization of one or more principles" [12, p. 263], which can be embedded into various (anti-)patterns. These principles are applicable during the early phases of software development [12, p. 242] and help to identify design alternatives [12, p. 241]. We examined the problem description sections of several design patterns and deduced their inherent, common principles (Tab. 1, first column) for their classification. This classification is still an ongoing effort; we present a part of the results obtained so far in this article. For patterns in the same row, we imply that they share the same principle to solve problems; e.g., in the third row, the design patterns Memento, Command History, and Elephant's Brain are different applications of the same problem-solving principle.
[Fig. 1 not reproduced: frame diagram of the problem frame Simple Workpieces, with the machine Editing Tool, the domains Workpieces and User, the shared phenomena ET!E1 (E1: machine command), WP!Y2 (Y2: workpieces state) and US!E3 (E3: user event), and the requirement Command effects referring to Y4 (workpieces status).]
Jackson provides a set of five basic problem frames, which can be extended
by combining them or creating variants of them [5]; we do not discuss these further. Each problem frame, such as Jackson's Simple Workpieces, is represented by a frame diagram (Fig. 1; cf. [8] for more details) containing different kinds of domains (boxes), interfaces with shared phenomena (labeled lines with a related set of operations, actions or events representing domain properties), and a requirements oval.
Problem frames support deriving specifications from requirements. Specifications describe the desired machine behavior (interfaces at the box with two vertical bars) and thus are translations of customer requirements into corresponding technical descriptions of software services used by developers. Therefore, the basic frame concern of a problem frame must be addressed [8, p. 105ff]. In our method we represent the basic frame concern of Simple Workpieces by template statements for its requirement Command effects in (R_CE) and its corresponding specification in (S_CE) (angle brackets are placeholders for domains and shared phenomena of Fig. 1):
(R1): A Player, who commands the machine Game to execute move, expects to change the Pac-Man state to new location.
(S1): On behalf of the Player command move, the machine Game manipulates the Pac-Man state position by turn to achieve the desired Pac-Man state new location.
[Fig. 2 not reproduced. Instantiated problem frame Simple Workpieces used for a Pac-Man game: the machine Game shares {turn} with the PacMan domain (GM!{turn}), PacMan exposes {position} (PM!{position}), and requirement R1 refers to the desired state "new location".]
By means of problem frames, simple problem descriptions can be derived that suffice for specifying the core functionality of a desired software system. However, currently they lack a systematic account of non-functional properties or quality attributes such as usability. We incorporate our usability principles into problem frames by extending their basic frame concerns, where reasonable, to guide usability specifications.
[Fig. 3 not reproduced: HCIFrame for Simple Workpieces (additions in bold face and italic type). Interface E3 now carries {user event, cancel event, undo event}, interface E1 carries {machine command, mc cancel, mc restore}, and the requirements oval refers to R_CE, R_CA and R_UA.]
The first usability principle of Tab. 2, users' activity, is already covered by the shared phenomenon user event of interface E3. It does not cause any change to the frame diagram itself; e.g., no additional domains have to be introduced. Now (HCI) design patterns that support the principle users' activity can be applied for solving a problem specified by Simple Workpieces, e.g., Command and Unit of Work in Tab. 1.
In contrast to users' activity, a new usability concern for cancel activity, given by the template requirement (R_CA) and its corresponding specification (S_CA), is added to Simple Workpieces, accompanied by a new shared phenomenon at interface E1. Comparably to cancel activity, a new usability concern has to be created for undo activities. Without detailing its corresponding template statements, we add (R_UA) to the requirements oval and an additional phenomenon to interfaces E1 and E3 of the frame diagram for Simple Workpieces in Fig. 3, to handle the undo activities principle. The resulting HCIFrame shows that if a problem fits the Simple Workpieces frame, then specific usability needs, such as those introduced by the template requirements (R_CA) and (R_UA), are of relevance and should be considered in addition to (R_CE). Developers only need to check whether one of these usability concerns is applicable. Because HCIFrames and (HCI) design patterns are strongly related via their common usability principles, an implementation of a corresponding solution for these usability problems is supported. Returning to the prior Pac-Man example, this means that besides move Pac-Man, user interactions for cancel move and undo move should be considered as affecting the game's usability.
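Operationally, an HCIFrame can be read as a checklist attached to a basic problem frame. A trivial Python sketch, using the frame and concern names from this section (the encoding itself is ours, not part of the method's notation):

    # An HCIFrame bundles a problem frame with the usability concerns
    # derived for it; developers check which concerns apply.
    HCIFRAMES = {
        "Simple Workpieces": {
            "R_CE": "command effects (basic frame concern)",
            "R_CA": "cancel activity",
            "R_UA": "undo activities",
        },
    }

    def usability_checklist(frame):
        """Concerns to consider when a problem fits the given frame."""
        return HCIFRAMES[frame]

    # Pac-Man: besides 'move', consider 'cancel move' and 'undo move'.
    for concern, meaning in usability_checklist("Simple Workpieces").items():
        print(concern, "-", meaning)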
By extracting usability principles from HCI design patterns and incorporating them into problem frames, we obtain HCIFrames, which are patterns for characterizing usability problems. They allow the exploration of usability needs in early software development, which is a prerequisite for building usability into software applications systematically. By using HCIFrames, a developer is guided in the identification, specification and review of usability demands, and no longer depends solely on personal experience. To determine the efficiency of HCIFrame use, more research is needed. Our approach already provides a basis for a continuous pattern-based software development method by explicitly linking patterns of software analysis to corresponding patterns of software design via common usability principles.
Motivated by the findings presented here, we are working on additional HCIFrames and their proper implementation by means of corresponding (HCI) design patterns. Furthermore, we are interested in investigating the interactions of usability requirements with other quality aspects such as security, safety or performance. How possible conflicts between these can be resolved by our method is future research as well.
References
[1] Alexander, C., Ishikawa, S., Silverstein, M., Jacobson, M., Fiksdahl-King, I., Angel, S.: A Pattern Language. Oxford University Press, New York (1977)
[2] Beck, K., Cunningham, W.: Using Pattern Languages for Object-Oriented Programs. OOPSLA 1987 Workshop on the Specification and Design for OO-Programming (1987)
[3] Borchers, J.: A Pattern Approach to Interaction Design. John Wiley & Sons, USA (2001)
[4] Chung, L., Nixon, B.A., Yu, E., Mylopoulos, J.: Non-Functional Requirements in Software Engineering. Kluwer Academic Publishers, Boston, USA (2000)
[5] Côté, I., Hatebur, D., Heisel, M., Schmidt, H., Wentzlaff, I.: A Systematic Account of Problem Frames. In: EuroPLoP 2007, Universitätsverlag Konstanz (to appear, 2008)
[6] Folmer, E., van Welie, M., Bosch, J.: Bridging Patterns: An Approach to Bridge Gaps Between HCI and SE. Journal of Information and Software Technology 48(2) (2006)
[7] Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns - Elements of Reusable Object-Oriented Software. Addison-Wesley, Boston, USA (1995)
[8] Jackson, M.: Problem Frames - Analysing and Structuring Software Development Problems. Addison-Wesley, Reading (2001)
[9] ISO 9241-110:2006. Ergonomics of Human-System Interaction - Part 110: Dialogue Principles. International Organisation for Standardization (2006)
[10] Rossi, G., Schwabe, D., Lyardet, F.: User Interface Patterns for Hypermedia Applications. In: Proc. of the Working Conference on AVI. ACM Press, New York (2000)
[11] Schümmer, T.: A Pattern Approach for End-User Centered Groupware Development. PhD thesis, FernUniversität in Hagen (2005)
[12] Smith, C.U., Williams, L.G.: Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software. Addison-Wesley Professional, Reading (2001)
[13] Tidwell, J.: Designing Interfaces. O'Reilly Media, Sebastopol, USA (2005)
[14] van Duyne, D.K., Landay, J., Hong, J.: The Design of Sites - Patterns for Creating Winning Websites. Prentice-Hall, Englewood Cliffs (2002)
[15] van Welie, M., van der Veer, G.C., Eliëns, A.: Breaking down Usability. In: Proceedings of Interact 1999, Edinburgh, Scotland (1999)
[16] Wentzlaff, I., Specker, M.: Pattern-based Development of User-Friendly Web Applications. In: Workshop Proceedings of the 6th ICWE. ACM Press, New York (2006)
From Task Model to Wearable Computer
Configuration
1 Introduction
The engineering of mixed [1] and mobile [2] systems is not an easy activity, because it requires mastering the interaction devices and technologies of wearable computers in order to satisfy application requirements. Our objective was to elaborate a process organizing the study and selection of a wearable computer and its associated devices in adequacy with the tasks allocated to the actor.
The diversity of the interaction devices used with a PDA or a Tablet PC, such as HMDs (head-mounted displays) for augmented reality, datagloves, RFID readers and so on, is considerable. These devices are more or less specialized and adapted, or adaptable, to the tasks to be carried out. Their great number and their specialization make this choice difficult. An unsuited choice can compromise effective and ergonomically valid task achievement. Moreover, the independence of tasks from contexts and devices is known to be a very important constraint for user interface plasticity [3]. How to determine and logically compose the interaction devices best adapted to the needs expressed by application tasks, according to working contexts, while at the same time maintaining this independence as long as possible, is the question we try to answer. Our study aims to propose a process to determine in a constructive way the devices best adapted to the application tasks, in adequacy with different contexts of use, while minimizing their number.
Our process is organized in two major stages. In the first stage we identify and model the application tasks that the actor (user) will have to carry out, then decompose these tasks to discover the interaction tasks that put the actor and the augmented-reality-based information processing system in interaction, and finally identify the interaction atoms concerned. The second stage of our process provides the designer with a reference model of interaction devices, organized to facilitate the choice following a Design Rationale approach [4]. The choice of devices is based on logical and rational reasoning (QOC) taking into account various criteria [5]. The process is iterative: the tasks to be carried out are analyzed progressively, satisfying the main criterion of interaction continuity in and between the tasks, so as to reduce the number of devices with respect to the working contexts.
[Fig. 1 not reproduced: an application task is decomposed into interaction tasks (the device-independent part); interaction tasks use interaction atoms, which are in turn realized by interaction devices (the device-dependent part).]
competence failure, would collaborate remotely with an expert. The latter is able to guide the technician by graphic, oral or textual indications. The context of intervention is the following: the technician is working in mobility in a noisy environment and would collaborate remotely without the use of his two hands, which are occupied fixing the machine. The diagram of Figure 1 summarizes our approach. In the tree (on the right-hand part), the various layers of Figure 1a clearly appear, together with the context and the device-independent and device-dependent parts. The connection between these two parts takes place through the interaction atoms "Navigation" and "Identification" and the interaction techniques "Voice text input" and "Gestural interaction", the latter integrating the gestural modality. The input interaction devices we propose are appropriate in the context of the application task, i.e., gestural sensors (based on dataglove technology) distributed on the body, in order to compensate for the actor's inability to use his two hands, and a microphone allowing vocal conversation.
[Figure not reproduced: candidate interaction devices, including a microphone, a dataglove and an eye-tracker.]
The assignment of scores to each criterion for each device must be done with care, as objectively as possible, and can be carried out only by field experts with appropriate knowledge of mobile and AR systems and of usability appreciation. Once all elements are collected and modelled, the selection process can start. It is based on an in-depth inspection of the application tasks, to which interaction tasks and then devices are assigned according to a compromise tending to maximize performance, in the sense of the device's values on each criterion. It is also necessary to take into account all the interaction techniques used by the task in order to make a good choice of device in relation to the devices already selected to carry out other tasks and subtasks, so as to minimize the number of devices, their cost and their weight, and to maximize the continuity of the interaction and any other criterion relating to the overall appreciation of the result. A tool supporting the process manipulates the different criteria and their values, calculating the score progressively in relation to the options chosen.
In Figure 3 we present different device/criteria matrices comparing the use of two devices (mobile eye-tracker and mobile microphone) for three interaction techniques (scroll, choose an item and confirm). We can observe that the wearable microphone appears to be the most appropriate device in relation to our needs.
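The underlying computation is a weighted sum over the criteria values. The following Python sketch shows the idea; the criteria, weights and scores are invented for illustration and are not the values of Figure 3:

    # Weighted device/criteria comparison in the spirit of Figure 3.
    scores = {
        "mobile eye-tracker":  {"hands-free": 5, "continuity": 3,
                                "ease of use": 2, "weight": 2, "cost": 1},
        "wearable microphone": {"hands-free": 5, "continuity": 4,
                                "ease of use": 5, "weight": 5, "cost": 4},
    }
    weights = {"hands-free": 3, "continuity": 2, "ease of use": 2,
               "weight": 1, "cost": 1}

    def total(device):
        """Weighted sum of the device's values over all criteria."""
        return sum(weights[c] * v for c, v in scores[device].items())

    best = max(scores, key=total)  # "wearable microphone" with these numbers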
4 Conclusion
In this paper, we described a process organizing the choice of devices for a wearable computer in the context of mobility and augmented reality. We described the various elements it requires and gave an outline of its effectiveness and its reproducibility on a concrete example. This process is intended to be generic and applicable to a large set of existing interaction devices, as well as to devices newly introduced or created specifically, mainly in the context of augmented reality and mobility. The transformational process presented can be improved by at least two levels of patterns: firstly between interaction tasks and interaction atoms, and secondly relating interaction techniques to interaction devices. Both are important because, for UI plasticity, it is required to remain independent of the interaction devices as long as possible. For augmented reality, the association with existing devices or the design of new augmented real objects is another important challenge, which was not completely tackled in this paper.
References
1. Wellner, P., Mackay, W., Gold, R.: Computer Augmented Environments: Back to the Real World. Communications of the ACM 36(7), 24-27 (1993)
2. Plouznikoff, N., Robert, J.-M.: Caractéristiques, enjeux et défis de l'informatique portée. In: Actes du congrès IHM 2004, pp. 125-132 (2004)
3. Thevenin, D.: Adaptation en Interaction Homme-Machine: Le cas de la plasticité. Thèse en informatique, Université Joseph Fourier, Grenoble, p. 212 (2001)
4. Moran, T.P., Carroll, J.M.: Design Rationale: Concepts, Techniques, and Use. Lawrence Erlbaum Associates Publishers, Mahwah (1996)
5. Lingrand, D., de Morais, W.O., Tigli, J.Y.: Ordinateur porté: dispositifs d'entrée-sortie. In: Actes du congrès IHM 2005, pp. 219-222 (2005)
6. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Applied Computing Series. Springer, Heidelberg (2000)
7. Foley, J.D., Wallace, V.L., Chan, P.: The Human Factors of Computer Graphics Interaction Techniques. IEEE Computer Graphics and Applications 4(11), 13-48 (1984)
8. Yeh, R.B., Brant, J., Boli, J., Klemmer, S.R.: Large, Paper-Based Interfaces for Visual Context and Collaboration. In: Dourish, P., Friday, A. (eds.) UbiComp 2006. LNCS, vol. 4206. Springer, Heidelberg (2006)
9. Jacob, R.J.K., Leggett, J.J., Myers, B.A., Pausch, R.: Interaction Styles and Input/Output Devices. Behaviour and Information Technology 12(2), 69-79 (1993)
10. Champalle, O., David, B., Chalon, R., Masserey, G.: Ordinateur porté support de réalité augmentée pour des activités de maintenance et de dépannage. In: UbiMob (2006)
Generating Interactive Applications from Task Models:
A Hard Challenge
Abstract. Since early ergonomics, notations have been created focusing on activities, jobs and task descriptions. However, the development of a wide variety of devices has led to the generation of different interfaces from the same description of the tasks. The generation of complete current interfaces needs different types of information, some of which are not represented in usual task models. The goal of this paper is to present information that appears to be lacking in task models.
1 Introduction
Since early ergonomics, activities, jobs and task descriptions have been at the center of any ergonomic diagnosis for assessment, evaluation and eventually for design and redesign. Much effort was dedicated to data gathering, such as interviewing methods, and to identifying issues with cognitive tasks (e.g., in air traffic control, nuclear power plants, etc.). Models for description, namely task models, were then published and used [1, 2]. Tools supporting these models were developed later on, often not formal enough to allow full simulation and reuse of data.
The benefits of task-based modeling are nowadays largely reported in research [3]. Validation appears to be the first aim of task-based approaches. In order to facilitate validation, some approaches, the model-based systems (MBS) [4], were developed to produce user interfaces (UIs) from models (one of which is the task model). Moreover, the development of numerous devices and platforms requires producing UIs capable of adapting to the context of use [5]. In order to design these kinds of UIs, one strategy is to derive different UIs for several platforms from the same task model containing the common information. This approach has been followed by ARTStudio [6], TERESA [7] and the Dygimes framework [8]. After several steps, they produce final UIs adapted to a particular platform.
All these approaches are based on the generation of (all or part of) UIs. However, during the different generation steps, some information is added (by users or tools). Thus, the initial task model is modified, and validation against the initial task model becomes complicated.
This paper is based on both a literature survey and case studies (on an email system and on a medical system). Exploring these case studies from the task model data highlights four challenges that require completing the task models: interface presentation, definition of task-to-dialogue-model transformations, connecting tasks with errors and undo patterns, and finally supporting rich forms of interaction (post-WIMP). We expose these challenges in Sections 2, 3, 4 and 5, respectively.
the user and the application. That last point concerns the ordering of the possible user actions depending on functional core and interface states, which is closely related to temporal control.
Temporal Control. The dialogue and task models seem to be very close, as shown by the simulation tools of some task model editors (such as the ConcurTaskTrees Environment [3] (CTTE¹) or K-MADe² [10]). These task model editors allow designers to select, from a set of enabled tasks, those they want to evaluate (to create scenarios). Moreover, an evaluation of the sets of feasible actions has to be performed on the dialogue control of interactive applications. The evaluation of enabled task sets [3] and of feasible actions intuitively appears to share close similarities, and the two are even sometimes considered identical [8, 7].
Nevertheless, is it true that these two sets can be considered identical? Due to the difference between the points of view on the application that these two sets represent, some differences between the feasible actions and the enabled task sets exist. For example, task models include tasks entirely performed by the user in the set of feasible tasks, whereas these may not correspond to actions. Thus, whilst a task model can express that a task can be performed only after a user task has been carried out (using the enabling operator in CTT or the sequential operator in K-MAD), it is impossible to translate this relation into actions.
Furthermore, passing from one set of enabled tasks to another is performed through the execution of specific tasks (from the set of enabled tasks), such as when a user has to enter a text before performing another task. In that example, only the user knows when the text is completed; the interface learns of the end of the execution of the task only when the user executes the following task. Thus, the second task can be performed as soon as, and only when, the first task begins. In order to generate an interface from task models, it is necessary to detect when the execution of such specific tasks ends. How can this be done automatically?
Link between Tasks and Functional Core. Interfaces need to be linked with the functional core in order to enable functions and procedures to be performed. Through this link, the various available actions are translated according to previously executed actions. Some information concerning the linked tasks is needed for the translation process, and several variables are manipulated.
Task model links are used to represent task decompositions as well as the temporal organization between tasks. In task models, links can be expressed between sister tasks or between a mother task and its daughters. However, task executions may be linked through other relationships. For example, when the execution of a task can interrupt a set of others, they are linked together even if they are neither sisters nor mother and daughter. Representing the relations between tasks in a task model is sometimes challenging, whereas it is necessary to identify and exploit all relations to design the dialogue. A first approach lies in the use of the deactivation operator. However, although the use of this operator allows some conditions concerning the execution of the tasks to be represented, it does not answer the deletion issue. The temporal operators are not satisfactory for precise control, but K-MAD [11] proposes the use of objects and conditions
to improve the description. The semantics of these objects and conditions allows some relations between tasks to be represented (the use of the same objects), but not all (deletion). Moreover, task models present the user's viewpoint; thus the defined objects are the ones manipulated by the user (corresponding to state-of-the-world objects) and not those required only by the system (e.g., booleans). How can these non-real-world objects be deduced from task models when they are not manipulated by the user?
¹ http://giove.cnuce.cnr.it/ctte.html
² http://www-rocq.inria.fr/merlin/kmade/ http://kmade.sourceforge.net/download.php
message field) may be important. Adding this information to the task models means inserting data that do not belong to the abstraction level of task models.
Today, more and more applications use new interaction techniques, generally grouped under the name post-WIMP, in order to enhance direct manipulation principles [12]. Following these principles means that user actions and system responses are very close, and leads to the deletion of intermediaries such as dialogue boxes [13]. The order of task execution may be modified according to the chosen interaction or instruments. Thus, some tasks may be deleted or added.
Even if all interactions for a given task could be represented in task models, how can we indicate that a task may be performed by two different interaction techniques? In SUIDT [14], a concrete task is created for each interaction. For example, if a task can be performed either by clicking a button or by a shortcut (Ctrl+S), then the task is refined into two concrete tasks linked by the alternative operator. This design increases the number of concrete tasks. Furthermore, as previously stated, the chosen interaction may modify or completely delete a task. Thus, modifications of the task model may have to be made at a higher level and are not limited to adding concrete tasks.
Moreover, the scheduling of tasks is very close to the dialogue of the application. However, adding the interaction may modify the dialogue itself. For example, moving or deleting a file are completely different tasks from the point of view of the task model. Nevertheless, with drag and drop, their beginning phases are merged. The user starts by clicking on the file and drags it. At this stage, it is not possible to know what his/her goal is, moving or deleting. The goal appears when the document is dropped: at another place in the document for moving, out of the window for deleting. The equivalence between the task model and the dialogue model is broken.
communication between these two models using, for example, MDE (Model-Driven Engineering) approaches (see http://planetmde.org) such as metamodels.
Finally, we conclude with the study of rich models of interaction that evolve into post-WIMP interfaces, opening new ways for transformations between models.
References
1. Balbo, S., Ozkan, N., Paris, C.: Choosing the right task-modeling notation: A taxonomy. In: Diaper, D., Stanton, N.A. (eds.) The Handbook of Task Analysis for Human-Computer Interaction, pp. 445-466. Lawrence Erlbaum Associates, Mahwah (2004)
2. Limbourg, Q., Vanderdonckt, J.: Comparing task models for user interface design. In: Diaper, D., Stanton, N.A. (eds.) The Handbook of Task Analysis for Human-Computer Interaction, pp. 135-154. Lawrence Erlbaum Associates, Mahwah (2004)
3. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Heidelberg (2001)
4. da Silva, P.P.: User interface declarative models and development environments: A survey. In: 7th Eurographics Workshop on Design, Specification and Verification of Interactive Systems - DSV-IS 2000, pp. 207-226. Springer, Heidelberg (2000)
5. Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Souchon, N., Bouillon, L., Florins, M., Vanderdonckt, J.: Plasticity of user interfaces: A revised reference framework. In: TAMODIA, Bucharest, pp. 127-134 (2002)
6. Thévenin, D.: Adaptation en interaction homme-machine: Le cas de la plasticité. Thesis, Université Joseph Fourier, p. 213 (2001)
7. Mori, G., Paternò, F., Santoro, C.: Tool support for designing nomadic applications. In: Intelligent User Interfaces (IUI 2003), pp. 141-148 (2003)
8. Luyten, K.: Dynamic user interface generation for mobile and embedded systems with model-based user interface development. Thesis, School of Information Technology, University Limburg, Diepenbeek, Belgium, p. 194 (2004)
9. Bastien, C., Scapin, D.: Ergonomic criteria for the evaluation of human-computer interfaces. Technical report, INRIA (1993)
10. Baron, M., Lucquiaud, V., Autard, D., Scapin, D.: K-MADe: Un environnement pour le noyau du modèle de description de l'activité. In: IHM 2006, pp. 287-288. ACM Publishers, New York (2006)
11. Lucquiaud, V.: Proposition d'un noyau et d'une structure pour les modèles de tâches orientés utilisateurs. In: 17th French-Speaking Conference on Human-Computer Interaction, pp. 83-90 (2005)
12. Shneiderman, B.: Direct manipulation: A step beyond programming languages. IEEE Computer 16(8), 57-69 (1983)
13. Beaudouin-Lafon, M.: Instrumental interaction: An interaction model for designing post-WIMP user interfaces. In: CHI 2000, The Hague, Netherlands, pp. 446-453 (2000)
14. Baron, M., Girard, P.: SUIDT: A task model based GUI-builder. In: TAMODIA: Task MOdels and DIAgrams for User Interface Design, pp. 64-71. Inforec Printing House (2002)
15. Navarre, D., Palanque, P., Paternò, F., Santoro, C., Bastide, R.: A tool suite for integrating task and system models through scenarios. In: Johnson, C. (ed.) Interactive Systems: Design, Specification, and Verification (DSV-IS 2001), pp. 88-113. Springer, Heidelberg (2001)
Investigating the Role of a Model-Based Boundary
Object in Facilitating the Communication Between
Interaction Designers and Software Engineers
Maíra Greco de Paula and Simone Diniz Junqueira Barbosa
1 Introduction
Interactive systems development processes involve professionals from various disciplinary backgrounds, each with a different focus and purpose. Among these disciplines, we may cite human-computer interaction (HCI) and software engineering in general. HCI focuses, generally, on understanding the characteristics, needs, wants, and values of the system's users, their usage context, and the specific goals and tasks the users need or want to achieve with the system, why and how, in order to design the user-system interaction and prototype the system's user interface, constantly evaluating the produced artifacts with users (Preece et al. 1994). Software engineering, in turn, has as its main goal the specification, implementation, and testing of the interactive system's architecture and functionalities (Pressman 2005).
The work of each professional influences and constrains that of the others, and all share a common goal: in the end, an interactive system must be built that addresses the needs of the application's users and stakeholders. To achieve this goal, it is paramount that these professionals communicate with each other to create a shared understanding of, and consensus about, the problems to be addressed and what must ultimately be built, avoiding each professional carrying on with his or her work based on different hypotheses and, moreover, avoiding duplicate work. This paper explores the role of an interaction model, together with some detailed information about it, in mediating the communication between HCI professionals and software engineers. It assumes that HCI design precedes the (functional) software design, a decision which also needed to be evaluated among software engineers (in particular, software designers).
This paper is organized as follows: Section 2 briefly presents the semiotic engineering theory of HCI; Section 3 presents the model-based communicative tool proposed to enhance the communication between HCI professionals and software engineers; Section 4 presents a case study; and Section 5 concludes the paper.
¹ A sign is anything that stands for something for someone (Peirce, 1931-1958).
domain, its users, the tasks they need to achieve, and the context of use (part 1); a language to design the user-system interaction (part 2); elements to support the explanation of the design solution (part 3); and correspondences with software design representations (part 4). Part 1 is a subset of the artifacts and knowledge produced by the requirements elicitation activity and, due to space constraints, will not be described here. Likewise, this paper will also omit the description of part 4, because it consists only of UML skeletons that aim to save software designers some time in their initial representations.
[Fig. 1 not reproduced: the communicative tool. The results of the domain, task, and functional analyses feed knowledge about the domain, users, tasks and context of use (part 1), which feeds the interaction design (part 2), the communication about the interaction design (part 3), and the correspondences with software design representations in UML (part 4), shared between requirements engineers, HCI professionals and software engineers.]
talk about the domain concepts and other application signs. The designer should clearly convey to users when they can talk about what, and what kinds of response to expect from the designer's deputy. Fig. 2 presents a diagrammatic representation of a partial interaction model for a "search documents" goal in an intranet. Scenes represent a moment in the interaction where the user may take his or her turn to participate in the conversation, whereas system processes represent the designer's deputy's turn. The arrows represent the transition utterances where either the user (u:) or the designer's deputy (d:) gives the turn to the other interlocutor to proceed with the conversation.
Fig. 2. Partial MoLIC interaction model for the "search documents" goal. Scenes (Login, Search documents, Examine search results, View document) and system processes are connected by transition utterances from the user (u:) or the designer's deputy (d:), annotated with pre- and post-conditions on the variables _login and _search; error-handling transitions (EH) cover cases such as an invalid login or password, missing or invalid search criteria, and no document found
Since the goals of this work are the communication and negotiation of HCI design decisions, it is necessary to facilitate the software engineers' understanding of the HCI design. To do this, it is essential to make explicit the HCI design logic that underlies the solution represented in the interaction model. Therefore, we have decided to create a component that acts like a communication layer on top of this model. This layer makes use of questions that either the interaction designer or the software engineer may pose about MoLIC elements, in order to make explicit the HCI designer-to-user communication represented in MoLIC. The questions are the following2: What's this? What's this for? Why must/can this be done? How can/must the user do this? Is it possible to undo this? How? Who can do this? On what/whom does this depend? Is there another way of doing this? Who/what is affected by this? The answers to these questions should be elaborated taking into account what software engineers will need to know about the interaction and usability goals to make appropriate software design decisions or to negotiate with HCI designers about the interaction design solution and the constraints it has imposed on their work.
2 These questions were inspired by the work on communicability evaluation (Prates et al. 2000) and help systems design (Silveira 2002).
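To make this communication layer concrete, the following sketch (our own, hypothetical encoding; neither the element names nor the API are part of MoLIC) shows one way of attaching such question-and-answer annotations to interaction model elements:

from dataclasses import dataclass, field

# The questions of the communication layer, as listed in the text above.
QUESTIONS = [
    "What's this?", "What's this for?", "Why must/can this be done?",
    "How can/must the user do this?", "Is it possible to undo this? How?",
    "Who can do this?", "On what/whom does this depend?",
    "Is there another way of doing this?", "Who/what is affected by this?",
]

@dataclass
class ModelElement:
    """A MoLIC element (scene, system process, or transition) together with
    the designer's answers that make the underlying design logic explicit."""
    name: str
    kind: str  # e.g., "scene" or "system process"
    answers: dict = field(default_factory=dict)

    def answer(self, question: str, text: str) -> None:
        if question not in QUESTIONS:
            raise ValueError(f"unknown question: {question}")
        self.answers[question] = text

    def open_questions(self) -> list:
        """Questions a software engineer could still pose to the designer."""
        return [q for q in QUESTIONS if q not in self.answers]

# Example: annotating the "Search documents" scene of Fig. 2.
scene = ModelElement("Search documents", "scene")
scene.answer("What's this for?",
             "Lets the user state the criteria for searching intranet documents.")
print(scene.open_questions())

Such a structure would let either party see at a glance which design decisions still lack an explicit rationale.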
4 Case Study
To evaluate this work, a qualitative case study was planned, conducted, and analyzed (Yin, 2003, p. 15), with the overall goal of obtaining evidence about the usefulness and ease of use of the communicative tool in supporting the communication of the HCI design solution between HCI designers and software engineers. The case study involved the participation of three software engineers with practical experience in software development using UML and comprised the following steps: [1] responding to a survey about the participants' knowledge of HCI; [2] attending a seminar about the communicative tool; [3] a hands-on session in which the software engineer should make use of the communicative tool to specify the software functionalities in UML; and [4] an interview to capture data about the participants' understanding of the produced documentation and its perceived usefulness. The results are summarized below according to the categories of analysis.
The role of HCI professionals in the software development process: All participants recognized the importance of having a professional in the software development team whose responsibility is to think about the user–system interaction and the system's usability. However, participant 2 believes that the HCI professional must define only the presentation of the user interface, to facilitate the user–system interaction, and not the interaction semantics.
The understanding of the HCI design solution represented in the tool: All three participants understood the HCI design.
Usefulness of each of the tool's components: All three participants understood the purposes of each tool component, as well as MoLIC's semantics and notation. However, there were divergences of opinion regarding the usefulness of each component. For participant 1, the part about the knowledge of the domain, users, tasks, etc. and the communication about the HCI design (parts 1 and 3 of the tool) will only be useful when the application domain is more complex. For participant 2, MoLIC's goals diagram is unnecessary, and the description of the domain concepts (part 1 of the tool) and the description of MoLIC signs are redundant. All three participants agreed that the communicative tool as a whole is useful for their work as software engineers.
Information, knowledge, or decisions that were necessary but weren't represented in the tool: Participant 2 stressed that, together with the tool, he needs the requirements specification document. Participant 3 said that he needed information regarding project management.
Usefulness and adequacy of the correspondences with UML (part 4): All participants found the definition of the correspondences with UML useful, although they didn't use all of them.
Comparison with different artifacts used to represent HCI concerns: Participant 1 said he prefers to work with the communicative tool rather than with a list of requirements and UML diagrams. Participant 2 said he prefers to work directly with UML, instead of with the tool. Participant 3, in his turn, said he would need to develop a pilot project with the tool in order to decide whether to adopt it. All three participants agreed that screenshots or user interface sketches do not substitute for the communicative tool in the development process.
The order of the activities (first the HCI design solution modeling and only later the software specification): All three participants agreed with the order of modeling used in the case study.
Adoption of the communicative tool in practice: Participant 1 agrees with the adoption of the tool. Participants 2 and 3 agreed that a pilot project must be conducted to measure the cost/benefit ratio of such an adoption.
5 Concluding Remarks
As seen in the case study results, the tool was overall well accepted by the software engineers. All three participants agreed that it is useful and that it facilitates the work of specifying the internal software functionalities. Moreover, the tool's components were understood and used easily and quickly. From this small case study, we may state that the communicative tool has served as a boundary object between the areas of HCI and software engineering. Boundary objects are objects that support the intersection between different social worlds and provide information for each world (Star and Griesemer 1989). The MoLIC language and the communication about it, in the context of this work, have the goal of representing all the information about the HCI design solution that both HCI professionals and software engineers need to carry on with their work. As for future work, we need to conduct more specific case studies to explore in depth some of the considerations made by the participants, and to evaluate whether and how the tool should be revised.
Acknowledgements. The authors would like to thank CNPq for the financial support of this work.
References
1. Barbosa, S.D.J., Paula, M.G.: Designing and Evaluating Interaction as Conversation: a Modeling Language based on Semiotic Engineering. In: Proceedings of the 10th International Workshop on Design, Specification, and Verification of Interactive Systems, Portugal (2003)
2. de Souza, C.S.: The Semiotic Engineering of Human-Computer Interaction. The MIT Press, Cambridge, MA (2005)
3. Peirce, C.S.: Collected Papers (1931–1958). Harvard University Press, Cambridge, MA
4. Prates, R.O., de Souza, C.S., Barbosa, S.D.J.: A Method for Evaluating the Communicability of User Interfaces. ACM Interactions, 31–38 (2000)
5. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., Carey, T.: Human-Computer Interaction. Addison-Wesley, Reading, MA (1994)
6. Pressman, R.S.: Software Engineering: A Practitioner's Approach. McGraw-Hill Professional, New York (2005)
7. Silva, B.S.: MoLIC Segunda Edição: Revisão de uma linguagem para modelagem da interação humano-computador. Dissertação de Mestrado, PUC-Rio, Brasil (2005)
8. Silveira, M.S.: Metacomunicação Designer-Usuário na Interação Humano-Computador. Tese de Doutorado, PUC-Rio, Brasil (2002)
9. Star, S.L., Griesemer, J.R.: Institutional Ecology, 'Translations' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology. Social Studies of Science 19(3), 387–420 (1989)
10. Yin, R.K.: Case Study Research: Design and Methods. SAGE Publications, Thousand Oaks (2003)
Looking for Unexpected Consequences of Interface Design Decisions: The MeMo Workbench
A. Jameson et al.
Fig. 1. Screen shots illustrating the possible system states that can be reached in the simple example used in this paper, as well as the possible paths through these states during the performance of the task of finding the rate for phone calls to the United States of America
Fig. 2. Screen shot of the MeMo workbench's interface for specifying a system variant along with ideal methods for the performance of tasks with that system variant
one of the hundreds of links found there (a method unlikely to be applied by most users). We therefore expect that the automatic generation of methods will have to be subjected to some constraints, both general ones and constraints for users with particular attributes (e.g., the constraint that keyboard shortcuts are not employed by users who lack previous familiarity with the system in question).
In this way, the generation of an appropriate method for a given task is analogous to the problem of finding a route with a navigation system from a starting point to a destination; and the imposition of constraints on the nature of the methods is analogous to the use of constraints such as "no highways".
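Under this analogy, deriving an ideal method amounts to a constrained search over the system's state diagram. The following is a minimal sketch of such a search (our own formulation, not the MeMo workbench's API; states are assumed to be hashable):

from collections import deque

def derive_ideal_method(start, is_goal, actions, allowed):
    """Breadth-first search for the shortest sequence of allowed actions
    leading from `start` to a goal state. `actions(state)` yields
    (action, next_state) pairs; `allowed(action)` encodes constraints
    such as "no keyboard shortcuts for users unfamiliar with the system"."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if is_goal(state):
            return path  # the derived ideal method
        for action, next_state in actions(state):
            if allowed(action) and next_state not in seen:
                seen.add(next_state)
                queue.append((next_state, path + [action]))
    return None  # no method satisfies the constraints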
methods apply to an even greater extent here: once errors are considered, the number of possible methods for performing a task becomes very large, especially since errors can occur in combination.
The approach currently being explored in MeMo is to use a set of general error generation rules to produce incorrect behavior at various points during a simulation. The general procedure for simulating the performance of a given task is to assume that the user will perform the correct next step unless an error generation rule applies to the situation, in which case an error is generated with a probability specified by the rule. In our introductory example, the following rule will generate a description error in some of the simulation runs:
If the correct action is to select the item I with the label L, and there is another item I′ whose label begins with the same word as L, then the user will select I′ with a probability of p1 if the user's attention to the task is low and p2 if it is high.
Even this highly simplified rule captures the important fact that this error can occur and that it is more likely under certain conditions than under others. The introduction of error generation rules affects the generation of simulation runs as follows: whenever the simulated user enters a given state, the workbench checks whether there is an error generation rule that applies in that state (taking into account the next action specified by the ideal method currently being applied by the simulated user). If so, with a probability specified by that rule, the incorrect action prescribed by the rule is simulated, and the system enters a state that is not on the ideal path for the task in question.
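As a rough illustration of this procedure (not the MeMo implementation; all names and probability values here are our own), one simulation step could be sketched as follows:

import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A general error generation rule: when it applies to the current
    situation, it may replace the planned action with an erroneous one,
    with a probability that depends on user attributes."""
    applies: Callable      # (state, planned_action, user) -> bool
    erroneous: Callable    # (state, planned_action) -> erroneous action
    probability: Callable  # (user) -> float

def simulate_step(state, ideal_method, user, rules):
    """One step of a run: the correct next action, unless a rule fires."""
    planned = ideal_method.next_action(state)  # hypothetical accessor
    for rule in rules:
        if (rule.applies(state, planned, user)
                and random.random() < rule.probability(user)):
            return rule.erroneous(state, planned)  # leaves the ideal path
    return planned

def confusable(state, planned):
    """Items whose label begins with the same word as the planned item's
    (here an action is identified with the item it selects)."""
    first = planned.label.split()[0]
    return [i for i in state.items
            if i is not planned and i.label.split()[0] == first]

# The description error rule from the text; p1 and p2 are illustrative.
description_error = Rule(
    applies=lambda s, a, u: bool(confusable(s, a)),
    erroneous=lambda s, a: random.choice(confusable(s, a)),
    probability=lambda u: 0.3 if u.attention == "low" else 0.1,
)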
We still need to deal with the question of the extent to which errors are detected and recovered from, and of the consequences that they have.
predictable errors can be straightforwardly recovered from as long as they are detected immediately; or, at the other extreme, it might reveal cases in which no recovery at all was possible. Still, it should also be possible to simulate cases in which the user does not detect an error.
Some of the characteristic features of the MeMo approach appear to work quite naturally for some types of system, task, and error, and less well for others: the representation of a system with a state diagram; the automatic derivation of ideal methods for performing tasks; the rule-based prediction of errors and error detection; and the dependence of predicted behavior on user attributes. We have argued that, where applicable, these features of MeMo make possible some useful types of simulation and analysis that go beyond what is possible with user testing, inspection-based evaluation, and other types of model-based evaluation. The special promise of these features lies in the ability of the MeMo workbench to search systematically through a large space of possibilities defined by different system variants, different tasks, different user attributes, and the nondeterministic occurrence of errors. The simulations generated in this way can hardly be as accurate as those yielded by more focused, hand-crafted simulation models, but they may have a greater ability to uncover potential problems that arise only in certain specific situations.
References
1. Möller, S., Englert, R., Engelbrecht, K., Hafner, V., Jameson, A., Oulasvirta, A., Raake, A., Reithinger, N.: MeMo: Towards automatic usability evaluation of spoken dialogue services by user error simulations. In: Proceedings of INTERSPEECH 2006, the Ninth International Conference on Spoken Language Processing, Pittsburgh, PA (2006)
2. Norman, D.A.: Design rules based on analyses of human error. Communications of the ACM 26, 254–258 (1983)
3. John, B.E., Salvucci, D.: Multipurpose prototypes for assessing user interfaces in pervasive computing systems. IEEE Pervasive Computing 4(4), 27–34 (2005)
4. Reason, J.: Human Error. Cambridge University Press, Cambridge, New York (1990)
5. Wood, S.D., Kieras, D.E.: Modeling human error for experimentation, training, and error-tolerant design. In: Proceedings of the Interservice/Industry Training, Simulation and Education Conference, Orlando, FL (2002)
6. Paternò, F., Santoro, C.: Preventing user errors by systematic analysis of deviations from the system task model. International Journal of Human-Computer Studies 56(2), 225–245 (2002)
7. Baber, C., Stanton, N.A.: Task analysis for error identification. In: Diaper, D., Stanton, N. (eds.) The Handbook of Task Analysis for Human-Computer Interaction, pp. 367–379. Erlbaum, Mahwah, NJ (2004)
8. Bastide, R., Basnyat, S.: Error patterns: Systematic investigation of deviations in task models. In: Coninx, K., Luyten, K., Schneider, K.A. (eds.) Task Models and Diagrams for User Interface Design, pp. 109–121. Springer, Berlin (2006)
Task Modelling for Collaborative Systems
V.M.R. Penichet et al.
1 Introduction
Web development has experienced spectacular change in recent years, motivated mainly by improvements in technology and infrastructure and by ways of developing software applications that are more focused on users' needs.
The user interacts with the system by performing tasks. This is one of the typical topics of concern in HCI. However, the user also interacts with other users through the system, performing cooperative tasks, and that is the typical topic of concern in CSCW (Computer-Supported Cooperative Work).
In this paper we propose a conceptual model to describe the tasks that should be performed to achieve the application goals. To this end, we have built on the concepts proposed by relevant authors in the areas of task analysis and collaborative environments, in such a way that every task, identified and described by means of a task analysis process, takes into account the traditional and most relevant task characterizations.
A good characterization of the tasks makes possible the development of the application with a high level of quality.
The rest of the paper is organized as follows: in the next section, some related works are analyzed. Section 3 introduces the conceptual model we propose and describes the way in which group tasks are characterized. Section 4 describes an example of application of the proposed concepts. Finally, section 5 contains some conclusions and final remarks concerning this work.
2 Related Work
As mentioned above, current software combines collaboration among users to perform tasks with the use of the Web as an infrastructure. Therefore, the specification of such systems has to take into account some of their special characteristics.
Some mechanisms, such as those described in [6] [7] [10], provide a way to represent the organization of the tasks performed in a system, in order to provide the designer with clearer information about who does what, in what way something has to be carried out, and so on. Some other works characterize groupware systems and the way users interact with each other [1] [2] [3] [4] [5]. The conceptual model we propose in this paper describes the tasks that need to be performed in the system in order to inform the designers about the nature of such tasks.
In [1], a conceptual model is proposed to characterize groupware systems. This model describes objects and the operations on such objects, dynamic aspects, and the interface between the system and the users and amongst users. This characterization describes a groupware system from its users' point of view.
Our approach is centred on the description of the tasks that take place in a groupware system, also taking into account the task features found in the aforementioned task analysis mechanisms [6] [7] [10].
Other approaches, such as the ones used to classify CSCW tools, could be used to characterize the tasks the users must perform as a group to achieve a common objective. Typical CSCW features, as described in [8], or time-space features, as proposed in [5], are not enough to describe group tasks. However, the combined use of all these features together with features from task analysis [6] provides a rich way to describe the tasks in a system. Such information helps designers to achieve better quality systems.
This paper presents a conceptual model to describe tasks in general and group tasks in particular, and an example where it is applied. It is based on all these features that have been considered fundamental throughout the years.
[Figure: excerpt of the proposed conceptual model, relating Task and its specialization GroupTask (with generalization constraints {incomplete, disjoint}, {complete, disjoint}, and {incomplete, overlapping}) to CscwFeatures, such as Cooperation and Collaboration, and to JohansenFeatures]
As has traditionally been considered, we also take into account that some tasks (composite tasks) are decomposed into other, more specific tasks, and these in turn into others, and so forth. Finally, there are atomic tasks, which cannot be divided into other tasks. Atomic tasks are the smallest granularity level.
Paternò has established a classification of tasks based on the allocation of their performance, classifying them into application, user, interaction, or abstract tasks [6]. This classification is widely accepted, as it has been shown to be suitable for modelling tasks, and it provides a way of clearly identifying the different tasks to be performed in the system to reach an objective. Thus, it has been taken into account in the definition of the proposed conceptual model.
A special group of tasks has traditionally been called cooperative tasks. However, in this characterization we have preferred the term group tasks, inasmuch as it refers to CSCW tasks [3], whose bases are coordination, communication, and information sharing [8].
Typical concepts around CSCW provide a way to characterize tasks depending on their orientation towards coordination, communication [11], cooperation, or collaboration [4] [8] [9].
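To make these characterizations concrete, the following sketch (our own naming, not a normative metamodel from the paper) encodes the combined classification:

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Allocation(Enum):
    """Paternò's classification by allocation of performance [6]."""
    APPLICATION = "application"
    USER = "user"
    INTERACTION = "interaction"
    ABSTRACT = "abstract"

class CscwOrientation(Enum):
    """Orientation of a group task in CSCW terms [4] [8] [9]."""
    COORDINATION = "coordination"
    COMMUNICATION = "communication"
    COOPERATION = "cooperation"
    COLLABORATION = "collaboration"

@dataclass
class Task:
    name: str
    allocation: Allocation
    subtasks: List["Task"] = field(default_factory=list)

    @property
    def is_atomic(self) -> bool:
        """Atomic tasks are the smallest granularity level."""
        return not self.subtasks

@dataclass
class GroupTask(Task):
    """A task performed by a group of users, additionally characterized by
    its CSCW orientation and by Johansen's time-space features [5]."""
    orientation: Optional[CscwOrientation] = None
    same_time: bool = True    # synchronous vs. asynchronous
    same_place: bool = False  # co-located vs. distributed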
4 Example of Application
A simple example is presented in this section to show all these concepts in practice. The example is a typical shared whiteboard. Some sub-problems are deliberately left out in order to focus on the co-participation of the users within the application: how different users draw a design together.
The CTT graphical notation has been used in the example to describe the organization of the tasks, as can be seen in the following figures.
Fig. 2. a) Organization of the roles' tasks using CTT. b) Decomposition of the composite task "Painting". This task is characterized as a composite group task oriented to synchronous cooperation in different places.
Send_picture and Showing: these two tasks are performed depending on the tasks performed by a user playing the role painter.
The composite task called Painting is composed of two tasks which could be performed by different users playing the role painter. Actually, such tasks would be performed by the users' application, because they are application tasks. That is a normal situation when a common design is achieved by using a shared whiteboard.
Therefore, the Painting task is also a group task. This composite group task is performed in real time; that is, every user can immediately see on the screen whatever another user draws. The design is done by a group of users in a synchronous way, according to Johansen's features.
Commonly, the users of this shared whiteboard will achieve a common design regardless of the place where they are, making use of the Internet as a means of cooperation among them. According to Johansen's spatial features, it is a task oriented to different places.
Despite the fact that the users are somehow coordinated to generate a common design, it is not a coordination task. Something similar happens with communication. The Painting task is a cooperative task, because different users cooperate with each other to draw a common design; therefore, the Painting task is characterized as a composite group task oriented to synchronous cooperation in different places.
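Using the sketch from the previous section, this characterization of Painting could be written down directly (the subtask allocations follow the example; everything else is illustrative):

painting = GroupTask(
    name="Painting",
    allocation=Allocation.APPLICATION,
    subtasks=[
        Task("Send_picture", Allocation.APPLICATION),
        Task("Showing", Allocation.APPLICATION),
    ],
    orientation=CscwOrientation.COOPERATION,
    same_time=True,    # real time: users immediately see each other's strokes
    same_place=False,  # painters cooperate over the Internet
)
assert not painting.is_atomic  # Painting is a composite group task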
5 Conclusions
This work briefly presents a conceptual model to describe the tasks involved in collaborative systems. This conceptual model has been built taking into account some traditional and widely accepted concepts and classifications in the CSCW and task analysis fields.
The proposed conceptual model allows the complete specification of all kinds of tasks that might be involved in collaborative environments, in which the participation of several users making use of the network infrastructure is frequent. This proposal helps designers of collaborative systems by providing them with the most complete and structured information regarding tasks according to the traditional foundations. Moreover, this conceptual model is an open system that could allow new ways of characterizing tasks. It could also allow the introduction of new features identified for group tasks, to provide a more accurate definition.
The proposed characterization of tasks has been applied to a simple example of a shared whiteboard to show its applicability and how this conceptual model can help designers in the identification and characterization of the complex tasks usually involved in multi-user systems.
As a result, we can conclude that the correct and complete specification of all the tasks to be performed in collaborative systems provides designers with a very useful source of information, making possible the development of this kind of application with a high level of quality.
Acknowledgements
We would like to acknowledge the CICYT TIN2004-08000-C03-01 and the Junta de
Comunidades de Castilla-La Mancha PCC-05-005-1 and PAI06-0093-8836 projects
for funding this work.
References
1. Ellis, C., Wainer, J.: A Conceptual Model of Groupware. In: Proceedings of CSCW 1994, pp. 79–88. ACM Press, New York (1994)
2. Greenberg, S.: The 1988 conference on computer-supported cooperative work: Trip report. ACM SIGCHI Bulletin 21(1), 49–55 (1989)
3. Greif, I.: Computer-Supported Cooperative Work: A Book of Readings. Morgan Kaufmann, San Mateo, CA (1988)
4. Grudin, J.: Computer-Supported Cooperative Work: History and Focus. Computer 27(5), 19–26 (1994)
5. Johansen, R.: Groupware: Computer support for business teams. The Free Press, New York (1988)
6. Paternò, F.: Model-Based Design and Evaluation of Interactive Applications. Springer, Heidelberg (1999)
7. Pinelle, D., Gutwin, C., Greenberg, S.: Task analysis for groupware usability evaluation: Modeling shared-workspace tasks with the mechanics of collaboration. ACM Transactions on Computer-Human Interaction (TOCHI) 10(4), 281–311 (2003)
8. Poltrock, S., Grudin, J.: CSCW, groupware and workflow: experiences, state of art, and future trends. In: CHI 1999 Extended Abstracts on Human Factors in Computing Systems, pp. 120–121. ACM Press, New York (1999)
9. Poltrock, S., Grudin, J.: Computer Supported Cooperative Work and Groupware (CSCW). In: Costabile, M.F., Paternò, F. (eds.) INTERACT 2005. LNCS, vol. 3585. Springer, Heidelberg (2005)
10. Van der Veer, G.C., Van Welie, M.: Task based groupware design: Putting theory into practice. In: Proceedings of the 2000 Symposium on Designing Interactive Systems, pp. 326–337. ACM Press, New York (2000)
11. Wikimedia Foundation, Inc.: http://www.wikipedia.org
RenderXML: A Multi-platform Software Development Tool
F.M. Trindade and M.S. Pimenta
1 Introduction
An important requirement of computer software development nowadays is the possibility of execution on more than one platform, whether on desktop computers, handhelds, or mobile phones. To address this demand, ad-hoc software development is no longer acceptable in terms of the cost and time required for software construction and maintenance. Accordingly, many research projects are being developed in order to allow the creation of software applications that can be executed in multiple contexts of use, with minimal alteration of their code.
One of the proposed solutions is the development of user interfaces (UIs) with plasticity, capable of adapting themselves to different contexts of use. In order to obtain plasticity, high-level UI descriptions (HLUIDs) are commonly used, enabling the definition of UIs in a platform-independent form. Among the available HLUIDs, UsiXML [8] is based on the Cameleon reference framework [4], allowing the description of UIs for multiple contexts of use.
This paper presents RenderXML, a software tool developed to facilitate the creation of multi-platform applications. RenderXML acts as a renderer, mapping concrete UIs described in UsiXML to multiple platforms, and also as a connector, linking the rendered UI to application logic code possibly developed in multiple programming languages. Thus, RenderXML is intended to support not only the development of new (multi-platform) applications but also the migration of legacy applications to a multi-platform environment.
The main goal of this tool is to help the UI developer, acting in the UI engineering process. As explained later in this paper, the tool does not aim to help in the UI definition itself.
The paper is structured as follows: first, we describe some related work and the main concepts of RenderXML, discussing its features and benefits and how to use it. An actual multi-platform application example then illustrates the process of multi-platform UI rendering and multi-language application logic connection. Some concluding remarks and future work are presented in the final section.
2 Related Work
The accomplishment of multi-platform UIs is also the goal of some related works in the literature, which can be classified into two categories: a) tools working with UsiXML UI descriptions, and b) UI rendering tools, for UsiXML or other UI models.
Among the projects that use UsiXML, SketchiXML [5] can generate a UsiXML Concrete UI (CUI) and also a UIML UI specification, receiving hand-sketched UI descriptions as input; its main objective is the creation of evolutionary UI prototypes. Working with another kind of input, GrafiXML [7] is a visual designer which allows the creation of CUI specifications based on the visual positioning of UI components by the developer.
In the category of UI rendering tools, QTKiXML [6] can map UsiXML descriptions to the Tcl/Tk language. FlashiXML [3] can also map UsiXML descriptions, but to UIs described in vectorial mode, interpreted by Flash or SVG plug-ins. InterpiXML [11] performs the mapping of UsiXML CUI descriptions using Java Swing UI components.
Using other UI models, Uiml.NET [9] and TIDE [2] map UIs specified in UIML [1] to the .NET and Java platforms, respectively. TERESA (Transformation Environment for InteRactivE System representations) [10] uses the TERESAXML language to perform forward-engineering design and development of multi-platform applications.
3 RenderXML
In order to create multi-platform UIs based on the Cameleon reference framework, the lifecycle shown in Figure 1 must be followed. This lifecycle is based on a generic task model, which envisions all the tasks to be performed by the interactive system and is mapped to a final UI for a specific device through multiple reification steps.
In practice, to obtain a final UI following these mapping steps, a generic task model (Task Model) has to be created, which is specialized into a task model for a specific kind of device (Task Model Desktop). From the specific task model, the UI is further specified as an abstract UI (Abstract UI Desktop), which depends on the kind of interaction being used, and then as a concrete UI (Concrete UI Desktop), which depends on the target platform of the application. Finally, the concrete UI can be mapped to a final UI to be executed on a device (Desktop computer). All these steps can be supported by tools, which perform an automatic or semi-automatic mapping from one level to another.
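As a toy illustration of this last reification step (not RenderXML's actual code; the element vocabulary only loosely follows UsiXML's concrete UI), a renderer can walk the concrete UI tree and instantiate platform widgets:

import tkinter as tk
import xml.etree.ElementTree as ET

# A deliberately simplified, UsiXML-like concrete UI description.
CUI = """
<box orientation="vertical">
  <outputText defaultContent="Phone rates"/>
  <inputText id="country"/>
  <button defaultContent="Search"/>
</box>
"""

def render(node: ET.Element, parent) -> None:
    """Map each concrete UI element to a final UI (tkinter) widget."""
    if node.tag == "box":
        frame = tk.Frame(parent)
        frame.pack()
        for child in node:
            render(child, frame)
    elif node.tag == "outputText":
        tk.Label(parent, text=node.get("defaultContent", "")).pack()
    elif node.tag == "inputText":
        tk.Entry(parent).pack()
    elif node.tag == "button":
        tk.Button(parent, text=node.get("defaultContent", "")).pack()

root = tk.Tk()
render(ET.fromstring(CUI), root)
root.mainloop()

Targeting another platform would mean swapping the widget constructors while keeping the same concrete UI description.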
RenderXML is a rendering tool designed to work at the last level of this transformation process, mapping concrete UIs described in UsiXML to final UIs for a specific device. In addition, RenderXML offers the user another level of independence,
References
1. Abrams, M., Phanouriou, C., Batongbacal, A.L., Williams, S.M., Shuster, J.E.: UIML: An Appliance-Independent XML User Interface Language. In: Proceedings of the 8th International WWW Conference, Toronto, Canada, pp. 11–16. Elsevier Science Publishers, Amsterdam (1999)
2. Ali, M.F., Pérez-Quiñones, M.A., Abrams, M., Shell, E.: Building Multi-Platform User Interfaces with UIML. In: Proceedings of the 2002 International Workshop on Computer-Aided Design of User Interfaces: CADUI 2002, Valenciennes, France (2002)
3. Berghe, Y.: Étude et implémentation d'un générateur d'interfaces vectorielles à partir d'un langage de description d'interfaces utilisateur. M.Sc. thesis, Université catholique de Louvain, Louvain-la-Neuve, Belgium (September 2004)
4. Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L., Vanderdonckt, J.: A Unifying Reference Framework for Multi-Target User Interfaces. Interacting with Computers 15(3), 289–308 (2003)
5. Coyette, A., Faulkner, S., Kolp, M., Limbourg, Q.: SketchiXML: Towards a Multi-Agent Design Tool for Sketching User Interfaces Based on UsiXML. In: Proc. of TAMODIA 2004 (2004)
6. Denis, V.: Un pas vers le poste de travail unique: QTKiXML, un interpréteur d'interface utilisateur à partir de sa description. M.Sc. thesis, Université catholique de Louvain, Louvain-la-Neuve, Belgium (September 2005)
7. Lepreux, S., Vanderdonckt, J., Michotte, B.: Visual Design of User Interfaces by (De)composition. In: Doherty, G., Blandford, A. (eds.) DSVIS 2006. LNCS, vol. 4323, pp. 157–170. Springer, Heidelberg (2007)
8. Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L., Florins, M., Trevisan, D.: UsiXML: A User Interface Description Language for Context-Sensitive User Interfaces. In: Proc. of the AVI 2004 Workshop "Developing User Interfaces with XML: Advances on User Interface Description Languages" UIXML 2004. EDM-LUC, Gallipoli, pp. 55–62 (May 25, 2004)
9. Luyten, K., Thys, K., Vermeulen, J., Coninx, K.: A Generic Approach for Multi-Device User Interface Rendering with UIML. In: 6th International Conference on Computer-Aided Design of User Interfaces (CADUI 2006), Bucharest, Romania (2006)
10. Mori, G., Paternò, F., Santoro, C.: Tool Support for Designing Nomadic Applications. In: Proc. of the 7th ACM Int. Conf. on Intelligent User Interfaces, pp. 141–148. ACM Press, New York (2003)
11. Ocal, K.: Étude et développement d'un interpréteur UsiXML en Java Swing. Haute École Rennequin, Liège (2004)
Author Index