

Mobile and Collaborative Augmented Reality: A Scenario-Based Design Approach

L. Nigay†1, P. Salembier‡, T. Marchand‡, P. Renevier*, L. Pasqualetti**

† University of Glasgow, Department of Computer Science, 17 Lilybank Gardens, Glasgow G12 8QQ
laurence@dcs.gla.ac.uk

‡ GRIC-IRIT, Université Paul Sabatier, 31062 Toulouse cedex 7
salembier@irit.fr, marchand@irit.fr

* CLIPS-IMAG, Université de Grenoble 1, 38041 Grenoble cedex 9
philippe.renevier@imag.fr

** FT R&D-DIH/UCE, 38-40 rue G. Leclerc, 92794 Issy-les-Moulineaux
laurence.pasqualetti@francetelecom.fr

Abstract. In this paper we address the combination of the physical and digital worlds in the context of a mobile collaborative activity. Our work seeks to accommodate the needs of professional users by joining their physical and digital operational worlds in a seamless way. Our application domain is archaeological prospecting. We present our design approach, based on field studies and on the design of scenarios of actual and expected activities. We then describe the interaction techniques that we designed and developed in the MAGIC platform.

1 Introduction
Augmented Reality (AR) seeks to smoothly link the physical and data processing environments. This is also the objective of other innovative interaction paradigms such as Ubiquitous Computing, Tangible Bits, Pervasive Computing and Traversable Interfaces. These examples of interaction paradigms are all based on the manipulation of objects of the physical environment [10]. Typically, objects are functionally limited but contextually relevant [18]. The challenge thus lies in the design and realisation of the fusion of the physical and data processing environments (hereafter called physical and digital worlds). The object of our study is to address this issue in the context of a situation of unconstrained mobility (archaeological prospecting). Context detection and augmented reality are then combined in order to create a personalised augmented environment.

1 On sabbatical from the University of Grenoble, CLIPS Laboratory, BP 53, 38041 Grenoble Cedex 9, France.

In this paper, we explain a design approach for mobile AR systems. We illustrate our approach by presenting the outcomes of the design steps of our MAGIC platform. MAGIC is both a hardware and software platform, which carries out the fusion of the two worlds, the physical and the digital. Although our goal is to design and develop generic interaction techniques, we base our study on a specific mobile fieldwork: archaeological prospecting, whose main characteristics are presented in the following section.

2 Archaeology and Computer Support
In the domain of archaeology, recent progress in 3D technology has made it possible to develop advanced modelling tools for constructing detailed virtual representations of archaeological sites. Augmented Reality (AR) and Virtual Reality (VR) technologies have also allowed the design of tools to enhance the overall experience of a visitor to a site by providing an AR reconstruction of ancient monuments [19]. With respect to the tasks performed by the archaeologists themselves, computing resources have mainly been used in post-excavation tasks (for example, referencing the shards and the objects in a database, reconstructive modelling of the site after excavation, etc.). But the activity of the archaeologist as a fieldworker has rarely been considered, and the problem of designing tools to support on-site activities has barely been addressed2.

Within archaeological procedures, prospecting is very important because it principally influences whether excavation at a site will take place. It provides a global overview of the environment, including a systematic census of the archaeological clues. The goal is to check the state of the archaeological archives and the potential of the sites [7]. The archaeological evaluation must fulfil the following requirements: establish the location of the deposit, find the boundaries of the site, define its nature (habitat, necropolis, etc.), evaluate the density of the structures, and date the site [4].

Prospecting is done by a group of archaeologists and consists initially of a ground analysis based on a systematic division of the zone. If necessary, the archaeologists consult a specialist whose opinion determines whether prospecting will continue. Currently, this consultation with the expert is inefficient, because it requires repeated trips to and from the site by the archaeologists. Prospecting also requires long journeys between sites whose topographic characteristics are often poorly known. These long journeys cause problems since they result in asynchronous interaction with distant specialists who possess specialised knowledge whose nature cannot be anticipated a priori. The characteristics of the prospecting activities (e.g., the co-operative process, the nature of shared information, the type of interaction) appear to be representative of the co-operative activities found in mobile situations.

2 For example Ryan & Pascoe [22] in the context of the design of mobile systems for fieldworkers.

The objective of our study is to understand the use of the mobile supports and services required in a collaborative situation for a user's task in the real world, justifying the fusion of the physical and digital worlds. In this context, two properties are fundamental: transparency of the interaction and ubiquity.

- Transparency enables the users to focus their attention on the task to be accomplished and not on the usage of the tool. It is thus advisable not to separate users from their physical environment while they use the tool. The aim is to create a seamless operational field between the physical and digital worlds.
- Ubiquity stems from the usage of mobile supports. Users wish to access services and to collaborate with their colleagues at any moment and place. The objective is thus to design groupware on mobile supports: the user is no longer a prisoner of the workstation on her/his desk for collaboration and communication.

Transparency and ubiquity contribute to the current effort of HCI towards Universality, i.e. the computer tool is integrated into the physical environment and must be accessible from everywhere and by all.

3 Design Approach
3.1 Design Methods for Mobile Augmented Reality Systems

Because Augmented Reality seeks to smoothly link the physical and digital worlds, it has expanded our view of the nature of interactive systems. But it must be emphasised that these new technologies also introduce a change in conventional HCI design methods. Building effective AR applications, for instance, is not simply a matter of technical wizardry. Beyond the classical HCI design approach, mobile AR makes it compulsory to use a multidisciplinary design approach that embeds complementary methods and techniques for the design and evaluation phases [14]. This is due to the fact that physical objects of the user's environment take an increasing role in the design. We focus here on two aspects that we have found relevant when designing AR applications: field study and scenario-based design.

- Field study: So far, mostly naturalistic analyses of activities, inspired by ethnomethodology [15] and francophone ergonomics, have been applied in designing AR applications. These approaches place a strong emphasis on the necessity for field studies and participatory design [9, 14]. According to these situated design approaches, accounting for the context in which the users are involved means that the design methodology cannot be limited to a description of the task at hand. The methodology has to consider the whole environment (physical, technical and social) in which the task is performed. This requires an in-depth analysis of users' activities in order to understand their successful work practices and to identify the limitations of the current way of working.
- Scenario: As a way to concretely embody a view of users' actual and future activities, scenarios have proven very useful in AR design projects [16]. In particular, scenarios enable the description of how AR devices would affect the way professional users carry out their individual and collective activities. In addition, varied scenarios show the importance of supporting collaboration between the design team and the users.

3.2 Functional Role of Scenarios in Designing Interactive Systems

Scenarios are widely used in different disciplines, including HCI, Software Engineering, Information Systems and Requirements Engineering. Scenarios can account for the use of various resources, and the context of the current or projected activity of identified or potential users [5, 11, 12, 21]. Scenarios can occur in varied forms and fulfil several functions during the design process:

- Scenarios are simple and accessible to the variety of actors who participate in the development of a new technology [5].
- Scenarios provide a common language to the set of participants engaged in the design process. In particular, they are supposed to facilitate the exchanges with the future users, who in return will contribute to the enrichment of the design options [13].
- Scenario-based design processes may boost the participation of the members of the project, and consequently may extend the scope of what is realisable and increase creativity [1].
- Scenarios provide concrete descriptions of design solutions, which can be considered at various levels of detail. The initial scenarios are in general very brief. They provide a description of the system by indicating the tasks that the users can or must carry out, but without explaining in detail the way in which the tasks are performed [6].

3.3 Applying a Scenario Based Design Approach

The method we adopt follows some principles of scenario-based design methods, but is also further enriched by the methodological contributions of the francophone ergonomics tradition ("real task analysis" vs. "activity analysis"). Figure 1 presents the design and realisation steps of our method. As shown in Figure 1, the empirical analysis stage includes two steps:

- First, we design scenarios based on an analysis of the real task (explanation by the end-users of her/his relevant work phases, away from the work setting).
- Second, we design scenarios based on an analysis of the activity (observation and video recording of the activity of the end-users on site).

From these two steps, we derive a set of requirements. These requirements serve as the basis for the specification of the future system: we first specify the functions that address the requirements. These functional specifications of the system are then evaluated on the basis of scenarios called "projected scenarios": to do so, the specified functions are integrated into the existing task and activity scenarios. This step is still at the planning stage since the system is not yet developed. The interaction techniques are then designed based on the final functional specifications. After a development step, tests are conducted in the work setting in order to assess the functional properties of the system as well as its usability. The outcome may lead to modification of the functional specifications and the designed interaction techniques, as part of an iterative design approach.
[Figure 1 lists the steps: Real Task Analysis, Real Task Scenario, Activity Analysis, Activity Scenario, Requirements, Functional Specification, Projected Scenario, Interaction Specification, Implementation, Experimentation.]

Fig. 1. Design and realisation steps

4 Design and Realisation of the MAGIC Platform
Having explained the design method, we now illustrate it by presenting the main results of each step of the design and realisation of our MAGIC platform (Mobile, Augmented reality, Group Interaction, in Context). We base our study on a specific mobile fieldwork: archaeological prospecting.

4.1 Task Analysis

Several levels of representation of the data resulting from the task analysis were used: global representation, narrative format of description, and sequential chart. For reasons of brevity, only the global representation is presented in this paper; the task analysis is completely described in [16]. At a general level of description, prospecting is a set of analysis operations on the ground, which allow a final archaeological evaluation of a site. The site analysis consists of a detailed exploration of the ground, allowing specialists, including material scientists and geologists, to evaluate the archaeological value of a site. As shown in Figure 2, this evaluation requires repeated journeys between the site and the research centre by the archaeologists in order to provide relevant data to the various specialists.

[Figure 2 depicts the cycle: Information Capture, Data entry on paper, Journey from the Archaeological Site to the Research Centre, Digitalisation, Incorporation into the Data Base, Experts consultation, Evaluation of the Archaeological Site, and, when more information is needed, the Journey from the Research Centre back to the Archaeological Site.]

Fig. 2. Global representation of the prospecting task. This representation highlights the cycle (information capture - digitalisation - expert consultation) necessary to evaluate the archaeological value of a site.

4.2 Activity Analysis

An activity analysis was carried out during real (as opposed to simulated) prospecting; the individual and collective activities were video recorded and commented upon by the archaeologists. As in the task analysis step, several levels of representation of the collected data were used: a narrative description of activities, a graph representation of activities, a dynamic representation of the displacements of the archaeologists and finally an integrated representation. We present here the narrative description, the graph of activities and the integrated representation.

4.2.1 Narrative description of activities

The narrative mode of representation is based on the concept of "meaningful event" for the archaeologists and/or analysts, i.e. a significant unit of action or a significant event occurring in the environment [24]. Table 1 gives an example.

4.2.2 Graph of activities

As shown in Figure 3, this mode of description of the archaeologists' activities graphically shows the fragmentation of their activities into sub-tasks, the repetitions and the redundancies, and highlights the prevalence of some tasks as compared to others. At a structural level this mode of description also makes it possible to reveal underlying regularities: sequences of actions, and the communication and collaboration patterns between the users.

4.2.3 Integrated representation of activities

This representation enables the integration of the set of data collected during the activity analysis (meaningful events expressed narratively, group members' displacements, video, snapshots, etc.). The representation is dynamic (Flash3 animation) and easy to handle. Figure 4 presents snapshots of such an animation4.

3 Flash: Macromedia software.
4 Available at: http://www.irit.fr/GRIC/public/scenario.htm

Table 1. Narrative description

Sequence: K
Topics: Diffusion of contextual information, Geo-positioning, Data Entering, Collective Evaluation
Actor(s): Three archaeologists represented by the letters V., C. & M.
Artefact(s): Map
Output: Discovery of a highly significant element

11:04:30 - C finds a metal piece. She brings it to V, who stops her activity immediately. They return towards the place where the metal piece was discovered and C tries to find the exact place where it was found, but the exact location remains approximate. Everyone gathers around this discovery.
11:06:45 - The element is approximately located on a map and analysed by V, while M and C seek other clues in the area of discovery. A first analysis of the element's position is conducted to direct the search for other clues near this element (C: "Well, it must have gone down anyway").

4.3 Requirements and Functional Specification: Projection of Existing Scenarios

The field analysis highlights the inherent limitations of the current situation for several types of activity, in particular for collaborative activities involving mobility:

- data capture and data entry of a significant element with its context, its mode of collection and its storage, in a mobile and time-constrained context,
- contextual evaluation of an element, which requires communication between archaeologists in the field as well as between an archaeologist in the field and a distant expert,
- consultation or diffusion of previously analysed elements or situations in order to carry out an individual evaluation or to share information.

We define a set of functions which may potentially overcome the problems encountered by the archaeologists in the current situation. These functions are then integrated into the scenarios of task and activity ("envisioning" phase) in order to project a new form of the activity as a solution to the present limitations. Different projection phases can be performed: alternative models of the future can then be analysed, debated and compared with the users. At this stage, interaction issues are not taken into account. The goal of the projection of the scenarios of activity, such as the scenario presented in Table 2, is to inform and to guide the functional specification step.

Fig. 3. Analysis of a part of the sequence of events associated with "the discovery of a metal piece by C". The data collected cover continuous observations of the actions and communications of the three archaeologists. This figure shows the graphical depiction of the activities of the three archaeologists (V, C and M) in the context of the discovery of a metal piece. The three series of temporally ordered lines represent the sequences of activities performed by the archaeologists. The codes (numbers and colours) represent the categories of significant actions (hand-written notes, metric statement, photo, reading a map, etc.). The green arrow indicates the moment when the metal piece was discovered. The red arrow shows the move of archaeologist C towards her colleague V. The yellow arrow expresses the exchange of an element between archaeologists, and the black arrows show the communication between two archaeologists.

Fig. 4. Snapshots of the dynamic presentation of a scenario. These three snapshots show the animation at three different times in sequence G of the scenario. They show the narrative scenario and some video snapshots of the activity, as well as the positions of the archaeologists on the site and the dynamics of the displacements of each archaeologist (represented by the coloured points).

Table 2. Projection of a scenario of activity: envisioning phase

Sequence: R (projected)
Topics: Collective evaluation, distribution of information about a previously discovered object
Actor(s): Three archaeologists represented by the letters C., V. & M.
Artefact(s): Editing system and location-dependent information
Output: Sharing asynchronous knowledge

Case of location-dependent information and database consultation. C arrives in front of a small rock structure already quoted, but she does not know its significance. The system shows her that the rock structure on the ground was already described by V. She can thus read the notes and access the drawings and the pictures of this structure.

4.4 Interaction Specification

Based on the functions integrated in the so-called "projected scenarios", different interaction techniques can be designed. The interaction techniques described here are those that we have developed: the MAGIC platform. As part of a user-centred iterative process, experimental evaluation of MAGIC may lead to a redesign of the interaction techniques.

MAGIC is both a hardware and software platform. In order to explain the interaction techniques, we first describe the hardware we used for those techniques. MAGIC is an assembly of commercial pieces of hardware. The MAGIC platform includes a Fujitsu Stylistic pen computer. This pen computer runs under the Windows operating system, with a Pentium III (450 MHz) and 196 Mb of RAM. The resolution of the tactile screen is 1024x768 pixels. In order to establish remote mobile connections, a WaveLan network by Lucent (11 Mb/s) was added. Connections from the pen computer are possible within about 200 feet of the network base. Moreover, the network is compatible with the TCP/IP protocol. The hardware platform also contains a Head-Mounted Display (HMD), a SONY LDI D100 BE: its semi-transparency enables the fusion of computer data (opaque pixels) with the real environment (visible via transparent pixels). Secondly, a GPS is used to locate the users. It has an update rate of one position per second. The GPS at the University of Grenoble (France) has an accuracy of one metre and the one at the Alexandria archaeological site (Egypt, International Terrestrial Reference Frame ITRF) an accuracy of 5 centimetres. The GPS is also used for computing the position of newly found and removed objects. Finally, capture of the real environment by the computer is achieved by coupling a camera and a magnetometer. The magnetometer is the HMR3000 by Honeywell, which provides a fast response time (up to 20 Hertz) and a high heading accuracy of 0.5° with 0.1° resolution. The camera orientation is therefore known by the system. Indeed, the magnetometer and the camera are fixed on the HMD, between the two eyes of the user. The system is thus able to know the position (GPS) and orientation (magnetometer) of both the user and the camera (a minimal sketch of combining these readings is given after Table 3). Figure 5-a shows a MAGIC user, fully equipped: the equipment is quite invasive and suffers from a lack of power autonomy. Our goal is to demonstrate the feasibility of our interaction techniques by assembling existing commercial pieces of hardware, and not by designing specific hardware outside the scope of our expertise. For real and prolonged use of the platform on an archaeological site, a dedicated hardware platform must clearly be designed. For example, we envision solar power, using retractable solar panels attached to the pen computer.

Based on the hardware platform described above, we designed and developed interaction techniques that enable the users to perform the functions associated with the "projected scenarios". The functions of the "projected scenarios" manipulate objects that are either digital or physical. Examples of digital objects are those contained in an online materials catalogue, while examples of physical objects are materials such as metals or even the notepad that the archaeologist is using. As highlighted by our notation ASUR [8], some physical objects are tools while others are the focus of the activity: the notepad is a tool while the discovered items are the focus of the archaeological activity. During the design, we can only change the physical objects used as tools. In our design, we decided to use a pen computer instead of the notepad, and based the design of the pen computer display on the paper metaphor. This design solution skips the digitalisation phase of Figure 2.

Having made the design choice of a pen computer, interaction techniques must be designed to let the users manipulate the two types of objects: physical and digital. For the flexibility and fluidity of interaction, such manipulation takes place either in the physical world or in the digital world. We therefore obtain the four cases of Table 3, by combining the two types of objects and the two worlds: the physical world (i.e., the archaeological field) and the digital world (i.e., the screen of the pen computer). The MAGIC interaction techniques cover graphical interaction on the pen computer (case (3) in Table 3) as well as the two cases of mixed interaction (cases (1) and (2) in Table 3).
Table 3. Four cases of interaction techniques

                                     | In the physical world                 | In the digital world
Interaction with a physical object   | Interaction purely in the real world  | Mixed interaction (1)
Interaction with a digital object    | Mixed interaction (2)                 | Interaction in the digital world (Graphical HCI) (3)
Figure 5-b presents the graphical user interface displayed on the tactile screen of the pen computer. The graphical user interface is fully described in [20]. Interaction on the pen computer (case (3) in Table 3) is related to the communication, coordination and production functions as described by the CLOVER functional model of groupware [23]. For example, coordination between users relies on the map of the archaeological site, displayed in a dedicated window (at the bottom left of Figure 5-b). It represents the archaeological field: the site topology, the found objects and the archaeologists' current locations known by the GPS.
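As an illustration of such a coordination window, the sketch below shows a minimal Swing panel that plots the GPS positions of the team members as coloured points; the class and method names (SiteMapPanel, updatePosition) are illustrative assumptions, not the actual MAGIC user interface code.

```java
// Minimal sketch of a map view plotting the archaeologists' current positions.
import java.awt.*;
import java.awt.geom.Point2D;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.swing.*;

public class SiteMapPanel extends JPanel {
    // archaeologist initial -> position in local site coordinates (metres east/north of a reference point)
    private final Map<String, Point2D.Double> positions = new ConcurrentHashMap<>();
    private final double metresPerPixel = 0.5; // assumed display scale

    /** Called whenever a new GPS fix is received for a team member. */
    public void updatePosition(String archaeologist, double eastMetres, double northMetres) {
        positions.put(archaeologist, new Point2D.Double(eastMetres, northMetres));
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        for (Map.Entry<String, Point2D.Double> e : positions.entrySet()) {
            int x = (int) (e.getValue().x / metresPerPixel);
            int y = getHeight() - (int) (e.getValue().y / metresPerPixel); // north is up
            g.setColor(Color.RED);
            g.fillOval(x - 4, y - 4, 8, 8);     // coloured point for the archaeologist
            g.setColor(Color.BLACK);
            g.drawString(e.getKey(), x + 6, y); // label with her/his initial
        }
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Site map (sketch)");
            SiteMapPanel map = new SiteMapPanel();
            frame.add(map);
            frame.setSize(400, 400);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
            map.updatePosition("V", 40, 60);   // hypothetical positions
            map.updatePosition("C", 120, 85);
            map.updatePosition("M", 90, 150);
        });
    }
}
```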

Fig. 5. (a) A user wearing and holding the MAGIC hardware platform. (b) The user interface on the pen computer.

The two cases of mixed interaction (cases (1) and (2) in Table 3) imply (i) that physical objects must be manageable in the digital world (case (1)) and (ii) that digital objects must be manageable in the physical world (case (2)). Transfer of objects between the two worlds is therefore necessary. To do so, we designed a generic interaction technique: a gateway that plays the role of a door between the physical and digital worlds. As a door belongs to two rooms, the gateway exists in both worlds:

- the gateway is an area of the physical world, delimited by a rectangle displayed in the semi-transparent Head-Mounted Display (HMD),
- the gateway is a rectangular area in the digital world, on the pen computer screen.

Concretely, the gateway is simply a window displayed both on the HMD (Java JFrame), on top of the physical world, and on the pen computer screen (Java JInternalFrame). The gateway on the pen computer screen is the window at the top right of Figure 5-b. Objects in the gateway are visible on the HMD (i.e., in the physical world) as well as on the pen computer screen (i.e., in the digital world):

- If the object is physical (case (1)), the object is transferred to the digital world thanks to the camera (fixed on the HMD, between the two eyes of the user). The real environment captured by the camera is displayed as a background in the gateway window on the pen computer screen. We allow the user to select or click on physical objects: we therefore call this technique "the clickable reality". Before taking a picture, the camera must be calibrated according to the user's visual field. Using the stylus on the screen, the user then specifies a rectangular zone thanks to a magic lens [3] (a kind of camera lens). The cursor displayed on the pen computer screen is also displayed on top of the physical world. The specified zone (magic lens), displayed in the gateway window on the screen and on the HMD, corresponds to the physical object to be captured. The picture is then stored in the shared database along with the description of the object as well as its location. Note that although the user is manipulating a magic lens using the stylus on the screen, s/he perceives the results of her/his actions in the physical world.

- If the object is digital (case (2)), dragging it inside the gateway makes it visible in the real world. For example, the archaeologist can drag a drawing or a picture stored in the database into the gateway window. The picture is then automatically displayed on the HMD on top of the physical world. Moving the picture using the stylus on the screen moves the picture on top of the physical world. This action is used, for example, when an archaeologist wants to compare an object from the database with a physical object in the field: putting them next to each other in the real world helps the comparison. The motion of a digital object (e.g., drag and drop on the pen computer) can be viewed by the archaeologist without looking at the pen computer screen, because with the HMD the archaeologist can simultaneously view digital objects and the real world. As in the previous case (1), although the archaeologist is manipulating a digital object, s/he perceives the results of her/his actions in the physical world.

Transfer of digital objects to the physical world can be explicitly managed by the user by drag and drop, as explained above, or it can be automatic. Automatic transfer is performed by the system based on the current location of the user. This technique is called the "augmented field" ("augmented stroll" in [20]). Because a picture or a drawing is stored along with the location of the corresponding object, we can restore the picture/drawing in its original real context (2D location). When an archaeologist walks through the site, s/he can see discovered objects that have been removed from the site and recorded in the database by colleagues. The "augmented field" is an example of asynchronous collaboration and belongs to the mobile collaborative Augmented Reality (AR) class of systems as defined in our taxonomy of AR systems [20]. The "augmented field" technique is directly derived from the functions of the "projected scenario" presented in Table 2, and is fully described in [20]. A minimal sketch of the underlying selection logic is given at the end of this section.

In this section we have described the interaction techniques developed in the MAGIC platform. For the same functions of the "projected scenarios", we could derive different interaction techniques. For example, in the case of the "clickable reality" technique, instead of selecting the area using the stylus on the screen, the user could perform a gesture captured by a 3D localizer. For the "augmented field" technique ("projected scenario" of Table 2), the approach adopted is to assist the user by providing extra information about the physical field via a device carried by the user. Another approach [15] would be to augment the physical environment by directly projecting extra information on top of the physical world; this is possible in a confined environment but not in the case of archaeological prospecting. Finally, we would like to point out that, without changing the designed interaction techniques, some technical solutions adopted in MAGIC could be modified. For example, instead of using a GPS and a magnetometer for the "augmented field", markers could be positioned on the site and communicate with a receiver carried by the user.
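The sketch below illustrates, under simplifying assumptions, the kind of selection logic the "augmented field" relies on: among the objects stored in the shared database with the location recorded at capture time, keep those close enough to the user's current GPS position to be redisplayed on the HMD in their original context. StoredObject, its fields and the distance threshold are illustrative, not the actual MAGIC data model.

```java
// Sketch of the "augmented field" selection logic: filter stored objects by distance
// to the user's current GPS position (flat-earth approximation, assumed adequate
// over the scale of a prospecting zone).
import java.util.ArrayList;
import java.util.List;

public class AugmentedFieldFilter {

    public static class StoredObject {
        final String description;
        final double latitude;   // recorded when the object was captured
        final double longitude;
        StoredObject(String description, double latitude, double longitude) {
            this.description = description;
            this.latitude = latitude;
            this.longitude = longitude;
        }
    }

    /** Approximate distance in metres between two GPS fixes. */
    static double distanceMetres(double lat1, double lon1, double lat2, double lon2) {
        double metresPerDegLat = 111_320.0;
        double dLat = (lat2 - lat1) * metresPerDegLat;
        double dLon = (lon2 - lon1) * metresPerDegLat * Math.cos(Math.toRadians(lat1));
        return Math.hypot(dLat, dLon);
    }

    /** Objects that should be made visible on the HMD for a user at (userLat, userLon). */
    static List<StoredObject> visibleObjects(List<StoredObject> database,
                                             double userLat, double userLon,
                                             double radiusMetres) {
        List<StoredObject> visible = new ArrayList<>();
        for (StoredObject o : database) {
            if (distanceMetres(userLat, userLon, o.latitude, o.longitude) <= radiusMetres) {
                visible.add(o);
            }
        }
        return visible;
    }
}
```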

4.5 Implementation: Software Design

Having described the interaction techniques of the MAGIC platform, we now address their implementation. Our software design solution draws upon our software architecture model PAC-Amodeus [17]. The software architecture model is not new, but the MAGIC implementation shows that it can be applied to the software design of a mobile collaborative Augmented Reality (AR) system. In the context of the overall software architecture of MAGIC [20], one point we would like to emphasise is the application of PAC-Amodeus architectural patterns for designing the software architecture. In particular, one pattern [17] is dedicated to the implementation of multiple views: the use of a software agent to maintain visual consistency between multiple views. The gateway, displayed on the pen computer screen and on the HMD, is one example of multiple views. We therefore applied the Multiple Views architectural pattern to implement the gateway: Figure 6 shows the resulting three software agents.
[Figure 6: a Gateway agent linked to a Physical World Representation agent (fed by the camera, location and orientation data) and a Digital World Representation agent (fed by stylus events).]

Fig. 6. Applying a PAC-Amodeus heuristic rule for the software design of the gateway: three software agents implement the gateway.
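The following sketch suggests, in simplified form, how the Multiple Views pattern can keep the two facets of the gateway consistent: a single control object forwards every content change to both a JFrame shown on the HMD and a JInternalFrame shown on the pen computer's desktop. The classes (GatewayControl, HmdView, PenView) are illustrative stand-ins, not the three MAGIC agents themselves.

```java
// Minimal sketch of the Multiple Views idea applied to the gateway: one control
// agent keeps the HMD view and the pen computer view showing the same content.
import javax.swing.*;

public class GatewaySketch {

    /** Presentation facet shown on the HMD, on top of the physical world. */
    static class HmdView extends JFrame {
        final JLabel content = new JLabel();
        HmdView() { super("Gateway (HMD)"); add(content); setSize(300, 200); setVisible(true); }
        void show(String description) { content.setText(description); }
    }

    /** Presentation facet shown inside the pen computer's desktop. */
    static class PenView extends JInternalFrame {
        final JLabel content = new JLabel();
        PenView() { super("Gateway", true, false, true, true); add(content); setSize(300, 200); setVisible(true); }
        void show(String description) { content.setText(description); }
    }

    /** Control agent: the single place through which gateway content is changed. */
    static class GatewayControl {
        private final HmdView hmd;
        private final PenView pen;
        GatewayControl(HmdView hmd, PenView pen) { this.hmd = hmd; this.pen = pen; }
        /** Any object dragged into (or captured by) the gateway goes through here. */
        void setContent(String description) {
            hmd.show(description);   // visible in the physical world
            pen.show(description);   // visible in the digital world
        }
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JDesktopPane desktop = new JDesktopPane();
            JFrame penComputer = new JFrame("Pen computer (sketch)");
            penComputer.add(desktop);
            penComputer.setSize(600, 400);
            penComputer.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            penComputer.setVisible(true);

            HmdView hmd = new HmdView();
            PenView pen = new PenView();
            desktop.add(pen);
            new GatewayControl(hmd, pen).setContent("Drawing of rock structure (from shared database)");
        });
    }
}
```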

5 Summary and perspectives
The paper focuses on the design and implementation of mobile collaborative Augmented Reality (AR) systems. We have presented the eight steps of a design method, depicted in Figure 1. In particular, scenarios are shown to be very useful as a way to concretely embody a view of users' actual and future activities. Moreover, an already established fact is that scenarios are very important in supporting collaboration between the members of a multidisciplinary design team as well as between the designers and the final users. We applied our design method to the design and implementation of a mobile collaborative Augmented Reality (AR) system dedicated to a group of archaeologists prospecting fields. We have presented the results of each design step, including scenarios of present and future archaeological prospecting activities, the designed interaction techniques and the software architecture solutions of the system: the MAGIC platform.

The next step is to experimentally test MAGIC in order to evaluate the usability of the developed interaction techniques (last step of Figure 1). To perform the experiments in the work environment, we must address several technical challenges, including the already mentioned problem of power autonomy, the ambient luminosity for on-screen reading and the precision of the localizer (GPS). As on-going work, we have two avenues:

- First, we are currently developing a mobile collaborative game in order to show the generality of the interaction techniques of the MAGIC platform ("clickable reality" and "augmented field"). The game is based on barter: the mobile players discover and exchange physical objects that are augmented with magical powers. Again, scenarios of the game without the system and scenarios of how the game could be with the system ("projected scenarios") are being developed.
- Our second research avenue is methodological. During the design of the MAGIC platform, the applied design method underlined the central role of scenarios, including scenarios of actual activity as well as scenarios of future activity using the system. Additionally, from the functions of the scenarios of future activity, we designed interaction techniques and their software design based on software patterns. We would like to extend this approach by establishing links between the scenarios and the software patterns. As shown in our previous work [17], ergonomic properties can be linked to software patterns. By tagging each scenario with ergonomic properties, we can, by association, establish links between scenarios and software patterns. Our work will complement the recent study by [2], where usability scenarios are linked to architectural mechanisms for the design of graphical user interfaces on a desktop computer. Our work is complementary because it considers the design of mobile collaborative AR systems. This study, when completed, will lead to a design method where scenarios are central and common materials for all the phases of the ergonomic design as well as the software design and implementation.

Acknowledgements

This work is supported by France Telecom R&D, under contract HOURIA I. We wish to thank our partners from the CEA (Centre d'Etude d'Alexandrie in Egypt) for welcoming us. Special thanks to G. Serghiou for reviewing the paper.

References
1. Ackoff, R.L.: Resurrecting the future of operations research. Journal of the Operations Research Society, Vol. 30, Issue 3 (1979) 189-199.
2. Bass, L., John, B.: Achieving Usability Through Software Architecture. In Conference Proceedings of ICSE 2001. IEEE Computer Society (2001) 684. CMU/SEI-2001-TR-005.
3. Bier, E., et al.: Toolglass and Magic Lenses: The See-Through Interface. In Conference Proceedings of SIGGRAPH '93, Computer Graphics Annual Conference Series. ACM Press (1993) 73-80.
4. Blouet, V.: Essais de comparaison de différentes méthodes d'étude archéologique préalable. Les nouvelles de l'archéologie, 58 (1994) 17-19.
5. Carroll, J.M.: The Scenario Perspective on System Development. In Carroll, J.M. (ed.): Scenario-Based Design: Envisioning Work and Technology in System Development. J. Wiley & Sons (1995).
6. Carroll, J.M.: Making Use: Scenario-Based Design of Human-Computer Interactions. MIT Press (2000).
7. Dabas, M., Deletang, H., Ferdière, A., Jung, C., Haio Zimmermann, W.: La Prospection. Errance, Paris (1999).
8. Dubois, E., Nigay, L., Troccaz, J.: Assessing Continuity and Compatibility in Augmented Reality Systems. To appear in International Journal on Universal Access in the Information Society, Special Issue on Continuous Interaction in Future Computing Systems. Springer-Verlag, Berlin Heidelberg New York (2002).
9. Geiger, C., Paelke, V., Reimann, C., Rosenbach, W.: Structured Design of Interactive Virtual and Augmented Reality Content. In Conference Proceedings of Web3D (2001).
10. Ishii, H., Ullmer, B.: Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Conference Proceedings of CHI'97. ACM Press (1997) 234-241.
11. Jacobson, I.: The Use Case Construct in Object-Oriented Software Engineering. In Carroll, J.M. (ed.): Scenario-Based Design: Envisioning Work and Technology in System Development. J. Wiley & Sons (1995) 309-336.
12. Kyng, M.: Creating Contexts for Design. In Carroll, J.M. (ed.): Scenario-Based Design: Envisioning Work and Technology in System Development. J. Wiley & Sons (1995) 85-107.
13. Kyng, M., Mathiassen, L.: Computers and Design in Context. MIT Press (1997).
14. Mackay, W., Fayard, A.-L.: Designing Interactive Paper: Lessons from three Augmented Reality Projects. In Conference Proceedings of IWAR'98, International Workshop on Augmented Reality. A K Peters, Natick, MA (1999).
15. Mackay, W., Fayard, A.-L., Frobert, L., Médini, L.: Reinventing the Familiar: an Augmented Reality Design Space for Air Traffic Control. In Conference Proceedings of CHI'98. ACM Press (1998) 558-565.
16. Marchand, T., Nigay, L., Pasqualetti, L., Renevier, P., Salembier, P.: Activités coopératives distribuées en situation de mobilité. Rapport Final, Contrat FT R&D HOURIA Lot I (2001).
17. Nigay, L., Coutaz, J.: Software Architecture Modelling: Bridging Two Worlds using Ergonomics and Software Properties. In Palanque, P., Paternò, F. (eds.): Formal Methods in Human-Computer Interaction. Springer-Verlag, Berlin Heidelberg New York (1997) 49-73. ISBN 3-540-76158-6.
18. Norman, D.A.: The Design of Everyday Things. MIT Press, London (1998).
19. Papageorgiou, D., Ioannidis, N., Christou, I., Papathomas, M., Diorinos, M.: ARCHEOGUIDE: An Augmented Reality based System for Personalized Tours in Cultural Heritage Sites. Cultivate Interactive, Issue 1, 3 (2000).
20. Renevier, P., Nigay, L.: Mobile Collaborative Augmented Reality: the Augmented Stroll. In Little, R., Nigay, L. (eds.): Proceedings of EHCI 2001, Revised Papers, LNCS 2254. Springer-Verlag, Berlin Heidelberg New York (2001) 315-334.
21. Rolland, C., Grosz, G.: De la modélisation conceptuelle à l'Ingénierie des besoins. In Encyclopédie d'informatique. Hermès, Paris (2000).
22. Ryan, N., Pascoe, J.: Enhanced Reality Fieldwork: the Context-Aware Archaeological Assistant. In Gaffney, V., et al. (eds.): Computer Applications in Archaeology (1997).
23. Salber, D.: De l'interaction homme-machine individuelle aux systèmes multi-utilisateurs. PhD dissertation, University of Grenoble (1995) 17-32.
24. Salembier, P., Zouinar, M.: A Model of Shared Context for the Analysis of Cooperative Activities. In Conference Proceedings of ESSLLI'98 (1998).

