Proc. of Int. Conf. on Recent Trends in Communication and Computer Networks

Augmented Reality for Fire & Emergency
Services
Aameer R. Wani1, Sofi Shabir2, Roohie Naaz2
National Institute of Technology/Department of IT, Srinagar, India
1Email: aameer.rafiq@gmail.com
2Email: {shabir, roohie}@nitsri.net
Abstract—This paper presents a proposed system for improving collaboration between the
different agencies and decision makers involved in a fire emergency situation with the help
of wearable augmented reality (AR). This field considers the possibility of transcending the
physical and territorial boundaries of a real space [1] and is applicable to all time-space
configurations of the hybrid (real + virtual) world. User interaction is through the use of hands
and/or gestures. Rapid flow of information across different devices involved in the process
such as head mounted display, PDA, laptop, data walls, and desktop is critical to allow this
form of collaboration to be integrated with adaptive context aware work environments
based on workflow management systems. Functionality of such form of collaboration system
is illustrated in the scenario of a fire emergency situation.
Index Terms—augmented reality; fire and emergency services; disaster management;
workflow management; collaboration

I. INTRODUCTION
Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual Reality as it is more
commonly called. VE technologies completely immerse a user inside a synthetic environment. While
immersed, the user cannot see the real world around him. In contrast, AR allows the user to see the real
world, with virtual objects superimposed upon or composited with the real world. Therefore, AR
supplements reality, rather than completely replacing it. Ideally, it would appear to the user that the virtual
and real objects coexist in the same space; Figure 1 shows an example of what this might look like. It
shows a real desk with a real phone. Inside this room are also a virtual lamp and two virtual chairs. Note that
the objects are combined in 3-D, so that the virtual lamp covers the real table, and the real table covers parts
of the two virtual chairs. AR can be thought of as the "middle ground" between VE (completely synthetic)
and telepresence (completely real) [2].

Figure 1. Real Desk with virtual lamp and two virtual chairs. (Courtesy ECRC)

DOI: 03.LSCS.2013.7.590
© Association of Computer Electronics and Electrical Engineers, 2013
Augmented Reality enhances a user's perception of and interaction with the real world. The virtual objects
display information that the user cannot directly detect with his senses. The information conveyed by the
virtual objects helps a user perform real-world tasks. AR is a specific example of what Fred Brooks calls
Intelligence Amplification (IA): using the computer as a tool to make a task easier for a human to perform.
Augmented reality systems, in order to meet growing expectations, will have to be integrated with backbone
systems that can offer the necessary computational power and storage capacities for providing elaborate
context aware environments and improved information access. In the world of information systems, business
process engineering and workflow management systems have been integrated and are properly linked to
databases [3]. A wearable computer with an augmented reality (AR) user interface allows for exciting new
collaborative applications to be deployed in an outdoor environment.
Augmented reality is a term created to identify systems which are mostly synthetic with some real world
imagery added such as texture mapping video onto virtual objects. This is a distinction that will fade as the
technology improves and the virtual elements in the scene become less distinguishable from the real ones.
II. DISASTER MITIGATION AND EMERGENCY SERVICE
In urban disasters, the individual disaster response teams are in most cases not sufficient by themselves for
the necessary rescue work. An efficient, integrated disaster management system could support their activities
and help to limit human losses. Management of disasters, whether natural (floods, earthquakes, etc.), industrial
(nuclear accidents, fires, etc.), or medical (epidemics), can be improved by effective collaboration between the different entities
involved in this regard. Different techniques could be used for improving collaboration between these
entities, but here we focus on the technological aspect in general and augmented reality in particular.
Improvement in Collaboration between different agencies and decision makers is thus the focus of this paper.
Reducing the impact of disasters requires a complex mix of technical and social endeavors, and no single
prescription or discipline can provide all the answers. Indeed, disaster researchers have frequently expressed
concerns that technology not be viewed as a panacea [4]. Effective mitigation combines technological,
organizational, and social factors, and depends on a solid understanding of disaster management as well as of the technologies involved.
III. PREVIOUS WORK / LITERATURE SURVEY
A. ARS
“An Augmented Reality System for Earthquake Disaster Response” An ARS superposes an image of reality
with a virtual image that extends the visible scenery of reality. Its use in the context of disaster management
is to represent different invisible disaster-relevant information (humans hidden by debris, simulations of
damages and measures) and overlay it with the image of reality. The design of such a system is a challenge in
many ways, since the system integrates different methods like mapping, photogrammetry, inertial navigation
and differential GPS. The paper introduces the problems of earthquake disaster response and motivates the
use of an ARS; it also describes the hardware components used and discusses the data available and
necessary for the system to be operational under real conditions.
B. Rapid Post-Disaster Evaluation
“Rapid Post-Disaster Evaluation of Building Damage Using Augmented Situational Visualization” is research
being conducted at the University of Michigan to design and implement a new reconnaissance technology to
rapidly evaluate damage to buildings in the aftermath of natural and human-perpetrated disasters (e.g.
earthquakes, explosions). The technology being designed will allow on-site damage inspectors to retrieve
previously stored building information, superimpose that information onto a real building in augmented
reality, and evaluate damage, structural integrity, and safety by measuring and interpreting key differences
between a baseline image and the real facility view. In addition, by using feedback from the actual building
view, it will be possible to update structural analysis models and conduct detailed what-if simulations to
explore how a building might collapse if critical structural members fail, or how the building’s stability could
best be enhanced by strengthening key structural members. All damage evaluation analyses will be
conducted on-site, in real-time, and at a very low cost. [5]
C. RoboCupRescue
RoboCupRescue simulation aims at simulating large-scale disasters and exploring new ways for the
autonomous coordination of rescue teams (see Figure 2). These goals lead to challenges like the coordination
of heterogeneous teams with more than 30 agents, the exploration of a large-scale environment in order to
localize victims, as well as the scheduling of time-critical rescue missions. Moreover, the simulated

Figure 2. A 3D visualization of the RoboCupRescue model for the city of Kobe, Japan

environment is highly dynamic and only partially observable by a single agent. Agents have to plan and
decide their actions asynchronously in real-time. Core problems are path planning, coordinated firefighting,
and coordinated search and rescue of victims. The advantage of interfacing RoboCupRescue simulation with
wearable computing is twofold: first, data collected from a real interface allows improving the disaster
simulation towards disaster reality; second, agent software developed within RoboCupRescue might be
advantageous in real disasters [6].
IV. DESIGN ISSUES
A. The Registration Problem
One of the most basic problems currently limiting Augmented Reality applications is the registration
problem. The objects in the real and virtual worlds must be properly aligned with respect to each other, or the
illusion that the two worlds coexist will be compromised. Registration problems also exist in Virtual
Environments, but they are not nearly as serious because they are harder to detect than in Augmented Reality.
Since the user only sees virtual objects in VE applications, registration errors result in visual-kinesthetic and
visual-proprioceptive conflicts. Such conflicts between different human senses may be a source of motion
sickness [7]. Because the kinesthetic and proprioceptive systems are much less sensitive than the visual
system, visual-kinesthetic and visual-proprioceptive conflicts are less noticeable than visual- visual conflicts.
For example, a user wearing a closed-view HMD might hold up her real hand and see a virtual hand. This
virtual hand should be displayed exactly where she would see her real hand, if she were not wearing an
HMD. But if the virtual hand is wrong by five millimeters, she may not detect that unless actively looking for
such errors. The same error is much more obvious in a see-through HMD, where the conflict is visual-visual.
Visual capture also contributes to these effects. Consequently, Augmented Reality demands much more accurate
registration than Virtual Environments.
Registration errors are difficult to adequately control because of the high accuracy requirements and the
numerous sources of error. These sources of error can be divided into two types: static and dynamic. Static
errors are the ones that cause registration errors even when the user's viewpoint and the objects in the
environment remain completely still. Dynamic errors are the ones that have no effect until either the
viewpoint or the objects begin moving; [8] discusses the sources and magnitudes of registration errors in
detail. The registration requirements for AR are difficult to satisfy, but a few systems have achieved good
results. Open-loop systems typically show registration within ±5 millimeters from many viewpoints for an
object at about arm's length. Closed-loop systems, however, have demonstrated nearly perfect registration,
accurate to within a pixel [9].
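As a rough illustration of why see-through AR is so sensitive to tracking offsets, the on-screen registration error can be estimated with a simple pinhole-camera model. This is a hypothetical sketch; the focal length and distances below are assumed for illustration and are not taken from any of the systems cited above.

```python
def pixel_error(offset_mm, depth_mm, focal_px):
    """Approximate on-screen registration error (in pixels) caused by a
    lateral tracking offset at a given depth, under a pinhole camera model:
    error_px = focal_length_px * offset / depth."""
    return focal_px * offset_mm / depth_mm

# A 5 mm tracking offset at roughly arm's length (~600 mm), with an
# assumed focal length of 800 pixels:
err = pixel_error(5.0, 600.0, 800.0)   # about 6.7 pixels of misregistration
```

Even a few millimeters of tracker error thus produces a clearly visible offset on screen, which is why closed-loop (vision-corrected) approaches fare so much better than open-loop ones.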
B. Sensing
Many sensors are used in Augmented Reality systems, and they have certain problems associated with them.
Specifically, AR demands more from trackers and sensors in three areas: a) greater input variety and
bandwidth, b) higher accuracy, and c) longer range. The biggest
single obstacle to building effective Augmented Reality systems is the requirement of accurate, long-range
sensors and trackers that report the locations of the user and the surrounding objects in the environment. For
details of tracking technologies, see the surveys in [10].

C. Attributes
The desired quality attributes of a system are derived from the business drivers of the project—target market,
timeframe, and application domain. But based on the technical report by Bernd, we can say that in general
there are requirements that any serious software architecture for Augmented Reality must address. Run-time
attributes include performance (latency of tracking and rendering, with a maximum limit of about 10 ms),
reliability (tracking accuracy is important for many systems, though the required degree of accuracy varies
from below 1 mm for medical applications to several meters for outdoor navigation), and mobility (wireless
operation is important for most systems, often using WaveLAN networks).
Non-run-time attributes include portability, i.e. the number of supported hardware platforms: several systems
support only a single platform (usually Windows 2000 or Linux), while others support multiple platforms.
Issues also depend upon whether a video see-through or an optical see-through HMD is used.
V. COLLABORATIVE APPROACH
An ARS is particularly suited to communicating the knowledge of experts from different fields who have to
work together, as described in [12].
Augmented reality can be used to support collaboration in an outdoor environment because of its ability to
use all four time-space configurations, the use of hand and head gestures as the main form of user interaction,
and the seamless movement of information across different devices (HMD, PDA, laptop, data walls, and
desktop).
A major research issue is the interaction techniques for users to control and manipulate augmented reality
information in the field. We propose the use of augmented reality in the field (outdoors) as a fundamental
collaboration tool that may be used across a number of application domains, such as medical, maintenance,
military, search and rescue, and GIS visualization. A number of researchers are investigating augmented
reality with wearable computers for distributive collaboration systems [13] but we are proposing an overall
framework to integrate augmented reality into a traditional workflow. Overlaying contextually aware
information on the physical world is a powerful cuing mechanism to highlight or present relevant
information. This ability to view the physical world and augmented virtual information in place between
multiple actors is the key feature to this form of collaboration technology. A key difference with this form of
collaboration is the artifact the users are manipulating. This artifact can be characterized by the following
features: firstly, it corresponds to the physical world; secondly, the size of the information space reflects
physical objects in a large area; and thirdly, the users are able to physically walk within the information space
and the physical world simultaneously. This form of collaboration is similar to distributive virtual
environment collaboration systems. Both have manipulable 3D models and the position of the users affects
their vantage point. The significant differences are that the distances the users are allowed to physically move
are larger and there is a one-to-one correspondence with the physical world [3].
VI. COMMUNICATION AND WORKFLOW MANAGEMENT SYSTEM
The communication between the various entities can be achieved effectively by a workflow management
system. The workflow reference model holds that all workflow systems contain a number of generic
components which interact in a variety of ways. To achieve interoperability between workflow products, a
standardized set of interfaces and data
interchange formats is necessary. A number of distinct interoperability scenarios can be constructed by
reference to such interfaces. For example a standard process to be used by a number of groups of users,
perhaps in different organizations could be defined using one tool and exported to each group who may use
different workflow systems. Also a given user may use one workflow client application to receive tasks
generated by a variety of workflow systems. The model identifies the major components and interfaces. Thus
workflow management systems can coordinate the execution of processes, inter-process communication and
provide interfaces through which databases can be accessed. As workflow management systems are
themselves only a component in an adaptive context aware work environment, communication can be
implemented in a very robust form via message passing through an enterprise bus. A diagrammatic
representation is shown in Figure 3 [11].
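The coordination-by-message-passing idea above can be sketched as a toy workflow bus in which components post tasks to named queues and a workflow definition routes each completed step to the next. The step names, API, and data are invented for illustration and are not drawn from any cited system.

```python
from collections import deque

class EnterpriseBus:
    """Toy sketch of workflow coordination via message passing:
    each workflow step has a queue; completing a step forwards the
    task to the next step defined by the workflow mapping."""

    def __init__(self, workflow):
        self.workflow = workflow                      # step -> next step (or None)
        self.queues = {step: deque() for step in workflow}

    def submit(self, step, task):
        """Post a new task message to a step's queue."""
        self.queues[step].append(task)

    def complete(self, step):
        """Mark the oldest task at this step done and route it onward."""
        task = self.queues[step].popleft()
        nxt = self.workflow[step]
        if nxt is not None:
            self.queues[nxt].append(task)
        return task

# Hypothetical fire-emergency workflow: report -> dispatch -> on-site work.
flow = {"report_fire": "dispatch", "dispatch": "on_site", "on_site": None}
bus = EnterpriseBus(flow)
bus.submit("report_fire", {"site": "warehouse-7"})
bus.complete("report_fire")    # task now waits in the "dispatch" queue
```

Because components only see queues and messages, they stay loosely coupled, which is what lets a workflow engine sit behind an adaptive, context-aware work environment.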
The effective utilization of resources is of the utmost importance, as having the right thing at the right time is
critical for success; this can only be achieved by assessing the situation even before the emergency

Fig.3. Workflow Management system

service unit reaches the affected site. Today, critical information in a fire emergency is transmitted with the
help of mobile phones, but more advanced communication facilities are not standard equipment in fire
emergency systems. Technologies like radio, GPS, and mobile phones can form the basis of an advanced
emergency squad, and augmented reality could be the platform for this advanced facility. This concept is
further explained by Figure 4.

Fig.4. Diagram to demonstrate fire emergency System

The Wearable Computer Lab is investigating this concept; its research includes Hand of God, tabletop
collaboration technologies, distributive VR/AR, remote active tangible interactions, mobile AR X-ray vision,
and input devices for wearable computers.
Hand of God (Figure 5). Command-and-control centers require support for intense collaboration, so the
technology should be intuitive and simple to operate. Imagine a commander communicating to support
people in the field and wanting to send a support person to a new position. The simplest method would be for
the commander to physically point to a map that the field operative sees as a virtual representation. This
technology supports through-walls collaboration with the commander providing meaningful information to
the user in the field.
Fig.5. Hand of God (HOG) system. (a) Indoor expert employing the HOG interface (b) Head mounted display as seen by the
outdoor participant. (c) Physical props as signpost for the outdoor user [39].

Mobile AR X-Ray Vision (Figure 6). Outdoor users must physically move or walk to view different aspects
of the physical world while performing their tasks. With through-walls collaboration, this could require them
to investigate potentially hazardous locations. The use of robots for telepresence is a well-investigated area
with several commercial products available. We extended this capability for in situ first-person perspective
visualization to extend the ability for through-walls collaboration.

Fig.6. AR x-ray vision through a brick wall. The upper image shows the building that's occluding the user's view. The lower image
shows the use of highlighted edge cues to provide the impression that the x-ray vision is behind the wall [15]

Several researchers in the WCL have explored the use of AR to provide users with X-ray vision capabilities.
Computer created views of occluded items and locales appear in the user’s vision of the situation. The initial
AR X-ray vision system employed wireframe models textured with video images captured from the outdoor
environment. To overcome the issue of the rendered images appearing as though they were floating on top of
occluding artifacts, they implemented edge overlay visualizations to provide depth cues for X-ray vision not
available in our original system. This visualization process provides highlighted edges from the AR video
stream to give cues that the X-ray video stream is behind the occluding object via a technique similar to that
of Denis Kalkofen and his colleagues. A second technique, tunnel cutout, provides highlighted sectioning to
help outdoor users understand the number of occluded objects between them and the region viewed with
X-ray vision. The work of Chris Coffin and Tobias Höllerer inspired this second technique [14].
Data from all the sources, whether from the on-site team or from the control room, would be transferred
online. This should require little effort, and interference with the emergency work should be avoided at all
costs, which is why visual and sensor inputs are important. Control rooms should be built like space-flight or
military operation control rooms. Important factors in building such control rooms are the ability to deploy
them easily and strong communication facilities like GPS, GSM, and radio. The system must also be robust
to be successful.
VII. PROPOSED SYSTEM
The approach discussed in this paper focuses on bringing the power of these advanced environments to
mobile users. We therefore investigate adaptive context aware work environments and their possible linkage
with a user through augmented reality interfaces operating on mobile equipment. Our goal is to identify a
way of improving access to information, supporting teamwork, and facilitating communications. By linking
advanced control rooms to mobile users, the centralized parts of the system can get access to on site
information to improve the decision-making process. The availability of this information also means that we
can support the building of domain-specific corporate memories.
Our system draws on three major areas of computer science research: computer-supported cooperative work
(CSCW), wearable computing, and ubiquitous workspaces [15].
In disaster situations two immediate actions should take place: first responders deploy to the affected areas
and set up a command-and-control center, with people in the field providing information to the center.
Control center personnel use this data to direct resources to the appropriate places. We shall therefore discuss
some important points about the command center: how information is sent to the control center, processed,
and then used to direct the resources in the field.
The well-known event queue metaphor, which works well for a single user sitting in front of a single
computer using a GUI, breaks down in such an interactive workspace. Event Heap is a novel mechanism by
which multiple users, machines and applications can all simultaneously interact as consumers and generators
of system events. The Event Heap is a software infrastructure designed to provide for interactive workspaces
what the event queue provides for traditional single-user GUI’s. The system is an extension of TSpaces, a
tuplespace system from IBM Research. It is bulletin-board based, and applications may post and retrieve
events of interest. The Event Heap has the following features:
Multi-user, multi-application, multi-machine operation.
Support for heterogeneous machines and legacy applications.
Failure isolation.
For more details on the Event Heap, refer to [16].
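The tuplespace idea behind the Event Heap can be sketched as follows. This is an illustrative toy, not the actual iROS/TSpaces API; all class and field names here are assumed.

```python
import threading

class EventHeap:
    """Toy tuplespace-style event heap: any client may post events, and
    consumers retrieve events by matching on field patterns, so multiple
    users, machines and applications can interact without knowing about
    each other directly."""

    def __init__(self):
        self._events = []
        self._lock = threading.Lock()   # posts/retrieves may come from any machine

    def post(self, **fields):
        """Publish an event (a tuple of named fields) to the shared board."""
        with self._lock:
            self._events.append(dict(fields))

    def retrieve(self, **pattern):
        """Remove and return the oldest event matching all pattern fields,
        or None if no event matches."""
        with self._lock:
            for i, ev in enumerate(self._events):
                if all(ev.get(k) == v for k, v in pattern.items()):
                    return self._events.pop(i)
        return None

# Hypothetical usage in the fire scenario:
heap = EventHeap()
heap.post(source="fireman-1", type="marker", priority="high")
heap.post(source="doctor-2", type="advice", text="check left knee")
ev = heap.retrieve(type="marker")   # any consumer interested in markers
```

Because producers and consumers only agree on field names, components remain loosely coupled, which is the property that lets the Event Heap replace the single-user event queue.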
We discuss four separate approaches that begin to address the infrastructure requirements for future
workspaces. These include commercial initiatives such as Jini as well as research work that has been
undertaken to provide the ubiquitous computing infrastructure for intelligent and interactive rooms, and
research on future enterprise computing infrastructure. We further discuss a case study in which the
Metaglue multi-agent ubiquitous computing infrastructure, which forms the basis of the MIT Intelligent
Room, is integrated with the ODSI enterprise infrastructure from DSTC [17]. ODSI
implements the concept of an enterprise bus that allows the integration and coordination of a range of
services across an enterprise. Of particular interest is ODSI’s ability to use enterprise knowledge (in the form
of organizational structures, context, and processes) to orchestrate and coordinate work activities and allow
flexible access to, and use of, various enterprise applications and services.
Several candidate architectures and software infrastructures have been identified as providing at least some
of the desirable characteristics for the type of enterprise-enabled workspaces outlined above.
D. Metaglue
Metaglue Version 0.5 is a multi-agent based system that provides computational glue for large groups of
collaborative software agents and device controllers, such as those used in the Intelligent Room at MIT
Cambridge. It provides communication and discovery services and enables users to interact, subject to access
control, with software and data from any space. It also arbitrates among applications competing for resources.
Metaglue is implemented in Java, replacing the Remote Method Invocation (RMI) mechanism with one that
allows dynamic reconfiguration and reconnection so that agents can invisibly resume previously established,
but broken connections. It has support for self-dissemination through a process called glue spreading where
new agents can be created throughout the infrastructure as required.
Metaglue provides the support for managing systems of interactive, distributed computations i.e. those in
which different components run asynchronously on a heterogeneous collection of networked computers. It
provides high-level support for writing groups of interacting software agents and automates the connection
and distribution of these agents according to their computational requirements.
E. Intelligent Room Operating System (iROS)
The Interactive Workspaces project at Stanford University is developing a high level architecture for
organising multi-person, multi-modal interactions in an integrated space that combines multiple computer
systems. It has wall-sized interactive murals, a "collaborative table" with a bottom-projected display and
support for large-scale collaborative applications. The infrastructure also supports wireless networking and
allows the integration of handheld devices like PDAs and palm top computers.
The infrastructure supporting the intelligent room is based on iROS, the Intelligent Room Operating System.
It uses a tuple space environment and incorporates the Tspaces server from IBM. Instead of a traditional
event queue, the architecture utilises a central event heap abstraction, called the Event Heap [16], as the main
communication mechanism among the software components. The event queue metaphor, which works well
for a single user sitting in front of a single computer using a GUI, breaks down in an interactive workspace
with multiple users all using common hardware and software applications. The Event Heap is a mechanism
by which multiple users, machines and applications can all simultaneously interact as consumers and
generators of system events. Because the events are tuples in a tuple-space, analogous to a shared blackboard,
publishing and subscribing to the tuplespace is sufficient to participate in the room infrastructure while
allowing components to remain very loosely coupled.
A key requirement for the interactive workspace is a software infrastructure that allows users with mobile
computing devices, including laptops and PDA's, to seamlessly enter and leave the interactive workspace.
While in the workspace, they can use their devices to interact with the current application and/or control
various hardware devices in the room.
F. ODSI
The Open Distributed Services Infrastructure (ODSI) is a framework that supports the collaboration of
enterprise resources to meet the challenge of an adapting enterprise [17]. ODSI concentrates on collaborative
services rather than data-driven applications and hence maintains an enterprise perspective rather than a
software focus.
It provides a service integration environment designed to combine enterprise-modeling techniques, which
incorporate enterprise information such as policy, role and component descriptions, with lightweight
workflow and dynamic service integration technology. It provides lifecycle management for distributed
software services through a peer infrastructure and supports their collaboration to perform complex business
processes.
ODSI utilises the Breeze workflow engine for resource coordination and process orchestration and an
adaptive enterprise bus based on Elvin Version 3, a content-based routing system. Elvin is a lightweight
notification service. Producers detect events, changes in the state of an object, and notify Elvin, which in turn
passes the notification to any clients that have registered an interest with a subscription expression. The
difference between Elvin and most messaging systems is that Elvin’s actions are motivated by the message
content rather than by an address or topic name. The advantage of this approach, though it is harder to
implement, is that it decouples producers and consumers and thus promotes system evolution, integration
and client homogeneity [18].
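The content-based routing idea can be sketched as follows, with subscribers registering predicates over message content rather than subscribing to an address or topic. This is an illustrative toy, not the real Elvin protocol or API.

```python
class Notifier:
    """Toy content-based notification service: subscribers register a
    predicate over message content; every notification is delivered to
    each subscriber whose predicate matches, so producers never address
    consumers directly."""

    def __init__(self):
        self._subs = []

    def subscribe(self, predicate, callback):
        """Register interest via a predicate on message fields."""
        self._subs.append((predicate, callback))

    def notify(self, msg):
        """Route a producer's event to all matching subscribers."""
        for pred, cb in self._subs:
            if pred(msg):
                cb(msg)

# Hypothetical usage: a control room subscribes to severe events only.
received = []
bus = Notifier()
bus.subscribe(lambda m: m.get("severity", 0) >= 3, received.append)
bus.notify({"event": "smoke", "severity": 1})   # filtered out
bus.notify({"event": "fire", "severity": 4})    # delivered
```

The producer publishes the same message regardless of who is listening; only the subscription expressions decide delivery, which is what decouples the two sides.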
G. Jini
Jini is an open architecture that enables developers to create a federation of highly adaptive, network-centric
entities, devices or services. Jini technology, which is written in Java, has been designed to build networks
that are scalable, evolvable and flexible. These types of networks are those that are typically required in
dynamic, distributed computing environments.
The overall goal is to turn the network into a flexible, easily administered tool with which resources can be
found by any device or user. Resources can be implemented as hardware devices, software programs or a
combination of the two. One focus of the system is to add a dynamic element to the network that better
reflects the nature of a workgroup by enabling the ability to add and delete services flexibly without stopping
and starting the system.
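The dynamic add-and-delete behaviour described above can be sketched as a simple lookup service. This is illustrative only; the real Jini architecture is Java-based and uses lookup services, leases, and downloadable proxies rather than this invented API.

```python
class ServiceRegistry:
    """Toy dynamic service registry: services join and leave the
    federation at run time, and clients discover them by name, without
    stopping and restarting the system."""

    def __init__(self):
        self._services = {}

    def register(self, name, service):
        """A service joins the federation."""
        self._services[name] = service

    def unregister(self, name):
        """A service leaves; clients simply stop finding it."""
        self._services.pop(name, None)

    def lookup(self, name):
        """Clients discover a service by name, or get None."""
        return self._services.get(name)

# Hypothetical usage: a printing service comes and goes dynamically.
registry = ServiceRegistry()
registry.register("printer", lambda doc: f"printed {doc}")
out = registry.lookup("printer")("site-map")
registry.unregister("printer")   # removed without any restart
```

In the real Jini model, registrations additionally carry leases that expire unless renewed, so crashed services disappear from the network automatically.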
VIII. IMPLEMENTATION
To place the augmented reality user interface in context of the fire emergency, we present a scenario of a fire
at a remote site.
Police and/or fire stations receive the first call and they may also call a local ambulance service and ask for
the closest hospital helicopter to be placed on standby. The fire site is a three-hour drive or a one-hour
helicopter flight away. The first party (police/fire service) arrives and requests the helicopter to be sent to the
fire site. The communication with fire service control room, police control room, the specialist doctor, the
recovery doctor, and the paramedic is composed of audio, video, and augmented reality information. During
this initial phase, the collaboration is a same time – different place configuration.
The fireman starts with a life rescue operation, performed by a thorough check of the fire site aided by
augmented reality. Consider a fireman entering a building where he cannot see: in many countries, such as
the US and India, detailed building plans are held in municipal databases, so it is possible to render the plan
onto the real world the fireman sees (based on research in the
architectural domain [13]; see Figure 7 and Figure 8) and thus make visible to him pathways that would
otherwise be invisible. The fireman is also connected to a team of doctors, so he can perform an initial diagnosis
of the patient’s injuries if any. The fireman wears a light wearable computer and views data via a see-through
HMD (head mounted display), while the specialist fireman, team of doctors operate from a traditional office
workstation, and the police is operating a notebook computer. All are communicating via a wireless network.

Fig.7. AR application sample used for the virtual reconstruction of architectural heritage study in the roman city of Gerunda, Girona,
Spain, carried out by the authors in the LMVC[13].

Fig.8. Examples of a collaborative scene created from different models. [11]

Specialist firemen can advise the fireman on how to control the fire and/or how to evacuate any civilians.
The equipment can also be used by a civilian with moderate training in fire control; with the help of a fire
emergency kit it gives the control room and doctors eyes at the site as early as possible, and may thus prove
crucial in saving lives and preventing damage.
The specialist fireman may view multiple sites at a time through video images displayed on control-room
workstations and work out the source of the fire. The fireman can also indicate regions that have been
checked and regions where search and rescue has yet to be performed. In addition, a doctor can advise the
fireman/civilian about procedures while the fireman is tending to a patient. The doctors
can view a region of interest of any patient via digital video images displayed on their office workstation
while the fireman/civilian concurrently views the patient through their HMD. A doctor may indicate regions
on the patient’s body for further investigation by drawing augmented reality lines over the video image.
While the fireman is performing the diagnostics and treatment, he/she can place augmented reality
information markers on the patient’s body or for documenting the scene. These markers represent a record of
the fire fighter’s case notes. This is a different time – different place configuration, as these stickers are
reviewed by personal, such an ER nurse, at a later date in the hospital, or a forensic expert to explore the
cause of event. The fire fighter points the eye cursor at a region of interest, for example the knee, indicates
the priority, and speaks into a microphone to record an audio memo. The different priorities could be
designated by color, icon, or fill pattern. The priority is indicated either by voice recognition or binary
switches placed on the paramedics clothing. AR-Markers can also be used to guide the supporting teams to
inform them about the ground situation accurately and quickly by providing them the augmented reality
information stickers as world relative information. After the incident the markers and stored information
collected by augmented system can be used by the insurance companies and/or investigation agencies.
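The marker workflow described above can be pictured as a simple data record attached to a world position. The field names, priority scale, and identifiers below are illustrative assumptions for the sake of a sketch, not part of the system described in this paper.

```python
from dataclasses import dataclass, field
from enum import Enum
from time import time
from typing import List, Tuple

class Priority(Enum):
    # Illustrative color-coded priority levels; the paper leaves the exact scale open.
    LOW = "green"
    MEDIUM = "yellow"
    HIGH = "red"

@dataclass
class ARMarker:
    """World-relative AR information sticker placed by a field operative."""
    position: Tuple[float, float, float]   # point in a shared world coordinate frame
    priority: Priority
    audio_memo: str                        # URI of the recorded voice note
    author: str                            # hypothetical operative ID, for case-note review
    timestamp: float = field(default_factory=time)
    reviewed_by: List[str] = field(default_factory=list)

# The firefighter points the eye cursor at the knee, tags a priority,
# and records an audio memo:
knee_marker = ARMarker(position=(1.2, 0.4, 0.0),
                       priority=Priority.HIGH,
                       audio_memo="memo_0042.wav",
                       author="fireman-07")

# Later (different time, different place) an ER nurse reviews the marker:
knee_marker.reviewed_by.append("er-nurse-03")
```

Because each marker carries its author, timestamp, and review trail, the same records can later serve the insurance and investigation uses mentioned above without any extra bookkeeping by the emergency team.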
Using mobile AR systems, the field operatives can thus define annotated regions on the ground, denoting
dangerous areas, completed searches, and areas that require immediate attention. The field operatives can
also quickly edit 3D representations of buildings to show which portions have been damaged. Control center
personnel can then direct field operatives to investigate particular areas by marking a digital 3D map that is
reflected in the physical world viewed by the user in the field through AR technology.
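The round trip between the control center and the field can be pictured as a tiny publish/subscribe loop: an annotation made on the digital 3D map is broadcast and rendered in the operative's AR view. The bus, topic name, and annotation format below are illustrative assumptions rather than the paper's actual infrastructure, which would more likely resemble the event-heap style systems of [17] and [18].

```python
from collections import defaultdict

class AnnotationBus:
    """Minimal in-memory stand-in for the wireless event channel that keeps
    field HMDs and control-room workstations showing the same annotations."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, annotation):
        # Deliver the annotation to every device listening on this topic.
        for callback in self._subscribers[topic]:
            callback(annotation)

bus = AnnotationBus()
field_hmd_view = []   # stands in for what the operative's HMD would render

# The field HMD listens for annotations made on the control room's 3D map.
bus.subscribe("map/annotations", field_hmd_view.append)

# Control-center personnel mark a region for investigation; the same
# annotation appears, world-registered, in the operative's AR view.
bus.publish("map/annotations",
            {"region": "sector-B2", "status": "investigate"})
```

The same channel works in the opposite direction: field operatives publishing "searched" or "dangerous" region annotations that the control room's data wall subscribes to.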
IX. CONCLUSION
The aim of this paper was to use the power of workflow management, augmented reality, and coordination to improve the emergency service provided in case of a fire emergency. We have shown that mobile equipment integrated with adaptive context-aware work environments can prove beneficial in emergency situations such as fire. With the help of emergency AR markers, X-ray vision, the hand-of-God technique, and other advanced technologies, information is transferred through the system effectively; multimedia may also be used for the same purpose. Access to the information, both for the mobile user and for those in the control room, has been demonstrated. An important advantage of such a system is the automatic recording of onsite data, which helps to build records of these incidents without interfering with the work of the emergency team. The records can later be used for training purposes.
REFERENCES
[1] Alejandro Schianchi (University of Tres de Febrero, Argentina), “Transcending Space through Mobile Augmented Reality as a Field for Artistic Creation”, in Media in Transition 8 (MiT8): Public Media, Private Media; web.mit.edu/commforum/mit8/subs/abstracts.html.
[2] Milgram, Paul, and Fumio Kishino, “A Taxonomy of Mixed Reality Visual Displays”, IEICE Transactions on Information and Systems E77-D, 9 (September 1994), 1321-1329.
[3] S. Rathnam, “ACM SIGOIS Bulletin, Special issue: business process reengineering,” vol. 16:1, ACM, 1995.
[4] E. L. Quarantelli, “Problematical Aspects of the Information/Communication Revolution for Disaster Planning and Research: Ten Non-Technical Issues and Questions,” Disaster Prevention and Management 6(1), 1997. See also Sharon S. Dawes, Thomas Birkland, Giri Kumar Tayi, and Carrie A. Schneider, “Information, Technology, and Coordination: Lessons from the World Trade Center Response”, Center for Technology in Government, University at Albany, State University of New York, 2004; http://www.ctg.albany.edu/publications/reports/wtc_lessons/wtc_lessons.pdf.
[5] Vineet R. Kamat and Sherif El-Tawil, “Rapid Post-Disaster Evaluation of Building Damage Using Augmented Situational Visualization”.
[6] H. Kitano, S. Tadokoro, I. Noda, H. Matsubara, T. Takahashi, A. Shinjou, and S. Shimada, “RoboCup Rescue: Search and rescue in large-scale disasters as a domain for autonomous agents research,” in IEEE Conf. on Man, Systems, and Cybernetics (SMC-99), 1999.
[7] Pausch, Randy, Thomas Crea, and Matthew Conway, “A Literature Survey for Virtual Environments: Military Flight Simulator Visual Systems and Simulator Sickness”, Presence: Teleoperators and Virtual Environments 1, 3 (Summer 1992), 344-363.
[8] Holloway, Richard, “Registration Errors in Augmented Reality”, Ph.D. dissertation, UNC Chapel Hill Department of Computer Science technical report TR95-016 (August 1995).
[9] Bajura, Michael, and Ulrich Neumann, “Dynamic Registration Correction in Video-Based Augmented Reality Systems”, IEEE Computer Graphics and Applications 15, 5 (September 1995), 52-60.
[10] Meyer, Kenneth, Hugh L. Applewhite, and Frank A. Biocca, “A Survey of Position Trackers”, Presence: Teleoperators and Virtual Environments 1, 2 (Spring 1992), 173-200.
[11] Rolland, Jannick, Rich Holloway, and Henry Fuchs, “A Comparison of Optical and Video See-Through Head-Mounted Displays”, SPIE Proceedings volume 2351: Telemanipulator and Telepresence Technologies (Boston, MA, 31 October - 4 November 1994), 293-307.
[12] Workflow Management Coalition, The Workflow Reference Model (WFMC-TC-1003, 19-Jan-95, 1.1); Terminology & Glossary (WFMC-TC-1011, Feb-1999, 3.0); The Workflow Reference Model: 10 Years On; http://www.wfmc.org/reference-model.html.
[13] http://www.isprs.org/proceedings/XXXV/congress/comm3/papers/399.pdf
[14] Sánchez, J., and Borro, D., “Automatic Augmented Video Creation for Markerless Environments”, Proceedings of the 2nd International Conference on Computer Vision Theory and Applications (VISAPP’07), Barcelona, Spain, pp. 519-522, 2007.
[15] C. Coffin and T. Hollerer, “Interactive Perspective Cut-Away Views for General 3D Scenes,” Proc. IEEE Symp. 3D User Interfaces (3DUI 06), IEEE CS Press, 2006, pp. 25-28.
[16] Bruce H. Thomas and Wayne Piekarski (University of South Australia), “Through-Walls Collaboration”, IEEE Pervasive Computing, IEEE CS, 1536-1268/09.
[17] Johanson, B., Fox, A., Hanrahan, P., and Winograd, T., “The Event Heap: An Enabling Infrastructure for Interactive Workspaces”, CS Tech Report CS 2000-02, 2000.
[18] Fitzpatrick, G., Mansfield, T., Kaplan, S., Arnold, D., Phelps, T., and Segall, B., “Augmenting the Workaday World with Elvin,” in Proceedings of ECSCW’99, Copenhagen, 431-451, Kluwer Academic Publishers, 1999.

41

More Related Content

What's hot

Virtual Reality in Concept Design
Virtual Reality in Concept Design Virtual Reality in Concept Design
Virtual Reality in Concept Design IDES Editor
 
Seminar report meta
Seminar report metaSeminar report meta
Seminar report metasn boss
 
Picto vision - using image recognition to turn sketches into communication
Picto vision - using image recognition to turn sketches into communicationPicto vision - using image recognition to turn sketches into communication
Picto vision - using image recognition to turn sketches into communicationDavid Wright
 
COMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTESCOMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTESho58
 
Quest 2 and the future of metaverse v2.0 210908
Quest 2 and the future of metaverse v2.0  210908Quest 2 and the future of metaverse v2.0  210908
Quest 2 and the future of metaverse v2.0 210908Michael Lesniak
 
Reach into the computer & grab a pixel
Reach into the computer & grab a pixelReach into the computer & grab a pixel
Reach into the computer & grab a pixelKIIT University
 

What's hot (9)

weAR MARKOVERSE eng 2021
weAR MARKOVERSE eng 2021weAR MARKOVERSE eng 2021
weAR MARKOVERSE eng 2021
 
Virtual Reality in Concept Design
Virtual Reality in Concept Design Virtual Reality in Concept Design
Virtual Reality in Concept Design
 
M1802028591
M1802028591M1802028591
M1802028591
 
Seminar report meta
Seminar report metaSeminar report meta
Seminar report meta
 
Picto vision - using image recognition to turn sketches into communication
Picto vision - using image recognition to turn sketches into communicationPicto vision - using image recognition to turn sketches into communication
Picto vision - using image recognition to turn sketches into communication
 
COMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTESCOMUTER GRAPHICS NOTES
COMUTER GRAPHICS NOTES
 
Quest 2 and the future of metaverse v2.0 210908
Quest 2 and the future of metaverse v2.0  210908Quest 2 and the future of metaverse v2.0  210908
Quest 2 and the future of metaverse v2.0 210908
 
Tangible A
Tangible  ATangible  A
Tangible A
 
Reach into the computer & grab a pixel
Reach into the computer & grab a pixelReach into the computer & grab a pixel
Reach into the computer & grab a pixel
 

Similar to Augmented Reality for Fire Emergencies

A mobile agent based approach for data management to support 3 d emergency pr...
A mobile agent based approach for data management to support 3 d emergency pr...A mobile agent based approach for data management to support 3 d emergency pr...
A mobile agent based approach for data management to support 3 d emergency pr...Ijrdt Journal
 
Dk3211391145
Dk3211391145Dk3211391145
Dk3211391145IJMER
 
Augmented Reality In Environment
Augmented Reality In EnvironmentAugmented Reality In Environment
Augmented Reality In EnvironmentAmanda Summers
 
Real time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applicationsReal time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applicationsijujournal
 
Real time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applicationsReal time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applicationsijujournal
 
Augmented reality documentation
Augmented reality documentationAugmented reality documentation
Augmented reality documentationBhargav Doddala
 
IRJET-A Survey on Augmented Reality Technologies and Applications
IRJET-A Survey on Augmented Reality Technologies and ApplicationsIRJET-A Survey on Augmented Reality Technologies and Applications
IRJET-A Survey on Augmented Reality Technologies and ApplicationsIRJET Journal
 
Augmented Reality pdf
Augmented Reality pdf Augmented Reality pdf
Augmented Reality pdf Qualcomm
 
Augmented Reality In Education
Augmented Reality In EducationAugmented Reality In Education
Augmented Reality In EducationMohammad Athik
 
Ubiquitous computing abstract
Ubiquitous computing abstractUbiquitous computing abstract
Ubiquitous computing abstractPriti Punia
 
IRJET- Data Visualization using Augmented Reality
IRJET- Data Visualization using Augmented RealityIRJET- Data Visualization using Augmented Reality
IRJET- Data Visualization using Augmented RealityIRJET Journal
 
Chapter 5 - Augmented Reality.pptx
Chapter 5 - Augmented Reality.pptxChapter 5 - Augmented Reality.pptx
Chapter 5 - Augmented Reality.pptxAmanuelZewdie4
 
Handheld augmented reality_for_underground_infrast
Handheld augmented reality_for_underground_infrastHandheld augmented reality_for_underground_infrast
Handheld augmented reality_for_underground_infrastHelloWorld121381
 
UBIQUITOUS COMPUTING Its Paradigm, Systems & Middleware
UBIQUITOUS COMPUTING Its Paradigm, Systems & MiddlewareUBIQUITOUS COMPUTING Its Paradigm, Systems & Middleware
UBIQUITOUS COMPUTING Its Paradigm, Systems & Middlewarevivatechijri
 
Human Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand GestureHuman Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand GestureIJAEMSJORNAL
 
IRJET-Peerless Home Area Network for Guesstimating in Smart Grid
IRJET-Peerless Home Area Network for Guesstimating in Smart GridIRJET-Peerless Home Area Network for Guesstimating in Smart Grid
IRJET-Peerless Home Area Network for Guesstimating in Smart GridIRJET Journal
 

Similar to Augmented Reality for Fire Emergencies (20)

A mobile agent based approach for data management to support 3 d emergency pr...
A mobile agent based approach for data management to support 3 d emergency pr...A mobile agent based approach for data management to support 3 d emergency pr...
A mobile agent based approach for data management to support 3 d emergency pr...
 
Dk3211391145
Dk3211391145Dk3211391145
Dk3211391145
 
Sample seminar report
Sample seminar reportSample seminar report
Sample seminar report
 
Augmented Reality In Environment
Augmented Reality In EnvironmentAugmented Reality In Environment
Augmented Reality In Environment
 
AUGMENTED REALITY
AUGMENTED REALITYAUGMENTED REALITY
AUGMENTED REALITY
 
Real time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applicationsReal time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applications
 
Real time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applicationsReal time hand gesture recognition system for dynamic applications
Real time hand gesture recognition system for dynamic applications
 
Augmented reality documentation
Augmented reality documentationAugmented reality documentation
Augmented reality documentation
 
IRJET-A Survey on Augmented Reality Technologies and Applications
IRJET-A Survey on Augmented Reality Technologies and ApplicationsIRJET-A Survey on Augmented Reality Technologies and Applications
IRJET-A Survey on Augmented Reality Technologies and Applications
 
Augmented Reality pdf
Augmented Reality pdf Augmented Reality pdf
Augmented Reality pdf
 
RITMAN2012-kun
RITMAN2012-kunRITMAN2012-kun
RITMAN2012-kun
 
HGR-thesis
HGR-thesisHGR-thesis
HGR-thesis
 
Augmented Reality In Education
Augmented Reality In EducationAugmented Reality In Education
Augmented Reality In Education
 
Ubiquitous computing abstract
Ubiquitous computing abstractUbiquitous computing abstract
Ubiquitous computing abstract
 
IRJET- Data Visualization using Augmented Reality
IRJET- Data Visualization using Augmented RealityIRJET- Data Visualization using Augmented Reality
IRJET- Data Visualization using Augmented Reality
 
Chapter 5 - Augmented Reality.pptx
Chapter 5 - Augmented Reality.pptxChapter 5 - Augmented Reality.pptx
Chapter 5 - Augmented Reality.pptx
 
Handheld augmented reality_for_underground_infrast
Handheld augmented reality_for_underground_infrastHandheld augmented reality_for_underground_infrast
Handheld augmented reality_for_underground_infrast
 
UBIQUITOUS COMPUTING Its Paradigm, Systems & Middleware
UBIQUITOUS COMPUTING Its Paradigm, Systems & MiddlewareUBIQUITOUS COMPUTING Its Paradigm, Systems & Middleware
UBIQUITOUS COMPUTING Its Paradigm, Systems & Middleware
 
Human Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand GestureHuman Computer Interaction Based HEMD Using Hand Gesture
Human Computer Interaction Based HEMD Using Hand Gesture
 
IRJET-Peerless Home Area Network for Guesstimating in Smart Grid
IRJET-Peerless Home Area Network for Guesstimating in Smart GridIRJET-Peerless Home Area Network for Guesstimating in Smart Grid
IRJET-Peerless Home Area Network for Guesstimating in Smart Grid
 

More from idescitation (20)

65 113-121
65 113-12165 113-121
65 113-121
 
69 122-128
69 122-12869 122-128
69 122-128
 
71 338-347
71 338-34771 338-347
71 338-347
 
72 129-135
72 129-13572 129-135
72 129-135
 
74 136-143
74 136-14374 136-143
74 136-143
 
80 152-157
80 152-15780 152-157
80 152-157
 
82 348-355
82 348-35582 348-355
82 348-355
 
84 11-21
84 11-2184 11-21
84 11-21
 
62 328-337
62 328-33762 328-337
62 328-337
 
46 102-112
46 102-11246 102-112
46 102-112
 
47 292-298
47 292-29847 292-298
47 292-298
 
49 299-305
49 299-30549 299-305
49 299-305
 
57 306-311
57 306-31157 306-311
57 306-311
 
60 312-318
60 312-31860 312-318
60 312-318
 
5 1-10
5 1-105 1-10
5 1-10
 
11 69-81
11 69-8111 69-81
11 69-81
 
14 284-291
14 284-29114 284-291
14 284-291
 
15 82-87
15 82-8715 82-87
15 82-87
 
29 88-96
29 88-9629 88-96
29 88-96
 
43 97-101
43 97-10143 97-101
43 97-101
 

Recently uploaded

fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingTeacherCyreneCayanan
 
General AI for Medical Educators April 2024
General AI for Medical Educators April 2024General AI for Medical Educators April 2024
General AI for Medical Educators April 2024Janet Corral
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfAyushMahapatra5
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxVishalSingh1417
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 

Recently uploaded (20)

fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
General AI for Medical Educators April 2024
General AI for Medical Educators April 2024General AI for Medical Educators April 2024
General AI for Medical Educators April 2024
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdf
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 

Augmented Reality for Fire Emergencies

  • 1. Proc. of Int. Conf. on Recent Trends in Communication and Computer Networks Augmented Reality for Fire & Emergency Services Aameer R. Wani1, Sofi Shabir2, Roohie Naaz2 National Institute of Technology/Department of IT, Srinagar, India 1 Email: aameer.rafiq@gmail.com 2 Email: {shabir, roohie}@nitsri.net Abstract— this paper presents a proposed system for improving collaboration between different agencies and decision makers involved in a fire emergency situation with the help of wearable augmented reality (AR). This field considers the possibility of transcending the physical and territorial boundaries of a real space; [1] it is applicable to all time space configurations of hybrid (Real + Virtual) world. User interaction is through the use of hands and/or gestures. Rapid flow of information across different devices involved in the process such as head mounted display, PDA, laptop, data walls, and desktop is critical to allow this form of collaboration to be integrated with adaptive context aware work environments based on workflow management systems. Functionality of such form of collaboration system is illustrated in the scenario of a fire emergency situation. Index Terms—augmented reality; fire and emergency Services; disaster management; workflow management; Collaboration I. INTRODUCTION Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual Reality as it is more commonly called. VE technologies completely immerse a user inside a synthetic environment. While immersed, the user cannot see the real world around him. In contrast, AR allows the user to see the real world, with virtual objects superimposed upon or composited with the real world. Therefore, AR supplements reality, rather than completely replacing it. Ideally, it would appear to the user that the virtual and real objects coexisted in the same space, Figure 1 shows an example of what this might look like. It shows a real desk with a real phone. 
Inside this room are also a virtual lamp and two virtual chairs. Note that the objects are combined in 3-D, so that the virtual lamp covers the real table, and the real table covers parts of the two virtual chairs. AR can be thought of as the "middle ground" between VE (completely synthetic) and telepresence (completely real) [2]. Figure 1. Real Desk with virtual lamp and two virtual chairs. (Courtesy ECRC) DOI: 03.LSCS.2013.7.590 © Association of Computer Electronics and Electrical Engineers, 2013
  • 2. Augmented Reality enhances a user's perception of and interaction with the real world. The virtual objects display information that the user cannot directly detect with his senses. The information conveyed by the virtual objects helps a user perform real-world tasks. AR is a specific example of what Fred Brooks calls Intelligence Amplification (IA): using the computer as a tool to make a task easier for a human to perform. Augmented reality systems, in order to meet growing expectations, will have to be integrated with backbone systems that can offer the necessary computational power and storage capacities for providing elaborate context aware environments and improved information access. In the world of information systems, business process engineering and workflow management systems have been integrated and are properly linked to databases [3]. A wearable computer with an augmented reality (AR) user interface allows for exciting new collaborative applications to be deployed in an outdoor environment. Augmented reality is a term created to identify systems which are mostly synthetic with some real world imagery added such as texture mapping video onto virtual objects. This is a distinction that will fade as the technology improves and the virtual elements in the scene become less distinguishable from the real ones. II. DISASTER MITIGATION AND E MERGENCY SERVICE Disasters in urban areas with different disaster response teams are in most cases not sufficient for necessary rescue work. An efficient integrated disaster management could support their activities and help to limit human losses. Management problems during Disasters whether natural (floods, Earth quakes etc.), industrial (nuclear, fire etc.), Medical (Epidemic) can be improved by effective collaboration between different entities involved in this regard. 
Different techniques could be used for improving collaboration between these entities, but here we focus on the technological aspect in general and augmented reality in particular. Improvement in Collaboration between different agencies and decision makers is thus the focus of this paper. Reducing the impact of disasters requires a complex mix of technical and social endeavors, and no single prescription or discipline can provide all the answers. Indeed, disaster researchers have frequently expressed concerns that technology not be viewed as a panacea. [4] Technological, organizational, and social factors and depends on a solid understanding of disaster management as well as the technologies. III. PREVIOUS WORK / LITERATURE SURVEY A. ARS “An Augmented Reality System for Earthquake Disaster Response” An ARS superposes an image of reality with a virtual image that extends the visible scenery of reality. Its use in the context of disaster management is to represent different invisible disaster-relevant information (humans hidden by debris, simulations of damages and measures) and overlay it with the image of reality. The design of such a system is a challenge in many ways, since the system integrates different methods like mapping, photogrammetry, inertial navigation and differential GPS. Check the paper that introduces into the problems of earthquake disaster response and motivates the use of an ARS. Also, it describes the used hardware components and discusses the data available and necessary for the system to be operational under real conditions. B. Rapid Post Rapid Post: Disaster Evaluation of Building Damage Using Augmented Situational Visualization, research being conducted at the University of Michigan to design and implement a new reconnaissance technology to rapidly evaluate damage to buildings in the aftermath of natural and human-perpetrated disasters (e.g. earthquakes, explosions). 
The technology being designed will allow on-site damage inspectors to retrieve previously stored building information, superimpose that information onto a real building in augmented reality, and evaluate damage, structural integrity, and safety by measuring and interpreting key differences between a baseline image and the real facility view. In addition, by using feedback from the actual building view, it will be possible to update structural analysis models and conduct detailed what-if simulations to explore how a building might collapse if critical structural members fail, or how the building’s stability could best be enhanced by strengthening key structural members. All damage evaluation analyses will be conducted on-site, in real-time, and at a very low cost. [5] C. RoboCupRescue RoboCupRescue simulation aims at simulating large-scale disasters and exploring new ways for the autonomous coordination of rescue teams (see Figure 2). These goals lead to challenges like the coordination 33
  • 3. of heterogeneous teams with more than 30 agents, the exploration of a large-scale environment in order to localize victims, as well as the scheduling of time-critical rescue missions. Moreover, the simulated Figure 2. a 3D visualization of RoboCupRescue model for the city of Kobe, Japan environment is highly dynamic and only partially observable by a single agent. Agents have to plan and decide their actions asynchronously in real-time. Core problems are path planning, coordinated firefighting, and coordinated search and rescue of victims. The advantage of interfacing RoboCupRescue simulation with wearable computing is twofold: First, data collected from a real interface allows improving the disaster simulation towards disaster reality Second, agent software developed within RoboCupRescue might be advantageous in real disasters. [6] IV. DESIGN ISSUES A. The Registration Problem One of the most basic problems currently limiting Augmented Reality applications is the registration problem. The objects in the real and virtual worlds must be properly aligned with respect to each other, or the illusion that the two worlds coexist will be compromised. Registration problems also exist in Virtual Environments, but they are not nearly as serious because they are harder to detect than in Augmented Reality. Since the user only sees virtual objects in VE applications, registration errors result in visual-kinesthetic and visual-proprioceptive conflicts. Such conflicts between different human senses may be a source of motion sickness [7]. Because the kinesthetic and proprioceptive systems are much less sensitive than the visual system, visual-kinesthetic and visual-proprioceptive conflicts are less noticeable than visual- visual conflicts. For example, a user wearing a closed-view HMD might hold up her real hand and see a virtual hand. This virtual hand should be displayed exactly where she would see her real hand, if she were not wearing an HMD. 
But if the virtual hand is wrong by five millimeters, she may not detect that unless actively looking for such errors. The same error is much more obvious in a see-through HMD, where the conflict is visual-visual. Visual capture also contributes to these effects. Augmented Reality demands much more accurate registration than Virtual Environments. Registration errors are difficult to adequately control because of the high accuracy requirements and the numerous sources of error. These sources of error can be divided into two types: static and dynamic. Static errors are the ones that cause registration errors even when the user's viewpoint and the objects in the environment remain completely still. Dynamic errors are the ones that have no effect until either the viewpoint or the objects begin moving; [8] discusses the sources and magnitudes of registration errors in detail. The registration requirements for AR are difficult to satisfy, but a few systems have achieved good results. An open-loop system shows registration typically within ±5 millimeters from many viewpoints for an object at about arm's length. Closed-loop systems, however, have demonstrated nearly perfect registration, accurate to within a pixel [9].

B. Sensing

Many sensors are used in Augmented Reality systems, and these sensors have certain problems associated with them. Specifically, AR demands more from trackers and sensors in three areas: a) greater input variety and bandwidth, b) higher accuracy, and c) longer range. The biggest single obstacle to building effective Augmented Reality systems is the requirement of accurate, long-range sensors and trackers that report the locations of the user and the surrounding objects in the environment. For details of tracking technologies, see the surveys in [10].
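To put these figures in perspective, a back-of-envelope sketch converts a tracker position error into an on-screen error. The arm's length (0.7 m) and the HMD parameters (40-degree horizontal field of view, 1280 pixels wide) are assumed values for illustration; none of them are specified in the paper.

```python
import math

def registration_error_pixels(pos_error_m, distance_m, fov_deg, h_res_px):
    """Angle subtended by a tracker position error at a given viewing
    distance, in degrees, and the corresponding horizontal pixel error
    for a simple HMD model (uniform pixels across the field of view)."""
    angle_deg = math.degrees(math.atan2(pos_error_m, distance_m))
    px_per_deg = h_res_px / fov_deg
    return angle_deg, angle_deg * px_per_deg

# The open-loop figure quoted above: ~5 mm error for an object at about
# arm's length, viewed through the hypothetical HMD described in the lead-in.
angle, px = registration_error_pixels(0.005, 0.7, 40.0, 1280)
```

Under these assumptions the ±5 mm open-loop figure corresponds to roughly 13 pixels of misalignment, which helps explain why a see-through display exposes errors that a closed-view VE would hide.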
C. Attributes

The desired quality attributes of a system are derived from the business drivers of the project: target market, timeframe, and application domain. However, based on the technical report by Bernd, we can say that in general there are requirements that any serious software architecture for Augmented Reality must address. Run-time attributes include performance (latency of tracking and rendering, with a maximum limit of 10 ms), reliability (tracking accuracy is important for many systems, but the required degree of accuracy varies from below 1 mm for medical applications to several meters for outdoor navigation), and mobility (wireless operation is important for most systems, often using WaveLAN networks). Non-run-time attributes include portability, i.e. the number of supported hardware platforms: several systems support only a single platform (usually Windows 2000 or Linux), while several others support multiple platforms. Issues also depend upon whether video see-through or optical see-through HMDs are used.

V. COLLABORATIVE APPROACH

The AR system is particularly suited to communicating the knowledge of experts from different fields who have to work together, as has been described in [12]. Augmented reality can be used to support collaboration in an outdoor environment because of the ability to use all four time-space configurations, the use of hand and head gestures as the main form of user interaction, and the seamless movement of information across different devices (HMD, PDA, laptop, data walls, and desktop). A major research issue is the interaction techniques for users to control and manipulate augmented reality information in the field. We propose the use of augmented reality in the field (outdoors) as a fundamental collaboration tool that may be used across a number of application domains, such as medical, maintenance, military, search and rescue, and GIS visualization.
A number of researchers are investigating augmented reality with wearable computers for distributive collaboration systems [13], but we are proposing an overall framework to integrate augmented reality into a traditional workflow. Overlaying contextually aware information on the physical world is a powerful cuing mechanism to highlight or present relevant information. This ability to view the physical world and augmented virtual information in place, shared between multiple actors, is the key feature of this form of collaboration technology. A key difference with this form of collaboration is the artifact the users are manipulating. This artifact can be characterized by the following features: firstly, it corresponds to the physical world; secondly, the size of the information space reflects physical objects in a large area; and thirdly, the users are able to physically walk within the information space and the physical world simultaneously. This form of collaboration is similar to distributive virtual environment collaboration systems. Both have manipulable 3D models, and the position of the users affects their vantage point. The significant differences are that the distances the users are allowed to physically move are larger and there is a one-to-one correspondence with the physical world [3].

VI. COMMUNICATION AND WORKFLOW MANAGEMENT SYSTEM

The communication between various entities can be achieved effectively by a workflow management system. The workflow reference model states that all workflow systems contain a number of generic components which interact in a variety of ways. To achieve interoperability between workflow products, a standardized set of interfaces and data interchange formats is necessary. A number of distinct interoperability scenarios can be constructed by reference to such interfaces.
For example, a standard process to be used by a number of groups of users, perhaps in different organizations, could be defined using one tool and exported to each group, who may use different workflow systems. Also, a given user may use one workflow client application to receive tasks generated by a variety of workflow systems. The model identifies the major components and interfaces. Thus workflow management systems can coordinate the execution of processes and inter-process communication, and provide interfaces through which databases can be accessed. As workflow management systems are themselves only a component in an adaptive context-aware work environment, communication can be implemented in a very robust form via message passing through an enterprise bus. A diagrammatic representation is shown in Figure 3 [11]. The effective utilization of resources is of the utmost importance, as having the right thing at the right time is critical for success, which can only be achieved by assessing the situation even before the emergency
service unit reaches the affected site.

Fig. 3. Workflow management system

In today's world, critical information in case of a fire emergency is transmitted with the help of mobile phones, but more advanced communication facilities are not standard equipment in fire emergency systems. Technologies like radio, GPS, and mobile phones can form the basis of an advanced emergency squad, and augmented reality could be the platform for this advanced facility. This concept is further explained by Figure 4.

Fig. 4. Diagram to demonstrate the fire emergency system

The Wearable Computer Lab is investigating this concept; research topics include Hand of God, tabletop collaboration technologies, distributive VR/AR, remote active tangible interactions, mobile AR X-ray vision, and input devices for wearable computers.

Hand of God (Figure 5). Command-and-control centers require support for intense collaboration, so the technology should be intuitive and simple to operate. Imagine a commander communicating with support people in the field and wanting to send a support person to a new position. The simplest method would be for the commander to physically point to a map that the field operative sees as a virtual representation. This technology supports through-walls collaboration, with the commander providing meaningful information to the user in the field.
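At the data level, the map-pointing interaction described above reduces to sending the field user a world-relative annotation. A minimal sketch follows; the field names, coordinate choice, and wire format are illustrative assumptions, not taken from the paper.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FieldAnnotation:
    """World-relative marker a commander places on the shared map.
    The field operative's AR view would render it as a virtual signpost."""
    lat: float
    lon: float
    label: str
    kind: str  # e.g. "move-to", "avoid", "search"

def to_wire(a: FieldAnnotation) -> str:
    """Serialize an annotation for transport over the wireless link."""
    return json.dumps(asdict(a))

def from_wire(msg: str) -> FieldAnnotation:
    """Reconstruct an annotation on the field user's wearable."""
    return FieldAnnotation(**json.loads(msg))

# Commander points at the map; the wearable receives and rebuilds the marker.
order = FieldAnnotation(34.0837, 74.7973, "regroup here", "move-to")
received = from_wire(to_wire(order))
```

The point of the sketch is that the collaboration artifact is tiny and structured, so it travels easily across the heterogeneous devices (HMD, PDA, laptop, desktop) mentioned earlier.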
Fig. 5. Hand of God (HOG) system. (a) Indoor expert employing the HOG interface. (b) Head-mounted display view as seen by the outdoor participant. (c) Physical props as signposts for the outdoor user [39].

Mobile AR X-Ray Vision (Figure 6). Outdoor users must physically move or walk to view different aspects of the physical world while performing their tasks. With through-walls collaboration, this could require them to investigate potentially hazardous locations. The use of robots for telepresence is a well-investigated area with several commercial products available. We extended this capability for in situ first-person perspective visualization to extend the ability for through-walls collaboration.

Fig. 6. AR X-ray vision through a brick wall. The image above shows the building that is occluding the user's view. The image below shows the use of highlighted edge cues to provide the impression that the X-ray view is behind the wall [15].

Several researchers in the WCL have explored the use of AR to provide users with X-ray vision capabilities. Computer-created views of occluded items and locales appear in the user's vision of the situation. The initial AR X-ray vision system employed wireframe models textured with video images captured from the outdoor environment. To overcome the issue of the rendered images appearing as though they were floating on top of occluding artifacts, they implemented edge overlay visualizations to provide depth cues for X-ray vision not available in the original system. This visualization process overlays highlighted edges from the AR video stream to give cues that the X-ray video stream is behind the occluding object, via a technique similar to that of Denis Kalkofen and his colleagues. A second technique, tunnel cutout, provides highlighted sectioning to help outdoor users understand the number of occluded objects between them and the target of their X-ray vision. The work of Chris Coffin and Tobias Höllerer inspired this second technique [14].
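The edge-overlay idea can be illustrated with a toy compositor: show the hidden view, but burn in the strong edges of the occluding surface as depth cues so the hidden scene reads as being "behind" the wall. This is a crude stand-in for the WCL visualizations, assuming grayscale frames and a simple gradient-magnitude edge detector.

```python
import numpy as np

def xray_composite(occluder, hidden, edge_thresh=0.2):
    """Replace the occluder with the hidden view, keeping the occluder's
    strong edges as white cues that the hidden scene lies behind it."""
    gy, gx = np.gradient(occluder.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh  # boolean edge mask
    out = hidden.astype(float).copy()
    out[edges] = 1.0  # draw occluder edges over the hidden view
    return out

# Toy frames: a dark wall with one bright vertical stripe, and a mid-gray
# hidden room behind it.
wall = np.zeros((8, 8))
wall[:, 4] = 1.0
room = np.full((8, 8), 0.5)
view = xray_composite(wall, room)
```

Real systems work on textured 3D models and live video, but the principle is the same: the composite carries both the hidden content and just enough of the occluder to preserve depth ordering.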
Data from all sources, whether from the on-site team or from the control room, would be transferred online. This should be done with little effort, and interference with the emergency work should be avoided at all costs; that is why visual and sensor inputs are important. Control rooms should be built like space-flight or military operation control rooms. Important factors to consider in building such control rooms are their ability to be deployed easily and strong communication facilities like GPS, GSM, and radio communication. The system must also be robust to be successful.

VII. PROPOSED SYSTEM

The approach discussed in this paper focuses on bringing the power of these advanced environments to mobile users. We therefore investigate adaptive context-aware work environments and their possible linkage with a user through augmented reality interfaces operating on mobile equipment. Our goal is to identify a way of improving access to information, supporting teamwork, and facilitating communications. By linking advanced control rooms to mobile users, the centralized parts of the system can get access to on-site
information to improve the decision-making process. The availability of this information also means that we can support the building of domain-specific corporate memories. Our system draws on three major areas of computer science research: computer-supported cooperative work (CSCW), wearable computing, and ubiquitous workspaces [15]. In disaster situations, two immediate actions should take place: first responders deploy to the affected areas and set up a command-and-control center, with people in the field providing information to the center. Control center personnel use this data to direct resources to the appropriate places. We shall therefore discuss some important points about the command center: how information is sent to the control center, processed, and then used to direct the resources in the field.

The well-known event queue metaphor, which works well for a single user sitting in front of a single computer using a GUI, breaks down in such an interactive workspace. The Event Heap is a novel mechanism by which multiple users, machines, and applications can all simultaneously interact as consumers and generators of system events. The Event Heap is a software infrastructure designed to provide for interactive workspaces what the event queue provides for traditional single-user GUIs. The system is an extension of TSpaces, a tuplespace system from IBM Research. It is bulletin-board based, and applications may post and retrieve events of interest. The Event Heap has the following features: multi-user, multi-application, and multi-machine operation; support for heterogeneous machines and legacy applications; and failure isolation. For more details on the Event Heap, refer to [16].

We discuss four separate approaches that begin to address the infrastructure requirements for future workspaces. These include commercial initiatives such as Jini as well as research work that has been undertaken to provide the ubiquitous computing infrastructure for intelligent and interactive rooms.
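The bulletin-board behaviour attributed to the Event Heap can be sketched as a tiny single-process tuplespace. The real system adds event expiry, ordering guarantees, and multi-machine transport that this sketch deliberately omits; it only shows how posting and template-based retrieval decouple producers from consumers.

```python
class EventHeap:
    """Minimal bulletin-board sketch of the Event Heap idea: any producer
    posts event tuples (dicts here); any consumer retrieves the first event
    matching a template, without producers and consumers knowing each other."""

    def __init__(self):
        self._events = []

    def post(self, **fields):
        """Post an event to the shared board."""
        self._events.append(fields)

    def retrieve(self, **template):
        """Return (and remove) the first event whose fields match the
        template, or None if nothing matches."""
        for i, ev in enumerate(self._events):
            if all(ev.get(k) == v for k, v in template.items()):
                return self._events.pop(i)
        return None

heap = EventHeap()
heap.post(type="fire-status", site="block-7", severity="high")
heap.post(type="victim-found", site="block-7")
ev = heap.retrieve(type="fire-status")
```

Because matching is by content template rather than by recipient address, the same board serves multiple users, machines, and applications at once, which is precisely what the single-user event queue cannot do.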
We also discuss research being undertaken to provide for future enterprise computing infrastructure, as well as a case study in which the Metaglue multi-agent ubiquitous computing infrastructure, which forms the basis of the MIT Intelligent Room, is integrated with the ODSI enterprise infrastructure from DSTC [17]. ODSI implements the concept of an enterprise bus that allows the integration and coordination of a range of services across an enterprise. Of particular interest is ODSI's ability to use enterprise knowledge (in the form of organizational structures, context, and processes) to orchestrate and coordinate work activities and allow flexible access to, and use of, various enterprise applications and services. Several candidate architectures and software infrastructures have been identified as providing at least some of the desirable characteristics for the type of enterprise-enabled workspaces outlined.

D. Metaglue

Metaglue Version 0.5 is a multi-agent based system that provides computational glue for large groups of collaborative software agents and device controllers, such as those used in the Intelligent Room at MIT, Cambridge. It provides communication and discovery services and enables users to interact, subject to access control, with software and data from any space. It also arbitrates among applications competing for resources. Metaglue is implemented in Java, replacing the Remote Method Invocation (RMI) mechanism with one that allows dynamic reconfiguration and reconnection, so that agents can invisibly resume previously established but broken connections. It has support for self-dissemination through a process called glue spreading, where new agents can be created throughout the infrastructure as required. Metaglue provides support for managing systems of interactive, distributed computations, i.e. those in which different components run asynchronously on a heterogeneous collection of networked computers.
It provides high-level support for writing groups of interacting software agents and automates the connection and distribution of these agents according to their computational requirements.

E. Intelligent Room Operating System (iROS)

The Interactive Workspaces project at Stanford University is developing a high-level architecture for organising multi-person, multi-modal interactions in an integrated space that combines multiple computer systems. It has wall-sized interactive murals, a "collaborative table" with a bottom-projected display, and support for large-scale collaborative applications. The infrastructure also supports wireless networking and allows the integration of handheld devices like PDAs and palmtop computers. The infrastructure supporting the intelligent room is based on iROS, the Intelligent Room Operating System. It uses a tuplespace environment and incorporates the TSpaces server from IBM. Instead of a traditional event queue, the architecture utilises a central event heap abstraction, called the Event Heap [16], as the main
communication mechanism among the software components. The event queue metaphor, which works well for a single user sitting in front of a single computer using a GUI, breaks down in an interactive workspace with multiple users all using common hardware and software applications. The Event Heap is a mechanism by which multiple users, machines, and applications can all simultaneously interact as consumers and generators of system events. Because the events are tuples in a tuplespace, analogous to a shared blackboard, publishing and subscribing to the tuplespace is sufficient to participate in the room infrastructure while allowing components to remain very loosely coupled. A key requirement for the interactive workspace is a software infrastructure that allows users with mobile computing devices, including laptops and PDAs, to seamlessly enter and leave the interactive workspace. While in the workspace, they can use their devices to interact with the current application and/or control various hardware devices in the room.

F. ODSI

The Open Distributed Services Infrastructure (ODSI) is a framework that supports the collaboration of enterprise resources to meet the challenge of an adapting enterprise [17]. ODSI concentrates on collaborative services rather than data-driven applications and hence maintains an enterprise perspective rather than a software focus. It provides a service integration environment designed to combine enterprise-modeling techniques, which incorporate enterprise information such as policy, role, and component descriptions, with lightweight workflow and dynamic service integration technology. It provides lifecycle management for distributed software services through a peer infrastructure and supports their collaboration to perform complex business processes. ODSI utilises the Breeze workflow engine for resource coordination and process orchestration, and an adaptive enterprise bus based on Elvin Version 3, a content-based routing system.
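Content-based routing of this kind can be sketched in a few lines. Note that the predicate here is a plain Python function; Elvin has its own subscription expression language, which this sketch does not attempt to reproduce.

```python
class ContentRouter:
    """Sketch of content-based routing: consumers subscribe with a
    predicate over message *content* rather than an address or topic
    name, so producers never need to know who is listening."""

    def __init__(self):
        self._subs = []

    def subscribe(self, predicate, callback):
        """Register interest: deliver any message for which predicate(msg) is true."""
        self._subs.append((predicate, callback))

    def notify(self, msg):
        """Producers call this; the router fans the message out by content."""
        for predicate, callback in self._subs:
            if predicate(msg):
                callback(msg)

router = ContentRouter()
alerts = []
# The control room only wants high-severity events, whatever their source.
router.subscribe(lambda m: m.get("severity") == "high", alerts.append)
router.notify({"event": "fire", "severity": "high", "site": "block-7"})
router.notify({"event": "status", "severity": "low"})
```

The decoupling is visible in the example: the producer of the "fire" message never names the control room, and the subscription would keep working if new kinds of producers were added later.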
Elvin is a lightweight notification service. Producers detect events (changes in the state of an object) and notify Elvin, which in turn passes the notification to any clients that have registered an interest with a subscription expression. The difference between Elvin and most messaging systems is that Elvin's actions are driven by the message content rather than by an address or topic name. The advantage of this approach, though difficult to implement, is that it decouples producers and consumers and thus promotes system evolution, integration, and client homogeneity [18].

G. Jini

Jini is an open architecture that enables developers to create a federation of highly adaptive, network-centric entities, devices, or services. Jini technology, which is written in Java, has been designed to build networks that are scalable, evolvable, and flexible; these are the types of networks typically required in dynamic, distributed computing environments. The overall goal is to turn the network into a flexible, easily administered tool with which resources can be found by any device or user. Resources can be implemented as hardware devices, software programs, or a combination of the two. One focus of the system is to add a dynamic element to the network that better reflects the nature of a workgroup, by enabling services to be added and deleted flexibly without stopping and starting the system.

VIII. IMPLEMENTATION

To place the augmented reality user interface in the context of the fire emergency, we present a scenario of a fire at a remote site. Police and/or fire stations receive the first call, and they may also call a local ambulance service and ask for the closest hospital helicopter to be placed on standby. The fire is a three-hour drive or an hour's helicopter flight from the fire site. The first party (police/fire service) arrives and requests the helicopter to be sent to the fire site.
The communication with the fire service control room, the police control room, the specialist doctor, the recovery doctor, and the paramedic is composed of audio, video, and augmented reality information. During this initial phase, the collaboration is a same time – different place configuration. The fireman starts by performing a life rescue operation, which is carried out by thorough checking of the fire site aided by augmented reality. Consider a fireman going into a building where one is not able to see. In many countries, like the US and India, detailed building plans are held in a municipality database. So it is possible to render that plan on the real world that the fireman sees (based on research in the
architectural domain [13]; Figure 7 and Figure 8), and thus make visible to him pathways which otherwise could not be seen. The fireman is also connected to a team of doctors, so he can perform initial diagnostics of a patient's injuries, if any. The fireman wears a light wearable computer and views data via a see-through HMD (head-mounted display), while the specialist fireman and the team of doctors operate from traditional office workstations and the police operate a notebook computer. All are communicating via a wireless network.

Fig. 7. AR application sample used for the virtual reconstruction of architectural heritage in the Roman city of Gerunda, Girona, Spain, carried out by the authors in the LMVC [13].

Fig. 8. Examples of a collaborative scene created from different models [11].

Specialist firemen can advise the fireman about how to control the fire and/or how to evacuate civilians, if any. This equipment can also be used by a civilian who has moderate training in fire control; with the help of a fire emergency kit, such a civilian can give the control room and doctors eyes at the site as early as possible, which may prove crucial in saving lives and preventing damage. The specialist fireman may view multiple sites at a time through video images displayed on workstations in the control room and figure out the source of the fire; the fireman can also indicate regions which have been checked and regions where the search and rescue operation has yet to be performed. In addition, a doctor can advise the fireman/civilian about procedures while the fireman is tending a patient. The doctors can view a region of interest on any patient via digital video images displayed on their office workstations while the fireman/civilian concurrently views the patient through their HMD. A doctor may indicate regions on the patient's body for further investigation by drawing augmented reality lines over the video image.
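Rendering a stored building plan in the fireman's view ultimately requires projecting plan coordinates into the HMD image. A deliberately simplified sketch follows, assuming the HMD pose is already known (so plan points are given in the camera frame) and using illustrative camera parameters; the registration and tracking problems discussed in Section IV are omitted entirely.

```python
def project_plan_point(point_xyz, f_px=800.0, cx=640.0, cy=360.0):
    """Project a building-plan point (metres, camera frame: x right,
    y down, z forward) to HMD pixel coordinates with a pinhole model.
    The focal length and principal point are illustrative values."""
    x, y, z = point_xyz
    if z <= 0:
        return None  # behind the camera, not drawn
    return (f_px * x / z + cx, f_px * y / z + cy)

# A doorway corner 4 m ahead and 1 m to the right of the fireman:
px = project_plan_point((1.0, 0.0, 4.0))
```

In a real system every visible plan vertex would be projected this way each frame, with the camera pose updated continuously by the trackers; errors in that pose are exactly the registration errors discussed earlier.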
While the fireman is performing diagnostics and treatment, he or she can place augmented reality information markers on the patient's body or use them to document the scene. These markers represent a record of the firefighter's case notes. This is a different time – different place configuration, as these stickers are reviewed by personnel, such as an ER nurse, at a later date in the hospital, or by a forensic expert to explore the cause of the event. The firefighter points the eye cursor at a region of interest, for example the knee, indicates the priority, and speaks into a microphone to record an audio memo. The different priorities could be designated by color, icon, or fill pattern. The priority is indicated either by voice recognition or by binary switches placed on the paramedic's clothing. AR markers can also be used to guide the supporting teams, informing them about the ground situation accurately and quickly by providing the augmented reality information stickers as world-relative information. After the incident, the markers and information collected by the augmented system can be used by insurance companies and/or investigation agencies. Using mobile AR systems, the field operatives can thus define annotated regions on the ground, denoting dangerous areas, completed searches, and areas that require immediate attention. The field operatives can also quickly edit 3D representations of buildings to show which portions have been damaged. Control center personnel can then direct field operatives to investigate particular areas by marking a digital 3D map that is reflected in the physical world viewed by the user in the field through AR technology.
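A record structure for such information stickers might look as follows. The field names and the numeric priority scale are illustrative assumptions, not defined in the paper; the point is that each marker is a small, queryable record that supports the different time – different place review described above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ARMarker:
    """World-relative information sticker left at the scene."""
    region: str        # body region or map area the eye cursor pointed at
    priority: int      # 1 = immediate attention ... 3 = routine (assumed scale)
    audio_memo: str    # identifier of the recorded voice note
    checked: bool = False

def for_review(markers: List[ARMarker], max_priority: int) -> List[ARMarker]:
    """What an ER nurse or forensic expert pulls up later: markers at or
    above the requested urgency, most urgent first."""
    return sorted((m for m in markers if m.priority <= max_priority),
                  key=lambda m: m.priority)

log = [ARMarker("knee", 2, "memo-014.ogg"),
       ARMarker("stairwell-B", 1, "memo-015.ogg"),
       ARMarker("room-3", 3, "memo-016.ogg", checked=True)]
urgent = for_review(log, 2)
```

Because the markers are plain records rather than pixels, the same log can later serve the ER nurse, the forensic expert, the insurers, and training archives without any extra data entry by the emergency team.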
IX. CONCLUSION

The aim of our paper was to use the power of workflow management, augmented reality, and coordination to improve the emergency service provided in case of a fire emergency. We have shown that mobile equipment integrated with adaptive context-aware work environments can prove beneficial in an emergency situation like a fire. With the help of emergency AR markers, X-ray vision, the Hand of God, and other advanced technologies, information is transferred through the system effectively. Multimedia may also be used for the same purpose. Access to the information, both for the mobile user and for those in the control room, has been demonstrated. An important advantage of such a system is the automatic recording of on-site data, which helps to build records of these incidents without interfering with the work of the emergency team. Records can later be used for training purposes.

REFERENCES

[1] Alejandro Schianchi, University of Tres de Febrero, Argentina, "Transcending Space through Mobile Augmented Reality as a Field for Artistic Creation", published in Media in Transition 8 (MiT8): Public Media, Private Media, web.mit.edu/commforum/mit8/subs/abstracts.html
[2] Milgram, Paul, and Fumio Kishino, "A Taxonomy of Mixed Reality Visual Displays", IEICE Transactions on Information and Systems E77-D, 9 (September 1994), 1321-1329.
[3] S. Rathnam, "ACM SIGOIS Bulletin, Special issue: business process reengineering," vol. 16:1, ACM, 1995.
[4] E. L. Quarantelli, "Problematical Aspects of the Information/Communication Revolution for Disaster Planning and Research: Ten Non-Technical Issues and Questions", Disaster Prevention and Management 6(1), 1997. See also Sharon S. Dawes, Thomas Birkland, Giri Kumar Tayi, and Carrie A.
Schneider, "Information, Technology, and Coordination: Lessons from the World Trade Center Response", Center for Technology in Government, University at Albany, State University of New York, 2004; http://www.ctg.albany.edu/publications/reports/wtc_lessons/wtc_lessons.pdf
[5] Vineet R. Kamat and Sherif El-Tawil, "Rapid Post-Disaster Evaluation of Building Damage Using Augmented Situational Visualization".
[6] H. Kitano, S. Tadokoro, I. Noda, H. Matsubara, T. Takahashi, A. Shinjou, and S. Shimada, "RoboCup Rescue: Search and rescue in large-scale disasters as a domain for autonomous agents research", in IEEE Conf. on Man, Systems, and Cybernetics (SMC-99), 1999.
[7] Pausch, Randy, Thomas Crea, and Matthew Conway, "A Literature Survey for Virtual Environments: Military Flight Simulator Visual Systems and Simulator Sickness", Presence: Teleoperators and Virtual Environments 1, 3 (Summer 1992), 344-363.
[8] Holloway, Richard, "Registration Errors in Augmented Reality", Ph.D. dissertation, UNC Chapel Hill Department of Computer Science technical report TR95-016 (August 1995).
[9] Bajura, Michael, and Ulrich Neumann, "Dynamic Registration Correction in Video-Based Augmented Reality Systems", IEEE Computer Graphics and Applications 15, 5 (September 1995), 52-60.
[10] Meyer, Kenneth, Hugh L. Applewhite, and Frank A. Biocca, "A Survey of Position Trackers", Presence: Teleoperators and Virtual Environments 1, 2 (Spring 1992), 173-200.
[11] Rolland, Jannick, Rich Holloway, and Henry Fuchs, "A Comparison of Optical and Video See-Through Head-Mounted Displays", SPIE Proceedings volume 2351: Telemanipulator and Telepresence Technologies (Boston, MA, 31 October - 4 November 1994), 293-307.
[12] Workflow Management Coalition, Reference Model: The Workflow Reference Model (WFMC-TC-1003, 19-Jan-95, 1.1); Terminology & Glossary (WFMC-TC-1011, Feb-1999, 3.0); The Workflow Reference Model: 10 Years On; http://www.wfmc.org/reference-model.html
[13] http://www.isprs.org/proceedings/XXXV/congress/comm3/papers/399.pdf
[14] Sánchez, J., and Borro, D., "Automatic Augmented Video Creation for Markerless Environments", Proceedings of the 2nd International Conference on Computer Vision Theory and Applications (VISAPP'07), Barcelona, Spain, pp. 519-522, 2007.
[15] C. Coffin and T. Höllerer, "Interactive Perspective Cut-Away Views for General 3D Scenes", Proc. IEEE Symp. 3D User Interfaces (3DUI 06), IEEE CS Press, 2006, pp. 25-28.
[16] Bruce H. Thomas and Wayne Piekarski, University of South Australia, "Through-Walls Collaboration", IEEE Pervasive Computing, IEEE CS, 1536-1268/09.
[17] Johanson, B., Fox, A., Hanrahan, P., and Winograd, T., "The Event Heap: An Enabling Infrastructure for Interactive Workspaces", CS Tech Report CS 2000-02, 2000.
[18] Fitzpatrick, G., Mansfield, T., Kaplan, S., Arnold, D., Phelps, T., and Segall, B., "Augmenting the Workaday World with Elvin", in Proceedings of ECSCW'99, Copenhagen, 431-451, Kluwer Academic Publishers, 1999.