Constructive Convergence: Imagery and Humanitarian Assistance
Doug Hanchard
Center for Technology and National Security Policy
Institute for National Strategic Studies
National Defense University
February 2012
The views expressed in this paper are those of the author and do not reflect official policy or position of the
National Defense University, the Department of Defense, or the U.S. Government. All information and sources
for this paper were drawn from unclassified documents.
Google, Google Earth, and KML are registered trademarks of Google. All images created using Google
Earth and Google Maps were created and published with written permission from Google.
Other product names mentioned herein are used for identification purposes only and may be trademarks of their
respective companies.
Content describing commercial, governmental, nongovernmental, or nonprofit organizations or agencies, and content
describing devices, software, or intellectual property, is included in this document under fair use per 17 U.S.C. §
107 of the Copyright Act (1976).
The Center for Technology and National Security Policy expresses appreciation to Ms. Jessica L. Block, Dr.
Jeffrey Smotherman, Mr. Frank Hoffman and Ms. Isadora Blachman-Biatch for their editorial support.
Doug Hanchard is the President of Rapid Response Consulting (Toronto, Ontario), an ICT consulting firm
specializing in telecommunication services for enterprise organizations and government agencies. Hanchard has
worked at Bell Canada, TELUS and AT&T implementing and delivering telecommunication solutions
worldwide in 57 countries.
In memory of Ivan W. Hanchard
Collaborator, negotiator, and, most of all, a father
who taught wisdom based on experience.
Foreword
In the past few years—especially since the 2010 Haitian earthquake—Geospatial Information System (GIS)
products, often based on imagery, have become critical enablers of humanitarian assistance and disaster relief
(HA/DR) efforts. Much more imagery is becoming available to the HA/DR community, complemented by
increasing bandwidth to share it, more powerful “edge” devices to process it, global volunteer groups to help
make sense of “crowd-sourced” information and high-level policy and doctrine that are becoming more
supportive of collaboration around GIS products in HA/DR environments.
Doug Hanchard’s paper makes an important contribution to this field. He does not try to be all things to all
people, but focuses on important technological aspects of imagery in HA/DR. The paper includes specific
recommendations, from transmission standards, to short message service shortcodes, to application
programming interfaces and data search techniques. At the same time, he recognizes that technology alone is not
enough:
• Social networks need to be developed and trust built to encourage diverse groups to work together
• Policy and doctrine need to be translated into effective field operating procedures so that people “on the
ground” know what to do
• Legal and regulatory constraints must be understood and challenged where necessary
• Resources must be addressed and acquired
• Trainers must be trained, units exercised and curricula adapted to achieve genuine lessons learned,
instead of lessons observed, and re-observed, and re-observed.
Importantly, the paper ties imagery to logistics and to local populations in their worlds, with their resources.
This reinforces a model, developed from Haitian and Afghan experiences, which suggests that organizations
need to build “bridges” to the “crowd” that is generating so much information. 1 Handling the volume and
velocity of information created by social media and the 24/7 news cycle will be essential for all organizations
going forward. At the same time, unless “transactions” are completed that make a difference on the ground
(people pulled from rubble, supplies delivered, contracts fulfilled) improved situational awareness doesn’t help
much—hence the importance of logistics. Finally, “feedback” is needed, both to make the transactions more
effective and to inform the “crowd” (the international community) about what’s happening.
Doug repeatedly points out that “people save lives, not technology,” and his solutions emphasize ways that
technology can help make people more effective, not be an end in itself. The complicated environments of most
HA/DR scenarios require that practitioners seek to achieve “unity of action” when there’s no “unity of
command,” and his recommendations provide tangible steps to help achieve this most difficult goal.
—Linton Wells II
Director, Center for Technology and
National Security Policy,
National Defense University
1 L. Wells and R. Welborn, “From Haiti to Helmand: Using Open Source Information to Enhance Situational
Awareness and Operational Effectiveness,” (Center for Technology and National Security Policy, December 2010),
available at <http://star-tides.net/sites/default/files/From%20Haiti%20to%20Helmand%2012%2011%2010%20v14_0.doc>.
Executive Summary
Imagery assessment is a vital tool for humanitarian responders when disaster strikes. Whether derived
from satellite, aircraft, unmanned aerial vehicle (UAV) or ground views, imagery offers event confirmation and
impact, an early assessment and a foundation on which to initiate response planning. The goal of this paper is to
illustrate to the technical community and interested humanitarian users the breadth of tools and techniques now
available for imagery collection, analysis and distribution, and to provide brief recommendations with
suggestions for next steps.
Over the past decade, the humanitarian community has found its growing access to imagery to be
valuable and organizational policies are changing to reflect that importance. Humanitarian response
organizations have also paved new paths forward when the existing methods were antiquated (some of which
are described below). As new methods are adopted, changes in the use of imagery may alter organizational
command structures.
Innovative technology, like imagery and the information derived from it, has long been a hallmark of
human evolution—but using it wisely has been a challenge. There are legal, political and ethical questions that
quickly arise around the use of imagery in disasters. In addition to the legal and ethical issues, humanitarian
assistance requires teamwork and collaboration. Responders using imagery must overcome interoperability
challenges, develop technical standards, create governance structures and protect both personal privacy and
intellectual property. In some cases, those posting or using imagery in the field may be at physical risk.
Recognizing those needs, a growing number of volunteer technical groups have an opportunity to design tools
that reflect current technical capabilities while addressing the full spectrum of requirements. We can now
include imagery contributions from affected populations to a degree never before possible, which raises further
opportunities for the design of new tools and processes.
Today’s technologies include public access to satellite and aerial imagery platforms; resilient networks;
and larger and faster data storage capabilities at data centers, on smart phones and tablet computers that are
capable of manipulating imagery files using surprisingly high-performance applications that reside locally on
the device. Such a convergence of capabilities is uncommon. This paper is intended to stimulate discussion on
that convergence around the use of imagery in humanitarian response, and to inform readers about the resources
available for research in more depth.
I. Introduction
“Imagery is like fish—best when it’s fresh.”
—Major General John Hawley (USAF Ret.), U.S. Space Command
Disaster Strikes
An earthquake occurs in a city of 375,000 people. As reports are digested, the scale of the disaster
becomes clear: an estimated $30 billion in damages, with local resources overwhelmed. Compromised
energy resources limit communications for situational awareness, and there isn’t enough manpower on the
ground. Volunteers from humanitarian groups, especially if they can help manage information, would be useful,
but civil defense officials don’t have any experience integrating volunteers into a government response plan.
Could volunteers be deployed?
On February 22, 2011, Christchurch, New Zealand, did just that—urgently defining, over a matter of
hours, a new process for integrating a volunteer technical community into an emergency response. Professional
responders in New Zealand used volunteers, private communications networks, Internet-based tools provided by
private companies and nongovernmental organizations (NGOs) to create a mesh that fed the emergency
management community the data needed to improve its response. That devastating 6.3 earthquake became a
laboratory to carry out a complex, yet necessary, coordination of these technical efforts across the professional
and volunteer boundary. The results demonstrate how open-source maps, layered satellite and aerial imagery,
and open-source information tools can be rapidly integrated into donated logistics software to empower
volunteer teams using any available mobile device. In close collaboration with the municipal government, a
voluntary technical community delivered desperately needed information services with very little warning and
with a flexible, adaptable model that may be worth replicating.
The success of the volunteer coordination in Christchurch was largely due to the converging uses for
imagery among the humanitarian assistance and disaster relief (HA/DR) community. Imagery can be derived
from sensors on satellites, planes or even from cell phone and digital cameras, and provides a picture of a place
on earth at a point in time. The ability to acquire and share these images with coordinated response groups has
been revolutionized by recent efforts that led to several successful rescues and rapid community healing in
Christchurch. Christchurch is unique in that two severe earthquakes struck it separately, each arriving quickly and
without warning.
This paper presents a brief overview of the growing power of imagery, especially from volunteers and
victims in disasters, and its place in emergency response. It also highlights an increasing technical convergence
between professional and volunteer responders—and its limits.
The Power of Imagery
Imagery for HA/DR is used for three primary purposes: situational awareness assessments by the
scientific community, government and the community; response logistics for search and rescue, medical
assistance and essential service assessments; and recovery management for clean-up, interim essential service
distribution and post-disaster reconstruction efforts.
The immediate aftermath of a disaster is chaotic, and post-event imagery can provide relatively
unbiased logistical information for rescue operations. Traditionally, there have been significant limitations to
gaining access to imagery for people responding to an event since civil and military governmental agencies used
to be the sole owners of satellite imagery. But now high-resolution imagery (in both space and time) is
commercially available, providing nearly real-time situational awareness. Further, with the creation of such
organizations as OpenStreetMap (OSM) and Ushahidi, we are beginning to integrate cartography into what can
be best described as social mapping, breaking the bi-directional barriers to accessibility. However, the case
studies below have shown that access to this imagery has created challenges in management and distribution.
Imagery for far-forward situational awareness began to become broadly available during hurricanes
Katrina and Rita in the Gulf of Mexico in 2005 (Figure 1). NGOs, international organizations (IOs), and
volunteer groups began to leverage this new access to convey information to the general public. Since then its
use has accelerated. After the 2010 Haitian earthquake, the use of imagery by non-government groups changed
how HA/DR communities approached many typical disaster response issues. During those 5 years, imagery
access began to affect how government emergency operations, policies, procedures, search and rescue planning,
and logistics services were developed and how humanitarian assistance teams trained and deployed.
Technological advances in other domains have co-evolved with the access to imagery. Film-based
camera systems have been replaced with next generation charge-coupled device (CCD) chips integrated with
next generation photographic lenses in aerial and satellite platforms. These advances have increased the
resolution and thus the size of the image files and the demand for space to archive them. The data centers now
required to manage these file sizes must be accessible in the region where the disaster occurs and in reach back
areas not impacted by the event. That requires broadband connectivity. At the same time, first responders must
be ready to use low bandwidth methods, such as text-messaging if bandwidth is limited.
Open source mapping tool add-ons for Google Earth, such as KML Path, KML Graph, and KML Color
Converter, 2 use imagery as a foundation for layering field data. But with this availability the amount of information
that can be collected is skyrocketing, which means the risk of information overload is also rising. Techniques to
manage and use these tools must be carefully assessed and coordinated. Network providers are experiencing
daily impacts to voice and data networks by the demands for data sharing, which are made worse in a disaster
zone.
Advancements in handheld devices now enable network connectivity in the disaster-affected area. These
rugged devices are now capable of transmitting and consuming vast amounts of information. In addition,
satellite and emergency terrestrial wireless networks are able to deploy within hours. While most
communications systems today generate decent connectivity on the ground in normal circumstances, care must
be taken as to how organizations and volunteer groups compile and publish information over limited and fragile
data pathways in a disaster. That requires a degree of understanding and cooperation not yet common in an
urgent response.
To coordinate efforts in a disaster environment, a few social networking tools are being integrated by
HA/DR groups. Those tools offer knowledge sharing opportunities that did not exist 10 years ago but contribute
to information overload. As a result, critical pieces of information can be easily overlooked, and there is
additional danger in automating the data filtering.
To collect, optimize, and deploy satellite and aerial imagery successfully, we must consider these issues
carefully. Each section in this paper will guide the reader through current and future challenges, opportunities
2 Open source (freeware) tools: http://www.sgrillo.net/googleearth/
and ideas with recommendations for consideration. Imagery does not often lie and only rarely is it misconstrued.
Each image or map in the appendix tells a story. What is driving the story is the converging use of imagery on
the ground.
Figure 1. Hurricane Katrina NOAA Image. Original Size: 1,114 Kilobits − 4077 x 4092 Pixels
270359.85 m E − 3350705.72 m N − August 30, 2005 15:58:54 CDT 3
This image was taken and published by NOAA on its Internet website. However, its value to first responders was minimal
because information that could have helped analyze the image was not embedded, and no geospatial standard was
used to allow the image to be exported to other mapping services and technologies.
II. The Users of Imagery
HA/DR Community: Learning New Visualization Concepts
When a disaster strikes, the organizations that deploy to respond have different objectives. Thus their
needs, policies and procedures are different when addressing the “who, what, when, where and why” of the
event. What they want out of an image is different and it is unlikely that any one solution to imagery will ever
be found to address everyone’s needs. Moving forward, care must be taken so as not to put useful cooperation
3 http://ngs.woc.noaa.gov/storms/katrina/24334501.jpg
at risk when imagery-related needs are at odds. Existing inter-agency and government compliance
requirements are, in some cases, in conflict with demands for information generated by the event, which
further complicates matters.
Government (Civil)
Some government structures are based on local culture and historical precedents. Some are based on
religious beliefs and some are tribal or ethnic in nature. Some are autocratic, devoid of oversight or cooperation.
In some regions of the world, abstract technical sophistication is limited and interpreting a basic map is beyond
local capabilities. But in virtually every country, there is a sense that civil government should provide useful
services to people. Imagery of various kinds is emerging as a potentially valuable service.
Governance structures at many levels are beginning to recognize the value of post-crisis and near real-
time imagery as well as pre-crisis planning. Static web pages no longer meet the needs of constituents nor of
emergency operation centers. New dynamic sources of information such as social media and open source
imagery can inform but also overwhelm planning and response mechanisms. The challenge for governments is
often to address the legal limitations that hinder new approaches to information sharing with volunteer
organizations. Economic conditions often further limit government financial support for internal training and
workshops even when sharing is allowed and volunteers welcomed.
Government (Military)
The armed forces are often the most valuable resource on which any government can lean during and
immediately after a catastrophe. Every executive government branch will be tasked for communications,
transport, and power. But the military may also be able to provide field medical services, emergency
construction capabilities, additional essential supplies, engineering, field coordination, logistics operations and
management, and aerial surveys. Host-nation militaries often own, or have access to, the highest resolution post-
event imagery in the country. In some countries they may provide the only emergency services available.
Humanitarian operations frequently are integrated as multi-national, civil-military deployments as was the case
in Indonesia and Haiti. Militaries, therefore, are exploring new approaches to HA/DR as demands escalate. The
U. S. Department of Defense (DOD) clearly recognizes the requirement for collaboration and how it must
function in humanitarian assistance and disaster relief. This is reflected in a series of DOD directives and
instructions. One of the most forward-leaning is DOD Instruction 8220.02, Information and Communication
Technology (ICT) Capabilities for Supporting Stabilization and Reconstruction, Disaster and Humanitarian
Assistance and Civic Operations. 4 Similar themes are found in: DOD Directive 3000.05, Military Support for
Stability, Security, Transition, and Reconstruction (SSTR) Operations, November 28, 2005, 5 reissued on
September 16, 2009 as a DoD instruction on Stability Operations. 6
NGOs/IOs
When a major disaster strikes the developing world, there are more than 40,000 different NGOs and IOs
that may consider responding. They deploy to address specific needs and goals defined by the organization, and
usually must leverage existing resources to do so. Collaboration with other NGOs is often indispensable.
4 http://www.dtic.mil/whs/directives/corres/pdf/822002p.pdf
5 http://www.usaid.gov/policy/cdie/sss06/sss_1_080106_dod.pdf
6 http://www.dtic.mil/whs/directives/corres/pdf/300005p.pdf
Volunteer and Local Community
The public no longer relies on conventional news organizations as their primary source of information.
The ability to go directly to the source of information online, post comments and offer assistance is growing
with every new disaster through the capabilities of mobile phones, short message service (SMS), multimedia
messaging service (MMS), Flickr, Facebook, Twitter, Kiva, tools from the NGO "Innovative Support to
Emergencies, Disease and Disasters” (InSTEDD), Ushahidi, OSM, and others. This publically generated data
presents challenges to managing unverified information. Contributions from a diaspora of globally distributed
people familiar with the impacted area have also proven valuable, but their use needs refining.
Working Together: Simulation and Coordination Groups
Some groups have recognized the need to collaborate and be formally organized in order to prepare for
anticipated events. Here are some (of many):
Net Hope
Net Hope—a consortium of 32 of the largest global NGOs—was established in 2001 to research
innovative approaches to cooperation and knowledge sharing, 7 and to define the technology and services required to
accomplish their goals. 8 Many of the most recognized NGO information technology (IT) departments have
joined this consortium. 9 For Net Hope, accurate geospatial information applications are a critical need. One of
the group’s goals is to develop standards for solutions that can be replicated to reduce capital costs, increase
efficiencies and simultaneously expand their capabilities. They are generating internal standards and protocols
for imagery, data flow guidelines, network communications, device usage, and ancillary systems.
Exercise “Pacific Endeavor”
Field trials and planning for multi-national humanitarian civil-military response teams are occurring
with multi-national integrated military exercises such as Pacific Endeavor. 10 The program commonly involves
16 different countries around the Pacific Rim and focuses on establishing communication network
interoperability across voice and data network services. In the recent past, the military was the sole owner and
supplier of post-event imagery. Now imagery is produced by many organizations and the military role is being
reversed from producer to consumer of not only imagery, but maps and new interfaces in which data is layered
on top of the imagery.
Exercise 24 (X24)
Military organizations—like their civilian brethren—are feeling financial restraints on research and
development budgets while exploring next generation tools. San Diego State University developed an innovative
approach to reduce the costs associated with running field exercises. Their solution was to simulate a disaster
and run the exercise as a virtual event online. The program was tested in September 2010 using a scenario in
Southern California involving a large earthquake that generated a tsunami. In March 2011, the exercise was
repeated, this time with a scenario in Europe and the support of the Red Cross of Germany, National Institute for
7 http://www.nethope.org/about/us/
8 http://www.nethope.org/
9 Members of Net Hope: ActionAid, Ashoka, CARE, CHF, Christian Aid, ChildFund International, Children International, Catholic Relief Services, Concern Worldwide, FINCA, Family Health International, Heifer International, International Rescue Committee, International Federation of Red Cross and Red Crescent Societies, Mercy Corps, Nature Conservancy, Opportunity International, Oxfam, PACT, PATH, Plan, Relief International, Save the Children, VSO, WaterAid, Wildlife Conservation Society, Winrock International, and World Vision
10 http://www.dvidshub.net/news/55232/pacific-region-militaries-join-humanitarian-community-pacific-endeavor
Urban Search and Rescue, CrisisCommons, and the U.S. European Command. More than 12,500 volunteers and
organizations participated from 79 countries in the first X24, and more than 18,000 from 92 countries in the
second. 11 A key component of this new approach to training through an online simulator was the use of
cartographic applications that layered data published by volunteers. The military participants witnessed the
avalanche of available information. It was reportedly an eye-opener for many and it appeared to be educational
for participants at all levels.
This exercise formed a Virtual Civil-Military Humanitarian Operations Center. The design was not
new, but using data and software applications from civilian and volunteer organizations to fuse data rapidly and
accurately into maps was new and had an impact. The next challenge will be integration:
seeing how well these organizations can adapt to non-military designed services and applications, many of
which today fall outside of military network security regulations and policies.
Imagery Sharing Organizations
Various volunteer and commercial reach back groups are both consumers and distributors of
information. This is a significant issue as there are many different platforms that are being used to compile a
map product. These organizations—like Ushahidi, All Partners Access Network (APAN), CrisisCommons and
OSM—can serve as resources for disaster information, and they use imagery as a base on which to layer their
datasets. If, however, the map product is created from a proprietary or classified software package, these data
layers are not easily distributable on other platforms, even if completely unclassified and non-proprietary (see
figures 7 and 8.) Some organizations have recognized this obstacle and do not use imagery at all because of the
legal and regulatory restrictions on devices mandated by their organization. If imagery is to be shared among
different organizations, then international standards that meet corporate and governmental requirements are
necessary.
III. Imagery: Types and Uses
Imagery of disasters can provide critical information for search and rescue, medical services, shelter,
and can support ancillary resources such as logistics planning. The source of an image and the technology that
produces it determine the spatial resolution of the result. Resolutions are now available from
commercial and military satellites down to 10 centimeters per pixel and finer.
Satellite Imagery
Satellite-based systems take images of the earth and transmit them to the ground upon acquisition. They
can offer high spatial resolution and broad spectral resolution depending on the sensors on the satellite. The
advantage of satellite imagery over aerial-based systems is that the orbit of the satellite is already known, so the
bounding coordinates of the image can be registered automatically. This allows users to project the image onto a
2D map, which enables geospatial data points to be plotted with it. The dominant handicap of satellite-based
systems is that weather conditions can impact visibility of the ground. Vendors of advanced digital imagery
available today comply with standards developed by organizations including the American Society for
Photogrammetry and Remote Sensing 12 and the International Society of Photogrammetry and Remote
11 http://x24.eushare.org/ - The author observed and participated in this event.
12 http://www.asprs.org
Sensing. 13 However, these standards do not imply interoperability or transferability from one software or
hardware platform to another.
Aerial Imagery
Aerial imagery is acquired by a camera attached to a plane or an Unmanned Aerial Vehicle (UAV).
Because they are not in a fixed orbit, aerial platforms can fly multiple missions over a disaster area, thus providing
much more flexibility in temporal and spatial resolution. Aerial images are often supplemented with verbal and
written situation reports, providing further metadata on the image. Advantages over satellite imagery include the
ability to fly below cloud cover and to improve spatial resolution by varying altitude. Maximum aerial
imagery resolution is now available at 10 cm or less using advanced digital imaging platforms. The most
common and difficult challenge is to geo-rectify images to the same accuracy as a satellite in near real-time.
Commercial aircraft platforms are subject to wind-drift and unintended altitude variations, even with
sophisticated autopilot and GPS managed flight navigation systems. Next generation systems now appear to
address these issues effectively (see figures 43 and 44.) Advanced government aerial image systems eliminated
most of these problems in the 1990s and are capable of registering imagery and embedding it into visualization
platforms rapidly, but those tools are not commonly available on non-military platforms. Commercial map
providers are beginning the move toward application programming interface (API) tools to alleviate problems
experienced in the field. Response times are improving, facilitating quicker publishing and manipulation of the
images.
Images from Social Media and Hand-held Devices
Modern humanitarian assistance groups draw on information sources that are no longer confined to
official government sources. Today, individuals and local communities publish information during and after a
crisis event using Internet-based social media websites such as Facebook, YouTube, Wikipedia, Flickr and
Twitter. Pages and articles can now be extracted by HA/DR responders and exported as metadata to other
software applications. The social media sites have been both a source and destination for metadata during
disasters.
The use of social media applications has not gone unnoticed. The United Nations developed several
courses on social media under its United Nations Institute for Training and Research that directly address
Facebook, Flickr, YouTube, Twitter and others. 14 With these tools responders can detect, monitor, report and
record knowledge points within seconds of an event. These flows of information are compiled as datasets. The
data is often open source and therefore exportable to multiple applications and platforms such as Sahana and
Ushahidi.
IV. Computing to Enable the Use of Imagery
It is clear to HA/DR map experts that imagery has significant value when correlated with other sources
of information. But to do this in the field, responders need software to read, analyze and share the imagery.
Below is a snapshot of some software tools that enable each.
13 http://www.isprs.org/technical_commissions/
14 http://www.unitar.org/ksi/innovative-collaboration-development
Software Applications
Geographic Information Systems
A geographic information system (GIS), sometimes referred to as a geospatial information system, is a
system designed to work with data referenced to spatial or geographic coordinates. 15 An early example was an
epidemiological map of a cholera epidemic in London made by John Snow and colleagues in 1854. 16 The map
showed the location of the outbreaks in relation to the water pumps and led to an effective intervention. A digital
adaptation to mapping was developed by the Department of Forestry of Canada 17 in 1960. The Canada
Geographic Information System became the foundation for today’s computer GIS standards. It remained
primarily a government set of tools for decades, and did not become widespread in commercial applications
until the 1980s. With these standards in place and the use of computer-based maps now possible, new
opportunities in cartography offered the ability to layer information for commercial and consumer applications.
No longer confined to 2D maps, computers offered accuracy since data could be shown in 3D. It would take
almost 35 years to become known outside of professional communities, but the growth of GIS since 2005 has
been explosive.
Global Positioning System and a Reference Standard
Small-form-factor Global Positioning System (GPS) map devices used in the marine and aviation
industries started the digital field-mapping revolution. In the mid-1980s, these devices were built into consumer
devices such as small aircraft and pleasure boats as navigation tools, which spread quickly to automobile
navigation systems. In 2001, a small company named Keyhole developed an Internet-based cartographic
application on a globe. Google acquired Keyhole in 2004, modifying it and renaming the Keyhole tool “Google
Earth.” Google soon began working with authoritative bodies on projection and representation standards. Now
Google Maps, OSM, Bing, and others use the World Geodetic System (WGS) 84 projection described further in
this paper, 18 which has become the Web mapping projection standard around the world.
Metadata
Metadata is data about data, and is either embedded in the geographic data file or is made available as
an independent file that the geographic data reads. It defines the object’s grid coordinates (such as latitude,
longitude, and resolution) and it’s used to map geographic data accurately on a map. It also enables additional
geo-referenced data to be layered over it. In the 1990s, metadata standards for commercial photogrammetry
vendors were available for both digital and film formats, and they are still in use today. Metadata can also
include attribution information, such as a description of the geographic area being plotted. Examples of
metadata for HA/DR response include when an image was taken, by whom it was taken and how many staff
work in the medical clinic seen in the image.
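
To make the idea concrete, the short sketch below expresses such a metadata record as a plain Python dictionary. Every value is hypothetical, and the field names are illustrative rather than drawn from any published metadata standard.

# A minimal sketch of an imagery metadata record; all values are hypothetical.
import json

image_metadata = {
    "image_id": "clinic_overview_001",              # hypothetical identifier
    "acquired_utc": "2010-01-14T15:32:00Z",         # when the image was taken
    "acquired_by": "volunteer aerial survey team",  # by whom it was taken
    "upper_left": {"lat": 18.551, "lon": -72.345},  # grid coordinates of the image
    "lower_right": {"lat": 18.540, "lon": -72.330},
    "resolution_m_per_pixel": 0.15,                 # spatial resolution
    "attribution": {
        "description": "Medical clinic, north district",
        "staff_on_site": 12,                        # attribute data layered over the image
    },
}

# Serialized, the record can travel as an independent file alongside the image,
# as described above, or be embedded in the geographic data file itself.
print(json.dumps(image_metadata, indent=2))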
The Global Disaster Alert and Coordination System (GDACS), 19 operated by the United Nations Office
of Coordination of Humanitarian Affairs (UN-OCHA), uses GIS metadata to generate alerts that are then plotted
onto a GDACS-provided map, and, in parallel, are then sent as information alerts by really simple syndication
(RSS) 20 or email. 21 This is done through use of a mapping standard defining externally-sourced information
15 Jeffrey Star and John Estes, 1990.
16 http://en.wikipedia.org/wiki/The_Ghost_Map
17 http://www.cdci.ca/HGIS_Mapping_v2.pdf
18 http://wiki.openstreetmap.org/wiki/EPSG:3857
19 http://www.gdacs.org/
20 http://www.rssboard.org/rss-specification
streams, plotting data onto a map, and then publishing the finished map on the Internet, on paper, or in a computer
archive. With those tasks accomplished, many subscribers of these maps then layer their
own metadata onto these “foundation” maps through Application Programming Interfaces (APIs), Java, or
similar scripting languages that allow them to create their own maps and information elements (see below for a
description of APIs.)
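
A minimal sketch of the subscription side of this workflow follows, written in Python with only the standard library. The feed address and the geographic element names are assumptions made for illustration, not details taken from GDACS documentation; a real subscriber would substitute the published feed and schema.

# Hedged sketch: read a disaster-alert RSS feed and extract plottable points.
# The feed URL and the geo:lat/geo:long element names are assumptions for illustration.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://www.gdacs.org/xml/rss.xml"                 # assumed address
NS = {"geo": "http://www.w3.org/2003/01/geo/wgs84_pos#"}      # common RSS geo vocabulary

with urllib.request.urlopen(FEED_URL, timeout=30) as response:
    tree = ET.parse(response)

alerts = []
for item in tree.iter("item"):
    lat = item.findtext("geo:lat", namespaces=NS)
    lon = item.findtext("geo:long", namespaces=NS)
    if lat and lon:
        alerts.append({
            "title": item.findtext("title", default=""),
            "lat": float(lat),
            "lon": float(lon),
        })

# Each alert can now be layered onto a foundation map through whatever
# API or scripting interface the subscriber prefers.
print(f"Parsed {len(alerts)} geolocated alerts")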
Web Browser Applications
Many Web applications that were originally intended for social networking are now being used for
disaster response assessments. Advances have been made in Web application services that enable
interoperability between them. Examples include the ability to map satellite imagery and embed third-party
datasets from applications such as Twitter and Flickr. The third party datasets can be collected from a variety of
sources including smartphones, social network services and volunteer input data sets. Web portals informing
HA/DR teams of current conditions in real time are being published on sites using Ushahidi, Riff, Crisis
Mappers Net and SwiftRiver. 22 An additional benefit to these sites being online is that they also provide
information to the general public. These websites are integrating data streams from Twitter, SMS, 23 and
volunteer databases in addition to satellite imagery and open source maps. Some Internet applications, such as
Bing Maps and Google Maps, have designed specialized versions of their services for mobile phones, and these are
now widely available.
Web Map Service, Web Mapping Tile Service (WMS/WMTS)
To share high-volume imagery on maps, the imagery can be published on the Web through a Web map
service (WMS), 24 a standard protocol for serving geo-referenced map images over the Internet. A WMS
generates images through a map server using data from a GIS database. WMS software reads the embedded
metadata for an image—location, resolution, number of pixels—and places the image on a 2D map. Image sets
are formatted as “tiles” or a series of strips that can be stitched together for easier data navigation. Some
vendors, such as Google, use a WMS (such as Google Maps and Google Earth) that processes the imagery into a
proprietary format that allows the user to view the processed imagery easily but not extract the image for
use in other mapping software. In other cases, a WMS allows the user to download raw images and use them
with other applications. In order for image sharing to occur, other mapping standards are required (such as using
a common map projection). Some service bureaus use another Web mapping protocol called Web mapping tile
service (WMTS), which serves the same function as WMS but is technically a different software application. 25
WMTS uses extensible markup language (XML) (see below) to interface with the imagery metadata to tile the
images onto a digital map.
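
For readers who have not seen one, the sketch below assembles a standard WMS GetMap request from the kinds of metadata fields just described (bounding box, pixel dimensions, projection, output format). The server address and layer name are placeholders; only the query parameters follow the WMS convention.

# Sketch of a WMS 1.1.1 GetMap request; the endpoint and layer name are placeholders.
from urllib.parse import urlencode

base_url = "https://example.org/geoserver/wms"    # placeholder WMS endpoint
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "disaster:post_event_imagery",      # hypothetical layer name
    "SRS": "EPSG:4326",                           # WGS84 geographic coordinates
    "BBOX": "-72.40,18.50,-72.25,18.60",          # min lon, min lat, max lon, max lat
    "WIDTH": "1024",                              # pixel dimensions of the returned tile
    "HEIGHT": "768",
    "FORMAT": "image/png",
}

getmap_url = f"{base_url}?{urlencode(params)}"
print(getmap_url)   # the returned image can be placed directly onto a 2D map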
Application Programming Interface (API)
The ability to add additional geographic datasets from disparate sources with imagery is accomplished
by using API tools. An API takes data from its source and processes it into a dataset that can be read by the
software interface of your choice. APIs have become the backbone for integrating third party information
sources onto maps. Examples of common APIs are Java, Sensor Model Language (SensorML), JavaScript
21 http://www.gdacs.org
22 http://swift.ushahidi.com/
23 http://www.shortcodes.com/howto_short-codes.html
24 http://www.opengeospatial.org/standards/wms
25 http://www.opengeospatial.org/standards/wmts
Object Notation (JSON), XML, Geography Markup Language (GML) and Keyhole Markup Language (KML),
all described below.
Google has created an API for users to manipulate its maps. 26 This allows developers of databases
(regardless of their format) to build datasets that work with Google Earth. Microsoft has a wealth of resources
available to developers on the Microsoft Bing Map site and Bing Map Blog. 27 The company also offers a
software development kit (SDK), which allows Bing Maps to work on Android devices (including RSS) using
its AJAX Control version 7.0 package. 28
The First API: Sensor Model Language (SensorML)
In 1998, the U.S. Government created the Committee on Earth Observing Satellites, endorsed by
agencies including the Environmental Protection Agency (EPA), NASA, the Defense Information Systems
Agency, and satellite manufacturers including General Dynamics and Northrop Grumman. This committee, in
cooperation with the Open Geospatial Consortium (OGC), developed SensorML. 29 SensorML offered the first
standard in which metadata associated with a digital cartographic image could be correlated and given a
description. Since the development of SensorML, other API standards have been developed such as XML 30
protocol (see explanation for XML below), which is related to SensorML. Another API is GML, which offers
similar features and protocol concepts. In 2005, OGC released version 3.1.1 of SensorML in which GML is
encapsulated. This allows protocols like GML, GEO 31 (eXtensible Hypertext Markup Language [X-HTML]), 32
JSON, XML and KML to add additional geographic data as new layers onto a virtual map.
JavaScript Object Notation (JSON/GeoJSON)
JavaScript Object Notation (JSON) is a lightweight data-interchange format derived from the
JavaScript programming language. Data, including imagery metadata, can be quickly parsed and imported into a text
format that is language-independent (in the programming sense) but uses conventions that are familiar to
programmers from C, C++, C#, Java, JavaScript, Perl, Python and others. 33 Its primary advantage is that it uses
less markup than XML. GeoJSON is the corresponding format used to encode and plot geographic data on a map. It
is used widely by the open source mapping community and supported by many GIS software packages.
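
A minimal GeoJSON feature, emitted here from Python, makes the comparison with XML concrete. The coordinates and property names are hypothetical and carry no particular schema.

# Minimal GeoJSON Feature; coordinates and property names are hypothetical.
import json

feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-72.3388, 18.5432],   # GeoJSON order is [longitude, latitude]
    },
    "properties": {
        "report": "Collapsed building, people trapped",   # hypothetical field report
        "source": "SMS",
        "reported_utc": "2010-01-13T09:12:00Z",
    },
}

print(json.dumps(feature, indent=2))   # ready to layer onto an open source map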
Extensible Markup Language (XML)
XML is a markup language used to interchange data between applications and encoding formats. XML use is widespread
and popular for sharing documents between competing applications (e.g., Microsoft Office to Apple iWork).
Software image processing applications can publish metadata into XML so it can be used immediately with
WMS-formatted maps and with specialized survey maps and applications. Other applications whose XML output
can be layered as external metadata onto crisis maps include Twitter, 34 RSS, 35 and YouTube. 36
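
As a companion to the GeoJSON example above, the sketch below emits a comparable image-metadata record as XML using Python's standard ElementTree module. The element and attribute names are invented for illustration and follow no published WMS or survey schema.

# Hedged sketch: publish image metadata as XML with the standard library.
# Element and attribute names are invented for illustration only.
import xml.etree.ElementTree as ET

record = ET.Element("imageMetadata")
ET.SubElement(record, "acquiredUtc").text = "2011-03-12T04:10:00Z"
ET.SubElement(record, "sensor").text = "aerial, 10 cm per pixel"

bbox = ET.SubElement(record, "boundingBox")
bbox.set("minLon", "140.85")   # bounding coordinates of the image, WGS84
bbox.set("minLat", "38.20")
bbox.set("maxLon", "141.05")
bbox.set("maxLat", "38.35")

print(ET.tostring(record, encoding="unicode"))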
26 http://code.google.com/apis/maps/documentation/javascript/
27 http://www.bing.com/community/site_blogs/b/maps/default.aspx and http://www.microsoft.com/maps/
28 http://www.bing.com/community/site_blogs/b/maps/archive/2011/03/31/bing-maps-android-sdk-available-on-codeplex.aspx
29 http://www.opengeospatial.org/standards/sensorml
30 http://www.w3.org/TR/REC-xml/
31 http://microformats.org/wiki/geo
32 http://en.wikipedia.org/wiki/XHTML
33 http://wiki.geojson.org/What_is_JSON%3F
34 https://dev.twitter.com/docs/using-search
Geography Markup Language (GML)
GML is an XML-based interface used to link GIS data to a Coordinate Reference System (CRS). GML is often
used as the reference schema for geospatial objects. Google’s KML, for example, can extract data from a GML
file, but not the other way around, because Google’s KML uses a different CRS and may not interoperate with
KML-embedded metadata. GML is continually updated by community users. 37
Keyhole Markup Language (KML)
KML is also an XML-based interface that uses Java-based programming to bind data from a remote
source into a new data layer. Three different versions of KML are widely used because of the ease with which it
can be used in Google Maps and Google Earth. Google Earth and Google Maps application resources can be found on
its Lat Long Blog. 38
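
For readers who have not looked inside a KML file, the fragment below writes a minimal placemark of the kind a volunteer layer might generate; the name, description, and coordinates are hypothetical. Note that KML, like GeoJSON, lists longitude before latitude.

# Minimal KML placemark written from Python; name, text, and coordinates are hypothetical.
kml_placemark = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Field hospital (temporary)</name>
    <description>Set up 14 January; capacity roughly 40 beds.</description>
    <Point>
      <!-- KML coordinate order is longitude,latitude[,altitude] -->
      <coordinates>-72.3388,18.5432,0</coordinates>
    </Point>
  </Placemark>
</kml>
"""

with open("field_hospital.kml", "w", encoding="utf-8") as f:
    f.write(kml_placemark)   # the file can be opened directly in Google Earth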
Application Standards: An Impediment in the Making
Data is no longer confined to static sources. The Internet social revolution offers data from a variety of
sites including Twitter, Facebook, RSS feeds, SMS and MMS, and many others. The structure of this data is
often extracted in simple comma-separated value (CSV) or delimiter-separated value (DSV) format. This has created an explosion
of possible sources of information that can be mapped. Developers have taken advantage of social media linking
tools and exported them to mapping solutions.
Applications used for mapping start with the simplicity of plotting reference data. But there are now
more than 25 different database variants, 75 database tools to import and export datasets, and 60 query tools
available for mapping. Some tools are specialized to run only on specific operating systems, such as Microsoft
Windows, Apple Mac OS, Linux, or Unix. 39 Some databases are only accessible using proprietary software clients, while
the ones that are based on Web browsers use different scripting languages such as ActiveX or Java.
Almost any programming language can be used for mapping applications and the decision of which is
often a matter of personal choice. There is no single answer for which languages are used for specific
applications. This issue is particularly challenging when HA/DR mapping sites are operated by volunteers. For
example, a volunteer helping with the publication of Twitter feeds onto a disaster map may use JavaScript, then
use GeoJSON with Flickr and XML with SMS messaging data. Each of these creates multiple GIS layers that
are then placed on maps. But by using multiple programming languages, there are opportunities for errors and
programming collisions.
Similarly, APIs, which are often written to extract information from one location to be read by an
interface, are managed at the discretion of the owners. APIs will often be made available for specific operating
systems and may not interoperate across platforms.
Simplified and interoperable application standards for reading and understanding datasets need to be
specifically developed for imagery data layers, allowing for open source inputs from various news feeds and
application platforms. Examples of some needed standards include keyword index taxonomies and tagging (or
35 http://www.rssboard.org/rss-specification
36 http://code.google.com/apis/youtube/2.0/developers_guide_protocol.html
37 http://www.opengeospatial.org/standards/gml
38 http://google-latlong.blogspot.com/
39 http://en.wikipedia.org/wiki/Comparison_of_database_tools
hash-tagging, the process of using a hash mark [#] before the word) often used with GPS coordinates. Currently,
there is no standard for generating a time stamp identification or user ID (source) for export to a dataset, which
gets transcribed onto other application layers such as KML. KML is often the sole input feed for
volunteer-generated HA/DR data, because there is a large community of developers that use Google Earth for
their source of maps and imagery. But there are serious issues with KML. When one portal uses imagery
(for example, Google) and collects data from its own data sources (for example, the KML API version 2), it cannot
be reused on a different service provider source of imagery (say Microsoft Bing Maps) because they do not
support KML natively. Additionally, the introduction of KML API version 3 creates new technical obstacles
with the different programming options available. KML directly links to the original source imagery as it is
being served and cannot be interpreted or extracted. This means work is duplicated to export a data set to
different APIs made for different image repositories. When supplemental image data is combined on different
platforms, this step becomes necessary. One data portal may have updated imagery for Region 1 while a second
data portal may have updated imagery for Region 2, but neither has both regions combined. Integrating the data
portals requires cycle time between the two service bureaus to enable interaction, which consumes valuable
time.
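
One way to picture the missing convention is the sketch below, which normalizes a crowd-sourced report into a single record carrying a timestamp, a source identifier, a tag, and WGS84 coordinates before it is exported to any particular layer format. The field names are illustrative only; no such standard exists today, as noted above.

# Illustrative only: a normalized crowd-report record of the kind a shared
# standard might define; no such standard exists today, as noted above.
import csv
from datetime import datetime, timezone

def normalize_report(source_id, tag, lat, lon, text):
    return {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),  # time stamp identification
        "source_id": source_id,   # e.g., an SMS short code sender or a Twitter handle
        "tag": tag,               # e.g., a hash-tagged keyword
        "lat": lat,               # WGS84 latitude
        "lon": lon,               # WGS84 longitude
        "text": text,
    }

reports = [
    normalize_report("sms:4636", "#medical", 18.5432, -72.3388, "Clinic needs supplies"),
]

with open("reports.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(reports[0].keys()))
    writer.writeheader()
    writer.writerows(reports)
# The same records could then be re-exported to KML, GeoJSON, or another layer format.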
Open standards have been created by the Global Disaster Alert and Coordination System (GDACS) 40 in
Europe, consisting of policies organized by UN-OCHA to format collected data, apply it to HA/DR maps,
and send the information out to its members. The data is imported and exported using RSS and email
notification.
For commonality, WMS is the protocol the majority of service map providers use to layer their
geographic reference datasets. Over the past year, a commercial firm in Germany called GeOps 41 created a non-
profit organization to determine how well developers’ image and map services conform to WMS standards. The company
currently monitors 270,000 WMS layers published by more than 4,500 worldwide service bureaus.
The Open Geospatial Consortium (OGC) 42 is a voluntary standards body with no specific mandate and
has no recognized governance body managing it. The OGC works with associations that adhere to
internationally recognized standards groups such as the International Organization for Standardization, Internet
Engineering Task Force (IETF) and World Wide Web Consortium, and has compiled roughly 30 different
application standards. 43 Adherence to these standards is not required and is often ignored for service delivery
reasons.
Commercial Software Applications for Publishing Geographic Information on the Web
• GeoBase (Telogis GIS software)
• Smallworld’s SIAS
• GSS MapXtreme
• PlanAcess
• Stratus Connect
• Cadcorp GeognoSIS
• Intergraph GeoMedia WebMap
• ESRI’s ArcIMS and ArcGIS Server
• Autodesk Mapguide
• SeaTrails AtlasAlive
• ObjectFX
• ERDAS APOLLO Suite
• Google Earth and Google Fusion
• MapServer
• GeoServer
40 http://www.gdacs.org
41 http://www.mapmatters.org/
42 http://www.opengeospatial.org/
43 http://www.opengeospatial.org/standards
Data Management
Options and Obstacles
Although there is a lot of technology designed for use in the field, optimal implementation is not always
straightforward. Stakeholders must align their needs collectively and convey them to vendors. Not only must the
technical requirements be mutually acceptable, but they must be delivered within cost constraints.
Image technology advances during the past 5 years have created exceptional opportunities through
lenses, chipsets (in particular CCDs), software and—in the near future—the ability to take multiple resolution
images simultaneously. In particular, digital imagery used with cartographic techniques has enabled new types
of maps to be produced that can be densely layered with datasets. The amount of detail that can be extracted
from an image has increased more than 100-fold since 2005. This capability has created innovation in the
embedding of post-disaster images onto maps in four dimensions: height, width, depth and time.
Images collected from the 2010 Haitian earthquake covered approximately 250 square kilometers (Port
Au Prince is 32 square kilometers) from a variety of commercial and government agency service bureaus. In
Honshu, Japan, the reconnoitered area was more than 5,000 square kilometers. The section of Miyagi Prefecture
where some of the most severe damage occurred required 10 times the amount of imagery as Port-Au-Prince
did. For each image, the file size and data that are transcribed and embedded or linked to it require vast
quantities of storage. The higher the resolution, the more storage the recorded files require.
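
A rough back-of-the-envelope calculation, using assumed values rather than figures from any specific mission, shows how quickly storage requirements grow with resolution and coverage area.

# Back-of-the-envelope storage estimate; every number here is an assumption.
coverage_km2 = 5_000        # reconnoitered area, roughly the Honshu figure above
resolution_m = 0.10         # 10 cm per pixel
bands = 3                   # e.g., red, green, blue
bytes_per_band = 1          # 8-bit samples, uncompressed

pixels_per_km2 = (1_000 / resolution_m) ** 2     # 100 million pixels per square kilometer
total_bytes = coverage_km2 * pixels_per_km2 * bands * bytes_per_band

# Roughly 1.5 terabytes uncompressed for a single pass, before any derived
# products, backups, or reach back copies are counted.
print(f"~{total_bytes / 1e12:.1f} TB uncompressed")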
Files of such imagery require storage services that scale rapidly into the terabyte size or larger. It should
be noted that storage of 10 petabytes 44 at any one facility constitutes a commercial data center classification,
requiring maintenance staff, backup power and redundant broadband access services. Data center services are
expensive to operate. Additionally, data of this scale and volume requires high performance network access. An
example of a global scale mapping effort is Microsoft’s Bing mapping product. Every month, imagery updates
are loaded onto its servers and each global update is estimated to average about 10 terabytes in size. 45
Some disaster responders have argued that the ability to browse the Web in the field is undesirable
because there is not enough bandwidth to access the data being served remotely, or the bandwidth that would be
adequate is prohibitively expensive. Most platforms on the Web host their data with a client-server processing
architecture, meaning that the client (in the field) is requesting data hosted on a server. That server will often be
located in a different country or even a different hemisphere than where the information is needed. Imagery
services may soon develop a standard that enables datasets to be compartmentalized and downloaded incrementally,
updating only the imagery data that each ground team defines as required.
Data Sharing
The Internet’s social capabilities have enabled hundreds of applications to be developed and integrated
with imagery, but that doesn’t mean they are all useful. Some social data streams are easily available but are
difficult to filter, or redundant, or are not informative. However, other datasets are highly valuable but not
accessible due to corporate or commercial interests, government policies, or law. Many government agencies
further complicate the issue by mandating software that cannot easily parse these data formats. Even when
44 1,000 terabytes = 1 petabyte = 10^15 bytes; 1 terabyte = 1,000 gigabytes; 1 gigabyte = 1,000 megabytes. Storage and file sizes typically are expressed in bytes in decimal form, while data transmission rates usually are in bits per second (bps). One byte = 8 bits. In binary terms, 1 megabyte = 1,024 kilobytes, 1 gigabyte = 1,024 megabytes, and so on.
45 http://en.wikipedia.org/wiki/Bing_Maps
governments and commercial entities do share datasets, the terms and conditions are often very strict, or they are
only available through a proprietary API and not downloadable to other network data storage facilities.
Copyright infringement, licensing rights, privacy regulations and proprietary data limits are additional
roadblocks to sharing data.
The most serious overlooked issues involve liability protections for both the publishers and the sources of
imagery and its data. As far as our research shows, there is no universally adopted Good Samaritan law that can
protect volunteers who translate emergency help messages, map them and distribute that map to response teams
in the field.
Another rising concern is who owns the finished product when data sources are published and used.
Many non-profit organizations do have Creative Commons license 46 terms and conditions that waive any
proprietary rights. Commercial use agreements for software such as Google Earth, Google Maps 47 and
Microsoft Maps 48 should be reviewed with legal counsel to understand content rights, distribution restrictions
and other limitations.
The Need for Data Interoperability
Developers using APIs know that not all user devices (browsers, netbooks, tablets, smart phones) are
compatible with their interface tools. Users need to be aware that APIs are constantly being upgraded and that
upgraded code may affect the API’s performance on their device. As these APIs improve, backwards
compatibility becomes an obstacle to end user device capability. Google’s API, now available as API Version
3, 49 is compatible with modern smart phones, where previous versions were not. The mobile version 3 is a
lightweight (small—32 kilobytes (KB) per tile) JavaScript design, enabling these devices to work within
acceptable operating parameters. As in all software development and vendor solutions, long-term support for
these APIs creates some obvious concerns. Google plans to support Version 2 of the API until 2013. Microsoft
offers an API available in its SDK for Microsoft Bing Maps. 50 However, the ability to interoperate or integrate
with KML is not easily accomplished. The need for consistent interoperability is clear, but it’s a recurrent
problem in many software services and not unique to mapping.
Collaborative efforts to add data layers to imagery are a priority for both public and private entities. The
U.S. Government has created a website on the topic, centered on the use of XML 51 when integrating data
supplied by Government agencies. Options offer opportunities, but they also risk conflicts, errors, and confusion.
Single-use image production not exportable to other portals during a disaster event creates a regrettable demand
for multiple sets of imagery, and thus extra service and processing, plus added data downloads by end users.
This creates congested networks and increased storage requirements.
46 http://creativecommons.org/
47 http://maps.google.com/help/terms_maps.html
48 http://www.microsoft.com/maps/product/terms.html
49 http://code.google.com/apis/maps/documentation/javascript/reference.html
50 http://www.microsoft.com/maps/developers/mapapps.aspx
51 http://xml.gov/
Standards for Cartography and GIS datasets
GDACS retrieves open source or non-restricted data by subscribing to RSS feeds from a variety of
sources and places them onto open source maps that UN-OCHA 52 created to help solve several problems in
HA/DR response.
The most commonly used map projection in HA/DR mapping is the World Geodetic System
(WGS). A projection is a transformation of the spherical or ellipsoid earth onto a flat map. There are many
different projections that can be chosen to preserve the area, shape or distance computations from the sphere to
the planar representation of a region. In order to take data from one projection and overlay it on a dataset with a
different projection, a transformation must be performed on the data that warps the original data into the new
projection. Every time data is projected or transformed into a new projection, the accuracy and precision of the
original data is compromised. Prior to 1958, there were several different projections commonly used for charting
and publishing maps. A singular global map standard began with WGS60 (1960), updated in 1980 as Geodetic
Reference System 80, which evolved into the WGS84 standard in 1984. With these standards, maps were
published with common reference points regardless of the scale or size of the map. Its adoption offered
information layering opportunities that could be transferable from one source reference map to another with no
errors. WGS84 has since been updated with the Earth Gravitational Model (EGM) 96. The United States Geological
Survey (USGS) uses WGS84 when mapping an earthquake’s epicenter and publishing location data. 53
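
A hedged sketch of such a transformation is shown below, using the third-party pyproj package (not mentioned in the text) to convert a WGS84 latitude and longitude into the EPSG:3857 Web mapping projection cited earlier. Each such reprojection is exactly the step at which the accuracy loss described above can accumulate.

# Sketch of a coordinate transformation; requires the third-party pyproj package.
from pyproj import Transformer

# WGS84 geographic coordinates (EPSG:4326) to the EPSG:3857 Web mapping projection.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

lon, lat = -72.3388, 18.5432             # hypothetical point near Port-au-Prince
x, y = transformer.transform(lon, lat)   # every reprojection risks a small loss of precision
print(f"Web Mercator: x={x:.1f} m, y={y:.1f} m")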
Open Source Map resources
Open source applications exist that enable developers to create new maps layered over satellite
imagery. It is important to note that when using open source maps or layers, there is no specific support or real-
time capability to troubleshoot any issues that may occur. The advantage to using open source maps as a
foundation is the speed with which you can add data and the lack of legal copyright considerations. Since 2004,
OSM 54 has been a pioneer in cartography and in the visualization of crisis mapping and continues to drive new
innovative technology concepts. OSM natively uses Openlayers Java. 55
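OSM-style tile servers address the world with a simple x/y/zoom scheme, which is what allows independent layers and clients to reference the same tiles. The Python sketch below computes a tile address from a latitude and longitude; the sample point and the tile URL template follow the conventional OpenStreetMap layout and are given only as an example.

import math

def latlon_to_tile(lat, lon, zoom):
    """Return the (x, y) tile indices covering a point at a given zoom."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example point (Port-au-Prince area) at zoom level 12.
x, y = latlon_to_tile(18.54, -72.34, 12)
print(f"https://tile.openstreetmap.org/12/{x}/{y}.png")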
There are several sources of open source cartography. With these maps, satellite imagery can be
overlaid using open standard software and APIs. Some commercial vendors have embraced this approach and
allow open source maps to be loaded onto their devices. Garmin, for example, offers the ability to layer your own
map 56 images onto several different models. Table 1 shows a few of the open source map providers registered in
the second quarter of 2011.
Table 1. Open Source Map Providers Registered in the Second Quarter of 2011
Map | Theme | Area
OpenStreetMap | general, cyclists, debugging | Worldwide
Information Freeway | general, almost real-time | Worldwide
OSM WMS Servers | general, Web map services | Worldwide
OpenSeaMap | nautical chart | Worldwide, multilingual: seas, oceans and waterways
OpenStreetBrowser | features highlighting | Europe
FreeMap | walkers | parts of the United Kingdom
52. http://www.reliefweb.int/
53. http://earthquake.usgs.gov/earthquakes/glossary.php#location
54. http://www.openstreetmap.org/
55. http://www.openlayers.org/
56. http://wiki.openstreetmap.org/wiki/OSM_Map_On_Garmin
Reit- und Wanderkarte | walkers and riders | Austria, Germany, Switzerland
TopOSM | walkers and riders | United States
OpenCycleMap | cyclists | Worldwide
YourNavigation | routing | Worldwide
OpenRouteService | routing | Europe
OpenOrienteeringMap | orienteering style | Worldwide
OpenPisteMap | skiing | some European and U.S. resorts
Bing OSM "Map App" | general | Worldwide
CloudMade | general, mobile and various other custom styles | Worldwide
OpenAviationMap | airspace indexing and classification | Braunschweig, Germany
MapQuest Open (beta) | general, routing | Worldwide
NearMap | up-to-date photomaps | populated areas of Australia
OSMTransport | public transport | Worldwide
ÖPNV-Karte (OpenBusMap) | public transport | Europe
OSM Mapper | debugging maps by Ito World Ltd |
Busroutes.in | public transport bus routes | India (Chennai; Bangalore and Delhi under development)
Source: Wikipedia 57
Hardware
The Data Center
The power of imagery carries a self-inflicted Achilles' heel that is hard to overcome. The sheer
volume of imagery available is becoming untenable for a variety of users and agencies, which is why it is now
as important to consider where the data is archived as how to process it. The volume of storage
space required to house satellite and aerial image data now demands professional data center facilities, which
quickly becomes an accessibility problem. Reach-back facilities that are out of harm's way can provide high-
speed access (10 gigabits per second [Gbps] and higher), yet the data may be retrievable at only 1 megabit per
second by those on the ground. Delivery of these resources to the field is a critical issue and can affect
how, when, and where users retrieve files.
Users can cache files on traditional devices such as laptops, notebooks, and some tablets; first-
generation smart phones have limited capacity, while current-generation tablets can store up to 64 gigabytes (GB)
(iPad 2). Google Earth for personal computers (PCs) and Macs can natively cache up to 2 GB of data
before a call to a data center to load image files is required. There is no easy way to know how much of
this cache is dedicated to global mapping files versus downloaded imagery updates. The caching feature
does not cache KML or KMZ files (a KMZ is a collection of KML files compressed into a single file) unless the files
include the images within the KML container, which most do not. Free geographic tools exist that
help users create customized cache settings and files. However, as Google Earth updates its application,
there is no guarantee that customized cache settings and files will continue to work, and the tools cannot
bypass Google Earth's 2 GB caching limit. 58 This illustrates the need for data server
access to be near the disaster zone if intensive imagery retrieval and regular updates are required.
57. http://en.wikipedia.org/wiki/OpenStreetMap
58. http://freegeographytools.com/2009/automating-the-google-earth-caching-process
It should be understood that Google Earth (and other similar programs) was never intended for HA/DR use. Google's
disaster response team recognizes these shortfalls and is changing both data center policy and hardware
constraints as a result.
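Whether a given KMZ actually bundles its imagery, and will therefore display without a network connection, can be checked directly because a KMZ is an ordinary ZIP archive. The Python sketch below lists a KMZ's KML members and any embedded image files; the file name is a placeholder.

import zipfile

IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".gif", ".tif", ".tiff")

def kmz_contents(path):
    """Split a KMZ's members into KML files and embedded image files."""
    with zipfile.ZipFile(path) as kmz:
        names = kmz.namelist()
    kml = [n for n in names if n.lower().endswith(".kml")]
    images = [n for n in names if n.lower().endswith(IMAGE_EXTS)]
    return kml, images

kml, images = kmz_contents("post_event_overlay.kmz")  # placeholder file name
print("KML files:", kml)
print("Embedded images:", images or "none - imagery must be fetched over the network")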
There are methods that can be used to get high-volume image datasets on the ground faster; one example
is the Map in a Box.
Moving the Data Center: Using a Map in a Box Solution
Data center mobility should be a key next-generation requirement for providing access to imagery in
the field. The goal should be to ensure that large image data and associated datasets are available as
close to the disaster event as possible, preserving scarce network connectivity for other critical application services
such as Voice over IP and real-time situational awareness. Reach-back and forward operating access to
large data storage facilities should be a priority for government and private institutions offering these services.
It is therefore recommended that a mobile data storage system be considered early in a response. The
processing capacity and data storage (hard drive space) should be sufficient to hold the pre-event and post-event
aerial and satellite imagery, along with the ancillary digital open source maps and applications required to cover
a forward operating area of a disaster site. The system should also contain two gigabit Ethernet network cards for
network access. The parameters of HA/DR forward operating headquarters should be defined by NGOs, IOs, and
government agencies to determine which groups are responsible for which geographic areas; that division will
determine the mobile storage system requirements. Port-au-Prince is an example: pre- and post-event imagery
covering every square inch of the city would require approximately 24 terabytes of data storage, or approximately
3 compact devices plus 1 optional network processing server that can also host mapping software if required. Each
unit consumes approximately 82 watts of power at maximum operating performance. The system would be connected to
stacked Ethernet switches for high-speed local area network access and to Wi-Fi access points. These systems can be
configured to be network-neutral for easy access by Apple, Windows, and Linux computers. They can also be equipped
with secure digital (SD) and micro SD card slots, allowing quick transfer of data onto memory cards (and vice
versa), which can then be moved into smart phones through micro SD memory cards or into tablet computers through
SD memory cards.
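The way such storage requirements scale can be sketched with simple arithmetic. All parameters below (coverage area, ground sample distance, bytes per pixel, number of image sets) are illustrative assumptions rather than figures from any specific collection; operational archives with tile pyramids, multiple acquisition dates, and working copies will be several times larger than this raw estimate.

# Back-of-envelope sketch of how imagery storage scales with area and resolution.
area_km2 = 750            # assumed forward operating area around a city
gsd_m = 0.15              # assumed 15 cm ground sample distance
bytes_per_pixel = 3       # assumed 8-bit RGB, uncompressed
image_sets = 2            # pre-event and post-event coverage

pixels_per_km2 = (1000 / gsd_m) ** 2
total_bytes = area_km2 * pixels_per_km2 * bytes_per_pixel * image_sets
total_tb = total_bytes / 1e12

units_needed = -(-total_tb // 8)   # assuming ~8 TB usable per compact unit
print(f"~{total_tb:.1f} TB raw, ~{int(units_needed)} storage unit(s) at 8 TB each")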
Updates to satellite imagery can be processed with open standard GIS metadata onto hot-swappable hard
drives and delivered through the normal course of resupply at the forward operating
headquarters. Hot-swappable drives carrying replacement or additional files can be quickly installed into
these units and instantly made available to users on the network. Supplying imagery
updates in this manner is likely to be more cost-effective than consuming expensive satellite or mobile phone
bandwidth and avoids congesting those networks unnecessarily.
For mapping applications to work with a Map in a Box configuration, file storage locations
need to be configurable by the user. In addition, new or upgraded APIs may be required, or new Java-based
scripts must be designed. This was prototyped during the Strong Angel III HA/DR demonstration in San Diego
in 2006 and worked very well; computer performance and capabilities are far superior today to what
was available then.
Mobile Smart Devices: Situational Awareness for Humanitarian Responders and the General Public
Mobile devices such as tablets and mobile phones are now the primary means of both collecting and
sharing information in a response effort. A January 2011 report published by the Mobile Computing Promotion
Consortium of Japan surveyed smart phone users: 55 percent used a map
application, the third most common application after Web browsing and email. 59
Android phones and iPhones both support Google Maps, while Windows phones use Bing Maps. Motorola
offers the ability to use Google or Microsoft, depending on which operating system is installed on the device.
MapBox, a custom Web map application, has created an application for Apple's iPad 2. 60 Some
telecommunications providers prepackage imagery options on these devices with embedded software. Others
have exclusive software add-on contracts that may prevent alternatives from being installed on the device when it
operates on that mobile carrier's network (figure 3 shows a map application on an iPad). The ability to overlay
information on these mapping applications with any utility varies by manufacturer. For example, some map
applications on smart phones do not recognize KML layers. In addition, not all wireless phone operating
systems are compatible with the underlying application platform, such as Google Maps or Google Earth, and
may only be compliant with a competitor's proprietary version, creating further export challenges. Smart
phones also vary by operating system, software ecosystem, application capabilities, memory capacity, network
speed, and energy consumption. Other variations in mobile phone compatibility include SMS character limits, MMS
compatibility, and third-party extensions such as GPS location services. Smartphone browser options vary as
well: some devices have their own native Web browser, and others use downloadable versions. Not all browsers support
add-on plug-ins such as Java, 61 Adobe Shockwave, 62 Flash, or KML 63 layers, or other specialized
software apps.
Delivering imagery to every platform generates redundant cycles of content production, increasing costs
and potentially delaying the publication of critical information. There are no international standards for devices,
applications, or hardware for HA/DR service delivery; like public emergency telephone numbers, standards
vary around the world. 64
Because of these application limitations, developers have created workarounds for a variety of smartphone
devices by "jailbreaking" the devices away from their registered cell service provider. There is significant risk
in doing so, including the manufacturer remotely "bricking" the device under the legal software agreements a user
signs at the time of purchase. Bricking completely disables the device from operating
and is extremely difficult to repair or undo. 65 The Electronic Frontier Foundation claims that it is legal to
jailbreak a device, and the question is currently before the courts. 66
59. http://www.mcpc-jp.org/english/pdf/20110128_e.pdf
60. http://mapbox.com/#/
61. http://www.java.com/
62. http://www.shockwave.com/
63. http://code.google.com/apis/kml/documentation/
64. http://en.wikipedia.org/wiki/Emergency_telephone_number#Emergency_numbers
65. http://en.wikipedia.org/wiki/Bricking
66. http://www.eff.org/press/archives/2010/07/26
In February 2009, Apple filed a patent application for technology that can identify users who jailbreak its devices,
including iPhones, iPods, and iPads. 67 Research in Motion (RIM) Blackberry devices capable of
tethering on the AT&T network have software workarounds that are also under scrutiny. 68
Telecommunications carriers are likewise monitoring modified smartphones operating on their
networks. Several carriers have notified their customers that any modified (jailbroken) smart device found
operating on their network will be disabled. 69
It is therefore critical that imagery services consider how this information can best be served to the widest
possible user audience without creating multiple duplications of layered imagery. Another important consideration
is battery consumption: as processing demands increase, devices consume power faster and require frequent
recharging, and in a disaster zone electrical power is itself a limited resource.
For context, during the Miyagi earthquake and tsunami of March 2011, comScore found that 93 percent
of all mobile phone users in the affected region were not using smartphones. 70 In response, Google created
multiple viewing options for the same data. Smartphone manufacturers are not aligned to a single source of
mapping technology, and their operating systems do not always support third-party application plug-ins.
The Global Smartphone Market Share
Table 2. Top Five Smartphone Vendors, Shipments, and Market Share During the Fourth Quarter of 2010 (Units in Millions) 71
Vendor | 4Q10 Units Shipped | 4Q10 Market Share | 4Q09 Units Shipped | 4Q09 Market Share | Year-over-Year Growth
Nokia | 28.3 | 28.0% | 20.8 | 38.6% | 36.1%
Apple | 16.2 | 16.1% | 8.7 | 16.1% | 86.2%
Research In Motion | 14.6 | 14.5% | 10.7 | 19.9% | 36.4%
Samsung | 9.7 | 9.6% | 1.8 | 3.3% | 438.9%
HTC | 8.6 | 8.5% | 2.4 | 4.5% | 258.3%
Others | 23.5 | 23.3% | 9.5 | 17.6% | 147.4%
Total | 100.9 | 100.0% | 53.9 | 100.0% | 87.2%
Table 3. Top Five Smartphone Vendors, Shipments, and Market Share, 2010 (Units in Millions) 72
Vendor | 2010 Units Shipped | 2010 Market Share | 2009 Units Shipped | 2009 Market Share | Year-over-Year Growth
Nokia | 100.3 | 33.1% | 67.7 | 39.0% | 48.2%
Research In Motion | 48.8 | 16.1% | 34.5 | 19.9% | 41.4%
Apple | 47.5 | 15.7% | 25.1 | 14.5% | 89.2%
67. http://www.patentvest.com/console/reports/docs/app/20100207721.html
68. http://crackberry.com/att-blackberry-bridge-download
69. http://www.tipb.com/2011/03/18/att-cracking-jailbroken-mywi-users/
70. http://www.comscore.com/jpn/Press_Events/Press_Releases/2011/2/Smartphone_Adoption_Continues_to_Grow_in_Japan
71. Source: IDC Worldwide Quarterly Mobile Phone Tracker, January 27, 2011.
72. IDC - http://www.idc.com/about/viewpressrelease.jsp?containerId=prUS22689111
Samsung | 23.0 | 7.6% | 5.5 | 3.2% | 318.2%
HTC | 21.5 | 7.1% | 8.1 | 4.7% | 165.4%
Others | 61.5 | 20.3% | 32.6 | 18.8% | 88.7%
Total | 302.6 | 100.0% | 173.5 | 100.0% | 74.4%
Smartphones to Smart Tablets
Smartphone and smart tablet technology offer advanced capabilities for the HA/DR
community. Tablet computing, once thought to be a small consumer market segment, has exploded with the
launch of Apple's iPad. 73 The second-generation release in 2011 brought new innovations in information browsing
and computing power, and competitors have launched new products in response to Apple's tablet success.
Table 4. Global Shipments of Tablet Operating Systems (Units in Millions) 74
Shipping Period | Q3 '10 | Q4 '10 | 2010
Apple iOS | 4.2 | 7.3 | 14.8
Android | 0.1 | 2.1 | 2.3
Others | 0.1 | 0.3 | 0.5
Total | 4.4 | 9.7 | 17.6
Table 5. Global Tablet Operating System Market Share (%)
Evaluation Period | Q3 '10 | Q4 '10 | 2010
Apple iOS | 95.5% | 75.3% | 84.1%
Android | 2.3% | 21.6% | 13.1%
Others | 2.3% | 3.1% | 2.8%
Total | 100.0% | 100.0% | 100.0%
The three key breakthroughs in tablet development are battery life, graphics, and small form factor A5
processors. Apple claims up to 10 hours of operation before the iPad 2 requires recharging, and the device is
available with tri-mode network connectivity, enhancing the user's ability to connect to a
variety of communications networks including Wi-Fi and third generation (3G) service in code division multiple
access (CDMA) 75 or Global System Mobile (GSM) 76 modes. Large solid-state storage further enhances performance
across its large software ecosystem. Google's Android open
source software approach expands imagery application opportunities that might otherwise be confined. Software
designed for Blackberry's new PlayBook will run Android apps and connect with all Wi-Fi standards and the next-
generation wireless long term evolution (LTE) and evolved high-speed packet access (HSPA+) network protocols. 77
73. http://www.apple.com/ipad/
74. Source: Strategy Analytics
75. Code division multiple access - http://en.wikipedia.org/wiki/Code_division_multiple_access; http://www.cdg.org/
76. Global System for Mobile Communications - http://en.wikipedia.org/wiki/Global_System_for_Mobile_Communications; http://www.gsmworld.com/
77. BlackBerry PlayBook with Wi-Fi 802.11 a/b/g/n; BlackBerry 4G PlayBook with Wi-Fi 802.11 a/b/g/n + WiMax; BlackBerry 4G PlayBook with Wi-Fi 802.11 a/b/g/n + LTE; BlackBerry 4G PlayBook with Wi-Fi 802.11 a/b/g/n + HSPA+. http://us.blackberry.com/playbook-tablet/?IID=us:bb:homepage_Learn%20more
As these devices become part of the social fabric, global CDMA 78 and GSM smart phones with map
applications are becoming widely available 79 and should allow HA/DR teams to view layered maps together with
post-event imagery, improving situational awareness of possible communications infrastructure damage. This
technology will initially be adopted by well-funded NGOs and public safety agencies; it remains to be seen
whether it can cascade to smaller NGOs and government public safety agencies with very long IT replacement
cycles. Users in regions such as Africa, Southeast Asia, and South America may adopt low-cost versions from
smaller vendors that are not compatible with name-brand applications and have limited network connectivity
options.
Figure 2. iPad with Health Pro software image 80 Figure 3. iPad with Jeppesen map 81
78. http://www.cdg.org/worldwide/index.asp
79. http://www.mobileworldlive.com/maps/
80. http://www.apple.com/ipad/
81. http://www.apple.com/ipad/
Figure 4. BlackBerry PlayBook by RIM 82
The tablet market was once thought to be a specialized segment used for unique applications and
services, such as logistical tasks including medical and marketing surveys. That is no longer the case, as Apple
proved with the launch of the iPad 2, selling approximately 400,000 units in 26 countries within 3 days. 83 RIM's
PlayBook (above) runs Android apps, 84 giving it access to thousands of applications written for Android. RIM's
foray into the tablet market is clear recognition that these devices are a large segment of the "smart" telephony
and Internet access market. By offering features similar to Apple and Motorola, the device provides rich graphics
capability in addition to long battery life and wireless high-speed access including LTE and HSPA+. These devices
offer critical advantages over smaller form factors, among them interchangeable keyboard formats for different
languages, large screens for Web applications (such as Google's language translator), and advanced graphics
processing that lets satellite imagery be viewed at the user's desired resolution. A key feature is that the
PlayBook can tether via Bluetooth to an existing Blackberry smartphone to use its network connectivity. If network
connectivity survives a disaster, these devices will play an important role in HA/DR responses anywhere services
are operational. With advanced cellular-on-wheels emergency communications platforms available in a variety of
configurations, repair times are reduced (see section V, Communications Networks).
82. http://ca.blackberry.com/playbook-tablet/
83. http://online.wsj.com/article/SB10001424052748704027504576198832667732862.html
84. https://market.android.com/
Figure 5. Motorola Xoom Figure 6. Samsung Galaxy
Every major manufacturer now has a tablet model either in the development pipeline or entering service.
This includes the Motorola Xoom with Android, 85 which has sold more than 100,000 units (figure 5), and the
Samsung Galaxy, which is selling well (figure 6). Verizon offers the Xoom with CDMA–Wi-Fi/LTE network
connectivity. 86 Samsung's Galaxy 87 tablet is available in two screen sizes and with
Wi-Fi/HSPA+ connectivity. LTE and HSPA+ are capable of download speeds of up to 22 megabits per
second (Mbps). All claim more than 10 hours of battery life with network connectivity enabled.
Other manufacturers such as Dell, Hewlett Packard, LG, and ViewSonic have thin-profile devices available in 7-inch
and larger sizes. The cost of these units ranges from $300 to $950 (U.S.).
85. http://www.motorola.com/Consumers/US-EN/Consumer-Product-and-Services/Tablets/ci.MOTOROLA-XOOM-US-EN.overview
86. http://www.motorola.com/Consumers/US-EN/Consumer-Product-and-Services/Tablets/ci.MOTOROLA-XOOM-US-EN.alt
87. http://www.samsung.com/global/microsite/galaxytab/
V. Communications Networks
Communications networks, including satellite, terrestrial wireless, and landline voice/data network
links, are vital to transmitting all forms of information to decisionmakers for analysis and action. Imagery has
become a cornerstone of factual information for disaster response and requires capable communication networks
to be shared. The explosive growth in telecommunication technology, networks, and devices has created new
applications within individual and group communities influencing consumer behaviors, business operations, and
volunteerism. These tools are driving network providers to develop faster network access services. Hardware
vendors are following suit with powerful handheld devices that rival regular desktop computers. Data is
becoming accessible to those who need it.
Mobile Network Services
Three different technologies are used for terrestrial-based mobile networks: GSM, CDMA, and the
Universal Mobile Telecommunications System (UMTS).
GSM is used worldwide, while CDMA is generally concentrated in North America, Japan, China, and India. 88 UMTS
networks are deployed in North America, Japan, Southeast Asia, Europe, and Africa. Each has next-generation
high-speed data capabilities, including 3G, fourth generation, HSPA+ 89 and LTE. Over the next several
years, these technology platforms may be pared down to two competing standards, GSM/EDGE and Wideband
Code Division Multiple Access (W−CDMA)/LTE/UMTS, although this forecast is not certain given the
constantly changing market and nature of the industry. Wireless access technologies are now capable of
delivering true high-speed broadband service to mobile phone devices; in particular, smartphone hardware is
equipped with next-generation chipsets. Ten years ago, the average mobile user connected at 14.4 to 28.8
kilobits per second. Today, smartphones can connect at speeds exceeding 20 Mbps, with higher speeds becoming
available within the next 3 years.
Global System Mobile (GSM)
The most popular mobile network solution used worldwide is the GSM technology platform, with a
follow-on system called EDGE (Enhanced Data rates for GSM Evolution). The technology is continually evolving, as
shown in figure 7. GSM uses time division multiple access (TDMA), whose timing constraints limit its maximum range
to approximately 35 kilometers from a tower under ideal conditions. 90 With extended-range configurations it is
possible to reach up to 120 kilometers. 91
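The 35-kilometer figure follows from GSM's timing-advance budget: the handset's signal must make the round trip to the tower within at most 63 bit periods of advance. The short Python check below reproduces the commonly cited limit.

# Where GSM's ~35 km cell limit comes from: 64 timing-advance steps (0-63),
# each one GSM bit period long, covering the signal's round trip.
c = 299_792_458           # speed of light, m/s
bit_period = 48 / 13e6    # one GSM bit period, ~3.69 microseconds
max_steps = 63            # maximum timing advance value

range_per_step = c * bit_period / 2        # ~554 m of one-way range per step
max_range_km = max_steps * range_per_step / 1000
print(f"{max_range_km:.1f} km")            # ~34.9 km, the usual 35 km figure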
88. http://en.wikipedia.org/wiki/List_of_mobile_network_operators
89. High Speed Packet Access - http://www.gsacom.com/news/gsa_319.php4
90. http://www.allbusiness.com/electronics/computer-electronics-manufacturing/6838169-1.html
91. http://www.allbusiness.com/electronics/computer-electronics-manufacturing/6838169-1.html
Figure 7. GSM and UMTS Product Roadmap Illustrating Data Download Speeds 92 (data from the 3rd Generation Partnership Project, 3GPP)
Code Division Multiple Access (CDMA)
The range of a mobile network depends on the frequencies used. In 2008, more than 180 CDMA mobile
carriers announced network upgrades to W-CDMA. 93 Backwards compatibility will be maintained for the
foreseeable future, and in 2010 the evolution continued with the addition of LTE. Overall, CDMA will continue to
be supported for years to come. However, as devices migrate to next-generation standards, use of first-
generation CDMA will be gradually discontinued, freeing the wireless spectrum it currently occupies for more
efficient use.
Universal Mobile Telecommunications System (UMTS)
UMTS is primarily used in Europe. One drawback of first-generation UMTS/EDGE handsets is higher power
consumption compared with GSM or CDMA service; these devices are still in wide use and production today.
W−CDMA and UMTS can be interoperable, allowing roaming between different mobile network operators if the
handset is a dual-mode model. Handset manufacturers offer dual-mode, tri-mode, and quad-mode phones.
92. http://www.gsacom.com/news/statistics.php4
93. http://www.cellular-news.com/story/34083.php
Wideband Code Division Multiple Access (W−CDMA)
Japan’s largest mobile network is based on W−CDMA technology and has essentially the same structure
as UMTS.
Figure 8. Comparison of Bandwidth Speeds 94
Figure 8 shows the time needed to download different kinds of popular content (songs, albums,
DVD movies, HD movies) over different communication technologies. Actual times vary based on the
wireless carrier's infrastructure backbone, tower capacity, and investments made in the network architecture.
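The arithmetic behind such comparisons is simply file size divided by usable throughput. The Python sketch below estimates download times for an illustrative imagery package at assumed, headline-class data rates; real-world rates are usually lower.

# Illustrative download-time estimate: file size and rates are assumptions.
file_mb = 250                      # e.g. one post-event imagery package, in megabytes
rates_mbps = {"EDGE": 0.2, "HSPA": 3.6, "HSPA+": 21, "LTE": 50}

for name, mbps in rates_mbps.items():
    seconds = (file_mb * 8) / mbps   # megabytes -> megabits, then divide by rate
    print(f"{name:6s} ~{seconds / 60:5.1f} minutes")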
94. http://www.gsacom.com/news/statistics.php4
Figure 9. Coverage Map Illustrating 3G W−CDMA and UMTS Coverage with HSPA Availability 95
Wireless Resiliency
In the 2004 Banda Aceh 9.3 earthquake and tsunami that killed 250,000 people, 60 percent of all mobile
towers in the region were destroyed, leaving no CDMA mobile data and only partial GSM capacity (although
SMS text messages, which ride on the narrowband signaling channel, were still able to get through). Repairs took
more than a year to complete. 96 In some areas, 80 to 90 percent of all landlines were damaged or destroyed. 97
Cellular on Wheels
Mobile wireless telecommunications carriers now provide mobile cellular tower systems that can be
brought to a site to augment or replace damaged communications infrastructure and can be operating within several
hours, depending on environmental conditions.
The January 2010 7.0 Haitian earthquake killed more than 200,000 people and took a terrible toll on
infrastructure; only three cellular base stations survived. 98 Within 6 months, however, the system was restored
to pre-quake service levels. The September 2010 7.1 Christchurch earthquake took down 24 cell
95. http://www.gsacom.com/news/statistics.php4
96. http://www.kuna.net.kw/ArticleDetails.aspx?language=en&id=1500693
97. http://www.kuna.net.kw/ArticleDetails.aspx?language=en&id=1500693; author liaison with PT Telkom counterpart while employed with Bell Canada, 2004.
98. http://www.digicelgroup.com/en/media-center/press-releases/achievements/digicel-update-on-situation-in-haiti
sites, almost all of the local system. 99 More were damaged in the magnitude 6.3 aftershock. On both occasions,
Cellular on Wheels (COW) mobile systems were brought to Christchurch, augmenting service within 7 days.
A major concern was fuel for generating power at the cell sites, a task the New Zealand government
assigned priority. Several base stations were not damaged but simply lacked emergency diesel fuel
replenishment. Service improved as fuel deliveries arrived, which also reduced the load on the towers that were
still connected to the power grid.
Table 6. Dependency of Frequency on Coverage Area of One Cell of a CDMA2000 Network 100
Frequency (MHz) | Cell radius (km) | Cell area (km2) | Relative cell count
450 | 48.9 | 7521 | 1
950 | 26.9 | 2269 | 3.3
1800 | 14.0 | 618 | 12.2
2100 | 12.0 | 449 | 16.2
Note in table 6 that the higher the frequency used, the shorter the supported cell radius becomes and the
more cell towers are needed. The lower the frequency, the less data bandwidth is available to share among the
users connected to that cell.
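The relationships in table 6 can be checked with a few lines of arithmetic: coverage area grows with the square of the cell radius, and the relative cell count is the ratio of the largest cell's area to each smaller cell's area. The Python sketch below reproduces the table's figures approximately.

import math

cells = {450: 48.9, 950: 26.9, 1800: 14.0, 2100: 12.0}   # MHz -> radius in km

areas = {f: math.pi * r ** 2 for f, r in cells.items()}   # area ~ pi * r^2
reference = areas[450]                                    # largest cell as baseline
for f, a in areas.items():
    print(f"{f:4d} MHz: area ~{a:7.0f} km2, relative cell count {reference / a:4.1f}")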
Figure 10. Traditional Mobile Base Station 101 Figure 11. Alcatel-Lucent Light Radio 102
The rapid deployment of cellular on wheels is improving dramatically. The Alcatel-Lucent Light Radio
weighs 300 grams (about 10 ounces) and is stackable. It also consumes very little power, eliminating large
generation and storage requirements, and it can operate on solar, wind, and/or battery power. Each cube fits in the
palm of a hand and fully integrates radio processing, antenna, transmission, and software management of
frequency. The device can operate on multiple frequencies simultaneously and work with existing
infrastructure. 103
99. http://tvnz.co.nz/national-news/request-limit-use-cellphones-3759932
100. http://en.wikipedia.org/wiki/CDMA2000
101. http://net-trixsolutions.com/images/base%20station.gif
102. http://www2.alcatel-lucent.com/blogs/corporate/2011/02/no-mobile-gridlock-universal-coverage-lightradio-saves-the-mobile-industry-billions/
103. http://www.youtube.com/watch?feature=player_embedded&v=I7QnGkKlwBw Tod Sizer - Bell Labs / Alcatel-Lucent