High Performance Computing for Satellite Image Processing and Analyzing – A ... (Editor IJCATR)
High Performance Computing (HPC) is a recently developed technology in computer science that evolved to meet increasing demands for processing speed and for analysing and processing huge data sets. HPC brings together several technologies such as computer architecture, algorithms, programs and system software under one canopy to solve advanced, complex problems quickly and effectively. It is a crucial element today for gathering and processing the large amounts of satellite (remote sensing) data that are the need of the hour. In this paper, we review recent developments in HPC technology (parallel, distributed and cluster computing) for satellite data processing and analysis. We attempt to discuss the fundamentals of HPC for satellite data processing and analysis in a way that is easy to understand without much prior background. We sketch the various HPC approaches, such as parallel, distributed and cluster computing, and the subsequent satellite data processing and analysis methods, such as geo-referencing, image mosaicking, image classification, image fusion and morphological/neural approaches for hyperspectral satellite data. Collectively, these works deliver a snapshot, tables and algorithms of recent developments in those sectors and offer a thoughtful perspective on the potential and the promising challenges of satellite data processing and analysis using HPC paradigms.
Digital Heritage Documentation Via TLS And Photogrammetry Case Study (theijes)
In the last decade, several traditional manual measurement techniques were used to document heritage buildings around the world; however, some of these techniques take a long time, often lack completeness, and may sometimes give unreliable information. In contrast, terrestrial laser scanning (TLS) surveys and photogrammetry have already been undertaken at several heritage sites in the United Kingdom and other European countries as a new method of documenting heritage sites. This paper focuses on using TLS and photogrammetry to document one of the important houses in Historic Jeddah, Saudi Arabia, the Nasif Historical House, as an example of Digital Heritage Documentation (DHD).
Accurate and rapid big spatial data processing by scripting cartographic algorithms: advanced seafloor mapping of the deep-sea trenches along the margins of the Pacific Ocean (Universität Salzburg)
When you georeference your raster data, you define its location using map coordinates and assign the coordinate system of the map frame. Georeferencing raster data allows it to be viewed, queried, and analyzed together with your other geographic data. The georeferencing tools on the Georeference tab allow you to georeference any raster dataset.
In general, there are four steps to georeference your data:
1. Add the raster dataset that you want to align with your projected data.
2. Use the Georeference tab to create control points that connect your raster to known positions in the map.
3. Review the control points and their errors.
4. Save the georeferencing result when you are satisfied with the alignment.
Satellite image processing is a set of techniques for enhancing raw images received from cameras or sensors mounted on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, in various applications.
International Refereed Journal of Engineering and Science (IRJES) (irjes)
International Refereed Journal of Engineering and Science (IRJES) is a leading international journal for the publication of new ideas, state-of-the-art research results and fundamental advances in all aspects of engineering and science. IRJES is an open access, peer-reviewed international journal whose primary objective is to provide the academic community and industry a venue for the submission of original research and applications.
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Topographic Information System as a Tool for Environmental Management, a Case... (iosrjce)
IOSR Journal of Environmental Science, Toxicology and Food Technology (IOSR-JESTFT) is a multidisciplinary peer-reviewed journal with reputable academics and experts as board members. IOSR-JESTFT is designed for the prompt publication of peer-reviewed articles in all areas of the subject. The journal's articles can be accessed freely online.
Geospatial Data Acquisition Using Unmanned Aerial Systems (IEREK Press)
The Rivers State University campus in Port Harcourt is one of the university campuses in the city of Port Harcourt, Nigeria, covering over 21 square kilometers and housing a variety of academic, residential, administrative and other support buildings. The university campus has seen significant transformation in recent years, including the rehabilitation of old facilities, the construction of new academic facilities and, most recently, the creation of new colleges, faculties and departments. The current view of the transformations within the university campus is missing from several available maps of the university. Numerous facilities have been constructed on the campus that are not represented on these maps, nor are the attributes associated with these facilities. Existing information on the various landscapes on the map is outdated and needs to be updated in light of recent changes to the university's facilities and departments. This research article aims to demonstrate the effectiveness of unmanned aerial systems (UAS) in geospatial data collection for physical planning and mapping of infrastructure at the Rivers State University Port Harcourt campus by developing a UAS-based digital map and tour guide for RSU's main campus covering all colleges, faculties and departments, offering visitors, staff and students location and attribute information within the campus.
Methodologically, unmanned aerial vehicles were deployed to obtain current visible imagery of the campus following its growth and increasing infrastructural development. At a flying height of 76.2 m (250 ft), a DJI Phantom 4 Pro UAS equipped with a 20-megapixel visible camera was flown around the campus, generating imagery with a spatial resolution of 1.69 cm per pixel. To support 3D modeling, the visible imagery was acquired using the flight-planning software DroneDeploy with a near-nadir angle and 75 percent front and side overlap. Vertical positions were referenced to the World Geodetic System 1984 (WGS 84), and horizontal positions to the WGS 84 universal transverse Mercator (UTM) projection. To match the UAS data, GCPs were transformed to UTM zone 32 north.
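As a rough illustration of how these flight parameters relate, the exposure and flight-line spacing implied by the stated GSD and the 75 percent overlaps can be back-computed. The 5472 x 3648 frame size is an assumption about a typical 20-megapixel camera, not a figure from the text:

```python
# Sketch under assumptions: the 20 MP frame is taken as 5472 x 3648 pixels;
# the 1.69 cm/pixel ground sample distance (GSD) is the value stated in the text.
gsd_m = 0.0169                  # ground sample distance, metres per pixel
frame_px = (5472, 3648)         # (across-track, along-track) pixels, assumed
front_overlap = 0.75            # overlap between successive exposures
side_overlap = 0.75             # overlap between adjacent flight lines

# Ground footprint of a single frame.
footprint_x = frame_px[0] * gsd_m   # across-track width in metres
footprint_y = frame_px[1] * gsd_m   # along-track length in metres

# Spacing between successive exposures and between adjacent flight lines:
# each new frame advances by the non-overlapping fraction of the footprint.
shot_spacing = footprint_y * (1.0 - front_overlap)
line_spacing = footprint_x * (1.0 - side_overlap)
```

Under these assumptions a frame covers roughly 92 m by 62 m of ground, so the drone triggers an exposure about every 15 m along track and flies lines about 23 m apart.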
Finally, the UAS-derived deliverables included dense point clouds, a DSM, and an orthomosaic, a geometrically corrected aerial image that provides an accurate representation of an area and can be used to measure true distances.
SDSC Technology Forum: Increasing the Impact of High Resolution Topography Da... (OpenTopography Facility)
High-resolution topography is a powerful tool for studying the Earth's surface, vegetation, and urban landscapes, with broad scientific, engineering, and educational applications. Over the past decade, there has been dramatic growth in the acquisition of these data for scientific, environmental, engineering and planning purposes. In the US, the U.S. Geological Survey is undertaking the 3D Elevation Program (3DEP) to map the entire lower 48 states with lidar by 2023.
The richness of these topography datasets makes them extremely valuable beyond the application that drove their acquisition, and they are thus of interest to a large and varied user community. A cyberinfrastructure platform that enables users to efficiently discover, access and process these massive volumes of data increases the impact of investments in collecting the data. It catalyzes scientific discovery and informs the critical decisions, made across our nation every day, that depend on elevation data, ranging from the immediate safety of life, property, and the environment to long-term planning for infrastructure projects.
Join us to hear about the motivations, technology, and data assets behind the National Science Foundation-funded OpenTopography platform, which aims to democratize access to high-resolution topographic data. OpenTopography's innovation is in co-locating massive volumes of topographic data with processing tools that enable users with varied expertise and application domains to quickly and easily access and process data, enabling innovation and decision making.
Visualising large spatial databases and Building bespoke geodemographics (Dr Muhammad Adnan)
This presentation outlines my work at Local Futures and my PhD research. I worked on a combined project between Local Futures and UCL, and the presentation starts by giving an introduction to that project. My PhD investigated the creation of real-time bespoke geodemographics, and this presentation covers the work I did during the PhD journey.
Land cover maps supply information about the physical material at the surface of the Earth (grass, trees, bare ground, asphalt, water, etc.). Usually they are 2D representations, presenting the variability of land cover with latitude and longitude (or other earth coordinates). The possibility of linking this variability to terrain elevation is very useful because it permits investigation of probable correlations between the type of physical material at the surface and the relief. This paper describes an approach for obtaining 3D visualizations of land cover maps in a GIS (Geographic Information System) environment. In particular, Corine Land Cover vector files for the Campania Region (Italy) are considered: the transformed raster files are overlaid on a DEM (Digital Elevation Model) of adequate resolution, and 3D visualizations are obtained using GIS tools. The resulting models are discussed in terms of their possible use to support scientific studies on Campania land cover.
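Once the land cover vector file has been rasterized onto the DEM grid, one simple way to probe the cover-relief correlation the abstract mentions is a per-class elevation summary. A toy sketch with invented class codes and elevations (not actual Corine codes or Campania data):

```python
import numpy as np

# Toy stand-ins for a DEM and a co-registered rasterized land cover grid
# (same shape, same cell size). Class codes are illustrative only.
dem = np.array([[ 10.,  20.,  30.],
                [100., 120., 140.],
                [700., 800., 900.]])
cover = np.array([[1, 1, 2],
                  [2, 2, 3],
                  [3, 3, 3]])   # 1 = water, 2 = grass, 3 = forest (assumed legend)

# Mean elevation per land cover class: a first look at how cover relates to relief.
stats = {c: dem[cover == c].mean() for c in np.unique(cover)}
```

With real rasters the same boolean-mask pattern extends directly to medians, histograms, or slope derived from the DEM, giving a quantitative counterpart to the 3D visualizations.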
A Study on Data Visualization Techniques of Spatio-Temporal Data (IJMTST Journal)
Data visualization is an important tool for analyzing complex spatio-temporal data. Spatio-temporal data can be visualized using 2D, 3D or other types of maps. Cartography is the major technique used in mapping. The data can also be visualized by placing different layers of maps one on another, which is done using GIS. Many data visualization techniques are in use, but the choice of technique must be driven by the application requirements.
Visualizing 3D atmospheric data with spherical volume texture on virtual globes
Jianming Liang (a, b), Jianhua Gong (a, c), Wenhang Li (a, c, *), Abdoul Nasser Ibrahim (a, c)
(a) State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, P.O. Box 9718, Beijing 100101, China
(b) University of Chinese Academy of Sciences, Beijing 100049, China
(c) Zhejiang-CAS Application Center for Geoinformatics, Zhejiang 314100, China
(*) Corresponding author. Tel./fax: +86 10 64849299. E-mail: mylihang@163.com (W. Li).
Computers & Geosciences 68 (2014) 81–91. http://dx.doi.org/10.1016/j.cageo.2014.03.015
Article history: received 19 November 2013; received in revised form 24 March 2014; accepted 26 March 2014; available online 3 April 2014.
Keywords: volumetric ray-casting; virtual globe; atmospheric volume data; spherical volume texture
Abstract: Volumetric ray-casting is a widely used algorithm in volume visualization, but implementing this algorithm to render atmospheric volume data that cover a large area on virtual globes constitutes a challenging problem. Because atmospheric data are usually georeferenced to a spherical coordinate system described by longitude, latitude and altitude, adaptations to the conventional volumetric ray-casting method are needed to accommodate spherical volume texture sampling. In this paper, we present a volumetric ray-casting framework to visualize atmospheric data that cover a broad but thin geographic area (because of the thinness of Earth's atmosphere). Volume texture conforming to the spherical coordinate system of a virtual globe can be created directly from the spherical volume data to avoid oversampling, undersampling or a loss of accuracy due to reprojecting and resampling such data into a Cartesian coordinate system. Considering the insignificant physical thickness of the atmosphere of the Earth, the ray-casting method presented in this paper also allows for real-time vertical scaling (exaggeration of the altitudinal range) without the need to re-process the volume texture, enabling convenient visual observation of the altitudinal variations. The spherical volume ray-casting method is implemented in a deferred rendering framework to integrate the volume effects into a virtual globe composed of a variety of background geospatial data objects, such as terrain, imagery, vector shapes and 3D geometric models.
© 2014 Elsevier Ltd. All rights reserved.
1. Introduction
Former US Vice President Al Gore put forth his visionary Digital Earth (DE) concept in 1998, defined as a computer-networked virtual representation of the physical Earth based on georeferenced heterogeneous data sources (Gore, 1999). Since the Gore speech, many DE prototypes and commercial applications have emerged and have continued to evolve over the last decade, even well beyond Gore's vision (Goodchild et al., 2012). Pioneering DE systems such as Google Earth (GE) and NASA WorldWind (WW) have been developed to construct a realistic replica of the planet, complete with global satellite imagery, digital terrain models (DEM), vector data and three-dimensional (3D) cities. As we advance toward the year 2020, DE systems should become a dynamic framework for sharing information globally and improving our collective understanding of the complex relationships between society and the environment in which we live (Craglia et al., 2012), because the first principle of DE is unrestricted accessibility to DE by all of humankind, whether they are individuals, government agencies, non-governmental organizations (NGOs), or for-profit and not-for-profit organizations (Ehlers et al., 2014).
Currently, DE is more focused on information about the Earth's surface, from the atmosphere down to the biosphere. In a classical DE application, global terrain and imagery are the most fundamental data layers, for which the conventional out-of-core data management and level-of-detail (LOD) rendering schemes, e.g., P-BDAM (Cignoni et al., 2003), have been sufficiently optimized to support real-time exploration of a theoretically unlimited amount of data. However, neither GE nor WW provides any volume rendering functionalities, not to mention a practical mechanism for the interactive visualization of regional- or planetary-scale volumetric data, e.g., atmospheric modeling results. Instead, many GE users in academia rely heavily on Keyhole Markup Language (KML) to visualize, locate and navigate through their own geospatial data as two-dimensional (2D) overlays or 2.5D polygons (Bailey and Chen, 2011). For example, the progressive dispersion of
volcanic gas was visualized in GE by animating an array of KML
polygons on a timeline (Wright et al., 2009). Volumetric data scanned by space-borne meteorological radar were represented as 2D vertical profiles using KML in GE. Such visualization techniques only present a partial view of the original volumetric data and may not be able to provide a holistic representation that allows viewers to easily understand the data as a whole.
Volumetric visualization has been broadly applied in academia and industry to facilitate visual analytics and simulate participating media in applications such as medical image rendering (Zhang et al., 2011), cloud simulation (Elek et al., 2012), and geoscience data visualization (Patel et al., 2010). There are many open-source volume visualization toolkits available online. For instance, the Visualization Toolkit (VTK) and Voreen are freely available volume rendering engines that incorporate many sophisticated algorithms. One of the earliest software systems to support fully interactive 3D visualization of time-varying volumetric weather simulations was Vis5D, which also provides an open-source API to enable developers of other systems to incorporate Vis5D's functionality (Hibbard et al., 1994). The 2004 IEEE Visualization Contest encouraged participants to develop new tools to visualize the various types of behavior exhibited by a hurricane volume dataset. The following applications were among the three winning entries to the contest: (1) SimVis (Doleisch et al., 2004) was an interactive visual analysis software for multi-variate and time-dependent 3D simulation data on unstructured grids; (2) AtmosV (Doleisch et al., 2004) was an immersive prototype application derived from the Immersive Drilling Planner to visualize large multi-variate atmospheric datasets; (3) the OpenGL application (Greg and Christopher, 2004) used multithreading to maximize the interactivity of the visualization. Nevertheless, these visualization toolkits only provide a pure volumetric rendering environment that lacks the ability to integrate various geospatial data sources. In contrast to these general-purpose rendering toolkits, DE offers as one of its major benefits convenient viewer access to a variety of georeferenced data sources, because DE plays the role of a knowledge engine (Krzysztof and Hitzler, 2012) rather than a pure rendering engine. Consequently, the implementation of volume visualization in DE must be conducted bearing in mind that such functionalities can easily be integrated with a regular virtual globe, set in an enriched geographic background that allows users to comprehend the context.
Graphics processing units (GPUs) have largely been used to accelerate volumetric rendering for interactive applications (Monte, 2013). Volumetric data in video memory are present in a GPU-compatible format, i.e., 3D texture or volume texture, which is supported in either DirectX or OpenGL to facilitate direct volumetric rendering. In fact, a volume texture is a collection of 2D textures in the form of slices. Volumetric ray-casting (Kruger and Westermann, 2003), shear-warp (Lacroute and Levoy, 1994) and splatting (Westover, 1991) are three classic direct volume rendering methods. Because volumes are determined on a pixel-by-pixel basis, ray-casting can generate high-quality images without any blurring or loss of detail (Dye et al., 2007). Volumetric ray-casting is perhaps the most researched method due to its superior rendering quality, easily understood fundamental principles and convenient GPU-based implementation.
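The pixel-by-pixel principle behind ray-casting can be sketched as a short CPU ray-marcher: sample a density volume at fixed steps along one ray and composite opacity front to back. This is a toy nearest-neighbour illustration of the general technique, not the paper's GPU implementation:

```python
import numpy as np

def march_ray(volume, origin, direction, step=0.5, n_steps=64):
    """Front-to-back compositing of scalar density along a single ray.

    volume    : 3D array of densities in [0, 1], indexed on a unit grid
    origin    : ray start, in volume index coordinates
    direction : unit direction vector
    Returns the accumulated opacity (alpha) in [0, 1].
    """
    pos = np.asarray(origin, float)
    d = np.asarray(direction, float)
    alpha = 0.0
    for _ in range(n_steps):
        idx = np.floor(pos).astype(int)        # nearest-neighbour sampling
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                               # ray has left the volume
        a_sample = volume[tuple(idx)] * step    # opacity contributed by one step
        alpha += (1.0 - alpha) * a_sample       # front-to-back "over" operator
        pos += d * step
    return alpha
```

A GPU fragment shader does the same loop per screen pixel, with trilinear texture filtering and an early exit once alpha saturates; the compositing formula is identical.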
A few attempts have been made to implement GPU-based volumetric ray-casting in virtual globe applications (Yang and Wu, 2010; Li et al., 2011, 2013). Yang and Wu (2010) created volume texture directly from electromagnetic data in a geographic coordinate system (GCS) (Bugayevskiy and Snyder, 1995) and presented a volume ray-casting method to visualize them on a virtual globe. Li et al. (2013) reprojected and resampled GCS-based atmospheric volume data into a Cartesian volume space for visualization in WW. However, this type of reprojection may lead to oversampling or undersampling of volumetric data whose voxels were originally described by longitude, latitude and altitude. Moreover, volume data in geosciences, such as atmospheric modeling results, may cover large geographic areas with a very small altitudinal range, calling for a ray-casting method that supports on-the-fly vertical exaggeration on virtual globes.
We herein present a GPU-based volumetric ray-casting frame-
work to accomplish the following:
(1) Direct visualization of GCS-based volumetric data (particularly
atmospheric data in this study) on virtual globes, without
resorting to data resampling and reprojection techniques, to
provide a more faithful representation of the original data
contents as they were recorded;
(2) Real-time vertical scaling in GCS for more convenient observa-
tion of altitudinal variations;
(3) Seamless integration of atmospheric volume data with other
data contents on virtual globes to provide enriched geographic
backgrounds for interactive volume analysis.
Fig. 1. Geographic setting of Hurricane Isabel.
J. Liang et al. / Computers & Geosciences 68 (2014) 81–91
2. Data preparation
Hurricane Isabel (Fig. 1) was the strongest and most devastat-
ing hurricane that occurred in 2003. Originating near the Cape
Verde Islands in the tropical Atlantic Ocean on September 6, the
hurricane moved northwestward and strengthened steadily,
reaching peak winds of 165 mph (265 km/h) on September 11
(PBS&J, 2005).
The Hurricane Isabel dataset (Table 1) used in this study was
provided by the National Center for Atmospheric Research (NCAR)
and the U.S. National Science Foundation (NSF), who generated
simulation data for Hurricane Isabel using the Weather Research
and Forecasting (WRF) Model.
3. Typical characteristics of atmospheric volume data
Unlike most commonly observed volumetric data, such as
medical images, which are not georeferenced and can conveni-
ently be visualized in an isolated environment without resorting
to multi-source data integration, atmospheric volumetric data
have the following characteristics:
(1) Atmospheric volumetric data are composed of arrays of
spherical voxels located by longitude, latitude and altitude.
Spherical voxels have variable spatial sizes along the parallel
and meridian. Because longitude is not a uniform unit of
measure, one degree of longitude gradually shortens from
111.321 km at the equator to 55.8 km at a latitude of 60° and
finally to zero at the poles.
(2) Atmospheric volume data cover broad geographic areas, in some
cases even the entire Earth, but represent a very narrow vertical
range due to the thinness of Earth's atmosphere. For example,
Hurricane Isabel had a horizontal extent of approximately
2000 km, which was 100 times greater than its vertical extent
of 19.8 km. However, there are 100 volume slices in the dataset
along the vertical dimension, so it is necessary to scale up the
altitudinal range to highlight the hurricane's vertical features.
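The shrinkage of a degree of longitude described in point (1) can be checked numerically. The following sketch assumes a perfectly spherical Earth (radius 6371 km), so its values differ slightly from the ellipsoidal figures quoted above; the function name is illustrative.

```python
import math

def km_per_degree_longitude(lat_deg, radius_km=6371.0):
    """Ground length of one degree of longitude at a given latitude,
    assuming a spherical Earth (an approximation of the ellipsoidal
    values quoted in the text)."""
    circumference = 2.0 * math.pi * radius_km
    return (circumference / 360.0) * math.cos(math.radians(lat_deg))

print(km_per_degree_longitude(0.0))    # ~111.19 km at the equator
print(km_per_degree_longitude(60.0))   # ~55.60 km at 60 degrees
print(km_per_degree_longitude(90.0))   # ~0 km at the poles
```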
4. The spherical and Cartesian coordinate systems
A typical virtual globe platform makes use of two types of
coordinate systems to describe 3D objects in different contexts,
i.e., Cartesian coordinate systems (CCS) and GCS.
A CCS describes a position in 3D space by a triple set (X, Y, Z)
(Fig. 2). The Cartesian origin (0, 0, 0) of a virtual globe usually
corresponds to the spherical or spheroidal center.
A spherical coordinate system describes a position in 3D space
by a radius, a polar angle, and an azimuthal angle (Fig. 2). A GCS is
a variant of the spherical coordinate system widely used in
geosciences to describe the spatial locations of georeferenced data
by longitude, latitude and altitude (Snyder, 1987).
In real-time rendering, 3D geometric primitives, e.g., triangles,
have to be represented in Cartesian coordinates because the GPU
rendering pipeline does not directly take in spherical coordinates.
Nevertheless, GCS is more convenient for describing spherical
locations and for helping human beings to imagine objects on
the Earth. Therefore, many georeferenced data are natively regis-
tered to GCS and require on-the-fly transformation into Cartesian
coordinates for rendering in a virtual globe. For example, geor-
eferenced satellite images are clamped onto Cartesian triangular
meshes as texture layers.
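The on-the-fly GCS-to-Cartesian transformation mentioned above can be sketched as follows, assuming a spherical globe with its Cartesian origin at the sphere center; the function name and radius value are illustrative, and a spheroidal globe would substitute the appropriate radii.

```python
import math

def gcs_to_cartesian(lon_deg, lat_deg, alt_km, r_km=6371.0):
    """Transform geographic coordinates (GCS) into Cartesian
    coordinates (CCS) on a spherical globe whose Cartesian origin
    is the sphere center, as described in the text."""
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    rho = r_km + alt_km                    # distance from the globe center
    x = rho * math.cos(lat) * math.cos(lon)
    y = rho * math.cos(lat) * math.sin(lon)
    z = rho * math.sin(lat)
    return (x, y, z)

# A surface point on the equator at the prime meridian lies on the +X axis:
print(gcs_to_cartesian(0.0, 0.0, 0.0))   # → (6371.0, 0.0, 0.0)
```

This forward transform is the inverse of the per-sample conversion given later in Eq. (2).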
A recent study (Li et al., 2013) proposed a reprojection of GCS-
based atmospheric volume data into a CCS so that each voxel has a
uniform spatial size, unlike spherical voxels (Fig. 3): first, a CCS-
aligned bounding volume is calculated and an average voxel size is
determined to create a volume texture; second, the original voxels
are transformed into a CCS and resampled into the volume texture,
which can then be visualized by implementing the generic volume
ray-casting algorithm in a virtual globe application.
However, such a direct implementation of the volume ray-
casting algorithm on virtual globes may cause the following data-
related problems:
(1) Oversampling or undersampling. Because the spherical voxels
have variable sizes due to the angular unit (Snyder, 1987), a
uniform voxel resolution needs to be determined to reproject
such volume data into a CCS.
(2) Inefficient storage. A larger Cartesian bounding volume (Fig. 4)
is needed to fully contain the spherical volume data for
reprojection, resulting in many void spaces in the newly
created volume texture.
(3) Inconvenience in vertical scaling on virtual globes. After an
atmospheric volume dataset is reprojected to a CCS, vertical
scaling cannot be dynamically applied to zoom in on the
altitudinal variations because the Cartesian voxels no longer
conform to the spherical shells of the virtual globe (Fig. 4).
5. Ray-casting method for spherical volume texture on virtual
globe
5.1. Spherical volume texture
In response to the above-mentioned problems related to the
implementation of volume ray-casting on virtual globes, we
introduce the notion of “spherical volume texture” (SVT), which
is a replica of the GCS-based volume data in video memory.
Table 1
Summary of the Hurricane Isabel dataset.

Dimensions: 500 × 500 × 100
Longitudinal range: 83° W–62° W
Latitudinal range: 23.7° N–41.7° N
Altitudinal range: 0.035 km–19.835 km
Cartesian distance: 2139 km (east–west), 2004 km (north–south), 19.8 km (vertical)
Time step: 1 h
Time span: 48 h (1 simulated hour between time steps)
Uncompressed data size per time step: 95 MB
Fig. 2. Cartesian and spherical coordinate systems.
For example, suppose we want to create a volume texture for a
single time step of the Hurricane Isabel dataset: first, a volume
texture of 512 × 512 × 100 is allocated in video memory, with each
voxel representing a floating-point value; second, the spherical
volume data are copied to the volume texture on a voxel-by-voxel
basis. Like a generic volume texture, an SVT adheres to a normal-
ized texture coordinate system, which means that in all three
dimensions, the texture coordinates range from 0 to 1 (Fig. 5).
The spherical bounding volume of an SVT (Fig. 6) is composed
of two spherical shells with four walls on the edges, with each
voxel indexed by its longitude, latitude and altitude. The spatial
size of a voxel cube tends to decrease toward the polar regions.
Next, we introduce the conventional volume ray-casting algo-
rithm and discuss how to adapt the algorithm so that an SVT can
be visualized directly on virtual globes.
5.2. The conventional volume ray-casting method
Volume ray-casting is an image-space rendering method that
transforms a volume texture into a 2D image for display. To obtain
a screen-space image, a ray is shot from every screen pixel through
the volume to fetch samples and produce the final color through
certain transfer functions. Before the actual ray-casting algorithm
is executed, the entry and exit positions of each ray have to be
obtained. A simple technique for finding the ray-volume intersec-
tions involves the following steps:
(1) Find a model-space bounding box for the volume data and
tessellate it with triangle primitives for rendering.
(2) Prepare two 2D texture buffers to store the ray entry and exit
positions.
(3) Enable back-face culling and draw the bounding box to render
the front-face positions to the texture buffer for storing ray
entry positions.
(4) Enable front-face culling and draw the bounding box to render
the back-face positions to the texture buffer for storing ray exit
positions.
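For a Cartesian bounding box, the entry and exit positions that steps (2)-(4) rasterize into texture buffers can also be computed analytically with the classic slab method. The sketch below is a CPU-side analogue for illustration, not the rasterization approach described above; all names are illustrative.

```python
def ray_box_intersect(origin, direction, box_min, box_max):
    """Entry/exit distances of a ray against an axis-aligned box
    (slab method). Returns (t_entry, t_exit), or None if the ray
    misses the box -- the same information the two culling passes
    in steps (3) and (4) write into the texture buffers."""
    t_entry, t_exit = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:      # ray parallel to slab and outside it
                return None
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_entry, t_exit = max(t_entry, t0), min(t_exit, t1)
    if t_entry > t_exit or t_exit < 0.0:
        return None
    return (max(t_entry, 0.0), t_exit)

# A ray along +X through a box spanning x in [-1, 1] enters at t=1, exits at t=3:
print(ray_box_intersect((-2, 0.5, 0.5), (1, 0, 0), (-1, 0, 0), (1, 1, 1)))
```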
Fig. 3. Reprojection of spherical volume data for ray-casting on a virtual globe (Li et al., 2013).
Fig. 4. Cartesian bounding box (green wireframe box) for spherical volume data (red wireframe shape). (For interpretation of the references to color in this figure legend, the
reader is referred to the web version of this article.)
Fig. 5. Texture coordinate system of SVT.
After the ray entry and exit positions have been obtained in the
form of two texture buffers, the core ray-casting algorithm can be
executed in the following steps:
(1) Ray construction. For each pixel of the final image, a ray is
constructed from the entry position toward the exit position.
(2) Texture sampling. From the entry to the exit position of a ray,
equidistant texture samples are collected.
(3) Shading and compositing. A color transfer function and a light
transport model are used to calculate the final color for each
ray based on an accumulation of the texture samples. The
following equation describes a classic light transport model:
I = \int_0^{t_{max}} I(t) \, e^{-\int_0^t \mu(s)\,ds} \, dt        (1)

where t_max is the path distance from ray entry to exit, I is the final
irradiance, s is the distance increment, t is the upper integration
limit and μ is the attenuation coefficient.
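In a fragment shader, the integral in Eq. (1) is approximated by accumulating the equidistant texture samples front to back. A minimal numerical sketch, assuming a constant emission I(t) = 1 and attenuation μ = 0.5 purely for illustration:

```python
import math

def composite_ray(samples, attenuations, dt):
    """Numerically approximate Eq. (1): accumulate emitted intensity
    weighted by the transparency exp(-integral of mu) along the ray."""
    intensity, optical_depth = 0.0, 0.0
    for emit, mu in zip(samples, attenuations):
        intensity += emit * math.exp(-optical_depth) * dt
        optical_depth += mu * dt     # running integral of mu(s) ds
    return intensity

# With I(t) = 1 and mu = 0.5 over t in [0, 4], the analytic value of Eq. (1)
# is (1 - exp(-2)) / 0.5 ~ 1.729; the Riemann sum approaches it as dt -> 0.
n, t_max = 4000, 4.0
dt = t_max / n
approx = composite_ray([1.0] * n, [0.5] * n, dt)
print(approx)
```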
The conventional ray-casting method does not take into
account the spherical curvature or the angular units of a spherical
coordinate system and does not specify how to integrate with a
virtual globe. Next, we will present an SVT-compatible ray-casting
framework designed specifically for virtual globes.
5.3. Ray-casting for spherical volume texture
First, we have to find a spherical bounding box that fully
contains the volume space so that we can determine the ray entry
and exit positions. Yang and Wu (2010) transformed the eight
corner points of the geographic bounding volume directly to
construct a Cartesian bounding volume that ignores the curvature
of the Earth. Such a Cartesian bounding volume may result in
excessive void spaces over a large geographic domain, causing
unnecessary samplings and calculations along the rays. We pro-
pose the use of spherical tessellation to create a bounding volume
comprising two spherical shells and four vertical walls on the east,
west, north and south edges (Fig. 7). Because a larger number of
triangle primitives will put more stress on the GPU, this bounding
volume has to be tessellated as simply as possible. We approx-
imate a minimum bounding volume through spherical interpola-
tion and tessellation (Fig. 7) in the following manner:
(1) We determine the approximate parallel and meridian spacing
for tessellation based on the spatial resolution of the data. Two
spherical shells are tessellated with triangle strips based on
the specified parallel and meridian spacing. One spherical shell
is set at the minimum altitude, and the other is set at the
maximum altitude.
Fig. 6. Geographic coordinate system of SVT.
Fig. 7. Tessellation of the spherical bounding volume.
(2) We extrude the four walls from the respective edges of the
geographic extent by an amount equal to the altitudinal range.
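Step (1) can be sketched by generating the parallel/meridian vertex grid for one spherical shell at a fixed altitude; the function name, grid spacing and spherical radius below are illustrative assumptions.

```python
import math

def tessellate_shell(lon_min, lon_max, lat_min, lat_max, alt_km,
                     spacing_deg, r_km=6371.0):
    """Vertices of one spherical shell of the bounding volume, laid out
    on a regular parallel/meridian grid at a fixed altitude (step 1)."""
    def to_xyz(lon_deg, lat_deg):
        lon, lat = math.radians(lon_deg), math.radians(lat_deg)
        rho = r_km + alt_km
        return (rho * math.cos(lat) * math.cos(lon),
                rho * math.cos(lat) * math.sin(lon),
                rho * math.sin(lat))
    n_lon = int(round((lon_max - lon_min) / spacing_deg)) + 1
    n_lat = int(round((lat_max - lat_min) / spacing_deg)) + 1
    return [[to_xyz(lon_min + i * spacing_deg, lat_min + j * spacing_deg)
             for i in range(n_lon)] for j in range(n_lat)]

# A 20 x 20 degree extent at 1-degree spacing yields a 21 x 21 vertex grid:
grid = tessellate_shell(-83.0, -63.0, 23.7, 43.7, 19.835, 1.0)
print(len(grid), len(grid[0]))   # 21 21
```

A renderer would link consecutive vertex rows into triangle strips (step 1) and extrude the boundary rows and columns into the four walls (step 2).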
We then allocate two separate floating-point texture buffers to
store the ray entry and exit positions. We create a GPU vertex
shader and fragment shader to render the spherical bounding
volume. To write the face positions onto the textures, the positions
are passed to the fragment shader as texture coordinates for
rasterization. Back-face culling is enabled to obtain the ray entry
positions (Fig. 8a), and front-face culling is enabled to obtain the
ray exit positions (Fig. 8b). Fig. 8 shows the ray entry and exit
positions in shaded colors.
Because each voxel of an SVT is spatially located by its long-
itude, latitude and altitude, we need to convert the Cartesian
coordinates (X, Y, Z) of a sample point on the cast ray into
geographic coordinates using Eq. (2):
\rho = \sqrt{x^2 + y^2 + z^2}
longi = \arctan(y / x)
lati = \arcsin(z / \rho)
alt = \rho - r        (2)
where r is Earth's radius, longi is longitude, lati is latitude and alt is
altitude. Because the altitude is obtained in real-time calculations,
a vertical scale factor can be applied in transforming the spherical
coordinates into texture coordinates using Eq. (3):
u = (longi - longi_min) / (longi_max - longi_min)
v = (lati - lati_min) / (lati_max - lati_min)
s = (alt - alt_min) / [(alt_max - alt_min) \times vscale]        (3)
where u, v and s are the normalized 3D texture coordinates, vscale
is the vertical exaggeration factor, longimax is the maximum long-
itude, longimin is the minimum longitude, latimax is the maximum
latitude, latimin is the minimum latitude, altmax is the maximum
altitude and altmin is the minimum altitude.
As vscale changes, the vertices of the spherical bounding
volume need to be re-calculated accordingly to provide correct
ray-entry and ray-exit positions. Because both the ray-casting and
the texture sampling are performed in real-time, a change in the
variable vscale would be immediately reflected in the vertical
component (s) of the 3D texture coordinates (u, v, s). More
specifically, in the execution stage of the ray-casting algorithm
as described in Section 5.2, the 3D texture coordinates required for
the texture sampling are calculated by Eq. (3). Assuming that the
longitudinal range, latitudinal range and altitudinal range are
known variables, what we need to do is transform the sampled
ray position (X, Y, Z) from the Cartesian space to the spherical
space using Eq. (2). The resulting spherical position (longi, lati, alt)
can then be transformed into the texture address (u, v, s) using
Eq. (3) for actual texture sampling.
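The per-sample transform described above, Eq. (2) followed by Eq. (3), can be sketched end to end. The numeric ranges below are taken from Table 1; the spherical radius and function name are illustrative assumptions.

```python
import math

R = 6371.0  # spherical Earth radius in km (assumption)

def cartesian_to_texture(x, y, z, lon_rng, lat_rng, alt_rng, vscale=1.0):
    """Eq. (2): Cartesian sample position -> (longi, lati, alt),
    then Eq. (3): -> normalized texture address (u, v, s)."""
    rho = math.sqrt(x * x + y * y + z * z)
    longi = math.degrees(math.atan2(y, x))   # atan2 handles all quadrants
    lati = math.degrees(math.asin(z / rho))
    alt = rho - R
    u = (longi - lon_rng[0]) / (lon_rng[1] - lon_rng[0])
    v = (lati - lat_rng[0]) / (lat_rng[1] - lat_rng[0])
    s = (alt - alt_rng[0]) / ((alt_rng[1] - alt_rng[0]) * vscale)
    return (u, v, s)

# Sample at the exact center of the Isabel volume (ranges from Table 1):
lon_rng, lat_rng, alt_rng = (-83.0, -62.0), (23.7, 41.7), (0.035, 19.835)
lon_c, lat_c = -72.5, 32.7
alt_c = (alt_rng[0] + alt_rng[1]) / 2.0
rho = R + alt_c
x = rho * math.cos(math.radians(lat_c)) * math.cos(math.radians(lon_c))
y = rho * math.cos(math.radians(lat_c)) * math.sin(math.radians(lon_c))
z = rho * math.sin(math.radians(lat_c))
print(cartesian_to_texture(x, y, z, lon_rng, lat_rng, alt_rng))  # ~(0.5, 0.5, 0.5)
```

Note that doubling vscale halves s for the same altitude, matching the vertically stretched bounding geometry.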
5.4. Seamless integration to virtual globes
The conventional volume ray-casting algorithm does not provide
a specific solution for how to merge the data with the virtual globe
background scene. Because multi-source spatial data integration is
one of the fundamental functionalities of virtual globes, fitting
volume techniques into the virtual globe-rendering pipeline is vital
to the applicability of the presented volume ray-casting framework.
Deferred rendering (Geldreich et al., 2004; Hargreaves and
Harris, 2004) gained considerable popularity in 3D engine devel-
opment between 2008 and 2010. In comparison to traditional
forward rendering, which shades each 3D object individually,
a typical deferred rendering pipeline renders all 3D objects from
a scene into three separate texture buffers, i.e., the position buffer
(or depth buffer), the normal buffer and the color buffer, and then
performs shading at a later stage to yield the final visual (Fig. 9).
Most virtual globe engines, such as WW, still employ a forward
rendering pipeline to render terrain, vector shapes and 3D models
for immediate display, without considering possible blending with
volume effects at a later stage. Therefore, we propose to leverage
deferred rendering to seamlessly incorporate volume effects into a
virtual globe in a compositing operation. As illustrated in Fig. 10,
our deferred rendering pipeline consists of three render passes:
(1) In the first render pass, all of the virtual globe data objects, such
as terrain, imagery, vector shapes and 3D geometric models, are
rendered separately to a position buffer and a color buffer. The
position buffer records the per-pixel positions, and the color
buffer records the per-pixel colors. Unlike the regular virtual
globe-rendering pipeline, the rendering results are not presented
for immediate display but are saved for later compositing.
(2) In the second render pass, the ray entry and exit positions of
the spherical volume are rendered to two separate textures.
(3) In the final render pass, the spherical volume is rendered using
the ray-casting method presented herein and fused with the
Fig. 8. Ray entry and exit positions of the volume data.
virtual globe buffers to produce the final image. The position
and color buffers from the first render pass as well as the ray
entry and exit buffers from the second render pass are all
bound to a GLSL fragment shader. For each screen pixel, a ray is
shot through the spherical volume to accumulate texture
samples. The actual ray entry and exit positions are obtained
by comparing the original ray entry and exit positions against
the distance of the background pixel from the camera. The
compositing process is illustrated below in the form of a
pseudocode snippet:
foreach (pixel on screen) {
    vec3 ray_origin  = getRayOrigin(uv);
    vec3 ray_entry   = texture2D(tex_ray_entry, uv);
    vec3 ray_exit    = texture2D(tex_ray_exit, uv);
    vec3 globe_pos   = texture2D(tex_globe_pos, uv);
    vec3 globe_color = texture2D(tex_globe_color, uv);
    float origin2globe = length(ray_origin - globe_pos);
    float origin2entry = length(ray_origin - ray_entry);
    float origin2exit  = length(ray_origin - ray_exit);
    // depth (distance to ray origin) comparison
    // case 1: volume is occluded, terminate ray-casting
    if (origin2globe < origin2entry) { return globe_color; }
    // case 2: volume is partially occluded, change ray-exit position
    if (origin2globe > origin2entry && origin2globe < origin2exit) {
        ray_exit = globe_pos;
    }
    // case 3: volume is not occluded, keep ray-exit position
    // volume ray-casting
    vec3 volume_color = cast_ray(SVT, origin2entry, origin2exit);
    vec3 final_color  = composit(globe_color, volume_color);
    return final_color;
}
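The three depth-comparison cases in the snippet can be exercised with scalar distances alone; the helper below is a testable stand-in for the branching logic (with cast_ray and composit omitted), not the paper's shader code.

```python
def resolve_occlusion(origin2globe, origin2entry, origin2exit):
    """Classify a pixel against the three cases of the compositing
    pseudocode, returning the ray segment (if any) to be marched."""
    if origin2globe < origin2entry:
        return None                          # case 1: volume fully occluded
    if origin2entry < origin2globe < origin2exit:
        return (origin2entry, origin2globe)  # case 2: clip exit to the globe
    return (origin2entry, origin2exit)       # case 3: volume not occluded

print(resolve_occlusion(1.0, 2.0, 3.0))  # case 1: None
print(resolve_occlusion(2.5, 2.0, 3.0))  # case 2: (2.0, 2.5)
print(resolve_occlusion(9.0, 2.0, 3.0))  # case 3: (2.0, 3.0)
```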
6. Analysis and discussion
The demonstrations were run on a machine with a 2.90-GHz
quad-core Intel Core i5-2310 CPU and 4 GB of RAM. The graphics
card is an NVIDIA GeForce GTS 450 with 1.0 GB of RAM. The
volume ray-casting framework presented in this paper was imple-
mented using C++, OpenSceneGraph and osgEarth. The core ray-
casting algorithm was written in a GLSL fragment shader.
6.1. Rendering of spherical volume data that cover a large geographic
area
To demonstrate that the method presented can effectively
visualize a GCS-based atmospheric dataset, we ignored the physi-
cal extent of the hurricane and overstretched the original volume
texture to cover a much larger geographic area (Fig. 11).
In the above figure, the images on the left side (Fig. 11a and c)
show that the volume texture covers a geographic area from 23° N,
70° E to 41.7° N, 140° E. The images on the right side (Fig. 11b and d)
Fig. 9. Illustration of the deferred rendering pipeline (Koonce, 2004).
Fig. 10. Deferred rendering framework for integrating volumetric ray-casting with
virtual globes.
represent an even larger geographic area, from 0° N, 0° E to 90° N,
180° E. Satisfactory rendering results are obtained in either case,
suggesting the applicability of the presented method to the
rendering of atmospheric volume data on a regional to global
scale (in terms of geographic extent rather than data volume).
Arbitrary vertical scaling is an important feature of the method
because most atmospheric datasets are relatively thin in the
vertical dimension. The following figure (Fig. 12) illustrates how
vertical scaling can magnify altitudinal features to help visual
analytics.
The thickness of the atmosphere is no more than 1% of the
Earth's radius. Thus, when rendered on the globe without any
altitudinal scaling, the Hurricane Isabel dataset appears similar to
a flat 2D image clamped on the Earth (Fig. 12a). Applying an
altitudinal scale factor of 20 (Fig. 12b), we can clearly observe the
vertical variations in comparison to no scaling at all (Fig. 12a).
6.2. Performance analyses
Performance testing is necessary to determine the rendering
efficiency of the visualization framework as well as to verify the
ray-casting method presented herein. Experiments should be
conducted to evaluate the frames per second (FPS) for a
computation-intensive visualization system under various con-
straint conditions (Yang et al., 2011).
Given a constant number of samplings for each ray, the
rendering performance for the ray-casting method depends on
the number of screen pixels being covered by the volume. A closer
Fig. 11. Rendering of spherical volume data that cover a large geographic area.
Fig. 12. Highlighting vertical variations through scaling.
viewing distance or a larger vertical scaling factor causes a larger
portion of the screen area to be covered and therefore may lead to
a gradual decrease in FPS. Thirteen different viewing distances
were chosen to look at the volume center from 17,832 km to
2000 km away, with the vertical scaling factor varying between
1 and 25 (Figs. 13 and 14). A maximum of 128 texture samples
Fig. 13. Rendering performance against viewing distance and vertical scaling factor.
Fig. 14. Visualization at different viewing distances for a vertical scaling factor of 25.
were collected along each ray to be marched. An experiment
under the above-mentioned constraint conditions was performed,
and the FPS was plotted against the viewing distance and the
vertical scaling factor, as shown in Fig. 13 below.
The experimental results (Figs. 13 and 14) show that rendering
performance declines quickly with either decreasing viewing
distance or increasing vertical scaling when the viewing distance
is relatively large. As the viewing position draws closer, the
rendering performance is more sensitive to the viewing distance
than to the vertical scaling factor. Although the rendering perfor-
mance was shown to be sufficient to support interactive explora-
tion as suggested by the FPS (Fig. 13) under the given constraint
conditions, it can be highly variable depending on the viewing
distance and the vertical scaling factor.
6.3. Integration with multi-source data on a virtual globe
Integration with a variety of spatial data sources on virtual
globes is a key feature of the presented volume ray-casting
framework compared to the standard ray-casting method. To
demonstrate this feature, we created a virtual globe scene com-
posed of various spatial objects, e.g., geographic graticules, 3D
models, 2D polygons and text labels.
Fig. 15 shows that the volume visualization of Hurricane Isabel
is correctly fused with the geographic background on the virtual
globe. The depth comparison in our ray-casting algorithm (Section
5.4) can help to achieve correct occlusion effects for the volume
and other 3D objects. The alpha compositing produces a translu-
cent effect so that we can still see the 3D objects behind the
hurricane volume.
7. Conclusions
The increasing availability of volumetric data in geosciences
has created a demand for DE to provide integrated volume
visualization capabilities. However, mainstream DE systems such
as GE, WW and osgEarth lack the necessary functionalities to
visualize large-scale volume data. In response to such needs, we
present a volume ray-casting framework that has been demon-
strated to be suitable for integration into the open-source virtual
globe osgEarth system without requiring any modifications to the
rendering engine.
The applicability of the volume ray-casting framework pre-
sented in this paper has been tested using the Hurricane Isabel
dataset, and the following conclusions have been drawn:
(1) The volume ray-casting method can directly visualize atmo-
spheric volume data on a large geographic area.
(2) The volume ray-casting method can provide on-the-fly altitu-
dinal scaling.
(3) The deferred rendering pipeline can blend the volume effects
with a virtual globe.
However, the presented method is still subject to many
deficiencies that should be addressed in future work. Alpha
blending involving multiple translucent 3D objects is difficult to
achieve with deferred shading because the position and color
buffers rendered from a virtual globe only provide the final results
without recording the intermediate values. Furthermore, volume
ray-casting is a costly rendering algorithm, the computation
intensity of which increases with the projected screen area. Thus,
a closer viewing distance or a larger vertical scaling factor may
lead to a sharp drop in frame rates. In the future, other volume
rendering methods should be tested and compared with volume
ray-casting on virtual globes. Moreover, although the Hurricane
Isabel data can easily fit into video memory in the case presented,
a massive amount of volume data would still warrant further
studies of out-of-core rendering techniques (Yang et al., 2011).
Acknowledgments
This research was supported by the Key Knowledge Innovative
Project of the Chinese Academy of Sciences (KZCX2 EW 318), the
National Key Technology R&D Program of China (2014ZX10003002),
the National Natural Science Foundation of China (41371387), the
National Natural Science Foundation of China (41201396), Jiashan
Science and Technology Projects (2011A44) and the DRAGON 3
(Project ID 10668).
References
Bailey, J., Chen, A., 2011. The role of virtual globes in geosciences. Comput. Geosci.
37 (1), 1–2.
Bugayevskiy, L., Snyder, J., 1995. Map Projections: A Reference Manual. Taylor & Francis,
London p. 352.
Fig. 15. Fusion with multi-source data in the virtual globe.
Cignoni, P., et al., 2003. Planet-Sized Batched Dynamic Adaptive Meshes (P-BDAM).
In: Proceedings of the 14th IEEE Visualization 2003 (VIS'03), Washington, DC,
USA, pp. 147–155.
Craglia, M., et al., 2012. Digital Earth 2020: towards the vision for the next decade.
Int. J. Digit. Earth 5 (1), 4–21.
Doleisch, H., et al., 2004. Interactive visual analysis of hurricane isabel with SimVis.
IEEE Vis. Contest, 2004.
Dye, M., et al., 2007. Volumetric Visualization Methods for Atmospheric Model Data
in an Immersive Virtual Environment. In: Proceedings of High Performance
Computing Systems (HPCS '07), Prague, Czech Republic.
Ehlers, M., et al., 2014. Advancing digital earth: beyond the next generation. Int. J.
Digit. Earth 7 (1), 3–16.
Elek, O., et al., 2012. Interactive cloud rendering using temporally coherent photon
mapping. Comput. Gr. 36 (8), 1109–1118.
Goodchild, M., et al., 2012. Next generation digital earth. Proc. Natl. Acad. Sci. 109
(28), 11088–11094.
Gore, A., 1999. The digital earth: understanding our planet in the 21st century.
Photogramm. Eng. Remote Sens. 65 (5), 528.
Greg, P., Christopher, A., 2004. OpenGL visualization of hurricane Isabel. IEEE Vis.
Contest, 2004.
Hargreaves, S., Harris, M., 2004. Deferred rendering. NVIDIA Corporation. 44 pp.
Hibbard, W., et al., 1994. Interactive visualization of earth and space science
computations. Computer 27 (7), 65–72.
Koonce, R., 2004. Deferred Shading in Tabula Rasa. In: GPU Gems 3, pp. 429–457.
Kruger, J., Westermann, R., 2003. Acceleration techniques for GPU-based volume
rendering. In: Proceedings of the 14th IEEE Visualization 2003 (VIS’03),
Washington, DC, USA, pp. 287–292.
Krzysztof, J., Hitzler, P., 2012. The digital earth as knowledge engine. Semant. Web
3 (3), 213–221.
Li, J., et al., 2011. Visualizing dynamic geosciences phenomena using an octree-
based view-dependent LOD strategy within virtual globes. Comput. Geosci.
37 (9), 1295–1302.
Li, J., et al., 2013. Visualizing 3D/4D environmental data using many-core graphics
processing units (GPUs) and multi-core central processing units (CPUs).
Comput. Geosci. 59, 78–89.
Monte, C., 2013. Estimation of volume rendering efficiency with GPU. Proced.
Comput. Sci. 18, 1402–1411.
Patel, D., et al., 2010. Seismic volume visualization for horizon extraction. In: IEEE
Pacific Visualization Symposium (PacificVis), 2010, Taipei, pp. 73–80.
Lacroute, P., Levoy, M., 1994. Fast Volume Rendering Using a Shear-Warp Factoriza-
tion of the Viewing Transformation. In: Proceedings of SIGGRAPH ’94, Orlando,
Florida, July, pp. 451–458.
Post, Buckley, Schuh and Jernigan, Inc. (PBS&J), 2005. Hurricane Isabel assessment,
a Review of Hurricane Evacuation Study Products and Other Aspects of the
National Hurricane Mitigation and Preparedness Program (NHMPP) in the
Context of the Hurricane Isabel Response. NOAA, 301 pp.
Geldreich, R., et al., 2004. Deferred lighting and shading. Presentation at Game
Developers Conference 2004, 21 pp.
Snyder, J., 1987. Map projections: a working manual. Professional paper. United
States Geological Survey, 383 pp.
Westover, L., 1991. Splatting: A Parallel, Feed-Forward Volume Rendering Algo-
rithm (Ph.D. dissertation). University of North Carolina at Chapel Hill, NC, USA
p. 103.
Wright, T., et al., 2009. Visualising volcanic gas plumes with virtual globes. Comput.
Geosci. 35 (9), 1837–1842.
Yang, C., Wu, L., 2010. GPU-based volume rendering for 3D electromagnetic
environment on virtual globe. Int. J. Image, Gr. Signal Process. 2 (1), 53–60.
Yang, C., et al., 2011. Using spatial principles to optimize distributed computing for
enabling the physical science discoveries. Proc. Natl. Acad. Sci. 108 (14),
5498–5503.
Zhang, Q., et al., 2011. Volume visualization: a technical overview with a focus on
medical applications. J. Digit. Imaging 24 (4), 640–664.