



- 1. Surface Networks: New techniques for their automated extraction, generalisation and application
  Sanjay Singh Rana
  Department of Geomatic Engineering, University College London, University of London
  A revised and updated version of the thesis submitted for the degree of Doctor of Philosophy in Geographical Information Science in 2004
  2010
- 2. For fellow surface networks researchers
- 3. Contents
  Abstract
  Acknowledgements
  List of Figures
  List of Tables
  Chapter 1 Introduction
    1.1 Context
    1.2 Unresolved issues and aims of the thesis
      1.2.1 Model
      1.2.2 Generation
      1.2.3 Simplification
    1.3 Structure of the thesis
  Chapter 2 Automated Generation
    2.1 Automated generation of surface networks
    2.2 Novel solutions to unresolved issues
      2.2.1 Stating semantic uncertainties in labelling terrain
      2.2.2 Automated extraction of surface topology
      2.2.3 Implementation
    2.3 Discussion
  Chapter 3 Structural Characterisation
    3.1 Extending the description of surface networks
      3.1.1 Standard graph measures
      3.1.2 Two case studies
    3.2 Simplification of surface networks
      3.2.1 Surface Topology Toolkit
      3.2.2 New weight measures
      3.2.3 Non-sequential contractions
    3.3 Refinement of surface networks
    3.4 Discussion
  Chapter 4 Applications
    4.1 Proposal
    4.2 Fast computation of visibility dominance in mountainous terrains
      4.2.1 Proposal
      4.2.2 Methodology
      4.2.3 Results
      4.2.4 Summary
    4.3 Multi-scale and morphologically consistent terrain generalisation
      4.3.1 Methodology and Results
      4.3.2 Summary
    4.4 Visualisation of the evolution of a terrain
- 4. Contents (continued)
      4.4.1 Proposal
      4.4.2 Methodology
      4.4.3 Results and Summary
    4.5 Scope for further applications
  Chapter 5 Conclusions
    5.1 Summary
      5.1.1 Modelling terrain using surface networks: what's possible and what's not
      5.1.2 Revealing terrain structure using surface networks
      5.1.3 Applications of surface networks
    5.2 Future research
  References
  Appendix
    A.1 SNG file format
    A.2 SNM file format
- 5. Abstract
  The nature of the data structures used for the representation of terrain has a great influence on the possible applications and the reliability of consequent terrain analyses. This research presents a concise review and treatment of the surface network data structure, a topological data structure for terrains. A surface network represents the terrain as a graph whose vertices are the fundamental topographic features (also called critical points), namely the local peaks, pits and passes (saddles), and whose edges are the ridges and channels that link these vertices. Despite their obvious and widely believed potential as a natural and intuitive representation of terrain datasets, surface networks have attracted only limited research, leaving several aspects unsolved and restricting their use as viable digital terrain data structures. The research presented here introduces novel techniques for the automated generation, analysis and application of surface networks.
  The research reports a novel method for generating surface networks by extending the ridges and channels, unlike the conventional critical-points-based approach. The proposed algorithm allows the incorporation of a much wider variety of terrain features in the surface network data structure.
  Several ways of characterising terrain structure based on the graph-theoretic analysis of surface networks are presented. It is shown that terrain structures display certain empirical characteristics, such as the stability of the structure under changes and the relationship between hierarchies of topographic features. Previous proposals for the simplification of surface networks have been evaluated for potential limitations, and solutions, including a user-defined simplification, have been presented. Methods to refine (add more detail to) a surface network are also shown.
  Finally, it is shown how surface networks can be successfully used for spatial analyses, e.g. optimisation of visibility index computation time, augmenting the visualisation of dynamic raster surface animation, and generating multi-scale, morphologically consistent terrain generalisations.
- 6. Acknowledgements
  The research that I have managed to do in the last five years would not have been possible without the help of many people. My heartfelt thanks go to:
  • My PhD supervisors, Jeremy Morley and Mike Batty, for their patience and support throughout the research. I think I can now *safely* write that, for good reasons, neither of them was initially too keen on the topic: it was only partially related to their research interests, and it posed several rather tricky issues.
  • My family, for being there for me in times of thrills and sorrows, at the end of the telephone line and at Christmas dinners.
  • Brilliant research colleagues, namely Gert Wolf, John Pfaltz, Jo Wood, Shigeo Takahashi, Jason Dykes, Lewis Griffin, David Unwin, David Mark, and many others who provided intellectual and material support.
  • All the colleagues at CASA, every one of whom helped to make the long journey seem short and enjoyable.
  • Close friends, namely Anupam, Bittu, TK, and Jim, for their company and faith.
  • The Association of Commonwealth Universities, UCL, and Ordnance Survey for providing funding at various stages of the research. I also thank colleagues at the University of Leicester for their support and encouragement.
  • The data providers: the Isle of Man dataset was obtained from Jo Wood; the Cairngorm data were obtained from the Digimap Archive under the CHEST agreement and are Crown copyright; Ordnance Survey provided the Salisbury terrain data; Gert Wolf provided the Latschur Mountains surface network.
  • My PhD thesis examiners, David Unwin and David Kidner, for their questions and suggestions during the PhD examination.
- 7. List of Figures
  1.1 Representation of a terrain into two different models depending upon the application.
  1.2 (a) A contour representation of a terrain and (b) the corresponding surface network. Numbers in the parentheses of features in (a) are their respective elevations and the numbers on the links of the graph in (b) are weights associated with edges.
  1.3 Inconsistent topology (from Wolf 1990).
  1.4 (a) Two channel junctions (in circles) in a surface network and (b) their decomposition into an infinitesimally closely located pair of pits and passes (modified after Wolf 1990).
  1.5 (a) A ridge bifurcation (in circle) in a surface network and (b) its decomposition into an infinitesimally closely located pair of pass and peak (modified after Wolf 1990).
  1.6 A mountain with gullies on slope faces (Source: Anonymous).
  1.7 Point p(i,j) in a grid and its 8 adjacent neighbours (after Takahashi et al. 1995).
  1.8 Point p in a grid (analytical view) and its 7 adjacent neighbours (hollow circles) (after Takahashi et al. 1995).
  1.9 Decomposition of a degenerate pass (modified from Takahashi et al. 1995). The figure shows the neighbours and their heights. Higher neighbours are placed inside a grey region. (a) The original neighbour list, (b) the reduced neighbour list, (c) the list in the first turn of the loop in the algorithm, and (d) the final set of neighbours which will define the pass.
  1.10 Critical points of the surface and the configuration of their eigenvalues and eigenvectors. R1 and R2 are the real parts of the eigenvalues.
  1.11 Scale dependency of the Wood (1998) method. (a) The top-left corner of a raster and the first cells that could be classified based on different filter window sizes and (b) two peaks with similar local shape but with different extents.
  1.12 (a) (yo–zo)-contraction and (b) (xo–yo)-contraction of a surface network.
  2.1 Quantitative vs. qualitative model uncertainty. The loss of height and shape information during a conversion of a raster surface to a TIN as shown in (a) can be evaluated to some precision; however, the representation of a raster (assumed to be a continuous surface) as a set of feature lines and points as shown in (b) involves unknown and subjective choices during conversion, hence the uncertainty will remain qualitative or at best a probability.
  2.2 Three types of approaches for modelling the scale-space.
  2.3 (a) Topographic map of a part of the Salisbury area in England showing a pass feature in the box (courtesy: Streetmap.co.uk) and (b) the aerial photograph of the area (courtesy: GetMapping.com).
  2.4 A topological network of ridges (white lines) and channels (black lines) bounded by pits, peaks and passes (from Wood 1998). Note, many ridges and channels do not follow true ridge and channel locations.
  2.5 Flowchart for stages II-IV to construct surface networks from raster terrain.
  2.6 (a) Positive plan curvature with divergent flow lines and (b) negative plan curvature with convergent flow lines (modified after Peschier 1996).
  2.7 3x3 feature extraction filter window and the criteria for feature classification.
  2.8 Comparison of feature extraction methods in a terrain around Salisbury, UK. (a) Elevation based flow assessment, (b) descent of plan curvature, and (c) conic section analysis. All the methods used the same input DEM and parameters. Red cells are ridges and blue cells are channels. The contour overlay can be used to observe the robustness of the feature extraction algorithm. The DEM has a cell size of 10 m and 502x501 cells.
  2.9 Fill-in-the-gap filter used to connect broken features. (a) Simple case, where the filter produces the desired output and (b) where it produces problematic outputs.
  2.10 (a) Broken surface network topology. Snapping the unconnected downstream ends of channels and unconnected loose ends of ridges by (b) linear extension, (c) steepest downslope path, and (d) aspect path. Solid lines are original links and dashed lines are extensions. Blue lines are channels and red lines are ridges. Compare the methods for accuracy for links in the red box in the NE part of the area. The small boxes are an artefact of raster to vector conversion.
  2.11 Conversion of ridge and channel segments to an elementary surface network. Dashed edges are artificial.
  2.12 The configuration of leader nodes at the various types of ridge and channel intersections and their respective decompositions. See text for the commentary. Dashed edges are artificial.
  2.13 Representation of a loop of ridge segments around a crater by decomposition into elementary surface networks. Dashed edges are artificial.
  2.14 (a) An arbitrary configuration of leader nodes at the various types of ridge and channel intersections and (b) their respective decompositions. See text for the commentary.
  2.15 (a) Hill-shaded view of a part of the Isle of Man raster terrain (cell size 100 m, 135x121 cells) and the corresponding (b) surface network model (peaks, pits and passes are not displayed to reduce clutter) with contours (20 m interval).
  2.16 (a) Hill-shaded view of a part of the Salisbury raster terrain (cell size 10 m, 329x379 cells) and the corresponding (b) surface network model (peaks, pits and passes are not displayed to reduce clutter) with contours (20 m interval). See text for commentary on the features in the circle.
  2.17 The effect of smoothing on the feature classification of the Salisbury area terrain. (a) The four linear trends (red lines) have been derived visually and the corresponding numbers indicate the order in which they appear during smoothing and (b) feature maps. Blue areas are channels and red areas are ridges.
  2.18 The effect of smoothing on the feature classification of the Isle of Man area terrain. (a) The four linear trends (red lines) have been derived visually and the corresponding numbers indicate the order in which they appear during smoothing and (b) feature maps. Blue areas are channels and red areas are ridges.
  3.1 (a) Surface network of central Isle of Man, (b) distribution of mean depth values, (c) frequency plot of degree, and (d) variations in graph diameter with contractions.
  3.2 (a) Surface network of a part of the Latschur mountains in Austria, (b) distribution of mean depth values, (c) frequency plot of degree and (d) variations in graph diameter with contractions.
  3.3 Graphical user interface of the Surface Topology Toolkit.
  3.4 Comparison of the effectiveness for selection of points in the surface network between (b) the maximum of elevation difference criterion and (c) the maximum of edge length criterion. Note that criterion (b) selects a long ridge due to its low drop in elevation (350).
  3.5 Comparison of the effectiveness for selection of points in a surface network between (b) the sum of elevation difference criterion and (c) the valency criterion, showing how criterion (b) can mislead about the ridge/channel crossings. Numbers at peaks in (a) are sums of elevation differences and their valences (in parentheses).
  3.6 Cascading contraction. In the original form in nature as shown in (a) the two channels at the bottom with weights equal to 8 exist because of the flow from the 8 channels upstream. But when two upstream channels are contracted, the number of contributing channels to the minor channels is reduced to 6 as shown in (b). This could mean that the two minor channels dry up and are then removed from the network as shown in (c).
  3.7 Generating artificial changes in terrain, in this case a large valley, using User Defined Contraction on a part of the Latschur surface network.
  3.8 Generating artificial changes in terrain, in this case erosion of minor features to yield a large ridge, using User Defined Contraction on a part of the Isle of Man surface network.
  3.9 Refining a long ridge with a (y0–z0)-splitting. Note how the choice of configuration ensures topological consistency after the addition of the new edges.
  3.10 A sequence of 2 refinements on the longest ridges of a hypothetical surface network with repeated (y0–z0)-splitting. Blue lines are channels and orange lines are ridges. Red dots are peaks, green dots are passes and black dots are pits.
  3.11 (a) Uncertainty regarding an elevation value in the case of scattered points and (b) in the case of a surface network.
  4.1 (a) Hill-shaded terrain of the SE Cairngorm Mountains, Scotland. Minimum elevation = 395 m and maximum elevation = 1054 m and (b) 910 topographic feature targets with an overlay of contours (20 m interval).
  4.2 Comparison between the (a) Golden Case based visibility dominance and (b) topographic features based visibility dominance. Darker coloured areas have more visual dominance than lighter coloured areas.
  4.3 Uncertainty assessment based on the entire DEM. (a) Absolute vs. estimated visibility dominance of all locations and (b) residuals based on the linear regression between absolute and estimated visibility dominance values of all locations.
  4.4 Uncertainty assessment based on selective sampling. (a) Absolute vs. estimated visibility dominance of the topographic features and (b) correlation coefficient vs. errors at 19 sets of 418 random locations, with the average value shown with the dotted line.
  4.5 Comparison between the AVI and EVI based on the reduced observers strategy.
  4.6 (a) Original 50 m spatial resolution DEM and (b) its resampling to 200 m spatial resolution with a simple averaging filter. Note the smoothness in (b) at the expense of structural losses, for example at the point indicated by the arrow.
  4.7 (a) Landform PROFILE contours of a part of Salisbury (top) and hill-shaded 10 m Salisbury raster terrain (bottom) produced using the TOPOGRID function. Note the pronounced terracing effect on slopes located in the northeast and northwest of the area.
  4.8 Comparison of the preservation of elevation, morphology and surface continuity after different types of generalisation of the 10 m cell size Salisbury raster terrain shown in Figure 4.7b to 100 m cell size. The areas in circles show that the MMTG method preserves both elevation and morphology. Also note that ordinary interpolation using TOPOGRID is not sufficient to ensure feature preservation. The MMTG generalisation method also produces the best continuity in the generalised DEM.
  4.9 Digital elevation models of a sand spit at Scolt Head Island, North Norfolk, UK. Two situations are shown representing the results of surveying the feature in 1997.
  4.10 Increase in the structural information delivery with the addition of contours and surface network overlays.
  4.11 22 intermediate surfaces (microsteps) generated by blending the February 1997 surface (Situation 1) into the September 1997 surface (Situation 2).
  4.12 Use of the surface network representation to visualise the changes in the morphology of the sand spit. The box indicates an area of interest. Note that the surface network variations highlight changes that are not evident from the representation that uses colour to show variation in elevation.
- 12. List of Tables
  1.1 Criteria for classification of critical points in the eight-neighbour method (after Takahashi et al. 1995).
  1.2 Criteria for the classification of non-degenerate critical points based on Delaunay triangulation (after Takahashi et al. 1995).
  1.3 Morphometric features described by second derivatives (after Wood 1996).
- 13. "Never again," cried the man, "never again will we wake up in the morning and think Who am I? What is my purpose in life? Does it really, cosmically speaking, matter if I don't get up and go to work? For today we will finally learn once and for all the plain and simple answer to all these nagging little problems of Life, the Universe and Everything!" ….. "An answer for you?" interrupted Deep Thought majestically. "Yes. I have." ….. "Forty-two," said Deep Thought, with infinite majesty and calm. ….. "Forty-two!" yelled Loonquawl. "Is that all you've got to show for seven and a half million years' work?" "I checked it very thoroughly," said the computer, "and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you've never actually known what the question is." "But it was the Great Question! The Ultimate Question of Life, the Universe and Everything!" howled Loonquawl. "Yes," said Deep Thought with the air of one, who suffers fools gladly, "but what actually is it?" A slow stupefied silence crept over the men as they stared at the computer and then at each other. "Well, you know, it's just Everything ... Everything..." offered Phouchg weakly. "Exactly!" said Deep Thought. "So once you do know what the question actually is, you'll know what the answer means." From “Hitchhiker’s Guide to the Galaxy” by Douglas Adams (1979)
- 14. Chapter 1 Introduction
  1.1 Context
  Digital elevation models (DEMs) are essential components of wide-ranging applications that require a representation of natural terrain. The applications are as diverse as archaeology (e.g. viewshed analyses of ancient settlements; Lake and Woodman 2003) and the modelling of zoological habitats (e.g. forest ecosystems; Mackey et al. 2000). Naturally, an enormous quantity of research has taken place on the various aspects of digital elevation modelling. Some recent significant works, amongst many others, include von Minusio (2002), Hutchinson and Gallant (2000) and the proceedings of the recent ASPRS conference on "Terrain Data and Applications" (URL #1).
  DEMs are an outcome of the object generalisation of continuous terrain (Weibel and Dutton 1999) and consequently inherit uncertainty arising from the discretisation (von Minusio 2002). Different DEMs provide varying quantities of data on the terrain continuity. For example, a raster DEM will provide a more complete description of terrain heights than a triangulated irregular network, or TIN (Kumler 1994), because of the differences in their approach to sampling the continuous terrain. Various terrain representations embody varying levels of abstraction, which is particularly relevant when increasingly massive terrain datasets are stored in computers. von Minusio (2002) provides a detailed discussion of the desirable characteristics of a reliable DEM. The choice of the DEM used to represent the continuous terrain, i.e. whether TIN, raster or other, is critical to the success of the terrain analyses.
  It is important to appreciate how different disciplines describe terrains, for a broader understanding of previous research on this topic. Mathematicians have modelled terrains primarily with an aim to decompose the shape of terrain into basic descriptors or elements, even if this introduced over-simplification (e.g. by using simple geometrical shapes such as triangles) and potential loss of structure. Such descriptions are generic (i.e. universal to all types of surfaces), formal and robust, and thus crucial for applications such as computer-aided design. The aim of these mathematical descriptions is to produce a constrained global model of terrain. The other large group of terrain researchers, from the field of physical geography, use more compound descriptors (e.g. valleys, mounds, scarps, drainage networks) with more emphasis on the preservation of the structural information of the terrain. Although the compound descriptors are more natural, their derivation is subjective to each individual, hence it is often difficult to derive an objective definition of terrain features1. These researchers are more interested in the process which produced the surface, hence the descriptors are also symbolic of the factors in that process.
  1 Wolf (1993, p24) highlights the importance of exact definitions, quoting Werner (1988) and Frank et al. (1986).
- 15. A simple example of this fundamental dichotomy is the description of terrain by the two disciplines. To achieve a simple and tractable model of terrain, a typical algebraic definition is as follows: terrain is a smooth, doubly-continuous function of the form z = f(x,y), where z is the height associated with each point (x,y). A local maximum, or peak, of the terrain is a point with zero slope and convex curvature. Other terrain features are defined similarly using morphometric measures. Most physical geographers will, however, find these definitions very restrictive because a) they do not offer scope to include some common terrain features, such as lakes and overhangs, that are fundamental to certain applications, e.g. runoff modelling, and b) natural terrain features cannot be localised to a point, because a peak with zero local slope doesn't really exist in nature. In physical geography, the description of the terrain surface and terrain features is more indicative than precise. Therefore, provided the shape of the terrain around an area can be described as a certain terrain feature type, it is the responsibility of geographers to locate the position of the feature on the terrain based on their judgement. Figure 1.1 shows the difference between an algebraic (Triangulated Irregular Network, or TIN) and a physical-geography (drainage network) description of a terrain surface.
  It follows, therefore, that a combination of these two ways of terrain modelling should provide a complete and robust approach for describing surfaces. In other words, a terrain data model which includes both the morphological structure (in terms such as hills and valleys) and the geometrical form (e.g. xyz coordinates) of terrain would be an ideal digital representation of terrain. Wolf (1993) stated a more general form of this requirement: an efficient terrain data structure should contain both the geometrical information (e.g. coordinates, line equations) and topological information on the geometrical data (e.g. neighbourhood and adjacency relationships) of the terrain. However, the course of research in this thesis revealed that the construction of a topologically consistent terrain data model is non-trivial, because real terrains seldom obey the constraints required by topological rules.
  It is trivial to produce terrain data models that store only the geometrical information about a surface: we simply collect certain points on the surface, either on a regular lattice/grid or at irregular locations. Indeed, many surface applications require only geometrical information for analyses. However, storing topological information has several significant advantages, as listed below:
  • If we assume a certain homogeneity in surface shape (e.g. smooth and continuous), using a topological data structure will reduce the number of points required to construct a surface. For example, by storing only certain morphologically important points (MIPs) (e.g. corners, inflexions) and their topological relationships, we could reconstruct the surface by interpolating between the MIPs. Thus, the quantity of computer disk space required to store the surface is reduced significantly. Helman and Hesselink (1991) reported 90% compression of volumetric surface datasets using topological data structures.
  • Topological relationships are a more efficient way to gain access to a spatial database; e.g. sophisticated spatial queries such as clustering could easily be implemented by performing trivial adjacency and proximity tests between the points.
- 16. [Figure 1.1: a side-by-side comparison. Mathematician's model: a piecewise model of the terrain surface as triangular patches; representation simply requires the storage of points and their topological relationships. Physical geographer's model: a drainage network model of the terrain surface as a network of ridges and channels; representation requires the non-trivial derivation of a consistent drainage network. Caption: Representation of a terrain into two different models depending on the application.]
  • The presence of topology can provide a unified representation of the global structure of the surface. This is useful for analyses that require a uniform and controlled response from the entire surface, e.g. morphing in computer graphics and erosion modelling in hydrology.
  • A topological data model is useful for visualising the structure of surfaces, particularly multi-dimensional surfaces. For example, Helman and Hesselink (1991) and Bajaj and Schikore (1996) reported that rendering volumetric surface datasets as a skeleton-like topological data structure is faster and more comprehensible than traditional volume rendering.
  • Bajaj and Schikore (1996) propose that topological surface data models provide a simple mechanism for correlating and co-registering surfaces, due to the embedded information on the structure of the terrains.
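The MIP-based reconstruction idea in the first advantage above can be illustrated with a minimal 1-D sketch. The profile, point values and function name below are illustrative assumptions, not from the thesis: store only a profile's morphologically important points and rebuild intermediate heights by interpolation.

```python
def reconstruct(mips, x):
    """Linearly interpolate the height at x from a sorted list of
    morphologically important points (MIPs) given as (x, z) pairs."""
    for (x0, z0), (x1, z1) in zip(mips, mips[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return z0 + t * (z1 - z0)
    raise ValueError("x outside profile extent")

# A profile of many samples can be summarised by just three MIPs
# (two end points and one peak) if the slopes between them are uniform.
mips = [(0.0, 10.0), (4.0, 50.0), (8.0, 20.0)]
print(reconstruct(mips, 2.0))  # halfway up the first slope -> 30.0
print(reconstruct(mips, 6.0))  # halfway down the second slope -> 35.0
```

The same principle, in 2-D and with topological relationships between the MIPs, underlies the compression figures quoted from Helman and Hesselink (1991).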
- 17. While the benefits of combining topological relationships between the morphological features of terrains are clear, it is uncertain which MIPs and topological relationships should constitute a universal surface topological data structure. In general, each terrain should be characterised by the MIPs suitable for a particular application. Many types of MIPs have been proposed by researchers in different disciplines and referred to by different names, for example landform elements (Speight 1976), critical points and lines (Pfaltz 1976), surface-specific features (Fowler and Little 1979), symbolic surface features (Palmer 1984), surface patches (Feuchtwanger and Poiker 1987) and specific geomorphological elements (Tang 1992). The common aim of these classifications has been to provide sufficient resemblance to the surface relevant to a particular application.
  The research presented in this thesis focuses on a DEM that models terrain as a graph between the important morphological features. In particular, the motivation of the research reported here was to refine and extend the surface network (Pfaltz 1976) representation of terrain. A surface network is a graph in which the local peaks, passes and pits are the three sets of vertices, and the local ridges and channels are the edges (Figure 1.2). The ridges connect the passes to the peaks and the channels connect the pits to the passes. Even with such a general description, the surface network promises several unique and desirable properties, as follows:
  • A surface network is made up of the fundamental (Fowler and Little 1979) and minimal set of local topographic features; it is therefore highly compressed (up to 90% compression can be achieved by representing the terrain as a surface network, e.g. see Helman and Hesselink 1991) and yet preserves critical terrain morphology.
  • A surface network is an explicit expression of the terrain's morphological structure, unlike a TIN or raster, where the morphological structure has to be represented and stored separately.
  • The graph-theoretic structure of the surface network makes it amenable to formal graph-based contraction, which has been proposed as useful for the generalisation of the terrain structure and not merely the terrain database (Wolf 1989, Rana 1998).
  Therefore, due to its explicit topographic design, the surface network is a natural and intuitive data model for representing terrains. However, several issues related to the four main aspects of surface networks, namely their model design, automated generation, generalisation and practical use, remain unresolved, which has prevented their use as a practical DEM. The goal of this thesis has been to fill these gaps. The remaining part of this chapter provides a background to the above four aspects, which is then used to highlight the unresolved issues and the aims of the thesis.
  At this stage, a distinction between the terms "data structure" and "data model", used in the context of terrain datasets, is proposed. It is proposed that the term (surface/terrain) data structure should merely denote a format for storing the geometric and topological information (e.g. point heights and adjacency relationships) in a single construction. The (surface/terrain) data model, on the other hand, should be an extended version of the surface data structure, in which additional metadata characterising the surface (e.g. valleys, ridges, i.e. characteristic properties of the surface) is also incorporated to produce a natural representation of the surface. In other words, a surface data model is a value-added product of the surface data structure, and it explicitly represents the characteristics of the surface. Thus all surface data models can be regarded as surface data structures, but the opposite is not necessarily true.
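The surface network described above can be sketched as a small typed graph. This is a hypothetical illustration, not the thesis's implementation: the class, vertex names and toy elevations are invented, and the only constraints encoded are the ones stated in the text (ridges connect passes to peaks, channels connect passes to pits).

```python
class SurfaceNetwork:
    """Minimal sketch of a surface network: typed vertices
    (peaks, pits, passes) and typed edges (ridges, channels)."""

    def __init__(self):
        self.vertices = {}   # name -> (kind, elevation)
        self.edges = []      # (edge_kind, pass_name, endpoint_name)

    def add_vertex(self, name, kind, elevation):
        assert kind in ("peak", "pit", "pass")
        self.vertices[name] = (kind, elevation)

    def add_ridge(self, pass_name, peak_name):
        # A ridge runs from a pass up to a peak.
        assert self.vertices[pass_name][0] == "pass"
        assert self.vertices[peak_name][0] == "peak"
        self.edges.append(("ridge", pass_name, peak_name))

    def add_channel(self, pass_name, pit_name):
        # A channel runs from a pass down to a pit.
        assert self.vertices[pass_name][0] == "pass"
        assert self.vertices[pit_name][0] == "pit"
        self.edges.append(("channel", pass_name, pit_name))

# A toy terrain: one pass linking two peaks by ridges and one pit by a channel.
sn = SurfaceNetwork()
sn.add_vertex("peak_A", "peak", 910.0)
sn.add_vertex("peak_B", "peak", 870.0)
sn.add_vertex("pass_1", "pass", 640.0)
sn.add_vertex("pit_1", "pit", 395.0)
sn.add_ridge("pass_1", "peak_A")
sn.add_ridge("pass_1", "peak_B")
sn.add_channel("pass_1", "pit_1")
print(len(sn.edges))  # 3
```

Edge weights, such as the elevation drops shown in Figure 1.2b, could be attached to the edge tuples in the same way.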
In the literature the term DEM has often been restricted to a raster (also called gridded, lattice or cellular) elevation data structure. This seems a rather limited use of the scope of the term DEM. Therefore, it is proposed that the term DEM be used for any data structure that satisfies the following properties:

• It provides a 2.5D representation of the topographic surface.
• It provides a description that can be used to reconstruct the continuity of the topography.

By implication, the above definition will also include a TIN of a terrain as a DEM, while a collection of scattered elevation values or digitised contours will not be classified as DEMs. Also, the term DEM here only implies the natural topography, thus it excludes vegetation and anthropogenic features which may exist on terrains. Some researchers also use the term digital terrain model (DTM). A digital surface model (DSM), on the other hand, could consist of all desirable features including the terrain.

1.2 Unresolved Issues and Aims of the Thesis

The concept of the surface network has evolved from the culmination of ideas from mathematics, physical geography and computer science. This mixed lineage is because terrain modelling is a part of the broader research on modelling of scalar two-dimensional functions that are regarded as two-dimensional surfaces; specifically,

• Terrain is assumed to be a scalar function such that the scalar property z = f(x, y), and
• Terrain is C² continuous (so there are no overhangs), defined over a domain that is simply connected (so there are no holes), and bounded by a closed contour line.

Terrain modelling is an inter-disciplinary topic attracting interest from researchers in physical geography (terrains); social geography (socio-economic surfaces); computer science (digital images); metrology (metal surfaces); physics (crystallographic energy surfaces) and many others.
Koenderink and van Doorn (1998) provide an excellent review of the various applications. The following brief descriptions list the sequence of events related to the conceptual development of surface networks, and link the lineage of the ideas from the various disciplines. The primary origin of surface network research lies in the realisation of the fundamental topographic features. Fundamental topographic features are characteristic local topographic features that are common to all terrains and contain sufficient information to construct the whole terrain, thus taking away the need to store each point on the surface. The concept of the critical points of a surface (where ∂z/∂x = ∂z/∂y = 0), namely the local maxima (∂²z/∂x² < 0, ∂²z/∂y² < 0), minima (∂²z/∂x² > 0, ∂²z/∂y² > 0) and saddles, also called passes (∂²z/∂x² > 0, ∂²z/∂y² < 0 or vice versa), and of the slope lines (lines normal to contours), also called topographic curves and gradient paths of surfaces, were proposed as early as the mid-19th century by the mathematicians De Saint-Venant (1852) (reported by Koenderink and van Doorn 1998) and Reech (1858) (reported by Mark 1977). In physical geography, Cayley (1859), based on contour patterns, first proposed the subdivision of a topographic surface into a framework of summits (local peaks or maxima), immits (local pits or minima), knots (local saddles or passes), ridge lines (slope lines from saddles that reach up to a summit) and course lines (slope lines from saddles that go down to an immit)
(Figure 1.2). Maxwell (1870)² maintained Cayley’s definition of summits and immits and added that a pass is the low point connecting two summits and a bar is the high point connecting two immits. As can be seen in Figure 1.2, for a terrain a bar and a pass are really the same critical point. Maxwell proposed the following relations between the numbers of critical points on a terrain, which were later proved by Morse (1925) using differential topology:

summits = passes + 1; immits = bars + 1
∴ summits + immits − 2 = passes + bars, i.e., summits + immits − saddles = 2

Maxwell also described the partition of the topographic surface into Hills (areas of terrain where all slope lines end at the same summit) and Dales (areas of terrain where all slope lines end at the same immit) based on the fundamental topographic features. The earliest graph-theoretic representation of the topological relationships between the critical points of a terrain is the Reeb Graph (Reeb 1946, reported by Takahashi et al. 1995). The Reeb graph represents the splitting and merging of the equi-height contours (i.e. the cross-sections) of a surface as a graph. The vertices of the graph are the peaks, pits and passes, because the contours close at the pits and the peaks, and split at the passes. Consequently, the edges of the Reeb Graph turn out to be the ridges and channels. In a significant related development in mathematics, Morse (1925) derived the relationship between the numbers of critical points of sufficiently smooth functions, which is known as the Critical Point Theory or Morse Theory. More specifically, Morse derived that for a two-dimensional function f with the following properties:

• f is sufficiently smooth, i.e. f ∈ C². Thus, it is possible to calculate the curvature at each point on the function, hence cases like overhangs and lakes do not exist,
• For all points b on the boundary, f(b) < f(i) where i is an interior point.
• All critical points of f are non-degenerate, i.e., the Hessian matrix H(p) of second derivatives has a nonzero determinant at each critical point (i.e., the matrix is non-singular). The Hessian matrix for a point p(x, y) is defined as:

H(p) = | ∂²f/∂x²   ∂²f/∂x∂y |
       | ∂²f/∂y∂x  ∂²f/∂y²  |

• The following relationships exist between the critical points of f, where P0, P1 and P2 denote the sets of pits, passes and peaks of f respectively:

|P0| ≥ 1;  |P2| ≥ 1;  |P0| − |P1| + |P2| = 2

The last equation has been referred to as the Mountaineer’s Equation (Griffiths 1981, reported by Takahashi et al. 1995) and the Euler Formula (Takahashi et al. 1995).

² The anxiety with which Maxwell presented his paper is quite amusing. His note to the editor of the journal reads “An exact knowledge of the first elements of physical geography, however, is so important, and loose notions on the subject are so prevalent, that I have no hesitation in sending you what you, I hope, will have no scruple in rejecting if you think it superfluous after what has been done by Professor Cayley.”
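As a small aside, the non-degeneracy condition can be checked directly from the determinant of the Hessian. The helper below is illustrative only (the function name is not from the thesis):

```python
def is_nondegenerate(hessian):
    """Return True if a critical point with the given 2x2 Hessian
    [[fxx, fxy], [fyx, fyy]] is non-degenerate, i.e. det(H) != 0."""
    (fxx, fxy), (fyx, fyy) = hessian
    return fxx * fyy - fxy * fyx != 0

# A simple saddle (z = x^2 - y^2) has Hessian diag(2, -2) and is
# non-degenerate; a monkey saddle (z = x^3 - 3xy^2) has a zero Hessian
# at the origin and is therefore degenerate.
```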
Figure 1.2 (a) A contour representation of a terrain, and (b) the corresponding surface network. Numbers in the parentheses of features in (a) are their respective elevations, and the numbers on the links of the graph in (b) are the weights associated with the edges.
The function f that satisfies the above properties is called a Morse function. The generic nature and wide applicability of Morse Theory led to an expansion of interest in the critical points of surfaces amongst various disciplines. Warntz (1966) revived the interest of geographers and social science researchers in critical points and lines when he applied the “Hills and Dales” concept to socio-economic surfaces, and called the data model the Warntz network (a term apparently first used by Mark 1977). A data structure identical to the Reeb graph is the contour tree (Morse 1968, 1969). A contour tree represents the adjacency relations of contour loops. The tree-like hierarchical structure develops because each contour loop can enclose many other contour loops but can itself be enclosed by only one contour loop. As is evident, the contour tree is the same as the Reeb graph. Interestingly, Kweon and Kanade (1994) proposed a similar idea called the Topographic Change Tree. Similar to the Reeb graph, the vertices of such a contour tree and topographic change tree are the peaks, pits and passes. Pfaltz (1976) proposed the graph representation of the Warntz network and called it the surface network (Mark 1977 refers to the surface network as Pfaltz’s graph). While the topology of Pfaltz’s Graph was based on a Warntz network, Pfaltz added the constraint that the surface has to be a Morse function. Since Pfaltz worked in computer science, his work attracted the attention of researchers in three-dimensional surfaces such as in medical imaging, crystallography (Johnson et al. 1999, Shinagawa et al. 1991) and computer vision (Koenderink and van Doorn 1979). Pfaltz also proposed a homomorphic contraction of a surface network graph to reduce the number of redundant and insignificant vertices.
Along similar lines, Mark (1977) proposed a pruning of the contour tree to remove the nodes (representing contour loops) which do not correspond to critical points, i.e., the vertices of the contour tree, and called the resultant structure the surface tree. This reduces the contour tree to the purely topological state of a Pfaltz’s graph. The Reeb graph, Pfaltz’s graph and surface tree have fundamental similarities and are inter-convertible (Takahashi et al. 1995). Nackman (1984) proposed a new construction for the graphs of critical points, called the Critical Point Configuration Graph (CPCG). He proposed rules for the CPCG to be a surface network under more general conditions than those of the Pfaltz’s graph. Wolf (1984) extended Pfaltz’s graph by introducing more topological constraints in order for it to be a consistent representation of the terrain. He proposed assigning weights to the critical points and lines to indicate their importance in the surface, and thus he proposed the name weighted surface network (WSN) for Pfaltz’s graph. Wolf demonstrated new weights-based criteria and methods for the contraction of surface networks. Later, Wolf suggested that, to visualise a WSN for cartographic purposes and make it useful for spatial analyses, the vertices could be assigned metric coordinates (Wolf 1990). The resultant representation is termed the metric surface network (MSN). Feuchtwanger and Poiker (1987) proposed a topological model for terrains, which was a combination of ideas from the Interlocking Ridge and Channel Network (Werner 1988), Hills and Dales, Contour Tree, Surface Tree, and Pfaltz’s Graph. Sadly, although interesting, the idea did not advance beyond the Entity-Relationship Model of the data structure. Recent works (c. 1999-2004) have mostly focussed on the automated extraction of surface networks from raster and TIN DEMs, which will be discussed in a later section.
1.2.1 Model

The basic construction of the surface network model has not changed since the proposals by Pfaltz (1976) and Wolf (1988, 1990). Thus their description of surface networks has been used throughout the following section. The original descriptions have been extended in places where it was felt necessary for better comprehension.

1.2.1.1 Concept

The surface network represents a terrain, assumed to be a two-dimensional Morse function, as a weighted, directed, tripartite graph W = (P0, P1, P2; E), where P0, P1, P2 are three vertex sets representing the sets of all pits, passes and peaks, respectively, while E is the set of all edges (Figure 1.2). In addition, a consistent weighted surface network (WSN) satisfies all the following rules (after Pfaltz 1976, Wolf 1988):

Rule 1: W is planar (Wolf 1988). This means that an intersection of edges, i.e. an intersection of ridges and channels, is not allowed. There can only be one type of slope line passing through a point except at the critical points.

Rule 2: The subgraphs [P0,P1] and [P1,P2] are connected (Pfaltz 1976). This means that channels connect pits and passes, and ridges connect peaks and passes. This property imparts a global and unified structure to the surface network, which makes it amenable to spatial analyses such as hydrological modelling.

Rule 3: |P0| − |P1| + |P2| = 2 (Pfaltz 1976). It states that the number of pits minus the number of passes plus the number of peaks must always be two, as also required for the terrain to be a Morse function.

Rule 4: For all v ∈ P1, id(v) = od(v) = 2 (Pfaltz 1976); id and od represent in-degree and out-degree respectively. This rule requires that exactly two incoming edges (channels) and exactly two outgoing edges (ridges) should be incident at a pass, thus excluding the existence of degenerate passes.
Rule 5: If val(u, vi) = val(vi, w) = 1 then there must exist vj ≠ vi such that (u,vj), (vj,w) ∈ E (Pfaltz 1976); val represents valency, and u ∈ P0, w ∈ P2, 1 ≤ i, j ≤ |P1|. This condition requires that if there is a path from pit u via pass vi to peak w, which consists only of edges with valency one, then there exists another path from pit u to peak w via a distinct saddle vj.

Rule 6a: If (u,v) is an edge of a circuit in the bipartite graph [P0,P1] then val(v,w) ≠ 2 for all w ∈ P2 (Pfaltz 1976), and

Rule 6b: If (v,w) is an edge of a circuit in the bipartite graph [P1,P2] then val(u,v) ≠ 2 for all u ∈ P0 (Pfaltz 1976).

This property asserts that a graph configuration as shown in Figure 1.3 is not allowed, because it implies a violation of Rule 1. Another way of stating this rule is that val(u, v) + val(v, w) ≤ 3 for all u ∈ P0, v ∈ P1, w ∈ P2 (Pfaltz 1976).
Figure 1.3 Inconsistent topology (from Wolf 1990).

Rule 7: wt(ei) > 0 for all ei ∈ E (Wolf 1988); wt represents weight and 1 ≤ i ≤ |E|. This means that all the edge weights must be greater than zero. For instance, if h(u), h(v) and h(w) represent the elevations of a pit, pass and peak, respectively, then the weight of a channel is h(v) − h(u) and the weight of a ridge is h(w) − h(v).

Rule 8: For all u ∈ P0, vi, vj ∈ P1, w ∈ P2 with (u,vi), (u,vj), (vi,w), (vj,w) ∈ E, it holds that w(u,vi) + w(vi,w) = w(u,vj) + w(vj,w) (Wolf 1988). This means that for all paths from pit u to peak w the weight is the same, no matter which saddle point is passed.

Rule 9a: If val(u,v) = 2 with ei1 = (u,v) and ei2 = (u,v) then w(ei1) = w(ei2) (Wolf 1988).

Rule 9b: If val(v,w) = 2 with ei1 = (v,w) and ei2 = (v,w) then w(ei1) = w(ei2) (Wolf 1988).

This means that all channels from a pit to a pass have the same difference in altitude; the same holds for ridges, too. The proofs of these rules are available in Pfaltz (1976) and Wolf (1988) and will not be dealt with here.

1.2.1.2 Limitations of the WSN Model and Topological Rules

Although the weighted surface network model provides a natural and sophisticated representation of the terrain, it has been considered merely an interesting proposal by geomorphologists. Pfaltz (1976) himself recognised several limitations and commented that “it is unknown whether these properties are sufficient to guarantee the realizability of G” (the surface network). It is therefore no surprise that the WSN is not mentioned as a DEM in most GI science textbooks. It is proposed here that the surface network model suffers from the following three main drawbacks:

(i) Non-representation of all terrains and terrain features

The fundamental shortcoming of the surface network model is the assumption that natural terrain is C² continuous everywhere, so that features such as overhangs (e.g. glaciated terrains, dunes, and plateaus), holes (e.g.
karstic terrains) and breaks in slope (e.g. alluvial fans, scarps) are absent. This requirement is a severe limitation as these features are abundant in nature. Commonly available DEMs are often full of sensor and interpolation noise (von Minusio 2002), and do not represent the relief below water level (isolated islands or large flat areas inside the terrain). Therefore, it is not possible to realise surface networks for all types of terrains, especially those which do not have the entire set of critical points and lines required for surface networks.
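As an aside, some of the consistency rules of section 1.2.1.1 lend themselves to a direct programmatic check. The sketch below (hypothetical function name; the edge direction convention, channels pit→pass and ridges pass→peak, is taken from Rule 4) tests Rules 3 and 4 on a candidate network:

```python
from collections import defaultdict

def check_wsn_rules(pits, passes, peaks, edges):
    """Partially validate a candidate weighted surface network.

    `edges` are directed pairs: channels as (pit, pass) and ridges as
    (pass, peak), following Rule 4's in/out-degree convention.
    Returns a dict mapping rule name to pass/fail.
    """
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
    return {
        # Rule 3: pits - passes + peaks = 2 (the Mountaineer's Equation).
        'rule3': len(pits) - len(passes) + len(peaks) == 2,
        # Rule 4: each pass has exactly two incoming channels and
        # exactly two outgoing ridges.
        'rule4': all(indeg[v] == 2 and outdeg[v] == 2 for v in passes),
    }

# The network of Figure 1.2: pit x, passes y1 and y2, peaks z1, z2, z3.
# The edge list is assumed for illustration; repeated pairs model
# double edges (valency 2).
wsn = check_wsn_rules(
    pits=['x'], passes=['y1', 'y2'], peaks=['z1', 'z2', 'z3'],
    edges=[('x', 'y1'), ('x', 'y1'), ('y1', 'z1'), ('y1', 'z2'),
           ('x', 'y2'), ('x', 'y2'), ('y2', 'z2'), ('y2', 'z3')])
```

Note that a full validator would also need planarity (Rule 1), connectivity (Rule 2) and the valency and weight conditions (Rules 5-9), which require geometric information beyond the bare edge list.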
It is clearly an oversimplification to assume that natural terrains are Morse functions. The rules presented in section 1.2.1.1, e.g. Rules 3 and 4, simply ignore the chaotic behaviour of weathering processes, which leave the terrain in a state of constant inequilibrium³. However, it would also be unrealistic to suppose that it is therefore impossible to derive surface networks; terrain patches free of such exceptional features could be regarded as well-behaved terrains or functions. Different kinds of limitations exist in representing well-behaved terrains. A common concern amongst geomorphologists regarding the WSN is that it does not represent many important hydrological features, e.g. junctions and bifurcations, because the ridges and channels can only meet at the critical points. As a solution, Wolf (1990) suggested that the channel junctions and ridge bifurcations could be represented as infinitesimally closely located pit-pass and pass-peak pairs (Figures 1.4, 1.5) and termed the new WSN model the MSN. While the proposal from Wolf (1990) correctly adjusts the graph topology at the location of channel junctions and ridge bifurcations, the connections of the edges incident at the artificial peaks/pits/passes are arbitrary. For example, it assumes that free passes and peaks will be available to connect the new artificial peaks/pits/passes to the graph (Figure 1.4b). Similarly, the gullies (small channels) on hill faces connecting to the main channel, features common to any mountainous terrain, are not included. These gullies, called the inner leaves of the channel network in the interlocking ridge and channel network model (Figure 1.6) by Werner (1988), are a prominent terrain feature and relevant in hydrological modelling for catchment analyses. Again, the problem here is that these gullies start from a point on the hill face (called source nodes; Werner 1988), which is not a critical point.
(ii) Scaling

Terrain features are organised in a hierarchy, expressed as a variation in their spatial extents (Fisher et al. 2004). For example, a gully on a slope face has a small spatial extent compared to the channel it drains into (Figure 1.6). The position of a feature in the hierarchical arrangement can be regarded as the scale of the feature. Fisher et al. (2004) demonstrated that a location on the terrain could be a part of different feature types at different scales. Therefore, a terrain is inherently composed of multi-scaled features. This implies that a terrain would have multiple surface networks representing the feature scales of the terrain. The existing surface network model (or, for that matter, Morse Theory) does not address how such individual surface networks could be unified into a single surface network model of a terrain.

(iii) Uncertainty

The surface network is an approximation of terrains based entirely on a minimal set of line and point topographic features. As mentioned earlier, the surface network will inevitably fail to capture all the variations present in a terrain, which could lead to considerable height- and shape-related uncertainty in the DEM. In general, the uncertainty will depend on the deviation of the terrain from an ideal Morse function and will vary spatially across the terrain. At present, there are no proposals for determining the uncertainty associated with a surface network.

³ Pfaltz (1976) suggests that points in inequilibrium, e.g. degenerate points, could be decomposed into non-degenerate points but does not provide any proposals.
Figure 1.4 (a) Two channel junctions (in circles) in a surface network and (b) their decomposition into infinitesimally closely located pairs of pits and passes (modified after Wolf 1990).
Figure 1.5 (a) A ridge bifurcation (in circle) in a surface network and (b) its decomposition into an infinitesimally closely located pair of pass and peak (modified after Wolf 1990).

Figure 1.6 A mountain with gullies on slope faces (Source: Anonymous).
1.2.2 Generation

The process of accurate automated extraction of a surface network from a digital elevation dataset (e.g. raster, TIN, contours etc.) lies between the surface network model and its use in practice. Methods of surface network extraction have ranged from simple ones, such as manual digitisation (Wolf 1988) and triangulation (Takahashi et al. 1995), to complex surface fitting (Pfaltz 1976, Wood 1998). The different methods were chosen depending on the researchers' belief in the best way of extracting the critical points and lines. There have been many suggestions for detecting the critical points and lines of a surface. This thesis focuses on the afore-mentioned four works in some detail as they represent the culmination of the most widely implemented ideas, used specifically for surface networks.

1.2.2.1 Methodologies

In short, the generation of a surface network involves two steps: (i) extraction of the critical points and (ii) connecting them with the critical lines. The three main categories of methods, including a manual method, for surface network extraction are as follows:

(i) Manual Extraction

Wolf (1988) reported a successful manual generation of a topologically consistent surface network. He picked the critical points from a contour map using a digitiser and established the topological relationships, i.e., the ridges and channels, by visual inspection.

(ii) Triangulation

Takahashi et al. (1995) proposed a modified version of the eight-neighbour method for the detection of the critical points (Peucker and Douglas 1975) on grid surfaces. The eight-neighbour method compares the height of a point, p(i,j), with its eight neighbours in a 3x3 square surrounding p (Figure 1.7) and classifies the point as a critical point based on the criteria in Table 1.1. Takahashi et al.
(1995) showed that the eight-neighbour detection depends on the value of the threshold, and that this ambiguity could cause the loss of the Mountaineer’s Equation constraint, i.e., pits − passes + peaks ≠ 2. They suggested that, to satisfy the Mountaineer’s Equation, the contour changes should be determined according to the neighbour heights and not according to the threshold. They suggested the use of the Delaunay triangulation (Guibas and Stolfi 1985) to triangulate the 3x3 square, centred at p, and determine only the adjacent points (amongst the 8 surrounding neighbours) of p (Figure 1.8). The point is then classified according to the criteria given in Table 1.2. However, in the case of degenerate passes (Figure 1.9a) there will be more than 4 sign changes, as three or more equi-height contours are merged. Takahashi et al. (1995) derived that any degenerate pass can be decomposed into m non-degenerate ones, where m = (Nc − 2)/2 (Figure 1.9d). Equivalently, the number of sign changes, Nc, at a degenerate pass will be equal to 2 + 2m (m = 1, 2, ...). The algorithm to decompose a degenerate pass by Takahashi et al. (1995) is unique and noteworthy. The steps are as follows:

(a) Generate a counter-clockwise (CCW) list of the adjacent neighbours of this pass, which here is {p1, p2, p3, p4, p5, p6, p7} (Figure 1.9a).

(b) Divide this list into an upper sequence, which has the higher neighbours, i.e., {p1}, {p3, p4} and {p6}, and a lower sequence, which has the lower neighbours, i.e., {p2}, {p5} and
Figure 1.7 Point p(i,j) in a grid (data view) and its 8 adjacent neighbours (after Takahashi et al. 1995).

Peak: |∆+| > Tpeak, |∆−| = 0, Nc = 0
Pit:  |∆−| > Tpit,  |∆+| = 0, Nc = 0
Pass: |∆+| + |∆−| > Tpass,   Nc = 4

|∆+|: the sum of all positive height differences between the point and its 8 neighbours.
|∆−|: the sum of all negative height differences between the point and its 8 neighbours.
Nc: the number of sign changes associated with the point.
Tpeak, Tpit, Tpass: threshold heights for a point to be a peak, pit or pass, respectively.

Table 1.1 Criteria for classification of critical points in the eight-neighbour method (after Takahashi et al. 1995).
Figure 1.8 Point p in a grid (analytical view) and its 7 adjacent neighbours (hollow circles) (after Takahashi et al. 1995).

Peak: |∆+| > 0, |∆−| = 0, Nc = 0
Pit:  |∆−| > 0, |∆+| = 0, Nc = 0
Pass: |∆+| + |∆−| > 0,    Nc = 4

|∆+|: the sum of all positive height differences between the point and its 8 neighbours.
|∆−|: the sum of all negative height differences between the point and its 8 neighbours.
Nc: the number of sign changes associated with the point.

Table 1.2 Criteria for the classification of non-degenerate critical points based on Delaunay triangulation (after Takahashi et al. 1995).
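The threshold-free criteria of Table 1.2, together with the Nc = 2 + 2m condition for degenerate passes, can be sketched as follows (the function name is illustrative, and the neighbour heights in the example are chosen to reproduce the six-sign-change situation of Figure 1.9a):

```python
def classify_point(centre, ring):
    """Classify a point from the circular (CCW) list of its adjacent
    neighbours' heights, after Takahashi et al. (1995), Table 1.2."""
    diffs = [centre - h for h in ring]       # positive: neighbour is lower
    if all(d > 0 for d in diffs):
        return 'peak'
    if all(d < 0 for d in diffs):
        return 'pit'
    # Count sign changes around the closed ring of height differences.
    nc = sum(1 for a, b in zip(diffs, diffs[1:] + diffs[:1]) if a * b < 0)
    if nc == 4:
        return 'pass'
    if nc >= 6:                              # Nc = 2 + 2m with m >= 2
        return 'degenerate pass'
    return 'regular'

# A pass with six sign changes around its ring decomposes into
# m = (6 - 2) / 2 = 2 non-degenerate passes.
kind = classify_point(100, [106, 86, 110, 103, 89, 104, 93])
```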
Figure 1.9 Decomposition of a degenerate pass (modified from Takahashi et al. 1995). The figure shows the neighbours and their heights; higher neighbours are placed inside a grey region. (a) The original neighbour list, (b) the reduced neighbour list, (c) the list in the first turn of the loop in the algorithm, and (d) the final set of neighbours which will define the pass.
{p7}. Reduce the neighbours list by selecting the highest neighbour from each upper sequence and the lowest neighbour from each lower sequence. For example, the original neighbours list is reduced to {p2, p3, p5, p6, p7, p1} (Figure 1.9b) by removing p4, because p3 is higher in the sequence {p3, p4}. Note that if a sequence has more than one neighbour then the reduced list begins with a lower neighbour, to ensure that the four alternating upper and lower neighbours at the pass are selected correctly. It can also be seen from the reduced list that there are 6 sign changes and therefore the number of non-degenerate passes m is 2.

(c) Put all the elements of the reduced list except the first two, i.e., {p5, p6, p7, p1}, in a trailing list to further reduce the neighbours list.

(d) Select the last four elements, i.e., {p5, p6, p7, p1}, of the trailing list as representative neighbours. Remove the last two elements of the representative neighbours, which are {p7, p1} in this case, from the trailing list.

(e) Repeat steps (c)–(d) until the trailing list is reduced to a lower and an upper neighbour of the pass, which here are {p7, p1}. The final neighbours list of the decomposed pass has the first two elements of the trailing list and the two elements remaining after step (e); thus here the final neighbours of p are {p2, p3, p5, p6}.

Other degenerate points, such as flat regions, could either be iteratively smoothed or perturbed to introduce a slight inclination in the terrain. The methodology to connect the points is intuitive and simple. It is based on the assumption that a ridge line is the line of steepest ascent from a pass while a channel is the line of steepest descent. Therefore, the ridge (channel) line is traced by moving to the highest (lowest) neighbour and repeating the move until a peak (pit) or the boundary is reached.
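The steepest ascent/descent tracing of ridges and channels can be sketched for a raster DEM as follows (the function name is illustrative, not from the thesis):

```python
def trace_line(dem, start, uphill=True):
    """Trace a ridge (uphill=True) or channel (uphill=False) from `start`
    by repeatedly moving to the steepest of the 8 neighbours, stopping at
    a peak/pit. `dem` is a 2D list of heights; `start` is (row, col)."""
    rows, cols = len(dem), len(dem[0])
    path = [start]
    r, c = start
    while True:
        best, best_h = None, dem[r][c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols:
                    h = dem[nr][nc]
                    if (uphill and h > best_h) or (not uphill and h < best_h):
                        best, best_h = (nr, nc), h
        if best is None:          # local peak (or pit) reached
            return path
        r, c = best
        path.append(best)

# A tilted plane: tracing uphill from the lowest corner climbs to the
# opposite corner peak.
ridge = trace_line([[1, 2, 3], [2, 3, 4], [3, 4, 5]], (0, 0))
```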
(iii) Polynomial surface fitting

Recall from section 1.2 that, according to the Morse Theory, a point is a critical point of the surface if the local slope at the point is zero. However, not all points that have zero slope are critical points. To classify the locally flat areas into a peak, a pit or a pass, we have to know the local curvature using the second derivative of the height function at the candidate point. The local curvature can also be used to detect whether the candidate point lies on a ridge or channel. The second derivative can be used to classify the critical points and lines in two ways. First, the easier method is to compare the curvature along the orthogonal components (Table 1.3) (Wood 1996). The components x and y are not necessarily parallel to the axes of the DEM, but are in the directions of maximum and minimum profile convexity. Secondly, the eigenvalues and eigenvectors of the Hessian matrix (see section 1.2) can indicate the gradient flow at the critical point (Figure 1.10). A critical point is a peak if the two real parts (R1, R2) of the eigenvalues of the Hessian matrix are positive, indicating a gradient flow away from the critical point. A critical point is a pit if the two real parts of the eigenvalues of the Hessian matrix are negative, indicating a gradient flow towards the critical point. In the case of a pass, the two real parts of the eigenvalues are of different signs. In addition, at a pass the eigenvector along the positive eigenvalue indicates the ridge line while the eigenvector along the negative eigenvalue marks the channel direction. To calculate the derivatives, the local surface around a critical point can be interpolated as a polynomial of desired smoothness. For example, it could be modelled as a biquadratic function (Evans 1980, Wood 1996) or a bicubic function (Bajaj and Schikore 1996).
Evidently, more complex polynomials will provide a significantly generalised surface approximation and will take longer to solve. A complex polynomial will also
require bigger kernels or filters, and these lead to wider unclassified areas along the borders. For instance, the surface around a DEM grid cell can be represented as the following continuous quadratic function, made up of the sum of six terms (Wood 1998):

z = ax² + by² + cxy + dx + ey + f

Various methods have been used to solve the surface polynomials for the coefficients, such as simple combinations of neighbouring cells (Evans 1980, Zevenbergen and Thorne 1987) and matrix algebra (Wood 1996, Kidner et al. 1999). The properties of the continuous surface fitted to the discrete DEM values can then be derived analytically from the continuous function. For example, Evans (1980) defines steepest slope and aspect as follows:

slope = arctan((d² + e²)^1/2)
aspect = arctan(e/d)

Second-order derivatives such as longitudinal and cross-sectional curvature can also be derived from the quadratic function (Wood 1998) (Table 1.3). A potential uncertainty with these surface measures is that they represent the value of the measure at a point at the centre of the quadratic function (Wood 1998). This could lead to an incorrect feature classification if the centre of the critical point or line is offset considerably from the centre of the area of interest. Wood suggested that testing the quadratic patch for the type of conic section, i.e. whether elliptic, parabolic, hyperbolic or planar, could unambiguously determine the feature type and surface flow direction. Incidentally, the first three conic section types represent the critical points and lines, namely pits and peaks (elliptic), channels and ridges (parabolic) and passes (hyperbolic). Wood (1998) used a raster DEM and classified cells by passing a square filter window, also called a kernel, across the DEM. The filter window is at least 3 cells by 3 cells wide and could increase to the number of cells along the shorter side of the DEM.
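A sketch of the matrix-algebra approach: fitting the six-term quadratic to an odd-sized DEM window by least squares and deriving Evans' slope and aspect from the first-order coefficients (function names are illustrative; numpy is assumed, and atan2 is used for aspect to preserve the quadrant of arctan(e/d)):

```python
import math
import numpy as np

def fit_quadratic(window, cell=1.0):
    """Least-squares fit of z = ax^2 + by^2 + cxy + dx + ey + f over an
    odd-sized square window; returns (a, b, c, d, e, f). The origin is
    the central cell, with x increasing east and y increasing north."""
    n = window.shape[0]
    half = n // 2
    ii, jj = np.mgrid[0:n, 0:n]
    x = ((jj - half) * cell).ravel()
    y = ((half - ii) * cell).ravel()
    z = np.asarray(window, dtype=float).ravel()
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def slope_aspect(d, e):
    """Evans (1980): slope = arctan((d^2 + e^2)^(1/2)), aspect from
    arctan(e/d) (here via atan2). Both angles are in radians."""
    return math.atan(math.hypot(d, e)), math.atan2(e, d)
```

For a 3x3 window the system is small and well conditioned; on a tilted plane rising one unit per cell to the east, the fit recovers d = 1, e = 0 and a 45-degree slope.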
The possibility of increasing the filter window means that features of varying scales can be extracted. The procedure for connecting the critical points is more developed than the previous one because information about the ridge and channel axes is also available (Wood 1998, Wood and Rana 2000). The steps are as follows: (a) identify the passes, (b) move upwards in the direction of any ridge axis that falls within the area of interest until a new grid cell is reached, (c) recursively repeat (b) until no higher cell is found, and (d) repeat steps (a)–(c) but moving downwards along the channel axes.

1.2.2.2 Limitations

(i) Scale dependency

Scale dependency refers to the subjectivity in measurements arising from not accounting for the multi-scaled nature of most spatial datasets, e.g. terrain features in DEMs, population density patterns in demographic maps and many others. Many feature extraction methods are scale-dependent because they only explore a fixed space around a point to classify the point into a feature type. Therefore, only terrain features that fit within the search space are extracted. The surface network generation methods of Takahashi et al. (1995) and Wood (1998) suffer from scale dependency in different ways.
Feature   Derivative expression              Description
Peak      ∂²z/∂x² > 0, ∂²z/∂y² > 0           Point that lies on a local convexity in all directions (all neighbours lower).
Ridge     ∂²z/∂x² > 0, ∂²z/∂y² = 0           Point that lies on a local convexity that is orthogonal to a line with no convexity/concavity.
Pass      ∂²z/∂x² > 0, ∂²z/∂y² < 0           Point that lies on a local convexity that is orthogonal to a local concavity.
Plane     ∂²z/∂x² = 0, ∂²z/∂y² = 0           Points that do not lie on any surface concavity or convexity.
Channel   ∂²z/∂x² < 0, ∂²z/∂y² = 0           Point that lies in a local concavity that is orthogonal to a line with no convexity/concavity.
Pit       ∂²z/∂x² < 0, ∂²z/∂y² < 0           Point that lies in a local concavity in all directions (all neighbours higher).

Table 1.3 Morphometric features described by second derivatives (after Wood 1996).

Figure 1.10 Critical points of the surface and the configuration of their eigenvalues and eigenvectors. R1 and R2 are the real parts of the eigenvalues.
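The sign conditions of Table 1.3 translate directly into a small lookup. The sketch below assumes the second derivatives have already been obtained from the fitted quadratic (e.g. ∂²z/∂x² = 2a and ∂²z/∂y² = 2b for the surface polynomial above); the sign convention follows the table as given, where convexity is positive.

```python
def classify_cell(zxx, zyy, tol=1e-9):
    """Classify a cell from the signs of d2z/dx2 and d2z/dy2,
    following Table 1.3 (after Wood 1996)."""
    def sign(v):
        # treat values within `tol` of zero as flat
        return 0 if abs(v) <= tol else (1 if v > 0 else -1)
    table = {(1, 1): "peak", (1, 0): "ridge", (1, -1): "pass",
             (0, 0): "plane", (-1, 0): "channel", (-1, -1): "pit"}
    return table.get((sign(zxx), sign(zyy)), "unclassified")
```

Sign pairs not listed in the table (e.g. a concavity orthogonal to a convexity already covered by "pass") fall through to "unclassified" in this sketch.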
(i.i) Scale dependency of the Takahashi et al. (1995) method

Although Takahashi et al. (1995) realised that “it is possible that small undulations hide large undulations in the case of steep mountain regions”, i.e. that terrain features exist at multiple scales, they used wavelet filtering to eliminate such “small undulations”. Their surface network extraction method therefore ignored the scale dependency of features. In addition, since the triangulation-based detection uses only the eight surrounding neighbours for the classification of the critical points, it has a fixed scale of observation. In a later work, Takahashi (1996) suggested referring to scale-space theory (Witkin 1983, Lindeberg 1994) before the extraction of the surface network. However, it is uncertain how the current triangulation method can be extended to detect larger features.

(i.ii) Scale dependency of the Wood (1998) method

The Wood (1998) method allows a multi-scale extraction of the surface network, but since only the cell at the centre of the filter window is classified, the number of cells that can be classified reduces as the filter window grows in size (Figure 1.11a). In addition, the extraction methodology does not distinguish a feature that can be identified with both small and large filter windows (Figure 1.11b).

Figure 1.11 Scale dependency of the Wood (1998) method. (a) The top-left corner of a raster and the first cells that could be classified based on different filter window sizes (3 by 3 up to 9 by 9) and (b) two peaks with similar local shape but with different extents.

(ii) Delineation of topological links

(ii.i) Broken surface networks

In most methods for the generation of surface networks, including the Takahashi et al. (1995) and Wood (1998) methods, the surface network is built incrementally by tracing the ridges and channels from the passes.
It is assumed that tracing the steepest (shallowest) gradient (Takahashi et al. 1995) or the ridge (channel) axes (Wood 1998) starting from a
pass will faithfully lead to either a peak (pit) or to the edge of the DEM (an external pit or peak). However, as discussed in this section, DEMs of natural topography are seldom smooth enough for the successful delineation of the ridges (channels). As a result, ridges and channels do not necessarily terminate at peaks and pits respectively.

(ii.ii) Junctions and bifurcations are not extracted

Neither the Takahashi et al. (1995) nor the Wood (1998) method locates junctions and bifurcations in the way suggested by Wolf (1990).

1.2.3 Simplification

Pfaltz (1976) noted that despite the significant abstraction achieved by surface networks, they may still carry too much information by storing minor peaks, passes, and pits. He proposed a simplification of the surface network graph with the aim of extracting “those points of equilibria which correspond to the macrostructure of the surface, and suppress points which are part of a local microstructure” (Pfaltz 1976). Pfaltz proposed an iterative simplification of the surface network by homomorphic contractions, such that the resultant graph remains a valid surface network. Wolf (1988) categorised the two possible types of homomorphic contraction which always result in a topologically consistent surface network4. They are as follows:

(i) (y0–z0)-contraction (Wolf 1988)

Let W be a surface network, y0 a pass with peaks R(y0) = {z0, z}, and let the elevation difference along an adjacent ridge satisfy h(y0, z0) ≤ h(yi, z0), i = 1, 2, …, n−1, where n is the degree of the peak z0. Let L(z0) = {y0, y1, y2, …, yn−1} be the set of passes adjacent to z0. Then the (y0, z0)-contracted graph W′ is the graph with the following properties:

Vertex set: V(W′) = V′ = V − {y0, z0},
Edge set: E(W′) = E′ = E + {(y1, z), (y2, z), …, (yn−1, z)}, and
Edge elevation drops: h(yi, z) = h(yi, z0) − h(y0, z0) + h(y0, z), i = 1, 2, …, n−1.
h(e′) = h(e) for all other edges e′ ∈ E(W′).

This transformation, which contracts the subgraph [y0, z0] and converts the original surface network into a condensed one, is called a (y0, z0)-contraction (Figure 1.12a). The contraction removes the peak z0 and its highest adjacent pass y0, together with all the critical lines incident with at least one of these critical points. However, this elimination causes the loss of two properties of surface networks: (a) the condensed subgraph [P′1, P′2] is no longer connected (violation of Rule 1) and (b) od(yi) = 1, i = 1, 2, …, n−1 (violation of Rule 3). The topological consistency is restored by connecting the free passes yi to z, i.e. the edge set of W′ contains the old edge set E(W) and the new links (yi, z). The most important part of the contraction is the choice of y0, which ensures that the elevation differences along the new links are always greater than zero. This idea originated with Mark (1977), who proposed methods for the generalisation of surface trees. Positive elevation differences are essential for the realisation of a topographic surface: for instance, a situation where a higher pass connects to a lower peak is unnatural.

(ii) (x0–y0)-contraction

4 See Pfaltz (1976) and Wolf (1988) for the proof.
A (x0–y0)-contraction can be similarly defined for the contraction of the subgraph [x0, y0] (Figure 1.12b), except that the pass y0 that is removed is the lowest pass connected to the pit x0. A surface network can be condensed by repeated (y0–z0)-contraction and (x0–y0)-contraction until a desired level of simplicity, or an elementary surface network (ESN), is achieved. An elementary surface network is the most basic topologically consistent form of surface network, with a single pass and exactly two ridges and two channels incident on the pass. For example, the surface networks produced after the generalisation shown in Figure 1.12 are elementary. The surrounding peak and pit are immutable, as a contraction of these would lead to a collapse of the surface network graph. A typical surface network contraction sequence is as follows:

Step 1: For all internal x ∈ P0, z ∈ P2:
        calculate wj(x), wk(z), j = 1..id(x), k = 1..od(z);
        add wj(x), wk(z) to the contraction sequence list R.
Step 2: Sort R in ascending order.
Step 3: If R[0] ∈ P0, perform an (x0–y0)-contraction on R[0];
        else perform a (y0–z0)-contraction on R[0].
Step 4: If W is not elementary or not generalised enough, go to Step 1; else exit.

Here w denotes the weights associated with pits and peaks.

1.2.3.1 Selection criteria for simplification

Mark (1977) and Wolf (1988) proposed that any type of weight used to select critical points for contraction should be based on the elevation, or in general on the value of the mapped property of the critical point, because this ensures a topologically consistent surface network after generalisation. They suggested the following importance measures:

(i) Height of the peak or pit.

w(xi) = |h(xi)|
w(zk) = |h(zk)|

where xi is a pit, zk is a peak, h denotes height and w denotes weight. The height of the critical point is perhaps the simplest and most obvious weight that could be assigned (Mark 1977).
Figure 1.12 (a) (y0–z0)-contraction and (b) (x0–y0)-contraction of a surface network.

(ii) The maximum of the elevation differences between a peak or pit and all its adjacent passes.

w(xi) = max {h(yj) − h(xi)}
w(zk) = max {h(zk) − h(yj)}
(xi, yj) ∈ E, (zk, yj) ∈ E

This measure ranks peaks and pits on the basis of the ridge or channel linked to them with the maximum drop in elevation.

(iii) The minimum of the elevation differences between a peak or pit and all its adjacent passes.

w(xi) = min {h(yj) − h(xi)}
w(zk) = min {h(zk) − h(yj)}
(xi, yj) ∈ E, (zk, yj) ∈ E

This measure ranks peaks and pits on the basis of the ridge or channel linked to them with the smallest drop in elevation.

(iv) The sum of the elevation differences between a peak or pit and all its adjacent passes.

w(xi) = Σ {h(yj) − h(xi)}
w(zk) = Σ {h(zk) − h(yj)}
(xi, yj) ∈ E, (zk, yj) ∈ E
This measure selects pits and peaks with a low number of crossings. However, as can be seen, this measure could be misleading because it will be biased by the heights of the points.

(v) The sum of the elevation differences between a peak or pit and all its adjacent passes, normalised by the degree of the peak or pit.

w(xi) = Σ {h(yj) − h(xi)} / n(xi)
w(zk) = Σ {h(zk) − h(yj)} / n(zk)
(xi, yj) ∈ E, (zk, yj) ∈ E, where n denotes the degree of the critical point.

The idea behind this measure is the same as in measure (iv), but this one normalises the elevation differences. However, this is an unnecessarily involved way to find crossings; the degree of the peak or pit is perhaps more suited. Still, the normalised sum could prove useful for some other purpose.

Wolf (1988) proposed that the geometrical errors observed while creating simplified contour maps using line simplification methods could be avoided by making the contour maps directly from the generalised surface networks. This was particularly useful for cartographic purposes.

1.2.3.2 Limitations

Despite the simplicity and robustness of the surface network contraction methods proposed by Pfaltz (1976) and Wolf (1988), they have three main limitations which restrict their utility in practical terrain generalisation:

(i) Limitations of weight measures

According to Weibel and Dutton (1999), the first step in the generalisation of spatial datasets is the cartometric evaluation of the dataset, which involves an assessment of the dataset to select the portions suitable for generalisation. For the surface network generalisation above, cartometric evaluation involves the assignment of weights (based on elevation differences) and using them to rank (via the selection criteria) the peaks and pits for contraction. Mark (1977) and Wolf (1988) have argued that all weights and selection criteria must be based on elevation.
However, it is easy to show that elevation and elevation differences provide little information about the importance of a point (Franklin 2000). For example, two peaks could have ridges with equal elevation differences but of different extent. Pfaltz (1976) first raised the potential arbitrariness in assigning weights and selecting points for contraction. More crucially, the existing weight measures do not take morphometric measures such as slope into account. In addition, it is unclear how critical points with equal weights should be ordered for contraction.

(ii) Sequential contraction

Pfaltz (1976), Mark (1977) and Wolf (1988) proposed an iterative and sequential (rank-based) generalisation of a surface network until a surface with the desired simplicity has been reached. However, it is sometimes desirable to influence the generalisation sequence for the sake of structural integrity (Pfaltz 1976, Wolf 1988), or when the sequence could be anomalous (e.g. two points with equal weights). Currently, there are no proposals to achieve an arbitrary generalisation sequence.
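The rank-driven sequential contraction criticised here can be sketched as the following loop over a toy graph representation. This is a structural sketch only: `weight` and `contract` are hypothetical stand-ins for the elevation-difference measures and the (y0–z0)/(x0–y0) contractions, and the dict-of-attributes representation is an assumption, not the thesis data structure.

```python
def simplify(network, weight, contract, target_size):
    """Iterative rank-based simplification: repeatedly contract the
    internal peak/pit with the smallest weight until the network is
    small enough. `network` maps critical-point names to attribute
    dicts; `weight(point)` and `contract(network, point)` are
    hypothetical placeholders."""
    while len(network) > target_size:
        # cartometric evaluation: rank internal peaks and pits by weight
        candidates = [p for p in network
                      if network[p]["type"] in ("peak", "pit")
                      and not network[p].get("boundary")]
        if not candidates:
            break
        victim = min(candidates, key=weight)   # lowest-ranked point first
        network = contract(network, victim)
    return network
```

Note that ties in `weight` are broken arbitrarily by `min`, which is exactly the ordering ambiguity raised in limitation (i).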
(iii) Purely topological nature of generalisation

The existing generalisation method for surface networks only achieves terrain simplification at a topological level, i.e. while there is a terrain corresponding to the original surface network, the simplifications in the surface network have no morphological expression. For example, if a generalisation creates three new ridge edges, no corresponding ridges are produced in the digital elevation model. This is perhaps the most critical limitation of existing generalisation methods and prevents their use in practical terrain generalisation. Wolf (1988) did not consider the construction of the generalised morphology based on the changes in the topological links, and merely triangulated the critical points left after generalisation.

1.3 Structure of the Thesis

Surface networks have received research inputs mainly from computer science (vision, graphics) and geographic information science (terrain modelling) and, to a limited extent, from social scientists. The research on surface networks can be broadly divided into three main areas, namely automated generation, generalisation, and application, and the research undertaken during this doctorate has accordingly been divided into these three main topics. The chapters are self-contained descriptions of these areas and include conclusions either within the descriptions or at the end of the chapters; this has been done to maintain a consistent line of thought. The chapter conclusions are brought together in the final thesis conclusions. Chapter 2 presents novel techniques for the automated generation of surface networks. The two key proposals in this chapter concern a new data structure to store surface networks and the storage of ridge junctions and channel bifurcations.
The new data structure to store surface networks ensures that both geometrical and topological information about the terrain is preserved. Wolf (1984) and most GIS literature use the term generalisation as synonymous with the simplification of surface networks. However, it is proposed here that generalisation should be considered as both simplification and refinement. Simplification of a data structure involves the removal of redundant and/or undesired details while, on the contrary, refinement introduces details into a coarse data structure. The literal use of the term generalisation merely means the development of a hypothesis/principle/conclusion by approximation of many observations; logically, it is therefore also valid for refinement. Chapter 3 identifies several issues related to the vertex-importance based simplification of surface networks. It presents new weight measures for characterising the structure of surface networks and proposals for the refinement of surface networks. Chapter 4 includes a demonstration of the ideas and techniques developed in the previous chapters. Three types of common terrain analyses, namely viewshed analysis, terrain generalisation and visualisation of landscape evolution, are presented, in which surface networks help improve the computation time and quality of the analyses. Chapter 5 contains a summary of the research and presents directions for future research.
Most of the fundamental ideas of science are essentially simple, and may, as a rule, be expressed in a language comprehensible to everyone.

From “The Evolution of Physics” by Albert Einstein & Leopold Infeld (1967)
Chapter 2 Automated Generation

2.1 Automated generation of surface networks

Despite being a potentially useful representation of terrain, surface networks have not been widely implemented in GIS. The lack of robust techniques for their automated generation is one of the main reasons. Several unresolved issues in the existing automated methods for the extraction of surface networks were highlighted in Chapter 1. This chapter presents a novel solution for the automated generation of surface networks. In addition, it will address the more conceptual (philosophical) shortcomings in designing an automated algorithm. Most computational geometers are indifferent to these philosophical bottlenecks because the unresolved issues arise from the very origin, i.e. the discretised modelling of originally continuous terrain, and hence are inherently irresolvable. These philosophical arguments, which are now being widely formalised, comprise the uncertainty related to the semantics involved in the description of topographic features. In a related argument, Weibel and Dutton (1999) used the term object uncertainty for the inherent deviation of a model from reality. Model uncertainty can be categorised into two types, namely quantitative and qualitative (non-quantitative). Quantitative uncertainty is a straightforward comparison of a particular quantifiable measure between two datasets, one of which could be the raw data and the other a modelled version of the dataset. For instance, in Figure 2.1a the conversion of the continuous terrain to a TIN introduces a reasonably well-defined loss of elevation values. Qualitative uncertainty arises when the continuous terrain is modelled using subjective measure descriptors (Figure 2.1b). For instance, the representation of continuous terrain as a surface network involves the classification of the surface into fundamental topographic features.
However, the classification of the surface (raster cells) relies on morphometric properties to define a topographic feature (e.g. a peak). The choice of morphometric properties therefore influences the feature identification step. Moreover, labels such as peak are inherently vague: what is the extent of a peak? Is it the whole mound, or only a certain area around the tip of the mound? This chapter first discusses the issue of subjectivity in feature labelling and then proposes a computational solution for the automated extraction of a consistent surface network from raster, TIN and contour-based terrain datasets.

2.2 Novel solutions to unresolved issues

2.2.1 Stating semantic uncertainties in labelling terrain

As mentioned in Chapter 1, the argument that a surface network is made up of the fundamental local topographic features, i.e. peaks, pits, passes, ridges and channels, is central to the surface network’s superiority over other terrain data structures. The notion that these morphological features are the only fundamental features on terrain is debatable; this ambiguity was also highlighted in the previous chapter, since geomorphologists have proposed other types of morphological features as significant for certain terrains. This therefore becomes the first assumption in the modelling of terrain using
surface networks, i.e. that a framework of peaks, pits, passes, ridges and channels is universal to all terrains. However, debatable issues in the semantics of feature classification remain.

Figure 2.1 Quantitative vs. non-quantitative model uncertainty. The loss of height and shape information during a conversion of a raster surface to a TIN, as shown in (a), can be evaluated to some precision; however, the representation of a raster (assumed to be continuous) surface as a set of feature lines and points, as shown in (b), involves unknown and subjective choices during conversion, hence the uncertainty will remain non-quantitative or at best a probability.

How can we objectively define any topographic feature? For example, the mathematical definition of a peak states that a peak is a point with a locally convex curvature and zero slope. However, our general understanding of what a peak is, is based on an area-based/regional (Fisher et al. 2004) view of the terrain. In terms of strict mathematical definitions, a hill with a flat top may not be considered to have a peak (Figure 2.2a), but if asked, most of us would instinctively identify a peak on such a hill. This type of semantic uncertainty arises from the unclear notion of the scale of observation. The term scale has been used to indicate both the extent of the study and the sampling resolution (Fisher et al. 2004). For example, the answer to the question “what is the scale?” could mean the area of the local terrain patch used to identify features, or the cell size of the given raster DEM. The latter seems to be an incorrect representation of the concept of scale, as a coarser resolution does not necessarily mean the absence of features with small geographical extents, and vice versa (Hutchinson 1996).
At the same time, a surface network derived from a 50 m spatial resolution raster surface is likely to differ from one extracted from a 100 m spatial resolution raster surface. It is non-trivial to correlate the two concepts (i.e. area and resolution) because both contribute to scale effects. There are three main types of approach to understanding the multi-scaled aspects of terrain. Several studies have tried to present the variations in certain morphometric properties under gradual resampling of surfaces to
coarser spatial resolutions. The assumption is that the resampling filters out smaller topographic features, and is thus a simple and effective technique for generating a multi-scale terrain dataset for use in regional terrain analyses (e.g. global climate models or regional hydrological models) while saving storage space. Figure 2.2a shows how the coarsening of the surface can reveal the true feature. This approach has primarily been popular with early researchers in geography. More recently, another approach to the notion of multi-scale, made popular by Wood (1996) and Fisher et al. (2004), has been to gradually expand the extent of the local area of interest on the terrain (also called the kernel or filter size in raster-based terrain analysis). Figure 2.2b shows the use of an adaptive filter size to explore the feature extent and type. Yet another approach, more popular with researchers from computer vision, has been to smooth the surface with a mean/average filter while keeping the spatial resolution unchanged. This is an iterative process and often involves feature-sensitive smoothing functions such as non-linear anisotropic diffusion (Perona and Malik 1990). Figure 2.2c shows a possible outcome of a smoothing operation that could result in the correct identification of the feature type.

Figure 2.2 Three types of approaches for modelling the scale-space: (a) resampling to a coarser cell size, (b) varying the filter size over the area of interest and (c) smoothing by a mean filter (from left to right).
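The third approach, iterative mean-filter smoothing at a fixed resolution, can be sketched as follows (a minimal sketch with a plain 3×3 mean filter; the feature-sensitive diffusion functions mentioned above would replace the averaging step).

```python
def mean_smooth(dem, passes=1):
    """Iterative 3x3 mean-filter smoothing at fixed resolution: one
    simple way of generating coarser scale levels (approach (c))."""
    rows, cols = len(dem), len(dem[0])
    for _ in range(passes):
        out = [row[:] for row in dem]
        for r in range(rows):
            for c in range(cols):
                # average over the (clipped) 3x3 neighbourhood
                nb = [dem[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
                out[r][c] = sum(nb) / len(nb)
        dem = out
    return dem
```

Each pass flattens small undulations while preserving the grid size, so a stack of progressively smoothed DEMs forms a simple scale-space.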
As also stated in Chapter 1, the role of scale is crucial in an acceptable description of terrain with any type of feature labelling. It follows that all feature extractions are scale-dependent, i.e. they are valid only for the given scale of observation. However, a vast amount of research has been done, particularly in computer vision (Lindeberg 1994), to develop techniques for representing the scale-space of the terrain (the analogous data being visual scenes in the case of computer vision) in a model which stores features from across the scales. Various scale-space models have been proposed by computer science researchers, which vary in the way the scale levels relate to one another in the scale-space hierarchy. For example, in the Gaussian scale-space model each scale is derived by Gaussian smoothing (also referred to as convolution) of the scale one level below it in the hierarchy, and each scale level is identified by an event related to the appearance and disappearance of features during smoothing. See Koenderink (1984) for a detailed treatment of Gaussian scale-space. Other research in GIS has focussed on various descriptions and visualisations of the uncertainty in feature extraction arising from scale dependency (Wood 1996, Fisher et al. 2004); this work is relatively less formal than the computer science research on scale-space. However, even the most formal scale-space models are still at a developmental stage, because the conversion of image-space (the term used in computer vision for the raw data, e.g. a DEM or photographs) to object-space (the term used for a model of reality automatically interpreted from the image-space) is non-trivial, and perhaps impossible due to our inability to translate human logic into machine languages. This fact has led to the notions of acceptable error and fitness for purpose in evaluating the robustness of feature-extraction algorithms and their outputs.
It is important to mention here that there are several other terrain data structures, e.g. quadtrees (Gaede and Günther 1998), hierarchical TINs (De Floriani and Magillo 2001) and wavelets (Gallant and Hutchinson 1996), that can store the terrain at various levels of detail. However, these multi-scale data structures are generally designed for fast data compression and access, so morphology preservation is not always a priority. Kidner et al. (2001) provide a review of such techniques and their solutions. It is difficult to develop a description of a multi-scale terrain data structure that includes topographic features; in nature, the evolution of topographic features (and their extents) does not always follow a chronological order across the extent of the terrain. Besides scale dependency, there is one more fundamental problem with previous algorithms: they deliberately assume away the variability in terrain morphology under the influence of the elegant mathematical tractability of Morse theory and Pfaltz’s graph. As mentioned in the last chapter, a smooth, doubly-continuous terrain that behaves as a Morse function is rare in nature, although elementary surface networks do occur in nature (Figure 2.3). Therefore, it seems that there is no alternative to adopting a hybrid approach to represent the entire terrain as a surface network, wherein the surface network is made up of both genuine and hypothetical topographic features. In other words, for the sake of completeness of geographic coverage, non-topographic feature points can also be inserted into the Pfaltz graph. However, their addition must follow the topological constraints imposed by Pfaltz and Wolf: the non-topographic locations/points on the terrain assume a pseudo-topographic feature label and have to follow the topological rules regarding connectivity and weights.
The hybrid approach is similar to the proposal by Wolf (1990) for resolving the issue of junctions and bifurcations (see Section 1.2.1.2). This approach was first suggested by Pfaltz (1976) for decomposing non-topographic features. Another example of this approach is the decomposition of degenerate passes as
proposed by Takahashi et al. (1995). At this stage, following the thoughts presented above, the composition of an ideal surface network data structure is proposed. An ideal surface network should conform to the following three main properties:

• It should be consistent with the topological rules proposed by Pfaltz (1976) and Wolf (1984).
• Being a sparse data structure, it should particularly try to reduce the uncertainty in morphology and elevation arising from the minimal sampling of terrains.
• It should be amenable to common terrain analyses such as drainage network extraction.

With these aims in mind, this research has developed a model for the practical implementation of a surface network data structure based on a combination of both topological and morphometric properties of terrain structure.

Figure 2.3 (a) Topographic map of a part of the Salisbury area in England showing a pass feature (courtesy: Streetmap.co.uk) and (b) an aerial photograph of the area (courtesy: GetMapping.com).
2.2.2 Automated extraction of surface topology

Recall the observations in Chapter 1 that not all terrains can be modelled as a surface network representation: they are simply not true Morse functions, and one of the most obvious proofs of this statement is that the Mountaineer’s Equation is not satisfied. In fact, it will be shown later that it is relatively easy to derive a topologically consistent surface network which follows all the rules specified by Pfaltz (1976) and Wolf (1984) except the Mountaineer’s Equation. The Mountaineer’s Equation has little importance for the structure of the surface network and presumes a terrain that has evolved in a homogeneous and isotropic manner, which is clearly an unrealistic assumption. However, several other properties of the surface network, e.g. the strict in-degree and out-degree of passes, positive weights, and compactness (i.e. no holes), remain relevant. This section will demonstrate an unconstrained surface network (see also Rana and Morley 2002), which is produced by relaxing the Mountaineer’s Equation condition but is at the same time able to represent many types of morphological features without violating the other eight rules set by Wolf (1984). There are two main ways of constructing surface networks from raster and TIN DEMs:

(i) Identify passes and develop the topology by tracing the ridges and channels from the passes to the peaks and pits respectively. This is the most commonly used method for the construction of surface networks among both computer graphics (Takahashi et al. 1995, Natarajan and Edelsbrunner 2004, Ni et al. 2004) and GIS (Wood 1998) researchers. In this approach, the correct identification of the critical points is more important than that of the critical lines.

o Pros: This is a well-defined method that would work well for terrains with well-defined topographic features.
In the worst cases, the researcher will at most have to repair erroneous topological connections by suitable manipulations of the topological links (e.g. the treatment of junctions and bifurcations demonstrated by Wolf (1990) and of degenerate passes by Takahashi et al. (1995)).

o Cons: The success of this method depends entirely on the detection of passes. As discussed in the previous section, it is not always possible to identify features, due to the scale dependency of feature extraction. Not all channels and ridges on terrain originate and terminate at the critical points (e.g. see Figure 1.6). The ridges and channels are created by tracing the steepest upslope and downslope paths, respectively, originating from a pass. Figure 2.4 shows the output of such path tracing from Wood (1998); as can be seen clearly, ridges and channels are not always the steepest upslope and downslope paths.

(ii) Identify ridges and channels using morphometric analysis of plan curvature, and resolve the start-nodes, end-nodes, and intersections of ridges and channels into consistent surface network features. This is a novel proposal presented in this work. In this approach, the critical lines are given importance over the critical points.
o Pros: The identified ridges and channels would be the true natural ones. Most types of ridges and channels (including those on hill slopes) can now be included in the surface network, making it a comparatively more accurate description of the terrain. It will also be shown later how this approach provides a good approximation of the interlocking ridge and channel networks proposed by Werner (1988), which are considered to be one of the most realistic representations of fluvial terrains (Mark 1979).

o Cons: It will be shown later that the conversion of raw ridge-channel networks into surface networks requires the addition of pseudo-feature nodes to complete the topology. The algorithm is relatively slower than the previous approach because of the large amount of post-processing required to resolve the topology.

This chapter will present the ridge-channel network based approach for generating surface networks, but hypothetically there are many other ways. One such idea, which could be modified for surface networks, is as follows:

Figure 2.4 A topological network of ridges (white lines) and channels (black lines) bounded by pits, peaks and passes (from Wood 1998). Note that many ridges and channels do not follow true ridge and channel locations.

(iii) Self-organising networks

Self-organising networks (SONs) are one of the latest ideas in modelling large and complex systems (common examples include biological processes and scientific collaborations), whereby small components of the systems synergistically group together to form large structures (Lucas 2004). In the domain of GIS, the concept of SONs has been used in cluster analysis and feature identification1. In a somewhat similar vein, Mark (1979) presented an

1 Bernd Fritzke maintains a website with tools for cluster analysis and feature mapping.
idea of simulation whereby he simulated ridge networks by forming minimum spanning trees from sets of randomly located peaks. Mark's approach can be modified to simulate the formation of hypothetical surface networks and iterated until it matches natural morphology. For example, starting with just the critical points, topologically consistent networks can be produced by randomly connecting the point features while observing the nine topological rules. Admittedly, this approach is likely to produce many meaningless outputs; however, it would be interesting to observe the difference between hypothetical and real surface networks. A disadvantage of this approach is that it will only generate a topological surface network, and therefore it will not be suitable for modelling the terrain morphology.
2.2.2.1 Ridge-Channel Network Algorithm
Figure 2.5 shows the sequence of stages in the generation of a surface network from a raster terrain dataset. The algorithm can be broadly subdivided into four main stages, as follows:
Stage I Assign universal pit/peak.
Stage II Construct a completely connected ridge-channel network.
Stage III Construct surface network graph.
Stage IV Construct surface network model.
Figure 2.5 Flowchart for Stages II-IV to construct surface networks from raster terrain.
Stage I: Assign universal pit/peak
In this step, it is decided whether the terrain is surrounded by a universal peak or a universal pit. Most researchers derive surface networks bounded by a universal pit.
Stage II: Construct a completely connected ridge-channel network
The aim of this step is to identify the ridges and channels and then create a set of connected ridge and channel edges.
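The four-stage flow can be sketched as a driver skeleton. All function names below are illustrative placeholders (the actual implementation, described in Section 2.2.3, was written in ESRI's AML and Avenue scripting languages); each stage is a pass-through stub, so only the control flow is shown.

```python
# Hypothetical skeleton of the four-stage pipeline. Stage names follow the
# text; the stage bodies are stubs so that only the sequencing is conveyed.

def stage_i_assign_universal_node(state):
    # Stage I: most researchers bound the terrain by a universal pit.
    state["universal_node"] = "pit"
    return state

def stage_ii_ridge_channel_network(state):
    # Stage II: smooth, identify ridges/channels, verify against contours,
    # then connect the broken segments (sub-steps II.Ia-II.V).
    state["features"] = {"ridges": [], "channels": []}
    return state

def stage_iii_build_sng(state):
    # Stage III: decompose segments into elementary surface networks (SNG).
    state["sng"] = {"nodes": [], "edges": []}
    return state

def stage_iv_build_snm(state):
    # Stage IV: attach edge geometry to obtain the arc-node model (SNM).
    state["snm"] = dict(state["sng"], geometry={})
    return state

def build_surface_network(raster):
    state = {"raster": raster}
    for stage in (stage_i_assign_universal_node,
                  stage_ii_ridge_channel_network,
                  stage_iii_build_sng,
                  stage_iv_build_snm):
        state = stage(state)
    return state
```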
¹ http://www.ki.inf.tu-dresden.de/~fritzke/research/incremental.html (accessed 16 June 2004)
Most of the effort in developing a robust algorithm has
been spent in developing this step and Stage III. Stage II consists of several further sub-steps, as follows:
Stage II.Ia: Construct a contour map of the terrain
The aim of this step is to generate a contour map of the raster terrain, which can be used for visual verification of the quality of the feature identification. Contour maps, by virtue of their design, reveal morphological variations better and more explicitly than a raster map visualisation. Typically, a perspective view of the surface is produced to visualise the structure of the terrain; however, even in 2D, contours can reveal much more detail about the structure of the terrain than a perspective view of the raster terrain (Rana and Dykes 2003).
Stage II.Ib: Smooth the raster terrain
The aim of this step is to remove noise present in the raster terrain for better feature identification. Noise in a DEM originates from many sources (von Minusio 2002) and it is not always easy to trace the origin and amount of noise. Numerous smoothing filters have been proposed in the literature; filters such as non-linear anisotropic diffusion tend to preserve the original structure of the terrain. According to Takahashi et al. (1995) and Ni et al. (2004), smoothing removes minor morphological variations and helps in generating a consistent surface network, and it is a standard pre-processing step in most algorithms.
Stage II.II: Identify the ridges and channels
In this step, ridges and channels are identified from the smoothed raster terrain. As discussed in Chapter 1, there are many ways of extracting these topographic features. The approach chosen here is a variation of the simple eight-neighbour method proposed by Fowler and Little (1979). The proposed algorithm differs in one important aspect: the classification of a cell to a feature type is based on the plan curvature, and not the elevation, of the adjacent cells.
Plan curvature is the curvature of the surface in the direction perpendicular to the slope direction (Figure 2.6). A channel has a negative (concave) plan curvature, while a ridge has a positive (convex) plan curvature. The use of plan curvature ensures that feature classification is based on the shape of the local neighbourhood and not on mere elevation differences, as the latter can provide misleading results. The basic logic behind the proposed algorithm is that ridges and channels generally have high plan convexity upstream/upslope and low plan convexity downstream/downslope, owing to the continuous and gradual effects of erosional processes on these features.
Figure 2.6 (a) Positive plan curvature with divergent flow lines and (b) negative plan curvature with convergent flow lines (modified after Peschier 1996).
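As an illustration of how plan curvature can be computed from a raster window, the following is a minimal sketch based on the quadratic-surface coefficients of Zevenbergen and Thorne (1987), on which the implementation described in Section 2.2.3 also relies. The sign convention is chosen to match the text (divergent/ridge cells positive, convergent/channel cells negative); the exact formula and the border and flat-cell handling are my reconstruction, not the thesis code.

```python
# Plan curvature sketch after Zevenbergen and Thorne (1987). The 3x3 window
# is z1..z9 (z5 central); rows are taken to run north to south. Border cells
# and flat cells (zero slope) are assigned 0 here as an arbitrary choice.
import numpy as np

def plan_curvature(z, cell=1.0):
    """Plan curvature of DEM `z`; border cells are left as 0."""
    c = np.zeros_like(z, dtype=float)
    z1, z2, z3 = z[:-2, :-2], z[:-2, 1:-1], z[:-2, 2:]   # northern row
    z4, z5, z6 = z[1:-1, :-2], z[1:-1, 1:-1], z[1:-1, 2:]
    z7, z8, z9 = z[2:, :-2], z[2:, 1:-1], z[2:, 2:]      # southern row
    D = ((z4 + z6) / 2.0 - z5) / cell**2        # half second derivative, x
    E = ((z2 + z8) / 2.0 - z5) / cell**2        # half second derivative, y
    F = (-z1 + z3 + z7 - z9) / (4.0 * cell**2)  # cross derivative
    G = (-z4 + z6) / (2.0 * cell)               # first derivative, x
    H = (z2 - z8) / (2.0 * cell)                # first derivative, y
    s2 = G**2 + H**2
    with np.errstate(invalid="ignore", divide="ignore"):
        pc = -2.0 * (D * H**2 + E * G**2 - F * G * H) / s2
    c[1:-1, 1:-1] = np.where(s2 > 1e-12, pc, 0.0)  # flat cells -> 0
    return c
```

On the paraboloid dome z = -(x² + y²) with unit cells, every non-summit interior cell evaluates to plan curvature +2 (divergent, ridge-like), and the inverted bowl to -2, consistent with the sign convention stated in the text.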
A 3x3 cell window (Figure 2.7) is passed over each cell of the raster terrain, and the central cell of the window is classified as either a ridge or a channel based on a combination of two criteria (Figure 2.7). The first criterion tests whether the central cell has a positive or negative plan curvature, i.e. whether it belongs to a feature with locally positive curvature (a ridge) or locally negative curvature (a channel). The second criterion identifies the feature type and orientation. The use of plan curvature to classify cells produces much better results than conventional elevation-based methods (e.g. Fowler and Little 1979), and the method is simple and highly adaptable. Figure 2.8 shows a comparison of feature extraction based on elevation, on plan curvature (the proposed algorithm) and on a sophisticated conic-section fit (Wood 1998) for an area in the Salisbury region, UK. Figure 2.8 shows that the curvature-based feature extraction method accurately extracts most of the features. It is particularly important to note that, in comparison with the conic-section based method, the proposed algorithm correctly identifies feature orientation and preserves feature connectivity.
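The 24 classification cases tabulated in Figure 2.7 collapse to one flank test and one axis test per orientation. The compact restatement below is my reconstruction of the table logic, not the original AML script; the window cells c1..c9 hold plan-curvature values, with c5 at the centre.

```python
# Generic form of the Figure 2.7 rules. Each orientation is described by a
# flank pair (perpendicular to the feature axis) and an axis pair, with the
# compass labels of the two axis ends.
ORIENTATIONS = [
    ((4, 6), (2, 8), "N-S",   ("N", "S")),    # flanks W/E, axis N/S
    ((2, 8), (4, 6), "E-W",   ("W", "E")),    # flanks N/S, axis W/E
    ((1, 9), (3, 7), "NE-SW", ("NE", "SW")),  # flanks NW/SE, axis NE/SW
    ((3, 7), (1, 9), "NW-SE", ("NW", "SE")),  # flanks NE/SW, axis NW/SE
]

def classify(c):
    """c: dict {1..9: plan curvature}. Returns (type, orientation, dip) or None."""
    c5 = c[5]
    if c5 == 0:
        return None
    kind = "channel" if c5 < 0 else "ridge"
    for (f1, f2), (a1, a2), orient, (d1, d2) in ORIENTATIONS:
        # 1st criterion: both flanks above (channel) or below (ridge) c5.
        if kind == "channel":
            flanks_ok = c[f1] > c5 and c[f2] > c5
        else:
            flanks_ok = c[f1] < c5 and c[f2] < c5
        if not flanks_ok:
            continue
        # 2nd criterion: dip toward the lower end of the axis, or no dip.
        if c[a1] > c5 > c[a2]:
            return (kind, orient, "dipping " + d2)
        if c[a2] > c5 > c[a1]:
            return (kind, orient, "dipping " + d1)
        if c[a1] == c5 == c[a2]:
            return (kind, orient, "no dip")
    return None
```

For example, a window with c5 = -1, flanks c4 = c6 = 0 and axis values c2 = 0, c8 = -2 matches the table row "N-S channel dipping south".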
The 3x3 filter window:

c1 c2 c3
c4 c5 c6
c7 c8 c9

1st Criterion  2nd Criterion                           Feature
c5 < 0         c4 > c5 & c6 > c5 & c2 > c5 & c5 > c8   N-S channel dipping south
               c4 > c5 & c6 > c5 & c8 > c5 & c5 > c2   N-S channel dipping north
               c4 > c5 & c6 > c5 & c2 = c5 & c5 = c8   N-S channel no dip
               c2 > c5 & c8 > c5 & c4 > c5 & c5 > c6   E-W channel dipping east
               c2 > c5 & c8 > c5 & c6 > c5 & c5 > c4   E-W channel dipping west
               c2 > c5 & c8 > c5 & c4 = c5 & c5 = c6   E-W channel no dip
               c1 > c5 & c9 > c5 & c3 > c5 & c5 > c7   NE-SW channel dipping SW
               c1 > c5 & c9 > c5 & c7 > c5 & c5 > c3   NE-SW channel dipping NE
               c1 > c5 & c9 > c5 & c3 = c5 & c5 = c7   NE-SW channel no dip
               c7 > c5 & c3 > c5 & c1 > c5 & c5 > c9   NW-SE channel dipping SE
               c7 > c5 & c3 > c5 & c9 > c5 & c5 > c1   NW-SE channel dipping NW
               c7 > c5 & c3 > c5 & c1 = c5 & c5 = c9   NW-SE channel no dip
c5 > 0         c4 < c5 & c6 < c5 & c2 > c5 & c5 > c8   N-S ridge dipping south
               c4 < c5 & c6 < c5 & c8 > c5 & c5 > c2   N-S ridge dipping north
               c4 < c5 & c6 < c5 & c8 = c5 & c5 = c2   N-S ridge no dip
               c2 < c5 & c8 < c5 & c4 > c5 & c5 > c6   E-W ridge dipping east
               c2 < c5 & c8 < c5 & c4 < c5 & c5 < c6   E-W ridge dipping west
               c2 < c5 & c8 < c5 & c4 = c5 & c5 = c6   E-W ridge no dip
               c1 < c5 & c9 < c5 & c3 > c5 & c5 > c7   NE-SW ridge dipping SW
               c1 < c5 & c9 < c5 & c3 < c5 & c5 < c7   NE-SW ridge dipping NE
               c1 < c5 & c9 < c5 & c3 = c5 & c5 = c7   NE-SW ridge no dip
               c7 < c5 & c3 < c5 & c1 > c5 & c5 > c9   NW-SE ridge dipping SE
               c7 < c5 & c3 < c5 & c1 < c5 & c5 < c9   NW-SE ridge dipping NW
               c7 < c5 & c3 < c5 & c1 = c5 & c5 = c9   NW-SE ridge no dip

Figure 2.7 3x3 feature extraction filter window and the criteria for feature classification.

Admittedly, the proposed classification is simple, as it only tests for eight possible orientations of features among all possible directions. Owing to this limitation, ridges and channels appear broken in some places. Therefore, the broken channel and ridge segments are connected by another 3x3 cell filter. The filter passes over each cell and tests whether
the central cell is blank; if so, it then tests whether the cells diametrically opposite along the eight directions in the filter have the same classification. If the opposite cells have the same feature type, then the central cell is assigned that feature type. An example of a broken channel is shown in Figure 2.9, where blank cells represent unclassified cells and cells marked "c" are channel cells. A side effect of the filling filter is that it tends to expand the thickness of cell clusters (Figure 2.9b). Therefore, a thinning filter is applied to reduce each cluster to single cells so that the ridges and channels can be extracted in vector form.
Stage II.III: Evaluate the feature identification visually
In this step, the extracted features are overlaid on the contour map of the terrain produced in Stage II.Ia to evaluate the accuracy of the extracted feature types and their orientation. If the feature identification is not satisfactory, Stages II.I-II.III are repeated until a satisfactory feature representation has been developed.
Stage II.IV: Remove isolated ridges and channels below a threshold length
The aim of this step is to remove any insignificant depressions and elongated mounds that might appear as channels and ridges respectively. These could be natural features, or they could be erroneous outputs produced during the feature identification process.
Stage II.V: Create connected ridges and channels
The aim of this step is to link the unconnected ridges and channels to ensure a fully-connected graph. Figure 2.10a shows the state of a typical vectorial output, with broken links, from the steps so far; the entire area slopes to the south-east. For a terrain that is not a Morse function, representing the morphology as a surface network will require some manipulation of links, even leading to the generation of the pseudo-feature nodes mentioned earlier.
For example, the isolated channel on the mountain slope shown inside the box in Figure 2.10a will have to be linked to either a neighbouring ridge or a neighbouring channel. A similar situation arises with the unidentified, eroded parts of ridges and channels. One way of achieving complete connectivity is to extend the unconnected downslope end of each feature until it connects to another feature. The extension could be a simple linear extension along the axis of the feature, or it could follow the steepest downslope path based on height or aspect; other methods of extending the features may also be possible. However, even with the best feature identification, there will inevitably be some topologically inconsistent and unrealistic links following the extension. Figure 2.10b shows the output of such a procedure using linear extension, and Figures 2.10c,d show steepest-drop and aspect-based downslope paths respectively. Each extension approach succeeds to a varying degree, depending on the area. In some places (especially when the separation between broken nodes is small) a mere linear extension of the link will be sufficient (especially cartographically), as the latter two methods will introduce paths that may be morphologically sensible but undesirable. Some of the extensions also create morphologically meaningless configurations of links; however, extensions are essential to create a complete graph. For example, compare the extended part of the incomplete ridge at a pass location, shown in the box in the NE of the area, in Figures 2.10a,b,c,d. Arguably, a more detailed algorithm could be developed that evaluates the configuration of broken links in the context of the local links and extends them accordingly, unlike the rather simple approach presented above.
Such an algorithm is, however, likely to involve subjective decisions, such as the type of links to be extended, the conditions under which they are to be extended, and
where to connect the broken links. It will be shown later that, in either approach, the non-morphological artificial links can be resolved into a consistent surface network topology under certain assumptions. The aim here is to preserve as much original height information as possible, even at the expense of some morphologically meaningless topological links.
Stage III: Build surface network graph (SNG)
In this step, the framework of intersecting ridge and channel segments is converted into the surface network data structure. The algorithm proposed here differs significantly from other algorithms, hence the following list of rules provides the fundamental characteristics of the proposed model for surface networks.
Rules:
(i) Each ridge and channel segment is converted into an elementary surface network (ESN) by representing the upslope and downslope nodes as artificial peaks, pits and passes and connecting the ridge/channel segment suitably to ensure positive weights (Figure 2.11). Each ESN is made up of three artificial edges and one real edge; the real edge represents the type of the segment. The topology of an ESN is constructed such that the height variation along the ridge or channel segment can be included in the ESN. For example, an ESN for a ridge segment has a peak as the upslope node, so that a ridge can be drawn from it to a lower pass; similarly, an ESN representing a channel segment starts from a higher pass and terminates at a lower pit. The artificial edges are added to the pass to ensure that the ESN follows Rule 4 (see Section 1.2.1.1). This approach corresponds to those followed by Pfaltz (1976), Wolf (1988, 1990) and Takahashi et al. (1995) to store junctions and bifurcations, which are degenerate passes, by decomposition into non-degenerates. The approach presented here, however, is able to represent any type of intersection, i.e. not just a degenerate pass.
(ii) The elevation of the upslope node of a ridge segment is assigned to the upslope peak of the ridge segment's ESN, while the elevation of the downslope node is assigned to the pass. The elevation of the upslope node of a channel segment is assigned to the pass of the channel segment's ESN, while the elevation of the downslope node is assigned to the downslope pit. Such assignments ensure that the height variation along the ridge or channel segment is accurately stored. The remaining peaks and pits are assigned infinitesimally higher and lower elevation values respectively, relative to the elevation of the connected pass.
(iii) The upslope ESN nodes are referred to as origin nodes and the downslope nodes as receiver nodes. Receiver nodes have a leader node, which is used to decide the link combination at intersections (see next section). For ridge and channel segments, the leader nodes are peaks and pits respectively (Figure 2.11).
(iv) Intersections of segments (junctions, bifurcations and others) are stored by linking the receiver nodes of the incident ESNs with the origin nodes of the ESNs leaving the intersection. The linking is done in the following manner to ensure consistent topology:
a. A list of the types of leader nodes at the intersection is prepared in a clockwise arrangement, starting with the receiver nodes. Figure 2.12 shows various combinations of receiver and origin nodes at intersections. For example, the list of leader nodes for the intersection
shown in Figure 2.12a is {peak, peak, peak} and in the case of Figure 2.12e is {peak, pit, peak, peak}.
b. Links are developed between consecutive leader nodes in the list, based on their types. If two consecutive leader nodes are of the same type, they are merged into a single node (Figures 2.12a,b,c,d). If a pair of consecutive nodes is of different types, a node common to the ESNs of the two segments is used to connect the two ESNs (Figures 2.12e,f,g).
This methodology works for most cases of ridge and channel segment intersections; however, there are two important exceptional cases which require some pre-processing and special handling before the above rules are applied.
Rules for exceptional cases of ridge and channel segment intersections:
If the two endpoints of a ridge/channel segment are at the same elevation, then the elevation of one randomly selected node is lowered slightly. This approach is similar to that of Takahashi et al. (1995), who suggested tilting the terrain to introduce a slope in flat areas.
For a loop of channel/ridge segments, or a non-sloping ridge such as is found along crater boundaries, the loop is first broken into four separate segments; secondly, if the origin and receiver nodes are at the same elevation, the rule above is applied (Figure 2.13).
With the above simple rules, it is now possible to store any set of connected ridge and channel segments as a consistent surface network topology. The proposal based on ESNs has a significant advantage over other methods in that it allows an arbitrary topological arrangement of ridges and channels. Figure 2.14a shows an arbitrary framework of ridge and channel segments on an island surrounded by a universal pit, and Figure 2.14b shows the surface network representation of the island derived by the proposed methodology.
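Rules (i)-(iii) can be sketched as follows. The record layout, the EPS offset and the exact arrangement of the three artificial edges (here, one further edge of the segment's own type plus two of the opposite type around the pass, so that ridges and channels alternate as Rule 4 requires) are my reading of Figure 2.11, not the original code.

```python
# Sketch of Rules (i)-(iii): one segment -> one elementary surface network
# (ESN) of one real edge plus three artificial edges completing the pass.

EPS = 1e-6  # infinitesimal offset for the auxiliary peaks/pits (Rule ii)

def make_esn(kind, z_up, z_down, tag=""):
    """Return nodes {name: (type, z)}, edges [(u, v, feature, is_real)]
    and the leader node (Rule iii) for one ridge/channel segment."""
    assert kind in ("ridge", "channel") and z_up > z_down
    p = tag  # name prefix so several ESNs can share one namespace
    if kind == "ridge":
        # Rule (ii): upslope node -> peak(z_up), downslope node -> pass(z_down).
        nodes = {p + "peak": ("peak", z_up), p + "pass": ("pass", z_down),
                 p + "peak2": ("peak", z_down + EPS),
                 p + "pit1": ("pit", z_down - EPS),
                 p + "pit2": ("pit", z_down - EPS)}
        edges = [(p + "peak", p + "pass", "ridge", True),    # the real edge
                 (p + "peak2", p + "pass", "ridge", False),
                 (p + "pass", p + "pit1", "channel", False),
                 (p + "pass", p + "pit2", "channel", False)]
        leader = p + "peak"   # the leader of a ridge ESN is its peak
    else:
        # Rule (ii): upslope node -> pass(z_up), downslope node -> pit(z_down).
        nodes = {p + "pass": ("pass", z_up), p + "pit": ("pit", z_down),
                 p + "pit2": ("pit", z_up - EPS),
                 p + "peak1": ("peak", z_up + EPS),
                 p + "peak2": ("peak", z_up + EPS)}
        edges = [(p + "pass", p + "pit", "channel", True),   # the real edge
                 (p + "pass", p + "pit2", "channel", False),
                 (p + "peak1", p + "pass", "ridge", False),
                 (p + "peak2", p + "pass", "ridge", False)]
        leader = p + "pit"    # the leader of a channel ESN is its pit
    return nodes, edges, leader
```

Each ESN thus carries exactly one real edge with the segment's true endpoint elevations, while the auxiliary peaks and pits sit an EPS above and below the pass, which is why they are visually unnoticeable in Figures 2.14-2.16.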
Figure 2.14b shows a marked deviation from the conventional surface network; the most prominent deviation is the absence of natural critical points as the start and end points of ridges and channels. The structure nevertheless captures the geometry of the ridges and channels completely, which is one of the main advantages of the proposed model. In addition, it still provides the ability to assign weights along the real edges of the ESNs, which can be used for the simplification of surface networks. Although the artificial edges and peaks/pits/passes seem to create a scatter of nodes and appear unnatural in the figures, they are in fact unnoticeable because of the very small separation in both the x,y and z coordinates. The following section demonstrates examples of the implementation. The storage format of the SNG dataset is described in the Appendix.
Stage IV: Build surface network model (SNM)
The SNG format stores only the topology of the surface network, which does not provide enough information about the morphology of the terrain. In this step, the SNG is converted into an arc-node type data structure in which the geometry of the edges is also stored. Such a dataset is useful for two important reasons:
• The uncertainty related to the missing elevation and morphological information in the surface network data structure can be reduced. Thus, the SNM can be used to reconstruct a terrain surface, e.g. as a TIN, and to visualise the terrain structure.
• Algorithms are now available for the TIN data structure (Bremer et al. 2003) which can incorporate the changes in topology following a contraction.
A record of the real/artificial nature of the edges is stored in the SNM so that, in a practical application (e.g. visualisation, or converting a surface network into a TIN), the artificial edges can be left out of the processing. The Appendix describes the format for SNM storage.
2.2.3 Implementation
The proposed algorithm has been tested on two terrains, from the Salisbury area in England and from the Isle of Man, and has been developed for feature recognition from raster terrain datasets. The software for the automated generation of surface networks was built in the popular GIS packages Arc/Info and ArcView 3.2, developed by ESRI. These packages were chosen because they contain many useful terrain analysis functions and thus speed up the software development process. For example, ArcInfo provides robust and fast functions for the computation of plan curvature (based on Zevenbergen and Thorne 1987), smoothing (mean filters) and raster-to-vector conversion. The ArcInfo scripting language AML and ArcView's scripting language Avenue were used to write the software: AML scripts for the feature recognition and Avenue scripts to construct the surface networks from the unconnected ridge and channel segments. Figures 2.15a and 2.15b respectively show the hill-shaded view and surface network graph of the terrain in a part of the Isle of Man; Figures 2.16a and 2.16b respectively show the hill-shaded view and surface network graph of the relief in a part of the Salisbury area in England. Despite a few morphologically undesired links, a visual comparison of the figures reveals that the feature recognition has been successful in capturing most of the linear features on the terrain, independent of their geographic extent. The difference between the conventional surface network and the proposed model is also apparent.
For example, the prominent pass to the NW of the Salisbury terrain (shown in a circle in Figure 2.16b) is stored as a set of intersecting ridge and channel ESNs; in a conventional representation it would be stored only as a single peak, even though the morphology and heights around the peak area are far from uniform. In simple terms, the proposed model is more faithful to the actual morphological variations of the terrain than the conventional surface network model.
2.3 Discussion
The proposed algorithm for the automated generation of surface networks differs from conventional methods (Takahashi et al. 1995, Schneider and Wood 2004) in that it gives more significance to the linear topographic features, based on the belief that linear features are more crucial for defining the morphology of the terrain and that the other methods are more vulnerable to the scale-dependency of feature recognition. One of the debatable aspects of the proposed algorithm is the extension of the unconnected downslope ends of features and the resultant morphologically meaningless links between features. Arguably, one can even challenge whether the resultant output is a true surface network; strictly, it is not. At the same time, however, the proposed surface network model is superior to the conventional ones, as it allows the encoding of problematic morphological features while not deviating significantly from the fundamental properties of a surface network, i.e. a framework of important points and lines on the terrain that maintains the topological relations between critical points and lines. The proposed algorithm does suffer from the following computational issues, which need to be addressed in future work:
Issue 1: Number of smoothing iterations
This remains one of the subjective aspects of the proposed algorithm: the choice of the number of smoothing iterations necessary to achieve satisfactory feature extraction. There is always a risk that excessive smoothing will obliterate features. In addition, the notion of iteration takes away the desired fully automated character of the algorithm. It is non-trivial to derive an appropriate number of smoothing iterations that will be sufficient to generate a reliable identification of features. However, based on empirical evidence, a methodology for an approximate assessment is proposed. Figure 2.17 shows the gradual variability observed in the feature classification with a varying number of smoothing iterations for the Salisbury data; the y-axis is the variance of the feature grid. To detect breakpoints, it can be useful to break the continuous trend into individual linear trends. On the curve in Figure 2.17a, four tangent lines have been marked and numbered. The most significant lines are 1 and 4, which represent the trend of the curve for a minimum number of iterations and for a large number of iterations, where the curve is levelling off. In Figure 2.17a, lines 2 and 3 are arbitrary but simulate sections of clearly different trend that would appear for areas other than this one. The intersections of these trends (the breakpoints) signify changes in feature scale, which can be used to identify the number of iterations. In this case, the intersection between the first and second trends occurs at around ten iterations, the intersection between the second and third trends at around 30 iterations, and so on. These breakpoints are also apparent in the feature extractions (Figure 2.17b), although the changes across the breakpoints are not very pronounced. Often, the number of iterations corresponding to the first or second intersection will be sufficient.
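The choice of iteration limit can also be automated in a simple, hedged way: rather than fitting tangent lines by eye, pick the first iteration at which the feature-grid variance has fallen a fixed fraction of the way from its initial value to its approximate asymptote. The 90% fraction and the use of the last sampled value as the asymptote are illustrative choices, not part of the proposed methodology.

```python
# Hedged sketch: choose the smoothing-iteration limit from the variance
# curve of the feature grid. The asymptote is approximated by the last
# sampled variance; the stopping fraction is an arbitrary 90%.

def iteration_limit(variances, fraction=0.9):
    """variances[i] = feature-grid variance after i+1 smoothing iterations."""
    v0, v_inf = variances[0], variances[-1]
    target = v0 - fraction * (v0 - v_inf)
    for i, v in enumerate(variances):
        if v <= target:
            return i + 1          # 1-based iteration count
    return len(variances)
```

For a variance curve that levels off, such as [100, 60, 40, 30, 25, 22, 21, 20.5, 20.2, 20.1], the rule selects iteration 5, i.e. shortly after the curve has made most of its descent.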
Another interesting aspect of Figure 2.17b is that after a certain number of iterations, smoothing does not significantly affect the overall feature classification pattern, corresponding to the levelling-off of the graph in Figure 2.17a. The user has to identify these trend lines, and hence the breakpoints, to find a good estimate of the iteration limit. In the actual case of Figure 2.17a, where lines 2 and 3 are relatively arbitrary, we might choose the intersection of lines 1 and 4 as an estimate of the iteration limit. The choice of limit can be checked visually by matching the feature extraction against the original contour map, to ensure that the major features are still correctly represented. A final possibility, not illustrated in Figure 2.17a, is that the iteration limit could correspond to a 90% reduction from the variance after one iteration to the asymptotic value after 'many' iterations. This method is based purely on the limiting statistics of the variance curve and does not require the estimation of sections of common trend in the variance. The effect of smoothing on the feature classification should be expected to vary with the terrain. For example, Figure 2.18 shows the effect of smoothing on the feature classification of the Isle of Man terrain. Evidently, after 10 smoothing iterations the feature classification varies only slightly, which is also apparent from the intersection of trends 1 and 3 (Figure 2.18a). This example supports the hypothesis that the plot of feature classification variance versus smoothing iterations provides an indication of the structure of the terrain.
Issue 2: Feature classification model
The proposed algorithm for feature extraction evaluates only the eight cardinal directions at a cell for the possible flow direction, and it does not employ any threshold (e.g. on curvature differences) to constrain the feature classification.
As a result, the feature extraction produces broken and, in places, doubtful feature classifications. While the
broken features can be connected by post-processing, it is non-trivial to produce robust threshold parameters.
Issue 3: Unclassified cells at the boundary
All raster-based feature extraction algorithms suffer from the limitation that the edges of the raster terrain remain unclassified, as the filter window is square and edge cells have parts of the filter window falling outside the raster terrain. In the proposed algorithm, the top two rows, bottom two rows, left-most two columns and right-most two columns remain unclassified. A simple solution is to use an input raster terrain larger than the output area; arguably, many other methods could similarly be developed.
Underlying any kind of geo-spatial generalisation is the compromise between data quality (i.e. accuracy of height) and feature preservation. The user of any terrain analysis should therefore be aware of the pros and cons of each method because, as discussed earlier, different methods have different strengths and weaknesses. Obviously, the definitive test of the utility of a terrain data structure is its suitability for a particular application. For example, in a fly-through in a computer game the accuracy of feature preservation is not so important, while a hydrological analysis will require greater accuracy.
Figure 2.8 Comparison of feature extraction methods in a terrain around Salisbury, UK. (a) Elevation-based flow assessment, (b) descent of plan curvature and (c) conic section analysis. All the methods used the same input DEM and parameters. Red cells are ridges and blue cells are channels. The contour overlay can be used to observe the robustness of the feature extraction algorithm. The DEM has a cell size of 10 m and 502x501 cells; the contour interval is 20 m.
Figure 2.9 Fill-in-the-gap filter used to connect broken features. (a) A simple case, where the filter produces good results, and (b) a case where it produces problematic outputs.
Figure 2.10 (a) Broken surface network topology. Snapping of the unconnected downslope ends of channels and ridges by (b) linear extension, (c) steepest downslope path and (d) aspect path. Solid lines are original links and dashed lines are extensions. Blue lines are channels and red lines are ridges. Compare the accuracy of the links in the red box in the NE part of the area. The small boxes are an artefact of raster-to-vector conversion.
Figure 2.11 Conversion of ridge and channel segments to elementary surface networks. Dashed edges are artificial.
Figure 2.12 (a)-(d) The configuration of leader nodes at various types of ridge and channel intersections and their respective decompositions. See text for commentary. Dashed edges are artificial.
Figure 2.12 (continued) (e)-(g) Further configurations of leader nodes at ridge and channel intersections and their respective decompositions. Dashed edges are artificial.
Figure 2.13 Representation of a loop of ridge segments around a crater by decomposition into elementary surface networks. Dashed edges are artificial.
Figure 2.14 (a) An arbitrary configuration of leader nodes at various types of ridge and channel intersections and (b) their respective decompositions. See text for commentary.
Figure 2.15 (a) Hill-shaded view of a part of the Isle of Man raster terrain (cell size 100 m, 135x121 cells) and (b) the corresponding surface network model (peaks, pits and passes are not displayed, to reduce clutter) with contours (20 m interval).
Figure 2.16 (a) Hill-shaded view of a part of the Salisbury raster terrain (cell size 10 m, 329x379 cells) and (b) the corresponding surface network model (peaks, pits and passes are not displayed, to reduce clutter) with contours (20 m interval). See text for commentary on the features in the circle.
Figure 2.17 The effect of smoothing on the feature classification of the Salisbury area terrain. (a) The four linear trends (red lines) have been derived visually; the numbers indicate the order in which they appear during smoothing. (b) Feature maps after 1, 10, 30 and 90 iterations. Blue areas are channels and red areas are ridges.
Figure 2.18 The effect of smoothing on the feature classification of the Isle of Man area terrain. (a) The four linear trends (red lines) have been derived visually; the numbers indicate the order in which they appear during smoothing. (b) Feature maps after 1, 10, 20 and 30 iterations. Blue areas are channels and red areas are ridges.
Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore.
Albert Einstein, in "Albert Einstein, a Philosopher-Scientist" by Paul A. Schilpp
Chapter 3
Structural Analysis

Surface networks, by way of their graph construction, are suitable for many types of graph-theoretic characterisation. Earlier research has offered characterisations based on the scalar properties associated with surface networks. For example, the weight measures proposed by Pfaltz (1976), Mark (1977), Wolf (1984), Rana (2000), and Edelsbrunner et al. (2003) are based on the morphometric (mainly height) properties of the vertices and edges of the graph. These types of graph measures are only a small subset of the possible graph measures that could be used to characterise surface networks. In GIS and geography, graphs have been used widely to represent various processes and phenomena, such as transport networks (e.g. for routing), drainage networks (e.g. for stream ordering), socio-economic indicators (e.g. for analysis of monetary transactions), and recently cyber-geography (e.g. for trace-route analysis). A substantial body of literature therefore exists on the application of graph measures for various purposes.

This chapter demonstrates the application of graph measures from non-geomorphological contexts to the analysis of terrain surface networks. The first part of the chapter demonstrates graph-theoretical measures otherwise used in non-terrain contexts, which could also be used to characterise the structure of terrain surface networks. The list of measures presented is merely a selection of graph measures that can be derived and used relatively simply. The next part of the chapter focuses on the simplification of surface networks. As discussed in Chapter 1, one of the reasons for characterising surface networks has been to assign weights to them; these weights are then used for a sequential, weight-dependent simplification of surface networks. The latter part of this chapter identifies several issues with the existing simplification methods and proposes new ways of simplifying surface networks.
The final sections of the chapter present novel proposals on how to refine the surface network graph by adding new edges and vertices.

3.1 Extending the description of surface networks

Arguably, many types of graph measures can be implemented on surface networks. However, not all graph measures will be relevant to surface networks and the underlying modelled phenomena, and many graph measures are only applicable to certain graph configurations. The following sections present graph measures which are common to all types of graphs, followed by a special set of measures based on the concept of Small World networks (Milgram 1967, Watts and Strogatz 1998). Small World networks are widely used forms of graphs to represent the general connectivity of networks. Some of the measures presented below assume that a surface network is non-directed and non-weighted, which is contrary to the topological constraints of surface networks. However, as will be evident in the following text, the measures presented here are still relevant to surface networks because they help in understanding the information
about the terrain that is independent of the graph construction. For example, the level of dissection, which is an indicator of the maturity of a terrain and is used to compare terrains, can be calculated if we assume that the graph is non-directed and non-weighted.

3.1.1 Standard graph measures

In the following text, the term length of a path means the number of edges that are traversed to reach a node from another node. For example, the length of a path from a node to all its adjacent nodes is 1. The term depth of a node represents the shortest length required to arrive at the node from another node. With these fundamental definitions, the higher-order graph measures can now be presented.

• Diameter – the maximum depth in a graph. Most commonly, the diameter has been used to assess whether a graph is a Small World network. The Small World theory, proposed by Milgram (1967), like the literal meaning of the phrase "small world", involves the notion of finding relationships between strangers using a chain of mutual social acquaintances. Two well-known examples of Small World experiments are the Erdős number and the Bacon number. These numbers represent the number of acquaintances required by a mathematician and a Hollywood actor/actress respectively to become related to the mathematician Paul Erdős and the Hollywood actor Kevin Bacon. Watts and Strogatz (1998) and Albert and Barabási (2002) found from analysing empirical data that the diameter of Small World networks increases logarithmically with the number of nodes.

• Degree – simply the number of edges incident on a node: a simple general-purpose measure of the connectivity of a node. Watts and Strogatz (1998) and Albert and Barabási (2002) found that in Small World networks the frequency of the degree of the nodes follows a power-law distribution. The term scale-free has been used in the literature to describe this property of Small World networks.
• Eccentricity – the maximum depth of a node. Nodes at the peripheries of the graph have a higher eccentricity than nodes in the centre.

• Fareness – the sum of all the depth values of a node:

Fareness(i) = Σ_{j=1..g} δ_ij

where g is the size (number of nodes) of the network and δ_ij is the depth from node i to node j.

• Mean Depth – the average of all the depth values of a node. It is a general-purpose measure of the connectivity of a node with the other nodes; a lower mean depth indicates a well-connected node in the graph (Turner 2001).

Mean Depth(i) = Fareness(i) / g

• Closeness – the inverse of the sum of all the depth values of a node:
Closeness(i) = 1 / Σ_{j=1..g} δ_ij

• Bavelas-Leavitt Index – the ratio of the sum of all depths in the network to the sum of depths from and to a node:

BLCentrality(i) = Σ_{i=1..g, j=1..g} δ_ij / Σ_{j=1..g} (δ_ij + δ_ji)

3.1.2 Two case studies

Figures 3.1a and 3.2a show the surface network graphs of parts of the Isle of Man and the Latschur Mountains (Austria) respectively. The graph analyses were done using the software AGNA developed by I. M. Benta (Benta 2004); please refer to the AGNA user guide (Benta 2004) for more information. The networks display different characteristic configurations: unlike the single-ridge structure of the Isle of Man surface network, the Latschur surface network shows a more complex topographic structure. The plots of the various measures nevertheless reveal common patterns in the structures of the two networks. For example, the nodes located in the central parts of both networks have lower mean depth values (Figures 3.1b, 3.2b). The frequency of the degree of the nodes decays exponentially (Figures 3.1c, 3.2c), suggesting that there are nodes which form clusters, e.g. the peak in the northern part of the Isle of Man network and the pit in the central part of the Latschur mountain surface network. The plots did not show power-law relationships in either of the networks, even when the data were plotted on log-log scales; thus, it seems that these surface networks are not Small World networks. However, the similarities indicate uniformity in the terrain formation processes. For the sake of hypothesis building, this simple observation can be stretched a bit further: in terrain, such similarities could mean a similar type and sequence of erosional processes and the same geological/geomorphological age, but a proof of the reasons behind the pattern is beyond the scope of the research presented here. An important part of graph analysis is the evaluation of changes in the graph under contraction.
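The depth-based measures defined in section 3.1.1 all derive from breadth-first search over the non-directed, non-weighted graph. The following is a minimal sketch in Python (an illustrative language choice; the analyses in this chapter were done with AGNA), assuming the network is given as an adjacency dictionary; all function names are hypothetical:

```python
from collections import deque

def bfs_depths(adj, source):
    """Depth (shortest number of edges) from source to every reachable node,
    treating the surface network as non-directed and non-weighted."""
    depth = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    return depth

def graph_measures(adj):
    """Degree, eccentricity, fareness, mean depth and closeness per node,
    plus the graph diameter (the maximum eccentricity)."""
    g = len(adj)
    measures = {}
    for node in adj:
        depths = bfs_depths(adj, node)
        fareness = sum(depths.values())       # Fareness(i) = sum_j delta_ij
        measures[node] = {
            "degree": len(adj[node]),
            "eccentricity": max(depths.values()),
            "fareness": fareness,
            "mean_depth": fareness / g,       # Fareness(i) / g
            "closeness": 1.0 / fareness if fareness else 0.0,
        }
    diameter = max(m["eccentricity"] for m in measures.values())
    return measures, diameter
```

On a three-node path a–b–c, for instance, the diameter is 2 and the central node b has the lowest mean depth, matching the observation above that central nodes are the best connected.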
As the graph structure is contracted, new links distort the structure of the network. This kind of experiment has been done extensively in the study of the Internet backbone structure, to test its robustness and improve data flows; based on such an analysis of the Internet, Albert and Barabási (2002) were able to suggest its unique scale-free nature. In this work, an experiment was done to observe the variation of the diameter measure in the two networks at each contraction of the graph. The contraction was based on the elevation-drop weight measure and the maximum-of-weights criterion. The diameter versus contractions plots (Figures 3.1d, 3.2d) reveal two main insights into the structure of surface networks:
• There is generally a linear decline in the diameter of the network with the deletion of nodes (which in this case is a contraction of the network); hence these surface networks are not scale-free networks.

• The step-like pattern of variation suggests that the removal of certain nodes introduces drastic changes in the paths of the network. The flat parts of the plot are contractions that do not affect the overall structure of the network.

The above local and global graph measures can now be used as weights. The following section presents an example of the use of the degree measure, where such simple graph measures can be used to perform a structurally-aware contraction.

3.2 Simplification of Surface Networks

In short, a surface network graph can be simplified/contracted by a series of two homomorphic contractions called pit-pass contraction and pass-peak contraction. These contractions reduce the number of vertices and edges but preserve the topological structure of the corresponding topographic surface. Pit-pass and pass-peak contractions remove an "internal" pit or "internal" peak respectively (the surrounding pits/peaks cannot be selected for contraction), together with its lowest (in the case of a pit) or highest (in the case of a peak) adjacent pass, along with all surface-specific lines incident with at least one of these critical points. A pass that was connected to the contracted pit (or peak), and thus becomes free, is then connected to the pit (or peak) originally linked to the contracted pass, thus restoring topological consistency (Figure 1.11). The mathematical proof of the contractions is provided in Pfaltz (1976) and Wolf (1989). The selection of a pit or peak is based on an importance criterion, which depends on the particular problem and the topography.
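The pass-peak contraction described above can be sketched as follows. This is a simplified illustration only, assuming ridges are stored as (pass, peak) pairs; it omits the channel side of the network and the check that the peak is internal, and all identifiers are hypothetical (the pit-pass contraction is the symmetric dual):

```python
def pass_peak_contraction(ridges, heights, peak):
    """Remove an internal `peak` together with its highest adjacent pass.
    `ridges` is a set of (pass, peak) pairs and `heights` maps vertices to
    elevations. Every other pass incident on the removed peak is re-linked
    to the peak at the far end of the removed pass, restoring consistency."""
    adjacent_passes = [p for (p, z) in ridges if z == peak]
    doomed_pass = max(adjacent_passes, key=lambda p: heights[p])
    # the peak at the other end of the doomed pass inherits the freed links
    heir = next(z for (p, z) in ridges if p == doomed_pass and z != peak)
    new_ridges = set()
    for (p, z) in ridges:
        if p == doomed_pass:
            continue                   # every edge of the doomed pass vanishes
        if z == peak:
            new_ridges.add((p, heir))  # re-link freed passes to the heir peak
        else:
            new_ridges.add((p, z))
    return new_ridges
```

For example, contracting a peak linked to two passes collapses three ridges into one while keeping the remaining pass connected to a peak, as the homomorphic contraction requires.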
The three main variations of importance criteria for the selection of pits and peaks, based on edge weights, proposed by Wolf (1989) are:

• The maximum of the elevation differences between a peak or pit and all its adjacent passes. This measure can be used to rank peaks and pits by the steepest ridge or channel linked to them.

• The minimum of the elevation differences between a peak or pit and all its adjacent passes. This measure can be used to rank peaks and pits by the shallowest ridge or channel linked to them.

• The sum of the elevation differences between a peak or pit and all its adjacent passes. This measure is used to select pits and peaks with a low number of crossings.

However, as discussed in section 1.2.3.2, these original ideas on generalisation and weight measures from Wolf (1988) can be extended in their scope for the simplification of surface networks. In this section, first the software Surface Topology Toolkit (STT), developed to address some of these issues, is presented. The following sections address the limitations highlighted in section 1.2.3.2 and present novel solutions. At the end, a comparison of the various criteria is presented to visualise the wide variety of possible outputs.
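All three of Wolf's criteria can be computed directly from the elevation differences between a vertex and its adjacent passes; a minimal sketch (function and variable names are illustrative):

```python
def wolf_weights(vertex, adjacent_passes, h):
    """The three importance criteria of Wolf (1989) for an internal peak
    or pit: the maximum, minimum and sum of the elevation differences
    between the vertex and its adjacent passes. `h` maps vertices to
    elevations."""
    drops = [abs(h[vertex] - h[p]) for p in adjacent_passes]
    return {
        "max_drop": max(drops),  # rank by the steepest linked ridge/channel
        "min_drop": min(drops),  # rank by the shallowest linked ridge/channel
        "sum_drop": sum(drops),  # favours vertices with few crossings
    }
```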
Figure 3.1 (a) Surface network of central Isle of Man, (b) distribution of mean depth values, (c) frequency plot of degree (fitted trend y = 41.2e^(-0.4519x), R² = 0.9775), and (d) variations in graph diameter with contractions (fitted trend y = -0.569x + 26.165, R² = 0.936).
Figure 3.2 (a) Surface network of a part of the Latschur mountains in Austria, (b) distribution of mean depth values, (c) frequency plot of degree (fitted trend y = 62.8e^(-0.485x), R² = 0.9557), and (d) variations in graph diameter with contractions (fitted trend y = -0.4144x + 26.65, R² = 0.9542).
3.2.1 Surface Topology Toolkit

Surface Topology Toolkit (STT) is an application written in the Tcl/Tk programming language for the surface network simplification experiments (Rana 1998). While STT primarily implements the simplification of surface networks, the outputs derived from STT can also be useful for geomorphological studies. A detailed description of the inputs, outputs and functions of STT is outside the focus of this chapter; however, it is relevant to highlight the main reasons for developing the surface network simplification in the Tcl/Tk language. Tcl/Tk is a popular language amongst some GIS programmers (Dykes 1997). The highlight of Tcl/Tk is the ease and speed with which the properties of graphical objects can be dynamically manipulated, which is particularly useful for cartographic and other visualisation applications. Some of the proposed solutions discussed later required a highly interactive interface to the structure of the surface network, and Tcl/Tk is particularly useful for developing such applications. The other main advantages of STT are as follows:

• STT informs the user of every contraction (except for continuous contractions) so that a selection can be made more intuitively.
• Users can generalise the topography by a combination of importance measures rather than a single one.
• Users can arbitrarily select an internal pit or peak for contraction.
• Users have the flexibility to undo a contraction to observe the changes in results for better generalisation.

Figure 3.3 shows the main interface and different control windows of STT.

3.2.2 New weight measures

As mentioned in section 1.2.3.2, many of the existing weight measures could provide misleading information about the structure of the surface. This work proposes the use of edge lengths, edge slopes and the degree (or valency) of vertices as alternative weights to avoid these limitations.
These are as follows:

(i) Edge length

The length can be measured either as the distance between the endpoints of an edge or as the sum of the lengths of the segments that make up the edge, i.e.

w(x_i) = √((u(x_i) − u(y_j))² + (v(x_i) − v(y_j))²) = λ(x_i, y_j)
w(z_k) = √((u(z_k) − u(y_j))² + (v(z_k) − v(y_j))²) = λ(z_k, y_j)

where (x_i, y_j) ∈ E, (z_k, y_j) ∈ E, and u, v are the x, y co-ordinates of the pit, pass, and peak. Edge length will be an effective weight for cartographic generalisation as it brings out the high-frequency elevation changes in surfaces for condensing. In addition, variants of this weight measure can also be derived, e.g. the average length of an edge segment and the mean length of an edge segment.
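Both variants of the length weight reduce to summing Euclidean segment lengths; a minimal sketch, assuming each edge is stored as an ordered run of vertices with known (u, v) planimetric co-ordinates (the representation is illustrative, not the STT data model):

```python
import math

def edge_length(coords, edge_vertices):
    """Length weight for a surface-network edge. `coords` maps a vertex to
    its (u, v) co-ordinates; `edge_vertices` is the ordered run of points
    along the edge. Passing every segment point gives the full edge length;
    passing just the two endpoints gives the straight-line variant."""
    total = 0.0
    for a, b in zip(edge_vertices, edge_vertices[1:]):
        (ua, va), (ub, vb) = coords[a], coords[b]
        total += math.hypot(ua - ub, va - vb)  # Euclidean segment length
    return total
```

The straight-line variant is always less than or equal to the segment-sum variant, so the choice matters most for sinuous ridges and channels.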
Figure 3.3 Graphical User Interface of Surface Topology Toolkit.

(ii) Edge slope

The slope of an edge is simply the drop in elevation along the edge divided by the length of the edge. Again, it could be defined only between the endpoints and/or as an average slope over the edge segments:

w(x_i) = |h(x_i) − h(y_j)| / λ(x_i, y_j)
w(z_k) = |h(z_k) − h(y_j)| / λ(z_k, y_j)

where (x_i, y_j) ∈ E, (z_k, y_j) ∈ E and h is the height of a pit/pass/peak. Edge slope will be a suitable weight for distinguishing between peaks or pits with equal edge lengths but contrasting differences in elevation. As described previously, several variants of this weight measure can also be derived, e.g. the average slope of an edge segment and the mean slope of an edge segment.

(iii) Valency or degree of vertices

The valency or degree of a peak/pit is a simpler alternative to the sum-of-elevation-drops weight measure for finding out the connectivity of a peak/pit in the surface network structure. It is better than the sum-of-elevation-drops measure because it is not affected by the rare, though possible, case of two peaks/pits with equal sums of elevation drops having different numbers of incident edges.

The new weight measures presented above will be more suitable than Wolf's weight measures in several situations. Long, gradually sloping ridges or channels, which are structurally significant, will give a low drop in elevation as a weight, thus making them vulnerable to contraction. For
example, for the surface network shown in Figure 3.4a, the next contraction based on the maximum-drop-in-elevation weight criterion will remove the ridge [y1, z2] (Figure 3.4b), although it is longer, and thus more important, than some of the other ridges on the surface. On the other hand, condensing based on the maximum-edge-length weight criterion selects the ridge [y1, z1] for removal (Figure 3.4c) and is therefore a more sensible measure. However, it is important to note that even after this better decision, the ridge [y1, z2] is still removed due to the topology condensing rules, which illustrates the earlier stated proposal that condensing based solely on weights ignores the structure of ridge/channel networks.

The sum of edge weights cannot differentiate between two equally weighted points with different numbers of edges; thus, it is not a good indicator of the number of ridge and channel crossings at a point. Figure 3.5a shows a situation in which the sum-of-edge-weights criterion suggests removing the ridge [y4, z5] (Figure 3.5b), although the peak z5 has the highest number of ridge crossings. On the other hand, the valency weight criterion selects the ridge [y5, z6] (Figure 3.5c), which gives a more natural condensation.

Besides the above, as mentioned in section 3.1, the structure of surface networks, especially where they represent non-terrain surfaces, e.g. meteorological surfaces, can be suitably represented with non-geomorphological weight measures such as depth.

(iv) Cascading contraction using global weight measures

In the existing methods of contracting the surface network graph, the effect of a contraction is limited to the affected local subset of the graph. In other types of graph-theoretic networks, e.g. transport and drainage networks, any local changes in the graph are cascaded/propagated to the rest of the network, because such a post-contraction effect is more realistic for the modelled systems.
The actual modelling of the effect requires a detailed analysis of both the external and internal factors involved in the simulation. For example, modelling the effects of the closure of an underground train station, i.e. the removal of a node/vertex in the underground-train network graph, would involve factors such as the capacity of stations/roads/trains, the number of passengers, and connectivity with other forms of transport, e.g. buses. It is proposed that such a cascading effect is also applicable to surface network contraction, because it would reflect a more accurate modelling of the surface phenomena. For example, the removal of a channel with an (x0-y0)-contraction uphill should trigger an adjustment in the drainage network downhill, as the contraction would result in a decreased flow of water. In nature, such a reduction of water flow results in the drying up of channels; thus, a contraction of a channel somewhere in the mountains can lead to a contraction of a channel in the lowlands. Most terrain evolution modelling is based on this cascading premise (Burrough and McDonnell 1998).

This type of cascading contraction requires a weight that relates to all parts of the network, i.e. a global weight measure. A typical global weight measure that could be used in drainage network modelling is flow, which is expressed in various ways depending on the context. For example, in the case of drainage network extraction from a raster DEM, flow is generally represented as the number of cells (or an equivalent cumulative measure) that drain into a particular cell; in the case of stream ordering, it is shown as the position of a channel in the hierarchy of the drainage network.
Figure 3.4 Comparison of the effectiveness of the selection of points in the surface network between (b) the maximum of elevation difference criterion (removal of ridge [y1,z2]) and (c) the maximum of edge length criterion (removal of ridge [y1,z1]). Note that criterion (b) selects a long ridge due to its low drop in elevation (350).

Figure 3.5 Comparison of the effectiveness of the selection of points in a surface network between (b) the sum of elevation difference criterion (removal of ridge [y5,z4]) and (c) the valency criterion (removal of ridge [y5,z6]), showing how criterion (b) can mislead about the ridge/channel crossings. Numbers at the peaks in (a) are sums of elevation differences, with valences in parentheses.
Figure 3.6 Cascading contraction. In the original form in nature, shown in (a), the two channels at the bottom with weights equal to 8 exist because of the flow from the 8 channels upstream. When two upstream channels are contracted, the number of contributing channels is reduced to 6, as shown in (b). This could mean that the two minor channels dry up and are then removed from the network, as shown in (c).

Figure 3.6a shows a hypothetical surface network graph, where a global weight measure, flow, has been assigned to each channel, representing the cumulative number of upslope channels that drain into that channel. Ridges are not displayed in the figure for clarity. As can be seen in the figure, channels in the higher reaches of the drainage have the lowest flow weights, while the channels at the outlets of the drainage network have higher flow weights. The general procedure for performing a cascading contraction on the channels is as follows:

Step 1: Assign local weights.
Step 2: Perform an (x0-y0)-contraction or (y0-z0)-contraction.
Step 3: Adjust the global weights of the channels.
Step 4: Contract the channel(s) whose global weight fall(s) below a threshold.
Step 5: If the desired simplicity is reached, then STOP, else go to Step 1.

Figure 3.6c shows the contracted surface network following one step of cascading contraction. The computation of suitable weights involves complex issues, and a detailed treatment of these topics was not part of the research presented here; for example, actual water volume could be used instead of topological links as flow weights. Another interesting aspect of cascading contraction is that it is not strictly sequential, i.e. the contraction does not follow the structural-importance-based order.
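Steps 3 and 4 of the procedure above can be sketched as follows. This is a simplified illustration, assuming each channel records the channels immediately downstream of it and its cumulative upslope-channel count as the flow weight; the data model and all names are hypothetical:

```python
def cascade_contract(channels, flow, removed, threshold):
    """One cascading step: subtract the contribution of each contracted
    channel in `removed` from every channel downstream of it (Step 3),
    and report the channels whose flow then falls below `threshold` and
    so "dry up", becoming candidates for contraction (Step 4).
    `channels` maps a channel to the channels immediately downstream;
    `flow` maps a channel to its cumulative upslope-channel count."""
    dried = set()
    for ch in removed:
        stack, seen = list(channels.get(ch, [])), set()
        while stack:
            d = stack.pop()
            if d in seen:
                continue
            seen.add(d)
            flow[d] -= 1           # one fewer upslope channel drains here
            if flow[d] < threshold:
                dried.add(d)       # dries up, as in Figure 3.6c
            stack.extend(channels.get(d, []))
    return dried
```

Only channels that actually lost flow are tested against the threshold, so headwater channels untouched by the contraction are not spuriously removed.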
This type of non-sequential contraction of surface networks is also useful for other applications, as will be discussed in the following section.

3.2.3 Non-sequential contractions

As the contraction of a surface network graph is based on a defined sequence of vertices, it does not provide complete flexibility to generate a desired topology and topography.
This follows from the fact that there is no a priori information about the evolution of the surface network. Wolf (1989) experienced a typical limitation: he observed that the quality of contour maps produced from simplified surface networks could be improved substantially if the step eliminating a certain peak and its adjacent pass were shifted to a later point in the sequence. In addition, as discussed above, vertex-importance-based selection criteria can be misleading regarding the importance of the ridge or channel structure at a peak or pit. This means that edges are selected for condensation solely on the basis of their weights, with no consideration given to the size or significance of the host structure (such as the length of edges). In this work, the following two solutions to provide flexibility in manipulating the contraction are explored:

(i) User Defined Contraction (UDC)

UDC (Rana 1998) is similar to (x0-y0)-contraction and (y0-z0)-contraction in that it involves the selection of an internal peak/pit for contraction. However, it differs from the latter in the way the internal peaks/pits are selected: in UDC, the user is able to select an arbitrary internal peak/pit for contraction. To ensure a topologically consistent network after contraction, the steps after the selection of the peak/pit remain the same as in the sequential homomorphic contraction. For example, after a peak has been selected for contraction, the edge with the lowest elevation difference is selected for contraction.

UDC has another very important use in terrain modelling. The study of landform evolution is a well-established topic of research for understanding geomorphic and tectonic phenomena in nature. Researchers use some form of landform evolution model to simulate changes and make predictions, but these often require detailed mathematical analysis.
As an alternative, this work proposes that UDC can introduce similar changes easily and quickly. An example of the generation of a NW-SE aligned artificial valley in the Latschur surface network is shown in Figure 3.7. This valley was achieved simply by merging the minor channels in the area and removing the intersecting ridges along these channels. However, evidently, the changes are purely topological, and one of the main advantages of other landform evolution models is their ability to regenerate the topography. Figure 3.8 shows the application of UDC on the surface network of the Isle of Man (see Figure 3.1a) to remove the minor ridges and channels that flank the periphery of the main central ridge of the island. With a series of UDC steps, these peripheral channels and ridges can be condensed to highlight the central ridge structure.

(ii) Multi-criteria contraction

In a conventional contraction sequence, only a single weight measure is used to rank the vertices, and it is kept until the surface network reaches the elementary state. A new approach of a multi-criteria contraction sequence is proposed, whereby a combination of weight measures, or different measures, is used at different stages of the contraction sequence. For example, the contraction may initially be started with the aim of reducing edges above a certain length threshold and, when the desired degree of simplicity has been achieved, a different weight measure may be used to achieve a different objective. A multi-criteria contraction will also be useful in situations where two internal vertices of the graph have equal importance, in which case another criterion provides a second ordering of the vertices.
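The tie-breaking use of multi-criteria contraction amounts to ordering vertices by a tuple of weights, so that each later measure only separates vertices left equal by the earlier ones; a minimal sketch (names are illustrative):

```python
def rank_vertices(vertices, criteria):
    """Multi-criteria ranking for a contraction sequence. `criteria` is a
    list of functions, each mapping a vertex to a weight; vertices are
    ordered by the first measure, with ties broken by each subsequent
    measure in turn (here, lower weight = contracted earlier)."""
    return sorted(vertices, key=lambda v: tuple(c(v) for c in criteria))
```

For example, with an elevation-drop weight as the primary criterion and edge length as the secondary one, two equally weighted vertices are separated by their edge lengths rather than arbitrarily.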
Figure 3.7 Generating artificial changes in terrain, in this case a large valley, using User Defined Contraction on a part of the Latschur surface network.

Figure 3.8 Generating artificial changes in terrain, in this case erosion of minor features to yield a large ridge, using User Defined Contraction on a part of the Isle of Man surface network.

3.3 Refinement of surface networks

The term refinement is taken from the computer graphics literature, where it is generally used for an incremental procedure involving the addition of more detail to a surface data structure. The iterative addition of detail is continued until a desired error threshold or level of detail has been achieved (Heckbert and Garland 1997). A typical application of a refinement operation is in the smoother rendering of scenes in computer animations. In conventional refinement methods, the first step is to decide where, and how much, detail must be added. Various approaches, e.g. greedy insertion (Fowler and Little 1979), feature-constrained refinement (Heller 1990), and their numerous variants, have been proposed to perform refinement operations. In computer graphics research, the key requirement of any such refinement technique has been speed, followed by an appealing
display. This has meant that arbitrary topological links are allowed during the refinement process. In the case of surface networks, however, the scope of refinement is constrained by the strict topological rules. Despite the constraints, the refinement of surface networks promises the following important uses:

• It simulates landscape evolution processes, which could be used to generate landscapes for erosion modelling and simple computer animations.

• It develops varying levels of detail in different parts of the terrain. This could be relevant in the case of incomplete feature extraction and the development of multi-scale terrain.

The novel refinement technique presented here is based on an extension of the solutions presented to represent junctions/bifurcations. A surface network can be refined with the following pair of rules:

Rule 1: A ridge edge (y0,z0) can be split only if the pass y0 is connected to two distinct pits x0 and x. With this premise, (y0-z0)-splitting can be defined as follows:

Let W = surface network, y0 = pass with pits R(y0) = {x0, x}.

Then, after a (y0-z0)-splitting, W' is the graph with the following properties:

Vertex set V(W') = V' = V + {y', z'}
Edge set E(W') = E' = E + {(y',z0), (y',z'), (y0,z'), (x,y'), (x0,y')}

h(z') is infinitesimally higher than h(y'); h(y') can be derived by an interpolation of h(y0) and h(z0).

Figure 3.9 shows the principle of (y0-z0)-splitting on elementary surface networks. Figure 3.10 shows a sequence of refinements of the longest channel of a hypothetical surface network. A rule for splitting a channel edge can be defined similarly.

Rule 2: A channel edge (x0,y0) can be split only if the adjacent pass y0 is connected to two distinct peaks z0 and z.
With this premise, (x0-y0)-splitting can be defined as follows:

Let W = surface network, y0 = pass with peaks R(y0) = {z0, z}.

Then, after an (x0-y0)-splitting, W' is the graph with the following properties:

Vertex set V(W') = V' = V + {x', y'}
Edge set E(W') = E' = E + {(x0,y'), (x',y'), (x',y0), (y',z), (y',z0)}

h(x') is infinitesimally lower than h(y'); h(y') can be derived by an interpolation of h(x0) and h(y0).

As can be seen from the figures, the rules proposed above guarantee two important characteristics of surface networks:
• The refined surface network is topologically consistent according to the rules proposed by Pfaltz (1976) and Wolf (1984) (see section 1.2.1.1).

• The changes introduced in the surface network are reversible, i.e. the inserted edges can be removed to restore the surface network to its original state.

Figure 3.9 Refining a long ridge with a (y0-z0)-splitting. Note how the choice of configuration ensures topological consistency after the addition of the new edges.

Figure 3.10 A sequence of 2 refinements on the longest ridges of a hypothetical surface network with repeated (y0-z0)-splitting. Blue lines are channels and orange lines are ridges. Red dots are peaks, green dots are passes and black dots are pits.
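Rule 1 can be sketched as a direct transcription of the vertex and edge additions. The primed vertex names and the small numerical offset standing in for "infinitesimally higher" are illustrative choices, and edges are represented simply as vertex pairs:

```python
def yz_splitting(edges, heights, y0, z0, x0, x):
    """Sketch of Rule 1's (y0-z0)-splitting: insert a new pass y' and
    peak z' on the ridge (y0, z0), where y0 is a pass linked to the two
    distinct pits x0 and x. h(y') is interpolated between h(y0) and h(z0),
    and h(z') is made just higher than h(y')."""
    y_new, z_new = y0 + "'", z0 + "'"
    heights[y_new] = (heights[y0] + heights[z0]) / 2.0  # interpolated pass
    heights[z_new] = heights[y_new] + 1e-6              # "infinitesimally" higher
    # E' = E + {(y',z0), (y',z'), (y0,z'), (x,y'), (x0,y')}
    new_edges = edges | {(y_new, z0), (y_new, z_new), (y0, z_new),
                         (x, y_new), (x0, y_new)}
    return new_edges, y_new, z_new
```

Because the change only adds the five listed edges and two vertices, it is reversible, as the second bullet above requires: removing them restores the original network.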
As mentioned earlier, the first step in a refinement process is to select the location of refinement in the data structure. In computer graphics, the locations are often decided in real time, e.g. in the case of a TIN used as backdrop scenery in flight simulators, detail is added in the area visible to the viewer. For surface networks, most refinements will tend to be based on an informed choice; for example, long edges could be suitable locations for refinement, as shown in Figure 3.10. The refinement of a surface network can be repeated until a desired level of dissection of the terrain has been achieved, or until repeated refinements lead to numerous superficial low-weight pairs of peaks-passes and pits-passes.

3.4 Discussion

This chapter has presented several ways of characterising surface networks. It highlighted various issues regarding the choice of weight measures and contraction criteria and the effect of the selection on the resultant structure, and offered solutions such as cascading contractions, global weight measures, multi-criteria contraction and the refinement of surface networks.

One crucial aspect of surface networks, which is generally included in most research on DEMs but has been missing so far from this chapter, is the uncertainty in the surface network model. A surface network conceptually contains the following two main types of uncertainty:

• Shape uncertainty: the morphology of the terrain represented by a surface network could be wrong due to errors in feature extraction and the sparse nature of the surface network.

• Height uncertainty: the heights associated with surface network elements could be incorrect, and thus a terrain interpolated from a surface network could subsequently inherit errors.

Numerous previous studies describe the estimation and modelling of uncertainty in digital elevation models when the original elevation data are available.
For an exhaustive and up-to-date review, please refer to von Minusio (2002). However, the issue of modelling uncertainty becomes complicated if the data structure does not store information about the original surface, e.g. when the data structure does not provide surface continuity. This issue applies to surface networks and is the classical interpolation-type problem. For example, while one can derive the errors in the elevation values at the topographic features of a surface network with some certainty, it is conceptually impossible to describe the uncertainty present at the slope faces of the surface network. In other words, there is no way of comparing an entire surface network with its original surface. An obvious solution to the problem is the provision of a Root Mean Square Error (RMSE) type measure, based on elevations at key locations on the original surface and on appropriately interpolated elevations at the corresponding locations in the surface network. However, uncertainty in a surface network is likely to vary spatially with the density of the surface network and the type of interpolation method, so such a global measure could give misleading results. A probability-type measure is therefore better suited to describing the range of errors in elevation values. An ideal measure is Shannon's information entropy (Salomon 2004), which describes the probability of a value in terms of the
- 85. Structural Analysis
occurrences of all values. For example, in a cluster of elevation points (Figure 3.11a), the entropy H(e) of elevation e at an unknown location is

H(e) = -Σ_{i=1}^{n} p_i ln p_i,  where p_i = N(e_i) / N

and p_i is the probability of the point having an elevation e_i, N(e_i) is the number of occurrences of e_i, and N is the total number of points.
Figure 3.11 (a) Uncertainty regarding an elevation value in the case of scattered points and (b) in the case of a surface network.
Similarly, in the case of a surface network, a range of morphometric values exists (Figure 3.11b) in an area not occupied by surface network features. The entropy measure could thus be used to describe the uncertainty/quality of a surface network dataset. In practice, however, the uncertainty is often not modelled, and a linear or weighted combination of all possible values is generally used instead. Clearly, a better alternative would be to store certain key morphological information about the features, e.g. curvature, along with elevation. In fact, the question of which types of ancillary information should be stored to minimise uncertainty deserves independent research.
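The entropy measure above is straightforward to compute. As a minimal sketch (the function name is illustrative; natural logarithm is used, as in the formula):

```python
from collections import Counter
from math import log

def elevation_entropy(elevations):
    """Shannon entropy H(e) of a cluster of elevation values.

    p_i = N(e_i) / N is the relative frequency of each distinct
    elevation e_i; H = -sum(p_i * ln p_i).  A cluster where every
    point has the same elevation has zero entropy (no uncertainty)."""
    counts = Counter(elevations)
    n = len(elevations)
    return -sum((c / n) * log(c / n) for c in counts.values())
```

A uniform cluster returns 0, while a cluster of equally likely distinct elevations returns ln of the number of distinct values, the maximum-uncertainty case.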
- 86. I have a hunch that the unknown sequences of DNA will decode into copyright notices and patent protections. From “The Art of Computer Programming” by Donald E. Knuth
- 87. Chapter 4 Applications
4.1 Proposal
The primarily topological nature of surface networks and their strict rules for topological consistency have previously restricted their use to mainly non-analytical applications (e.g. visualisation), with promising but limited practical implications. The information contained in the conventional surface network data structure was not sufficient to provide details about several morphological characteristics of surfaces. For example, hydrologically important features such as ridge junctions (e.g. monkey saddles) and channel bifurcations (e.g. in braided rivers) are not represented in the conventional surface network model. However, the previous chapters have shown how these and other morphological features can be included by manipulating the topological constraints and including geometrical information about the topographic features. This chapter presents three practical applications of surface networks, ranging from the visualisation of animated geographical surfaces to the use of surface networks for the generalisation of digital terrain datasets. The applications benefit from the following important properties of surface networks:
• Morphologically important locations of surface network elements – The topographic features by their very nature define most of the landscape of the terrain and are thus its morphologically important locations (Fowler and Little 1979). Since the topographic features are also substantially fewer in number, under certain assumptions they can be used as a model of the terrain. With this premise, a surface network can be used in terrain analyses where it is important to reduce the computational load by careful sampling of terrain locations and heights. For example, in visibility analysis, performing line-of-sight tests against only the topographic features will yield a quick and general idea of the visual coverage from a viewpoint.
Another application of this property is terrain generalisation, where the resampling of the terrain to a coarser/finer spatial resolution can be guided/constrained by the surface network.
• Skeletal representation of terrain – A surface network defines the skeleton of the terrain, which is useful for analyses such as terrain generalisation and visualisation. In terrain generalisation, a surface network can guide/constrain the resampling of terrain to a coarser/finer resolution such that the essential morphological information is preserved. In visualisation, a surface network can highlight characteristics of the terrain, such as the variation of topographic feature density, and can be used to monitor the evolution of topographic features over time.
The three practical applications of surface networks presented in detail are as follows:
• Fast computation of visibility dominance,
- 88. Applications
• Visualisation of the evolution of the terrain, and
• Multi-scale and morphologically consistent terrain generalisation.
4.2 Fast computation of visibility dominance in mountainous terrains
Visibility analysis of terrains is now an integral part of several applications (Rana 2003b). Some typical applications include the planning of defence installations (e.g. watch towers, troop movements, flight paths; Franklin et al. 1994), communication facilities allocation (e.g. TV/radio transmitters; Lee 1991, De Floriani et al. 1994, Kim et al. 2002), landscape analysis (e.g. visibility graphs; O'Sullivan and Turner 2001) and environmental modelling (e.g. terrain irradiation; Wang et al. 2000a). Most existing research has focused on two main aspects of terrain visibility analysis, namely visibility index¹ computation time and the accuracy of the viewshed (the area covered by the visible terrain). While formal methods for modelling viewshed uncertainty were established early in the last decade (Fisher 1991, 1992, 1993), the search for algorithms to optimise visibility computation remains an attractive research topic (Izraelevitz 2003, Rana 2003a). The computation time of a visibility index is O(ot), where o is the number of observers (viewpoints) and t is the number of targets on the terrain. In a so-called Golden Case, all n points on the terrain are used as both observers and targets, i.e. the visibility indices of all points on the terrain are computed by drawing a line of sight (LOS) to all other points on the terrain. The computation time in a Golden Case is thus O(n²) because o = t = n, which is exhaustive and time-consuming. Optimised visibility index computation methods, on the other hand, are based on strategies to reduce the number of observer-target pair comparisons, e.g. by choosing a polyhedral terrain model (e.g.
Triangulated Irregular Network or TIN; De Floriani and Magillo 2001) instead of a grid, and by using algorithmic heuristics (Franklin et al. 1994, Franklin 2000, Wang et al. 2000b). Accordingly, there are two main types of optimisation strategies, namely the Reduced Observers Strategy and the Reduced Targets Strategy. As the names suggest, the Reduced Observers Strategy and Reduced Targets Strategy respectively reduce the observers (e.g. by random sampling of observers) and targets (e.g. by limiting the maximum visibility distance, as in horizon culling) parts of the computational load. The visibility indices derived in a Golden Case are called the Absolute Visibility Indices (AVI). The visibility indices derived under either of the optimisation strategies are called the Estimated Visibility Indices (EVI). In many applications, however, finding the locations of visually dominant observers (i.e. visibility dominance) has more practical use than the exact visibility indices of the observers (Franklin 2000). In addition, as seen above, visibility indices can be biased by the number of targets. There could potentially be many ways of calculating visibility dominance. In this work, visibility dominance is calculated by normalising the visibility index as follows:

d_i = (v_i − v_min) / (v_max − v_min)

where v_i and d_i are respectively the visibility index and the visibility dominance at an observer i, and v_min and v_max are respectively the minimum and maximum visibility indices on the terrain.
¹ Visibility index is generally expressed in terms of the number of visible points or the physical area of ground covered by the visible points.
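The normalisation above is a simple min-max rescaling of the visibility indices into [0, 1]. A minimal sketch (the function name is illustrative; the flat-terrain guard is an added assumption for the degenerate case where all indices are equal, which the formula leaves undefined):

```python
def visibility_dominance(visibility_indices):
    """Normalise visibility indices v_i into dominance values d_i in
    [0, 1] via d_i = (v_i - v_min) / (v_max - v_min)."""
    v_min, v_max = min(visibility_indices), max(visibility_indices)
    span = v_max - v_min
    if span == 0:
        # Degenerate case: every observer sees the same amount,
        # so no observer dominates any other.
        return [0.0] * len(visibility_indices)
    return [(v - v_min) / span for v in visibility_indices]
```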
- 89. Applications
This section of the chapter demonstrates methodologies and examples that employ surface networks in both the Reduced Observers Strategy and the Reduced Targets Strategy for the fast approximation of visibility dominance. The proposal is based on the findings of Lee (1992), who reported that the fundamental topographic features, namely peaks, pits, passes, ridges and channels, dominate the visibility of other ground locations and could therefore be good viewpoint sites. As mentioned earlier, due to the selective sampling of observers and targets, optimised algorithms will either under-estimate or over-estimate the visibility dominance of non-topographic-feature points on the terrain. This uncertainty due to the level of abstraction closely resembles the uncertainty referred to as Object Generalisation (Weibel and Dutton 1999). No proposals exist for assessing such uncertainty in the visibility analysis literature, other than an earlier version of this chapter by Rana (2003a). That earlier research evaluated whether the overall visibility dominance pattern was realistic, albeit abstracted. The following section proposes two simple methods, based on an iterative comparison between the AVI and EVI, for assessing this uncertainty.
4.2.1 Proposal
A target is considered visible if a line of sight can be drawn to it from an observer without obstruction by an intermediate point (an exception is Wang et al. (2000b), who used reference planes to establish the visible areas). The most common approach in previous Reduced Targets Strategy based methods (e.g. Franklin et al. 1994) has been to draw the LOS from an observer to a small arbitrary number of randomly located targets on the terrain. Earlier work (Rana 2003a), based on small study areas, demonstrated that the computation time can be reduced substantially without any significant loss of visibility information if the LOSs are drawn only to the topographic features.
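The LOS test at the heart of these strategies can be sketched for a grid DEM. This is an illustrative toy, not the ArcView implementation used later in the chapter: the observer's eye sits at a configurable height above local ground and the target at ground level, matching the experimental settings described below; the sampling along the ray is a simple linear walk.

```python
def line_of_sight(dem, observer, target, eye_height=1.0):
    """Return True if `target` is visible from `observer` on a grid DEM
    (a 2-D list of elevations); both are (row, col) tuples.  A target is
    visible if no intermediate cell rises above the straight sight line
    from the observer's eye to the target."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0][c0] + eye_height        # observer eye elevation
    z1 = dem[r1][c1]                     # target at ground level
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True
    for s in range(1, steps):            # intermediate samples only
        t = s / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        sight_z = z0 + t * (z1 - z0)     # sight-line elevation here
        if dem[r][c] > sight_z:          # ground blocks the ray
            return False
    return True
```

A Golden Case computation would call this for every observer-target pair (O(n²) calls); the Reduced Targets Strategy instead restricts the targets to the extracted topographic features.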
The underlying assumption of this proposal is that the terrain is not without topographic features. The methodology for the computation of visibility indices using topographic features as targets consists of three steps: (i) extract the topographic features; (ii) compute the visibility dominance of each point using the topographic features as targets; and (iii) assess the uncertainty in the visibility dominance. The Reduced Observers Strategy for reducing computation time involves evaluating whether the visibility dominance of non-topographic-feature points can be derived by interpolating the visibility dominance of the topographic-feature points. It is assumed that the visibility dominance is spatially autocorrelated (i.e. points near visually dominant points will tend to have higher visibility dominance). The Reduced Observers Strategy consists of four steps: (i) extract the topographic features; (ii) compute the visibility dominance of only the topographic features by drawing the LOS to all the points in the terrain; (iii) interpolate/extrapolate the visibility dominance of the other points; and (iv) assess the uncertainty in the visibility index.
4.2.2 Methodology
4.2.2.1 Visibility Computation
The study area is a 100 m resolution raster DEM (73x76 cells) of the Cairngorms in Scotland (Figure 4.1a). Note that this proposal is generic and could also be applied to an irregular terrain model such as a TIN. Visibility analysis was carried out in ESRI's ArcView GIS, with all parameters at the defaults of the Visibility Request in ArcView. The observer eye level is 1 m above the local ground level and the targets are at local ground level. The observer is capable of seeing from ground zero to infinity (i.e. no horizon culling), across the full range of azimuths and from zenith to nadir. The experiments were done
- 90. Applications
on a 1 GHz Intel Pentium based personal computer with 512 MB RAM. A log of the CPU time taken by ArcView for each visibility computation was kept.
Figure 4.1 (a) Hill-shaded terrain of the SE Cairngorm Mountains, Scotland (minimum elevation = 395 m, maximum elevation = 1054 m) and (b) 910 topographic feature targets with an overlay of contours (20 m interval).
4.2.2.2 Interpolation of visibility dominance
As part of the Reduced Observers Strategy, Natural Neighbours Interpolation (NNI) (Sibson 1981) was used to derive the visibility dominance at non-topographic-feature points. NNI is a simple, robust and objective (no requirements for search radius or neighbourhood type) method for interpolation in two dimensions. NNI produces a surface with continuous slope everywhere in the convex hull of the point set except at the data points themselves. ArcView was used to derive an NNI surface at the same spatial resolution as the raster DEM, i.e. 100 m.
4.2.2.3 Uncertainty Assessment
Geospatial uncertainty assessment involves the derivation of deviations between measured and estimated values, with the aim of developing models that can predict the behaviour of the causes of uncertainty (systematic or random) and of the process under observation. For example, Fisher (1991, 1992, 1993) suggested an approach based on Monte Carlo analysis for assessing the effect of noise in a DEM and the robustness of algorithms for computing visibility indices. In the proposed approach, the uncertainty is the deviation between the absolute and estimated visibility dominance values arising from the selective sampling of targets and observers. The only previous example that dealt with the estimation of uncertainty in a Reduced Targets Strategy is that of Franklin et al. (1994).
They compared the visibility indices of an arbitrary number of spatially distributed locations on the terrain, computed from their exhaustive R2-visibility algorithm (similar to the Golden Case here), with those from their optimised methods. Though the results are encouraging, their sampling methods (i.e. the selection of the test points) cannot be regarded as formal and objective, for two important reasons. First, since there is no prior knowledge about the statistical distribution of the visibility pattern, it is not possible to estimate the number of random points required to fully capture the sensitivity of the visibility dominance distribution of the terrain. The choice of the number of random points is nevertheless critical, as it dictates the computation time. Secondly, since the viewshed at a location is generally anisotropic, i.e. the visual spread varies with direction, the random locations
- 91. Applications
could bias the uncertainty estimation. One of the conclusions of this chapter is that the visibility pattern is highly dependent on the spatial distribution and number of the random points. The aim of uncertainty assessment in this chapter was only to quantify the deviations; it did not involve any form of predictive modelling based on the deviations. We used the following two methods for uncertainty assessment, based on a slight modification of the Franklin et al. (1994) method:
Method 1: Spatial correlation between absolute and estimated visibility dominance
This method compares the similarity between the overall visibility patterns shown by the absolute and estimated visibility dominance values. The comparison can be based on the deviations either at the topographic feature locations or at random locations, as follows:
Type 1: Absolute vs. estimated visibility dominance at topographic feature locations –
• Calculate the absolute visibility dominance of the topographic feature locations by drawing the LOS to all the terrain points.
• Calculate the correlation coefficient between the two sets of absolute and estimated visibility dominance values. The correlation coefficient should indicate the similarity between the two visibility patterns.
This method is similar to that of Franklin et al. (1994), except that the definition of our sample locations is objective and more natural. Statistically, however, it remains only an approximate test, especially for exceptional terrains where the topographic features are not distributed uniformly across the terrain.
Type 2: Absolute vs. estimated visibility dominance at random locations – Unlike the Type 1 method, this method is more exhaustive but also time-consuming.
It is an abridged form of the Monte Carlo method of uncertainty modelling and involves an iterative comparison between the absolute and estimated visibility dominance at a set of random locations, with the important exception that no subsequent model parameter estimation is done. The steps are as follows:
• Generate random sample locations. Since there is no prior knowledge about the visibility dominance distribution, it is non-trivial to determine the optimal number of random sample locations sufficient to capture the visibility pattern. It is proposed here, without formal proof, that randomly placed locations equal in number to the number of unique EVI values (i.e. the frequency of each EVI) would be sufficient, if we assume that no part of the study area is hidden from the topographic features. A histogram of the EVI (computed using topographic features) thus represents the unique viewsheds. In other words, it is assumed that each viewshed will be assigned at least one sample location.
• Compute the absolute visibility dominance values at the random locations by drawing the LOS to all the points on the terrain.
• Calculate the correlation coefficient between the absolute and estimated visibility dominance values.
- 92. Applications
• Repeat steps (ii)-(iii) several times. Again, due to the lack of prior information about the distribution of the visibility dominance, it is difficult to decide statistically on a specific number of iterations. In a practical exercise, it will ultimately depend on the amount of time available to the researcher for the experiment.
• Choose the lowest and highest correlation coefficients as indicators of the worst- and best-case approximation.
Method 2: Error in the visibility dominance
In the previous methods, the correlation coefficients only indicate the reliability of the estimated visibility dominance; they do not reveal the amount of approximation in it. A simple method for measuring the uncertainty in the estimated visibility dominance is:

Average Error (%) = ± (1/n) Σ_{i=1}^{n} (|d'_i − d_i| / d_i) × 100

where d'_i = estimated visibility dominance, d_i = absolute visibility dominance and n = number of observers.
4.2.3 Results
910 topographic features were extracted as the targets and observers (Figure 4.1b). The automated extraction of the topographic features took less than five seconds. Since the study area was small, the Golden Case visibility pattern of the study area was also derived (Figure 4.2a). It took 537 seconds to compute the Golden Case.
4.2.3.1 Reduced Targets Strategy
It took 91 seconds to compute an estimated visibility dominance map of the study area. Figure 4.2b shows the pattern of the estimated visibility dominance, and it is clear from the figures that the overall pattern of the visibility indices is similar to the Golden Case. In fact, as indicated by the correlation coefficient (0.99) and R² (0.98) values, statistically there is very little difference between the measured and estimated visibility dominance (Figure 4.3a). The ridges and peaks have high visibility indices compared to the pits, passes and channels.
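Method 2's average error measure is a mean absolute relative error. A minimal sketch (the function name is illustrative; observers with zero absolute dominance are skipped, an added assumption since the formula is undefined there):

```python
def average_error_percent(estimated, absolute):
    """Average Error (%) between estimated dominance d'_i and absolute
    dominance d_i: (1/n) * sum(|d'_i - d_i| / d_i) * 100, reported as a
    +/- band around the estimate."""
    pairs = [(dp, d) for dp, d in zip(estimated, absolute) if d != 0]
    return 100.0 * sum(abs(dp - d) / d for dp, d in pairs) / len(pairs)
```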
The average error in the estimated visibility dominance values is ±17% and the residuals are uniform (Figure 4.3b), which together show that the computation time was successfully optimised without losing a significant amount of visibility information. Based on Method 1 for uncertainty estimation, Figure 4.4a shows the relation between the measured and estimated visibility dominance at the locations of the topographic features. The strong correlation coefficient of 0.98 suggests that the optimisation successfully represents the overall visibility pattern. As part of Method 2 for uncertainty assessment, to perform a more exhaustive calculation, 19 sets of 418 randomly located points were collected on the terrain. This was followed by a calculation of the correlation coefficient and the error between the measured and estimated visibility dominance for each of these sets of random points. Figure 4.4b shows the wide variation in the quality of the estimated visibility pattern at various points on the terrain, thus supporting the iterative validation of the quality of the estimate. There appears to be no correlation between the error and the correlation coefficient, which suggests that these measures capture different aspects of the visibility dominance pattern.
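The correlation coefficients quoted throughout these comparisons are ordinary Pearson coefficients between the absolute and estimated dominance series. A self-contained sketch (illustrative only; assumes neither series is constant):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length
    sequences, e.g. absolute (AVI-derived) vs. estimated (EVI-derived)
    visibility dominance values at the sample locations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```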
- 93. Applications
Figure 4.2 Comparison between (a) Golden Case based visibility dominance and (b) topographic features based visibility dominance. Darker coloured areas have more visual dominance than lighter coloured areas.
Figure 4.3 Uncertainty assessment based on the entire DEM. (a) Absolute vs. estimated visibility dominance of all locations (y = 0.983x + 0.0026, R² = 0.98, ρ = 0.99, error = ±17%, n = 5548) and (b) residuals based on the linear regression between absolute and estimated visibility dominance values of all locations.
- 94. Applications
Figure 4.4 Uncertainty assessment based on selective sampling. (a) Absolute vs. estimated visibility dominance of the topographic features (y = 0.98x + 0.0068, R² = 0.98, r = 0.99, error = ±21%, n = 910) and (b) correlation coefficient vs. error at 19 sets of 418 random locations, with the average value shown by the dotted line.
Figure 4.5 Comparison between the AVI and EVI based on the Reduced Observers Strategy (y = 0.74x + 0.0629, R² = 0.72, ρ = 0.85, error = ±55%, n = 5260).
4.2.3.2 Reduced Observers Strategy
ArcView took only a few seconds to interpolate the visibility dominance values across the study area. Since Natural Neighbour interpolation replicates the data values, Method 1 for uncertainty assessment was not appropriate, because all deviations would have been zero or negligible. Instead, a single comparison for the entire terrain, involving all the measured and estimated visibility dominance values, was made. Figure 4.5 shows a considerable increase in the error (±55%), but the correlation coefficient (0.85) and R² (0.72) values suggest a strong similarity between the measured and estimated dominance patterns.
4.2.3.3 Improvement in computation time
The computation time was substantially reduced, by at least a factor of five, in the experiments. The optimisation is linear, as the time saved was due merely to a linear reduction in the number of comparisons, unlike other approaches such as that of Izraelevitz (2003), where previous computations are recycled to reduce computation time. The CPU
- 95. Applications
time usage could be further optimised by combining the current approaches with further Reduced Targets Strategies, such as horizon culling.
4.2.4 Summary
In general, there is a compromise between performance and accuracy in any practical visibility computation (Franklin et al. 1994). This experiment also shows that the use of the fundamental topographic features as targets and observers can decrease the visibility computation time substantially without any significant loss of visibility information, particularly with the reduced targets approach. This approach is especially useful for a fast approximation of visibility dominance in hilly terrain. The reduced sampling of targets on the terrain, however, introduces an uncertainty into the visibility indices of the observers. In the current work, simple statistical measures such as correlation coefficients and R² values were used as measures of visibility pattern quality and uncertainty; these provide global pattern matching, but visibility is a directional property. We expect to develop ways of estimating the visual integrity of our optimised approach. Although our observation that, at certain numbers, both topographic feature targets and random targets produce a similar quality of visibility estimation is based on thorough experimentation on the current study area, experiments with other DEMs will be needed to validate this empirical observation fully. More importantly, visibility, as a property of a terrain location, cannot be formally modelled: it is independent of the local properties (e.g. elevation, slope, aspect) and global properties (e.g. geographic setting, such as faults) of a location.
It is derived only after a LOS test with other locations. Thus, it is proposed that the regression between measured and estimated visibility dominance only provides information about the similarity, or the amount of approximation. Finally, an interesting intellectual exercise still remains in understanding the effect of the topographic feature extraction scale on the computed visibility pattern.
4.3 Multi-scale and morphologically consistent terrain generalisation (MMTG)
Terrain generalisation is a procedure for generating physical surfaces at different scales of planimetric and elevation resolution. These multiple generalisations remain a small but important component of many geographic information (GI) analyses (Weibel and Dutton 1999) and computer science/graphics applications (Heckbert and Garland 1997). Multiple representations can be used for reducing computational load and for multi-scale geomorphological analyses (Dikau 1990). Examples of such applications of terrain generalisation include flight simulators in computer games (Misund 1997), hydrological studies, cartographic maps, and the online visualisation of digital elevation models. Despite its crucial role in critical applications such as flood modelling and radiowave viewshed planning, most terrain generalisation remains arbitrary and oriented mainly towards data reduction. For example, one of the most common methods is based on the resampling of elevation values using statistical measures such as the minimum, average or mode, amongst others. However, as Figure 4.6 shows, an averaging filter destroys the morphology and produces a non-uniform notion of scale. The three-dimensional structure of the terrain, i.e. the ridges and channels, is lost during a morphologically-insensitive generalisation.
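The morphology-destroying effect of simple averaging is easy to demonstrate. The sketch below (illustrative only; the function name is hypothetical and the DEM is a toy 2-D list) block-averages a grid containing a sharp ridge; the ridge crest survives only at half its original height, exactly the kind of structural loss visible at the arrow in Figure 4.6:

```python
def block_average(dem, f):
    """Resample a grid DEM to an f-times coarser grid by simple block
    averaging -- the morphologically-insensitive method criticised in
    the text.  Sharp ridges narrower than a block are blended into
    their surroundings."""
    rows, cols = len(dem) // f, len(dem[0]) // f
    return [[sum(dem[r * f + i][c * f + j]
                 for i in range(f) for j in range(f)) / (f * f)
             for c in range(cols)]
            for r in range(rows)]
```

On a 4x4 grid with a one-cell-wide ridge of height 10, averaging into 2x2 blocks drops the crest to 5, flattening the feature rather than preserving it.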
- 96. Applications
Figure 4.6 (a) Original 50 m spatial resolution raster and (b) its resampling to 200 m spatial resolution with a simple averaging filter. Note the smoothness in (b) at the expense of structural losses, for example at the point indicated by the arrow.
In general, an iterative and time-consuming post-processing is applied to correct the topological errors in the generalisation. Hutchinson (1988) first suggested an understanding of the third dimension of contour maps with the aim of extracting the structure lines and points, also called the fundamental topographic features, such as ridges, channels, peaks, passes and pits. Later, Weibel (1992) proposed a method based on adaptive triangular mesh filtering (Heller 1990), which employs the structure lines to guide the generation of variable-resolution TINs without losing the three-dimensional morphology. However, in their review of surface simplification methods, Heckbert and Garland (1997) concluded that the quality of "feature methods" is generally inferior to many other methods. Similar structure-line methods in computer graphics are, however, supportive of this approach, for both surface and volume datasets (Bajaj and Schikore 1998, Kraus and Ertl 2004).
- 97. Applications
This section demonstrates a morphology-preserving methodology for generalising a raster DEM at user-defined spatial resolutions. The methodology can be modified for other types of terrain datasets. The work reported in this section was part of a project sponsored by the Ordnance Survey (OS).
4.3.1 Methodology and Results
The study area in this experiment is a part of the Salisbury Plain in South-West England. The input dataset was a 10 m cell size, 502x501 cell raster DEM (Figure 4.7). ArcInfo GIS was used to perform the experiment, as it contains many useful morphometry functions. There are two steps in the generalisation process.
Step 1 Construct the surface network
In this step, the topographic features are identified using the methodology proposed in Chapter 2.
Step 2 Generalise the raster to a coarser/finer resolution
The surface network features are converted to spot heights and a suitable interpolation is applied to convert the spot heights into a continuous raster. A particularly good interpolation function available in ArcInfo GIS is the thin-plate spline interpolation called TOPOGRID, which is based on the ANUDEM program developed by Michael Hutchinson (Hutchinson 1988, 1989). The use of the topographic feature elevations ensures that the generalised DEM preserves morphology. Figure 4.8 shows a comparison, in plan and perspective, between resamplings of the Salisbury DEM (Figure 4.7b) to a cell size of 100 m using five methods, viz. (i) bilinear interpolation; (ii) biquadratic interpolation; (iii) bicubic interpolation; (iv) interpolation to a coarser resolution using TOPOGRID; and (v) the proposed MMTG algorithm, combining TOPOGRID with the feature elevation values. All the methods had the same input DEMs and parameters. The figure shows that the MMTG method preserves elevation, morphology (area in circle) and surface continuity.
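Step 2 can be sketched in code. The thesis used ArcInfo's TOPOGRID (ANUDEM) thin-plate spline; the sketch below substitutes a simple inverse-distance-weighted interpolation as a hedged stand-in, since TOPOGRID is not reproducible here. The function name and signature are illustrative; spot heights are (row, col, elevation) triples taken from the surface network features:

```python
def interpolate_spot_heights(points, grid_shape, power=2):
    """Rebuild a continuous raster from surface-network spot heights
    with inverse-distance weighting -- a stand-in for the thin-plate
    spline (TOPOGRID/ANUDEM) interpolation used in the text.  Cells
    coinciding with a spot height take its elevation exactly, so the
    feature elevations are preserved in the generalised DEM."""
    rows, cols = grid_shape
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            num = den = 0.0
            exact = None
            for pr, pc, pz in points:
                d2 = (r - pr) ** 2 + (c - pc) ** 2
                if d2 == 0:
                    exact = pz          # cell sits on a spot height
                    break
                w = 1.0 / d2 ** (power / 2)
                num += w * pz
                den += w
            out[r][c] = exact if exact is not None else num / den
    return out
```

The key property shared with the MMTG method is that the interpolated raster honours the topographic-feature elevations exactly, which is what keeps the generalised DEM morphologically faithful.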
The perspective view reveals the difference in surface continuity achieved by the methods, with the MMTG generalisation method producing the smoothest generalised DEM and no feature loss. The most important achievement of the MMTG-based method is that the generalised DEM was generated from only about 16% of the heights in the basic raster. In other words, even after 84% compression, the MMTG generalised DEM preserves morphology and achieves continuity. The advantage of this characteristic of the MMTG-based method is expected to be evident even for still coarser DEMs.
4.3.2 Summary
Morphology-preserving terrain generalisation is likely to be an important topic of future research, considering the vast amount of elevation data being generated by LIDAR sources and non-terrestrial sensors. Besides the resampling methods described above, certain smoothing methods, e.g. anisotropic diffusion (Desbrun et al. 2000), can also be used to produce multi-scale terrain datasets. It seems that a combination of such smoothing methods and MMTG-type feature-constrained resampling methods would provide the optimal solution. Such a combination of algorithms would ensure minimal loss of morphological detail.
4.4 Visualisation of the evolution of the terrain
An animation of terrain surfaces as a temporal series (e.g. an evolving stretch of sea coast over time) is sometimes produced to aid visualisation. These animations are popular because they reveal the spatial variations in a single frame, thus removing the effort to
Figure 4.7 Landform PROFILE contours of a part of Salisbury (top), and hill-shaded 10 m Salisbury raster terrain (bottom) produced using the TOPOGRID function. (Elevation legend: 30 m to 153 m.) Note the pronounced terracing effect on slopes located in the northeast and northwest of the area.
Figure 4.8 Comparison of the preservation of elevation, morphology and surface continuity after different types of generalisation of the 10 m cell size Salisbury raster terrain shown in Figure 4.7 to 100 m cell size. The area in circles shows that the MMTG method preserves both elevation and morphology. Also note that ordinary interpolation using TOPOGRID is not sufficient to ensure feature preservation. The MMTG generalisation method also produces the best continuity in the generalised DEM. (Panels: Bilinear; Bi-quadratic; Bi-cubic; TOPOGRID; MMTG.)
memorise and match differences. The techniques for animating surfaces have evolved from simple paper cartoons (McCloud 1993) to the sophisticated hardware- and software-driven solutions of modern times (Ware 2000). Geovisualisation researchers, in collaboration with computer graphics researchers, have tried to resolve the static nature of geospatial dataset visualisation with ever-advancing and aesthetically appealing geovisualisation interfaces. As a consequence, animation functions are often a standard component of many current geographic information systems (GIS). However, despite the vast improvements in technology, a question posed in the early 1960s by Bertin (Bertin 1967), i.e. whether animation helps in a better understanding, is still thrown back and forth between geovisualisation researchers.

There have been several attempts to characterise the issues in animated geovisualisation (Emmer 2001, Ogao and Blok 2001, Ogao and Kraak 2001, Koussoulakou 1990, DiBiase et al. 1992, Peterson 1993, MacEachren 1995, Ware 2000, Slocum et al. 1990). Ware (2000) presents a comprehensive comparison of the advantages and disadvantages of animated visualisation and identifies suitable research directions. Bertin's main argument against animated maps is that the presence of motion distracts a user's attention from the visual properties (e.g. colours, shape) of symbols, thereby resulting in a limited interpretation. Unlike static maps, an animated map requires continuous attention to the stream of information. Bertin's criticism is further strengthened by Miller's (Miller 1956) observation that humans seem to follow only about 7 ± 2 visual cues simultaneously. In other words, it cannot be guaranteed that animation will be useful for interpretation, given the free flow of information. Although DiBiase et al.
(1992) and MacEachren (1995) proposed methods to control the transient symbology in animation, formal and generic guidelines for the use of these visual variables do not exist. Therefore, while their effectiveness is largely unknown, Bertin's objections (Bertin 1967) remain not fully addressed. See Gershon (1992) and Acevedo and Masuoka (1997) for studies on the implications of dynamic visual variables, such as frequency, frame rate and others, on time-series animations.

This limitation of animated geovisualisation arises mainly from two sources, namely conceptual limitations (e.g. design-related issues) and implementation limitations (e.g. software, hardware). In the not so distant past, limited hardware capabilities and non-graphics-oriented languages restricted the scope of animation. Certainly, the hardware and software available to generate animations have improved significantly (Earnshaw and Watson 1993, Gahegan 1999), but a desktop solution for our often-massive surface datasets still seems some years away. Conceptual limitations, on the other hand, are less well defined, but at least they do not require an advanced understanding of modern sophisticated computer hardware and software. These limiting factors take effect from the start of the geovisualisation process, i.e. the preparation of spatial datasets (e.g. lack of spatio-temporal continuity in spatial datasets), and eventually feed into the interpretation stage as information overload. The visualisations available as part of the AIDS Data animation project (URL #2) are one such example of poor design and implementation: owing to the high and sudden variations between successive frames, the inter-frame variations in spatial patterns appear as movements to the viewer. MacEachren (1995) offers a perceptual and cognitive treatment of such misleading interpretations.
The combined effects of these factors are distraction, poor retention and a lack of clear expression of the information (Morrison et al. 2000, Gahegan 1999, Openshaw et al. 1994). In this section, an approach is proposed that addresses both the design- and implementation-related limitations in geovisualisation. The approach presented here is based on an extension of the proposals by Pascucci (2004) on using the surface network for
scientific visualisation. The aim of the experiment was to demonstrate how the surface network representation offers both intuitive design insights and improvements in implementation. The approach presented here takes advantage of techniques from computer graphics and geography. It must be stressed at this stage that the exact implementation of any of the methods should inevitably vary according to the context of the visualisation (data, user, use, etc.). It is also important to understand that the approach proposed here is not in any way 'optimal' or the 'most effective' method to use. At the same time, the following sections aim to provide sufficient explanation to demonstrate the proposed approach, with examples illustrating ways in which it may be applied in a flexible manner.

4.4.1 Proposal

As in the previous sections, to realise a surface network we assume that the terrain is a doubly continuous function of the form z = f(x, y), where z is the property (e.g. elevation) being mapped and associated with a point (x, y). Although this topological integrity of terrain is required to ensure mathematical tractability, it is not crucial for visualisation. Therefore, the visualisation of any surface which contains surface network features, namely the peaks, pits, passes, ridges and channels, could equally benefit from this approach even if it is not based on a consistent surface network. This is because a surface network in any form highlights the information in the surface (data), where, following Shannon's information theory (Salomon 2004), information is only the useful part of the data. This argument is supplemented with further reasons in section 4.4.2.2.
4.4.2 Methodology

4.4.2.1 Increase inter-frame continuity

Figure 4.9 shows an example where, due to practical limitations, ordered sequences of terrain surfaces could not be sampled frequently enough to create a continuous temporal series, yet the feature changes constantly in the dynamic coastal environment where it is subjected to denudation and deposition (Raper 2000). The temporal gaps lead to abrupt changes in the animation of dynamic surfaces (Shepherd 1995). Attempts to reduce the abrupt jumps between successive situations depicted in an animation, and so increase inter-frame continuity, include adjustment of the 'duration' dynamic visual variable, either by slowing the sequence or through direct user control. Alternatively, additional situations can be derived from the data to smooth transitions. This step is also an 'exaggeration' effect, the aim being to include 'microsteps between larger steps', as these are found to be beneficial to the viewer (Morrison et al. 2000).

Several techniques exist for generating animations in this way. One of the simplest is blending, through which a smooth transition of intermediate situations or 'microsteps' can be achieved. Blending is used widely in the computer graphics field for transforming one particular shape or object into another (Gomes et al. 1998). It is also available in commercial graphics software such as 3D Studio Max (URL #3), which provides tools for applying the technique to both raster and vector spatial datasets. A basic implementation of blending involves a linear interpolation between two consecutive situations (frames); however, a more sophisticated nonlinear interpolant could also be used to visualise punctuated phenomena. The MapTime software (Slocum et al. 2001) makes use of the first of these options for generating intermediate frames between two situations2.

2 Situations are termed 'key frames' in the context of the MapTime software.
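The basic linear-blending option described above can be sketched as follows. This is a minimal illustration, assuming each situation is a 2-D list of elevations; the function name `blend_frames` is hypothetical.

```python
def blend_frames(situation_a, situation_b, n_microsteps):
    """Linearly blend two raster situations (2-D lists of elevations),
    returning n_microsteps intermediate surfaces strictly between them."""
    microsteps = []
    for k in range(1, n_microsteps + 1):
        t = k / (n_microsteps + 1)  # blending fraction, 0 < t < 1
        microsteps.append([
            [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(situation_a, situation_b)
        ])
    return microsteps
```

A nonlinear interpolant for punctuated phenomena would simply replace the linear expression `(1 - t) * a + t * b` with, say, an ease-in/ease-out curve applied to `t`.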
4.4.2.2 Highlight the information in terrain

Figure 4.9 Digital elevation models of a sand spit at Scolt Head Island, North Norfolk, UK. Two situations are shown (February 1997 and September 1997), representing the results of surveying the feature in 1997.

As mentioned at the beginning of section 4.4, human visual processing has limited capabilities for interpreting the parallel information streams that characterise dynamic processes (Ware 2000). Human cognitive processes, especially working memory, can follow at most 7 ± 2 simultaneous cues (Miller 1956, Ware 2000). Therefore, highlighting the information is a key cartographic objective when scenes are complex. Morrison et al. (2000) indicated that the clear comprehension and expression of the conceptual message is essential in animated graphics. Tobler (1970) proposed reducing complex processes into component parts or simplified representations. Dransch (2000) identifies several factors that may enhance the cognition process in multimedia systems, including the need to 'increase the important information'. This could be achieved through careful and meaningful simplification. The two key types of information in dynamic geographic surfaces are the structure of the surface and the local importance of points (locations). However, in the common representations of surfaces, such as the colourmap (the equivalent cartographic representation is a hypsometric tint) and the contour map (or coloured isopleth), the transfer of the structural information is dependent on the contour interval and the spatial and thematic resolutions (Bajaj and Schikore 1996). Therefore, a representation of the surface is required which would provide an objective and yet natural representation of the surface morphology and structure.
Fowler and Little (1979) proposed that the fundamental topographic features of a surface, namely the peaks, passes, pits, ridges and channels, are sufficient to describe the significant information about a surface. These topographic features constitute the surface network; therefore, one application of surface networks in computer graphics has been to visualise the structure of surfaces. For example, Helman and Hesselink (1991) and Bajaj and Schikore (1996) demonstrated that the surface network representation could enhance the graphic representation of vector and scalar surfaces significantly compared with the use of colourmaps and contour maps. The surface is also broken down into five main information streams (three point types and two line types), which makes the changes easily observable. Helman and Hesselink (1991) reported that the surface network representation helped in both visualisation and the reduction of storage space. Therefore, it can be argued that the derivation of surface networks from dynamic surfaces has the potential to highlight the information when animating sequences of surfaces for visualisation, thus reducing the load on the viewer and potentially aiding interpretation.
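The three point types can be located in a raster by simple neighbourhood tests. The sketch below is an illustrative simplification (not the feature-extension or plan-curvature methods discussed elsewhere in this thesis), assuming an interior cell of a grid stored as a list of lists; the function name `classify_cell` is hypothetical.

```python
def classify_cell(dem, r, c):
    """Classify an interior cell as 'peak', 'pit', 'pass' or None by
    comparing it with its 8 neighbours, visited in clockwise ring order."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    z = dem[r][c]
    diffs = [dem[r + dr][c + dc] - z for dr, dc in ring]
    if all(d < 0 for d in diffs):
        return 'peak'    # higher than every neighbour
    if all(d > 0 for d in diffs):
        return 'pit'     # lower than every neighbour
    # A pass (saddle) shows 4 or more sign alternations around the ring
    signs = [1 if d > 0 else -1 for d in diffs]
    changes = sum(1 for i in range(8) if signs[i] != signs[(i + 1) % 8])
    return 'pass' if changes >= 4 else None
```

Ridges and channels, the two line types, would then be traced between these critical points, which is where the more involved machinery of Chapter 2 comes in.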
In terms of Dransch's proposals, extracting surface networks corresponds to a step aimed at increasing the important information, reducing the information overload, and helping in the creation of a mental model of the dynamic processes. For example, Figure 4.10 shows a comparison between the surface network representation, contours and colourmaps of the Norfolk coast sand spit in terms of their ability to describe the structural information of the surface. In summary, a surface network representation is useful for the visualisation of dynamic surface animations because:

• The consistent definition of the surface network means that it can be used to quantify and isolate changes. The surface network provides a frame of reference which could be used to track changes in the surface; for example, the rate of displacement of the ridge lines through an animation could indicate the behaviour of the surface under changing conditions.

• The use of point and line symbols to represent surfaces enables the viewer to take advantage of their natural propensity for interpreting attribute change between successive scenes as motion, and reduces the possibility of minor variations in visual variables being interpreted as such. The surface network is thus conceptually similar to the ideas of topological rendering of volume datasets proposed by Upson and Kerlick (1989) and Kerlick (1990).

Figure 4.10 Increase in the structural information delivery with the addition of contour and surface network overlays. (Panels: basic raster terrain; basic raster terrain with contours; basic raster terrain with surface network.)

4.4.3 Results and Summary

The study data were the digital elevation models shown in Figure 4.9. The implementation revealed some promise and highlighted several issues. Various controls to support animated, sequential and conditional interaction (Krygier et al.
1997) were implemented in an application (snv, surface network visualizer, developed in collaboration with Jason Dykes) for animating surface networks such as those derived here, to support visualisation. snv allows the animation of an ordered sequence of surface networks and supports a range of levels of sophistication of visualisation tasks (such as query and interaction) as identified by Crampton (2002). snv also allows a graphic lag, whereby a user-defined rate of change in the lightness of the symbols representing the surface network features is used to fade in and fade out between successive situations. In its approach, graphic lag performs a type of epitomic symbolism (Shepherd 1995), similar to the ideas of Levy et al. (1970) and Openshaw et al. (1994) for controlling the brightness and luminosity of symbol colours, respectively.
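The graphic-lag idea reduces to a pair of lightness ramps. The sketch below is a minimal illustration of the principle, not snv's actual implementation; the function name `graphic_lag` and its parameters are hypothetical.

```python
def graphic_lag(n_frames, fade_frames):
    """Per-frame lightness multipliers for a 'graphic lag': symbols of the
    outgoing situation fade out over fade_frames frames while symbols of
    the incoming situation fade in at the same user-defined rate."""
    fade_out = [max(0.0, 1 - k / fade_frames) for k in range(n_frames)]
    fade_in = [min(1.0, k / fade_frames) for k in range(n_frames)]
    return fade_out, fade_in
```

Each multiplier would scale the lightness of the point and line symbols drawn for the corresponding situation, so that a feature disappearing between two situations lingers briefly rather than vanishing abruptly.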
Figure 4.11 shows the inter-frame continuity achieved by blending (using linear interpolation) the terrain of the sand spit recorded in February 1997 into that recorded in September 1997. While we can observe the variations in the relief of the surface, it is not possible to assess the changes in the structure, as the structural changes are not obvious from the field view. Figure 4.12 shows a part of the same blending sequence with an overlay of the surface network, in which the changes in the structure can be identified. Note the detection of the changes in topographic features that are significant at this scale of measurement at the top right of the spit in Figure 4.12. The animation can be accessed online and assessed (URL #4).

There is no denying that geovisualisation is an inexact science. Some researchers will in fact argue in favour of keeping geovisualisation informal and open-ended to preserve its exploratory spirit. However, while graphical methods for visualisation should draw on the appropriate theoretical literature and exhibit graphic logic, the degree of success with which the process of visualisation is aided by graphic tools lies in the 'eye of the beholder'. This uncertainty could be the cause of nightmares for visualisation software developers trying to develop the most effective visualisation system. Such efforts are questionable when experimental and theoretical evidence in the literature suggests that the human visual processing system does not interpret complex animated sequences particularly successfully. In the words of Morrison et al. (2000), 'The drawback of animation may not be the cognitive correspondences between the conceptual material and the visual situation but rather perceptual and cognitive limitations in processing a changing visual situation'.
This small experiment has endeavoured to demonstrate that the combination of methods employed in various related disciplines to support visualisation offers some opportunity for solutions to this situation. A generic approach is introduced through which various data transformations are applied in a manner that corresponds to established cartographic practice. Techniques have been demonstrated using examples depicting ordered variations in time, attribute and scale. The aim of these techniques is to draw parallels between existing cartographic practice and opportunities for information visualisation that address the identified limitations in processing animated sequences of surfaces for prompting thought and insight. The proposal can be summarised as follows: (i) increase the inter-frame spatial and attribute continuity by removing small-scale variations and focusing on broader trends, a process with a clear parallel to smoothing in static cartography; and (ii) highlight the information content by way of a surface network representation, the aim being to make the significant information about the surface explicit, an important objective given the identified limitations of animated cartography.

4.5 Scope for further applications

This chapter presented just a few of the possible applications of surface networks for spatial analyses. Other potential applications include analysing demographic surfaces (Okabe and Masuyama 2004) and the streaming of terrain for web mapping. The main limitation on the wider application of surface networks is their sparse representation of the surface. Many spatial analyses require a continuous model of terrain, e.g. the calculation of morphometric measures such as slope. One possible solution to this limitation was presented in Chapter 2: the surface network can be triangulated to form a piecewise surface or interpolated to a continuous one.
Figure 4.11 Twenty-two intermediate surfaces (microsteps) generated by blending the February 1997 surface (Situation 1) into the September 1997 surface (Situation 2).
Figure 4.12 Use of the surface network representation to visualise the changes in the morphology of the sand spit. The box indicates an area of interest. Note that the surface network variations highlight changes that are not evident from the representation that uses colour to show variation in elevation.
New ideas pass through three periods:
It can't be done.
It probably can be done, but it's not worth doing.
I knew it was a good idea all along!
Arthur C. Clarke
Chapter 5 Conclusions

5.1 Summary

The aim was that each chapter of this thesis should focus on an individual aspect of the data structure and present research into that aspect of the structure and its use. In other words, the thesis was made up of a set of mini-studies. Clearly, some parts are stronger and better presented than others. While the research has not deviated greatly from the research agenda developed in the MPhil to PhD transfer report (Rana and Morley 2002), some experiments had to be left out. For example, the regeneration of morphology after a graph-theoretic contraction was identified as a potential research topic in the transfer report; however, since Bremer et al. (2003) proposed a robust solution to the problem, it was not studied in this research. Fortunately, there were still many challenges to address. The aim of this chapter is primarily to present a summary of issues concerning surface networks and to highlight how they were addressed in the research reported here. The chapter concludes with suggestions for some potential research areas.

5.1.1 Modelling terrain using surface networks: what's possible and what's not

Chapter 1 demonstrated that certain surfaces (more commonly so for natural terrains) are not suitable for modelling as Morse functions; typical examples are featureless surfaces or terrains formed by non-fluvial erosion processes. These terrains are clearly not suitable for representation as surface networks. Nevertheless, there may still be some application of topographic features, e.g. for the visualisation of the structure of surfaces as shown in Chapter 4. In fluvially eroded surfaces, the key problem faced during automated surface network generation lies in the first step itself, i.e. feature identification. Features are often eroded and vary in geographical extent (scale).
Without prior knowledge of the scale of features, most automated algorithms fail to identify the complete set of topographic features. The scale-dependency of feature identification has received the most interest in the GIS literature. The proposals vary from computing a fractal dimension (Tate and Wood 2001), through wavelet decomposition (Gallant and Hutchinson 1996), to adaptive feature-identification filtering (Wood 1999, Fisher et al. 2004). The first two methods seem promising but remain to be proven widely (McClean and Evans 2001). The work of Wood (1999) and the related later work of Fisher et al. (2004) demonstrated variation of the size of the feature-extraction filter window to identify and visualise features at different scales. However, this work falls short of proposing any solution for incorporating such a multi-scale feature set in the construction of a consistent surface network. The approach assumes that varying the feature-identification filter window will always give information about the multi-scale terrain structure. Arguments that contradict this assumption were shown in Chapter 2. In short, these assumptions suffer from the classical problem of
modelling the anisotropic nature of terrain (Gauvin 2004) with regularly shaped feature-extraction windows, and of necessarily relating the filter size to the extent of a feature. For the sake of hypothesis building, even if it is assumed that well-defined identification of features at multiple scales exists, it remains unclear how the features at various scales should connect together to form a single surface network structure. Features of small extent do not necessarily connect to features of larger extent, simply because of the way in which erosion processes work in nature. However, from the point of view of terrain datasets, all features are relevant and required. The automated algorithm proposed for the generation of surface networks in Chapter 2 goes some way towards solving the above issues by not searching for the scale of a feature, and by combining features of various scales through careful manipulation of feature geometry. It is thus generic, and makes it possible to include in the surface network model many additional features, e.g. channels on mountain faces, ridge junctions and channel bifurcations. Nonetheless, there are many terrain types that cannot be fully converted to a surface network and thus require a more non-feature-specific data structure such as a TIN. The proposed algorithm could still play an important role in such exceptional terrains. For example, one of the classical problems in TIN generation is the flat triangles that typically occur at ridge/channel axes and hilltops. This problem occurs for most terrains, and the most effective solution is the introduction of linear breaklines, e.g. channel and ridge lines, during TIN generation (Ardiansyah and Yokoyama 2002, Dakowicz and Gold 2002). The feature identification part of the proposed algorithm can be used to extract ridge and channel breaklines.
Nevertheless, the following questions remain:

Scale related: How can the features at various scales be identified and included in the surface network without adding many pseudo-feature nodes? How do the feature scales vary on terrain, i.e. is it a set of related or unrelated geographical extents?

Feature recognition related: What is the most reliable source for identifying topographic features and consequently the surface network, i.e. is a raster terrain, a TIN, contours, etc. most suitable for feature recognition? Can the rules for identification of features based on plan curvature be modified to produce better feature recognition? Can the plan curvature based feature identification method be modified to identify other important morphological features such as scarp edges and overhangs? What kind of filter is best suited for morphology-sensitive smoothing during feature recognition?
Data structure format: How can more morphological information be embedded in the surface network to include additional original elevations and morphological details? How should the quality of a surface network be described and embedded?

5.1.2 Revealing terrain structure using surface networks

Terrain has been represented in various forms; some popular examples include various types of stream ordering, fractal dimensions and wavelet coefficients. An understanding of terrain structure is fundamental to many terrain analyses, e.g. drainage network extraction and the applications derived thereafter. Most such analyses have been relatively simplistic compared with the network analyses carried out by physicists, biologists and transport network planners. The obvious reason for the difference is that a network of terrain features differs from a network of power lines, a nervous system or a road layout. However, there are also several striking similarities which have gone unnoticed, e.g. streams and ridges display an organic growth pattern similar to that of cities (Batty and Longley 1994) and other non-terrain networks. Thus, a brief study was conducted into what kind of information emerges when measures used on non-terrain networks are applied to terrain surface networks. Several potentially useful observations were presented in Chapter 3, e.g. the terrain structure appears to show a universal relationship in the connectivity of a node (i.e. a peak/pit). It was found that there is an inverse exponential relationship between the frequency and the degree of nodes, suggesting that in terrain only a few nodes exist with a high number of links. Another interesting observation relates to the stability of the terrain structure, and it somewhat follows from the earlier observation. In Chapter 3, it was shown that in terrains a few nodes are crucial to the otherwise very stable terrain structure.
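The frequency-versus-degree observation above is easy to compute once the surface network is held as an edge list. The sketch below is a generic degree-distribution calculation, not the thesis's analysis code; the function name `degree_distribution` is hypothetical.

```python
from collections import Counter

def degree_distribution(edges):
    """Frequency of each node degree in an undirected graph given as a
    list of (u, v) edges, e.g. peaks/pits/passes linked by ridges and
    channels in a surface network."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Map each observed degree value to the number of nodes having it
    return Counter(degree.values())
```

Plotting the returned frequencies against degree (e.g. on a semi-log scale) is how a decaying, inverse-exponential-style relationship, with only a few highly connected nodes, would show up.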
This property is evident in a plot of path lengths under a series of contractions of surface networks. The plot has a step-like appearance, where the flat areas represent the removal of structurally unimportant nodes while the nodes at the edges of the steps are the important ones. These observations clearly need to be supported by further experiments on different terrains. Several potential limitations in the weights and contraction criteria proposed by Wolf (1984) were identified in Chapter 1. The solutions presented in Chapter 3 include new weight measures, e.g. centrality measures, the slope of edges, the degree of nodes, and various path-length measures that reflect local and global topological structure, unlike the conventional, primarily morphometric, measures. Two new methods of simplifying surface networks were presented in Chapter 3, with potentially many applications. The use of cascading contraction to spread the effect of a local contraction to other parts of the network, and user-defined contraction to allow an arbitrary sequence of simplifications, can be used to generate artificial landforms and to evaluate the relationship between features and parts of the terrain structure. Both of these new methods provide a simpler alternative to the otherwise highly involved terrain evolution algorithms generally followed in geomorphology, which often involve complex numerical analysis. At the same time, it is important to note that simulations based on surface networks are only relevant for macro-scale terrain evolution analysis. In other words, only broad changes in structure at the level of ridges/channels can be incorporated and produced. Thus, unless specifically encoded, neither contraction mechanism will allow, for example, modelling of the effect of increased erosion on mountain slopes with changes in the silt load of rivers.
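The path-length-under-contraction experiment can be sketched in a few lines. The fragment below is a deliberately naive stand-in: it contracts a node by reconnecting its neighbours pairwise (it does not implement Wolf's weights or contraction criteria, nor those of Chapter 3), and measures the mean shortest-path length by breadth-first search. All names are hypothetical.

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all ordered, connected node pairs,
    computed by BFS on an adjacency dict {node: set(neighbours)}."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

def contract(adj, node):
    """Remove a node and reconnect its neighbours pairwise, a crude
    mimic of one graph-theoretic contraction step."""
    nbrs = list(adj.pop(node))
    for n in nbrs:
        adj[n].discard(node)
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            adj[a].add(b)
            adj[b].add(a)
    return adj
```

Recording `avg_path_length` after each contraction, in order of some node-importance weight, is what produces the step-like plot: removing an unimportant node barely moves the average, while removing a structurally crucial node causes a jump.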
Techniques for refining a surface network, i.e. adding more detail, were suggested in Chapter 3; this is novel to this research. Just like user-defined contraction and cascading contraction, surface network refinements have the potential to be useful for simulating terrain evolution and generating artificial landforms. Refinements can also be used to manipulate incomplete surface networks to make them topologically consistent. One of the important issues dealt with only in passing in Chapter 3 is that of uncertainty in surface networks. The main reason for the lack of a detailed treatment here is that uncertainty is a significantly large and varied topic that deserves separate research, such as that done by von Minusio (2002). The level of uncertainty depends on the density of the surface network, the deviation of the terrain morphology from a smooth surface, and the type of surface interpolation used to reconstruct the terrain from the surface network. Errors in the terrain dataset also affect the derived surface network. As a preliminary solution, a probability-based measure, namely Shannon's entropy, was proposed in Chapter 3 to evaluate the uncertainty incurred by converting a continuous surface into a surface network. Uncertainties that exist in the original continuous surface (itself a contentious issue) have not been dealt with here. Lately, the notion of suitability for the application is used in the GIS literature, instead of accuracy, to describe the quality of a spatial data product; the understanding of quality will vary depending on the individual application. The characterisation of the suitability of surface networks for different applications is one of the aims for future research. It was addressed partly in the previous chapter, where different uses of surface networks in different contexts were described.
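The entropy measure itself is standard. The sketch below computes it for an assumed discrete distribution; how the probabilities are derived from a surface network (e.g. as proportions of the surface assigned to each feature class) is a modelling choice made in Chapter 3 and not reproduced here, and the function name `shannon_entropy` is hypothetical.

```python
from math import log2

def shannon_entropy(proportions):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution.
    Higher H indicates a less predictable (more uncertain) assignment of
    the surface to its representing classes; a degenerate distribution
    (all mass on one class) gives H = 0."""
    return -sum(p * log2(p) for p in proportions if p > 0)
```

For example, a surface split evenly between two classes yields 1 bit of entropy, while a surface entirely captured by one class yields 0.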
5.1.3 Applications of surface networks

To demonstrate the advantages of the surface network for terrain analyses, three types of spatial analysis for which the data structure is particularly relevant were presented in Chapter 4. Owing to its topological description of the terrain in terms of topographic features, the surface network is an ideal data structure wherever the structure of the terrain is of interest. For example, it was shown in Chapter 4 that the use of the surface network as the set of optimal observers and targets can reduce the visibility index computation time significantly, with only a negligible loss of visibility information. Secondly, the use of the surface network representation to visualise a temporal and attribute animation series of geographic maps highlights the structure of the surfaces during the animation. The surface network representation augments the working memory of the viewer by focusing attention on the morphologically important parts of the surface undergoing changes. Finally, because of the morphological significance of surface network features, the surface network can also be used to constrain the resampling of terrain. It was shown in Chapter 4 how a multi-scale and morphology-preserving terrain generalisation can be achieved by preferentially preserving the elevations associated with the surface network. These are only a handful of the possible applications. For example, Wolf (1992) has also shown the extraction of drainage networks and watersheds from surface networks, and applications ranging from analysing the roughness of metal surfaces (Scott 2004) to population density (Okabe and Masuyama 2004) have been proposed. Amongst the three main aspects researched in this work, i.e. the automated generation, generalisation and applications of surface networks, the applications topic has the most scope for future research and development, as there are several types of surfaces yet to be represented as
surface networks. The next section lists some of those unexplored surface types.

5.2 Future research

In his research on surface networks, John Pfaltz (Pfaltz 1978) suggested many uses for the data structure; to date, some of those ideas remain unexplored. As mentioned earlier, there is immense scope for applying surface networks to different types of surfaces. The following research topics are particularly promising.

Visualising complex systems
A typical example of a complex system is the weather. Visualising weather behaviour, like most other 3D surfaces, is non-trivial. Representing weather surfaces (e.g. precipitation, temperature, atmospheric pressure) as 3D surface networks, similar to those produced for other volumetric datasets, e.g. fluid flows by Helman and Hesselink (1991) and sub-atomic particle collisions by Bajaj and Schikore (1996), would help in visualising otherwise unpredictable and complicated weather patterns.

Visualising dynamic events
An example of how the surface network representation can be used to study the topographic structure of a landscape has already been shown in this work. However, there are many other types of dynamic surfaces, such as demographic surfaces (e.g. population density) and economic surfaces (e.g. monetary transactions; Warntz 1966), which are also suitable.

Modelling non-spatial datasets
The notion of maxima, minima and saddle points could perhaps be applied to model even non-spatial datasets, provided the topological relationships are maintained. For example, in modelling human relationships, the individual with the highest number of relations in a group of people could be a peak, while the family member with the fewest relationships could be a pit. Another example of a non-spatial dataset potentially mappable as a surface network is a computer directory structure.
Directories with high storage could become the peaks, those with the lowest storage the pits, and so on. This kind of representation could be useful for disk-usage queries and performance analysis.

Finally, since the start of this doctoral research there has been considerable work on surface networks, mostly by computer science researchers; examples include the work of Vijay Natarajan, Valerio Pascucci and Peer-Timo Bremer. Recent research has reported the fallibility of surface network data structures in representing arbitrary surfaces, often accompanied by elegant solutions, for example the work of Ni et al. (2004) and Rana (2004). This is good news, as it indicates a revival of wider interest in the topic of surface networks.
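The visibility shortcut from Section 5.1.3, i.e. restricting observers and targets to critical points, can be sketched in one dimension. This is a toy analogue of the idea, not the 2-D algorithm of Chapter 4; the profile and the pairwise visibility index below are illustrative only, and degenerate flat stretches are ignored.

```python
def visible(profile, i, j):
    """True if positions i and j on a 1-D elevation profile see each other."""
    if i > j:
        i, j = j, i
    for k in range(i + 1, j):
        t = (k - i) / (j - i)
        sight_z = profile[i] + t * (profile[j] - profile[i])
        if profile[k] > sight_z:  # terrain blocks the sight line
            return False
    return True

def critical_points(profile):
    """Indices of the local peaks and pits (plus endpoints) of a profile."""
    pts = [0, len(profile) - 1]
    for k in range(1, len(profile) - 1):
        if (profile[k] - profile[k - 1]) * (profile[k + 1] - profile[k]) < 0:
            pts.append(k)
    return sorted(pts)

def visibility_index(profile, targets):
    """Fraction of mutually visible pairs among the chosen targets."""
    pairs = [(a, b) for n, a in enumerate(targets) for b in targets[n + 1:]]
    return sum(visible(profile, a, b) for a, b in pairs) / len(pairs)

# Restricting observers/targets to critical points shrinks the pair count
# from O(n^2) over all cells to the (usually much smaller) feature set.
profile = [0, 1, 2, 1, 0, 1, 2, 1, 0]
feats = critical_points(profile)
print(feats, visibility_index(profile, feats))
```

In two dimensions the same principle applies: the peaks, pits and passes of the surface network dominate the visibility structure, so evaluating sight lines only between them approximates the full visibility index at a fraction of the cost.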
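The directory-structure analogy above can likewise be sketched. The tree, the sizes and the labelling rule below are entirely hypothetical; they merely illustrate how local storage maxima and minima could play the role of peaks and pits on a "storage surface".

```python
# Hypothetical directory tree: path -> (size in MB, child paths).
tree = {
    "/": (0, ["/bin", "/home", "/tmp"]),
    "/bin": (120, []),
    "/home": (900, ["/home/a", "/home/b"]),
    "/home/a": (40, []),
    "/home/b": (300, []),
    "/tmp": (5, []),
}

def neighbours(tree, node):
    """Parent and children of a node — the 'adjacent' surface positions."""
    nbrs = list(tree[node][1])
    nbrs += [name for name, (_, kids) in tree.items() if node in kids]
    return nbrs

def classify(tree):
    """Label each directory as a peak, pit or slope of the storage surface."""
    labels = {}
    for node, (size, _) in tree.items():
        sizes = [tree[n][0] for n in neighbours(tree, node)]
        if size > max(sizes):
            labels[node] = "peak"    # larger than all neighbours
        elif size < min(sizes):
            labels[node] = "pit"     # smaller than all neighbours
        else:
            labels[node] = "slope"
    return labels

print(classify(tree))
```

As in the text, maintaining the topological relationships (here, the parent-child adjacency) is what makes the critical-point vocabulary meaningful for a non-spatial dataset.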
References

Acevedo, W., and Masuoka, P., 1997. Time-series animation techniques for visualizing urban growth, Computers & Geosciences, 23, 423–435.
Albert, R., and Barabasi, A.-L., 2002. Statistical mechanics of complex networks, Reviews of Modern Physics, 74, 47–97.
Ardiansyah, P.O.D., and Yokoyama, R., 2002. DEM generation from contour lines based on the steepest slope segment chain and a monotone interpolation function, ISPRS Journal of Photogrammetry and Remote Sensing, 57, 86–101.
Bajaj, C.L., and Schikore, D.R., 1996. Visualization of Scalar Topology for Structural Enhancement, Technical Report CSD-TR-96-006, Department of Computer Sciences, Purdue University.
Bajaj, C.L., and Schikore, D.R., 1998. Topology preserving data simplification with error bounds, Computers & Graphics, 22, 3–12.
Batty, M., and Longley, P., 1994. Fractal Cities, Academic Press, London and San Diego.
Benta, I.M., 2004. AGNA 2.0 User Manual, Department of Sociology, University of Cork.
Bertin, J., 1967. Sémiologie Graphique, Mouton, Paris.
Bremer, P.-T., Edelsbrunner, H., Hamann, B., and Pascucci, V., 2003. A multi-resolution data structure for two-dimensional Morse-Smale functions, In: G. Turk, J.J. van Wijk, and R.J. Moorhead (eds.), IEEE Visualization 2003, Seattle, WA, 139–146.
Burrough, P.A., and McDonnell, R.A., 1998. Principles of Geographical Information Systems, Oxford University Press, Oxford.
Cayley, A., 1859. On contour and slope lines, The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, XVIII, 264–268.
Crampton, J.W., 2002. Interactivity types in geographic visualization, Cartography and Geographic Information Science, 29, 85–98.
Dakowicz, M., and Gold, C., 2002. Visualizing terrain models from contours: plausible ridge, valley and slope estimation, In: Proceedings of the International Workshop on Visualization and Animation of Landscape.
De Floriani, L., Marzano, P.K., and Puppo, E., 1994.
Line-of-sight communication on terrain models, International Journal of Geographical Information Systems, 8, 329–342.
De Floriani, L., and Magillo, P., 2001. Multiresolution meshes and data structures, In: Principles of Multiresolution in Geometric Modeling (PRIMUS 01 summer school), 193–234.
De Saint-Venant, 1852. Surfaces à plus grande pente constituées sur des lignes courbes, Bulletin de la Société Philomathique de Paris.
Desbrun, M., Meyer, M., Schröder, P., and Barr, A.H., 2000. Anisotropic feature-preserving denoising of height fields and bivariate data, In: Proceedings of Graphics Interface, Montreal, 145–152.
DiBiase, D., MacEachren, A.M., Krygier, J.B., and Reeves, C., 1992. Animation and the role of map design in scientific visualization, Cartography and Geographic Information Systems, 19, 201–204.
Dikau, R., 1990. Geomorphic landform modelling based on hierarchy theory, In: Proceedings of the 4th International Symposium on Spatial Data Handling, 230–239.
Dransch, D., 2000. The use of different media in visualizing spatial data, Computers & Geosciences, 26, 5–9.
Dykes, J.A., 1997. Exploring spatial data representation with dynamic graphics, Computers & Geosciences, 23, 345–370.
Earnshaw, R.A., and Watson, D., 1993. Animation and Scientific Visualization: Tools and Applications, Academic Press, London.
Edelsbrunner, H., Harer, J., and Zomorodian, A., 2003. Hierarchical Morse-Smale complexes for piecewise linear 2-manifolds, Discrete & Computational Geometry, 30, 87–107.
Emmer, N.N.M., 2001. Determining the effectiveness of animations to represent geospatial temporal data: a first approach, In: Proceedings of the 4th Association of Geographic Information Laboratories in Europe Conference on Geographic Information Science, Brno, 585–589.
Evans, I.S., 1980. An integrated system of terrain analysis and slope mapping, Zeitschrift für Geomorphologie, 36, 274–295.
Feuchtwanger, M., and Poiker, T.K., 1987. The surface patchwork – an intelligent approach to terrain modelling, In: Proceedings of the 5th Annual North West Conference on Surveying and Mapping, Whistler.
Fisher, P.F., 1991. First experiments in viewshed uncertainty: the accuracy of the viewshed area, Photogrammetric Engineering & Remote Sensing, 57, 1321–1327.
Fisher, P.F., 1992. First experiments in viewshed uncertainty: simulating the fuzzy viewshed, Photogrammetric Engineering & Remote Sensing, 58, 345–352.
Fisher, P.F., 1993. Algorithm and implementation uncertainty in viewshed analysis, International Journal of Geographical Information Systems, 7, 331–347.
Fisher, P., Wood, J., and Cheng, J., 2004. Where is Helvellyn? Fuzziness of multiscale landscape morphometry, Transactions of the Institute of British Geographers, 29, 106–128.
Fowler, R.J., and Little, J.J., 1979. Automatic extraction of irregular network digital terrain models, Computer Graphics, 13, 199–207.
Franklin, W.M., 2000. Approximating visibility, In: Proceedings of the 1st International Conference on Geographic Information Science, Savannah, GA, 126–138.
Franklin, W.M., Ray, C.K., and Mehta, S., 1994. Geometric Algorithms for Siting of Air Defense Missile Batteries, Technical Report on Contract No. DAAL03-86-D-0001, Battelle, Columbus Division, Columbus, OH, 116.
Frank, A., Palmer, B., and Robinson, V., 1986. Formal methods for the accurate definition of some fundamental terms in physical geography, In: Proceedings of the 2nd International Symposium on Spatial Data Handling, Seattle, WA, 583–599.
Gaede, V., and Günther, O., 1998. Multidimensional access methods, ACM Computing Surveys, 30, 170–231.
Gahegan, M., 1999. Four barriers to the development of effective exploratory visualization for the geosciences, International Journal of Geographical Information Science, 13, 289–309.
Gallant, J., and Hutchinson, M.F., 1996. Towards an understanding of landscape scale and structure, In: Proceedings of the 3rd International Conference/Workshop on Integrating GIS and Environmental Modelling (on CD-ROM).
Gauvin, R., 2004. Terrain Classification of a High Resolution Digital Elevation Model: Segmentation of Terrain Derivatives into Object Entities. http://members.shaw.ca/rjgauvin/gettingstarted/ecog1.htm (accessed 28 June 2004).
Gershon, N.D., 1992. Visualization of fuzzy data using generalized animation, In: Proceedings of IEEE Visualization 1992, Boston, MA, 268–273.
Gomes, J., Costa, B., Darsa, L., and Velho, L., 1998. Warping and Morphing of Graphical Objects, Morgan Kaufmann, San Francisco, CA.
Griffiths, H.B., 1981.
Surfaces, 2nd Edition, Cambridge University Press, Cambridge.
Guibas, L.J., and Stolfi, J., 1985. Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams, ACM Transactions on Graphics, 4, 74–123.
Heckbert, P., and Garland, M., 1997. Survey of polygonal surface simplification algorithms, SIGGRAPH 1997 Course on Multiresolution Surface Modelling.
Heller, M., 1990. Triangulation algorithms for adaptive terrain modelling, In: Proceedings of the 4th International Symposium on Spatial Data Handling, Columbus, 163–174.
Helman, J.L., and Hesselink, L., 1991. Visualizing vector field topology in fluid flows, IEEE Computer Graphics & Applications, 11, 36–46.
Hutchinson, M., 1988. Calculation of hydrologically sound digital elevation models, In: Proceedings of the 3rd International Symposium on Spatial Data Handling, Sydney, 17–19.
Hutchinson, M., 1989. A new procedure for gridding elevation and stream line data with automatic removal of pits, Journal of Hydrology, 106, 211–232.
Hutchinson, M., 1996. A locally adaptive approach to the interpolation of digital elevation models, In: Proceedings of the 3rd International Conference/Workshop on Integrating GIS and Environmental Modelling, Santa Fe.
Hutchinson, M., and Gallant, J., 2000. Representation of terrain, In: P.A. Longley, M.F. Goodchild, D.J. Maguire, and D.W. Rhind (eds.), Geographical Information Systems, Vol. 1, John Wiley and Sons, 105–124.
Izraelevitz, D., 2003. A fast algorithm for approximate viewshed computation, Photogrammetric Engineering and Remote Sensing, 69, 767–774.
Johnson, C.K., Burnett, M.N., and Dunbar, W.D., 1999. Crystallographic topology and its applications, In: P. Bourne, and K. Watenpaugh (eds.), Crystallographic Computing 7: Proceedings from the Macromolecular Crystallography Computing School, Oxford University Press, Oxford.
Kerlick, G.D., 1990. Moving iconic objects in scientific visualization, In: IEEE Conference on Visualization '90, San Francisco, CA, 124–129.
Kidner, D.B., Dorey, M., and Smith, D., 1999. What's the point? Interpolation and extrapolation with a regular grid DEM, In: Proceedings of GeoComputation 1999, Fredericksburg, VA (on CD-ROM).
Kidner, D.B., Eynon, C., and Smith, D., 2001. Multiscale terrain databases, In: Proceedings of GISRUK 2001, Glamorgan, 151–153.
Kim, Y.-H., Rana, S., and Wise, S., 2002. Exploring multiple viewshed analysis using terrain features and optimisation techniques, In: Proceedings of GISRUK 2002, University of Sheffield, Sheffield.
Koenderink, J.J., 1984. The structure of images, Biological Cybernetics, 50, 363–370.
Koenderink, J.J., and van Doorn, A.J., 1979. The structure of two-dimensional scalar fields with applications to vision, Biological Cybernetics, 33, 151–158.
Koenderink, J.J., and van Doorn, A.J., 1998. The structure of relief, Advances in Imaging and Electron Physics, 103, 66–150.
Koussoulakou, A., 1990. Computer-Assisted Cartography for Monitoring Spatiotemporal Aspects of Urban Air Pollution, Ph.D. thesis, Delft University Press, Delft.
Kraus, M., and Ertl, T., 2004. Topology-guided downsampling and volume visualisation, In: S. Rana (ed.), Topological Data Structures for Surfaces: An Introduction to Geographical Information Science, John Wiley and Sons, 131–142.
Krygier, J.B., Reeves, C., DiBiase, D.W., and Cupp, J., 1997. Design, implementation and evaluation of multimedia resources for geography and earth science education, Journal of Geography in Higher Education, 21, 17–39.
Kumler, M.P., 1994. An intensive comparison of triangulated irregular networks (TINs) and digital elevation models (DEMs), Cartographica, 31, 1–48.
Kweon, I.S., and Kanade, T., 1994. Extracting topographic terrain features from elevation maps, Computer Vision, Graphics, and Image Understanding, 59, 171–182.
Lake, M., and Woodman, P., 2003. Visibility studies in archaeology: a review and case study, Environment and Planning B, 30, 689–707.
Lee, J., 1991. Analyses of visibility sites on topographic surfaces, International Journal of Geographical Information Systems, 5, 413–429.
Lee, J., 1992. Visibility dominance and topographical features on digital elevation models, In: Proceedings of the 5th International Symposium on Spatial Data Handling, Charleston, SC, 622–631.
Levy, M.A., Pollack, H.N., and Pomeroy, P.W., 1970. Motion picture of the seismicity of the earth, 1961–1971, Bulletin of the Seismological Society of America, 60, 1015–1016.
Lindeberg, T., 1994. Scale-space Theory in Computer Vision, Kluwer Academic Publishers, Dordrecht.
Lucas, C., 2004. Self-Organizing Systems FAQ. http://www.calresco.org/sos/sosfaq.htm (accessed 16 June 2004).
MacEachren, A.M., 1995. How Maps Work: Representation, Visualization, and Design, Guilford Press, New York.
Mackey, B.G., Mullen, I.C., Baldwin, K.C., Gallant, J.C., Sims, R.A., and McKinney, D.W., 2000. Toward a spatial model of boreal forest ecosystems: the role of digital terrain analysis, In: J.P. Wilson and J.C. Gallant (eds.),
Terrain Analysis: Principles and Applications, John Wiley and Sons, 391–422.
Mark, D.M., 1977. Topological Randomness of Geomorphic Surfaces, Technical Report No. 15, Geographic Data Structures Project, ONR Contract N00014-75-C-0886, 138.
Mark, D.M., 1979. Topology of ridge patterns: randomness and constraints, Geological Society of America Bulletin, 90, 164–172.
Maxwell, J.C., 1870. On hills and dales, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Series 4, 40, 421–427.
McClean, C., and Evans, I., 2001. Non-fractal behaviour of real land surfaces, In: Proceedings of GISRUK 2001, Glamorgan, UK, 408–410.
McCloud, S., 1993. Understanding Comics, Kitchen Sink Press, Northampton, MA.
Milgram, S., 1967. The small world problem, Psychology Today, 2, 60–67.
Miller, G.A., 1956. The magical number seven, plus or minus two: some limits on our capacity for processing information, The Psychological Review, 63, 81–97.
Misund, G., 1997. Varioscale TIN based surfaces, In: Advances in GIS Research II, Taylor and Francis, London, 353–364.
Morrison, J.B., Betrancourt, M., and Tversky, B., 2000. Animation: does it facilitate learning? In: Proceedings of the American Association for Artificial Intelligence Spring Symposium on Smart Graphics, Stanford, 53–60.
Morse, M., 1925. Relations between the critical points of a real function of n independent variables, Transactions of the American Mathematical Society, 27, 345–396.
Morse, S.P., 1968. A mathematical model for the analysis of contour-line data, Journal of the Association for Computing Machinery, 15, 205–220.
Morse, S.P., 1969. Concepts of use in contour map processing, Communications of the ACM, 12, 147–152.
Nackman, L.R., 1984. Two-dimensional critical point configuration graphs, IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 442–450.
Natarajan, V., and Edelsbrunner, H., 2004. Simplification of three-dimensional density maps, IEEE Transactions on Visualisation and Computer Graphics, 10, 587–597.
Ni, X., Garland, M., and Hart, J.C., 2004. Fair Morse functions for extracting the topological structure of a surface mesh, In: Proceedings of SIGGRAPH 2004, Los Angeles, CA, 611–620.
O'Sullivan, D., and Turner, A., 2001. Visibility graphs and landscape visibility analysis, International Journal of Geographical Information Science, 15, 221–237.
Ogao, P.J., and Blok, C.A., 2001. Cognitive aspects on the representation of dynamic environmental phenomena using animations, In: C. Rautenstrauch, and S.
Patig (eds.), Environmental Information Systems in Industry and Public Administration, Idea Group Publishers, Harrisburg, PA.
Ogao, P.J., and Kraak, M.-J., 2001. Geospatial data exploration using interactive and intelligent cartographic animations, In: Proceedings of the International Cartographic Conference, Beijing, 2649–2657.
Okabe, A., and Masuyama, A., 2004. A method for measuring structural similarity among activity surfaces and its application to the analysis of urban population surfaces in Japan, In: S. Rana (ed.), Topological Data Structures for Surfaces: An Introduction to Geographical Information Science, John Wiley and Sons, 105–120.
Openshaw, S., Waugh, D., and Cross, A., 1994. Some ideas about the use of map animation as a spatial analysis tool, In: H.M. Hearnshaw, and D.J. Unwin (eds.), Visualization in Geographical Information Systems, John Wiley & Sons, Chichester.
Palmer, B., 1984. Symbolic feature analysis and expert systems, In: Proceedings of the 1st International Symposium on Spatial Data Handling, Zurich, 465–478.
Pascucci, V., 2004. Topology diagrams of scalar fields in scientific visualisation, In: S. Rana (ed.), Topological Data Structures for Surfaces: An Introduction to Geographical Information Science, John Wiley and Sons, 121–130.
Perona, P., and Malik, J., 1990. Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, 629–639.
Peschier, J., 1996. Characterisation of topographic surfaces on a triangulated irregular network, http://www.jarno.demon.nl/gavh.htm (accessed 2 July 2004).
Peterson, M.P., 1993. Interactive cartographic animation, Cartography and Geographical Information Systems, 20, 40–44.
Peucker, T.K., and Douglas, D.D., 1975. Detection of surface-specific points by local parallel processing of discrete terrain elevation data, Computer Graphics and Image Processing, 4, 375–387.
Pfaltz, J.L., 1976. Surface networks, Geographical Analysis, 8, 77–93.
Pfaltz, J.L., 1978. Surface Networks, an Analytic Tool for the Study of Functional Surfaces, Final Report on NSF Grant DCR-74-13353, 99.
Rana, S., 1998. Extraction of surface topology from digital elevation models, MSc (GIS) dissertation, Department of Geography, University of Leicester, Leicester.
Rana, S., 2000. Experiments on the generalisation and visualisation of surface networks, In: Proceedings of GISRUK 2000, University of York, York.
Rana, S., 2003a. Fast approximation of visibility dominance using topographic features as targets and the associated uncertainty, Photogrammetric Engineering and Remote Sensing, 69, 881–888.
Rana, S., 2003b. Visibility analysis, Environment and Planning B, 30, 641–642.
Rana, S., 2004. Topological Data Structures for Surfaces: An Introduction to Geographical Information Science, John Wiley & Sons, Chichester.
Rana, S., and Dykes, J., 2003. A framework for augmenting the visualisation of dynamic raster surfaces, Information Visualization, 2, 126–139.
Rana, S., and Morley, J., 2002. Surface Networks, Working Paper Series No. 43, Centre for Advanced Spatial Analysis, University College London, 72.
Raper, J., 2000. Multidimensional Geographic Information Systems, Taylor & Francis, London.
Reeb, G., 1946. Sur les points singuliers d'une forme de Pfaff complètement intégrable ou d'une fonction numérique (On the singular points of a completely integrable Pfaff
form or of a numerical function), Comptes Rendus de l'Académie des Sciences, Paris, 222, 847–849.
Reech, M., 1858. Propriété générale des surfaces fermées, Journal de l'École Polytechnique, 37, 169–178.
Salomon, D., 2004. Data Compression: The Complete Reference, Springer, New York.
Schneider, B., and Wood, J., 2004. Construction of metric surface networks from digital elevation models, In: S. Rana (ed.), Topological Data Structures for Surfaces: An Introduction to Geographical Information Science, John Wiley and Sons, 53–70.
Scott, P.J., 2004. An application of surface networks in surface texture, In: S. Rana (ed.), Topological Data Structures for Surfaces: An Introduction to Geographical Information Science, John Wiley & Sons, Chichester.
Shepherd, I.D.H., 1995. Putting time on the map: dynamic displays in data visualization and GIS, In: P.F. Fisher (ed.), Innovations in GIS 2, Taylor & Francis, London.
Shinagawa, Y., Kunii, T.L., and Kergosien, Y.L., 1991. Surface coding based on Morse theory, IEEE Computer Graphics & Applications, 11, 66–78.
Sibson, R., 1981. A brief description of natural neighbour interpolation, In: V. Barnett (ed.), Interpreting Multivariate Data, John Wiley & Sons, Chichester.
Slocum, T.A., Robeson, S.H., and Egbert, S.L., 1990. Traditional versus sequenced choropleth maps: an experimental investigation, Cartographica, 27, 67–88.
Slocum, T., Yoder, S.C., Kessler, F.C., and Sluter, R.S., 2001. MapTime: software for exploring spatio-temporal data associated with point locations, Cartographica, 37, 14–32.
Speight, J.G., 1976. Numerical classification of landform elements from air photo data, Zeitschrift für Geomorphologie, 25, 154–168.
Takahashi, S., Ikeda, T., Shinagawa, Y., Kunii, T.L., and Ueda, M., 1995. Algorithms for extracting correct critical points and constructing topological graphs from discrete geographical elevation data, Computer Graphics Forum, 14, C181–C192.
Takahashi, S., 1996.
Critical point based modeling for smooth surfaces, Unpublished Ph.D. thesis, Department of Information Science, Faculty of Science, University of Tokyo.
Tang, L., 1992. Automatic extraction of specific geomorphological elements from contours, In: Proceedings of the 5th International Symposium on Spatial Data Handling, Charleston, SC, 554–566.
Tate, N.J., and Wood, J., 2001. Fractals and scale dependencies in topography, In: N.J. Tate and P. Atkinson (eds.), Modelling Scale in Geographical Information Science, John Wiley and Sons, Chichester, UK, 35–52.
Tobler, W.R., 1970. A computer movie simulating urban growth in the Detroit region, Economic Geography, 46, 234–240.
Turner, A., 2001. Depthmap: a program to perform visibility graph analysis, In: Proceedings of the 3rd International Symposium on Space Syntax, Georgia Institute of Technology, GA, 31.1–31.9.
URL #1: ASPRS conference on Terrain Data and Applications – Making the Connection, Annual Conference. http://www.asprs.org/terrain_data2003/index.htm (accessed 22 August 2004).
URL #2: Center for International Earth Science Information Network, AIDS Data Animation Project. http://www.ciesin.org/datasets/cdc-nci/cdc-nci.html (accessed 22 August 2004).
URL #3: 3D Studio MAX. http://www.discreet.com/products/3dsmax/ (accessed 22 August 2004).
URL #4: Augmenting Visualization of Dynamic Raster Surfaces. http://www.soi.city.ac.uk/~jad7/snv/ (accessed 1 April 2003).
Upson, C., and Kerlick, D., 1989. Volumetric visualization techniques, In: '2D and 3D Visualization Workshop', 2nd Association for Computing Machinery SIGGRAPH Symposium on User Interface Software and Technology, Williamsburg, VA, Tutorial No. 13, 86 pp.
von Minusio, D.M., 2002. Models and Experiments for Quality Handling in Digital Terrain Modelling, Unpublished Ph.D. thesis, Department of Geography, University of Zurich, Zurich.
Wang, J., Robinson, G.J., and White, K., 2000a. Estimating surface net solar radiation by use of Landsat-5 TM and digital elevation models, International Journal of Remote Sensing, 21, 31–43.
Wang, J., Robinson, G.J., and White, K., 2000b. Generating viewsheds without using sightlines, Photogrammetric Engineering & Remote Sensing, 66, 87–90.
Ware, C., 2000. Information Visualization: Perception for Design, Morgan Kaufmann, San Francisco, CA.
Warntz, W., 1966. The topology of a socio-economic terrain and spatial flows, Papers of the Regional Science Association, 17, 47–61.
Watts, D.J., and Strogatz, S.H., 1998. Collective dynamics of 'small-world' networks, Nature, 393, 440–442.
Weibel, R., 1992.
Models and experiments for adaptive computer-assisted terrain generalization, Cartography and Geographic Information Systems, 19, 133–153.
Weibel, R., and Dutton, G., 1999. Generalising spatial data and dealing with multiple representations, In: P.A. Longley, M.F. Goodchild, D.J. Maguire, and D.W. Rhind (eds.), Geographical Information Systems: Volume 1 – Principles and Technical Issues, John Wiley & Sons, New York.
Werner, C., 1988. Formal analysis of ridge and channel patterns in maturely eroded terrain, Annals of the Association of American Geographers, 78, 253–270.
Witkin, A.P., 1983. Scale-space filtering, In: Proceedings of the 8th International Joint Conference on Artificial Intelligence, Karlsruhe, 1019–1022.
Wolf, G.W., 1984. A mathematical model of cartographic generalization, Geo-Processing, 2, 271–286.
Wolf, G.W., 1988. Generalisierung topographischer Karten mittels Oberflaechengraphen, Dissertation, Department of Geography, University of Klagenfurt, Klagenfurt, 250.
Wolf, G.W., 1989. A practical example of cartographic generalization using weighted surface networks, In: F. Dollinger, and J. Strobl (eds.), Angewandte Geographische Informationstechnologie, Department of Geography, University of Salzburg, Salzburg.
Wolf, G.W., 1990. Metric surface networks, In: Proceedings of the 4th International Symposium on Spatial Data Handling, Zurich, 844–856.
Wolf, G.W., 1992. Hydrologic applications of weighted surface networks, In: Proceedings of the 5th International Symposium on Spatial Data Handling, Charleston, SC, 567–579.
Wolf, G.W., 1993. Data structures for the topological characterization of topographic surfaces, In: D. Pumain (ed.), Systèmes d'information géographique et systèmes experts, Sixième Colloque Européen de Géographie Théorique et Quantitative, GIP RECLUS, Montpellier.
Wood, J., 1996. The Geomorphological Characterisation of Digital Elevation Models, Ph.D. thesis, Department of Geography, University of Leicester, Leicester.
Wood, J., 1998. Modelling the continuity of surface form using digital elevation models, In: Proceedings of the 8th International Symposium on Spatial Data Handling, Vancouver, 725–736.
Wood, J., 1999. Visualisation of scale dependencies in surface models, In: Proceedings of the International Cartographic Association Conference, Ottawa.
Wood, J., and Rana, S., 2000.
Constructing weighted surface networks for the representation and analysis of surface topology, In: Proceedings of the 5th International Conference on GeoComputation, Chatham, UK (on CD-ROM).
Zevenbergen, L.W., and Thorne, C.R., 1987. Quantitative analysis of land surface topography, Earth Surface Processes and Landforms, 12, 47–56.
Appendix: Surface Network Data Structure Storage Formats

Both the SNG and SNM file formats are plain ASCII, space-delimited formats. The structure of each file format is as follows.

A.1 SNG file format

Each .SNG file contains the metric coordinates of the critical points and whether each is internal or surrounding; it also contains the topological connectivity of the ridges and channels. A point record is stored in six space-separated fields:

PointType ID x-coord. y-coord. z-coord. Inside/Surrounding

The valid values for the PointType field are X for pits, Y for passes, and Z for peaks. The ID field can be any unique alphanumeric identifier without spaces or special characters (such as %, !, *). The coordinate fields can store any number with arbitrary precision. The last field is relevant only for pits and peaks: it is either 0, meaning the point is surrounding, or 1, meaning the point is internal. The points can be stored in any order, with the record for each point on a separate line. The ridges and channels are stored in the following manner:

Type Pass-ID Pit1-ID Pit2-ID Peak1-ID Peak2-ID

The first field is fixed for all edges and has the value E. The remaining fields identify the ridges and channels connected to a pass; no left/right convention is followed in their ordering. There is no limit on the number of nodes and edges. An example of a .SNG file is as follows:

Y y4 1.61 0.58 1150
X x5 0.77 0.45 1000 0
Z z6 2.74 0.35 2200 1
E y1 x1 x2 z1 z2

A.2 SNM file format

The SNM file format is essentially an extended version of the SNG format: it follows the SNG description, but additionally records the intermediate ordinary nodes that make up each ridge or channel. The format of an SNM file is as follows:
PointType ID x-coord. y-coord. z-coord. Inside/Surrounding
Type Pass-ID Pit-ID
Real/Artificial x-coord. y-coord. z-coord.
:
END
Type Pass-ID Peak-ID
Real/Artificial x-coord. y-coord. z-coord.
:
END

Each edge is described separately: the header names its end-point critical nodes, and the following lines give the x, y, z coordinates of the intermediate ordinary nodes. The Type field's value is either 1 (channel) or 2 (ridge). The Real/Artificial field's value is either 0 (artificial) or 1 (real). The SNM file format is particularly useful for the reconstruction of a continuous surface. It may even be desirable to distribute an SNM file as a TIN by triangulating the ridge and channel edges as hard break lines; additional morphological details, for example features (e.g. faults, scarps) left out by the surface network representation, can then be added to the TIN as ordinary points or edges during triangulation.
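A reader for the SNG format can be sketched directly from the field layout in Section A.1. This is a minimal parser with no error handling; the sample is the one given in the appendix.

```python
def parse_sng(text):
    """Parse the plain ASCII, space-delimited SNG format (Section A.1).

    Returns (points, edges): points maps ID -> (type, x, y, z, internal),
    where internal is None for passes, and each edge is the tuple
    (pass, pit1, pit2, peak1, peak2).
    """
    points, edges = {}, []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "E":
            # E Pass-ID Pit1-ID Pit2-ID Peak1-ID Peak2-ID
            edges.append(tuple(fields[1:6]))
        else:
            ptype, pid = fields[0], fields[1]   # X pit, Y pass, Z peak
            x, y, z = map(float, fields[2:5])
            # The internal (1) / surrounding (0) flag is present only for
            # pits and peaks; passes omit it.
            internal = bool(int(fields[5])) if len(fields) > 5 else None
            points[pid] = (ptype, x, y, z, internal)
    return points, edges

sample = """\
Y y4 1.61 0.58 1150
X x5 0.77 0.45 1000 0
Z z6 2.74 0.35 2200 1
E y1 x1 x2 z1 z2
"""
points, edges = parse_sng(sample)
```

Because records are line-oriented and order-independent, the parser needs no state beyond the two accumulators.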
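The edge blocks of an SNM file (Section A.2) could be read in a similar way. The sketch below handles only the edge section; the sample block and its coordinates are invented for illustration.

```python
def parse_snm_edges(lines):
    """Parse SNM edge blocks: a 'Type Pass-ID End-ID' header, then one
    'Real/Artificial x y z' line per intermediate node, closed by END."""
    edges, current = [], None
    for line in lines:
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "END":
            edges.append(current)
            current = None
        elif current is None:
            # Type is 1 for a channel, 2 for a ridge.
            current = {"type": "channel" if fields[0] == "1" else "ridge",
                       "pass": fields[1], "end": fields[2], "nodes": []}
        else:
            # Real/Artificial flag is 1 (real) or 0 (artificial).
            real = fields[0] == "1"
            x, y, z = map(float, fields[1:4])
            current["nodes"].append((real, x, y, z))
    return edges

# Invented sample: one ridge from pass y1 to peak z1 with two ordinary nodes.
block = ["2 y1 z1",
         "1 1.00 2.00 1500",
         "0 1.20 2.10 1600",
         "END"]
ridges = parse_snm_edges(block)
```

The ordinary-node lists recovered this way are exactly what a TIN builder would consume as hard break lines when reconstructing the surface, as described above.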
