AUTOMATIC ROAD EXTRACTION IN URBAN SCENES
CHAPTER 1
ABSTRACT:
This paper focuses on internal quality measures for automatic road extraction from aerial images
taken over urban areas.
The motivation of this work is twofold:
Firstly, any automatic system should provide the user with a small number of values indicating
the reliability of the obtained results. This is often referred to as "self-diagnosis" and is in
particular a crucial part of automatic image understanding systems.
Secondly, and more important in the scope of our research, a system designed for the extraction
of man-made objects in complex environments (like roads in urban areas) inherently implies
many decisions during the extraction process.
Such decisions are highly facilitated when both low level features and high level objects are
attached with confidence values indicating their relevance for further processing. Our concept
for defining evaluation criteria, from which the confidence values can be calculated, is to split
the components of a semantic object model into two different types. The model components of the first type are
used for extracting features, i.e., parts of the object, and the components of the other type serve
as criteria for evaluating the quality of the extracted features. For guaranteeing an unbiased
evaluation one has to ensure that model components belonging to different types are independent
from each other (at least theoretically). We illustrate this concept by our system for road
extraction in urban areas. Examples are given for both low level features like lines and ribbons as
well as higher level features like lanes and road segments.
1.1 INTRODUCTION:
Due to its utility for a variety of applications, road extraction from digital imagery has been
intensively studied in the computer vision and remote sensing fields. Many techniques have been
proposed, which can be broadly classified as semi-automatic approaches and automatic
approaches. The main criterion for the classification is whether the approach requires human
intervention. In semi-automatic approaches, an operator provides information such as starting
points or starting directions, which provide critical assistance in tracking roads. Without human
intervention, an approach is considered automatic. Objects appear in natural scenes as groups of
similar sensory features. Gestalt psychology reveals a set of principles guiding the grouping
process based on local features. Elements tend to be perceptually grouped if they are close to
each other (proximity), similar to one another (similarity), form a smooth and continuous curve
(good continuation), or have similar temporal behaviors (common fate). Since roads in satellite
imagery tend to have uniform features that are distinct from neighboring regions, it is reasonable
to expect that they can be automatically extracted using Gestalt grouping principles.
Dynamical systems represent a promising approach to object segmentation. In particular,
research on automatic road extraction in urban areas is mainly motivated by the importance of
geographic information systems (GIS) and the need for data acquisition and update for GIS. This
demand is strikingly documented in the survey on 3D city models initiated by the European
Organization for Experimental Photogrammetric Research (OEEPE, now called EuroSDR) a few
years ago (Fuchs et al., 1998). Applications of road data of urban areas include analyses and
simulations of traffic flow, estimation of air and noise pollution, street maintenance, etc. From a
scientific perspective, the extraction of roads in complex environments is one of the challenging
issues in photogrammetry and computer vision, since many tasks related to automatic scene
interpretation are involved. Factors greatly influencing the scene complexity are, for instance, the
number of different objects, the amount of their interrelations, and the variability of both.
The work presented in this paper focuses on the development of internal quality measures for
automatic road extraction. The motivation of this specific aspect within an object extraction
system is twofold: Firstly, any automatic system should provide the user with some values
indicating the reliability of the obtained results. This is often referred to as "self-diagnosis"
(Förstner, 1996), which is a crucial part of automatic image understanding systems, in
particular when designed for practical applications. Secondly, and more important in the scope
of our research, confidence values also play an important role for the reliability of the extraction
itself, since they highly facilitate inevitable decisions which have to be made during the
extraction process. Consider, for instance, competing road hypotheses extracted from multiple
overlapping images which must be combined into a unique road network.
Moreover, each factor—and thus the scene complexity—is related to a particular scale. To
accommodate such factors, techniques like detailed semantic modelling, contextual
reasoning, and self-diagnosis have proven to be of great importance over the past years. It is
clear that these techniques must be integral parts of an extraction system to attain reasonably
good results over a variety of scenes. Roads in a digital image appear as thin and elongated
homogeneous regions. Since leaders are required to lie at the center of large homogeneous
regions, roads rarely contain leaders and are thus mostly segmented to the background. For
obtaining road segments, a region growing process is adopted following the initial
segmentation. We treat each background pixel as a starting point and perform pixel-based
growing, which repeatedly checks the neighbors of the starting point and adds them to the region
if they are similar to the starting point. If a pixel can grow into a large enough area, the area is
considered a new segment. Fig. 2(c) shows the result after this step, where road
segments are attained. It should be noted that a road can rarely be segmented as a single segment
in real images.
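A minimal sketch of the pixel-based region growing just described is given below. The similarity threshold, the minimum segment area, and the use of 4-connectivity are illustrative assumptions, not values from the paper.

```csharp
// Minimal sketch of the pixel-based region growing described above.
// Assumptions (not from the paper): the image is a 2D byte array of gray
// values, "similar" means an absolute gray-value difference below a
// threshold, and 4-connectivity is used.
using System;
using System.Collections.Generic;

class RegionGrowing
{
    const int GrayTolerance = 12;   // hypothetical similarity threshold
    const int MinRegionSize = 200;  // hypothetical minimum segment area

    // Grows a region from (startX, startY); returns its pixels, or null
    // if the region is too small to count as a road segment.
    static List<(int x, int y)> Grow(byte[,] img, bool[,] visited, int startX, int startY)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        byte seed = img[startY, startX];
        var region = new List<(int, int)>();
        var queue = new Queue<(int, int)>();
        queue.Enqueue((startX, startY));
        visited[startY, startX] = true;

        int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            region.Add((x, y));
            for (int k = 0; k < 4; k++)
            {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (visited[ny, nx]) continue;
                // Similarity test against the starting point, as in the text.
                if (Math.Abs(img[ny, nx] - seed) <= GrayTolerance)
                {
                    visited[ny, nx] = true;
                    queue.Enqueue((nx, ny));
                }
            }
        }
        return region.Count >= MinRegionSize ? region : null;
    }
}
```

A caller would invoke Grow once per not-yet-visited background pixel and keep every non-null result as a new segment.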
1.2 LITERATURE SURVEY:
AUTOMATIC COMPLETION AND EVALUATION OF ROAD NETWORKS
AUTHOR: Wiedemann, C. and Ebner, H.,
PUBLISH: In: International Archives of Photogrammetry and Remote Sensing, Vol. 33, part B.
EXPLANATION:
Road networks automatically extracted from digital imagery are in general incomplete and
fragmented. Completeness and topology of the extracted network can be improved by the use of
the global network structure which is a result of the function of roads as part of the transport
network. This is especially – but not exclusively – important for the extraction of roads from
imagery with low resolution (e.g., ground pixel size > 1 m) because only little local evidence for
roads can be extracted from those images. In this paper, an approach is described for the
completion of incompletely extracted road networks. The completion is done by generating link
hypotheses between points on the network which are likely to be connected based on the network
characteristics. The proposed link hypotheses are verified based on the image data. A
quantitative evaluation of the achieved improvements is given. New developments presented in
this paper are the generation of link hypotheses between different connected components of the
extracted road network and the introduction of measures for the evaluation of the network
topology and connectivity. Results of the improved completion scheme are presented and
evaluated based on the introduced measures. The results show the feasibility of the presented
completion approach as well as its limitations. Major advantages of the completion of road
networks are the improved network topology and connectivity of the extraction result. The new
measures prove to be very useful for the evaluation of network topology and connectivity.
AUTOMATIC EXTRACTION OF ROADS FROM AERIAL IMAGES BASED ON
SCALE SPACE AND SNAKES
AUTHOR: Laptev, I., Mayer, H., Lindeberg, T., Eckstein, W., Steger, C. and Baumgartner, A.,
PUBLISH: Machine Vision and Applications 12(1), pp. 22–31.
EXPLANATION:
We propose a new approach for automatic road extraction from aerial imagery with a model and
a strategy mainly based on the multi-scale detection of roads in combination with geometry-
constrained edge extraction using snakes. A main advantage of our approach is that it allows, for
the first time, a bridging of shadows and partially occluded areas using the heavily disturbed
evidence in the image. Additionally, it has only a few parameters to be adjusted. The road network
is constructed after extracting crossings with varying shape and topology. We show the
feasibility of the approach not only by presenting reasonable results but also by evaluating them
quantitatively based on ground truth. Aerial imagery is one of the standard data sources for the
acquisition of topographic objects, like roads or buildings for geographic information systems
(GIS). Road data in GIS are of major importance for applications such as car navigation or
guidance systems for police, fire services or forwarding agencies. Since the manual extraction of
road data is time consuming, there is a need for automation.
MODELLING CONTEXTUAL KNOWLEDGE FOR CONTROLLING ROAD EXTRACTION IN
URBAN AREAS
AUTHOR: Hinz, S., Baumgartner, A. and Ebner, H.,
PUBLISH: In: IEEE/ISPRS joint Workshop on Remote Sensing and Data Fusion over Urban
Areas.
EXPLANATION:
This paper deals with the role of context for automatic extraction of man-made structures from
aerial images taken over urban areas. Due to the intrinsic high complexity of urban scenes we
propose to guide the extraction by contextual knowledge about the objects. We represent this
knowledge explicitly by a context model. Based upon this model we are able to split the complex
task of object extraction in urban areas into smaller sub-problems. The novelty presented in this
contribution mainly relates to the fact that essential contextual information is gathered at the
beginning of the extraction, thus, it is available during the whole extraction, and furthermore, it
allows for automatically controlling the extraction process: for data consistency reasons, we use
the imagery as the only source for both gaining contextual information and extracting roads.
Advantages and remaining deficiencies of the proposed strategy are discussed.
CHAPTER 2
2.0 SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
The existing approach requires human intervention: in semi-automatic approaches, an operator
provides information such as starting points or starting directions, which provide critical
assistance in tracking roads. Without human intervention, an approach is considered automatic.
Objects appear in natural scenes as groups of similar sensory features. Gestalt psychology
reveals a set of principles guiding the grouping process based on local features. Elements tend to
be perceptually grouped if they are close to each other (proximity), similar to one another
(similarity), form a smooth and continuous curve (good continuation), or have similar temporal
behaviors (common fate). Since roads in satellite imagery tend to have uniform features that are
distinct from neighboring regions, it is reasonable to expect that they can be automatically
extracted using Gestalt grouping principles.
Dynamical systems represent a promising approach to object segmentation; in particular, Locally
Excitatory Globally Inhibitory Oscillator Networks (LEGION) build on the oscillatory correlation
theory, which asserts that oscillators corresponding to the pixels of the same object synchronize,
while oscillators corresponding to the pixels of different objects desynchronize. It has been shown
that a LEGION network built on relaxation oscillators can rapidly achieve synchronization within
a locally coupled oscillator assembly and desynchronization between different assemblies.
LEGION has been successfully applied to a number of scene analysis tasks, including image
segmentation and object selection.
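The oscillatory-correlation idea can be illustrated with a toy model. The sketch below is not the actual LEGION relaxation-oscillator dynamics; it uses simplified Kuramoto-style phase coupling, and the grid, thresholds, and coupling strength are all illustrative assumptions.

```csharp
// Toy illustration of oscillatory correlation (NOT the real LEGION
// relaxation-oscillator model): each pixel carries a phase; phases of
// similar neighbors attract, so pixels of one homogeneous region drift
// into synchrony while unrelated regions generally stay apart.
using System;

class OscillatoryCorrelationDemo
{
    static void Main()
    {
        // Tiny 1D "image": two homogeneous segments with a gray-value gap.
        double[] gray = { 10, 11, 10, 12, 90, 91, 89, 90 };
        var rng = new Random(0);
        double[] phase = new double[gray.Length];
        for (int i = 0; i < phase.Length; i++) phase[i] = rng.NextDouble() * 2 * Math.PI;

        const double coupling = 0.5;    // illustrative coupling strength
        const double similarity = 20.0; // couple only if gray difference is small

        for (int step = 0; step < 200; step++)
        {
            double[] next = (double[])phase.Clone();
            for (int i = 0; i < phase.Length; i++)
            {
                // Pull the phase toward similar neighbors only.
                for (int j = i - 1; j <= i + 1; j += 2)
                {
                    if (j < 0 || j >= phase.Length) continue;
                    if (Math.Abs(gray[i] - gray[j]) < similarity)
                        next[i] += coupling * Math.Sin(phase[j] - phase[i]);
                }
            }
            phase = next;
        }
        // After settling, phases within each segment agree; across the
        // gray-value gap they generally do not.
        for (int i = 0; i < phase.Length; i++)
            Console.WriteLine($"pixel {i}: gray {gray[i]}, phase {phase[i] % (2 * Math.PI):F2}");
    }
}
```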
2.1.1 DISADVANTAGES:
1) Image segmentation using a Locally Excitatory Globally Inhibitory Oscillator Network
(LEGION);
2) Medial axis extraction within each segment and selection of potential road segments;
3) Grouping of potential road segments using a LEGION model with alignment-dependent
connections based on extracted medial axis points. Here, well aligned segments are considered as
belonging to the same road.
2.2 PROPOSED SYSTEM:
Our proposed approach regarding "self-diagnosis" employs the role of internal evaluation in the
system for finding consistent interpretations of SAR (Synthetic Aperture Radar) scenes. In a
first step, different low level operators with specific strengths are applied to extract image
primitives, i.e., cues for roads, rivers, urban/industrial areas, relief characteristics, etc. Since a
particular operator may vote for more than one object class (e.g., road and river), a so-called focal
and non-focal element is defined for each operator (usually the union of real-world object
classes). The operator response is transformed into a confidence value characterizing the match
with its focal element. Then, all confidence values are combined in an evidence-theoretical
framework to assign unique semantics to each primitive, attached with a certain probability.
A feature adjacency graph is then constructed in which global knowledge about objects (road
segments form a network, industrial areas are close to cities, etc.) is introduced in the form of
object adjacency probabilities. Based on the probabilities of objects and their relations, the final
scene interpretation is formulated as a graph labelling problem that is solved by energy
minimization. Scene interpretation is based on a priori knowledge stored in a semantic net and
rules for controlling the extraction. Each instance of an object, e.g., a road axis, is hypothesized
top-down and internally evaluated by comparing the expected attribute values of the object with
the actual values measured in the image. Competing alternative hypotheses are stored in a search
tree until no further hypotheses can be formed.
Finally, the best interpretation is selected from the tree by an optimum path search. In the
following steps, the values are propagated and aggregated, eventually providing a basis for the
final decision about the presence of the desired object. This procedure may cause problems,
since the evaluation is purely based on local features while global object properties are neglected.
Therefore, some approaches introduce additional knowledge (e.g., roads forming a network or
fitting to "valleys" of a DSM) at a later stage, when more evidence for an object has been
acquired. All mentioned approaches have in common that they use one predefined model for
simultaneously extracting and evaluating roads.
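As a sketch of how per-operator confidence values could feed a graph labelling solved by energy minimization, consider the following. Everything here (the unary and pairwise energy terms, the ICM-style update, the label set) is an illustrative assumption, not the system's actual evidence-theoretical machinery.

```csharp
// Illustrative sketch (not the paper's actual algorithm): fuse operator
// confidences into unary costs per primitive and pick labels by a simple
// ICM-style energy minimization over a feature adjacency graph.
using System;
using System.Collections.Generic;

class SceneLabelling
{
    // adjacencyProb[a, b] encodes how plausible it is that a label-a
    // object is adjacent to a label-b object (assumed values).
    static readonly string[] Labels = { "road", "river", "urban" };

    static int[] LabelGraph(double[][] confidence, List<int>[] neighbors,
                            double[,] adjacencyProb)
    {
        int n = confidence.Length;
        int[] label = new int[n];
        // Initialize each primitive with its most confident class.
        for (int i = 0; i < n; i++)
            label[i] = ArgMax(confidence[i]);

        // ICM: repeatedly move each node to the label of lowest local energy.
        for (int sweep = 0; sweep < 10; sweep++)
        {
            bool changed = false;
            for (int i = 0; i < n; i++)
            {
                int best = label[i];
                double bestE = LocalEnergy(i, label[i], confidence, neighbors, adjacencyProb, label);
                for (int l = 0; l < Labels.Length; l++)
                {
                    double e = LocalEnergy(i, l, confidence, neighbors, adjacencyProb, label);
                    if (e < bestE) { bestE = e; best = l; }
                }
                if (best != label[i]) { label[i] = best; changed = true; }
            }
            if (!changed) break;
        }
        return label;
    }

    // Unary term: low confidence -> high cost. Pairwise term: implausible
    // adjacencies are penalized.
    static double LocalEnergy(int i, int l, double[][] conf, List<int>[] nb,
                              double[,] adj, int[] label)
    {
        double e = 1.0 - conf[i][l];
        foreach (int j in nb[i])
            e += 1.0 - adj[l, label[j]];
        return e;
    }

    static int ArgMax(double[] v)
    {
        int best = 0;
        for (int i = 1; i < v.Length; i++) if (v[i] > v[best]) best = i;
        return best;
    }
}
```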
2.2.1 ADVANTAGES:
 Our system tries to accommodate aspects having proved to be of great importance for
road extraction: By integrating a flexible, detailed road and context model one can
capture the varying appearance of roads and the influence of background objects such as
trees, buildings, and cars in complex scenes.
 The fusion of different scales helps to eliminate isolated disturbances on the road while
the fundamental structures are emphasized. This can be supported by considering the
function of roads connecting different sites and thereby forming a fairly dense and
sometimes even regular network.
 Hence, exploiting the network characteristics adds global information and, thus, the
selection of the correct hypotheses becomes easier. As basic data, our system expects
high resolution aerial images (resolution < 15 cm) and a reasonably accurate DSM with a
ground resolution of about 1 m. In the following, we sketch our road model and
extraction strategy.
2.3 HARDWARE & SOFTWARE REQUIREMENTS:
2.3.1 HARDWARE REQUIREMENT:
 Processor - Pentium IV
 Speed - 1.1 GHz
 RAM - 256 MB (min)
 Hard Disk - 20 GB
 Floppy Drive - 1.44 MB
 Key Board - Standard Windows Keyboard
 Mouse - Two or Three Button Mouse
 Monitor - SVGA
2.3.2 SOFTWARE REQUIREMENTS:
.NET
 Operating System : Windows XP or Win7
 Front End : Microsoft Visual Studio .NET 2008
 Script : C# Script
 Document : MS-Office 2007
CHAPTER 3
3.0 SYSTEM DESIGN:
Data Flow Diagram / Use Case Diagram / Flow Diagram:
 The DFD is also called a bubble chart. It is a simple graphical formalism that can be
used to represent a system in terms of the input data to the system, the various processing
carried out on these data, and the output data generated by the system.
 The data flow diagram (DFD) is one of the most important modeling tools. It is used to
model the system components. These components are the system process, the data used
by the process, an external entity that interacts with the system and the information flows
in the system.
 DFD shows how the information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and the
transformations that are applied as data moves from input to output.
 DFD is also known as a bubble chart. A DFD may be used to represent a system at any
level of abstraction. DFD may be partitioned into levels that represent increasing
information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA:
External sources or destinations, which may be people or organizations or other entities
DATA SOURCE:
Here the data referenced by a process is stored and retrieved.
PROCESS:
People, procedures, or devices that produce data. The physical component itself is not identified.
DATA FLOW:
Data moves in a specific direction from an origin to a destination. The data flow is a “packet” of
data.
MODELING RULES:
There are several common modeling rules when creating DFDs:
1. All processes must have at least one data flow in and one data flow out.
2. All processes should modify the incoming data, producing new forms of outgoing data.
3. Each data store must be involved with at least one data flow.
4. Each external entity must be involved with at least one data flow.
5. A data flow must be attached to at least one process.
3.1 ARCHITECTURE DIAGRAM:
[Architecture diagram: Input Videos feed Streaming Analytics; Context Regions (Urban, Forest) and Context Relations, together with Road Extraction Tools, drive Road Lane Segmentation, Fusion-Based Lane Segmentation, and the Construction and Completion of Lane Segmentation.]
3.2 DATAFLOW DIAGRAM:
LEVEL 1: [DFD: Input Videos flow into Streaming Analysis, producing Streaming Data.]
LEVEL 2: [DFD: Analysis of Context Relations covering Shadows/Occlusions, the Region of Interest, and Lane Segments.]
LEVEL 3: [DFD: Road Extraction Tools and Data Streaming Analytics, comprising Connection of Hypotheses, Detection and Removal of Inconsistencies, Merging of Lane Segmentation Results, Road Segmentation, Extraction of Markings, and Detection of Vehicle Outlines.]
UML DIAGRAMS:
3.3 USE CASE DIAGRAM:
3.4 CLASS DIAGRAM:
3.5 SEQUENCE DIAGRAM:
3.6 ACTIVITY DIAGRAM:
CHAPTER 4
4.0 IMPLEMENTATION AND ALGORITHM:
ROAD AND CONTEXT MODEL:
Our system tries to accommodate aspects having proved to be of great importance for road
extraction: By integrating a flexible, detailed road and context model one can capture the varying
appearance of roads and the influence of background objects such as trees, buildings, and cars in
complex scenes. The fusion of different scales helps to eliminate isolated disturbances on the
road while the fundamental structures are emphasized (Mayer and Steger, 1998). This can be
supported by considering the function of roads connecting different sites and thereby forming a
fairly dense and sometimes even regular network. Hence, exploiting the network characteristics
adds global information and, thus, the selection of the correct hypotheses becomes easier. As
basic data, our system expects high resolution aerial images (resolution < 15 cm) and a
reasonably accurate DSM with a ground resolution of about 1 m. In the following, we sketch our
road model and extraction strategy. For a comprehensive description we refer the reader to (Hinz
et al., 2001a, Hinz et al., 2001b).
The road model illustrated in Fig. 1 a) compiles knowledge about radiometric, geometric, and
topological characteristics of urban roads in the form of a hierarchical semantic net. The model
represents the standard case, i.e., the appearance of roads is not affected by relations to other
objects. It describes objects by means of "concepts", and is split into three levels defining
different points of view.
The real world level comprises the objects to be extracted:
The road network, its junctions and road links, as well as their parts and specializations (road
segments, lanes, markings), are connected to the concepts of the geometry and material level via
concrete relations (Tönjes et al., 1999). The geometry and material level is an intermediate
level which represents the 3D shape of an object as well as its material, describing objects
independently of sensor characteristics and viewpoint (Clément et al., 1993). In contrast, the
image level, which is subdivided into coarse and fine scale, comprises the features to detect in the
image: lines, edges, homogeneous regions, etc. Whereas the fine scale gives detailed
information, the coarse scale adds global information. Because of the abstraction in coarse scale,
additional correct hypotheses for roads can be found and sometimes also false ones can be
eliminated based on topological criteria, while details, like exact width and position of the lanes
and markings, are integrated from fine scale. In this way the extraction benefits from both scales.
The road model is extended by knowledge about context: So-called context objects, i.e.,
background objects like buildings or vehicles, may hinder road extraction if they are not
modelled appropriately but they substantially support the extraction if they are part of the road
model.
We define global and local context: Global context: The motivation for employing global
context stems from the observation that it is possible to find semantically meaningful image
regions – so-called context regions – where roads show typical prominent features and where
certain relations between roads and background objects have a similar importance.
Consequently, the relevance of different components of the road model and the importance of
different context relations (described below) must be adapted to the respective context region. In
urban areas, for instance, relations between vehicles and roads are more important since traffic is
usually much denser inside settlements than in rural areas. Following (Baumgartner et al., 1999), we
distinguish urban, forest, and rural context regions.
Local context: We model the local context with so-called context relations, i.e., certain relations
between a small number of road and context objects. In dense settlements, for instance, the
footprints of buildings are almost parallel to roads and they give therefore strong hints for road
sides. Vice-versa, buildings or other high objects potentially occlude larger parts of a road or cast
shadows on it. A context relation "occlusion" gives rise to the selection of another image
providing a better view on this particular part of the scene, whereas a context relation "shadow"
can tell an extraction algorithm to choose modified parameter settings. Also, vehicles occlude the
pavement of a lane segment. Hence, vehicle outlines as, e.g., detected by the algorithm of (Hinz
and Baumgartner, 2001) can be directly treated as parts of a lane. In a very similar way, we
model the integration of GIS-axes and relations to sub-structures. Figure 1 b) summarizes the
relations between road objects, context objects, and sub-structures by using the concepts "Lane
segment" and "Junction" as the basic entities of a road network.
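To make the hierarchical semantic net and the context relations concrete, a minimal data-structure sketch is given below. The class layout and the sample concepts are illustrative assumptions derived from the description above, not the authors' actual data structures.

```csharp
// Sketch of how the hierarchical semantic net and context relations
// could be represented in code. Class layout and sample concepts are
// illustrative assumptions based on the description above.
using System.Collections.Generic;

enum ModelLevel { RealWorld, GeometryAndMaterial, Image }
enum Scale { Coarse, Fine, NotApplicable }

class Concept
{
    public string Name;
    public ModelLevel Level;
    public Scale Scale = Scale.NotApplicable;
    public List<Concept> Parts = new List<Concept>();           // part-of links
    public List<Concept> Concretizations = new List<Concept>(); // concrete relations
}

class ContextRelation
{
    public string Kind;           // e.g. "occlusion", "shadow", "parallel building"
    public Concept RoadObject;    // e.g. a lane segment
    public Concept ContextObject; // e.g. a building or vehicle
}

class RoadModel
{
    public static Concept Build()
    {
        var network = new Concept { Name = "road network", Level = ModelLevel.RealWorld };
        var segment = new Concept { Name = "road segment", Level = ModelLevel.RealWorld };
        var lane    = new Concept { Name = "lane", Level = ModelLevel.RealWorld };
        var marking = new Concept { Name = "marking", Level = ModelLevel.RealWorld };
        network.Parts.Add(segment);
        segment.Parts.Add(lane);
        lane.Parts.Add(marking);

        // Markings are concretized as bright lines in the fine-scale image level.
        marking.Concretizations.Add(new Concept {
            Name = "bright line", Level = ModelLevel.Image, Scale = Scale.Fine });
        // Road segments appear as homogeneous ribbons in coarse scale.
        segment.Concretizations.Add(new Concept {
            Name = "homogeneous ribbon", Level = ModelLevel.Image, Scale = Scale.Coarse });
        return network;
    }
}
```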
4.2 MODULES:
VIDEO PREPROCESSING:
EXTRACTION AND EVALUATION:
ROAD LANE EXTRACTION:
RESULTS EVALUATION:
4.3 MODULE DESCRIPTION:
VIDEO PREPROCESSING:
During video preprocessing, internal evaluation is performed not only by aggregating previously
derived values but also by exploiting knowledge not used in prior steps. This point has especially
high relevance for bottom-up driven image understanding systems (such as ours), since essential
global object properties making different objects distinctive can be exploited only at later stages
of processing. Lane segments, for instance, are constructed from grouped markings and optional
road sides, but they still have high similarity to, e.g., illuminated parts of gable roofs. Only their
collinear and parallel concatenation resulting in lanes, road segments, and roads makes them
distinctive and gives in turn new hints for missing lane segments (cf. Fig. 9, 10). Consider the
two-lane road segment in Fig. 10a). The continuity of the upper lane provides a strong hint for
bridging the gaps of the lower lane in spite of high intensity variation therein. Hence, at this
stage, the system can base its decision on more knowledge than purely the homogeneity within
the gaps.
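One simple way such evaluation values could be propagated up the aggregation hierarchy is sketched below. The fusion rule (a length-weighted mean damped by coverage) is purely an illustrative assumption, not the system's actual aggregation scheme.

```csharp
// Illustrative sketch of propagating confidence values upward when lane
// segments are aggregated into a lane.
using System;
using System.Collections.Generic;
using System.Linq;

class LaneSegment
{
    public double Length;      // metres
    public double Confidence;  // in [0, 1], from the segment's own evaluation
}

static class ConfidenceAggregation
{
    // Length-weighted mean of the segment confidences, damped by the
    // fraction of the lane's extent actually covered by segments, so that
    // a lane consisting mostly of gaps cannot score highly.
    public static double LaneConfidence(List<LaneSegment> segments, double laneExtent)
    {
        double total = segments.Sum(s => s.Length);
        if (total <= 0) return 0;
        double weighted = segments.Sum(s => s.Confidence * s.Length) / total;
        double coverage = Math.Min(1.0, total / laneExtent);
        return weighted * coverage;
    }
}
```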
EXTRACTION AND EVALUATION:
Our approach utilizes a semantic net for modeling. However, our methodology of internal
evaluation during extraction complements other work, as we split the model of an object into
components used for extraction and components used for internal evaluation. The model
components used for extraction typically consist of quite generic geometric criteria which are
more robust against illumination changes, shadows, noise, etc., whereas those used for
evaluation are mostly object specific. In so doing, both extraction and evaluation may be
performed in a flexible rather than monolithic fashion and can adapt to the respective contextual
situation. The extraction of markings, for instance, is based on line detection while their
evaluation relies on the knowledge that markings are very bright and has symmetric contrast on
both sides because of the unicolored pavement (see Fig. 4). However, in case of shadow regions
as detected during context-based data analysis, the system automatically retrieves a different
parameter set for internal evaluation and, thus, accommodates the different situation.
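A sketch of such an internal evaluation for a detected marking follows. The thresholds and the relaxed shadow parameter set are illustrative assumptions, not the system's actual values.

```csharp
// Sketch of evaluating a candidate marking: markings should be very
// bright with symmetric contrast on both sides; a different parameter
// set is used inside shadow regions. All numbers are assumptions.
using System;

class MarkingEvaluation
{
    class ParamSet
    {
        public double MinBrightness;  // minimum mean gray value on the line
        public double MinContrast;    // minimum contrast to the pavement
        public double MaxAsymmetry;   // max relative left/right contrast difference
    }

    static readonly ParamSet Normal = new ParamSet
        { MinBrightness = 180, MinContrast = 40, MaxAsymmetry = 0.3 };
    static readonly ParamSet Shadow = new ParamSet  // relaxed inside shadow regions
        { MinBrightness = 90, MinContrast = 15, MaxAsymmetry = 0.5 };

    // Returns a confidence in [0, 1] for a candidate marking, given its mean
    // brightness and the contrast to the pavement on either side.
    static double Evaluate(double brightness, double leftContrast,
                           double rightContrast, bool inShadow)
    {
        ParamSet p = inShadow ? Shadow : Normal;
        double asymmetry = Math.Abs(leftContrast - rightContrast) /
                           Math.Max(1e-6, Math.Max(leftContrast, rightContrast));
        if (brightness < p.MinBrightness) return 0;
        if (Math.Min(leftContrast, rightContrast) < p.MinContrast) return 0;
        if (asymmetry > p.MaxAsymmetry) return 0;
        // Scale the remaining confidence by how symmetric the contrast is.
        return 1.0 - asymmetry / p.MaxAsymmetry * 0.5;
    }
}
```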
ROAD LANE EXTRACTION:
In a very general sense, the extraction strategy incorporates knowledge about how and when
certain parts of the road and context model are optimally exploited, thereby serving as the basic
control mechanism of the extraction process. It is subdivided into three levels (see also Fig. 2): Context-
based data analysis (Level 1) comprises the segmentation of the scene into the urban, rural, and
forest area and the analysis of context relations. While road extraction in forest areas seems
hardly possible without using additional sensors, e.g., infrared or LIDAR sensors, the extraction
in rural areas may be performed with the system of (Baumgartner et al., 1999).
In urban areas, extraction of salient roads (Level 2) includes the detection of homogeneous
ribbons in coarse scale, collinear grouping of thin bright lines, i.e., road markings, and the
construction of lane segments from groups of road markings, road sides, and detected vehicles.
The lane segments are further grouped into lanes, road segments, and roads. During road
network completion (Level 3), finally, gaps in the extraction are iteratively closed by
hypothesizing and verifying connections between previously extracted roads. Similar to
(Wiedemann and Ebner, 2000), local as well as global criteria exploiting the network
characteristics are used. Figure 3 illustrates some intermediate steps and Figs. 11, 12 show
typical results. In the next section, we turn our focus on the integrated models for extraction and
internal evaluation.
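A sketch of the collinear grouping step follows: two lane segments are merged when they are roughly parallel, laterally close, and separated by only a small gap. All tolerances are illustrative assumptions.

```csharp
// Sketch of collinear grouping of lane segments into lanes.
using System;

class Segment2D
{
    public double X1, Y1, X2, Y2;                // endpoints
    public double Angle => Math.Atan2(Y2 - Y1, X2 - X1);
}

static class CollinearGrouping
{
    const double MaxAngleDiff = 10 * Math.PI / 180; // assumed angle tolerance
    const double MaxGap = 25.0;                     // assumed max gap (pixels)
    const double MaxOffset = 3.0;                   // assumed lateral offset

    public static bool CanGroup(Segment2D a, Segment2D b)
    {
        double dAngle = Math.Abs(NormalizeAngle(a.Angle - b.Angle));
        if (dAngle > MaxAngleDiff) return false;

        // Gap between a's end and b's start.
        double gap = Math.Sqrt((b.X1 - a.X2) * (b.X1 - a.X2) +
                               (b.Y1 - a.Y2) * (b.Y1 - a.Y2));
        if (gap > MaxGap) return false;

        // Lateral offset of b's start from the line through a.
        double ux = Math.Cos(a.Angle), uy = Math.Sin(a.Angle);
        double vx = b.X1 - a.X1, vy = b.Y1 - a.Y1;
        double offset = Math.Abs(vx * uy - vy * ux); // perpendicular distance
        return offset <= MaxOffset;
    }

    // Map an angle difference into (-pi/2, pi/2] so that direction reversals
    // of a line do not count as misalignment.
    static double NormalizeAngle(double a)
    {
        while (a > Math.PI / 2) a -= Math.PI;
        while (a < -Math.PI / 2) a += Math.PI;
        return a;
    }
}
```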
RESULTS EVALUATION:
The final result of road extraction has been evaluated by matching the extracted road axes to
manually plotted reference data (Wiedemann and Ebner, 2000). As can be seen, major parts of
the road networks have been extracted (white lines indicate extracted road axes). Expressed in
numerical values, we achieve a completeness of almost 70 % and a correctness of about 95 %.
The system is able to detect shadowed road sections or road sections with rather dense traffic.
However, it must be noted that some of the axes’ underlying lane segments have been missed.
This is most evident at the complex road junctions in both scenes, where only spurious features
for the construction of lanes could be extracted. Thus, not enough evidence was given to accept
connections between the individual branches of the junction. Another obvious failure can be seen
at the right branch of the junction in the central part of Scene II (Fig. 12). The tram and trucks in
the center of the road have been missed since our vehicle detection module is only able to extract
vehicles similar to passenger cars. Thus, this particular road axis has been shifted to the lower
part of the road where the implemented parts of the model fit much better.
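The completeness and correctness figures above can be computed by matching points within a buffer, in the spirit of the evaluation of (Wiedemann and Ebner, 2000). The polyline sampling into points and the buffer width below are illustrative assumptions.

```csharp
// Sketch of the completeness / correctness evaluation: extracted axes are
// matched to reference axes within a buffer. Completeness is the fraction
// of the reference with a nearby extraction; correctness is the fraction
// of the extraction with a nearby reference.
using System.Collections.Generic;
using System.Linq;

static class RoadEvaluation
{
    const double BufferWidth = 5.0; // assumed matching tolerance in pixels

    public static (double completeness, double correctness) Evaluate(
        List<(double x, double y)> referencePts,
        List<(double x, double y)> extractedPts)
    {
        double matchedRef = referencePts.Count(r => NearAny(r, extractedPts));
        double matchedExt = extractedPts.Count(e => NearAny(e, referencePts));
        return (matchedRef / referencePts.Count, matchedExt / extractedPts.Count);
    }

    static bool NearAny((double x, double y) p, List<(double x, double y)> pts)
    {
        return pts.Any(q =>
            (q.x - p.x) * (q.x - p.x) + (q.y - p.y) * (q.y - p.y)
            <= BufferWidth * BufferWidth);
    }
}
```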
CHAPTER 5
5.0 SYSTEM STUDY:
5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system analysis the feasibility
study of the proposed system is to be carried out. This is to ensure that the proposed system is
not a burden to the company. For feasibility analysis, some understanding of the major
requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY
5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the
organization. The amount of fund that the company can pour into the research and development
of the system is limited. The expenditures must be justified. Thus the developed system is well
within the budget, and this was achieved because most of the technologies used are freely
available. Only the customized products had to be purchased.
5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. Any system developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed system
must have modest requirements, since only minimal or no changes are required for implementing
this system.
5.1.3 SOCIAL FEASIBILITY:
This aspect of the study is to check the level of acceptance of the system by the user. This includes
the process of training the user to use the system efficiently. The user must not feel threatened by
the system, instead must accept it as a necessity. The level of acceptance by the users solely
depends on the methods that are employed to educate the user about the system and to make him
familiar with it. His level of confidence must be raised so that he is also able to make some
constructive criticism, which is welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is a process of checking whether the developed system is working according to the
original objectives and requirements. It is a set of activities that can be planned in advance and
conducted systematically. Testing is vital to the success of the system. System testing makes a
logical assumption that if all the parts of the system are correct, the overall goal will be
successfully achieved. Inadequate or omitted testing leads to errors that may not appear until
many months later. This creates two problems: the time lag between the cause and the appearance
of the problem, and the effect of system errors on the files and records within the system. A small
system error can conceivably explode into a much larger problem. Effective testing early in the
process translates directly into long-term cost savings from a reduced number of errors. Another
reason for system testing is its utility as a user-oriented vehicle before implementation. The best
program is worthless if it does not produce correct outputs.
5.2.1 UNIT TESTING:
A program represents the logical elements of a system. For a program to run satisfactorily, it
must compile and test data correctly and tie in properly with other programs. Achieving an error
free program is the responsibility of the programmer. Program testing checks for two types of
errors: syntax and logical. Syntax error is a program statement that violates one or more rules
of the language in which it is written. An improperly defined field dimension or omitted
keywords are common syntax errors. These errors are shown through error message generated by
the computer. For Logic errors the programmer must examine the output carefully.
5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results,
using enough inputs to give an adequate level of confidence that it will work correctly for all sets
of inputs. The functional testing will need to prove that the application works for each client type
and that the personalization functions work correctly. When a program is tested, the actual output is
compared with the expected output. When there is a discrepancy the sequence of instructions
must be traced to determine the problem. The process is facilitated by breaking the program into
self-contained portions, each of which can be checked at certain key points. The idea is to
compare program values against desk-calculated values to isolate the problems.
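A minimal sketch of comparing actual output against expected output in C# is shown below; the function under test and its expected values are hypothetical.

```csharp
// Minimal sketch of a functional/unit check: compare actual output with
// expected output and report discrepancies. The function under test
// (road-length computation) and its expected values are hypothetical.
using System;

class FunctionalTestDemo
{
    static double RoadLength(double x1, double y1, double x2, double y2)
        => Math.Sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));

    static void Main()
    {
        Check("horizontal segment", RoadLength(0, 0, 3, 0), 3.0);
        Check("diagonal segment",   RoadLength(0, 0, 3, 4), 5.0);
    }

    static void Check(string name, double actual, double expected)
    {
        bool ok = Math.Abs(actual - expected) < 1e-9;
        Console.WriteLine($"{name}: expected {expected}, got {actual} -> {(ok ? "PASS" : "FAIL")}");
    }
}
```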
Functional test cases:
Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.
Description: Test for mouse operations.
Expected result: All the mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.
5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the
expected results for every test case. It uses symbolic analysis techniques. This testing is used to
check that an application will work in the operational environment. Non-functional testing
includes:
 Load testing
 Performance testing
 Usability testing
 Reliability testing
 Security testing
Non-functional test cases:
Description: Test for all modules.
Expected result: All peers should communicate in the group.
Description: Test for various peers in a distributed network framework as it displays all users available in the group.
Expected result: The result after execution should give the accurate result.
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is
essential for testing quality requirements such as performance and stress. A load can be a real
load, that is, the system can be subjected to real usage by having actual users connected to it.
They will generate test input data for the system test.
5.2.5 PERFORMANCE TESTING:
Performance tests are utilized in order to determine the widely defined performance of the
software system such as execution time associated with various parts of the code, response time
and device utilization. The intent of this testing is to identify weak points of the software system
and quantify its shortcomings.
Performance test cases:
Description: It is necessary to ascertain that the application behaves correctly under load when a 'Server busy' response is received.
Expected result: Should designate another active node as a server.
5.2.6 RELIABILITY TESTING:
The software reliability is the ability of a system or component to perform its required functions
under stated conditions for a specified period of time and it is being ensured in this testing.
Reliability can be expressed as the ability of the software to reveal defects under testing
conditions, according to the specified requirements. It is the probability that a software system
will operate without failure under given conditions for a given time interval, and it focuses on the
behavior of the software element. It forms a part of the work of the software quality control team.
Reliability test cases:
Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values, and produce accurate results in the expected time.
Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.
5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity and
confidentiality of the system data and services. Users/Clients should be encouraged to make sure
their security needs are very clearly known at requirements time, so that the security issues can
be addressed by the designers and testers.
5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the
control structure of the procedural design to derive test cases. Using the white box testing
method, the software engineer derives test cases that exercise the inner structure of the software
to be tested.
White box test cases:
Description: Checking that the user identification is authenticated.
Expected result: In case of failure it should not be connected in the framework.
Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key in the same group.
5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the
software. That is, black box testing enables the software engineer to derive sets of input
conditions that will fully exercise all functional requirements for a program. Black box
testing is not an alternative to white box techniques; rather, it is a complementary approach that
is likely to uncover a different class of errors than white box methods. Black box testing
attempts to find errors by focusing on the inputs, outputs, and principal functions of a software
module. The starting point of black box testing is either a specification or code. The contents
of the box are hidden, and the stimulated software should produce the desired results.
Black box test cases:
Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.
Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.
Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.
All the above system testing strategies are carried out during development, since the
documentation and institutionalization of the proposed goals and related policies are essential.
Additional black box test cases:
Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.
Description: To check for interface errors.
Expected result: The entire interface must function normally.
Description: To check for errors in data structures or external database access.
Expected result: The database update and retrieval must be done correctly.
Description: To check for initialization and termination errors.
Expected result: All the functions and data structures must be initialized properly and terminated normally.
CHAPTER 6
6.0 SOFTWARE SPECIFICATION:
6.1 FEATURES OF .NET:
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating
XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET
Framework is a language-neutral platform for writing programs that can easily and securely
interoperate. There’s no language barrier with .NET: there are numerous languages available to
the developer, including Managed C++, C#, Visual Basic, and JScript.
The .NET framework provides the foundation for components to interact seamlessly, whether
locally or remotely on different platforms. It standardizes common data types and
communications protocols so that components created in different languages can easily
interoperate.
“.NET” is also the collective name given to various software components built upon the .NET
platform. These will be both products (Visual Studio.NET and Windows.NET Server, for
instance) and services (like Passport, .NET My Services, and so on).
6.2 THE .NET FRAMEWORK
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within
which programs run. The most important features are
 Conversion from a low-level assembler-style language, called Intermediate
Language (IL), into code native to the platform being executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.
 The following features of the .NET framework are also worth description:
Managed Code
The code that targets .NET, and which contains certain extra information - "metadata" - to
describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed
code contains the information that allows the CLR to guarantee, for instance, safe execution and
interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and deallocation
facilities, and garbage collection. Some .NET languages use Managed Data by default, such as
C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting CLR
can, depending on the language you’re using, impose certain constraints on the features
available. As with managed and unmanaged code, one can have both managed and unmanaged
data in .NET applications - data that doesn’t get garbage collected but instead is looked after by
unmanaged code.
Common Type System
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety.
This ensures that all classes are compatible with each other, by describing types in a common
way. The CTS defines how types work within the runtime, which enables types in one language to
interoperate with types in another language, including cross-language exception handling. As
well as ensuring that types are only used in appropriate ways, the runtime also ensures that code
doesn’t attempt to access memory that hasn’t been allocated to it.
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can develop
managed code that can be fully used by developers using any programming language, a set of
language features and rules for using them called the Common Language Specification (CLS)
has been defined. Components that follow these rules and expose only CLS features are
considered CLS-compliant.
6.3 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root
namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as
well as Object. All objects derive from System.Object. As well as objects, there are value types.
Value types can be allocated on the stack, which can provide useful flexibility. There are also
efficient means of converting value types to object types if and when necessary.
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O,
threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each providing distinct
areas of functionality, with dependencies between the namespaces kept to a minimum.
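As a small, generic illustration of value types and their conversion to object types (boxing), not specific to this project:

```csharp
// Small illustration of value types and boxing/unboxing in .NET.
using System;

class ValueTypeDemo
{
    static void Main()
    {
        int distance = 42;          // value type, lives on the stack here
        object boxed = distance;    // boxing: copied into a heap object
        int unboxed = (int)boxed;   // unboxing: explicit cast back

        Console.WriteLine($"boxed: {boxed}, unboxed: {unboxed}");
        // Collections of 'object' (pre-generics) forced boxing of value
        // types; generic collections like List<int> avoid that cost.
    }
}
```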
6.4 LANGUAGES SUPPORTED BY .NET
The multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of applications and XML
Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual
Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to
the family.
Visual Basic .NET has been updated to include many new and improved language features that
make it a powerful object-oriented programming language. These features include inheritance,
interfaces, and overloading, among others. Visual Basic also now supports structured exception
handling, custom attributes and also supports multi-threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can
use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating existing C++
applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid
Application Development”. Unlike other languages, its specification is just the grammar of the
language. It has no standard library of its own, and instead has been designed with the intention
of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the
world of XML Web Services and dramatically improves the interoperability of Java-language
programs with existing software written in a variety of other programming languages.
Active State has created Visual Perl and Visual Python, which enable .NET-aware applications
to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET
environment. Visual Perl includes support for Active State’s Perl Dev Kit.
Other languages for which .NET compilers are available include
 FORTRAN
 COBOL
 Eiffel
Fig. 1: The .NET Framework stack: ASP.NET and XML Web Services, Windows Forms, Base Class Libraries, Common Language Runtime, Operating System.
C#.NET is also compliant with CLS (Common Language Specification) and supports
structured exception handling. CLS is set of rules and constructs that are supported by the
CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET
Framework; it manages the execution of the code and also makes the development process
easier by providing services.
C#.NET is a CLS-compliant language. Any objects, classes, or components created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use objects,
classes, and components created in other CLS-compliant languages in C#.NET. The use of
CLS ensures complete interoperability among applications, regardless of the languages used
to create the application.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other
words, destructors are used to release the resources allocated to the object. In C#.NET, the
Finalize method is available for this purpose. The Finalize method is used to complete the tasks
that must be performed when an object is destroyed; it is called automatically when an object is
destroyed and can be called only from the class it belongs to or from derived classes.
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET Framework monitors allocated
resources, such as objects and variables. In addition, the .NET Framework automatically releases
memory for reuse by destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define multiple procedures with
the same name, where each procedure has a different set of arguments. Besides using
overloading for procedures, we can use it for constructors and properties in a class.
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle
multiple tasks simultaneously; we can use multithreading to decrease the time taken by an
application to respond to user interaction.
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and remove errors at
runtime. In C#.NET, we use try...catch...finally statements to create exception handlers. Using
try...catch...finally statements, we can create robust and effective exception handlers to
improve the performance of our application.
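The following small demo pulls together several of the C# features described above: method overloading, structured exception handling, and a finalizer. It is a generic illustration, not project-specific code.

```csharp
// Demo of overloading, structured exception handling, and a finalizer.
using System;

class Buffer
{
    // Overloading: same method name, different parameter lists.
    public int Sum(int a, int b) => a + b;
    public double Sum(double a, double b) => a + b;

    // Finalizer (destructor syntax): runs when the GC collects the object.
    ~Buffer() { Console.WriteLine("Buffer finalized by the garbage collector."); }
}

class FeatureDemo
{
    static void Main()
    {
        var buf = new Buffer();
        Console.WriteLine(buf.Sum(2, 3));       // calls the int overload
        Console.WriteLine(buf.Sum(2.5, 3.5));   // calls the double overload

        try
        {
            int[] data = new int[2];
            Console.WriteLine(data[5]);          // throws IndexOutOfRangeException
        }
        catch (IndexOutOfRangeException ex)
        {
            Console.WriteLine("Caught: " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Cleanup always runs.");
        }
    }
}
```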
6.5 THE .NET FRAMEWORK
The .NET Framework is a new computing platform that simplifies application development in
the highly distributed environment of the Internet.
OBJECTIVES OF .NET FRAMEWORK
1. To provide a consistent object-oriented programming environment, whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and
guarantees safe execution of code.
3. To eliminate the performance problems of scripted or interpreted environments.
There are different types of application, such as Windows-based applications and Web-based
applications.
6.6 FEATURES OF SQL-SERVER
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000
Analysis Services. The term OLAP Services has been replaced with the term Analysis Services.
Analysis Services also includes a new data mining component. The Repository component
available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data
Services. References to the component now use the term Meta Data Services. The term
repository is used only in reference to the repository engine within Meta Data Services.
A SQL-SERVER database consists of the following types of objects:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
6.7 TABLE:
A database is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We can specify
what kind of data each field will hold.
Datasheet View
To add, edit, or analyse the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question asked of the data. Access gathers the data that answers the question from
one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a
snapshot (which cannot be edited). Each time we run a query, we get the latest information in the
dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on
it, such as deleting or updating.
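A minimal sketch of issuing such a query from C# via ADO.NET follows; the connection string, table name, and columns are hypothetical placeholders.

```csharp
// Minimal sketch of running a query from C# via ADO.NET. The connection
// string, table name, and columns are hypothetical placeholders.
using System;
using System.Data.SqlClient;

class QueryDemo
{
    static void Main()
    {
        string connStr = "Server=localhost;Database=RoadDb;Integrated Security=true;";
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT SegmentId, Confidence FROM RoadSegments WHERE Confidence > @min", conn))
        {
            cmd.Parameters.AddWithValue("@min", 0.7);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Each run of the query returns the latest data, analogous
                // to the dynaset behaviour described above.
                while (reader.Read())
                    Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetDouble(1)}");
            }
        }
    }
}
```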
CHAPTER 7
7.0 APPENDIX
7.1 SAMPLE SCREEN SHOTS:
7.2 SAMPLE SOURCE CODE:
CHAPTER 8
8.1 CONCLUSION AND FUTURE WORK:
In summary, the results indicate that the presented system extracts roads even in complex
environments. The robustness is, not least, a result of the detailed modelling of both
extraction and evaluation components, accommodating the mandatory flexibility of the
extraction. An obvious deficiency exists in the form of the missing detection capability for vehicle
types such as buses and trucks, and the (still) weak model for complex junctions. The next extension
of our system, however, is the incorporation of multiple overlapping images in order to
accumulate more evidence for lanes and roads in such difficult cases. The internal evaluation
will greatly contribute to this because different – possibly competing – extraction results have to
be combined. Also for multiple images, we plan to treat the processing steps up to the generation
of lanes purely as a 2D problem. The results for each image are then projected onto the DSM and
fused there to achieve a consistent dataset. Then, new connections will be hypothesized and,
again, verified in each image separately.
CHAPTER 9
9.1 REFERENCES:
Hinz, S. and Baumgartner, A., 2001. Vehicle Detection in Aerial Images Using Generic Features,
Grouping, and Context. In: Pattern Recognition (DAGM 2001), Lecture Notes in Computer
Science 2191, Springer-Verlag, pp. 45–52.
Hinz, S., Baumgartner, A. and Ebner, H., 2001a. Modelling Contextual Knowledge for
Controlling Road Extraction in Urban Areas. In: IEEE/ISPRS joint Workshop on Remote
Sensing and Data Fusion over Urban Areas.
Hinz, S., Baumgartner, A., Mayer, H., Wiedemann, C. and Ebner, H., 2001b. Road Extraction
Focussing on Urban Areas. In: (Baltsavias et al., 2001), pp. 255–265.
Laptev, I., Mayer, H., Lindeberg, T., Eckstein, W., Steger, C. and Baumgartner, A., 2000.
Automatic Extraction of Roads from Aerial Images Based on Scale Space and Snakes. Machine
Vision and Applications 12(1), pp. 22–31.
Mayer, H. and Steger, C., 1998. Scale-Space Events and Their Link to Abstraction for Road
Extraction. ISPRS Journal of Photogrammetry and Remote Sensing 53(2), pp. 62–75.
Price, K., 2000. Urban Street Grid Description and Verification. In: 5th IEEE Workshop on
Applications of Computer Vision, pp. 148–154.
Tupin, F., Bloch, I. and Maître, H., 1999. A First Step Toward Automatic Interpretation of SAR
Images Using Evidential Fusion of Several Structure Detectors. IEEE Transactions on
Geoscience and Remote Sensing 37(3), pp. 1327–1343.
image processing

  • 1.
    AUTOMATIC ROAD EXTRACTIONIN URBAN SCENES CHAPTER 1 ABSTRACT: This paper focuses on internal quality measures for automatic road extraction from aerial images taken over urban areas. The motivation of this work is twofold: Firstly, any automatic system should provide the user with a small number of values indicating the reliability of the obtained results. This is often referred to as”self-diagnosis” and is in particular a crucial part of automatic image understanding systems. Secondly, and more important in the scope of our research, a system designed for the extraction of man-made objects in complex environments (like roads in urban areas) inherently implies many decisions during the extraction process. Such decisions are highly facilitated when both low level features and high level objects are attached with confidence values indicating their relevance for further processing for defining evaluation criteria from which the confidence values can be calculated is to split the components of a semantic object model into two different types. The model components of the first type are used for extracting features, i.e., parts of the object, and the components of the other type serve as criteria for evaluating the quality of the extracted features. For guaranteeing an unbiased evaluation one has to ensure that model components belonging to different types are independent from each other (at least theoretically). We illustrate this concept by our system for road
  • 2.
    extraction in urbanareas. Examples are given for both low level features like lines and ribbons as well as higher level features like lanes and road segments.
  • 3.
From a scientific perspective, the extraction of roads in complex environments is one of the challenging issues in photogrammetry and computer vision, since many tasks related to automatic scene interpretation are involved. Factors greatly influencing the scene complexity are, for instance, the number of different objects, the number of their interrelations, and the variability of both.
The work presented in this paper focuses on the development of internal quality measures for automatic road extraction. The motivation for this specific aspect within an object extraction system is twofold: Firstly, any automatic system should provide the user with some values indicating the reliability of the obtained results. This is often referred to as "self-diagnosis" (Förstner, 1996), which is a crucial part of automatic image understanding systems, in particular when they are designed for practical applications. Secondly, and more important in the scope of our research, confidence values also play an important role for the reliability of the extraction itself, since they greatly facilitate the inevitable decisions that have to be made during the extraction process. Consider, for instance, competing road hypotheses extracted from multiple overlapping images which must be combined into a unique road network. Moreover, each factor, and thus the scene complexity, is related to a particular scale. To accommodate such factors, techniques like detailed semantic modelling, contextual reasoning, and self-diagnosis have proven to be of great importance over the past years. It is clear that these techniques must be integral parts of an extraction system to attain reasonably good results over a variety of scenes.

Roads in a digital image appear as thin, elongated, homogeneous regions. Since leaders are required to lie at the center of large homogeneous regions, roads rarely contain leaders and are thus mostly segmented into the background. To obtain road segments, a region growing process is adopted after the self-diagnosis segmentation. We treat each background pixel as a starting point and perform pixel-based growing, which repeatedly checks the neighbors of the starting point and adds them to the region if they are similar to the starting point. If a pixel can grow into a large enough area, the area is considered a new segment. Fig. 2(c) shows the result after this step, where road segments are obtained. It should be noted that a road cannot be segmented as a single segment for most real images.
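To make the region growing step concrete, the following C# sketch grows a 4-connected region from a seed pixel. It is only a minimal illustration: the similarity test (absolute intensity difference against the seed value), the thresholds, and all names are assumptions made for the sketch, not the system's actual implementation.

using System;
using System.Collections.Generic;

// A minimal 4-connected region growing, as described in the text above.
static class RegionGrowingSketch
{
    // Returns the grown pixel set, or null if the region stays below minSize
    // (i.e., it is not accepted as a new segment).
    static List<(int, int)> GrowRegion(byte[,] img, bool[,] visited,
        int seedR, int seedC, int intensityTol, int minSize)
    {
        int rows = img.GetLength(0), cols = img.GetLength(1);
        byte seedVal = img[seedR, seedC];
        var region = new List<(int, int)>();
        var stack = new Stack<(int, int)>();
        stack.Push((seedR, seedC));
        while (stack.Count > 0)
        {
            var (r, c) = stack.Pop();
            if (r < 0 || r >= rows || c < 0 || c >= cols || visited[r, c]) continue;
            // "Similar to the starting point": small intensity difference to the seed.
            if (Math.Abs(img[r, c] - seedVal) > intensityTol) continue;
            visited[r, c] = true;
            region.Add((r, c));
            stack.Push((r + 1, c)); stack.Push((r - 1, c));
            stack.Push((r, c + 1)); stack.Push((r, c - 1));
        }
        return region.Count >= minSize ? region : null;
    }

    static void Main()
    {
        var img = new byte[5, 5];                     // dark background...
        for (int c = 0; c < 5; c++) { img[1, c] = 200; img[2, c] = 200; }  // ...with a bright band
        var region = GrowRegion(img, new bool[5, 5], 1, 0, intensityTol: 10, minSize: 5);
        Console.WriteLine(region == null ? "rejected" : $"segment of {region.Count} pixels");
    }
}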
1.2 LITERATURE SURVEY:

AUTOMATIC COMPLETION AND EVALUATION OF ROAD NETWORKS
AUTHOR: Wiedemann, C. and Ebner, H.
PUBLISH: In: International Archives of Photogrammetry and Remote Sensing, Vol. 33, Part B.
EXPLANATION: Road networks automatically extracted from digital imagery are in general incomplete and fragmented. Completeness and topology of the extracted network can be improved by the use of the global network structure, which is a result of the function of roads as part of the transport network. This is especially, but not exclusively, important for the extraction of roads from imagery with low resolution (e.g., ground pixel size > 1 m), because only little local evidence for roads can be extracted from such images. In this paper, an approach is described for the completion of incompletely extracted road networks. The completion is done by generating link hypotheses between points on the network which are likely to be connected based on the network characteristics. The proposed link hypotheses are verified based on the image data. A quantitative evaluation of the achieved improvements is given. New developments presented in this paper are the generation of link hypotheses between different connected components of the extracted road network and the introduction of measures for the evaluation of the network topology and connectivity. Results of the improved completion scheme are presented and evaluated based on the introduced measures. The results show the feasibility of the presented completion approach as well as its limitations. Major advantages of the completion of road networks are the improved network topology and connectivity of the extraction result. The new measures prove to be very useful for the evaluation of network topology and connectivity.
AUTOMATIC EXTRACTION OF ROADS FROM AERIAL IMAGES BASED ON SCALE SPACE AND SNAKES
AUTHOR: Laptev, I., Mayer, H., Lindeberg, T., Eckstein, W., Steger, C. and Baumgartner, A.
PUBLISH: Machine Vision and Applications 12(1), pp. 22–31.
EXPLANATION: We propose a new approach for automatic road extraction from aerial imagery with a model and a strategy mainly based on the multi-scale detection of roads in combination with geometry-constrained edge extraction using snakes. A main advantage of our approach is that it allows, for the first time, a bridging of shadows and partially occluded areas using the heavily disturbed evidence in the image. Additionally, it has only a few parameters to be adjusted. The road network is constructed after extracting crossings with varying shape and topology. We show the feasibility of the approach not only by presenting reasonable results but also by evaluating them quantitatively based on ground truth. Aerial imagery is one of the standard data sources for the acquisition of topographic objects, like roads or buildings, for geographic information systems (GIS). Road data in GIS are of major importance for applications such as car navigation or guidance systems for police, fire services, or forwarding agencies. Since the manual extraction of road data is time consuming, there is a need for automation.
MODELLING CONTEXTUAL KNOWLEDGE FOR CONTROLLING ROAD EXTRACTION IN URBAN AREAS
AUTHOR: Hinz, S., Baumgartner, A. and Ebner, H.
PUBLISH: In: IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas.
EXPLANATION: This paper deals with the role of context for the automatic extraction of man-made structures from aerial images taken over urban areas. Due to the intrinsically high complexity of urban scenes, we propose to guide the extraction by contextual knowledge about the objects. We represent this knowledge explicitly by a context model. Based upon this model, we are able to split the complex task of object extraction in urban areas into smaller sub-problems. The novelty presented in this contribution mainly relates to the fact that essential contextual information is gathered at the beginning of the extraction; thus, it is available during the whole extraction and, furthermore, it allows for automatically controlling the extraction process. For data consistency reasons, we use the imagery as the only source for both gaining contextual information and extracting roads. Advantages and remaining deficiencies of the proposed strategy are discussed.
CHAPTER 2

2.0 SYSTEM ANALYSIS

2.1 EXISTING SYSTEM:
The existing approach requires human intervention: in semi-automatic approaches, an operator provides information such as starting points or starting directions, which provide critical assistance in tracking roads. Without human intervention, an approach is considered automatic. Objects appear in natural scenes as groups of similar sensory features. Gestalt psychology reveals a set of principles guiding the grouping process based on local features. Elements tend to be perceptually grouped if they are close to each other (proximity), similar to one another (similarity), form a smooth and continuous curve (continuation), or have similar temporal behaviors (common fate). Since roads in satellite imagery tend to have uniform features that are distinct from neighboring regions, it is reasonable to expect that they can be automatically extracted using Gestalt grouping principles.

Dynamical systems represent a promising approach to object segmentation. In particular, the oscillatory correlation theory asserts that oscillators corresponding to the pixels of the same object synchronize, while oscillators corresponding to the pixels of different objects desynchronize. It has been shown that a Locally Excitatory Globally Inhibitory Oscillator Network (LEGION) built on relaxation oscillators can rapidly achieve synchronization within a locally coupled oscillator assembly and desynchronization between different assemblies. LEGION has been successfully applied to a number of scene analysis tasks, including image segmentation and object selection.
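To make the oscillator dynamics tangible, the following C# sketch numerically integrates Terman-Wang relaxation oscillators, the building blocks LEGION is based on, for a small one-dimensional chain with local excitatory coupling and a single global inhibitor. All constants, the chain size, and the simple Euler integration are illustrative assumptions; a real LEGION network is two-dimensional, input-driven, and far more carefully tuned.

using System;

// Schematic Euler integration of Terman-Wang relaxation oscillators.
static class LegionSketch
{
    static double H(double v) => v > 0.0 ? 1.0 : 0.0; // Heaviside step

    static void Main()
    {
        int n = 8;                                    // a 1-D chain of n oscillators
        double[] x = new double[n], y = new double[n];
        double[] I = { 1, 1, 1, -1, -1, 1, 1, 1 };    // stimulated (>0) vs. unstimulated
        var rnd = new Random(0);
        for (int i = 0; i < n; i++) { x[i] = rnd.NextDouble() * 0.1; y[i] = rnd.NextDouble() * 0.1; }

        double z = 0.0;                               // global inhibitor
        const double dt = 0.01, eps = 0.02, gamma = 6.0, beta = 0.1;
        const double wLocal = 2.0, wz = 1.5, thetaX = -0.5, thetaZ = 0.1, phi = 3.0;

        for (int step = 0; step < 20000; step++)
        {
            bool anyActive = false;
            var s = new double[n];
            for (int i = 0; i < n; i++)
            {
                double coupling = 0.0;                // local excitation from active neighbors
                if (i > 0) coupling += wLocal * H(x[i - 1] - thetaX);
                if (i < n - 1) coupling += wLocal * H(x[i + 1] - thetaX);
                s[i] = coupling - wz * H(z - thetaZ); // minus global inhibition
                if (x[i] > thetaZ) anyActive = true;
            }
            for (int i = 0; i < n; i++)
            {
                double dx = 3 * x[i] - Math.Pow(x[i], 3) + 2 - y[i] + I[i] + s[i];
                double dy = eps * (gamma * (1 + Math.Tanh(x[i] / beta)) - y[i]);
                x[i] += dt * dx; y[i] += dt * dy;
            }
            z += dt * phi * ((anyActive ? 1.0 : 0.0) - z);
        }
        Console.WriteLine("final x: " + string.Join(" ", Array.ConvertAll(x, v => v.ToString("F2"))));
    }
}

Under these dynamics, oscillators driven by the same stimulated block tend to fire together, while the global inhibitor keeps different blocks out of phase, which is the synchronization/desynchronization behavior described above.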
2.1.1 DISADVANTAGES:
1) Image segmentation using a Locally Excitatory Globally Inhibitory Oscillator Network (LEGION);
2) Medial axis extraction within each segment and selection of potential road segments;
3) Grouping of potential road segments using a LEGION model with alignment-dependent connections based on extracted medial axis points. Here, well-aligned segments are considered as belonging to the same road.
2.2 PROPOSED SYSTEM:
In our proposed approach regarding "self-diagnosis", the role of internal evaluation is exemplified by a system for finding consistent interpretations of SAR (Synthetic Aperture Radar) scenes. In a first step, different low-level operators with specific strengths are applied to extract image primitives, i.e., cues for roads, rivers, urban/industrial areas, relief characteristics, etc. Since a particular operator may vote for more than one object class (e.g., road and river), a so-called focal element and non-focal element are defined for each operator (usually the union of real-world object classes). The operator response is transformed into a confidence value characterizing the match with its focal element. Then, all confidence values are combined in an evidence-theoretical framework to assign unique semantics to each primitive, attached with a certain probability.

A feature adjacency graph is constructed in which global knowledge about objects (road segments form a network, industrial areas are close to cities, etc.) is introduced in the form of object adjacency probabilities. Based on the probabilities of objects and their relations, the final scene interpretation is formulated as a graph labelling problem that is solved by energy minimization. Scene interpretation is based on a priori knowledge stored in a semantic net and on rules for controlling the extraction. Each instance of an object, e.g., a road axis, is hypothesized top-down and internally evaluated by comparing the expected attribute values of the object with the actual values measured in the image. Competing alternative hypotheses are stored in a search tree until no further hypotheses can be formed. Finally, the best interpretation is selected from the tree by an optimum path search.

In the following steps, the values are propagated and aggregated, eventually providing a basis for the final decision about the presence of the desired object. This procedure may cause problems, since the evaluation is purely based on local features while global object properties are neglected. Therefore, some approaches introduce additional knowledge (e.g., roads forming a network or fitting to "valleys" of a DSM) at a later stage, when more evidence for an object has been acquired. All mentioned approaches have in common that they use one predefined model for simultaneously extracting and evaluating roads.
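As a toy illustration of how two operators' confidence values could be fused in such an evidence-theoretical framework, the following C# sketch applies Dempster's rule of combination over the small frame {road, river}. Subsets are encoded as bit masks (1 = {road}, 2 = {river}, 3 = {road, river}); the mass values are invented for the example and are not taken from the actual system.

using System;
using System.Collections.Generic;

// Dempster's rule of combination for two mass functions over {road, river}.
static class EvidenceFusionSketch
{
    static Dictionary<int, double> Combine(Dictionary<int, double> m1,
                                           Dictionary<int, double> m2)
    {
        var combined = new Dictionary<int, double>();
        double conflict = 0.0;
        foreach (var p in m1)
            foreach (var q in m2)
            {
                int inter = p.Key & q.Key;            // set intersection via bit masks
                double prod = p.Value * q.Value;
                if (inter == 0) conflict += prod;     // fully contradictory evidence
                else
                {
                    combined.TryGetValue(inter, out double cur);
                    combined[inter] = cur + prod;
                }
            }
        // Dempster's rule renormalizes by the non-conflicting mass.
        foreach (var key in new List<int>(combined.Keys))
            combined[key] /= 1.0 - conflict;
        return combined;
    }

    static void Main()
    {
        // Operator 1 votes mostly for "road"; operator 2 is less decisive.
        var m1 = new Dictionary<int, double> { [1] = 0.7, [3] = 0.3 };
        var m2 = new Dictionary<int, double> { [1] = 0.4, [2] = 0.2, [3] = 0.4 };
        foreach (var e in Combine(m1, m2))
            Console.WriteLine($"m({e.Key}) = {e.Value:F3}");
    }
}

For these example masses, the combined belief concentrates on "road" (about 0.79), which is the kind of unique semantics with attached probability described above.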
2.2.1 ADVANTAGES:
 Our system tries to accommodate aspects that have proved to be of great importance for road extraction: by integrating a flexible, detailed road and context model, one can capture the varying appearance of roads and the influence of background objects such as trees, buildings, and cars in complex scenes.
 The fusion of different scales helps to eliminate isolated disturbances on the road while the fundamental structures are emphasized. This can be supported by considering the function of roads connecting different sites and thereby forming a fairly dense and sometimes even regular network.
 Hence, exploiting the network characteristics adds global information and, thus, the selection of the correct hypotheses becomes easier.
As basic data, our system expects high resolution aerial images (resolution < 15 cm) and a reasonably accurate DSM with a ground resolution of about 1 m. In the following, we sketch our road model and extraction strategy.
2.3 HARDWARE & SOFTWARE REQUIREMENTS:

2.3.1 HARDWARE REQUIREMENTS:
 Processor: Pentium IV
 Speed: 1.1 GHz
 RAM: 256 MB (minimum)
 Hard Disk: 20 GB
 Floppy Drive: 1.44 MB
 Keyboard: Standard Windows keyboard
 Mouse: Two- or three-button mouse
 Monitor: SVGA

2.3.2 SOFTWARE REQUIREMENTS:
 Platform: .NET
 Operating System: Windows XP or Windows 7
 Front End: Microsoft Visual Studio .NET 2008
 Script: C#
 Documentation: MS Office 2007
CHAPTER 3

3.0 SYSTEM DESIGN:
Data Flow Diagram / Use Case Diagram / Flow Diagram:
 The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
 The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components: the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
 The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
 A DFD may be used to represent a system at any level of abstraction, and it may be partitioned into levels that represent increasing information flow and functional detail.
NOTATION:
SOURCE OR DESTINATION OF DATA: External sources or destinations, which may be people, organizations, or other entities.
DATA STORE: Here the data referenced by a process is stored and retrieved.
PROCESS: People, procedures, or devices that produce data; the physical component is not identified.
DATA FLOW: Data moves in a specific direction from an origin to a destination. The data flow is a "packet" of data.

MODELING RULES: There are several common modeling rules when creating DFDs (a small rule checker is sketched below):
1. All processes must have at least one data flow in and one data flow out.
2. All processes should modify the incoming data, producing new forms of outgoing data.
3. Each data store must be involved with at least one data flow.
4. Each external entity must be involved with at least one data flow.
5. A data flow must be attached to at least one process.
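Since most of these rules are purely structural, they can be checked mechanically. The following C# sketch encodes DFD elements and flows and reports violations of rules 1, 3, 4, and 5 (rule 2 concerns the meaning of a process and cannot be checked structurally). All type names are invented for the example.

using System;
using System.Collections.Generic;
using System.Linq;

// A small structural checker for the DFD modeling rules listed above.
enum Kind { Process, DataStore, ExternalEntity }
record Element(string Name, Kind Kind);
record Flow(Element From, Element To);

static class DfdRules
{
    public static IEnumerable<string> Violations(List<Element> elems, List<Flow> flows)
    {
        foreach (var e in elems)
        {
            bool hasIn = flows.Any(f => f.To == e);
            bool hasOut = flows.Any(f => f.From == e);
            if (e.Kind == Kind.Process && !(hasIn && hasOut))
                yield return $"Rule 1: process '{e.Name}' needs a flow in and a flow out.";
            if (e.Kind == Kind.DataStore && !(hasIn || hasOut))
                yield return $"Rule 3: data store '{e.Name}' is involved in no flow.";
            if (e.Kind == Kind.ExternalEntity && !(hasIn || hasOut))
                yield return $"Rule 4: external entity '{e.Name}' is involved in no flow.";
        }
        foreach (var f in flows)                      // Rule 5
            if (f.From.Kind != Kind.Process && f.To.Kind != Kind.Process)
                yield return $"Rule 5: flow {f.From.Name} -> {f.To.Name} touches no process.";
    }

    static void Main()
    {
        var user = new Element("User", Kind.ExternalEntity);
        var proc = new Element("Segment roads", Kind.Process);
        var store = new Element("Road database", Kind.DataStore);
        var flows = new List<Flow> { new Flow(user, proc) };  // no flow out of the process
        foreach (var v in Violations(new List<Element> { user, proc, store }, flows))
            Console.WriteLine(v);
    }
}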
3.1 ARCHITECTURE DIAGRAM:
[Architecture diagram: input videos pass through streaming analytics; context regions (urban, forest) and context relations guide road lane segmentation, including construction of lane segmentation, fusion-based lane segmentation, and completion, supported by the road extraction tools.]
3.2 DATA FLOW DIAGRAM:
LEVEL 1: [Diagram: input videos enter the streaming analysis, which produces streaming data.]
LEVEL 2: [Diagram: analysis of context relations (shadows, occlusions) yields regions of interest and lane segments.]
LEVEL 3: [Diagram: road extraction tools and data streaming analytics: connection of hypotheses, detection and removal of inconsistencies, merging of lane segmentation results, road segmentation, extraction of markings, and detection of vehicle outlines.]
UML DIAGRAMS:
3.3 USE CASE DIAGRAM: [Figure omitted.]
3.4 CLASS DIAGRAM: [Figure omitted.]
3.5 SEQUENCE DIAGRAM: [Figure omitted.]
3.6 ACTIVITY DIAGRAM: [Figure omitted.]
CHAPTER 4

4.0 IMPLEMENTATION AND ALGORITHM:

ROAD AND CONTEXT MODEL:
Our system tries to accommodate aspects that have proved to be of great importance for road extraction: by integrating a flexible, detailed road and context model, one can capture the varying appearance of roads and the influence of background objects such as trees, buildings, and cars in complex scenes. The fusion of different scales helps to eliminate isolated disturbances on the road while the fundamental structures are emphasized (Mayer and Steger, 1998). This can be supported by considering the function of roads connecting different sites and thereby forming a fairly dense and sometimes even regular network. Hence, exploiting the network characteristics adds global information and, thus, the selection of the correct hypotheses becomes easier. As basic data, our system expects high resolution aerial images (resolution < 15 cm) and a reasonably accurate DSM with a ground resolution of about 1 m. In the following, we sketch our road model and extraction strategy. For a comprehensive description we refer the reader to (Hinz et al., 2001a, Hinz et al., 2001b).

The road model illustrated in Fig. 1 a) compiles knowledge about radiometric, geometric, and topological characteristics of urban roads in the form of a hierarchical semantic net. The model represents the standard case, i.e., the appearance of roads is not affected by relations to other objects. It describes objects by means of "concepts" and is split into three levels defining different points of view.
The real world level comprises the objects to be extracted: the road network, its junctions and road links, as well as their parts and specializations (road segments, lanes, markings), which are connected to the concepts of the geometry and material level via concrete relations (Tönjes et al., 1999). The geometry and material level is an intermediate level which represents the 3D shape of an object as well as its material, describing objects independently of sensor characteristics and viewpoint (Clément et al., 1993). In contrast, the image level, which is subdivided into coarse and fine scale, comprises the features to detect in the image: lines, edges, homogeneous regions, etc. Whereas the fine scale gives detailed information, the coarse scale adds global information. Because of the abstraction in coarse scale, additional correct hypotheses for roads can be found and sometimes also false ones can be eliminated based on topological criteria, while details, like the exact width and position of the lanes and markings, are integrated from fine scale. In this way the extraction benefits from both scales.

The road model is extended by knowledge about context: so-called context objects, i.e., background objects like buildings or vehicles, may hinder road extraction if they are not modelled appropriately, but they substantially support the extraction if they are part of the road model. We define global and local context:

Global context: The motivation for employing global context stems from the observation that it is possible to find semantically meaningful image regions, so-called context regions, where roads show typical prominent features and where certain relations between roads and background objects have a similar importance. Consequently, the relevance of different components of the road model and the importance of different context relations (described below) must be adapted to the respective context region. In urban areas, for instance, relations between vehicles and roads are more important, since traffic is usually much denser inside settlements than in rural areas. As in (Baumgartner et al., 1999), we distinguish urban, forest, and rural context regions.
Local context: We model the local context with so-called context relations, i.e., certain relations between a small number of road and context objects. In dense settlements, for instance, the footprints of buildings are almost parallel to roads and therefore give strong hints for road sides. Vice versa, buildings or other high objects potentially occlude larger parts of a road or cast shadows on it. A context relation "occlusion" gives rise to the selection of another image providing a better view of this particular part of the scene, whereas a context relation "shadow" can tell an extraction algorithm to choose modified parameter settings. Vehicles also occlude the pavement of a lane segment. Hence, vehicle outlines as detected, e.g., by the algorithm of (Hinz and Baumgartner, 2001), can be directly treated as parts of a lane. In a very similar way, we model the integration of GIS axes and relations to sub-structures. Figure 1 b) summarizes the relations between road objects, context objects, and sub-structures by using the concepts "Lane segment" and "Junction" as the basic entities of a road network.
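How such context regions and context relations could steer the extraction is illustrated by the following C# sketch. The enums, parameter values, and switching logic are invented for the illustration and do not reproduce the actual system; the point is only that "shadow" retrieves a modified parameter set while "occlusion" triggers the selection of another view.

using System;
using System.Collections.Generic;

// Illustrative data structures for context regions and context relations.
enum ContextRegion { Urban, Rural, Forest }
enum ContextRelation { Occlusion, Shadow, ParallelBuilding, VehicleOnLane }

class ExtractionParameters
{
    public double LineContrastThreshold = 30.0;
    public double HomogeneityTolerance = 8.0;
}

static class ContextControl
{
    public static ExtractionParameters ParametersFor(ContextRegion region,
                                                     ISet<ContextRelation> relations)
    {
        var p = new ExtractionParameters();
        if (region == ContextRegion.Urban)
            p.HomogeneityTolerance = 12.0;            // denser disturbances expected
        if (relations.Contains(ContextRelation.Shadow))
            p.LineContrastThreshold = 12.0;           // markings appear darker in shadow
        return p;
    }

    public static bool NeedsAlternativeView(ISet<ContextRelation> relations)
        => relations.Contains(ContextRelation.Occlusion);

    static void Main()
    {
        var rel = new HashSet<ContextRelation> { ContextRelation.Shadow };
        var p = ParametersFor(ContextRegion.Urban, rel);
        Console.WriteLine($"contrast threshold: {p.LineContrastThreshold}, " +
                          $"alternative view needed: {NeedsAlternativeView(rel)}");
    }
}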
4.2 MODULES:
 VIDEO PREPROCESSING
 EXTRACTION AND EVALUATION
 ROAD LANE EXTRACTION
 RESULTS EVALUATION
4.3 MODULE DESCRIPTION:

VIDEO PREPROCESSING:
In video preprocessing, internal evaluation is performed not only by aggregating previously derived values but also by exploiting knowledge not used in prior steps. This point has especially high relevance for bottom-up driven image understanding systems (such as ours), since essential global object properties that make different objects distinctive can be exploited only at later stages of processing. Lane segments, for instance, are constructed from grouped markings and optional road sides, but they still have high similarity to, e.g., illuminated parts of gable roofs. Only their collinear and parallel concatenation resulting in lanes, road segments, and roads makes them distinctive and gives in turn new hints for missing lane segments (cf. Figs. 9, 10). Consider the two-lane road segment in Fig. 10 a): the continuity of the upper lane provides a strong hint for bridging the gaps of the lower lane in spite of the high intensity variation therein. Hence, at this stage, the system can base its decision on more knowledge than purely the homogeneity within the gaps.
EXTRACTION AND EVALUATION:
Our approach utilizes a semantic net for modelling. However, our methodology of internal evaluation during extraction complements other work, as we split the model of an object into components used for extraction and components used for internal evaluation. The model components used for extraction typically consist of quite generic geometric criteria which are more robust against illumination changes, shadows, noise, etc., whereas those used for evaluation are mostly object specific. In so doing, both extraction and evaluation may be performed in a flexible rather than monolithic fashion and can adapt to the respective contextual situation. The extraction of markings, for instance, is based on line detection, while their evaluation relies on the knowledge that markings are very bright and have symmetric contrast on both sides because of the unicolored pavement (see Fig. 4). In the case of shadow regions, however, as detected during context-based data analysis, the system automatically retrieves a different parameter set for internal evaluation and thus accommodates the different situation.
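A possible shape of such an evaluation component is sketched below in C#: a marking hypothesis is scored by its brightness and by how symmetric its contrast is against the pavement on both sides, with a different parameter set when the hypothesis lies in a shadow region. All thresholds and the fuzzy-style combination are assumptions made for the sketch, not values from the actual system.

using System;

// Sketch of an internal evaluation of a marking hypothesis.
class MarkingEvaluator
{
    readonly double minBrightness;
    const double MaxAsymmetry = 0.3;

    public MarkingEvaluator(bool inShadow)
    {
        // In shadow regions a different parameter set is retrieved automatically.
        minBrightness = inShadow ? 90 : 180;
    }

    // lineMean: mean intensity on the line; leftMean/rightMean: pavement on each side.
    // Returns a confidence value in [0, 1].
    public double Confidence(double lineMean, double leftMean, double rightMean)
    {
        double brightness = Clamp01((lineMean - minBrightness) / 40.0);
        double cl = lineMean - leftMean, cr = lineMean - rightMean;
        if (cl <= 0 || cr <= 0) return 0.0;           // not brighter than both sides
        double asymmetry = Math.Abs(cl - cr) / Math.Max(cl, cr);
        double symmetry = Clamp01(1.0 - asymmetry / MaxAsymmetry);
        return Math.Min(brightness, symmetry);        // conservative (fuzzy AND) combination
    }

    static double Clamp01(double v) => Math.Max(0.0, Math.Min(1.0, v));

    static void Main()
    {
        var normal = new MarkingEvaluator(inShadow: false);
        Console.WriteLine(normal.Confidence(220, 120, 130).ToString("F2")); // prints 0.67
    }
}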
ROAD LANE EXTRACTION:
In a very general sense, the extraction strategy embodies knowledge about how and when certain parts of the road and context model are optimally exploited, thereby being the basic control mechanism of the extraction process. It is subdivided into three levels (see also Fig. 2): Context-based data analysis (Level 1) comprises the segmentation of the scene into urban, rural, and forest areas and the analysis of context relations. While road extraction in forest areas seems hardly possible without using additional sensors, e.g., infrared or LIDAR sensors, the extraction in rural areas may be performed with the system of (Baumgartner et al., 1999). In urban areas, the extraction of salient roads (Level 2) includes the detection of homogeneous ribbons in coarse scale, the collinear grouping of thin bright lines, i.e., road markings, and the construction of lane segments from groups of road markings, road sides, and detected vehicles. The lane segments are further grouped into lanes, road segments, and roads. During road network completion (Level 3), finally, gaps in the extraction are iteratively closed by hypothesizing and verifying connections between previously extracted roads. Similar to (Wiedemann and Ebner, 2000), local as well as global criteria exploiting the network characteristics are used. Figure 3 illustrates some intermediate steps and Figs. 11 and 12 show typical results. In the next section, we turn our focus to the integrated models for extraction and internal evaluation.
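Purely as a control-flow illustration of the three levels, the following C# sketch shows the ordering and the data handoff between the stages. Every type and stage body is a stand-in; only the sequencing follows the description above.

using System;
using System.Collections.Generic;

record Region(string Context);                    // "urban", "rural", "forest"
record LaneSegment(double X, double Y);
record Road(List<LaneSegment> Parts);

static class Pipeline
{
    public static List<Road> Run(byte[,] image)
    {
        // Level 1: context-based data analysis.
        List<Region> regions = SegmentContextRegions(image);

        // Level 2: extraction of salient roads in urban regions only.
        var roads = new List<Road>();
        foreach (var region in regions)
            if (region.Context == "urban")
                roads.AddRange(GroupIntoRoads(ExtractLaneSegments(image, region)));

        // Level 3: network completion closes gaps between extracted roads.
        return CompleteNetwork(roads, image);
    }

    static List<Region> SegmentContextRegions(byte[,] img) => new List<Region> { new Region("urban") };
    static List<LaneSegment> ExtractLaneSegments(byte[,] img, Region r) => new List<LaneSegment>();
    static List<Road> GroupIntoRoads(List<LaneSegment> segs) => new List<Road> { new Road(segs) };
    static List<Road> CompleteNetwork(List<Road> roads, byte[,] img) => roads;

    static void Main() => Console.WriteLine($"roads extracted: {Run(new byte[4, 4]).Count}");
}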
RESULTS EVALUATION:
The final result of road extraction has been evaluated by matching the extracted road axes to manually plotted reference data (Wiedemann and Ebner, 2000). As can be seen, major parts of the road networks have been extracted (white lines indicate extracted road axes). Expressed in numerical values, we achieve a completeness of almost 70 % and a correctness of about 95 %. The system is able to detect shadowed road sections and road sections with rather dense traffic. However, it must be noted that some of the axes' underlying lane segments have been missed. This is most evident at the complex road junctions in both scenes, where only spurious features for the construction of lanes could be extracted. Thus, not enough evidence was given to accept connections between the individual branches of the junction. Another obvious failure can be seen at the right branch of the junction in the central part of Scene II (Fig. 12). The tram and trucks in the center of the road have been missed, since our vehicle detection module is only able to extract vehicles similar to passenger cars. Thus, this particular road axis has been shifted to the lower part of the road, where the implemented parts of the model fit much better.
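For reference, completeness and correctness in the sense of (Wiedemann and Ebner, 2000) relate matched to total axis lengths: completeness is the matched fraction of the reference, correctness the matched fraction of the extraction. The C# sketch below computes both; the polyline matching itself is assumed to have been done already, and the lengths in the example are invented so as to mirror the reported percentages.

using System;

static class ExtractionQuality
{
    public static (double completeness, double correctness) Measures(
        double matchedReferenceLength, double totalReferenceLength,
        double matchedExtractionLength, double totalExtractionLength)
    {
        double completeness = matchedReferenceLength / totalReferenceLength;
        double correctness  = matchedExtractionLength / totalExtractionLength;
        return (completeness, correctness);
    }

    static void Main()
    {
        // Hypothetical axis lengths in metres.
        var (comp, corr) = Measures(6900, 10000, 6650, 7000);
        Console.WriteLine($"completeness = {comp:P0}, correctness = {corr:P0}");
        // prints roughly: completeness = 69 %, correctness = 95 %
    }
}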
CHAPTER 5

5.0 SYSTEM STUDY:

5.1 FEASIBILITY STUDY:
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations involved in the feasibility analysis are:
 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

5.1.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

5.1.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources, since this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

5.1.3 SOCIAL FEASIBILITY:
The aspect of this study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.
5.2 SYSTEM TESTING:
Testing is the process of checking whether the developed system works according to the original objectives and requirements. It is a set of activities that can be planned in advance and conducted systematically. Testing is vital to the success of the system. System testing makes the logical assumption that if all the parts of the system are correct, the goal will be successfully achieved. Inadequate testing, or no testing at all, leads to errors that may not appear for many months. This creates two problems: the time lag between the cause and the appearance of the problem, and the effect of system errors on the files and records within the system. A small system error can conceivably explode into a much larger problem. Effective testing early in the process translates directly into long-term cost savings from a reduced number of errors. Another reason for system testing is its utility as a user-oriented vehicle before implementation. The best program is worthless if it does not produce correct outputs.

5.2.1 UNIT TESTING:
A program represents the logical elements of a system. For a program to run satisfactorily, it must compile and test data correctly and tie in properly with other programs. Achieving an error-free program is the responsibility of the programmer. Program testing checks for two types of errors: syntax and logic. A syntax error is a program statement that violates one or more rules of the language in which it is written. An improperly defined field dimension or omitted keywords are common syntax errors. These errors are shown through error messages generated by the computer. For logic errors, the programmer must examine the output carefully.
Unit testing test cases:
Description: Test for application window properties.
Expected result: All the properties of the windows are to be properly aligned and displayed.
Description: Test for mouse operations.
Expected result: All mouse operations like click, drag, etc. must perform the necessary operations without any exceptions.

5.2.2 FUNCTIONAL TESTING:
Functional testing of an application is used to prove that the application delivers correct results, using enough inputs to give an adequate level of confidence that it will work correctly for all sets of inputs. Functional testing also needs to prove that the application works for each client type and that the personalization functions work correctly. When a program is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. The process is facilitated by breaking the program into self-contained portions, each of which can be checked at certain key points. The idea is to compare program values against desk-calculated values to isolate the problems.
Functional testing test cases:
Description: Test for all modules.
Expected result: All peers should communicate in the group.
Description: Test for various peers in a distributed network framework, displaying all users available in the group.
Expected result: The result after execution should be accurate.

5.2.3 NON-FUNCTIONAL TESTING:
Non-functional software testing encompasses a rich spectrum of testing strategies, describing the expected results for every test case. It uses symbolic analysis techniques. This testing is used to check that an application will work in the operational environment. Non-functional testing includes:
 Load testing
 Performance testing
 Usability testing
 Reliability testing
 Security testing
5.2.4 LOAD TESTING:
An important tool for implementing system tests is a load generator. A load generator is essential for testing quality requirements such as performance and stress. A load can be a real load; that is, the system can be put under test to real usage by having actual telephone users connected to it. They will generate test input data for the system test.

Load testing test cases:
Description: It is necessary to ascertain that the application behaves correctly under load, when a "Server busy" response is received.
Expected result: Should designate another active node as the server.

5.2.5 PERFORMANCE TESTING:
Performance tests are utilized to determine the widely defined performance of the software system, such as the execution time associated with various parts of the code, response time, and device utilization. The intent of this testing is to identify weak points of the software system and quantify its shortcomings.
Performance testing test cases:
Description: This is required to assure that the application performs adequately, having the capability to handle many peers, delivering its results in the expected time and using an acceptable level of resources; it is an aspect of operational management.
Expected result: Should handle large input values and produce an accurate result in the expected time.

5.2.6 RELIABILITY TESTING:
Software reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time, and it is ensured in this testing. Reliability can be expressed as the ability of the software to reveal defects under testing conditions, according to the specified requirements. It is the probability that a software system will operate without failure under given conditions for a given time interval, and it focuses on the behavior of the software element. It forms a part of software quality control.
Reliability testing test cases:
Description: This is to check that the server is rugged and reliable and can handle the failure of any of the components involved in providing the application.
Expected result: In case of failure of the server, an alternate server should take over the job.

5.2.7 SECURITY TESTING:
Security testing evaluates system characteristics that relate to the availability, integrity, and confidentiality of the system's data and services. Users and clients should be encouraged to make sure their security needs are clearly stated at requirements time, so that the security issues can be addressed by the designers and testers.
Security testing test cases:
Description: Checking that the user identification is authenticated.
Expected result: In case of failure, the user should not be connected to the framework.
Description: Check whether group keys in a tree are shared by all peers.
Expected result: The peers should know the group key of their own group.

5.2.8 WHITE BOX TESTING:
White box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using the white box testing method, the software engineer can derive test cases. White box testing focuses on the inner structure of the software to be tested.
White box testing test cases:
Description: Exercise all logical decisions on their true and false sides.
Expected result: All the logical decisions must be valid.
Description: Execute all loops at their boundaries and within their operational bounds.
Expected result: All the loops must be finite.
Description: Exercise internal data structures to ensure their validity.
Expected result: All the data structures must be valid.

5.2.9 BLACK BOX TESTING:
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements of a program. Black box testing is not an alternative to white box techniques; rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors with a focus on the inputs, outputs, and principal functions of a software module. The starting point of black box testing is either a specification or code. The contents of the box are hidden, and the stimulated software should produce the desired results.
Black box testing test cases:
Description: To check for incorrect or missing functions.
Expected result: All the functions must be valid.
Description: To check for interface errors.
Expected result: The entire interface must function normally.
Description: To check for errors in data structures or external database access.
Expected result: Database update and retrieval must be performed correctly.
Description: To check for initialization and termination errors.
Expected result: All functions and data structures must be initialized properly and terminated normally.

All of the above system testing strategies are carried out during development; the documentation and institutionalization of the proposed goals and related policies are essential.
CHAPTER 6

6.0 SOFTWARE SPECIFICATION:

6.1 FEATURES OF .NET:
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There is no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript. The .NET Framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communication protocols so that components created in different languages can easily interoperate.
".NET" is also the collective name given to various software components built upon the .NET platform. These are both products (Visual Studio .NET and Windows .NET Server, for instance) and services (like Passport, .NET My Services, and so on).
6.2 THE .NET FRAMEWORK:
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the "execution engine" of .NET. It provides the environment within which programs run. Its most important features are:
 Conversion from a low-level, assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.
The following features of the .NET Framework are also worth describing:

Managed Code:
Managed code is code that targets .NET and contains certain extra information, "metadata", to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
Managed Data:
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic .NET, and JScript .NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you are using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications: data that does not get garbage collected but instead is looked after by unmanaged code.

Common Type System:
The CLR uses the Common Type System (CTS) to strictly enforce type safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code does not attempt to access memory that has not been allocated to it.

Common Language Specification:
The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them, called the Common Language Specification (CLS), has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
6.3 THE CLASS LIBRARY:
.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; it contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary. The set of classes is quite comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity. The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

6.4 LANGUAGES SUPPORTED BY .NET:
The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET Framework supports new versions of Microsoft's old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family. Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic also now supports structured exception handling and custom attributes, and it supports multithreading.
Visual Basic .NET is also CLS-compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET. Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language; Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework. C# is Microsoft's new language. It is a C-style language that is essentially "C++ for Rapid Application Development". Unlike other languages, its specification is just the grammar of the language; it has no standard library of its own and has instead been designed with the intention of using the .NET libraries as its own. Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages. ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment, and Visual Perl includes support for ActiveState's Perl Dev Kit. Other languages for which .NET compilers are available include:
 FORTRAN
 COBOL
 Eiffel
[Fig. 1: The .NET Framework: ASP.NET (XML Web services) and Windows Forms on top of the Base Class Libraries, the Common Language Runtime, and the Operating System.]

C#.NET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services. C#.NET is a CLS-compliant language: any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language, and we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS:
Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET, a finalizer (the counterpart of Visual Basic's Finalize procedure) is available. The finalizer is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the finalizer can be called only from the class it belongs to or from derived classes.

GARBAGE COLLECTION:
Garbage collection is another feature of C#.NET. The .NET Framework monitors allocated resources, such as objects and variables, and automatically releases memory for reuse by destroying objects that are no longer in use. In C#.NET, the garbage collector checks for objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.

OVERLOADING:
Overloading is another feature of C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
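The following short C# example illustrates these three points together: overloaded constructors and methods, and a finalizer that runs when the garbage collector destroys the object. The class and its members are invented purely for illustration.

using System;

class Resource
{
    readonly string name;

    public Resource() : this("unnamed") { }          // overloaded constructors
    public Resource(string name) { this.name = name; }

    ~Resource()                                      // finalizer: release resources here
    {
        Console.WriteLine($"Releasing {name}");
    }

    // Overloading: same method name, different parameter lists.
    public int Area(int side) => side * side;
    public int Area(int width, int height) => width * height;
}

static class OverloadingDemo
{
    static void Main()
    {
        var r = new Resource("image buffer");
        Console.WriteLine(r.Area(4));      // prints 16
        Console.WriteLine(r.Area(4, 3));   // prints 12
        // Once r becomes unreachable, the garbage collector reclaims the memory
        // and runs the finalizer at some unspecified later time.
    }
}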
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.

STRUCTURED EXCEPTION HANDLING:
C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use try…catch…finally statements to create exception handlers. Using try…catch…finally statements, we can create robust and effective exception handlers to improve the performance of our application.
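Both features are combined in the short, self-contained C# example below: a worker thread raises and handles an exception, and the finally block performs cleanup regardless of the outcome. The messages and the raised exception are illustrative only.

using System;
using System.Threading;

class ThreadingDemo
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            try
            {
                Console.WriteLine("worker: processing frame");
                throw new InvalidOperationException("no video source");
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine($"worker: handled error: {ex.Message}");
            }
            finally
            {
                // Runs whether or not an exception occurred.
                Console.WriteLine("worker: releasing resources");
            }
        });
        worker.Start();
        worker.Join();   // wait for the worker so both tasks complete
        Console.WriteLine("main: done");
    }
}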
6.5 THE .NET FRAMEWORK:
The .NET Framework is a computing platform that simplifies application development in the highly distributed environment of the Internet.

OBJECTIVES OF THE .NET FRAMEWORK:
1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.
3. To eliminate the performance problems of scripted or interpreted environments.
There are different types of applications, such as Windows-based applications and Web-based applications.
6.6 FEATURES OF SQL SERVER:
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services; the term repository is used only in reference to the repository engine within Meta Data Services.
A database consists of several types of objects, among them:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
6.7 TABLE:
A database is a collection of data about a specific topic.

VIEWS OF A TABLE:
We can work with a table in two views:
1. Design View
2. Datasheet View

Design View: To build or modify the structure of a table, we work in the table's design view. Here we can specify what kind of data the table will hold.

Datasheet View: To add, edit, or analyse the data itself, we work in the table's datasheet view.

QUERY:
A query is a question that is asked of the data. The database gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. The database either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
CHAPTER 7

7.0 APPENDIX:
7.1 SAMPLE SCREENSHOTS: [Screenshots omitted.]
7.2 SAMPLE SOURCE CODE: [Source code omitted.]
CHAPTER 8

8.1 CONCLUSION AND FUTURE WORK:
In summary, the results indicate that the presented system extracts roads even in complex environments. The robustness is, not least, a result of the detailed modelling of both extraction and evaluation components, accommodating the mandatory flexibility of the extraction. An obvious deficiency exists in the form of the missing detection capability for vehicle types such as buses and trucks and the (still) weak model for complex junctions. The next extension of our system, however, is the incorporation of multiple overlapping images in order to accumulate more evidence for lanes and roads in such difficult cases. The internal evaluation will greatly contribute to this, because different, possibly competing, extraction results have to be combined. Also for multiple images, we plan to treat the processing steps up to the generation of lanes purely as a 2D problem. The results for each image are then projected onto the DSM and fused there to achieve a consistent dataset. Then, new connections will be hypothesized and, again, verified in each image separately.
CHAPTER 9

9.1 REFERENCES:
Hinz, S. and Baumgartner, A., 2001. Vehicle Detection in Aerial Images Using Generic Features, Grouping, and Context. In: Pattern Recognition (DAGM 2001), Lecture Notes in Computer Science 2191, Springer-Verlag, pp. 45–52.
Hinz, S., Baumgartner, A. and Ebner, H., 2001a. Modelling Contextual Knowledge for Controlling Road Extraction in Urban Areas. In: IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas.
Hinz, S., Baumgartner, A., Mayer, H., Wiedemann, C. and Ebner, H., 2001b. Road Extraction Focussing on Urban Areas. In: (Baltsavias et al., 2001), pp. 255–265.
Laptev, I., Mayer, H., Lindeberg, T., Eckstein, W., Steger, C. and Baumgartner, A., 2000. Automatic Extraction of Roads from Aerial Images Based on Scale Space and Snakes. Machine Vision and Applications 12(1), pp. 22–31.
Mayer, H. and Steger, C., 1998. Scale-Space Events and Their Link to Abstraction for Road Extraction. ISPRS Journal of Photogrammetry and Remote Sensing 53(2), pp. 62–75.
Price, K., 2000. Urban Street Grid Description and Verification. In: 5th IEEE Workshop on Applications of Computer Vision, pp. 148–154.
Tupin, F., Bloch, I. and Maître, H., 1999. A First Step Toward Automatic Interpretation of SAR Images Using Evidential Fusion of Several Structure Detectors. IEEE Transactions on Geoscience and Remote Sensing 37(3), pp. 1327–1343.