Fig. 11.     Dynamic background removal by ground mask. There is an                                                       ...
Morphing And Texturing Based On The Transformation Between Triangle Mesh And Point …

A = { p(x, y, z) | p n^T − v_i n^T = 0, i ∈ (1, 2, 3), p ∈ B_in }

Figure 4. The models created by object morphing with different weights.
Figure 5. The process of 3D model texturing with 2…
Figure 6. The results of point-based texture mapping with α = β = 4 and different objects.
LAYERED LAYOUTS OF DIRECTED GRAPHS USING A GENETIC ALGORITHM

Chun-Cheng Lin1,∗, Yi-Ting Lin2, …

Sugiyama's Algorithm … Cycle Removal … Cyclic Leveling … Genetic Algorithm …

Table 1: The result after redrawing random graphs with 30 nodes and unlimited layout width.
CVGIP 2010 Part 3
CVGIP 2010: The 23rd IPPR Conference on Computer Vision, Graphics, and Image Processing

3-D Environment Model Construction and Adaptive Foreground Detection for Multi-Camera Surveillance System

Yi-Yuan Chen1†, Hung-I Pai2†, Yung-Huang Huang∗, Yung-Cheng Cheng∗, Yong-Sheng Chen∗, Jian-Ren Chen†, Shang-Chih Hung†, Yueh-Hsun Hsieh†, Shen-Zheng Wang†, San-Lung Zhao†
† Industrial Technology Research Institute, Taiwan 310, ROC
∗ Department of Computer and Information Science, National Chiao-Tung University, Taiwan 30010, ROC
E-mail: 1 yiyuan@itri.org.tw, 2 HIpai@itri.org.tw

Abstract— Conventional surveillance systems usually use multiple screens to display the acquired video streams, which makes it hard to keep track of targets because the spatial relationship among the screens is lost. This paper presents an effective and efficient surveillance system that integrates multiple video contents into a single comprehensive view. To visualize the monitored area, the proposed system uses planar patches to approximate the 3-D model of the monitored environment and displays the video contents of the cameras through dynamic texture mapping on the model. Moreover, a pixel-based shadow detection scheme for surveillance systems is proposed. After an offline training phase, the method obtains, for every pixel, the threshold that decides whether the pixel lies in a shadow part of the frame. These per-pixel thresholds are automatically adjusted and updated according to the received video streams. Moving objects are extracted accurately, with cast shadows removed, and are visualized through axis-aligned billboarding. The system provides security guards with better situational awareness of the monitored site, including the activities of the tracked targets.

Index Terms— Video surveillance system, planar patch modeling, axis-aligned billboarding, cast shadow removal

Fig. 1. A conventional surveillance system with multiple screens.

I. INTRODUCTION

Recently, video surveillance has experienced accelerated growth because of the continuously decreasing price and improving capability of cameras [1], and it has become an important research topic in the general field of security. Since the monitored regions are often wide and the fields of view of cameras are limited, multiple cameras are required to cover the whole area. In a conventional surveillance system, security guards in the control center monitor the secured area through a screen wall (Figure 1). It is difficult for the guards to keep track of targets because the spatial relationship between adjacent screens is not intuitively known. It is also tiresome to gaze at many screens simultaneously over a long period of time. Therefore, it is beneficial to develop a surveillance system that can integrate all the videos acquired by the monitoring cameras into a single comprehensive view.

Many approaches to integrated video surveillance have been proposed in the literature. Video billboards and video on fixed planes project camera views, including foreground objects, onto individual vertical planes in a reference map to visualize the monitored area [2]. In the fixed-billboard method, billboards face specified directions to indicate the capturing directions of the cameras, and the locations of the billboards indicate the positions of the cameras; however, the billboard contents become hard to perceive if the angles between the viewing direction and the billboard normals are too large. In the rotating-billboard method, when a billboard rotates to face the user's viewpoint, neither the camera orientation nor the captured area is preserved. In outdoor surveillance systems, an aerial or satellite photograph can be used as a reference map, and measurement equipment is used to build the 3-D environment [3]–[5]. Neumann et al. utilized an airborne LiDAR (Light Detection and Ranging) sensor system to collect 3-D geometry samples of a specific environment [6]. In [3], image registration seams the video onto the 3-D model. Furthermore, video projection, such as the video flashlight or virtual projector, is another way to display video in the 3-D model [4], [7].

However, multi-camera surveillance still has many open problems, such as object tracking across cameras and object re-identification. The detection of moving objects in video sequences is the first relevant step in the extraction of information in vision-based applications, and the quality of object segmentation matters: the more accurate the positions and shapes of the objects are, the more reliable identification and tracking will be. Cast shadow detection is thus an important issue for precise object segmentation and tracking. The characteristics of shadows are quite different in outdoor and indoor environments. In indoor environments, the main difficulties in separating the shadow from an object of interest are due to
the physical properties of the floor, the directions of the light sources, and additive noise. Based on brightness and chromaticity, several works decide thresholds on these features to roughly separate shadow from objects [8]–[10]. However, current local-threshold methods couple blob-level processing with pixel-level detection, and their performance is limited by the averaging effect of considering a large image region.

Two works remove shadow by updating the threshold over time to detect cast shadow in different scenes. Carmona et al. [11] propose a method that detects shadow using its properties in an angle-module space. Blob-level knowledge is used to identify shadow, reflection, and ghosts, and the work also proposes a way to update the thresholds so as to remove shadow at different positions in the scene. However, there are many undetermined parameters in the threshold update, and the optimal parameters are hard to find in practice. Martel-Brisson et al. [12] propose a method, called GMSM, which initially uses a Gaussian Mixture Model (GMM) to define the most stable Gaussian distributions as the shadow and background distributions. Since a background model is embedded in this method, more computation is needed for object segmentation if a more complex background model is included in the system. Besides, because every pixel has to be updated no matter how many objects are moving, the method wastes computation when there are few objects.

In this paper, we develop a 3-D surveillance system based on the integration of multiple cameras. We first use planar patches to build the 3-D environment model and then visualize the videos by dynamic texture mapping on the model. To obtain the relationship between the camera contents and the 3-D model, homography transformations are estimated for every pair of an image region in the video content and the corresponding area in the 3-D model. Before texture mapping, patches are automatically divided into smaller ones of appropriate sizes according to the environment. Lookup tables for the homography transformations are also built to accelerate the coordinate mapping during video visualization. Furthermore, a novel method to detect moving shadow is proposed. It consists of two phases. The first is an off-line training phase that determines the threshold of every pixel for judging whether the pixel is in shadow. In the second phase, the statistics of every pixel are updated over time, and the threshold is adjusted accordingly. In this way, a fixed parameter setting for shadow detection is avoided. The moving objects are segmented accurately from the background and are displayed via axis-aligned billboarding for better 3-D visual effects.

Fig. 2. The flowchart and components of the proposed 3-D surveillance system.

Fig. 3. Planar patch modeling for 3-D model construction. Red patches (top-left), green patches (top-right), and blue patches (bottom-left) represent the mapping textures in three cameras. The yellow point is the origin of the 3-D model. The 3-D environment model (bottom-right) is composed of horizontal and vertical patches from these three cameras.

II. SYSTEM CONFIGURATION

Figure 2 illustrates the flowchart of the proposed surveillance system. First, we construct lookup tables for the coordinate transformation from the 2-D images, acquired from the IP cameras deployed in the scene, to the 3-D model, by specifying corresponding points between the 3-D model and the 2-D images. Since the cameras are fixed, this configuration procedure needs to be done only once, beforehand. Then, in the on-line monitoring stage, all videos are integrated based on the 3-D model and visualized in a single view in which the foreground objects extracted from the images are displayed on billboards.

A. Image registration

For a point on a planar object, its coordinates on the plane can be mapped to the 2-D image through a homography, which is a transformation between two planar coordinate systems. A homography matrix H represents the
relationship between points on two planes:

s ct = H cs,  (1)

where s is a scale factor and cs and ct are a pair of corresponding points in the source and target patches, respectively. If there are at least four correspondences, no three of which are collinear in either patch, we can estimate H through a least-squares approach.

We regard cs as points of the 3-D environment model and ct as points of the 2-D image, and then calculate the matrix H that maps points from the 3-D model to the images. In the reverse order, we can also map points from the images to the 3-D model.

B. Planar patch modeling

Precise camera calibration is not an easy job [13]. In the virtual-projector methods [4], [7], the texture image will be misaligned with the model if the camera calibration or the 3-D model reconstruction has a large error. Alternatively, we develop a method that approximates the 3-D environment model with multiple individual planar patches and then renders the image content of every patch to generate a synthesized, integrated view of the monitored scene. In this way we can easily construct a surveillance system with a 3-D view of the environment.

Mostly we can model the environment with two basic building components, horizontal planes and vertical planes. The horizontal planes for hallways and floors are usually surrounded by doors and walls, which are modeled as vertical planes. Both kinds of planes are further divided into several patches according to the geometry of the scene (Figure 3). If the scene consists of simple structures, a few large patches can represent it well with low rendering cost. On the other hand, more and smaller patches are required to accurately render a complex environment, at the expense of higher computational cost.

In the proposed system, the 3-D rendering platform is developed on OpenGL, and each patch is divided into triangles before rendering. Since OpenGL fills triangles with texture by linear interpolation, which is not suitable for perspective projection, distortion appears in the rendering result. One could use a large number of triangles to reduce this kind of distortion, as shown in Figure 4, but this enlarges the computational burden and is therefore not feasible for real-time surveillance systems.

Fig. 4. The comparison of rendering layouts between different numbers and sizes of patches. A large distortion occurs if there are fewer patches for rendering (left). More patches make the rendering much better (right).

To make a compromise between visualization accuracy and rendering cost, we propose a procedure that automatically divides each patch into smaller ones and decides suitable patch sizes for accurate rendering (Figure 4). We use the mean-squared error to estimate the amount of distortion when rendering image patches:

MSE = (1 / (m × n)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} (I_ij − Ĩ_ij)²,  (2)

where I_ij is the intensity of the point obtained from the homography transformation, Ĩ_ij is the intensity of the point obtained from texture mapping, i and j are the row and column coordinates in the image, respectively, and m × n is the dimension of the patch in the 2-D image. In order to have a reference scale to quantify the amount of distortion, the peak signal-to-noise ratio is calculated as

PSNR = 10 log10 ( MAX_I² / MSE ),  (3)

where MAX_I is the maximum pixel value of the image. Typical values of the PSNR are between 30 and 50 dB, and in this work a value of about 20 dB to 25 dB is considered acceptable. We set a threshold T to determine the quality of texture mapping:

PSNR ≥ T.  (4)

If the PSNR of a patch is lower than T, the procedure divides it into smaller patches and repeats the process until the PSNR of every patch is greater than the given threshold T.

III. ON-LINE MONITORING

The proposed system displays the videos on the 3-D model. However, 3-D foreground objects such as pedestrians are projected onto the image frame and become 2-D objects; they would appear flattened on the floor or walls if the system displayed them on the planar patches. Furthermore, there can be ghosting effects when 3-D objects are in the overlapping areas of different camera views. We tackle this problem by separating and rendering the 3-D foreground objects in addition to the background environment.
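The registration and patch-quality steps of Section II (Eqs. (1)–(4)) can be sketched in a few lines. The sketch below is ours, not the authors' implementation: it estimates H with the standard direct linear transformation (DLT) least-squares method, which the paper does not spell out, and all function names are assumptions.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H such that s * ct = H @ cs (Eq. (1)) from >= 4 point
    correspondences, no three of which are collinear, via DLT least squares."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The right singular vector with the smallest singular value minimizes ||A h||.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                      # absorb the scale factor s

def map_points(H, pts):
    """Map an (N, 2) array of points through the homography and dehomogenize."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def psnr(rendered, reference, max_i=255.0):
    """Eqs. (2)-(3): mean-squared error between the texture-mapped patch and
    the homography-transformed patch, expressed as PSNR in dB."""
    diff = np.asarray(rendered, float) - np.asarray(reference, float)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_i ** 2 / mse)

def needs_subdivision(rendered, reference, threshold_db=22.0):
    """Eq. (4): a patch is split further while its PSNR stays below T."""
    return psnr(rendered, reference) < threshold_db
```

A patch whose PSNR falls below the threshold (20 to 25 dB in this paper) would be split into smaller patches and each sub-patch re-checked recursively.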
Fig. 5. The tracking results obtained by using different shadow thresholds while people stand at different positions on the floor: (a) Tr = 0.8, (b) Tr = 0.3. The angle threshold Tθ = 6° is the same for both.

A. Underlying assumption

Shadow is a type of foreground noise, and it can appear in any zone of the camera scene. In [8], each pixel belonging to a shadow blob is detected by two properties. First, the color vector of a pixel in a shadow blob has a direction similar to that of the background pixel at the same image position. Second, the magnitude of the color vector in the shadow is slightly smaller than that of the corresponding background color vector. As in [11], RGB or another color space can be transformed into a two-dimensional space, called the angle-module space. With Ic(x, y) the color vector of the pixel at position (x, y) in the current frame and Ib(x, y) the background color vector, the angle θ(x, y) between Ib(x, y) and Ic(x, y) and the magnitude ratio r(x, y) are defined as

θ(x, y) = arccos( Ic(x, y) · Ib(x, y) / (|Ic(x, y)| |Ib(x, y)| + ε) ),  (5)
r(x, y) = |Ic(x, y)| / |Ib(x, y)|,  (6)

where ε is a small number that avoids a zero denominator. In [11], a shadow pixel has to satisfy

Tθ < cos θ(x, y) < 1,  (7)
Tr < r(x, y) < 1,  (8)

where Tθ is the angle threshold and Tr is the module ratio threshold. As the demonstration in Figure 5 shows, the best shadow thresholds depend strongly on the position (pixel) in the scene, because of the complexity of the environment, the light sources, and the object positions. Therefore, we propose a method that automatically adjusts the shadow detection thresholds for each pixel. The threshold deciding whether a pixel is classified as shadow is determined from the necessary samples (data) collected over time. Only one parameter has to be manually initialized: Tθ(0), where 0 denotes the initial time. The method then updates the thresholds automatically and quickly. Our method is faster than the similar idea, the GMSM method [12], once a background model has been built. There are two major advantages in computation time. First, only the necessary samples are collected. Second, compared with [12], any background or foreground segmentation result can be combined with our method, so the background does not have to be determined again. In the indoor environment, we assume that the color in Eq. (7) is similar between shadow and background at a pixel, although this is not evident in outdoor sunshine; only the indoor case is considered in this paper.

B. Collecting samples

Samples I(x, y, t), where t is the time, are collected over a number of frames to decide the shadow area. In [12], all samples are collected, including the classification of background, shadow, and foreground, from the pixel values changing over time. But if a good background model has already been built and some initial foreground objects have been segmented, background samples are unnecessary; only the foreground and shadow samples If(x, y, t) need to be considered. Besides, since background pixels are dropped from the sample list, this saves computation and memory, especially in a scene with few objects. Further, If^Tθ(x, y, t) is obtained by dropping from If(x, y, t) the samples that do not satisfy inequality (7). The resulting sample data contain more shadow samples and fewer foreground samples, which also means the threshold r(x, y, t) can be derived more easily than from the samples of If(x, y, t) alone.

C. Deciding the module ratio threshold

The initial threshold Tθ(x, y, 0) is set according to experiment; in this case Tθ(x, y, 0) = cos(6°) is used as the initial value. After enough samples have been collected, the initial module ratio threshold Tr(x, y, 0) is decided by fast step minimum searching (FSMS). FSMS quickly separates the shadow from the foreground distribution of the collected samples described above. The details are as follows. The whole distribution is partitioned into windows of size w, and the height of each window is the sum of the samples in it. Besides the background peak, two other peaks are found. The threshold Tr is obtained by searching for the peak that is closest to, and smaller than, the average background value; the shadow threshold can then be found by searching for the minimum value, or the value closest to zero.

D. Updating the angle threshold

When a pixel satisfies both conditions (7) and (8) at the same time, it is classified as shadow. In other words, if a pixel Is(x, y) is actually a shadow pixel and is classified as a shadow candidate by FSMS, the pixel is required to also satisfy

0 ≤ cos θ(x, y, t) < Tθ(x, y, t).  (9)

Tθ′(x, y, t) can be decided by searching for the minimum cos θ of the pixels in Is obtained by FSMS; however, we propose a faster way to find Tθ′(x, y, t).
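The per-pixel angle-module features of Eqs. (5)–(8) vectorize naturally over a whole frame. The following is a minimal sketch under our own naming, not the authors' code; the thresholds may be scalars or per-pixel arrays (NumPy broadcasting handles both):

```python
import numpy as np

EPS = 1e-6  # the small number of Eq. (5) that avoids a zero denominator

def angle_module_features(frame, background):
    """Eqs. (5)-(6): per-pixel cos(theta) and magnitude ratio r between the
    current colour vector Ic(x, y) and the background colour vector Ib(x, y)."""
    ic = np.asarray(frame, dtype=float)        # H x W x 3
    ib = np.asarray(background, dtype=float)   # H x W x 3
    dot = np.sum(ic * ib, axis=2)
    norm_c = np.linalg.norm(ic, axis=2)
    norm_b = np.linalg.norm(ib, axis=2)
    cos_theta = dot / (norm_c * norm_b + EPS)  # cos(theta), Eq. (5)
    ratio = norm_c / (norm_b + EPS)            # r,          Eq. (6)
    return cos_theta, ratio

def shadow_mask(cos_theta, ratio, t_theta, t_r):
    """Eqs. (7)-(8): shadow keeps the background colour direction
    (cos(theta) > T_theta) but is darker than it (T_r < r < 1).
    cos(theta) < 1 holds automatically because of EPS."""
    return (cos_theta > t_theta) & (ratio > t_r) & (ratio < 1.0)
```

Here t_theta is the cosine form of the angle threshold, e.g. cos(6°) as in Section III-C, and t_r is the module ratio threshold found by FSMS.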
Fig. 6. A flowchart illustrating the whole method. The purple part is pixel-based.

Fig. 7. Orientation determination of the axis-aligned billboarding. L is the location of the billboard, E is the location projected vertically from the viewpoint onto the floor, and v is the vector from L to E. The normal vector n of the billboard is rotated according to the location of the viewpoint; Y is the rotation axis and φ is the rotation angle.

The number of samples classified as shadow or background at time t by FSMS is A^Tr_{b,s}(x, y, t). We define the ratio R(Tr) = A^Tr_{b,s} / A_{b,s,f}, where A_{b,s,f} is the number of all samples at position (x, y) and b, s, f denote background, shadow, and foreground, respectively. The threshold Tθ(x, y, t) can be updated to Tθ′(x, y, t) through R(Tr): the number of samples whose cos θ(x, y) values are larger than Tθ′(x, y, t) must equal A_{b,s}, that is,

R(Tθ′(x, y, t)) = R(Tr).  (10)

Besides, we add a perturbation δTθ to Tθ′(x, y, t). Since FSMS only finds a threshold within If^Tθ(x, y, t), if the initial threshold Tθ(x, y, 0) is set larger than the true threshold, the best updated threshold is equal to Tθ and never becomes smaller than Tθ, so the true angle threshold would never be found over time. To solve this problem, a perturbation is added to the updated threshold:

Tθ′(x, y, t) = Tθ′(x, y, t) − δTθ.  (11)

Since the new threshold Tθ′(x, y, t) has a smaller value and covers more samples, it can approach the true threshold over time. This perturbation also makes the method more adaptable to changes in the environment. The flowchart in Figure 6 illustrates the whole method.

E. Axis-aligned billboarding

In visualization, axis-aligned billboarding [14] constructs billboards in the 3-D model for moving objects, such as pedestrians, so that each billboard always faces the viewpoint of the user. A billboard has three properties: location, height, and direction. By assuming that all foreground objects are always moving on the floor, the billboards can be aligned to be perpendicular to the floor in the 3-D model. The 3-D location of a billboard is estimated by mapping the bottom-middle point of the foreground bounding box in the 2-D image through the lookup tables. The ratio between the height of the bounding box and the 3-D model determines the height of the billboard in the 3-D model. The relationship between the direction of a billboard and the viewpoint is defined as shown in Figure 7.

The following equations are used to calculate the rotation of the billboard:

Y = n × v,  (12)
φ = cos⁻¹(v · n),  (13)

where v is the vector from the location of the billboard, L, to the location E projected vertically from the viewpoint onto the floor, n is the normal vector of the billboard, Y is the rotation axis, and φ is the estimated rotation angle. The normal vector of the billboard is kept parallel to the vector v, so the billboard always faces the viewpoint of the operator.

F. Video content integration

If the fields of view of the cameras overlap, objects in the overlapping areas are seen by multiple cameras, and there may be ghosting effects when the videos from these cameras are displayed simultaneously. To deal with this problem, we use the 3-D locations of the moving objects to identify the correspondence of objects across views. When the operator chooses a viewpoint, the rotation angles of the corresponding billboards are estimated by the method presented above, and the system renders only the billboard whose rotation angle is the smallest among all the corresponding billboards, as shown in Figure 8.
Fig. 8. Removal of the ghosting effects. When we render the foreground object from one view, the object may appear in another view and thus cause a ghosting effect (bottom-left). Static background images without foreground objects are used to fill the area of the foreground objects (top). Ghosting effects are removed, and the static background images can be updated by background modeling.

Fig. 9. Determination of viewpoint switch. We divide the floor area depending on the fields of view of the cameras and associate each area to one of the viewpoints close to a camera. The viewpoint is switched automatically to the predefined viewpoint of the area containing more foreground objects.

G. Automatic change of viewpoint

The proposed surveillance system provides a target-tracking feature by determining and automatically switching the viewpoints. Before rendering, several viewpoints are specified in advance to be close to the locations of the cameras. During the switch from one viewpoint to another, the parameters of the viewpoints are gradually changed from the starting point to the destination point for a smooth view transition.

The switching criterion is defined as the number of blobs found in specific areas. First, we divide the floor area into several parts and associate them with each camera, as shown in Figure 9. When people move in the scene, the viewpoint is switched automatically to the predefined viewpoint of the area containing more foreground objects. We also make the billboard transparent by setting the alpha value of its texture, so the foreground objects appear with fitting shapes, as shown in Figure 10.

IV. EXPERIMENT RESULTS

We developed the proposed surveillance system on a PC with an Intel Core 2 Quad Q9550 processor, 2 GB RAM, and one nVidia GeForce 9800 GT graphics card. Three IP cameras with 352 × 240 pixel resolution are connected to the PC through the Internet. The frame rate of the system is about 25 frames per second.

In the monitored area, automated doors and elevators are specified as background objects, albeit their images do change when the doors open or close. These areas are modeled in background construction and are not visualized by billboards; the system uses a ground mask to indicate the region of interest. Only the moving objects located in the indicated areas are considered moving foreground objects, as shown in Figure 11.

The experimental results shown in Figure 12 demonstrate that the viewpoint can be chosen arbitrarily in the system, and operators can track targets with a closer view or any viewing direction by moving the virtual camera. Moreover, the moving objects always face the virtual camera by billboarding, and the operators can easily perceive the spatial information of the foreground objects from any viewpoint.

V. CONCLUSIONS

In this work we have developed an integrated video surveillance system that provides a single comprehensive view of the monitored areas to facilitate tracking moving targets through interactive control and immersive visualization. We utilize planar patches for 3-D environment model construction. The scenes from the cameras are divided into several patches according to their structures, and the numbers and sizes of patches are automatically determined to compromise between rendering effects and efficiency. To integrate video contents, homography transformations are estimated to relate image regions of the video contents to the corresponding areas of the 3-D model. Moreover, the proposed method to remove moving cast shadows can decide thresholds automatically by on-line learning; in this way, manual setting can be avoided. Compared with frame-based work, our method increases the accuracy of shadow removal. In visualization, the foreground objects are segmented accurately and displayed on billboards.

REFERENCES

[1] R. Sizemore, "Internet protocol/networked video surveillance market: Equipment, technology and semiconductors," Tech. Rep., 2008.
[2] Y. Wang, D. Krum, E. Coelho, and D. Bowman, "Contextualized videos: Combining videos with environment models to support situational understanding," IEEE Transactions on Visualization and Computer Graphics, 2007.
Fig. 10. Automatically switching the viewpoint for tracking targets. People walk in the lobby and the viewpoint of the operator automatically switches to keep track of the targets.

Fig. 11. Dynamic background removal by ground mask. There is an automated door in the scene (top-left) and it is visualized by a billboard (top-right). A mask covering the floor (bottom-left) is used to decide whether to visualize the foreground or not. With the mask, we can remove unnecessary billboards (bottom-right).

Fig. 12. Immersive monitoring at an arbitrary viewpoint. We can zoom out the viewpoint to monitor the whole surveillance area or zoom in to focus on a particular place.

[3] Y. Cheng, K. Lin, Y. Chen, J. Tarng, C. Yuan, and C. Kao, "Accurate planar image registration for an integrated video surveillance system," Computational Intelligence for Visual Intelligence, 2009.
[4] H. Sawhney, A. Arpa, R. Kumar, S. Samarasekera, M. Aggarwal, S. Hsu, D. Nister, and K. Hanna, "Video flashlights: real time rendering of multiple videos for immersive model visualization," in 13th Eurographics Workshop on Rendering, 2002.
[5] U. Neumann, S. You, J. Hu, B. Jiang, and J. Lee, "Augmented virtual environments (AVE): dynamic fusion of imagery and 3-D models," IEEE Virtual Reality, 2003.
[6] S. You, J. Hu, U. Neumann, and P. Fox, "Urban site modeling from lidar," Lecture Notes in Computer Science, 2003.
[7] I. Sebe, J. Hu, S. You, and U. Neumann, "3-D video surveillance with augmented virtual environments," in International Multimedia Conference, 2003.
[8] T. Horprasert, D. Harwood, and L. Davis, "A statistical approach for real-time robust background subtraction and shadow detection," IEEE ICCV, 1999.
[9] K. Chung, Y. Lin, and Y. Huang, "Efficient shadow detection of color aerial images based on successive thresholding scheme," IEEE Transactions on Geoscience and Remote Sensing, 2009.
[10] J. Kim and H. Kim, "Efficient region-based motion segmentation for a video monitoring system," Pattern Recognition Letters, 2003.
[11] E. J. Carmona, J. Martínez-Cantos, and J. Mira, "A new video segmentation method of moving objects based on blob-level knowledge," Pattern Recognition Letters, 2008.
[12] N. Martel-Brisson and A. Zaccarin, "Learning and removing cast shadows through a multidistribution approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007.
[13] S. Teller, M. Antone, Z. Bodnar, M. Bosse, S. Coorg, M. Jethwa, and N. Master, "Calibrated, registered images of an extended urban area," International Journal of Computer Vision, 2003.
[14] A. Fernandes, "Billboarding tutorial," 2005.
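Referring back to Section G, the switching criterion (pick the predefined viewpoint whose floor area contains the most foreground blobs) can be sketched as follows. This is an illustrative fragment only; the function name, rectangle representation, and tie-breaking behavior are assumptions, not the paper's interface:

```python
def choose_viewpoint(blob_positions, areas, current):
    """Pick the predefined viewpoint whose floor area holds the most blobs.

    blob_positions: list of (x, y) blob locations on the floor plane.
    areas: dict mapping viewpoint id -> axis-aligned floor rectangle
           (xmin, ymin, xmax, ymax), one per camera as in Fig. 9.
    current: viewpoint id kept when no area contains any blob.
    """
    counts = {vp: 0 for vp in areas}
    for (x, y) in blob_positions:
        for vp, (x0, y0, x1, y1) in areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[vp] += 1
    best = max(counts, key=counts.get)
    # Hold the current viewpoint when the scene is empty
    return best if counts[best] > 0 else current
```

A real system would also interpolate the camera parameters between the old and new viewpoints for the smooth transition described in the text.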
Morphing And Texturing Based On The Transformation Between Triangle Mesh And Point

Wei-Chih Hsu
Department of Computer and Communication Engineering, National Kaohsiung First University of Science and Technology, Kaohsiung, Taiwan

Wu-Huang Cheng
Institute of Engineering Science and Technology, National Kaohsiung First University of Science and Technology, Kaohsiung, Taiwan
u9715901@nkfust.edu.tw

Abstract—This research proposes a methodology for transforming triangle mesh objects into point-based objects, together with its applications. Considering cost and program functions, the experiments in this paper adopt C++ instead of 3D computer graphics software to create the point cloud from meshes. The method employs the mesh bounded area and planar dilation to construct the point cloud of a triangle mesh. Two point-based applications are addressed in this research: 3D model generation can use point-based object morphing to simplify the computing structure, and texture mapping can use the relation between 2D image pixels and 3D planars. The experimental results illustrate several properties of point-based modeling, of which flexibility and scalability are the biggest advantages. The main idea of this research is to explore more sophisticated methods of 3D object modeling from point-based objects.

Keywords—point-based modeling; triangle mesh; texturing; morphing

I. INTRODUCTION

In recent computer graphics research, form•Z, Maya, 3ds Max, Blender, LightWave, Modo, solidThinking, and other 3D computer graphics packages are frequently adopted tools. For example, Maya is very popular software that includes many powerful and efficient functions for producing results. The diverse functions of such software can increase working efficiency, but the methodology design must follow the software's specific rules, and the cost is usually high. Using C++ as the research tool has many advantages, especially in data information collection: powerful functions can be created from C language instructions, parameters, and C++ object orientation. The more completely the 3D data can be abstracted, the less limited the analysis that can be produced.

The polygon mesh is widely used to represent 3D models but has some drawbacks in modeling. The unsmooth surface of combined meshes is one of them; estimating the vertices of objects and constructing each vertex set of a mesh are further factors of modeling inefficiency. Point-based modeling is a solution that conquers some disadvantages of mesh modeling. It is based on point primitives, and no structure relating each point to another is needed. To simplify point-based data, marching cubes and Delaunay triangulation can be employed to transform a point-based model into a polygon mesh. Mark Pauly has published many related studies on point-based modeling in international journals: [1] proposed a method to represent multi-scale surfaces, and M. Müller et al. [2] developed a method for modeling and animation showing that point-based representations have flexible properties.

Morphing can be based on geometric, shape, or other features. Mesh-based morphing sometimes involves geometry, mesh structure, and other feature analysis. [3] demonstrated a method to edit free-form surfaces based on geometry; the method applies complex computing to deal with topology, curved-face properties, and triangulation. [4] not only divided objects into components, but also used the components in local-level and global-level morphing. [5] adopted two-model morphing with mesh comparison and merging to generate a new model. These methods involve complicated data structures and computing. This research illustrates a simple approach with less feature analysis that creates new models by using regular points to morph two or more objects.

Texturing is essential in rendering a 3D model. In virtual reality, the goal of texture mapping is to be as similar to the real object as possible; in special effects, exaggerated texturing is more suitable. [7] built a mesh atlas for texturing, in which the texture atlas coordinates, considered together with the triangle mesh structure, were mapped to the 3D model. [6] used the conformal equivalence of triangle meshes to find a flat mesh for texture mapping; this method is more comprehensible and easy to implement.

The rest of this paper is arranged as follows: transforming a triangle mesh into a point set for modeling is addressed in Section II, Section III introduces point-based morphing for model creation, point-based texture mapping is addressed in Section IV, and Section V concludes.

II. TRANSFORMING TRIANGLE MESH INTO POINT SET

In order to exploit the advantages of a point-based model, transforming the triangle mesh into points is the first step. The point set can be estimated by using the three normal bound lines of the triangle mesh. The normal, denoted by n, can be calculated from the three triangle vertices. The set of points in the triangle area is denoted by B_in, A denotes the triangle mesh area, a point on the 3D space planar is presented by p with coordinates (x, y, z), v_i (i = 1, 2, 3) denotes the three triangle vertices of the mesh, and v̄ denotes the mean of the three triangle vertices. The formula that presents the triangle area is described below.
    A = { p(x, y, z) | p nᵀ − v_i nᵀ = 0, i ∈ {1, 2, 3}, p ∈ B_in }

    B_in = { p(x, y, z) | f_(i,j)(p) × f_(i,j)(v̄) > 0 }

    f_(i,j)(p) = r × a − b + s

    r = (b_j − b_i) / (a_j − a_i),   s = b_i − r × a_i

    i, j = 1, 2, 3;   a, b = x, y, z;   i < j;   a < b

The experiments use object files in the Wavefront file format (.obj) from the NTU 3D Model Database ver. 1 of National Taiwan University. The process of transforming a triangle mesh into a point-based model is shown in Figure 1. It is clear that some areas have incomplete point sets, shown in the red rectangles of Figure 1. The planar dilation process is employed to refine the failed areas.

The planar dilation process uses 26-connected planars to refine the spots left in the area. The first half of Figure 2 shows the 26 positions of connected planars. The condition tested is whether any planar and its 26 neighbor positions belong to the object planar; the main purpose of estimating the object planar is to verify whether this condition is true. The result in the second half of Figure 2 reveals the efficiency of the planar dilation process.

Figure 1. The process of transforming a triangle mesh into a point-based model.

Figure 2. Planar dilation process.

III. POINT-BASED MORPHING FOR MODEL CREATING

Greater flexibility in combining objects is one property of point-based modeling. No matter the shape or category of the objects, the method of this study can put them into the morphing process to create new objects.

The morphing process includes three steps. Step one equalizes the objects. Step two calculates each normal point of the objects in the morphing process. Step three estimates each point of the target object by using the same normal point of the objects with the formula described below:

    o_t = p_r1 o_1 + p_r2 o_2 + ⋯ + (1 − Σ_{i=1}^{n−1} p_ri) o_n

    0 ≤ p_r1, p_r2, ⋯, p_r(n−1) ≤ 1,   Σ_{i=1}^{n} p_ri = 1

o_t presents each target point of the morphing, and o_i is an object in the morphing process. p_ri denotes the effect weight of object i in the morphing process, and i indicates the index of the object. The appearance of the new model generated from morphing depends on which objects were chosen and on the value of each object's weight as well. The experiments in this research use two objects, therefore i = 1 or 2 and n = 2. The results are shown in Figure 3: the first row is the morphing of a simple flat board and a character, and the second row shows the freedom of object selection in point-based modeling, because two totally different objects can be put into morphing and still produce satisfactory results. The models created by object morphing with different weights can be seen in Figure 4.

IV. POINT-BASED TEXTURE MAPPING

Texture mapping is very plain in this research method. It uses a texture matrix to map the 3D model to the 2D image pixels, using the concept of transforming a 2D image into 3D. Assume the 3D space is divided into α × β blocks, where α is the number of rows and β is the number of columns; the length, width, and height of the 3D space are h × h × h. In the following, (X, Y) and (x, y, z) denote the image coordinates and the 3D model coordinates, respectively. The texture of each block is assigned by a texture cube, and it
is made by the 2D image, as shown in the middle image of the first row of Figure 5. The process can be expressed by the formula below:

    A tᵀ = cᵀ

    t = [ x mod h/α,  y mod h/β,  z mod h/β ],   c = [ X, Y ]

    A = [ α   0            0 ]
        [ 0   β(h − z)/y   0 ]

A denotes the texture transforming matrix, t denotes the current position in the 3D model, and c denotes the image pixel content at the current position.

The experimental results are shown in the second rows of Figures 5 and 6. The results with α = β = 2 are shown in the second row of Figure 5; the setting α = β = 4 creates the images shown in the first row of Figure 6. The last-row images of Figure 6 indicate that the proposed texture mapping method can be applied to any point-based model.

V. CONCLUSION

In sum, this research focuses on point-based modeling applications using C++ instead of convenient facilities or other computer graphics software. The methodologies developed on point-based models feature simple data structures and less complex computing. Moreover, the methods were compiled into two applications: morphing and texture mapping. The experimental results confirm the scalability and flexibility of the proposed methodologies.

REFERENCES

[1] Mark Pauly, "Point-Based Multiscale Surface Representation," ACM Transactions on Graphics, Vol. 25, No. 2, pp. 177–193, April 2006.
[2] M. Müller, R. Keiser, A. Nealen, M. Pauly, M. Gross, and M. Alexa, "Point Based Animation of Elastic, Plastic and Melting Objects," Eurographics/ACM SIGGRAPH Symposium on Computer Animation, pp. 141–151, 2004.
[3] Theodoris Athanasiadis, Ioannis Fudos, and Christophoros Nikou, "Feature-based 3D Morphing based on Geometrically Constrained Sphere Mapping Optimization," SAC '10, Sierre, Switzerland, pp. 1258–1265, March 22–26, 2010.
[4] Yonghong Zhao, Hong-Yang Ong, Tiow-Seng Tan, and Yongguan Xiao, "Interactive Control of Component-based Morphing," Eurographics/SIGGRAPH Symposium on Computer Animation, pp. 340–385, 2003.
[5] Kosuke Kaneko, Yoshihiro Okada, and Koichi Niijima, "3D Model Generation by Morphing," IEEE Computer Graphics, Imaging and Visualisation, 2006.
[6] Boris Springborn, Peter Schröder, and Ulrich Pinkall, "Conformal Equivalence of Triangle Meshes," ACM Transactions on Graphics, Vol. 27, No. 3, Article 77, August 2008.
[7] Nathan A. Carr and John C. Hart, "Meshed Atlases for Real-Time Procedural Solid Texturing," ACM Transactions on Graphics, Vol. 21, No. 2, pp. 106–131, April 2002.

Figure 3. The results of point-based modeling using different objects morphing.
Figure 4. The models created by objects morphing with different weights.

Figure 5. The process of 3D model texturing with the 2D image shown in the first row and the results shown in the second row.
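The weighted point blend of Section III (o_t = Σ_i p_ri o_i with the weights summing to 1) reduces to a per-point convex combination once the objects have been equalized. A minimal illustrative sketch (names and list-of-tuples interface are my own, not the paper's C++ code):

```python
def morph_points(objects, weights):
    """Blend equalized point sets: o_t = sum_i p_ri * o_i.

    objects: list of point sets, each a list of (x, y, z) tuples; point k
             of every set is assumed to correspond after the paper's
             'equalize' step.
    weights: per-object weights p_ri with sum(weights) == 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    n_pts = len(objects[0])
    out = []
    for k in range(n_pts):
        # Convex combination of the k-th point of every object
        x = sum(w * obj[k][0] for w, obj in zip(weights, objects))
        y = sum(w * obj[k][1] for w, obj in zip(weights, objects))
        z = sum(w * obj[k][2] for w, obj in zip(weights, objects))
        out.append((x, y, z))
    return out
```

Sweeping the weight of one object from 0 to 1 reproduces the progression of intermediate models shown in Figure 4.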
Figure 6. The results of point-based texture mapping with α = β = 4 and different objects.
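The block-wise mapping A tᵀ = cᵀ of Section IV can be sketched as below. This follows the matrix A exactly as printed in the paper (including its dependence on y and z), so it is a literal transcription rather than a verified rendering pipeline; the function name is hypothetical and y must be nonzero:

```python
def texel_for_point(x, y, z, h, alpha, beta):
    """Map a 3D model point to a 2D texture pixel via c^T = A t^T.

    t = [x mod h/alpha, y mod h/beta, z mod h/beta], and
    A = [[alpha, 0, 0], [0, beta*(h - z)/y, 0]] as printed in Section IV,
    for an h x h x h space divided into alpha x beta blocks.
    """
    t = (x % (h / alpha), y % (h / beta), z % (h / beta))
    X = alpha * t[0]
    Y = (beta * (h - z) / y) * t[1]  # y != 0 assumed
    return X, Y
```

Each point thus samples the repeated texture cube of its block, which matches the tiled appearance visible in Figures 5 and 6.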
LAYERED LAYOUTS OF DIRECTED GRAPHS USING A GENETIC ALGORITHM

Chun-Cheng Lin1,*, Yi-Ting Lin2, Hsu-Chun Yen2,†, Chia-Chen Yu3
1 Dept. of Computer Science, Taipei Municipal University of Education, Taipei, Taiwan 100, ROC
2 Dept. of Electrical Engineering, National Taiwan University, Taipei, Taiwan 106, ROC
3 Emerging Smart Technology Institute, Institute for Information Industry, Taipei, Taiwan, ROC

* Research supported in part by National Science Council under grant NSC 98-2218-E-151-004-MY3.
† Research supported in part by National Science Council under grant NSC 97-2221-E-002-094-MY3.

ABSTRACT

With layered layouts of graphs (in which nodes are distributed over several layers and all edges are directed downward as much as possible), users can easily understand the hierarchical relation of directed graphs. The well-known method for generating layered layouts proposed by Sugiyama includes four steps, each of which is associated with an NP-hard optimization problem. It is observed that the four optimization problems are not independent, in the sense that their respective aesthetic criteria may contradict each other. That is, it is impossible to obtain an optimal solution satisfying all aesthetic criteria at the same time. Hence, the choice for each criterion becomes a very important problem. In this paper, we propose a genetic algorithm that models the first three steps of Sugiyama's algorithm, in the hope of simultaneously considering the first three aesthetic criteria. Our experimental results show that the proposed algorithm can make layered layouts that satisfy humans' aesthetic viewpoint.

Keywords: Visualization, genetic algorithm, graph drawing.

1. INTRODUCTION

Drawings of directed graphs have many applications in our daily lives, including manuals, flow charts, maps, posters, schedulers, UML diagrams, etc. It is important that a graph be drawn "clearly", such that users can understand and get information from the graph easily. This paper focuses on layered layouts of directed graphs, in which nodes are distributed on several layers and in general edges should point downward, as shown in Figure 1(b). With this layout, users can easily trace each edge from top to bottom and understand the priority or order information of the nodes clearly.

Figure 1: The layered layout of a directed graph.

Specifically, we use the following criteria to estimate the quality of a directed graph layout: to minimize the total length of all edges; to minimize the number of edge crossings; to minimize the number of edges pointing upward; and to draw edges as straight as possible. Sugiyama [9] proposed a classical algorithm for producing layered layouts of directed graphs, consisting of four steps: cycle removal, layer assignment, crossing reduction, and assignment of horizontal coordinates, each of which addresses the problem of achieving one of the above criteria, respectively. Unfortunately, the first three problems have been proven to be NP-hard when the width of the layout is restricted. There has been
a great deal of work with respect to each step of Sugiyama's algorithm in the literature.

Drawing layered layouts by four independent steps can be executed efficiently, but it may not always obtain nice layouts, because preceding steps may restrain the results of subsequent steps. For example, four nodes assigned to two levels after the layer assignment step lead to an edge crossing in Figure 2(a), so the edge crossing cannot be removed during the subsequent crossing reduction step, which only moves each node's relative position on each layer; but in fact the edge crossing can be removed, as drawn in Figure 2(b). Namely, the crossing reduction step is restricted by the layer assignment step. Such a negative effect exists not only for these two particular steps but for every other preceding/subsequent step pair.

Figure 2: Different layouts of the same graph. (a) (b)

Even if one could obtain the optimal solution for each step, those "optimal solutions" may not be the real optimal solution, because those locally optimal solutions are restricted by their respective preceding steps. Since we cannot obtain an optimal solution satisfying all criteria at the same time, we have to make a choice in a trade-off among all criteria.

For the above reasons, the basic idea of our method for drawing layered layouts is to combine the first three steps together to avoid the restrictions due to criterion trade-offs. Then we use a genetic algorithm to implement our idea. In the literature, there has been some work on producing layered layouts of directed graphs using genetic algorithms, e.g., using a genetic algorithm to reduce edge crossings in bipartite graphs [7] or in entire acyclic layered layouts [6], modifying nodes in a subgraph of the original graph on a layered graph layout [2], drawing common layouts of directed or undirected graphs [3] [11], and drawing layered layouts of acyclic directed graphs [10].

Note that the algorithm for drawing layered layouts of acyclic directed graphs in [10] also combined three steps of Sugiyama's algorithm, but drawing layered layouts of acyclic and cyclic directed graphs is quite different. In acyclic graphs, one does not need to solve cycle removal. If the algorithm does not restrict the layers to a fixed width, one also does not need to solve the limited layer assignment problem; note that unlimited-width layer assignment is not NP-hard, because the layers of nodes can be assigned by a topological ordering. The algorithm in [10] only focuses on minimizing the number of edge crossings and making the edges as straight as possible. Although it also combined three steps of Sugiyama's algorithm, it only contained one NP-hard problem. In contrast, our algorithm combines three NP-hard problems: cycle removal, limited-width layer assignment, and crossing reduction.

In addition, our algorithm has the following advantages. More customized restrictions on layered layouts can be added in our algorithm; for example, some nodes should be placed to the left of some other nodes, the maximal layer number should be less than or equal to a certain number, etc. Moreover, the weighting ratio of each optimization criterion can be adjusted for different applications. According to our experimental results, our genetic algorithm can effectively adjust the ratio between the number of edge crossings and the total edge length. That is, our algorithm can make layered layouts more appealing to humans' aesthetic viewpoint.

2. PRELIMINARIES

The frameworks of three different algorithms for layered layouts of directed graphs (i.e., Sugiyama's algorithm, the cyclic leveling algorithm, and our algorithm) are illustrated in Figure 3(a)–(c), respectively. Sugiyama's algorithm consists of four steps, as mentioned previously; the other two algorithms are based on Sugiyama's algorithm, in which the cyclic leveling algorithm combines the first two steps, while our genetic algorithm combines the first three steps. Furthermore, a barycenter algorithm is applied to the crossing reduction step of the cyclic leveling and our genetic algorithms, and the priority layout method is applied to the x-coordinate assignment step.
Figure 3: Comparison among different algorithms. (a) Sugiyama: cycle removal; layer assignment; crossing reduction; x-coordinate assignment. (b) Cyclic Leveling: cycle removal and layer assignment combined; crossing reduction (barycenter algorithm); x-coordinate assignment (priority layout method). (c) Ours: cycle removal, layer assignment, and crossing reduction combined; x-coordinate assignment.

Figure 4: Two kinds of crossings. (a) An edge crossing. (b) An edge-node crossing.

2.1. Basic Definitions

A directed graph is denoted by G(V, E), where V is the set of nodes and E is the set of edges. An edge e is denoted by e = (v1, v2) ∈ E, where v1, v2 ∈ V; edge e is directed from v1 to v2. A so-called layered layout is defined by the following conditions: (1) Let the number of layers in the layout be denoted by n, where n ∈ N and n ≥ 2; the n-layer layout is denoted by G(V, E, n). (2) V is partitioned into n subsets: V = V1 ∪ V2 ∪ V3 ∪ ⋯ ∪ Vn, where Vi ∩ Vj = ∅, ∀i ≠ j; nodes in Vk are assigned to layer k, 1 ≤ k ≤ n. (3) A sequence ordering σi of Vi is given for each i (σi = v1 v2 v3 ⋯ v|Vi| with x(v1) < x(v2) < ⋯ < x(v|Vi|)). The n-layer layout is then denoted by G(V, E, n, σ), where σ = (σ1, σ2, ⋯, σn) with y(σ1) < y(σ2) < ⋯ < y(σn).

An n-layer layout is called "proper" when it further satisfies the following condition: E is partitioned into n − 1 subsets: E = E1 ∪ E2 ∪ E3 ∪ ⋯ ∪ En−1, where Ei ∩ Ej = ∅, ∀i ≠ j, and Ek ⊂ Vk × Vk+1, 1 ≤ k ≤ n − 1.

An edge crossing (assuming that the layout is proper) is defined as follows. Consider two edges e1 = (v11, v12), e2 = (v21, v22) ∈ Ei, in which v11 and v21 are the j1-th and the j2-th nodes in σi, respectively, and v12 and v22 are the k1-th and the k2-th nodes in σi+1, respectively. If either j1 < j2 and k1 > k2, or j1 > j2 and k1 < k2, then there is an edge crossing between e1 and e2 (see Figure 4(a)).

An edge-node crossing is defined as follows. Consider an edge e = (v1, v2), where v1, v2 ∈ Vi; v1 and v2 are the j-th and the k-th nodes in σi, respectively. W.l.o.g., assuming that j > k, there are (j − k − 1) edge-node crossings (see Figure 4(b)).

2.2. Sugiyama's Algorithm

Sugiyama's algorithm [9] consists of four steps. (1) Cycle removal: If the input directed graph is cyclic, we reverse as few edges as possible such that the input graph becomes acyclic. This problem can be stated as the maximum acyclic subgraph problem, which is NP-hard. (2) Layer assignment: Each node is assigned to a layer so that the total vertical length of all edges is minimized. If an edge spans at least two layers, then dummy nodes should be introduced on each crossed layer. If the maximum width is bounded by a value greater than or equal to three, the problem of finding a layered layout with minimum height is NP-complete. (3) Crossing reduction: The relative positions of the nodes on each layer are reordered to reduce edge crossings. Even if we restrict the problem to bipartite (two-layer) graphs, it is still an NP-hard problem. (4) x-coordinate assignment: The x-coordinates of nodes and dummy nodes are modified such that all the edges of the original graph structure are as straight as possible. This step includes two objective functions: to make all edges as close to vertical lines as possible, and to make all edge paths as straight as possible.

2.3. Cyclic Leveling Algorithm

The cyclic leveling algorithm (CLA) [1] combines the first two steps of Sugiyama's algorithm, i.e., it focuses on minimizing the number of edges pointing upward and the total vertical length of all edges. It introduces a number called span that represents the number of edges pointing upward and the total vertical length of all edges at the same time.

The span number is defined as follows. Consider a directed graph G = (V, E). Given k ∈ N, define a layer assignment function φ : V → {1, 2, ⋯, k}. Let span(u, v) = φ(v) − φ(u) if φ(u) < φ(v); span(u, v) = φ(v) − φ(u) + k otherwise. For each edge e = (u, v) ∈ E, denote span(e) = span(u, v), and span(G) = Σ_{e∈E} span(e). In brief, span means the sum of the vertical lengths of all edges plus the penalty for edges pointing upward or horizontally, provided the maximum height of the layout is given.

The main idea of the CLA is: if a node causes a high increase in span, then the layer position of
the node would be determined later. In the algorithm, a distance function is defined to decide which nodes should be assigned first. There are four such functions, but only one can be chosen and applied to all the nodes:

(1) Minimum Increase in Span = min_{φ(v) ∈ {1,...,k}} span(E(v, V'));
(2) Minimum Average Increase in Span (MST_MIN_AVG) = min_{φ(v) ∈ {1,...,k}} span(E(v, V')) / |E(v, V')|;
(3) Maximum Increase in Span = 1 / δ_MIN(v);
(4) Maximum Average Increase in Span = 1 / δ_MIN_AVG(v).

From the experimental results in [1], using MST_MIN_AVG as the distance function yields the best results. Therefore, our algorithm is compared with the CLA using MST_MIN_AVG in the experimental section.

2.4. Barycenter Algorithm

The barycenter algorithm is a heuristic for reducing the number of edge crossings between two layers. The main idea is to order the nodes on each layer by their barycentric values. Assuming that node u is located on layer i (u ∈ V_i), the barycentric value of u is defined as

bary(u) = (1/|N(u)|) Σ_{v ∈ N(u)} π(v),

where N(u) is the set of u's connected nodes on the layer below or above u (V_{i-1} or V_{i+1}), and π(v) is the order of v in σ_{i-1} or σ_{i+1}. The algorithm reorders the relative positions of all nodes by their barycentric values, sweeping from layer 2 to layer n and then from layer n-1 to layer 1.

2.5. Priority Layout Method

The priority layout method solves the x-coordinate assignment problem. Its idea is similar to that of the barycenter algorithm: it assigns the x-coordinate position of each node, layer by layer, according to the priority value of each node. At first, the x-coordinate positions of the nodes in each layer are given by x_k = x_0 + k, where x_0 is a given integer and x_k is the x-coordinate of the k-th element of σ_i. Next, the nodes' x-coordinate positions are adjusted in the order from layer 2 to layer n, from layer n-1 to layer 1, and from layer n/2 to layer n. The adjustments of node positions from layer 2 to layer n are called down procedures, while those from layer n-1 to layer 1 are called up procedures. Based on the above, the priority value of the k-th node v on layer p is defined as follows: if node v is a dummy node, then priority(v) = B - |k - m/2|, where B is a given big number and m is the number of nodes on layer p; otherwise, priority(v) is the number of connected nodes of v on layer p - 1 (resp., p + 1) for down procedures (resp., up procedures). Moreover, the x-coordinate position of each node v is set to the average x-coordinate position of the connected nodes of v on layer p - 1 (resp., p + 1) for down procedures (resp., up procedures).

2.6. Genetic Algorithm

The genetic algorithm (GA) [5] is a stochastic global search method that has proved successful for many kinds of optimization problems. It works with a population of candidate solutions and tries to optimize the answer by means of three basic operations: selection, crossover, and mutation. For more details on GAs, readers are referred to [5].

3. OUR METHOD

The major issue in drawing layered layouts of directed graphs is that, within the first three steps of Sugiyama's algorithm, the result of a preceding step may restrict that of the subsequent step. To solve this, we design a GA that combines the first three steps of Sugiyama's algorithm. Figure 5 shows the flow chart of our GA; that is, our method consists of a GA and an x-coordinate assignment step. Note that the barycenter algorithm and the priority layout method are also used in our method: the former is used inside our GA to reduce edge crossings, while the latter is applied in the x-coordinate assignment step.

Figure 5: The flow chart of our genetic algorithm (boxes: Initialization, Assign dummy nodes, Barycenter, Fine tune, Selection, Crossover, Mutation, Remove dummy nodes, Terminate?, Draw the best chromosome).
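As a concrete illustration, the barycenter sweep of Section 2.4 might be sketched as follows. The helper names and data layout are our own; the paper specifies only the formula bary(u) = (1/|N(u)|) Σ π(v) and the layer-2-to-n, then layer-(n-1)-to-1 sweep order.

```python
def barycenter_order(layer, ref_layer, neighbors):
    """Reorder `layer` by the mean position of each node's neighbors
    in `ref_layer` (the adjacent layer above or below)."""
    pos = {v: i for i, v in enumerate(ref_layer)}          # pi(v): order of v in ref_layer
    def bary(u):
        nbrs = [v for v in neighbors[u] if v in pos]
        if not nbrs:
            return layer.index(u)                          # keep isolated nodes in place
        return sum(pos[v] for v in nbrs) / len(nbrs)       # bary(u)
    return sorted(layer, key=bary)                         # stable sort preserves ties

def barycenter_sweeps(layers, neighbors):
    """One down sweep (layer 2..n) followed by one up sweep (layer n-1..1)."""
    for i in range(1, len(layers)):                        # down: order by the layer above
        layers[i] = barycenter_order(layers[i], layers[i - 1], neighbors)
    for i in range(len(layers) - 2, -1, -1):               # up: order by the layer below
        layers[i] = barycenter_order(layers[i], layers[i + 1], neighbors)
    return layers
```

For example, with layers [["a", "b"], ["c", "d"]] and edges a–d and b–c, the single crossing is removed by reordering the second layer to ["d", "c"].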
3.1. Definitions

For arranging nodes on layers, if the relative horizontal positions of the nodes are determined, then their exact x-coordinate positions are also determined by the priority layout method. Hence, in the following we consider only the relative horizontal positions of nodes, and each node is arranged on a grid. We use a GA to model the layered layout problem, so we define some basic elements:

Population: A population (generation) includes many chromosomes; the number of chromosomes depends on the initial population size setting.

Chromosome: One chromosome represents one graph layout, in which the absolute position of each (dummy) node on the grid is recorded. Since the adjacencies of nodes and the directions of edges are not altered by our GA, we do not need to record this information in chromosomes. On the grid, a row represents a layer; a column represents the order of nodes on the same layer, and the nodes on the same layer are always placed consecutively. The best-chromosome window reserves the best several chromosomes over all preceding generations; the best-chromosome window size ratio is the ratio of the best-chromosome window size to the population size.

Fitness Function: The "fitness" value in our definition is, by abuse of terminology, the penalty for the bad quality of a chromosome; that is, a larger "fitness" value implies a worse chromosome. Hence, our GA aims to find the chromosome with the minimal "fitness" value. The aesthetic criteria used to determine the quality of chromosomes (layouts) are as follows (these criteria are taken from [8] and [9]):

fitness value = Σ_{i=1}^{7} C_i × F_i,

where the C_i (1 ≤ i ≤ 7) are constants; F_1 is the total vertical edge length; F_2 is the number of edges pointing upward; F_3 is the number of edges pointing horizontally; F_4 is the number of edge crossings; F_5 is the number of edge-node crossings; F_6 is the degree to which the layout height exceeds the height limit; and F_7 is the degree to which the layout width exceeds the width limit.

In order to experimentally compare our GA with the CLA in [1], the fitness function of our GA is tailored to the CLA as follows:

fitness value = span + weight × edge crossing + C_6 × F_6 + C_7 × F_7,

where the weight of the edge crossing number is adjusted in our experiments to represent the major issue which we want to discuss.

4. MAIN COMPONENTS OF OUR GA

Initialization: For each chromosome, we randomly assign the nodes to a ⌈√|V|⌉ × ⌈√|V|⌉ grid.

Selection: To evaluate the fitness value of each chromosome, we have to compute the number of edge crossings, which cannot be computed at this step because the routing of each edge is not yet determined. Hence, dummy nodes are introduced to determine the routing of the edges. Ideally, these dummy nodes would be placed at the relative positions giving the optimal number of edge crossings between adjacent layers; however, permuting the nodes on each layer for the fewest edge crossings is an NP-hard problem [4]. Hence, the barycenter algorithm (which is also used by the CLA) is applied to each chromosome to reduce edge crossings before selection. Next, the selection step is implemented by truncation selection, which duplicates the best (selection rate × population size) chromosomes (1 / selection rate) times to fill the entire population. In addition, we use a best-chromosome window to reserve some of the best chromosomes from the previous generations, as shown in Figure 6.

Figure 6: The selection process of our GA (the best-chromosome window is duplicated from the parent population into the child population).

Crossover: The main steps of our crossover process are as follows. (1) The two ordered parent chromosomes are called the 1st and the 2nd parent chromosome. W.l.o.g., we only describe how to generate the first child chromosome from the two parents; the other child is generated similarly. (2) Remove all dummy nodes from the two parent chromosomes. (3) Choose half of the nodes from each layer of the 1st parent chromosome and place them on the same relative layers of the child chromosome in the same horizontal order. (4) The relative positions of the remaining nodes are all taken from the 2nd parent chromosome. Specifically, we repeatedly choose a node adjacent to the smallest number of unplaced nodes until all nodes are placed; if there are several candidate nodes, we choose one at random. The layer of the chosen node is equal to its base layer plus its relative layer, where the base layer is the average of the layers of its placed connected nodes in the child chromosome, and the relative layer is its layer position relative to its placed connected nodes in the 2nd parent chromosome. (5) The layers of the new child chromosome are renumbered so that they start from layer 1.

Mutation: In the mutated chromosome, a node is chosen at random, and then the position of the chosen node is altered randomly.

Termination: If the difference between the average fitness values of successive generations over the latest ten generations is at most 1% of the average fitness value of these ten generations, then our GA stops. Then the best chromosome of the latest population is chosen, and its corresponding graph layout (including dummy nodes at barycenter positions) is drawn.

Fine Tune: Before the selection step or after the termination step, we can further tune good chromosomes according to the fitness function. For example, we remove all layers that contain only dummy nodes and no normal nodes, called dummy layers. Such a process does not necessarily worsen the edge crossing number, but it improves the span number. In addition, some unnecessary dummy nodes on each edge can be removed after the termination step, where a so-called unnecessary dummy node is one that can be removed without causing new edge crossings or worsening the fitness value.

5. EXPERIMENTAL RESULTS

To evaluate the performance of our algorithm, it is experimentally compared with the CLA (combining the first two steps of Sugiyama's algorithm) using MST_MIN_AVG as the distance function [1], as mentioned in the previous sections. For convenience, the CLA using the MST_MIN_AVG distance function is called the L_M algorithm (Leveling with MST_MIN_AVG). The L_M algorithm (for steps 1 and 2) together with the barycenter algorithm (for step 3) can replace the first three steps of Sugiyama's algorithm. In order to compare with our GA (for steps 1, 2, and 3), we consider the algorithm combining the L_M algorithm and the barycenter algorithm, called the LM_B algorithm throughout the rest of this paper.

Note that the x-coordinate assignment problem (step 4) is solved by the priority layout method in our experiments; in fact, this step affects neither the span number nor the edge crossing number. In addition, the second step of Sugiyama's algorithm (layer assignment) is an NP-hard problem when the width of the layered layout is restricted. Hence, we investigate both the case where the width of the layered layout is limited and the case where it is not.

5.1. Experimental Environment

All experiments were run on a 2.0 GHz dual-core laptop with 2 GB memory on the Java 6.0 platform from Sun Microsystems, Inc. The parameters of our GA are as follows: population size: 100; max generation: 100; selection rate: 0.7; best-chromosome window size ratio: 0.2; mutation probability: 0.2; C_6: 500; C_7: 500; fitness value = span + weight × edge crossing + C_6 × F_6 + C_7 × F_7.

5.2. Unlimited Layout Width

Because it is necessary to limit the layout width and height for the L_M algorithm, we set both the width and the height limit to 30. This implies that there are at most 30 nodes (dummy nodes excluded) on each layer and at most 30 layers in each layout. If we let the maximal node number be 30 in our experiment, then the range for node distribution is effectively unlimited. In our experiments, we consider a graph with 30 nodes under three different densities (2%, 5%, 10%), where the density is the ratio of the edge number to the number of all possible edges, i.e., density = edge number / (|V|(|V| - 1)/2). Let the weight ratio of edge crossing to span be denoted by α. In our experiments, we consider five different α values: 1, 3, 5, 7, 9. The statistics of the experimental results are given in Table 1.

Consider an example of a 30-node graph with 5% density. The layered layouts by the LM_B algorithm and by our algorithm under α = 1 and α = 9 are shown in Figure 7, Figure 8(a), and Figure 8(b), respectively. Obviously, our algorithm performs better than the LM_B.

5.3. Limited Layout Width

The input graph used in this subsection is the same as in the previous subsection (i.e., a 30-node graph). The limited width is set to 5, which is smaller
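The truncation selection described in Section 4 amounts to keeping the best selection-rate fraction of the population and replicating it to full size. A minimal sketch, with our own naming and assuming fitness is a penalty to be minimized, as in the paper:

```python
def truncation_selection(population, fitness, selection_rate):
    """Duplicate the best (selection_rate * population size) chromosomes
    (1 / selection_rate) times to refill the population.
    Lower fitness (penalty) is better."""
    n = len(population)
    k = max(1, int(selection_rate * n))   # number of survivors
    best = sorted(population, key=fitness)[:k]
    copies = -(-n // k)                   # ceil(n / k), i.e. about 1 / selection_rate
    return (best * copies)[:n]
```

With the paper's selection rate of 0.7 and population size 100, the best 70 chromosomes survive and the population is topped up with duplicates of the best 30 of them.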
than the square root of the node number (30), because we want the results under the limited and unlimited conditions to differ noticeably. The statistics of the experimental results under the same settings as in the previous subsection are given in Table 2.

Consider an example of a 30-node graph with 5% density. The layered layouts for this graph by the LM_B algorithm and by our algorithm under α = 1 and α = 9 are shown in Figure 9, Figure 10(a), and Figure 10(b), respectively. Obviously, our algorithm also performs better than the LM_B.

Table 1: The results after redrawing random graphs with 30 nodes and unlimited layout width.

  method        measure        density=2%  density=5%  density=10%
  LM_B          span                30.00      226.70       798.64
                crossing             4.45       57.90       367.00
                running time       61.2ms     151.4ms      376.8ms
  our GA  α=1   span                30.27      253.93       977.56
                crossing             0.65       38.96       301.75
          α=3   span                31.05      277.65      1338.84
                crossing             0.67       32.00       272.80
          α=5   span                30.78      305.62      1280.51
                crossing             0.67       29.89       218.45
          α=7   span                32.24      329.82      1359.46
                crossing             0.75       26.18       202.53
          α=9   span                31.65      351.36      1444.27
                crossing             0.53       24.89       200.62
                running time        3.73s      17.32s      108.04s

Table 2: The results after redrawing random graphs with 30 nodes and layout width limited to 5.

  method        measure        density=2%  density=5%  density=10%
  LM_B          span                28.82      271.55       808.36
                crossing             5.64       59.09       383.82
                running time       73.0ms     147.6ms      456.2ms
  our GA  α=1   span                32.29      271.45      1019.56
                crossing             0.96       39.36       292.69
          α=3   span                31.76      294.09      1153.60
                crossing             0.80       33.16       232.76
          α=5   span                31.82      322.69      1282.24
                crossing             0.82       30.62       202.31
          α=7   span                32.20      351.00      1369.73
                crossing             0.69       27.16       198.20
          α=9   span                33.55      380.20      1420.31
                crossing             0.89       24.95       189.25
                running time       3.731s       3.71s       18.07s

Figure 7: Layered layout by LM_B (span: 262, crossing: 38).
Figure 8: Layered layouts by our GA: (a) α = 1 (span: 188, crossing: 30); (b) α = 9 (span: 238, crossing: 14).

5.4. Discussion

Due to the page limitation, only the case of 30-node graphs is included in this paper; in fact, we conducted many experiments on various graphs. Those results, tables, and figures show that under any conditions (node number, edge density, and limited width or not), the crossing number achieved by our GA is smaller than that by LM_B. However, the span number by our GA is not necessarily larger than that by LM_B: when the layout width is limited and the node number is sufficiently small (about 20, from our experimental evaluation), our GA may produce span and edge crossing numbers that are simultaneously both smaller than those by LM_B.

Moreover, we observed that under all conditions the edge crossing number decreases and the span number increases when the weight of edge crossing is increased. This implies that we can effectively trade off edge crossings against spans; that is, we can reduce the number of edge crossings by increasing the span number.

Under the limited width condition, because the results of L_M are restricted, its span number should be larger than under the unlimited condition. However, there are some unusual situations in our GA: although its results are also restricted under the limited width condition, its span number is smaller than under the unlimited width condition. Our explanation is that the limited width condition may reduce the possible dimension. In
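The span/crossing trade-off discussed above can be checked against the numbers reported in the captions of Figure 8. Under the tailored fitness value span + α × crossing (the width/height penalties F6 and F7 are zero for these layouts, so they are omitted here), layout (a) (span 188, 30 crossings) is preferred at α = 1, while layout (b) (span 238, 14 crossings) is preferred at α = 9:

```python
def fitness(span, crossings, alpha):
    # Tailored fitness of Section 5 with F6 = F7 = 0 (no size-limit penalty).
    return span + alpha * crossings

layout_a = (188, 30)   # Figure 8(a), obtained with alpha = 1
layout_b = (238, 14)   # Figure 8(b), obtained with alpha = 9

assert fitness(*layout_a, 1) < fitness(*layout_b, 1)   # 218 < 252: (a) preferred
assert fitness(*layout_a, 9) > fitness(*layout_b, 9)   # 458 > 364: (b) preferred
```

This confirms that raising α buys fewer crossings at the cost of a larger span, exactly the adjustment knob the experiments vary.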
