With the growing deployment of distributed computer systems (DCSs) in networked industrial and manufacturing applications on the World Wide Web (WWW), including service-oriented architectures and QoS-aware Web of Things systems, predicting Web performance has become important. In this paper, we present Web performance prediction over time by forecasting the download of a Web resource using an efficient variant of the Turning Bands (TB) geostatistical simulation method. Real-life data for the research were obtained from our own website, "Distributed forecasting system", which generates log files and monitors a group of Web clients on a connected LAN. For better Web prediction we applied a spatio-temporal method to the download times of a particular file and computed forecasts with the Turning Bands method; to improve accuracy further, the efficient Turning Bands variant incorporates a Naive Bayes algorithm. The results were compared against the plain Turning Bands method, and the efficient Turning Bands method showed good forecasting quality for Web performance prediction.
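The abstract above combines the Turning Bands idea with a Naive Bayes step. The paper's actual method and parameters are not given here; as a rough illustration of the Turning Bands principle only, the following hypothetical Python sketch builds a 2-D Gaussian random field by averaging independent 1-D random-phase processes simulated along randomly oriented lines (all names and constants are illustrative):

```python
import math
import random

def turning_bands_2d(points, n_lines=64, n_harmonics=16, seed=0):
    """Turning Bands sketch: a 2-D Gaussian random field is approximated
    by averaging independent 1-D random processes simulated along
    randomly oriented lines through the origin."""
    rng = random.Random(seed)
    values = [0.0] * len(points)
    for _ in range(n_lines):
        theta = rng.uniform(0.0, math.pi)          # random line direction
        ux, uy = math.cos(theta), math.sin(theta)
        # 1-D process on the line: random-phase cosine expansion
        freqs = [rng.gauss(0.0, 1.0) for _ in range(n_harmonics)]
        phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(n_harmonics)]
        for i, (x, y) in enumerate(points):
            t = x * ux + y * uy                    # projection onto the line
            s = sum(math.cos(f * t + p) for f, p in zip(freqs, phases))
            values[i] += s * math.sqrt(2.0 / n_harmonics)  # unit variance per line
    # average over lines so the field variance stays O(1)
    return [v / math.sqrt(n_lines) for v in values]

grid = [(x * 0.5, y * 0.5) for x in range(8) for y in range(8)]
field = turning_bands_2d(grid)
```

A forecasting pipeline like the paper's would then fit such a simulated spatio-temporal field to observed download times; that fitting step is not shown here.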
AN OPEN JACKSON NETWORK MODEL FOR HETEROGENEOUS INFRASTRUCTURE AS A SERVICE O... (IJCNCJournal)
Cloud computing is an environment which provides services on user demand, such as software, platform, and infrastructure. Applications deployed on cloud computing have become more varied and complex to adapt to growing end-user numbers and fluctuating workloads. One notable characteristic of cloud computing is the heterogeneity of networks, hosts, and virtual machines (VMs). There have been many studies on cloud computing modeling based on queueing theory, but most have focused on homogeneous systems. In this study, we propose a cloud computing model based on an open Jackson network for multi-tier application systems deployed on heterogeneous VMs of IaaS cloud computing. Important metrics are analyzed in our experiments, such as the mean waiting time, the mean number of requests, and the throughput of the system. In addition, the model's metrics are used to adjust the number of VMs allocated to applications. Experimental results show that the open queueing network provides high efficiency.
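Jackson's theorem, which the model above builds on, lets each node of an open network be analyzed as an independent M/M/1 queue once the traffic equations are solved. A minimal sketch with made-up rates (not the paper's experimental values):

```python
def jackson_metrics(gamma, P, mu):
    """Open Jackson network sketch: solve the traffic equations
    lambda_i = gamma_i + sum_j lambda_j * P[j][i] by fixed-point
    iteration, then apply the per-node M/M/1 formulas."""
    n = len(gamma)
    lam = gamma[:]
    for _ in range(1000):  # converges when the network is stable
        lam = [gamma[i] + sum(lam[j] * P[j][i] for j in range(n))
               for i in range(n)]
    out = []
    for i in range(n):
        rho = lam[i] / mu[i]
        assert rho < 1.0, "node %d is unstable" % i
        L = rho / (1.0 - rho)          # mean number of requests at node i
        W = L / lam[i]                 # mean sojourn time (Little's law)
        out.append({"lambda": lam[i], "rho": rho, "L": L, "W": W})
    return out

# hypothetical two-tier system: web tier feeds the app tier,
# and the app tier sends 20% of jobs back to the web tier
metrics = jackson_metrics(
    gamma=[2.0, 0.0],          # external arrivals (req/s) per tier
    P=[[0.0, 1.0],
       [0.2, 0.0]],            # routing probabilities between tiers
    mu=[5.0, 4.0])             # service rates per tier
```

For these rates both tiers settle at an effective arrival rate of 2.5 req/s; in a heterogeneous setting, comparing each node's W against a target delay is one way to decide how many VMs a tier needs.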
THE DEVELOPMENT AND STUDY OF THE METHODS AND ALGORITHMS FOR THE CLASSIFICATIO... (IJCNCJournal)
This paper presents the results of research that has allowed us to develop a hybrid approach to the processing, classification, and control of traffic routes. The approach makes it possible to identify traffic flows in a virtual data center in real time. Our solution is based on data mining and machine learning methods, which classify traffic more accurately according to a larger set of criteria and parameters. As a practical result, the paper presents an algorithmic solution for classifying the traffic flows of cloud applications and services, embodied in a module for the controller of a software-defined network. This solution increases the efficiency of handling user requests to cloud applications and reduces the response time, which has a positive effect on the quality of service in the network of the virtual data center.
Performing initiative data prefetching (Kamal Spring)
Abstract: This paper presents an initiative data prefetching scheme on the storage servers of distributed file systems for cloud computing. In this prefetching technique, the client machines are not substantially involved in the prefetching process; instead, the storage servers prefetch data directly after analyzing the history of disk I/O access events and then proactively send the prefetched data to the relevant client machines. To put this technique to work, information about client nodes is piggybacked onto real client I/O requests and forwarded to the relevant storage server. Two prediction algorithms are then proposed to forecast future block access operations and direct what data should be fetched on the storage servers in advance. Finally, the prefetched data are pushed from the storage server to the relevant client machine. Through a series of evaluation experiments with a collection of application benchmarks, we demonstrate that the proposed initiative prefetching technique helps distributed file systems for cloud environments achieve better I/O performance. In particular, configuration-limited client machines in the cloud are not responsible for predicting I/O access operations, which contributes to better system performance on them.
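The paper's two prediction algorithms are not reproduced here, but a common baseline for this kind of server-side access prediction is a first-order Markov model over observed block IDs. A hypothetical sketch (class and method names are illustrative):

```python
from collections import defaultdict, Counter

class BlockPredictor:
    """Server-side access prediction sketch: record (previous, current)
    block transitions from the I/O history, then prefetch the most
    frequent successor of the last accessed block."""
    def __init__(self):
        self.successors = defaultdict(Counter)
        self.last = None

    def observe(self, block_id):
        # called by the storage server for every client I/O request
        if self.last is not None:
            self.successors[self.last][block_id] += 1
        self.last = block_id

    def predict_next(self):
        # candidate block to push proactively to the client
        if self.last is None or not self.successors[self.last]:
            return None
        return self.successors[self.last].most_common(1)[0][0]

p = BlockPredictor()
for b in [1, 2, 3, 1, 2, 3, 1, 2]:   # repeating access pattern
    p.observe(b)
# block 2 was always followed by block 3, so 3 is prefetched next
```

The real scheme operates per client, using the client identity piggybacked onto each request to decide which machine receives the pushed data.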
A Multipath Connection Model for Traffic Matrices (IJERA Editor)
Peer-to-Peer (P2P) applications have witnessed increasing popularity in recent years, which brings new challenges to network management and traffic engineering (TE). As basic input information, P2P traffic matrices are of significant importance for TE, but direct measurement is excessively costly. In this paper, we propose a multipath connection model for traffic matrices in operational networks. Media files are shared peer to peer, and the model captures the localization ratio of P2P traffic. We evaluate its performance using traffic traces collected from real P2P video-on-demand and file-sharing applications. The estimated general traffic matrices (TM) are then used to deliver media files without creating extra traffic: when a file is shared this way, no additional source-to-destination traffic occurs, giving high performance and short processing time.
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO... (cscpconf)
For performing distributed data mining, two approaches are possible: first, data from several sources are copied to a data warehouse and mining algorithms are applied there; secondly, mining can be performed at the local sites and the results aggregated. When the number of features is high, a lot of bandwidth is consumed in transferring datasets to a centralized location. To address this, dimensionality reduction can be performed at the local sites. In dimensionality reduction, an encoding is applied to the data to obtain a compressed form. The reduced features obtained at the local sites are then aggregated, and data mining algorithms are applied to them. There are several methods of dimensionality reduction; two of the most important are Discrete Wavelet Transforms (DWT) and Principal Component Analysis (PCA). Here, a detailed study is made of how PCA can reduce data flow across a distributed network.
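As a self-contained illustration of the PCA step described above (not the paper's implementation), the sketch below centres the data, forms the covariance matrix, and extracts the dominant eigenvector by power iteration; transmitting only the scalar projections onto that vector is the bandwidth saving the text describes:

```python
def first_principal_component(data, iters=200):
    """PCA sketch without external libraries: dominant eigenvector of
    the sample covariance matrix via power iteration, plus the 1-D
    projections (scores) that a local site would transmit."""
    n, d = len(data), len(data[0])
    means = [sum(row[k] for row in data) / n for k in range(d)]
    centred = [[row[k] - means[k] for k in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                      # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # compress each d-dimensional row to a single scalar projection
    scores = [sum(r[k] * v[k] for k in range(d)) for r in centred]
    return v, scores

# points lying almost on the line y = 2x: one component captures them
data = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.0]]
direction, scores = first_principal_component(data)
```

In a distributed setting, each site would compute such components locally and ship only the scores (and the component vector) to the aggregation site instead of the raw features.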
A Review - Synchronization Approaches to Digital Systems (IJERA Editor)
Synchronization is a prime requirement in digital systems. As new devices move toward providing higher service levels, advanced distributed systems are being integrated onto single platforms for greater service provision. However, with the integration of large processing units, distributed processing needs high-level synchronization with minimum processing overhead. The synchronization issue has been addressed by various approaches. This paper presents a brief review of developments in synchronization approaches for digital systems operating in distributed mode.
RSDC (Reliable Scheduling Distributed in Cloud Computing) (IJCSEA Journal)
In this paper we present a reliable scheduling algorithm for cloud computing environments. The algorithm uses a new technique based on classification, considering the request and acknowledge times of jobs in a qualification function. Evaluating previous algorithms, we find that job scheduling has been performed using parameters associated with a failure rate. In the proposed algorithm, in addition to the previous parameters, other important parameters are used, so jobs can be scheduled differently based on them. The work is associated with the following mechanism: the major job is divided into sub-jobs; to balance the jobs, the request and acknowledge times are calculated separately; the schedule for each job is then created by computing the request and acknowledge times in the form of a shared job. As a result, the efficiency of the system is increased, the real-time behavior of this algorithm improves over other algorithms, and the total processing time in cloud computing is reduced in comparison with them.
RESOURCE ALLOCATION METHOD FOR CLOUD COMPUTING ENVIRONMENTS WITH DIFFERENT SE... (IJCNCJournal)
In a cloud computing environment with multiple data centers over a wide area, it is highly likely that each data center provides different service quality to users at different locations. It is also necessary to consider nodes at the edge of the network (the local cloud), which support applications such as the IoT that require low latency and location awareness. The authors previously proposed a joint multiple resource allocation method for a cloud computing environment consisting of multiple data centers, each with a different network delay. However, the existing method does not account for cases where requests requiring a short network delay occur more often than expected. Moreover, it does not account for service processing time in the data centers, and therefore cannot provide optimal resource allocation when the total processing time (both network delay and service processing time in a data center) must be considered.
Harnessing the cloud for securely outsourcing large scale systems of linear e... (Muthu Samy)
The Impact of Data Replication on Job Scheduling Performance in Hierarchical ... (graphhoc)
In data-intensive applications, data transfer is a primary cause of job execution delay, and data access time depends on bandwidth. The major bottleneck to fast data access in Grids is the high latency of Wide Area Networks and the Internet. Effective scheduling can reduce the amount of data transferred across the Internet by dispatching a job to where the needed data are present. Another solution is a data replication mechanism; the objective of dynamic replica strategies is to reduce file access time, which in turn reduces job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve data access efficiency. We study our approach and evaluate it through simulation. The results show that our algorithm improves by 12% over current strategies.
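HRS itself is not specified in the abstract; the following hypothetical sketch illustrates only the underlying scheduling idea of dispatching a job to the site where moving its missing input files costs least (names and numbers are illustrative):

```python
def schedule_job(job_files, sites, bandwidth):
    """Data-aware scheduling sketch: for each site, estimate the cost
    of transferring the job's missing input files there, and dispatch
    the job to the cheapest site.

    job_files: {file_name: size_in_MB} the job needs
    sites:     {site_name: set of file names already replicated there}
    bandwidth: {site_name: achievable transfer rate in MB/s}
    """
    def transfer_cost(site):
        missing = [size for name, size in job_files.items()
                   if name not in sites[site]]
        return sum(missing) / bandwidth[site]
    return min(sites, key=transfer_cost)

sites = {"siteA": {"a.dat"}, "siteB": {"a.dat", "b.dat"}}
bandwidth = {"siteA": 100.0, "siteB": 10.0}
job = {"a.dat": 500.0, "b.dat": 200.0}        # file sizes in MB
best = schedule_job(job, sites, bandwidth)     # siteB holds all inputs
```

A replication strategy like HRS complements this policy by pre-placing popular files so that more sites reach a transfer cost of zero.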
DYNAMIC ASSIGNMENT OF USERS AND MANAGEMENT OF USER’S DATA IN SOCIAL NETWORK (ijiert bestjournal)
The issue of dynamically assigning users to servers has been studied widely, and the proposed system substantiates a solution to this problem. The system, Dynamic Assignment of Users to Servers in Social Networking, uses a community analysis algorithm (CAA) to resolve the extra load on the servers. The proposed system can keep inter-server communication below the required level in a distributed environment (an online social network). Potential servers are assessed first, and then the proper assignment is made. The CAA calculates a communication degree (CD) for each user; on the basis of this CD parameter, different communities are formed, and the CD captures the relation between communication data. Along with the features mentioned above, the proposed system also provides a user data management feature, which keeps currently active users linked to the same event in one virtual group. In this way, the proposed system handles the overload of the distributed servers and manages user data efficiently, giving more convenience to the users as well as to the system.
The advent of Big Data has seen the emergence of new processing and storage challenges, which are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling are important determinants of the performance of parallel applications; hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: intra-scheduler load balancing, to avoid use of the large-scale communication network, and inter-scheduler load balancing, for load regulation of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We focus on three performance indicators: response time, process latency, and running time of MapReduce tasks.
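The two-level scheme described above can be sketched as follows. This is a deliberately simplified illustration (not the CLOAK-Reduce algorithm): loads are abstract work units, intra-cluster balancing is free, and inter-cluster transfers happen only when the imbalance exceeds a threshold, modelling the cost of wide-area communication:

```python
def balance(clusters, threshold=0.2):
    """Hierarchical load-balancing sketch: balance nodes inside each
    cluster first (intra-scheduler), then move work between clusters
    (inter-scheduler) only when the relative imbalance exceeds the
    threshold, keeping wide-area communication rare."""
    # intra level: even out node loads within each cluster locally
    for nodes in clusters:
        avg = sum(nodes) / len(nodes)
        for i in range(len(nodes)):
            nodes[i] = avg
    # inter level: rebalance cluster totals only on large imbalance
    totals = [sum(nodes) for nodes in clusters]
    avg_total = sum(totals) / len(totals)
    for k, nodes in enumerate(clusters):
        if abs(totals[k] - avg_total) / avg_total > threshold:
            shift = (avg_total - totals[k]) / len(nodes)
            for i in range(len(nodes)):
                nodes[i] += shift
    return clusters

# one overloaded two-node cluster and one idle two-node cluster
clusters = balance([[10.0, 2.0], [1.0, 1.0]])
```

Total work is conserved; only the second phase would generate inter-scheduler traffic in a real deployment.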
Parallel and Distributed System IEEE 2015 Projects (Vijay Karan)
A list of Parallel and Distributed System IEEE 2015 projects. It contains the IEEE projects in the Parallel and Distributed System domain for the year 2015.
A BAYE'S THEOREM BASED NODE SELECTION FOR LOAD BALANCING IN CLOUD ENVIRONMENT (hiij)
Cloud computing is a popular computing model, as it serves a large number of user requests on the fly, which has led to a proliferation of cloud users. This in turn has led to overloaded nodes in the cloud environment, along with load imbalance among the cloud servers, and thereby impacts performance. Hence, in this paper a heuristic Bayes' theorem approach is combined with clustering to identify the optimal node for load balancing. Experiments using the proposed approach are carried out on the CloudSim simulator and compared with an existing approach. Results demonstrate that task deployment performed using this approach improves performance in terms of utilization and throughput when compared to existing approaches.
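The core of a Bayes' theorem node selection is computing P(node | observation) ∝ P(observation | node) · P(node) and picking the node with the highest posterior. A minimal sketch with illustrative probabilities (not the paper's trained values):

```python
def select_node(prior, likelihood, observation):
    """Bayes' theorem sketch for node selection: the posterior of each
    node given the observed load level is proportional to
    likelihood * prior; the highest-posterior node is chosen."""
    unnorm = {n: prior[n] * likelihood[n][observation] for n in prior}
    total = sum(unnorm.values())
    posterior = {n: v / total for n, v in unnorm.items()}
    return max(posterior, key=posterior.get), posterior

prior = {"node1": 0.5, "node2": 0.5}          # equal initial belief
likelihood = {                                # P(observed load | node)
    "node1": {"low": 0.7, "high": 0.3},
    "node2": {"low": 0.2, "high": 0.8},
}
best, post = select_node(prior, likelihood, "low")   # favours node1
```

In the paper's setting the candidate set would first be narrowed by clustering the nodes, and the priors and likelihoods would come from observed utilization data.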
Differentiating Algorithms of Cloud Task Scheduling Based on Various Parameters (iosrjce)
Cloud computing is a new design structure for large, distributed data centers. Cloud computing systems promise end users a "pay as you go" model. To meet the expected quality requirements of users, cloud computing needs to offer differentiated services, and QoS differentiation is very important to satisfy users with different QoS requirements. In this paper, various QoS-based scheduling algorithms, their scheduling parameters, and the future scope of the discussed algorithms are studied. The paper summarizes various cloud scheduling algorithms, the findings of those algorithms, scheduling factors, types of scheduling, and the parameters considered.
Web Graph Clustering Using Hyperlink Structure (aciijournal)
Nowadays, information is useful in every environment, and time similarity is an especially important case. Most people are strongly interested in the Internet, where web pages are linked through hyperlinks that contain useful information. Using hyperlinks, web graphs are constructed for time-similar web links that users have visited in the past. These activities can be used to trace who used the websites, and for what, at a given time. This paper therefore provides a history of the users connected to the person who started a news item. We found that the normalized-cut method with the new similarity metric is particularly effective, as demonstrated on a web log file.
Map as a Service: A Framework for Visualising and Maximising Information Retu... (M H)
This paper presents a distributed information extraction and visualisation service, called the mapping service, for maximising information return from large-scale wireless sensor networks. Such a service would greatly simplify the production of higher-level, information-rich, representations suitable for informing other network services and the delivery of field information visualisations. The mapping service utilises a blend of inductive and deductive models to map sense data accurately using externally available knowledge. It utilises the special characteristics of the application domain to render visualisations in a map format that are a precise reflection of the concrete reality. This service is suitable for visualising an arbitrary number of sense modalities. It is capable of visualising from multiple independent types of the sense data to overcome the limitations of generating visualisations from a single type of sense modality. Furthermore, the mapping service responds dynamically to changes in the environmental conditions, which may affect the visualisation performance by continuously updating the application domain model in a distributed manner. Finally, a distributed self-adaptation function is proposed with the goal of saving more power and generating more accurate data visualisation. We conduct comprehensive experimentation to evaluate the performance of our mapping service and show that it achieves low communication overhead, produces maps of high fidelity, and further minimises the mapping predictive error dynamically through integrating the application domain model in the mapping service.
Graph data has recently arisen in many applications, and there is a need to manage such large amounts of data by performing various graph operations through graph search queries. Many approaches and algorithms serve this purpose but continually require improvement in stability and performance, and they are less efficient when large and complex data are involved. Applications need to execute faster to improve overall system performance and to support many advanced and complex operations. Shortest-path estimation is one of the key search queries in many applications. Here we present a system that finds the shortest path between nodes and contributes to system performance with the help of different shortest-path algorithms, such as bidirectional search and the A* algorithm; it takes a relational approach, using new standard SQL queries and exploiting the advantages of a relational database to solve the problem efficiently.
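The abstract names bidirectional search and A*. As a self-contained illustration of A* only (not the paper's SQL-based implementation; graph, coordinates, and weights are made up), the sketch below runs Dijkstra guided by an admissible straight-line heuristic:

```python
import heapq

def astar(graph, coords, src, dst):
    """A* sketch: uniform-cost search guided by the Euclidean distance
    to the destination. `graph` maps node -> [(neighbour, weight)];
    `coords` gives an (x, y) position per node for the heuristic."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[dst]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    g = {src: 0.0}
    frontier = [(h(src), src)]          # priority = g + h
    while frontier:
        f, u = heapq.heappop(frontier)
        if u == dst:
            return g[u]                 # heuristic is admissible: optimal
        if f > g[u] + h(u):             # stale queue entry, skip it
            continue
        for v, w in graph[u]:
            if v not in g or g[u] + w < g[v]:
                g[v] = g[u] + w
                heapq.heappush(frontier, (g[v] + h(v), v))
    return None                         # destination unreachable

# unit square with a costly diagonal: best A -> D path goes via B or C
graph = {"A": [("B", 1), ("C", 1), ("D", 3)],
         "B": [("A", 1), ("D", 1)],
         "C": [("A", 1), ("D", 1)],
         "D": [("B", 1), ("C", 1), ("A", 3)]}
coords = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}
```

A relational version, as in the paper, would keep the frontier and the `g` table as SQL relations and expand nodes with set-oriented queries instead of a heap.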
PAGE: A Partition Aware Engine for Parallel Graph Computation (1crore projects)
RSDC (Reliable Scheduling Distributed in Cloud Computing)IJCSEA Journal
In this paper we will present a reliable scheduling algorithm in cloud computing environment. In this algorithm we create a new algorithm by means of a new technique and with classification and considering request and acknowledge time of jobs in a qualification function. By evaluating the previous algorithms, we understand that the scheduling jobs have been performed by parameters that are associated with a failure rate. Therefore in the roposed algorithm, in addition to previous parameters, some other important parameters are used so we can gain the jobs with different scheduling based on these parameters. This work is associated with a mechanism. The major job is divided to sub jobs. In order to balance the jobs we should calculate the request and acknowledge time separately. Then we create the scheduling of each job by calculating the request and acknowledge time in the form of a shared job. Finally efficiency of the system is increased. So the real time of this algorithm will be improved in comparison with the other algorithms. Finally by the mechanism presented, the total time of processing in cloud computing is improved in comparison with the other algorithms.
RESOURCE ALLOCATION METHOD FOR CLOUD COMPUTING ENVIRONMENTS WITH DIFFERENT SE...IJCNCJournal
In a cloud computing environment with multiple data centers over a wide area, it is highly likely that each data center would provide the different service quality to users at different locations. It is also required to consider the nodes at the edge of the network (local cloud) which support applications such as IoTs that require low latency and location awareness. The authors proposed the joint multiple resource allocation method in a cloud computing environment that consists of multiple data centers and each data center provides the different network delay. However, the existing method does not take account of cases where requests that require a short network delay occur more than expected. Moreover, the existing method does not take account of service processing time in data centers and therefore cannot provide the optimal resource allocation when it is necessary to take the total processing time (both network delay and service processing time in a data center) into consideration in resource allocation.
Harnessing the cloud for securely outsourcing large scale systems of linear e... - Muthu Samy
The Impact of Data Replication on Job Scheduling Performance in Hierarchical ... - graphhoc
In data-intensive applications, data transfer is a primary cause of job execution delay, and data access time depends on bandwidth. The major bottleneck to supporting fast data access in Grids is the high latency of Wide Area Networks and the Internet. Effective scheduling can reduce the amount of data transferred across the Internet by dispatching a job to where the needed data are present. Another solution is to use a data replication mechanism; the objective of dynamic replica strategies is to reduce file access time, which in turn reduces job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve data access efficiency. We study our approach and evaluate it through simulation. The results show that our algorithm improves on current strategies by 12%.
DYNAMIC ASSIGNMENT OF USERS AND MANAGEMENT OF USER'S DATA IN SOCIAL NETWORK - ijiert bestjournal
The issue of dynamically assigning users to servers has been studied widely, and the proposed system substantiates a solution to this problem. The system dynamically assigns users to servers in a social network using a community analysis algorithm (CAA) to resolve the extra load on the servers. The proposed system can keep inter-server communication below the required level in a distributed environment (an online social network). Potential servers are assessed first, and then the proper assignment is made. The CAA calculates a communication degree (CD) for each user, and different communities are formed on the basis of this CD parameter; the CD captures the relationships in the communication data. Along with the features mentioned above, the proposed system also provides a user data management feature, which keeps currently active users linked to the same event in one virtual group. In this way the proposed system handles the overload of the distributed servers and manages user data efficiently, giving more convenience to both the users and the system.
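The communication-degree idea behind the CAA can be sketched in a few lines. This is only an illustration under assumptions: the abstract does not define how CD is computed, so the message format, the names `communication_degree` and `form_communities`, and the bucketing rule for forming communities are all hypothetical.

```python
from collections import defaultdict

def communication_degree(messages):
    """Compute a communication degree (CD) per user: here, simply the
    number of messages each user sent or received.

    `messages` is a list of (sender, receiver) pairs.
    """
    cd = defaultdict(int)
    for sender, receiver in messages:
        cd[sender] += 1
        cd[receiver] += 1
    return dict(cd)

def form_communities(cd, bucket_size=10):
    """Group users whose CD falls in the same bucket into one community,
    a stand-in for the paper's CD-based community formation."""
    communities = defaultdict(list)
    for user, degree in cd.items():
        communities[degree // bucket_size].append(user)
    return dict(communities)
```

Users with similar communication intensity end up in the same community, which is one plausible reading of "different communities are formed on the basis of this CD parameter".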
The advent of Big Data has seen the emergence of new processing and storage challenges, which are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling are an important step in determining the performance of parallel applications; hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: intra-scheduler load balancing, to avoid use of the large-scale communication network, and inter-scheduler load balancing, for load regulation of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We focus on three performance indicators, namely response time, process latency, and running time of MapReduce tasks.
A BAYE'S THEOREM BASED NODE SELECTION FOR LOAD BALANCING IN CLOUD ENVIRONMENT - hiij
Cloud computing is a popular computing model, as it serves a large number of user requests on the fly, which has led to a proliferation of cloud users. This in turn has led to overloaded nodes in the cloud environment and to load imbalance among the cloud servers, which degrades performance. Hence, this paper applies a heuristic Bayes' theorem approach together with clustering to identify the optimal node for load balancing. Experiments using the proposed approach are carried out on the CloudSim simulator and compared with an existing approach. The results demonstrate that task deployment performed using this approach improves utilization and throughput compared with existing approaches.
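The Bayes'-theorem scoring this abstract alludes to can be sketched minimally. The paper does not give its formulation, so the probability inputs, the function names, and the "pick the highest posterior" rule below are illustrative assumptions only.

```python
def bayes_node_score(prior, likelihood, evidence):
    """Posterior P(node is optimal | observed load), by Bayes' theorem:
    P(optimal | load) = P(load | optimal) * P(optimal) / P(load)."""
    return prior * likelihood / evidence

def select_node(nodes):
    """Pick the node with the highest posterior score.

    `nodes` maps a node id to its (prior, likelihood, evidence) triple,
    e.g. estimated from historical load observations.
    """
    return max(nodes, key=lambda n: bayes_node_score(*nodes[n]))
```

In a real system the triples would come from monitored load statistics per cluster (the clustering step mentioned in the abstract); here they are supplied directly.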
Differentiating Algorithms of Cloud Task Scheduling Based on Various Parameters - iosrjce
Cloud computing is a new design structure for large, distributed data centers, and cloud systems promise end users a "pay as you go" model. To meet the quality expectations of users, cloud computing needs to offer differentiated services: QoS differentiation is essential to satisfy users with different QoS requirements. This paper studies various QoS-based scheduling algorithms, their scheduling parameters, and the future scope of the discussed algorithms, and summarizes the findings, scheduling factors, types of scheduling, and parameters considered for each algorithm.
Web Graph Clustering Using Hyperlink Structure - aciijournal
Information is useful in every environment, and temporal similarity is an especially important case. Most people are strongly interested in the Internet, where web pages are linked through hyperlinks that carry useful information. Using these hyperlinks, web graphs are constructed over temporally similar web links that users have visited in the past. Such graphs can be used to trace who used which websites at a given time, so this paper recovers the history of users connected to the person who started a news item. We found that the normalized-cut method with the new similarity metric is particularly effective, as demonstrated on a web log file.
JAVA 2013 IEEE DATAMINING PROJECT Distributed web systems performance forecas... - IEEEGLOBALSOFTTECHNOLOGIES
Map as a Service: A Framework for Visualising and Maximising Information Retu... - M H
This paper presents a distributed information extraction and visualisation service, called the mapping service, for maximising information return from large-scale wireless sensor networks. Such a service would greatly simplify the production of higher-level, information-rich, representations suitable for informing other network services and the delivery of field information visualisations. The mapping service utilises a blend of inductive and deductive models to map sense data accurately using externally available knowledge. It utilises the special characteristics of the application domain to render visualisations in a map format that are a precise reflection of the concrete reality. This service is suitable for visualising an arbitrary number of sense modalities. It is capable of visualising from multiple independent types of the sense data to overcome the limitations of generating visualisations from a single type of sense modality. Furthermore, the mapping service responds dynamically to changes in the environmental conditions, which may affect the visualisation performance by continuously updating the application domain model in a distributed manner. Finally, a distributed self-adaptation function is proposed with the goal of saving more power and generating more accurate data visualisation. We conduct comprehensive experimentation to evaluate the performance of our mapping service and show that it achieves low communication overhead, produces maps of high fidelity, and further minimises the mapping predictive error dynamically through integrating the application domain model in the mapping service.
Graph data has recently arisen in many applications, and such large amounts of data must be managed by performing various graph operations through graph search queries. Many approaches and algorithms serve this purpose, but they continually require improvement in stability and performance and become less efficient when large, complex data is involved. Applications need to execute faster to improve overall system performance and to support many advanced and complex operations. Shortest-path estimation is one of the key search queries in many applications. Here we present a system that finds the shortest path between nodes using shortest-path algorithms such as bidirectional search and A*, taking a relational approach with standard SQL queries and exploiting the advantages of a relational database to solve the problem efficiently.
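Of the algorithms this abstract names, A* is the easiest to sketch compactly. The following is a generic textbook A*, not the paper's SQL-based system; the adjacency-dict graph encoding and function signature are my own choices.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* shortest path on a weighted digraph given as
    {node: [(neighbour, edge_cost), ...]}.

    `h` is an admissible heuristic (h(n) <= true cost from n to goal);
    h = lambda n: 0 degenerates to Dijkstra's algorithm.
    """
    # Frontier entries: (f = g + h, g = cost so far, node, path taken).
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(
                    frontier, (new_cost + h(nxt), new_cost, nxt, path + [nxt])
                )
    return None  # goal unreachable
```

A relational implementation of the same search would keep the frontier in a table and expand it with joins against an edge table, which is roughly what the abstract's "standard SQL queries" approach amounts to.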
PAGE: A Partition Aware Engine for Parallel Graph Computation - 1crore projects
On Traffic-Aware Partition and Aggregation in Map Reduce for Big Data Applica... - dbpublications
The MapReduce programming model simplifies large-scale data processing on commodity clusters by exploiting parallel map tasks and reduce tasks. Although many efforts have been made to improve the performance of MapReduce jobs, they ignore the network traffic generated in the shuffle phase, which plays a critical role in performance enhancement. Traditionally, a hash function is used to partition intermediate data among reduce tasks, which, however, is not traffic-efficient because network topology and the data size associated with each key are not taken into consideration. In this paper, we study how to reduce network traffic cost for a MapReduce job by designing a novel intermediate data partition scheme. Furthermore, we jointly consider the aggregator placement problem, where each aggregator can reduce merged traffic from multiple map tasks. A decomposition-based distributed algorithm is proposed to deal with the large-scale optimization problem for big data applications, and an online algorithm is also designed to adjust data partition and aggregation in a dynamic manner. Finally, extensive simulation results demonstrate that our proposals can significantly reduce network traffic cost in both offline and online cases.
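The intuition behind a traffic-aware partitioner, as opposed to a plain hash partitioner, can be sketched as follows. This is not the paper's decomposition-based algorithm: the data structures and the greedy "place each key where most of its data already sits" rule are simplifying assumptions for illustration.

```python
def traffic_aware_partition(key_locations, reducers):
    """Assign each intermediate key to the reducer running on the node
    that already holds the most of that key's data, so less of it has
    to cross the network during the shuffle.

    `key_locations` maps key -> {node: bytes of intermediate data there};
    `reducers` maps reducer id -> node it runs on.
    """
    assignment = {}
    for key, per_node in key_locations.items():
        # Greedy: the reducer co-located with the largest share of the
        # key's data minimizes shuffle traffic for that key in isolation.
        assignment[key] = max(reducers, key=lambda r: per_node.get(reducers[r], 0))
    return assignment
```

A hash partitioner would ignore `key_locations` entirely; a production scheme would also balance reducer load, which this single-key greedy rule deliberately omits.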
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTES - Subhajit Sahu
https://gist.github.com/wolfram77/54c4a14d9ea547183c6c7b3518bf9cd1
There exist a number of dynamic graph generators. The Barabasi-Albert model iteratively attaches new vertices to pre-existing vertices in the graph using preferential attachment (edges to high-degree vertices are more likely: rich get richer, the Pareto principle). However, graph size increases monotonically, and the density of the graph keeps increasing (sparsity decreasing).
Gorke's model uses a defined clustering to uniformly add vertices and edges. Purohit's model uses motifs (e.g. triangles) to mimic properties of existing dynamic graphs, such as growth rate, structure, and degree distribution. Kronecker graph generators are used to increase the size of a given graph, with a power-law distribution.
To generate dynamic graphs, we must choose a metric to compare two graphs. Common metrics include diameter, clustering coefficient (modularity?), triangle counting (triangle density?), and degree distribution.
In this paper, the authors propose DyGraph, a dynamic graph generator that uses degree distribution as the only metric. The authors observe that many real-world graphs differ from the power-law distribution at the tail end. To address this, they propose binning: vertices beyond a certain degree (minDeg = min(deg) s.t. |V(deg)| < H, where H ~ 10 is the vertex count per degree below which vertices are binned) are grouped into bins described by a degree-width binWidth, a max degree localMax, and binSize, the number of degrees in the bin with at least one vertex (to keep track of sparsity). This helps the authors generate graphs with a more realistic degree distribution.
The process of generating a dynamic graph is as follows. First, the difference between the desired and the current degree distribution is calculated. The authors then create an edge-addition set in which each vertex is present as many times as the number of additional incident edges it must receive. Edges are then created by connecting two vertices chosen randomly from this set, removing both from the set once connected. Currently, the authors reject self-loops and duplicate edges. Removal of edges is done in a similar fashion.
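The edge-addition step just described can be sketched roughly as below. One caveat: the notes do not say how rejected pairs are retried, so simply discarding the rejected stubs (leaving some deficits unmet) is my simplifying assumption, as are all the names.

```python
import random

def add_edges(existing_edges, degree_deficit):
    """Create new edges from a degree-deficit map {vertex: extra incident
    edges needed}, in the spirit of DyGraph's edge-addition set.

    Each vertex enters the stub list once per missing incident edge;
    random pairs are drawn from it, and self-loops or duplicate edges
    are rejected (rejected stubs are discarded here, not retried).
    """
    stubs = [v for v, need in degree_deficit.items() for _ in range(need)]
    random.shuffle(stubs)
    seen = {frozenset(e) for e in existing_edges}
    new_edges = []
    while len(stubs) >= 2:
        u, w = stubs.pop(), stubs.pop()
        pair = frozenset((u, w))
        if u == w or pair in seen:
            continue  # reject self-loop / duplicate edge
        seen.add(pair)
        new_edges.append((u, w))
    return new_edges
```

This is essentially the configuration model restricted to the degree deficit, which matches the notes' description of pairing vertices drawn from the edge-addition set.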
Authors observe that adding edges with power-law properties dominates the execution time, and consider parallelizing DyGraph as part of future work.
The growth of the Internet of Things and wireless technology has led to enormous generation of data for various applications such as healthcare, scientific, and data-intensive workloads. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant and continuous access to data with minimal latency and cost is challenging, and an efficient fault-tolerance mechanism is required. Data replication is such a mechanism and has been considered by existing methodologies. However, data replica placement is itself challenging, and existing methods are not efficient given the dynamic application requirements of cloud-based storage area networks, thus incurring latency and a higher cost of data transmission. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that helps minimize latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The results show that the BGDRP technique reduces data access latency, computation time, and cost over state-of-the-art techniques.
A distributed system can be viewed as an environment in which a number of computers/nodes are connected and resources are shared among them. Unfortunately, distributed systems often face the problem of traffic, which can degrade system performance. Traffic management is used to improve scalability and overall system throughput in distributed systems built on Software-Defined Networks (SDN); it improves performance by dividing the traffic effectively among the participating nodes. Many algorithms have been proposed for traffic management, and their performance is measured by parameters such as response time, resource utilization, and fault tolerance. Traffic management algorithms are broadly classified into two categories: scheduling-based and machine-learning-based. This work presents a performance analysis of traffic management algorithms, which can further help in the design of new algorithms. When multiple servers are assigned to compile code, different techniques are used: processes are managed based on power efficiency, network bandwidth, and processor speed, and the output is sent back to the developer. If multiple programs have to be compiled, an appropriate scheduling algorithm is used, so compilation becomes faster and other processes get a chance to compile. An SDN-based clustering algorithm based on simulated annealing is also considered, whose main goal is to increase network lifetime while maintaining adequate sensing coverage in scenarios where sensor nodes produce uniform or non-uniform data traffic.
An elastic, effective, intelligent, and graceful networking architecture is desired for processing massive data, since existing network architectures are considerably inadequate for handling big data. Massive data pushes network resources to their limits, resulting in network congestion, poor performance, and a degraded user experience. This work presents the current state-of-the-art research challenges and potential solutions in big data networking. More specifically, it presents the state of networking problems for massive data in terms of requirements, capacity, operation, and data handling; introduces the architectures of the MapReduce and Hadoop paradigms along with their research requirements, fabric networks, and software-defined networks, which are used in building today's rapidly growing digital world; and compares and contrasts them to identify relevant drawbacks and solutions.
The development of a Geographic Information System for traffic route planni... - Matthew Pulis
This was my MSc Informatics thesis. The project started with a literature review studying the historic advancements of Location Based Services and Geographic Information Systems, in particular open-source GIS. Case studies were reviewed to gain knowledge from past experiences. The project followed the DSDM methodology, and requirements were drawn up following MoSCoW priorities. A full working version of the project, presented in a web interface, can be accessed online.
Stream Processing Environmental Applications in Jordan Valley - CSCJournals
Database system architectures have gone through innovative changes, especially the unification of algorithms and data via the integration of programming languages with the database system. Such innovative changes are needed in stream-based applications, since they have different requirements for the principles of a stream data processing system. For example, the monitoring component requires the query processing system to detect user-defined events in a timely manner, as in a real-time monitoring system. Furthermore, stream processing fits a large class of new applications for which conventional DBMSs fall short, since many stream-oriented systems are inherently geographically distributed, and the distribution offers scalable load management and higher availability. This paper presents statistical information about meteorological data such as weather, soil, and evapotranspiration as collected by the weather stations distributed across different locations in the Jordan Valley. In addition, it shows the importance of stream processing in some real-life applications and shows how database systems can help researchers build prototypes that can be implemented and used in a continuous monitoring system.
Satellite image processing is an intricate task that requires vast computation and data processing, which cannot
be handled by a single computer. Furthermore, the processing of the massive amount of data accumulated by
the satellite is a huge challenge for the end user. Hence, grid computing is the essential platform to provide high
computing performance at the user end. This article reviews the grid services used for satellite image processing
and significant data processing.
Application Profiling and Mapping on NoC-based MPSoC Emulation Platform on Re... - TELKOMNIKA JOURNAL
In network-on-chip (NoC) based multi-processor system-on-chip (MPSoC) development, application profiling is one of the most crucial steps at design time to search for and explore an optimal mapping. Conventional mapping exploration methodologies analyse application-specific graphs by estimating run-time behaviour using analytical or simulation models. However, the former does not replicate the actual application run-time performance, while the latter requires a significant amount of time for exploration. To map applications onto a specific MPSoC platform, the application behaviour on a cycle-accurate emulated platform should be considered to obtain better mapping quality. This paper proposes an application mapping methodology that utilizes an MPSoC prototyped on a Field-Programmable Gate Array (FPGA). Applications are implemented on homogeneous MPSoC cores, and their costs are analysed and profiled on the platform in terms of execution time and intra-core and inter-core communication delays. These metrics are used in an analytical evaluation of the application mapping. The proposed analytical mapping is compared against the exhaustive brute-force method; results show that it produces mappings of quality comparable to the ground-truth solutions in a shorter evaluation time.
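The brute-force baseline that mapping methods like this are compared against can be sketched as follows. The cost model (per-task execution time plus communication volume times per-hop delay between the mapped cores) and every name here are simplifying assumptions, not the paper's exact formulation.

```python
from itertools import permutations

def mapping_cost(mapping, exec_time, comm_vol, hop_delay):
    """Total cost of a task->core mapping: execution times plus
    inter-core communication delay for each communicating task pair
    (traffic volume x delay between their assigned cores)."""
    cost = sum(exec_time[t] for t in mapping)
    for (t1, t2), vol in comm_vol.items():
        cost += vol * hop_delay[mapping[t1]][mapping[t2]]
    return cost

def brute_force_map(tasks, cores, exec_time, comm_vol, hop_delay):
    """Exhaustively try every assignment of tasks to distinct cores and
    return (best_cost, best_mapping) -- the ground-truth baseline."""
    best = None
    for perm in permutations(cores, len(tasks)):
        mapping = dict(zip(tasks, perm))
        c = mapping_cost(mapping, exec_time, comm_vol, hop_delay)
        if best is None or c < best[0]:
            best = (c, mapping)
    return best
```

Exhaustive search is factorial in the task count, which is exactly why an analytical evaluation that approaches these ground-truth mappings in far less time is valuable.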
International Journal of Computational Engineering Research (IJCER) is an international online journal, published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting, your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3
Concept for a web map implementation with faster query response
Journal of Information Engineering and Applications www.iiste.org
ISSN 2224-5782 (print) ISSN 2225-0506 (online)
Vol 2, No.1, 2012
Concept for a Web Map Implementation with Faster Query
Response
M. A. Bashar1* Monirul Islam2 M. A. Chowdhury2 M. P. Sajjad2 M. T. Ahmed3
1. Department of Computer Science and Engineering, Comilla University, Comilla, Bangladesh
2. Structured Data System Limited (SDSL), Lalmatia, Dhaka-1207, Bangladesh
3. Department of Information and Communication Technology, Comilla University, Comilla, Bangladesh
* E-mail of the corresponding author: basharcse@gmail.com
Abstract
Vector data, and in particular road networks, are used in many application domains such as mobile
computing. These systems need to receive query results very quickly, and much research aims at
making the query response faster. One technique is to compress vector data so that they can be
transferred to the client quickly. Examining the different compression techniques used to speed up
the response, we find that some do not make the response fast enough, while others do but are very
complex to implement. We report the concept for the implementation of a web map that uses a simple
compression technique to send query responses to the client, and we found that it makes the response
fast. We used open-source/free components to make the development quick and easy. This paper may
serve as a guideline for the quick implementation of a web map.
Keywords: Web Map, PostGIS, Geoserver, GeoWebCache, Compression.
1. Introduction
Vector data, consisting of thousands of points and line segments that represent various geographical
features, require a significant amount of time to be generated on the server side and are even more
time-consuming to render on the client side. The common practice to address the performance/storage
issue in GIS applications is to send the client a raster image, a rendition of the requested geospatial
data produced on the server side. A new map is generated every time the user pans to a different
location or changes the zoom level (since there are infinitely many combinations of map extents and
zoom levels, it is impossible to generate the maps in advance).
Google Maps developers broke with that tradition [1]. They opted for a solution where maps could
be produced in advance and served as small tiles that are assembled into one big image on the user's
end. The advantages of this approach are the consistency of appearance and graphical quality of the
map (which was rare prior to the release of Google Maps!) and, probably more important, the enormous
scalability that can be achieved. There is no need for server-side processing to generate maps, and
individual map tiles are much smaller than the whole map presented to the user, so they can be
delivered and displayed much faster. The trade-off was a big up-front effort to generate nice-looking
maps and the need to fix zoom levels rather than allowing continuous zoom, as in the traditional
approach. The pre-designed map tile approach is brilliant for speed and performance, but it does not
allow for dynamic map content.
A transmitted raster image is only a visual representation of the geospatial data and hence does not
include the geometric objects and the corresponding metadata. Instead, the objects are rendered in the
image and some of the metadata is superimposed as limited labels. Any subsequent query (e.g., a
nearest-neighbor query) on the client results in another handshake with the online server that hosts
the original dataset; that is, the client application cannot manipulate the results for further
processing. A solution to this problem is to send the original query result to the client in vector
data format, to enable further interaction with it and to preserve the metadata. This way, the server sends the
vector data instead of the raster image to the client, which enables users to interact further with
the query results (e.g., to select a specific geometric object or to issue a query based on returned
results). An example of such an approach was recently taken by Yahoo! Maps Beta [2], which allows
users to highlight different sections of a path (i.e., different line strings). Such a level of
interactivity can greatly improve the user's experience with the system and enable more sophisticated
query analysis on the client [3], [4].
It is important to compress the road vector data so that they can be sent to the client very
quickly. A careful study of road vector databases in general, and of such query results in
particular, reveals that the data returned to the client are highly repetitive in nature and show
strong correlation. Such behavior suggests finding a way to reduce this redundancy. By taking the
differences of successive points along a road, we can eliminate a huge amount of redundancy from the
data before compressing it. The nature of road vector data also suggests that using gzip will
achieve a large amount of compression. After these two steps we send the data to the client and, in
practice, achieve a faster response from the server. The novelty of our compression technique is
that it is simple and works well in real life.
2. Analysis of Different Compression Techniques
The problem of vector data compression has been addressed by two independent communities:
• The data compression community takes numerical approaches embedded in compression schemes,
focusing on the problem of compressing road segments [5]–[7]. The advantages of these methods
lie in their simplicity and their ability to compress given vector data up to a certain level.
However, the drawback of a pure data compression approach is that it ignores the important
spatial characteristics inherent in the structure of vector data. A successful approach must be
devised with the rendering process and the user's requested level of detail in mind, as this is
the final deliverable of the entire process. Also, none of the above references study the
efficiency of their approaches with regard to overall response time.
• The GIS community uses hierarchical data structures such as trees and multi-resolution databases
to represent vector data at different levels of abstraction [8]-[19]. With most of these methods,
displaying data at a certain zoom level requires sending all the coarser levels. Furthermore, using
generalization operators introduces the issue of choosing which objects to display at each level
of abstraction. While both approaches have their own merits, neither blends compression schemes
with a hierarchical representation of vector data. Furthermore, most of the above approaches do
not perform empirical experiments with bulky real-world data to study the effect of such
compression techniques on client, transmission and server times.
One of the most well-known line generalization and simplification schemes proposed in the GIS
community is the Douglas-Peucker algorithm [12]. The idea behind this algorithm is to generalize a
linear dataset effectively by recursively generating long lines and discarding points that are
closer than a certain distance to the lines generated. More specifically, the algorithm takes a
top-down approach, generating an initial simplified poly-line joining the first and last poly-line
vertices. The vertices in between are then tested for closeness to that edge. If all of them are
closer than a specified tolerance, ε > 0, to the edge, the approximation is considered satisfactory.
However, if there are points located further than ε from the simplified poly-line, the point
furthest away is chosen to subdivide the poly-line into two segments, and the algorithm is repeated
recursively for the two generated (shorter) poly-lines.
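The recursion described above can be sketched in a few lines of Python. This is an illustrative implementation of the textbook algorithm, not code from any of the cited systems; the function names are our own.

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the (infinite) line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:            # degenerate segment: distance to the point
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Recursively simplify a poly-line, keeping vertices further than epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the vertex furthest from the edge joining the two endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:                # all vertices within tolerance: keep endpoints
        return [points[0], points[-1]]
    # Otherwise split at the furthest vertex and recurse on both halves.
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right           # drop the duplicated split vertex

line = [(0, 0), (1, 0), (2, 1), (3, 0), (4, 0)]
simplified = douglas_peucker(line, 0.5)
```

With ε = 0.5 the bump at (2, 1) survives while the near-collinear vertices are discarded; raising ε above 1 would reduce the line to its two endpoints, which is exactly the tolerance trade-off discussed above.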
The Douglas–Peucker algorithm delivers the best perceptual representations of the original lines and
is therefore used extensively in computer graphics and most commercial GIS systems. However, it may
affect the topological consistency of the data by introducing self-intersecting simplified lines if
the accepted approximation is not sufficiently fine. Several studies propose techniques to avoid
this self-intersection problem [17]-[19]. For instance, [17] proposes two simple criteria for
detecting and correcting topological inconsistencies, and [18] presents an alternative to the
original Douglas-Peucker algorithm that avoids self-intersections without introducing more time
complexity. Finally, [19] proposes an improvement based on Saalfeld's algorithm to detect possible
self-intersections of a simplified poly-line more efficiently. However, due to the high cost of
performing the proposed generalization in real time, [19] suggests pre-computing a sequence of topologically
consistent representations (i.e., levels of detail) of a map, which are then progressively
transmitted to the client upon request. A nice approach [20] differs from Zhou and Bertolotto [19]
mainly in that it does not store multiple representations of the data at different levels offline;
instead, the entire process of generating the user's requested level of detail is performed on the
fly. Another important difference is that Zhou and Bertolotto [19] propose sending the coarsest data
layer to the user initially and then progressively transmitting more detailed representations of the
same data, whereas [20] sends only the single desired level of detail requested by the user.
Therefore, as the need for more levels of detail increases, the proposed progressive transmission
increases the amount of data transferred redundantly. Furthermore, as opposed to Zhou and
Bertolotto [19], [20] does not focus extensively on topology preservation; it is important to note,
however, that their aggregation does not affect topology at all. Also, the finest level of detail
contains the original data and thus completely preserves topology, and for all other levels of
detail their visual investigation of the results shows a strong indication that topology is preserved.
Another work based on the Douglas–Peucker algorithm is that of Buttenfield [11] on progressive
vector data transmission over the web. Similar to Zhou and Bertolotto [19], Buttenfield proposes a
hierarchical simplification preprocessing step in which each packet stores the representation of a
complete vector at some intermediate level in a tree. Packets are constructed using the
Douglas–Peucker algorithm for different levels of detail and, based on the user request, the entire
row at a certain height is transferred over the network. Buttenfield argues that "transmission time
and disk space remain as two significant impediments to vector transmission". Again, [11] does not
report any experimental evaluation of the proposed system; it is tested only on single poly-line
packets contained in a small geographic database, and the efficiency of the method on real-world
databases is yet to be studied.
A hybrid aggregation and compression technique for road network databases is discussed in [20]. It
is a nice approach to vector data compression that is integrated within a geospatial query
processing system. It uses line aggregation to reduce the number of relevant tuples and Huffman
compression to achieve a multi-resolution compressed representation of a road network database. The
problem with this technique is that it is a complex system, and the complexity associated with it
weakens its applicability in many simple and quick development approaches.
3. Our Compression Technique
Analyzing different compression techniques, their advantages and disadvantages, and their
implementation complexity, we could not accept any of them for our purpose. Instead, we use a simple
and fast enough compression technique: we pick a reference point and, for the other points, store
only the successive differences instead of the full values, represented as comma-separated text. The
outcome is then compressed by gzip [21] and finally sent to the client over the network. Looking at
road vector databases in general, and at query results in particular, we see that consecutive points
(latitudes and longitudes) are very near to each other, so sending a full, say 8-digit, value for
every point is redundant. Consider 5 consecutive latitudes: 27.312312, 27.312316, 27.312323,
27.312325, 27.312328. Sending these 5 latitudes in full requires 40 digits, but we can send the same
information as 27.312312, 4, 7, 2, 3 without any difficulty, i.e., with only 12 digits. Hence we can
eliminate a huge amount of redundant data, and the technique is very easy and straightforward. Gzip,
in turn, is based on the DEFLATE algorithm [22], a combination of LZ77 [23] and Huffman coding [24];
it significantly compresses such highly correlated data and is free software. In Fig. 1 we show the
actual and compressed file sizes for map data of Pretoria, South Africa.
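The two server-side steps (difference calculation, then gzip) and their client-side inverse can be sketched as follows. This is an illustrative Python version, not the authors' exact wire format: it stores the reference value in the same fixed-point units as the differences rather than as a decimal string.

```python
import gzip

SCALE = 1_000_000  # coordinates are given to 6 decimal places

def encode_coords(values):
    """Server side: delta-encode coordinates, join with commas, gzip."""
    ints = [round(v * SCALE) for v in values]                 # fixed-point
    deltas = [ints[0]] + [b - a for a, b in zip(ints, ints[1:])]
    text = ",".join(str(d) for d in deltas)                   # e.g. "27312312,4,7,2,3"
    return gzip.compress(text.encode("ascii"))

def decode_coords(payload):
    """Client side: gunzip, undo the delta encoding, restore floats."""
    deltas = [int(t) for t in gzip.decompress(payload).decode("ascii").split(",")]
    ints, total = [], 0
    for d in deltas:                                          # running sum rebuilds
        total += d                                            # the absolute values
        ints.append(total)
    return [i / SCALE for i in ints]

lats = [27.312312, 27.312316, 27.312323, 27.312325, 27.312328]
payload = encode_coords(lats)
restored = decode_coords(payload)
```

Note that gzip adds a fixed header overhead, so the size advantage shows on realistic query results with thousands of points rather than on a 5-point example.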
4. Implementation
To make the development faster and more cost-effective we use different open/free components. The
generic architecture of our implemented system is as follows. At the bottom of the architecture is a
database (PostGIS). Application servers (GeoServer and GeoWebCache) are in the middle, and on top
there is a user interface layer. The database and GeoServer interact via SQL (with Open Geospatial
Consortium standard spatial
extensions). The application servers and the user interface layer interact via standard web
encodings (JSON) over an HTTP transport; GeoServer and GeoWebCache also interact over HTTP, and
GeoWebCache and the user interface layer interact via standard web encodings (images) over HTTP.
The components in this architecture work as follows:
• PostGIS: This database can answer spatial queries as well as standard attribute queries. It is
certified as compliant with the OGC "Simple Features for SQL" specification. PostGIS is an
extension to the PostgreSQL object-relational database system that allows GIS (Geographic
Information Systems) objects to be stored in the database. PostGIS includes support for
GiST-based R-Tree spatial indexes and functions for the analysis and processing of GIS objects
[25]-[27]. In addition, PostGIS adds types, functions and indexes to support the storage,
management and analysis of geospatial objects: points, line-strings, polygons, multi-points,
multi-line-strings, multi-polygons and geometry collections. As a spatial database, PostGIS can
store very large contiguous areas of spatial data and provide read/write random access to those
data. This is an improvement over old file-based management structures, which were restricted by
file-size limitations and the need to lock whole files during write operations. The spatial SQL
functions available in PostGIS make analyses possible; a complete manual for PostGIS is given in
[26], [27]. We can summarize the functionality of the PostGIS extension as follows:
– It adds a "geometry" data type to the usual database types (e.g. "varchar", "char", "integer",
"date", etc.).
– It adds new functions that take the "geometry" type and return useful information (e.g.
ST_Distance(geometry, geometry), ST_Area(geometry), ST_Length(geometry),
ST_Intersects(geometry, geometry), etc.).
– It adds an indexing mechanism that allows queries with spatial restrictions ("within this
bounding box") to return records very quickly from large data tables.
The core functionalities of a spatial database are easy to list: types, functions, and indexes. What
is impressive is how much spatial processing can be done inside the database once those simple
capabilities are present: overlay analyses, re-projections, massive seamless spatial tables,
proximity searches, compound spatial/attribute filters, and much more.
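The "within this bounding box" restriction mentioned above can be illustrated with a naive linear scan over hypothetical road vertices; a GiST-based R-Tree index in PostGIS answers the same predicate without examining every row, which is what makes it fast on large tables.

```python
def within_bbox(points, xmin, ymin, xmax, ymax):
    """Naive bounding-box query: keep only points inside the box.
    A spatial index returns the same set without a full scan."""
    return [(x, y) for (x, y) in points
            if xmin <= x <= xmax and ymin <= y <= ymax]

# Hypothetical (longitude, latitude) vertices near Pretoria.
roads = [(28.18, -25.74), (28.23, -25.75), (30.00, -26.50)]
inside = within_bbox(roads, 28.0, -26.0, 28.5, -25.5)
```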
• GeoServer: This map/feature server provides standardized web access to the underlying GIS data
source [5]. GeoServer provides an HTTP access method for geospatial objects and queries on those
objects. It presents spatial data (tables in a database) as feature collections and allows HTTP
clients to perform operations on those collections:
– Render them to an image, as an attractive cartographic product.
– Apply a logical filter to them and retrieve a subset, or a summary.
– Retrieve them in multiple formats (KML, GML, GeoJSON).
A comprehensive guide to all aspects of using GeoServer, covering topics from initial
installation to advanced features, is available in [28].
• GeoWebCache: This tile server can intelligently store and serve map tiles using standard web
protocols for requests and responses [25]. Like GeoServer, GeoWebCache is a protocol gateway. It
sits between tiled mapping components (such as OpenLayers, Google Maps and Microsoft Virtual
Earth) and the rendering engine in GeoServer. Tiled map components generate a large number of
parallel requests for map tiles, and the tiles always have the same bounds, so they are prime
candidates for caching. GeoWebCache receives a tile request, checks its internal cache to see if
it already has a copy of the response, returns it if it does, and delegates to the rendering
engine (GeoServer) if it does not. GeoWebCache documentation is available in [29].
Client side: The client has two main functionalities: it enables the user to specify a query and its
parameters and, more importantly, it processes and renders the query results back to the user. We
implement three steps on the client side to process and display the data: gzip decompression,
reconstruction and rendering. Together, these steps convert the compressed data received from the
server into a meaningful, displayable format. Two types of client were used in our system: a
heavyweight client and a lightweight client. In a heavyweight (or slow) client, besides the
resulting vector data, additional data such as raster or satellite images are sent and displayed
on the client. In a lightweight or fast client, only the desired vector data and nothing more are sent and
rendered on the client.
5. Conclusion
A transmitted raster image is only a visual representation of the geospatial data and hence does not
include the geometric objects and the corresponding metadata. The client cannot interact with a
raster image, but many applications, such as navigation, require further interaction with the query
result. A solution to this problem is to send the original query result to the client in vector data
format, to enable further interaction with it and to preserve the metadata. In that case it is
important to compress the query result so that it can be sent to the client very quickly. Analyzing
different compression techniques, their advantages and disadvantages, and their implementation
complexity, we have come to the conclusion that some compression techniques do not make the response
fast enough, while others make the response faster but are too complex to implement. Our simple
difference-plus-gzip technique makes the response fast while remaining easy to implement.
References
[1] Online at http://all-things-spatial.blogspot.com/2009/06/ingenuity-of-google-map-architecture.html accessed December 31, 2011
[2] Online at http://maps.yahoo.com accessed January 06, 2012
[3] Cai Y, Stumpf R, Wynne T, Tomlinson M, Chung DSH, Boutonnier X, Ihmig M, Franco R,
Bauernfeind N (2007), “Visual transformation for interactive spatiotemporal data mining”, Knowl
Inf Syst 13(2):119–142, ISSN 0219-1377, doi:10.1007/s10115-007-0075-5
[4] Shahabi C, Kolahdouzan MR, Safar M (2004), “Alternative strategies for performing spatial joins
on web sources”, Knowl Inf Syst 6(3):290–314, ISSN 0219-1377, doi:10.1007/s10115-003-0104-y
[5] Akimov A, Kolesnikov A, Fränti P (2004), “Reference line approach for vector data compression”,
ICIP, pp 1891–1894
[6] Shekhar S, Huang Y, Djugash J, Zhou C (2002), “Vector map compression: a clustering
approach”, Voisard A, Chen SC (eds) ACM-GIS, ACM, pp 74–80 ISBN 1-58113-591-2
[7] Zhu Q, Yao X, Huang D, Zhang Y (2002), “An efficient data management approach for large
cybercity gis”, ISPRS
[8] Ai T, Li Z, Liu Y (2003), “Progressive transmission of vector data based on changes accumulation
model”, SDH, Leicester, Springer, Berlin, pp 85–96
[9] Bertolotto M, Egenhofer MJ (2001), “Progressive transmission of vector map data over the world
wide web”, Geoinformatica 5(4):345–373, URL http://citeseer.ist.psu.edu/bertolotto01progressive.html
[10] Bertolotto M, Zhou M (2007), “Efficient line simplification for web-mapping”, International
Journal of Web Engineering and Technology, special issue on web and wireless GIS, 3(2):139–156
[11] Buttenfield B (2002), “Transmitting vector geospatial data across the internet”, In: GIScience ’02:
proceedings of the 2nd international conference on geographic information science, London, UK,
Springer, Heidelberg, pp 51–64. ISBN 3-540-44253-7
[12] Douglas DH, Peucker TK (1973), “Algorithms for the reduction of the number of points required
to represent a digitized line or its caricature”, Can Cartogr 10(2):112–122
[13] Han Q, Bertolotto M (2004), “A multi-level data structure for vector maps”, In: GIS ’04:
proceedings of the 12th annual ACM international workshop on geographic information systems,
New York, NY, USA, ACM Press, pp 214–221 ISBN 1-58113-979-9.
doi:10.1145/1032222.1032254
[14] Paiva AC, da Silva ED, Leite FL Jr, de Souza Baptista C (2004), “A multiresolution approach for
internet GIS applications”, In: DEXA Workshops, IEEE Computer Society, pp 809–813, ISBN 0-7695-2195-9
[15] Persson J (2004), “Streaming of compressed multi-resolution geographic vector data”,
Geoinformatics, Sweden.
[16] Puppo E, Dettori G (1995), “Towards a formal model for multi-resolution spatial maps”, In:
Egenhofer MJ, Herring JR (eds) SSD, volume 951 of Lecture Notes in Computer Science,
Springer, Heidelberg, pp 152–169. ISBN 3-540-60159-7
[17] Saalfeld A (1999), “Topologically consistent line simplification with the Douglas-Peucker
algorithm”, Cartogr Geogr Inf Sci 26(1):7–17
[18] Wu ST, Márquez MRG (2003), “A non-self-intersection Douglas-Peucker algorithm”, In:
SIBGRAPI, IEEE Computer Society, pp 60–66, ISBN 0-7695-2032-4
[19] Zhou M, Bertolotto M (2005), “Efficiently generating multiple representations for web mapping”,
In: Li K-J, Vangenot C (eds) W2GIS, volume 3833 of Lecture Notes in Computer Science,
Springer, Heidelberg, pp 54–65. ISBN 3-540-30848-2
[20] Ali Khoshgozaran, Ali Khodaei, Mehdi Sharifzadeh, Cyrus Shahabi (2008), “A hybrid aggregation
and compression technique for road network databases”, Knowledge and Information Systems,
Volume 17, Issue 3, Pages 265-286
[21] Online at http://en.wikipedia.org/wiki/Gzip accessed December 15, 2011
[22] Online at http://en.wikipedia.org/wiki/DEFLATE accessed January 04, 2012
[23] Online at http://en.wikipedia.org/wiki/LZ77_and_LZ78 accessed January 04, 2012
[24] Online at http://en.wikipedia.org/wiki/Huffman_coding accessed January 04, 2012
[25] Online at http://opengeo.org/publications/opengeo-architecture/ accessed
[26] Ramsey, P., Refractions Research Inc, “PostGIS Manual”. Online at
http://www.dcc.fc.up.pt/~michel/TABD/postgis.pdf accessed January 06, 2012.
[27] Ramsey, P., Refractions Research Inc, “Introduction to PostGIS”, Online at
http://2007.foss4g.org/workshops/W-04/PostGIS%20Workshop.doc accessed October 1, 2011
[28] Online at http://docs.geoserver.org/1.7.x/en/user/ accessed October 1, 2009
[29] Online at http://geowebcache.org/trac/wiki/Documentation accessed October 1, 2009
Fig. 1: Actual and compressed file sizes (in KB).
Fig. 2: Architecture of our implemented web map: the user interface talks to GeoWebCache over
IMG/HTTP and to the application server over JSON/HTTP; difference calculation and gzip compression
sit in front of GeoServer; GeoServer queries the PostGIS database over SQL/JDBC.
Fig. 3: Navigation in J2ME client (shot 1).
Fig. 4: Navigation in J2ME client (shot 2).