As the volume of data used by embedded GIS continues to grow and application requirements continue to rise, the quad-tree index algorithms and block-classification data organization currently used to handle large amounts of data show certain limitations. Combining the characteristics of embedded GIS data, the authors put forward multilevel data indexing and dynamic data loading, realizing on-demand data loading, enhancing real-time response speed, and overcoming the limitation on large-volume data.
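The multilevel indexing with on-demand loading described above can be sketched as a tile pyramid whose tiles are fetched only when a query window touches them, with an LRU cache bounding resident memory. This is an illustrative sketch, not the authors' implementation; the class and parameter names (TileIndex, tile_size, cache_tiles) are assumptions.

```python
from collections import OrderedDict

class TileIndex:
    """Illustrative multilevel tile index with on-demand loading.

    Tiles are addressed by (level, row, col); only tiles touched by a
    query window are loaded, and an LRU cache bounds resident memory.
    """

    def __init__(self, loader, tile_size=256.0, cache_tiles=64):
        self.loader = loader          # callable (level, row, col) -> tile data
        self.tile_size = tile_size
        self.cache = OrderedDict()    # LRU cache of loaded tiles
        self.cache_tiles = cache_tiles

    def _get_tile(self, level, row, col):
        key = (level, row, col)
        if key in self.cache:
            self.cache.move_to_end(key)      # mark as recently used
            return self.cache[key]
        data = self.loader(level, row, col)  # load only when required
        self.cache[key] = data
        if len(self.cache) > self.cache_tiles:
            self.cache.popitem(last=False)   # evict least recently used
        return data

    def query(self, level, xmin, ymin, xmax, ymax):
        """Return the tiles covering a map window at one zoom level."""
        step = self.tile_size / (2 ** level)  # finer tiles at deeper levels
        c0, c1 = int(xmin // step), int(xmax // step)
        r0, r1 = int(ymin // step), int(ymax // step)
        return [self._get_tile(level, r, c)
                for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

A query for a 300 by 300 window at level 0 with 256-unit tiles loads only the four tiles it overlaps; repeating the query hits the cache instead of the loader.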
IJCER (www.ijceronline.com) International Journal of computational Engineerin... - ijceronline
A New Mechanism for Service Recovery Technology by using Recovering Service’s... - ijfcstjournal
Service recovery technology is an important constituent of emergency response technologies. The goal of service recovery is to build a technology system focused on the survival of information system services. By analyzing the relationship between a service and its data, we present a service recovery mechanism based on recovering the service's data. We introduce a third-party service monitor to track state changes of the service, design the data recovery model, and give an example of quick data recovery. Finally, we present a prototype system of service recovery; experimental results on the prototype show that the proposed mechanism greatly improves service recovery efficiency and meets the timeliness requirements of the information service.
Stochastic Scheduling Algorithm for Distributed Cloud Networks using Heuristi... - Eswar Publications
Rule-based heuristic scheduling algorithms are employed for resource and task scheduling in real-time and cloud computing systems, since they are practical to implement for NP-complete problems. However, while simple, these algorithms leave much room for improvement. This study presents a hyper-heuristic scheduling algorithm, called the High-Performance Hyper-Heuristic Scheduling Algorithm (HHSA), which uses detection operators to find better scheduling solutions for real-time and cloud computing systems. Two operators, diversity detection and improvement detection, are employed to decide when to switch heuristics, dynamically selecting a low-level heuristic that can find better solutions. To evaluate the method, the authors compared it with several scheduling algorithms; the results show that HHSA can significantly decrease the makespan of task scheduling compared with the other algorithms. The proposed high-performance hyper-heuristic algorithm reduces the makespan of scheduling on cloud computing systems and can be applied to both sequence-dependent and sequence-independent scheduling problems.
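The abstract does not spell out HHSA's internals, but the idea of improvement-detection-driven switching between low-level heuristics can be illustrated with a toy makespan minimizer on identical machines. Everything here (the two low-level heuristics, the stall counter standing in for the detection operators, all names) is an assumed simplification, not the paper's algorithm.

```python
import random

def makespan(assign, jobs, machines):
    """Makespan of an assignment: the latest machine finishing time."""
    load = [0.0] * machines
    for job, m in zip(jobs, assign):
        load[m] += job
    return max(load)

def hhsa_sketch(jobs, machines, iters=2000, patience=50, seed=1):
    """Toy hyper-heuristic: two low-level heuristics (move, swap).

    A stall counter plays the role of the improvement-detection
    operator: when the current low-level heuristic stops improving
    the best makespan, the algorithm switches to the other one.
    """
    rng = random.Random(seed)
    assign = [rng.randrange(machines) for _ in jobs]
    best = makespan(assign, jobs, machines)
    heuristics = ["move", "swap"]
    current, stall = 0, 0
    for _ in range(iters):
        cand = assign[:]
        if heuristics[current] == "move":      # reassign one job
            cand[rng.randrange(len(jobs))] = rng.randrange(machines)
        else:                                   # swap two jobs' machines
            i, j = rng.randrange(len(jobs)), rng.randrange(len(jobs))
            cand[i], cand[j] = cand[j], cand[i]
        cost = makespan(cand, jobs, machines)
        if cost < best:                         # improvement detected
            assign, best, stall = cand, cost, 0
        else:
            stall += 1
            if stall >= patience:               # stalled: switch heuristic
                current = (current + 1) % len(heuristics)
                stall = 0
    return best
```

For jobs of length [3, 3, 2, 2, 2] on two machines the optimal makespan is 6 (3+3 versus 2+2+2), which this tiny search space lets the sketch reach quickly.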
Advances in microfabrication and communication techniques have led to an enormous proliferation of WSN applications. Research is focused on reducing setup and operational energy costs, the bulk of which are linked to the communication activities of a WSN. Any progress towards energy efficiency has the potential for huge savings globally; every energy-efficient step is therefore an endeavour to cut costs and 'Go Green'. In this paper, we propose a framework to reduce communication workload through in-network compression, multiple query synthesis at the base station, and modification of query syntax through the introduction of static variables. These are general approaches that can be used in any WSN irrespective of application.
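Multiple query synthesis at the base station, as described above, amounts to merging user queries that target the same region so shared attributes are sampled and transmitted through the network only once. The sketch below is a minimal illustration under assumed query shapes (region, attribute set, sampling period), not the paper's framework.

```python
def synthesize_queries(queries):
    """Merge per-user queries that target the same region into one
    network query, so shared attributes are sampled and sent once.

    Each query: {"region": str, "attrs": set, "period_s": int}.
    The merged query samples the union of attributes at the fastest
    requested period (a gcd of periods would also give exact schedules).
    """
    merged = {}
    for q in queries:
        slot = merged.setdefault(q["region"],
                                 {"attrs": set(), "period_s": q["period_s"]})
        slot["attrs"] |= q["attrs"]                        # union of fields
        slot["period_s"] = min(slot["period_s"], q["period_s"])
    return merged
```

Two queries on the same field, one asking for temperature every 60 s and one for temperature and humidity every 30 s, collapse into a single in-network query for both attributes every 30 s.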
Role of Operational System Design in Data Warehouse Implementation: Identifyi... - iosrjce
The data warehouse design process takes input from the organization's operational system, and the quality of a data warehousing solution depends on the operational system's design. Operational system implementations often have limitations that prevent proceeding directly to data warehouse design. In this paper, we investigate the operational system of an organization to identify such limitations and to determine the role of operational system design in the process of data warehouse design and implementation. We explore possible methods to handle these limitations and propose techniques for obtaining a quality data warehousing solution despite them. To base the work on a live example, the National Rural Health Mission (NRHM) Project has been taken: a national health-sector project managed by the Indian Government across the country. Its complex structure and high data volume make it an ideal case for data warehouse implementation.
DISTRIBUTED AND BIG DATA STORAGE MANAGEMENT IN GRID COMPUTING - ijgca
Big data storage management is one of the most challenging issues for Grid computing environments, since data-intensive applications frequently involve a high degree of data access locality and Grid applications typically deal with large amounts of data. Traditional high-performance computing approaches rely on dedicated servers for data storage and replication. In this paper we present a new mechanism for distributed big data storage and resource discovery services, proposing an architecture named Dynamic and Scalable Storage Management (DSSM) for grid environments. This allows grid computing to share not only computational cycles but also storage space: storage can be transparently accessed from any grid machine, enabling easy data sharing among grid users and applications. The concept of virtual ids, which allows the creation of virtual spaces, is introduced and used. DSSM divides all Grid Oriented Storage devices (nodes) into multiple geographically distributed domains to facilitate locality and simplify intra-domain storage management. Grid-service-based storage resources are adopted to stack simple modular services piece by piece as demand grows. To this end, the work proceeds along four axes: describing the DSSM architecture and algorithms; exposing storage resources and resource discovery as Grid services; evaluating a purpose-built prototype for dynamism, scalability, and bandwidth; and discussing the results. Bottom-level and upper-level algorithms for dynamic and scalable storage management, along with higher bandwidths, have been designed.
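The domain-based locality idea in DSSM, keeping placement and discovery traffic inside a geographic domain where possible, can be sketched with a small registry. This is an assumed toy model (class and method names are invented); the real DSSM algorithms are not given in the abstract.

```python
class DSSMRegistry:
    """Toy domain-partitioned storage registry (names assumed).

    Nodes register under a geographic domain; placement prefers the
    requester's own domain, falling back across domains only when
    local capacity runs out, which keeps most traffic intra-domain.
    """

    def __init__(self):
        self.domains = {}  # domain name -> {node name: free capacity (GB)}

    def register(self, domain, node, free_gb):
        self.domains.setdefault(domain, {})[node] = free_gb

    def place(self, domain, size_gb):
        # Prefer nodes in the requester's own domain (locality) ...
        pool = dict(self.domains.get(domain, {}))
        if not any(f >= size_gb for f in pool.values()):
            # ... fall back to all domains when local space runs out.
            pool = {n: f for d in self.domains.values() for n, f in d.items()}
        node = max((n for n, f in pool.items() if f >= size_gb),
                   key=lambda n: pool[n], default=None)
        if node is None:
            raise RuntimeError("no node with enough free space")
        for nodes in self.domains.values():
            if node in nodes:
                nodes[node] -= size_gb  # reserve the space
        return node
```

With 100 GB and 50 GB nodes in one domain and a 500 GB node elsewhere, an 80 GB request stays local; a following 90 GB request no longer fits locally and spills to the remote domain.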
Efficient and scalable multitenant placement approach for in memory database ... - CSITiaesprime
Of late, the multitenant model with in-memory databases has become a prominent area of research. This paper uses the advantages of multitenancy to reduce hardware and labor costs and to make storage available by sharing database memory and file execution. Its purpose is to give an overview of the proposed Supple architecture for implementing an in-memory database backend with multitenancy, applicable in public and private cloud settings. The backend in-memory database uses a column-oriented approach with dictionary-based compression. We used a dedicated sample benchmark for workload processing and also adopt an SLA penalty model. In particular, we present two approximation algorithms, multi-tenant placement (MTP) and best-fit greedy, to assess the quality of tenant placement. The experimental results show that the MTP algorithm is scalable and efficient in comparison with the best-fit greedy algorithm over the proposed architecture.
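The best-fit greedy baseline named above is a standard bin-packing heuristic: each tenant goes onto the open server that leaves the least slack, and a new server is opened only when nothing fits. The sketch below illustrates that baseline under assumed units (memory in MB); the paper's MTP algorithm itself is not detailed in the abstract.

```python
def best_fit_greedy(tenants_mb, server_mb):
    """Best-fit greedy tenant placement (classic bin-packing heuristic).

    Returns (placement, server_count) where placement[i] is the server
    index assigned to tenant i.
    """
    servers = []     # remaining memory per open server
    placement = []
    for mem in tenants_mb:
        fits = [i for i, free in enumerate(servers) if free >= mem]
        if fits:
            i = min(fits, key=lambda i: servers[i] - mem)  # tightest fit
        else:
            servers.append(server_mb)                      # open new server
            i = len(servers) - 1
        servers[i] -= mem
        placement.append(i)
    return placement, len(servers)
```

Tenants of sizes [6, 5, 4, 3, 2] on 10-unit servers pack perfectly into two servers (6+4 and 5+3+2), showing how the tightest-fit rule avoids stranding capacity.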
Data Partitioning in MongoDB with Cloud - IJAAS Team
Cloud computing offers various useful services (IaaS, PaaS, SaaS) for deploying applications at low cost, making them available anytime, anywhere, with the expectation that they be scalable and consistent. One technique to improve scalability is data partitioning. The existing techniques in use are not capable of tracking data access patterns. This paper implements a scalable workload-driven technique for improving the scalability of web applications. The experiments are carried out over the cloud using the NoSQL data store MongoDB to scale out. This approach offers low response time, high throughput, and fewer distributed transactions. The partitioning technique is evaluated using the TPC-C benchmark.
Graph Based Workload Driven Partitioning System by Using MongoDB - IJAAS Team
Enterprise web applications and websites are accessed by huge numbers of users who expect reliability and high availability. Social networking sites generate exponentially large amounts of data, and storing that data efficiently is a challenging task. SQL and NoSQL stores are mostly used for this; since an RDBMS cannot handle unstructured data at huge volumes, NoSQL is the better choice for web applications. A graph database is one of the efficient ways to store data in NoSQL: it stores data in the form of relations, with each tuple represented by a node and each relationship by an edge. However, handling exponentially growing data on a single server can decrease performance and increase response time. Data partitioning is a good way to maintain moderate performance even as workload increases. There are many data partitioning techniques, such as range, hash, and round-robin, but they are not efficient for small transactions that access only a few tuples. NoSQL data stores provide scalability and availability through various partitioning methods, and graph partitioning is an efficient way to represent and process such data scalably. To balance load, data are partitioned horizontally and allocated across geographically available data stores. If the partitions are not formed properly, the result is expensive distributed transactions in terms of response time, so the partitioning of tuples should be based on their relations. The proposed system uses Schism, a workload-aware graph partitioning technique: after partitioning, related tuples end up in a single partition, and each node of the graph is mapped to a unique partition. The overall aim of graph partitioning is to place nodes onto different distributed partitions so that related data land on the same cluster.
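Schism's core move is to build a graph whose edge weights count how often two tuples are accessed by the same transaction, then min-cut partition it so co-accessed tuples share a partition (Schism itself uses a graph partitioner such as METIS plus replication). The sketch below builds that co-access graph and applies a simple greedy stand-in for the min-cut step; it illustrates the idea, not the actual Schism algorithm.

```python
from collections import defaultdict
from itertools import combinations

def coaccess_graph(transactions):
    """Edge weight = number of transactions touching both tuples."""
    w = defaultdict(int)
    for txn in transactions:
        for a, b in combinations(sorted(set(txn)), 2):
            w[(a, b)] += 1
    return w

def greedy_partition(transactions, k):
    """Greedy stand-in for the min-cut step: visit tuples and place
    each one where its already-placed co-accessed neighbors live,
    breaking ties toward the lighter partition."""
    w = coaccess_graph(transactions)
    nodes = sorted({t for txn in transactions for t in txn})
    part = {}
    for n in nodes:
        score = [0] * k
        for (a, b), wt in w.items():
            other = b if a == n else (a if b == n else None)
            if other is not None and other in part:
                score[part[other]] += wt      # pull toward neighbors
        loads = [sum(1 for p in part.values() if p == i) for i in range(k)]
        part[n] = max(range(k), key=lambda i: (score[i], -loads[i]))
    return part
```

On a workload where transactions always touch {a, b} together and {c, d} together, the two pairs land in different partitions, so no transaction crosses a partition boundary.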
An elastic, effective, agile, and intelligent networking architecture is needed for processing massive data. Moreover, existing network architectures are largely incapable of handling such data: big data pushes network resources to their limits, resulting in network congestion, poor performance, and degraded user experience. This work presents the current state-of-the-art research challenges and potential solutions in big data networking. More specifically, it surveys networking problems related to big data requirements, capacity, operation, and data handling; introduces the MapReduce and Hadoop architectures within those research requirements, along with the fabric networks and software-defined networks used to build today's rapidly growing digital world; and compares and contrasts them to identify relevant drawbacks and solutions.
SUITABILITY OF SERVICE ORIENTED ARCHITECTURE FOR SOLVING GIS PROBLEMS - ijait
Nowadays spatial data is becoming a key element for effective planning and decision making in all aspects of society. Spatial data are data related to features on the ground; a Geographic Information System (GIS) is a system that captures, analyzes, and manages such spatially referenced data. This paper analyzes the architecture and main features of Geographic Information Systems and discusses some important problems that have emerged in research on applying GIS in organizations, focusing on lack of interoperability, agility, and business alignment. We explain how SOA, as a service-oriented software architecture model, can support the transformation of geographic information software from "system and function" to "service and application" and, as a best practice of the architectural concepts, can increase business alignment in enterprise applications.
With the rapid development of Geographic Information Systems (GISs) and their applications, more and more geographical databases have been developed by different vendors. However, data integration and access remain a big problem for GIS application development, as no interoperability exists among different spatial databases. In this paper we propose a unified approach for spatial data query. The paper describes a framework for integrating information from repositories containing different vector data set formats and repositories containing raster datasets. The presented approach converts the different vector data formats into a single unified format (File Geodatabase, "GDB"). In addition, we employ metadata to support a wide range of user queries retrieving relevant geographic information from heterogeneous, distributed repositories, which enhances both query processing and performance.
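The metadata-driven retrieval described above can be pictured as a catalog lookup: each repository's metadata records its spatial extent and themes, and a query is routed only to repositories whose extent intersects the query box. This is an assumed minimal sketch of that routing step, not the paper's framework.

```python
def find_repositories(catalog, bbox, theme=None):
    """Metadata-driven discovery: return repositories whose recorded
    extent intersects the query box (and match a theme, if given).

    catalog: {name: {"extent": (x0, y0, x1, y1), "themes": set}}
    """
    def intersects(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        # Boxes overlap iff they overlap on both axes.
        return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

    return [name for name, meta in catalog.items()
            if intersects(meta["extent"], bbox)
            and (theme is None or theme in meta["themes"])]
```

A query box inside the "roads" extent returns only that repository, so the raster and vector stores elsewhere are never contacted, which is where the query-processing gain comes from.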
Granularity analysis of classification and estimation for complex datasets wi... - IJECEIAES
Dispersed and unstructured datasets are substantial parameters for estimating the exact amount of required space. Depending on the size and distribution of the data, and especially when the classes overlap significantly, the level of granularity needed for a precise classification of the datasets grows. Data complexity is one of the major attributes governing the proper granularity value, as it has a direct impact on performance. Dataset classification is a vital step in complex data analytics, ensuring that a dataset is ready to be efficiently scrutinized. Data collections routinely contain missing, noisy, and out-of-range values, and analytics over data that has not been carefully classified can produce unreliable outcomes. Hence, classification of complex data sources helps machine learning algorithms preserve the accuracy of gathered datasets. Dataset complexity and pre-processing time reflect the effectiveness of each individual algorithm. Once the complexity of the datasets is characterized, the comparatively simpler datasets can be investigated further with a parallelism approach. Speedup performance is measured by executing an MOA simulation. Our proposed classification approach outperforms alternatives and improves the granularity level of complex datasets.
Cloud computing has been the most widely adopted technology of recent times, and databases have now also moved to the cloud, so we look into the details of database as a service and its functioning. This paper covers the basic information about database as a service: the working of database as a service and the challenges it faces are discussed. The structure of a database in cloud computing and its working in collaboration with nodes are examined under database as a service. The paper also highlights the important things to check before adopting the database-as-a-service provider that is best among the others. The advantages and disadvantages of database as a service will let you decide whether or not to use it. Database as a service has already been adopted by many e-commerce companies, and those companies are benefiting from this service.
The growth of the Internet of Things and wireless technology has led to enormous generation of data for various applications such as healthcare, scientific, and data-intensive workloads. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, and an efficient fault tolerance mechanism is required. Data replication is an efficient fault tolerance mechanism that has been considered by existing methodologies. However, data replica placement is challenging, and existing methods are not efficient when considering the dynamic application requirements of cloud-based storage area networks, thus incurring latency and in turn higher data transmission cost. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that helps minimize latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The outcome shows that the BGDRP technique minimizes data access latency, computation time, and cost over the state-of-the-art technique.
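The bipartite structure behind replica placement pairs data items with candidate sites, with edges weighted by access latency. The abstract does not give BGDRP's actual algorithm, so the sketch below uses a simple greedy assignment over that bipartite cost model purely as an illustration of the problem shape; all names and the capacity model are assumptions.

```python
def place_replicas(latency_ms, capacity, replicas=2):
    """Greedy sketch of bipartite replica placement: for each data item,
    pick the lowest-latency sites that still have free replica slots.

    latency_ms: {item: {site: round-trip ms}}; capacity: {site: slots}.
    """
    cap = dict(capacity)      # remaining replica slots per site
    placement = {}
    for item, costs in latency_ms.items():
        chosen = []
        for site in sorted(costs, key=costs.get):  # nearest sites first
            if cap.get(site, 0) > 0:
                cap[site] -= 1
                chosen.append(site)
            if len(chosen) == replicas:
                break
        placement[item] = chosen
    return placement
```

With sites at 5 ms, 20 ms, and 40 ms from an item and one slot each, the two replicas land on the two nearest sites, which is the latency-minimizing choice the BGDRP formulation targets.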
Similar to Research of Embedded GIS Data Management Strategies for Large Capacity
Technology Content Analysis with Technometric Theory Approach to Improve Perf... - Nooria Sukmaningtyas
The radiology installation is equipped with medical equipment supporting health services in the investigation of disease. There are three technology criteria for the equipment: investigation with sophisticated equipment (single-slice CT scanner), with medium-sized equipment (general X-ray, 300 mA / 125 kV), and with simple equipment (portable dental X-ray, 8 mA / 70 kV). In terms of contribution to the hospital, the radiology installation declined from 2008 to 2012: 6.8 percent in 2008, 4.3 percent in 2009, 2.5 percent in 2010, 2.3 percent in 2011, and 2.6 percent in 2012. Using the technometric theory approach, the study measures the contribution of each technology component (Technoware, Humanware, Inforware, and Orgaware) in the radiology installation and assesses the sophistication of the technology in use via the Technology Contribution Coefficient (TCC), so that the factors affecting the observed performance can be identified. The TCC values are: high technology, TCC_tt = 0.490 with T_tt = 0.387, H = 0.519, I = 0.538, O = 0.534; middle technology, TCC_tm = 0.443 with T_tm = 0.258, H = 0.519, I = 0.538, O = 0.534; simple technology, TCC_ts = 0.398 with T_ts = 0.168, H = 0.519, I = 0.538, O = 0.534. If the value of a technology component (T, H, I, O) is less than the TCC, the radiology installation unit is in a declining phase, a condition that cannot be left alone; the directors need to act immediately to formulate the right policies quickly and protect it from loss. The final result of the study is that three technology components show gaps almost everywhere (Humanware = 0.519, Inforware = 0.538, Orgaware = 0.534), but the largest gap among the components lies in the Technoware aspect (0.387, 0.258, 0.168), which means the development strategy of the radiology installation unit should be prioritized on increasing the Technoware aspect (rejuvenation of medical equipment).
Data Exchange Design with SDMX Format for Interoperability Statistical DataNooria Sukmaningtyas
Today’s concept of Open Government Data (OGD) for openness, transparency and ease of
access of data owned by government agencies becomes increasingly important. This initiative emerges
from the demand of data usersforthe data belongs to the government agencies. The data services
providing an easy access, cheap, fast, and interoperability are needed by the users and becomes
important indicator performance for respective government agencies. Statistical Data and Metadata
Exchange (SDMX) is a new standard format in the data dissemination activities particularly in the
exchange of statistical data and metadata via Internet. In this respect SDMX support the implementation of
OGD project. This paper is on the technical design, development and implementation of data and
metadata exchange service of statistical data using SDMX format to support interoperability data through
web services. Three results are proposed: (i) framework for standardization of structure of statistical
publications data model with SDMX; (ii) design architecture of data sharing model; and (iii) web service
implementation of data and metadata exchange service using Service Oriented Analysis and Design
(SOAD) method. Implementation at Statistics Indonesia (BPS) is chosen as a case study to prove the
design concept. It is shown through quantitative assessment and black box testing that the design
achieves its objective.
The knowledge management is a model involving the information system in the knowledge
processing. “Tim Penyelesaian Kerugian Negara” (TPKN) is one of sources of information related to the
state loss settlement, so it needs the development of knowledge management system on the state loss
settlement to ease the users when looking for the references of knowledge as completely as possible,
accurately, and quickly. This research aims to develop the system of knowledge management on the state
loss in LAPAN (SIMAPKLA). The used research methodology is Knowledge Management System Life
Cycle (KMSLC). The tacit, explicit knowledge is taken from the experts and it is stored in the Knowledge
Base (KB). The design model uses the approach with the orientation of object and implementation with Yii
Framework and blackbox testing. The menu on this system includes home, about us, dictionary, news,
meeting schedulles, knowledge about state loss, e-document, progress, forum, and contact us. Based on a
series of tests, in the aspect of functionality, this system is suitable and useful to share knowledge and
know the development of state loss settlement.
Automatic Extraction of Diaphragm Motion and Respiratory Pattern from Time-se...Nooria Sukmaningtyas
Thoracic time-sequential MRI can be used to assess diaphragm motion pattern without exposing
radiation to subject. Clinicians may employ the motion to evaluate the severeness of chronic obstructive
pulmonary disease (COPD). This study proposed a novel method of diaphragm motion extraction method
on time-sequential thoracic MRI in sagittal plane. Otsu’s threshold and active contour algorithm are used to
obtain diaphragm boundary. An automatic diaphragm motion tracking and extraction of respiratory pattern
are also performed based on the diaphragm boundary. A total of 1200 frames time-sequential MRI in
sagittal plane was obtained for total of 15 subjects (8 healthy volunteers and 7 COPD patients). The
proposed method successfully extracts diaphragm motion and respiratory patterns for both healthy
volunteers and COPD patients.
A dual-frequency microstrip patch antennas has been presented and used for 802.11WLAN
applications. The antennas had been designed, simulated and parametrically studied in CST Microwave
studio. By introducing u-slot, dual-band operation with its operating mode centered at frequency 2.4GHz,
3.65GHz and 5.2GHz had been obtained. The gain and directivity had been improved by adjusting the
parameters of the antennas. The gain of the proposed designs was 6.019dBi, 4.04dBi and 6.22dBi and
directivity was 6.02dBi, 4.05dBi and 6.22dBi at resonant frequencies 2.4GHz, 3.6GHz and 5.2GHz
respectively. The patch antennas had been proposed to be used in portable devices that require
miniaturized constituent parts.
The Detection of Straight and Slant Wood Fiber through Slop Angle Fiber FeatureNooria Sukmaningtyas
Quality control is one of important process that can not be avoided in industry. Image processing
technique is required to distinguish the quality of wood. If it can be done automatically by the computer, it
will be very helpful. This paper discusses the detection of straight and slant wood fiber to distinguish its
quality. This paper proposes an algorithm by using only two features i.e. mean (average value of slop
angle fiber) and maximumangle (the maximum value of slop angle fiber). Then the classification method is
used by tresholding. The result shows the performance is achieved on accuracy 79.2%
Active Infrared Night Vision System of Agricultural VehiclesNooria Sukmaningtyas
Active infrared night vision system was significant for night driving and it has been greatly used on
limousine car. Design active infrared night vision system for agricultural vehicles greatly improved the night
vision of them and it was an inevitable trend. Comparing parameters of various night vision systems and
designing active infrared night vision system of agricultural vehicles was significant for improving active
security of agricultural vehicles working at nighttime. By analyzing the infrared night vision system basic
parameters determined the structure form and basic parameters, calculated the infrared light wave width
and emission power to choose each components, designed active infrared night vision system’s structure
and determined parameters of agricultural vehicles.
Robot Three Dimensional Space Path-planning Applying the Improved Ant Colony ...Nooria Sukmaningtyas
To make robot avoid obstacles in 3D space, the Pheromone of Ant Colony Optimization (ACO) in
Fuzzy Control Updating is put forward, the Pheromone Updating value varies with The number of iterations
and the path-planning length by each ant . the improved Transition Probability Function is also proposed,
which makes more sense for each ant choosing next feasible point .This paper firstly, describes the Robot
Workspace Modeling and its path-planning basic method, which is followed by introducing the improved
designing of the Transition Probability Function and the method of Pheromone Fuzzy Control Updating of
ACO in detail. At the same time, the comparison of optimization between the pre-improved ACO and the
improved ACO is made. The simulation result verifies that the improved ACO is feasible and available.
Research on a Kind of PLC Based Fuzzy-PID Controller with Adjustable FactorNooria Sukmaningtyas
A kind of fuzzy-PID controller with adjustable factor is designed in this paper. Scale factor’s selfadjust
will come true. Fuzzy control algorithm is finished in STEP7 software, and then downloaded in S7-
300 PLC. WinCC software will be used to control the change-trend in real time. Data communication
between S7-300 PLC and WinCC is achieved by MPI. The research shows that this fuzzy-PID controller
has better robust capability and stability. It’s an effective method in controlling complex long time-varying
delay systems.
This paper proposed a nonlinear robust control for spacecraft attitude based on passivity and
disturbance suppression vector. The spacecraft model was described using quaternion. The control law
introduced the suppression vector of external disturbances and had no information related to the system
parameters. The desired performance of spacecraft attitude control could be achieved using the designed
control law. And stability conditions of the nonlinear robust control for spacecraft attitude were given. The
stability could be proved by applying Lyapunov approach. The verification of the proposed attitude control
method was performed through a series of simulations. The numerical results showed the effectiveness of
the proposed control method in controlling the spacecraft attitude in the presence of external disturbances.
The main benefit of the proposed attitude control method does not need angular velocity measurement
and has its robustness against model uncertainties and external disturbances.
Remote Monitoring System for Communication Base Based on Short MessageNooria Sukmaningtyas
The automatic monitoring system of communication base which is an important means to realize
modernization of mobile communication base station management. In this paper, we implement a
monitoring system for communication base with three essential functions which are telemetry, remote
control and communication. In this system, data acquisition unit, data transmit unit and monitoring centre
unit are combined to form this monitoring system. The system can check the communication base status
anytime through GSM SMS (short message service), and can send predefined command to perform
remote data collection and monitoring in the special conditions. It is suitable especially for the alarm of
unusual situation, the monitoring of environmental information and entrance guard information. The paper,
firstly, proposes the architecture of the monitoring system; secondly, proposes the terminal of monitoring
system. The data collection terminal is studied and designed, including hardware design based on
embedded system and software design. Finally, presents implmentation and results. The monitoring
system can improve the integrity, reliability, flexibility and intellectuality of monitoring system. The system
with modular structure, which is low-cost, fitter and easier to move and operate, can be expanded
according to practical need and is reliable and effective through field test.
Tele-Robotic Assisted Dental Implant Surgery with Virtual Force FeedbackNooria Sukmaningtyas
The dental implant surgical applications full of risk because of the complex anatomical
architecture of craio-maxillofacial area. Therefore, the surgeons move towards computer-aided planning
for surgeries and then implementation using robotic assisted tele-operated techniques. This study divided
into four main parts. The first part is developed by computer-aided surgical planning by image modalities.
The second part is based on Virtual Surgical Environment through virtual force feedback haptic device.
The third part is implemented the experimental surgery by integrating the prototype surgical manipulator
with the haptic device poses using inverse kinematics method. The fourth part based on monitoring the
robotic manipulator pose by using image guided navigation system to calculate the position error of the
surgical manipulator. Thus, this tele-robotic system is able to comprehend the sense of complete practice,
improve skills and gain experience of the surgeon during the surgery. Finally, the experimental outcomes
show in satisfactory boundaries.
This paper proposes an adaptation mechanism based on adaptation planning graph for servicebased
business processes. First, a three-layer representation model of service-based business process is
introduced. Second, control-flow patterns of tasks, goal, logic model of service-based business process
and adaptation planning graph are introduced to enforce reliability of composite web services at run-time.
Finally, a simulation example of adaptation in service-based business processes is given. Simulations
prove that this approach can efficiently guarantee the reliability of composite services at run-time.
Review on Islanding Detection Methods for Photovoltaic InverterNooria Sukmaningtyas
Solar power generation, which is regarded as an ideal environment-friendly manner for power
generation, is getting more and more attention. When photovoltaic inverter is connected to the grid, the
island effect is a special problem to confront. This paper briefly analyzes the island effects and makes a
summary of both domestic and external research progress concerning islanding detection methods; the
islanding detection methods can be divided into two classes: one is grid-side detection; the other is local
detection. The local detection is generally divided into passive methods and active methods. The theory of
advantages and disadvantages of those methods are briefly introduced in this paper. At the end of the
paper, to deal with the disadvantages of those methods that are mentioned, it proposes the research
direction for deeper study of islanding detection methods.
Stabilizing Planar Inverted Pendulum System Based on Fuzzy Nine-point ControllerNooria Sukmaningtyas
In order to stabilize planar inverted pendulum, after analyzing the physical characteristics of the
planar inverted pendulum system, a pendulum nine-point controller and a car nine-point controller for Xaxis
and Y-axis were designed respectively. Then a fuzzy coordinator was designed using the fuzzy
control theory based on the priority of those two controllers, and the priority level of the pendulum is higher
than the car. Thus, the control tasks of each controller in each axis were harmonized successfully. The
designed control strategy did not depend on mathematical model of the system; it depended on the control
experience of people or the control experts. The compared experiments showed that the control strategy
was easy and effective, what was’s more; it had a very good robust feature.
Gross Error Elimination Based on the Polynomial Least Square Method in Integr...Nooria Sukmaningtyas
The measurement data of parameter in the electrical equipment contains many noises in subway
integrated monitoring system. To eliminate the impact of gross error in the measurement data, a
polynomial least square curve fitting algorithm is used in this paper. Based on the Rajda criterion, the
algorithm gives the variance estimation of the noises, and then uses dynamic threshold to detect and
replace the measurement data with gross error by statistical estimation. Finally, a data processing
procedure has been presented to deal with the gross error. The practical application indicates that the
proposed algorithm can effectively eliminate the gross error in many types of measurement signals so as
to ensure the reliability of the monitoring system.
Design on the Time-domain Airborne Electromagnetic Weak Signal Data Acquisiti...Nooria Sukmaningtyas
According to principle of transient electromagnetic method as well as its signal characteristics,
this paper designed and implemented a time-domain airborne electromagnetic weak signal data
acquisition system. With the use of the floating-point amplification technology, the system amplifies the
weak transient electromagnetic signal dynamically. CPLD and DSP were used as the decoding control
circuit and the main controller for processing the sampled data, respectively. The transient electromagnetic
signal acquisition system, which was designed with a dynamic range up to 144dB and a sampling rate up
to 100 kHz, meets the requirements of the high sampling rate with high precision and it has been applied in
the time-domain fixed-wing airborne electromagnetic mineral exploration.
INS/GPS integrated navigation system is studied in this paper for the hypersonic UAV in order to
satisfy the precise guidance requirements of hypersonic UAV and in response to the defects while the
inertial navigation system (INS) and the global positioning system (GPS) are being applied separately. The
information of UAV including position, velocity and attitude can be obtained by using INS and GPS
respectively after generating a reference trajectory. The corresponding errors of two navigation systems
can be obtained through comparing the navigation information of the above two guidance systems.
Kalman filter is designed to estimate the navigation errors and then the navigation information of INS are
corrected. The non-equivalence relationship between the platform misalignment angle and attitude error
angle are considered so that the navigation accuracy is further improved. The Simulink simulation results
show that INS/GPS integrated navigation system can help to achieve higher accuracy and better antiinterference
ability than INS navigation system and this system can also satisfy the navigation accuracy
requirements of hypersonic UAV.
Research on Space Target Recognition Algorithm Based on Empirical Mode Decomp...Nooria Sukmaningtyas
The space target recognition algorithm, which is based on the time series of radar cross section
(RCS), is proposed in this paper to solve the problems of space target recognition in the active radar
system. In the algorithm, EMD method is applied for the first time to extract the eigen of RCS time series.
The normalized instantaneous frequencies of high-frequency intrinsic mode functions obtained by EMD are
used as the eigen values for the recognition, and an effective target recognition criterion is established.
The effectiveness and the stability of the algorithm are verified by both simulation data and real data. In
addition, the algorithm could reduce the estimation bias of RCS caused by inaccurate evaluation, and it is
of great significance in promoting the target recognition ability of narrow-band radar in practice.
With the expanding of database of the watch list of anti-money laundering, improving the speed in
matching between the watch list and the database of account holders and clients’ transaction is especially
important. This paper proposes an improved AC-BM Algorithm, a matching algorithm of subsection, to
improve the speed of matching. Experiment results show the time performance of the improved algorithm
is better than traditional BM algorithm, AC algorithm and the AC-BM algorithm. It can improve the
efficiency of on-line monitoring of anti-money laundering.
Event Management System Vb Net Project Report.pdfKamal Acharya
In present era, the scopes of information technology growing with a very fast .We do not see any are untouched from this industry. The scope of information technology has become wider includes: Business and industry. Household Business, Communication, Education, Entertainment, Science, Medicine, Engineering, Distance Learning, Weather Forecasting. Carrier Searching and so on.
My project named “Event Management System” is software that store and maintained all events coordinated in college. It also helpful to print related reports. My project will help to record the events coordinated by faculties with their Name, Event subject, date & details in an efficient & effective ways.
In my system we have to make a system by which a user can record all events coordinated by a particular faculty. In our proposed system some more featured are added which differs it from the existing system such as security.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
TECHNICAL TRAINING MANUAL GENERAL FAMILIARIZATION COURSEDuvanRamosGarzon1
AIRCRAFT GENERAL
The Single Aisle is the most advanced family aircraft in service today, with fly-by-wire flight controls.
The A318, A319, A320 and A321 are twin-engine subsonic medium range aircraft.
The family offers a choice of engines
Vaccine management system project report documentation..pdfKamal Acharya
The Division of Vaccine and Immunization is facing increasing difficulty monitoring vaccines and other commodities distribution once they have been distributed from the national stores. With the introduction of new vaccines, more challenges have been anticipated with this additions posing serious threat to the already over strained vaccine supply chain system in Kenya.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Quality defects in TMT Bars, Possible causes and Potential Solutions.PrashantGoswami42
Maintaining high-quality standards in the production of TMT bars is crucial for ensuring structural integrity in construction. Addressing common defects through careful monitoring, standardized processes, and advanced technology can significantly improve the quality of TMT bars. Continuous training and adherence to quality control measures will also play a pivotal role in minimizing these defects.
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfKamal Acharya
The College Bus Management system is completely developed by Visual Basic .NET Version. The application is connect with most secured database language MS SQL Server. The application is develop by using best combination of front-end and back-end languages. The application is totally design like flat user interface. This flat user interface is more attractive user interface in 2017. The application is gives more important to the system functionality. The application is to manage the student’s details, driver’s details, bus details, bus route details, bus fees details and more. The application has only one unit for admin. The admin can manage the entire application. The admin can login into the application by using username and password of the admin. The application is develop for big and small colleges. It is more user friendly for non-computer person. Even they can easily learn how to manage the application within hours. The application is more secure by the admin. The system will give an effective output for the VB.Net and SQL Server given as input to the system. The compiled java program given as input to the system, after scanning the program will generate different reports. The application generates the report for users. The admin can view and download the report of the data. The application deliver the excel format reports. Because, excel formatted reports is very easy to understand the income and expense of the college bus. This application is mainly develop for windows operating system users. In 2017, 73% of people enterprises are using windows operating system. So the application will easily install for all the windows operating system users. The application-developed size is very low. The application consumes very low space in disk. Therefore, the user can allocate very minimum local disk space for this application.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
ISSN: 2302-4046
TELKOMNIKA Vol. 12, No. 1, January 2014: 275 – 279
Embedded GIS data not only shares the GIS characteristic of large amounts of complex data, but also carries its own requirements that storage space be small and retrieval computation simple and fast. This leads to two major contradictions that must be resolved in the development of embedded GIS data:
(1) The contradiction between large data volume and small storage space.
Compared with traditional relational database records, complex spatial data is larger in volume and more unevenly distributed. In an embedded system, however, storage space is precious, which demands more effective structures and methods so that GIS data can be stored in less space and used within a smaller memory footprint.
(2) The contradiction between embedded computing speed and complex spatial retrieval demands.
GIS database queries mainly concern spatial position rather than attribute data directly, which makes a GIS database much more complicated than a traditional database. Embedded systems, however, require shorter and faster processing, and the embedded processor itself is slow, so a mechanism more flexible than that of a desktop operating system must be adopted to meet embedded requirements. The contradictions above are the defining characteristics of embedded GIS data [2].
To resolve these two contradictions, two aspects can be considered:
(1) Using appropriate data compression technology. The storage space of embedded devices is far smaller than that of desktop computers. Although data can be stored in blocks or downloaded dynamically over the network, efficient compression of GIS data remains the most effective and economical method.
(2) Reasonable spatial indexing and real-time response. The low processing speed and limited memory of the embedded processor make it impossible to load large amounts of data into memory at once. To respond to data operation requests in real time, appropriate data indexing and organization methods must be adopted to partition the data and reduce the amount read at one time; the CPU's computational load is then reduced and electric power is saved.
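The partitioning idea in point (2) can be sketched as a simple grid blocking scheme: features are grouped into fixed-size blocks, and a window query reads only the blocks it intersects. This is an illustrative Python sketch; the block size, feature layout, and function names are assumptions, not the paper's implementation.

```python
import math

BLOCK_SIZE = 100.0  # map units per block edge (illustrative value)

def block_key(x, y):
    """Map a coordinate to the grid block that contains it."""
    return (math.floor(x / BLOCK_SIZE), math.floor(y / BLOCK_SIZE))

def build_blocks(features):
    """Group point features into blocks so a query touches only a few blocks."""
    blocks = {}
    for feat in features:
        blocks.setdefault(block_key(feat["x"], feat["y"]), []).append(feat)
    return blocks

def query_window(blocks, xmin, ymin, xmax, ymax):
    """Read only the blocks that intersect the query window."""
    bx0, by0 = block_key(xmin, ymin)
    bx1, by1 = block_key(xmax, ymax)
    hits = []
    for bx in range(bx0, bx1 + 1):
        for by in range(by0, by1 + 1):
            for feat in blocks.get((bx, by), []):
                if xmin <= feat["x"] <= xmax and ymin <= feat["y"] <= ymax:
                    hits.append(feat["id"])
    return hits

# Synthetic point features spread over a 500 x 500 map.
features = [{"id": i, "x": (i * 37) % 500, "y": (i * 91) % 500} for i in range(200)]
blocks = build_blocks(features)
print(sorted(query_window(blocks, 0, 0, 99, 99)))
```

Only blocks overlapping the window are scanned, so the amount of data read (and the CPU work) scales with the query window rather than the whole map.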
3. Analysis of the Data Management Strategy
3.1. Data Management Strategy for Embedded GIS Data
For embedded GIS, the limitations of CPU speed, battery power, internal memory capacity, external memory capacity, and screen size mean that PC-based data management cannot simply be copied into embedded GIS. Generally, map data is handled by combining a region quad-tree index storage structure with a spatial data organization based on blocking and layering. After the map is divided into blocks, the geographic features in each block are classified by importance, and the classes are organized in a complementary, non-redundant storage structure, so that data within a block can be read on demand by class. Geographic features of the same class are stored in layers and displayed under layer control. For embedded GIS platforms based on Windows Mobile, data storage uses CEDB or a combination of EDB with plain files [3].
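The region quad-tree index mentioned above can be illustrated with a minimal sketch: each node covers a square region and splits into four quadrants when it holds too many features, so a window query visits only overlapping quadrants. The capacity, splitting rule, and class names here are illustrative assumptions, not the paper's actual storage structure.

```python
class QuadTree:
    """Minimal region quad-tree over a square region [x, x+size) x [y, y+size)."""

    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size  # lower-left corner and edge length
        self.capacity = capacity
        self.points = []
        self.children = None                    # four sub-quadrants after a split

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and self.y <= py < self.y + self.size):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        # Subdivide into four equal quadrants and redistribute stored points.
        h = self.size / 2
        self.children = [QuadTree(self.x + dx, self.y + dy, h, self.capacity)
                         for dx in (0, h) for dy in (0, h)]
        for p in self.points:
            for c in self.children:
                if c.insert(*p):
                    break
        self.points = []

    def query(self, xmin, ymin, xmax, ymax):
        """Return points inside the window, visiting only overlapping quadrants."""
        if (xmax < self.x or xmin >= self.x + self.size or
                ymax < self.y or ymin >= self.y + self.size):
            return []
        hits = [p for p in self.points if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax]
        if self.children:
            for c in self.children:
                hits += c.query(xmin, ymin, xmax, ymax)
        return hits

qt = QuadTree(0, 0, 100, capacity=2)
for px, py in [(10, 10), (20, 80), (70, 30), (75, 35), (90, 90)]:
    qt.insert(px, py)
print(sorted(qt.query(60, 20, 100, 50)))  # -> [(70, 30), (75, 35)]
```

Because quadrants that do not overlap the window are pruned immediately, retrieval cost grows with the area of interest rather than with the whole dataset, which is what makes this index attractive on a slow embedded processor.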
3.2. Data Management Optimization for Embedded GIS Platform
3.2.1. Shortage of Embedded GIS Platform for Data Management
The method combining a region quad-tree index storage structure with blocked and layered spatial data organization solves the contradiction between large data capacity and limited storage space to a certain extent, but with its expanding range of applications and increasing business demands, it also shows certain limitations [4].
1. Data Index and Organization:
(1) The block-and-classification method with layer control sends as little data to memory as possible, but the data within a layer is still loaded into memory in its entirety. Although embedded GIS applications involve data modification operations, they rarely operate on all features in a layer. For layers that are unchanged or only slightly modified, there is no need to load all of their data into memory at once.
(2) Inconvenient data exchange with large GIS software. Because platforms use different data indexing and organization methods, data must first be converted into the structure of the target platform before it can be exchanged; this hinders the sharing
Research of Embedded GIS Data Management Strategies for Large Capacity (Shi Bei-lei)
and exchanging of data between platforms to some extent and makes the conversion process cumbersome.
2. Data Storage: Using CEDB as the database, we improved the efficiency of
information query which is irrespective to spatial data. But CEDB as the database in practical
application also reflected some shortcomings:
(1) The CEDB database cannot be operated on from the PC side. The attribute data
are usually written to a file; on its first run, the program reads the attribute data from this file to
build the database, so the first run is typically suspended during the transient database-building
process [5]. When the attribute data are too large (more than 5 MB) or a database table has
too many fields (more than 12), database construction may fail and the stable operation of the
program is affected.
(2) Cumbersome query and modification operations lead to low efficiency. The CEDB
database supports queries on only a single field, and its SQL support is limited to a very
simple subset.
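The contrast can be illustrated with a small sketch. The snippet below uses Python's built-in sqlite3 purely as a stand-in engine (the table name, fields, and sample rows are invented for illustration) to show the kind of compound-condition query that a single-field-only engine such as CEDB cannot express directly:

```python
import sqlite3

# In-memory stand-in database; table and field names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE road (id INTEGER, name TEXT, grade TEXT)")
conn.executemany("INSERT INTO road VALUES (?, ?, ?)",
                 [(1, "luoshilu", "arterial road"),
                  (2, "minzhulu", "branch road"),
                  (3, "luoshilu", "branch road")])

# A compound-condition query: an engine that filters on only one field at a
# time would need two passes and a manual intersection to answer this.
rows = conn.execute(
    "SELECT id FROM road WHERE grade = ? AND name = ?",
    ("arterial road", "luoshilu")).fetchall()
print(rows)  # [(1,)]
```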
3.2.2. Optimization of Embedded GIS Platform for Data Management
The limitations of the embedded GIS platform in data management are particularly
obvious when operating on large amounts of data, such as in city pipe-network applications.
Multi-level data indexing, dynamic data loading, and the SQL Mobile database can solve these
problems well [6]. The data management of the embedded GIS platform is optimized mainly in
the following aspects:
1. Using Multi-level Data Indexing and Dynamic Data Loading to Optimize the Data
Indexing and Organization Mode:
(1) Multi-level data indexing: The whole map is divided and stored with the
administrative region as the unit, and a map index is established to switch between
administrative regions. For larger administrative regions, a multi-level index can be established
for storage and switching. Within a single map, the data are organized by layers. By adding
the maximum and minimum display scales to each layer's header information, the platform
controls classification between layers, and a layer's data are loaded into memory only when it
is displayed at the current scale. Through multi-level data indexing, the data organization is
kept consistent with GIS software on the PC, which facilitates the exchange and sharing of
data with the PC.
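As a minimal sketch of this hierarchy (all district names, layer names, and scale values are invented for illustration), the multi-level index can be modeled as nested dictionaries, with each layer header carrying its minimum and maximum display scales so that only layers visible at the current scale are selected for loading:

```python
# Hypothetical multi-level map index: district -> sub-district -> layers.
# Each layer header stores (min_scale, max_scale) for display control.
map_index = {
    "district 1": {
        "layers": {"roads": (1_000, 500_000), "buildings": (1_000, 50_000)},
    },
    "district 2": {
        "sub": {
            "district 2-1": {"layers": {"pipes": (1_000, 100_000)}},
            "district 2-2": {"layers": {"pipes": (1_000, 100_000)}},
        },
    },
}

def visible_layers(node, scale):
    """Walk the index and collect layers whose scale range covers `scale`."""
    names = [name for name, (lo, hi) in node.get("layers", {}).items()
             if lo <= scale <= hi]
    for child in node.get("sub", {}).values():
        names += visible_layers(child, scale)
    return names

print(visible_layers(map_index["district 1"], 200_000))  # ['roads']
```

Only the layers returned by such a lookup would be read into memory for display at the current scale.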
Figure 1. Multi-level index of map (the district index points to secondary indexes for larger
districts; each district contains layers of point, line, and area features)

(2) Dynamic data loading based on layers: Layer processing is applied to each single
map, and dynamic data loading is used during map loading. When a map is loaded, only the
map layer index table is read and kept permanently in memory. Only when data are queried or
modified are they read into memory for the corresponding operation, by looking up the index
number of the corresponding layer and feature in the map layer index table; the data are
released from memory after the operation. In this way, more memory space is saved.
TELKOMNIKA Vol. 12, No. 1, January 2014: 275 – 279
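The load-on-demand cycle described above can be sketched as follows (the feature store and index-table layout are invented for illustration; in the real platform they would be the map file and its layer index table). Only the index table object is resident; feature data are read per operation and released afterwards:

```python
# Hypothetical on-disk feature store, keyed by (layer, feature index number).
FEATURE_FILE = {("roads", 7): {"name": "luoshilu", "grade": "arterial road"}}

class LayerIndexTable:
    """Stand-in for the map layer index table, which alone stays in memory."""
    def __init__(self):
        self.loaded = {}          # transient cache, cleared after each operation

    def query(self, layer, feature_id):
        key = (layer, feature_id)
        self.loaded[key] = FEATURE_FILE[key]   # load the feature on demand
        result = dict(self.loaded[key])        # perform the operation on it
        self.release(key)                      # release memory afterwards
        return result

    def release(self, key):
        self.loaded.pop(key, None)

index = LayerIndexTable()
rec = index.query("roads", 7)
print(rec["grade"], len(index.loaded))  # arterial road 0
```

The point of the design is the last line: after the query completes, no feature data remain in memory, only the index table itself.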
2. Data Modification: Because dynamic data loading and layer processing are used for
each single map, modification of the data (both spatial and attribute data) is managed by
establishing a dynamic index data table.
(1) Modification of attribute data and its structure: Using the feature's unique number
(ID), the feature's attribute data index value is read from the layer index table in memory; the
feature's attribute structure is then read from file and a dynamic data index table is
established. When the modification is finished, the index table and its contents are released,
and the modified data are saved in the database.
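A minimal sketch of this modify-and-release cycle (the feature ID, field names, and the dictionary-based store are all illustrative assumptions, not the platform's actual file layout):

```python
# Hypothetical persistent attribute store, keyed by feature ID.
attribute_file = {101: {"name": "luoshilu", "manager": ""}}

def modify_attribute(feature_id, field, value):
    # 1. Build the dynamic index table for just this feature.
    dynamic_index = {feature_id: dict(attribute_file[feature_id])}
    # 2. Apply the modification to the in-memory copy.
    dynamic_index[feature_id][field] = value
    # 3. Save the modified data back, then release the index table.
    attribute_file[feature_id] = dynamic_index.pop(feature_id)
    assert not dynamic_index       # the dynamic table is released after saving
    return attribute_file[feature_id]

print(modify_attribute(101, "manager", "Li"))
# {'name': 'luoshilu', 'manager': 'Li'}
```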
(2) Modification of spatial data is similar to that of attribute data, but spatial data are
used more frequently than attribute data; to ensure real-time response, the MapGIS-EMS
platform is optimized accordingly. This optimization not only increased the data management
capability and efficiency of the MapGIS-EMS platform but also simplified the data exchange
process. Before optimization, data between embedded devices, or between an embedded
device and a PC, were exchanged through a conversion platform in file form. After
optimization, data between an embedded device and a PC are still exchanged in file form, but
embedded devices can exchange data with each other directly. Using multi-level data
indexing, dynamic data loading, and SQL Mobile storage of attribute data, the efficiency before
and after optimization is compared in Table 1, using a 1:5 million map of Wuhan City (2.2 MB
after conversion, 13 layers, 8747 features):
Table 1. Comparative Efficiency Before and After Optimization
Test method | Before optimization | After optimization
SQL Mobile is adopted to replace the original CEDB as the attribute database | A single database cannot be greater than 5 MB; a database table cannot have more than 12 fields | A single data table cannot be greater than 512 MB; the database cannot be greater than 4 GB; unlimited number of fields
With the SQL Mobile database and the same display method, the program is first run to draw the 1:1 map in real time with all layers on | 35~38 seconds | 18~20 seconds
The SQL Mobile database is adopted to query the map with the condition that the highway grade is "arterial road" and the name is "luoshilu" | 1~2 seconds; poor support for complex-condition queries | Less than 1 second; good support for complex-condition queries
The SQL Mobile database is adopted to add a new field "manager" and insert a default value into it in the communal facilities layer (1145 points) | About 2 seconds | Less than 1 second
4. Summary
The characteristics of embedded GIS data give its data management a particular
character, and the data management of currently used embedded GIS platforms has certain
limitations. This paper presents the use of data classification indexing, dynamic data loading,
and the SQL Mobile database to optimize the embedded GIS data management module; data
access efficiency and support for large-volume data are thereby significantly improved.
References
[1] Tang Minan, Wang Xiaoming, Yuan Shuang. Site Selection of Mechanical Parking System Based on
GIS with AFRARBMI. Indonesian Journal of Electrical Engineering. 2013; 11(7): 3935-3944.
[2] Fan Wen-you, Ren Nian-hai, Liu Qin. Research on Embedded Mobile Database and Its Key
Technology. Microcomputer Information. Beijing. 2008; 6: 49-51.
[3] Xie Zhong, Feng Ming, Ma Chang-jie. Index Strategies for Embedded-GIS Spatial Data Management.
Journal of China University of Geosciences. Wuhan. 2006; 9: 653-658.
[4] Ye Li-wei, Xie Zhong. Research and Implementation on Data Management Optimization of
Embedded GIS Platform. Microcomputer Information. Beijing. 2009; 25: 138-139.
[5] Ying Liu, Yantao Zhu, Yurong Li, Chao Ni. The Embedded Information Acquisition System of Forest
Resource. Indonesian Journal of Electrical Engineering. 2012; 10(7): 1843-1848.
[6] Yifeng Wu, Hongchao Wang. Application of GPRS and GIS in Boiler Remote Monitoring System.
TELKOMNIKA Indonesian Journal of Electrical Engineering. 2012; 10(8): 2159-2168.