The query optimizer is a significant element of today's relational database
management systems. It is responsible for translating a user-submitted query,
commonly written in a non-procedural language, into an efficient query evaluation plan that
can be executed against the database. This paper describes the architecture and steps of query
processing, along with the optimization of time and memory usage. The key goal of this paper is to explain the
basic query optimization process and its architecture.
Issues in Query Processing and Optimization - Editor IJMTER
The paper identifies the various issues in query processing and optimization that arise while
choosing the best database plan. It differs from preceding query optimization techniques, which use only a
single approach for identifying the best query plan by extracting data from the database. Our approach takes
into account the various phases of query processing and optimization, heuristic estimation techniques
and a cost function for identifying the best execution plan. A review of the various phases of query
processing, the goals of the optimizer, the rules for heuristic optimization and the cost components involved
is presented in this paper.
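To make the idea of cost-based plan choice concrete, here is a minimal sketch, with toy numbers rather than the paper's cost model: a classic heuristic rule (push selections below joins) shrinks a join input and hence the estimated cost of a nested-loop join, and the optimizer picks the cheaper plan.

```python
# Toy cost-based plan selection (illustrative only; not the paper's method).
# Nested-loop join cost is modeled as |R| * |S|.
orders, customers = 100_000, 1_000
selectivity = 0.05                      # fraction of orders surviving the filter

plans = {
    "join first, filter later": orders * customers,
    "push selection down":      int(orders * selectivity) * customers,
}
best = min(plans, key=plans.get)
print(f"chosen plan: {best} (estimated cost {plans[best]:,})")
```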
Software size estimation at early stages of project development holds great significance in meeting
the competitive demands of the software industry. Software size is one of the most
interesting internal attributes and has been used in several effort/cost models as a predictor
of the effort and cost needed to design and implement software. The whole world is moving
towards the object-oriented paradigm, so it is essential to use an accurate methodology for
measuring the size of object-oriented projects. The class point approach is used to quantify
classes, which are the logical building blocks of the object-oriented paradigm. In this paper, we
propose a class point based approach for software size estimation of On-Line Analytical
Processing (OLAP) systems. OLAP is an approach to swiftly answering decision support queries
based on a multidimensional view of data. Materialized views can significantly reduce the
execution time of decision support queries. We perform a case study based on the TPC-H
benchmark, which is representative of OLAP systems. We use a greedy approach
to determine a good set of views to be materialized. After finding the number of views, the class
point approach is used to estimate the size of the OLAP system. The results of our approach are
validated.
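A minimal sketch of the kind of benefit-driven greedy view selection referred to above, assuming a toy cost model; the view names, queries and costs are invented for illustration, not taken from the TPC-H case study.

```python
# Greedy view selection: repeatedly pick the view whose materialization
# most reduces total query cost (toy benefit model; all numbers invented).
base_cost = {"q1": 100, "q2": 80, "q3": 60}    # query cost with no views
speedup = {                                    # query cost if view is available
    "v_sales_by_month": {"q1": 20, "q2": 40},
    "v_region_totals":  {"q2": 30, "q3": 15},
}

def total_cost(views):
    return sum(
        min([base_cost[q]] + [speedup[v][q] for v in views if q in speedup[v]])
        for q in base_cost
    )

chosen = []
for _ in range(2):                             # materialize two views greedily
    best = max(speedup.keys() - set(chosen),
               key=lambda v: total_cost(chosen) - total_cost(chosen + [v]))
    chosen.append(best)
print(chosen, total_cost(chosen))
```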
Software cost estimation is a key open issue for the software industry, which
frequently suffers from cost overruns. The most popular technique for object-oriented
software cost estimation is the Use Case Points (UCP) method; however, it has two major
drawbacks: the uncertainty of the cost factors and the abrupt classification. To address
these two issues, we refine the use case complexity classification using fuzzy logic theory, which
mitigates the uncertainty of the cost factors and improves the accuracy of the classification.
Software estimation is a crucial task in software engineering. Software estimation
encompasses cost, effort, schedule, and size. The importance of software estimation becomes
critical in the early stages of the software life cycle when the details of software have not
been revealed yet. Several commercial and non-commercial tools exist to estimate software
in the early stages. Most software effort estimation methods require software size as one of
the important metric inputs and consequently, software size estimation in the early stages
becomes essential.
The proposed method uses fuzzy logic theory to improve the
accuracy of the use case points method by refining the use case classification.
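A minimal sketch of the fuzzification idea: instead of an abrupt jump between the standard UCP weights (5/10/15 around the usual breakpoints of 3 and 7 transactions), a use case belongs partially to adjacent classes. The triangular membership shapes below are an illustrative assumption, not the paper's exact functions.

```python
# Fuzzified use case weight instead of an abrupt 5 / 10 / 15 classification.
def fuzzy_weight(transactions):
    t = float(transactions)
    simple = max(0.0, min(1.0, (4 - t) / 2))      # fades out over t in [2, 4]
    complex_ = max(0.0, min(1.0, (t - 7) / 2))    # fades in over t in [7, 9]
    average = max(0.0, 1.0 - simple - complex_)
    return 5 * simple + 10 * average + 15 * complex_

for t in (2, 3, 5, 8, 10):
    print(t, fuzzy_weight(t))   # 5.0, 7.5, 10.0, 12.5, 15.0 -- smooth transitions
```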
A developer needs to evaluate software performance metrics such as power consumption at an early stage of the design phase to make a device or its software efficient, especially in real-time embedded systems. Constructing performance models and evaluation techniques for a given system requires significant effort. This paper presents a framework to bridge a functional modeling approach (such as FSM or UML) and an analytical (mathematical) modeling approach such as Hierarchical Performance Modeling (HPM), as a technique for finding the expected average power consumption at different layers of abstraction. A Hierarchical Generic FSM (HGFSM) is developed to estimate the expected average power. A case study illustrates how the framework is used to estimate the average power and energy.
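The core calculation behind an expected-average-power estimate is a probability-weighted sum over states. A minimal sketch, assuming steady-state occupancy probabilities and per-state power figures (all numbers invented; the HGFSM framework derives these quantities hierarchically):

```python
# Expected average power as a weighted sum over FSM states.
states = {              # state: (steady-state occupancy probability, power in mW)
    "idle":    (0.60, 1.0),
    "compute": (0.30, 45.0),
    "tx":      (0.10, 80.0),
}
avg_power_mw = sum(p * w for p, w in states.values())   # 22.1 mW
energy_mj = avg_power_mw * 10.0                         # over a 10 s run (mW*s = mJ)
print(avg_power_mw, energy_mj)
```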
HW/SW Partitioning Approach on Reconfigurable Multimedia System on Chip - CSCJournals
Due to the complexity and high performance requirements of multimedia applications, the design of embedded systems is subject to different types of design constraints, such as execution time, time to market and energy consumption. Several joint software/hardware design (co-design) approaches have been proposed to help the designer find a match between application and architecture that satisfies the different design constraints. This paper presents a new methodology for hardware/software partitioning on a reconfigurable multimedia system on chip, based on a dynamic step and a static step: the first uses dynamic profiling and the second uses the Design Trotter tools. The validation of our approach is carried out through 3D image synthesis.
A model for run time software architecture adaptation - ijseajournal
Since the global demand for software systems is increasing and environments and systems are
constantly changing, the adaptability of software systems is of significant importance. Because the
architecture of a software system is a high-level view of the system and makes modifiability possible at an
overall level, changing the architecture configuration can be considered an effective approach to adapting
software systems. In this study, the architecture configuration is modified through xADL,
a software architecture description language with high flexibility. Software
architecture reconfiguration is done based on the rules of a rule-based system, which are written with
respect to three strategies: load balancing, fixed bandwidth and fixed latency. The proposed model
is simulated on samples of a client-server system, a video conferencing system and a students'
grading system. The proposed model can be used with all types of architecture, including Client-Server
Architecture and Service-Oriented Architecture.
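A toy sketch of rule-based reconfiguration in the spirit of the three named strategies; the metric names, thresholds and actions are invented placeholders, not the study's rule base.

```python
# Rule-based architecture reconfiguration: each rule pairs a condition on
# runtime metrics with an architectural change (all values hypothetical).
rules = [
    (lambda m: m["load"] > 0.8,       "add server replica (load balancing)"),
    (lambda m: m["bandwidth"] < 10.0, "degrade video quality (fixed bandwidth)"),
    (lambda m: m["latency"] > 200.0,  "reroute to closer node (fixed latency)"),
]

def reconfigure(metrics):
    """Return the architectural changes whose rule conditions fire."""
    return [action for condition, action in rules if condition(metrics)]

print(reconfigure({"load": 0.9, "bandwidth": 25.0, "latency": 250.0}))
```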
Dynamically Adapting Software Components for the Grid - Editor IJCATR
The emergence of dynamic execution environments such as grids forces scientific applications to embrace dynamicity. Dynamic
adaptation of grid components in grid computing is a critical issue in the design of a framework for dynamic adaptation towards
self-adaptable software components for the grid. This paper presents the systematic design of a dynamic adaptation
framework with an effective implementation of the structure of an adaptable component, i.e., incorporating a layered architecture
environment with the concept of dynamicity.
Harnessing deep learning algorithms to predict software refactoring - TELKOMNIKA JOURNAL
During software maintenance, software systems need to be modified by adding or modifying source code. These changes are required to fix errors or adopt new requirements raised by stakeholders or the marketplace. Identifying the targeted piece of code for refactoring purposes is a real challenge for software developers, and the whole process of refactoring relies mainly on developers' skills and intuition. In this paper, a deep learning algorithm is used to develop a refactoring prediction model that highlights the classes requiring refactoring. More specifically, the gated recurrent unit algorithm is used with proposed pre-processing steps for refactoring prediction at the class level. The effectiveness of the proposed model is evaluated using a very common dataset of 7 open source Java projects. The experiments are conducted before and after balancing the dataset to investigate the influence of data sampling on the performance of the prediction model. The experimental analysis reveals a promising result in the field of code refactoring prediction.
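A minimal sketch of a GRU-based binary classifier for "class needs refactoring", assuming each class is represented as a sequence of numeric metrics; the architecture, shapes and hyperparameters are illustrative assumptions, not the paper's exact model.

```python
import tensorflow as tf

# GRU over a sequence of 20 steps, each carrying 8 class-level metrics.
model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, input_shape=(20, 8)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(needs refactoring)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, epochs=10)  # optionally with class weights or
#                                         # resampling to offset imbalance
```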
Self-adaptive Software Modeling Based on Contextual Requirements - TELKOMNIKA JOURNAL
The ability of self-adaptive software to respond to change is determined by contextual requirements, i.e., requirements that capture the relevant context attributes and model behavior for system adaptation. However, in most cases, modeling for self-adaptive software does not take into consideration requirements evolution based on contextual requirements. This paper introduces an approach using requirements modeling languages directed at adaptation patterns to support requirements evolution. The model is prepared through a contextual requirements approach that is integrated into MAPE-K (monitor, analyze, plan, execute - knowledge) patterns in goal-oriented requirements engineering. As an evaluation, the adaptation process is modeled for a cleaner robot. The experimental results show that the requirements modeling process is able to give software self-adaptive capability and accommodate requirements evolution.
Download Complete Material - https://www.instamojo.com/prashanth_ns/
This UML (Unified Modeling Language) material contains 6 units, and each unit contains 35 slides.
Contents…
• Object-oriented modeling
• Origin and evolution of UML
• Architecture of UML
• User View
o Actor
o Use Cases
• Identify the behavior of a class
• Identify the attributes of a class
• Create a Class diagram
• Create an Object diagram
• Identify the dynamic and static aspects of a system
• Draw collaboration diagrams
• Draw sequence diagrams
• Draw statechart diagrams
• Understand activity diagrams
• Identify software components of a system
• Draw component diagrams
• Identify nodes in a system
• Draw deployment diagrams
Task scheduling methodologies for high speed computing systems - ijesajournal
High speed computing meets ever increasing real-time computational demands by leveraging
flexibility and parallelism. Flexibility is achieved when the computing platform is designed with
heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings
parallel processing. Efficient task scheduling is critical for obtaining optimized performance in
heterogeneous computing systems (HCS). In this paper, we review various application
scheduling models that provide parallelism for homogeneous and heterogeneous computing systems,
survey scheduling methodologies targeted at high speed computing systems, and prepare a summary
chart. The comparative study of scheduling methodologies for high speed computing systems is carried
out based on the attributes of both platform and application: execution time, nature of task, task handling
capability, and type of host and computing platform. Finally, the summary chart demonstrates the need to
develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an
emerging high speed computing platform for real-time applications.
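For a flavor of the simplest family of surveyed methods, here is a toy list-scheduling sketch for a heterogeneous platform: each task goes to the processor that finishes it earliest. The task set and per-processor times are invented; real methodologies (e.g., for dependent task graphs) are far more elaborate.

```python
# Earliest-finish-time list scheduling on two heterogeneous processors.
exec_time = {                 # task -> execution time on each processor
    "t1": [3, 5], "t2": [4, 2], "t3": [6, 6], "t4": [2, 7],
}
ready = [0.0, 0.0]            # next free time of each processor
for task, times in exec_time.items():
    p = min(range(len(ready)), key=lambda i: ready[i] + times[i])
    ready[p] += times[p]
    print(f"{task} -> P{p}, finishes at {ready[p]}")
```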
JPL: IMPLEMENTATION OF A PROLOG SYSTEM SUPPORTING INCREMENTAL TABULATION - csandit
The incremental evaluation of tabled Prolog programs makes it possible to maintain the correctness and completeness of tabled answers as the dynamic state changes. This paper presents the implementation details of JPL, an approach to supporting incremental tabulation for logic programs under non-monotonic logic. The main idea is to cache the proof generated by the deductive inference engine rather than the end results. In order to maintain the proof efficiently as it is updated, the proof structure is converted into a justification-based truth-maintenance (JTMS) network.
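A loose Python caricature of the caching idea (not the JPL implementation): store each derived answer together with the base facts that justify it, so a fact update retracts only the dependent entries instead of clearing the whole table.

```python
# Cached answers annotated with their justifications (hypothetical goals/facts).
table = {
    "ancestor(a,c)": (True, {"parent(a,b)", "parent(b,c)"}),
    "ancestor(a,b)": (True, {"parent(a,b)"}),
}

def invalidate(changed_fact):
    for goal in [g for g, (_, deps) in table.items() if changed_fact in deps]:
        del table[goal]       # will be re-derived lazily on the next query

invalidate("parent(b,c)")
print(sorted(table))          # only 'ancestor(a,b)' survives the update
```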
Statistical Model to Validate A Metaprocess-Oriented Methodology based on RAS... - IJMERJOURNAL
ABSTRACT: Software reuse in the early stages is a key issue in the rapid development of applications. This article introduces a metaprocess-oriented methodology based on the reuse of models as software assets, starting from the domain specification and analysis phases. The approach includes the definition of a conceptual level to adequately represent the domain and a reuse process to specify the metaprocess as software assets. The methodology has been applied successfully in the field of e-health, but our work also describes advances in the reuse of models for implementation in other contexts, contributing to improved productivity in software development.
In this paper, the consistency of data replication protocols is reviewed. A brief
discussion of consistency models in data replication is given, and propagation
techniques such as eager and lazy propagation are examined. Differences among replication protocols
from the consistency point of view are studied, along with the advantages and disadvantages of each
protocol. We delve into the essential technical details and make comparisons in order to determine their
respective contributions as well as their restrictions. Finally, some research strategies from the literature
on replication and consistency techniques are reviewed.
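A toy contrast of the two propagation styles discussed above, with in-memory dictionaries standing in for replicas; real protocols additionally handle failures, ordering and conflicts.

```python
# Eager vs. lazy update propagation across three replicas.
replicas = [{}, {}, {}]
lazy_queue = []

def eager_write(key, val):        # synchronous: all replicas before commit
    for r in replicas:
        r[key] = val

def lazy_write(key, val):         # update the primary, propagate afterwards
    replicas[0][key] = val
    lazy_queue.append((key, val))

def propagate():                  # background refresh of the secondaries
    while lazy_queue:
        key, val = lazy_queue.pop(0)
        for r in replicas[1:]:
            r[key] = val

eager_write("x", 1)
lazy_write("y", 2)
print(replicas)   # "y" is visible only on the primary until propagate() runs
propagate()
print(replicas)
```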
Comparative Analysis of Various Grid Based Scheduling Algorithms - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publication.
Presenting an Algorithm for Tasks Scheduling in Grid Environment along with I... - Editor IJCATR
Nowadays, people face huge volumes of data. With the expansion of computer technology and detectors, terabytes of data are
produced. In response to this demand, grid computing is considered one of the most important research fields. Grid technology
and concepts were introduced to provide resource sharing between scientific units, the purpose being to use the resources of a grid
environment to solve complex problems.
In this paper, a new algorithm based on the Mamdani fuzzy system is proposed for task scheduling in a computing grid. The Mamdani
fuzzy approach measures criteria by using membership functions; here, the considered criterion is response time. The results of the
proposed algorithm implemented on grid systems indicate the superiority of the proposed method in terms of validation
criteria for scheduling algorithms, such as task completion time, and efficiency increases considerably.
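A minimal Mamdani-style sketch (min implication, max aggregation, centroid defuzzification) mapping a task's estimated response time to a scheduling priority. The membership functions and the two rules are invented for illustration, not the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over array x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

priority = np.linspace(0, 1, 101)               # output universe of discourse
resp = 120.0                                    # crisp input: response time (ms)
fast = tri(np.array([resp]), -1, 0, 200)[0]     # degree "response is fast"
slow = tri(np.array([resp]), 100, 300, 501)[0]  # degree "response is slow"

# Rule 1: fast response -> high priority; Rule 2: slow response -> low priority.
high = np.minimum(fast, tri(priority, 0.5, 1.0, 1.5))
low = np.minimum(slow, tri(priority, -0.5, 0.0, 0.5))
agg = np.maximum(high, low)                     # Mamdani max aggregation
crisp = (priority * agg).sum() / agg.sum()      # centroid defuzzification
print(f"priority = {crisp:.2f}")
```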
A review of databases, database management and challenges as of 2016, partly based on the database review research paper by Abadi et al., 2016, with a link to my other presentation on databases and database management (as of 2015).
Query processing and Query Optimization - Niraj Gandha
This presentation was made with much effort, and I believe it will prove to be a good presentation for clarifying the basics of query processing and optimization within the DBMS subject. The topics covered are the fundamentals of the topic, as suggested.
Delivering IT as A Utility - A Systematic Review - ijfcstjournal
Utility computing has facilitated the creation of new markets that have made it possible to realize the
long-held dream of delivering IT as a utility. Even though utility computing is in its nascent stage today, its
proponents envisage that it will become a commodity business in the coming years and that utility service
providers will meet all the IT requests of companies. This paper takes a cross-sectional view of the
emergence of utility computing along with the different requirements needed to realize the utility model.
It also surveys current trends in utility computing, highlighting diverse architecture models aligned
towards delivering IT as a utility. Different resource management systems for proficient allocation of
resources are listed, together with the various resource scheduling and pricing strategies they use.
Further, generic key perspectives closely related to the concept of delivering IT as a utility are reviewed,
citing Grid and Cloud Computing as the contenders for future enhancements of this technology.
Size and Time Estimation in Goal Graph Using Use Case Points (UCP): A Survey - IJERA Editor
In order to achieve its ideal status and meet the demands of stakeholders, each organization should follow its vision and long-term plan. Goals and strategies are two fundamental bases of vision and mission. Goals identify the framework of the organization within which processes, rules and resources are designed. Goals are modelled as a graph structure by extracting, classifying and determining requirements and their relations. The goal graph shows the goals that should be satisfied to guarantee the right route for the organization. These goals can also be viewed as predefined sub-projects that the business management unit should consider and analyse. If we know the approximate size and time of each part, we can design better management plans, resulting in more success and fewer failures. This paper studies how the use case points method is used to calculate size and time in a goal graph.
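For orientation, here is a worked instance of the standard UCP arithmetic the survey builds on; all the counts and factor values below are invented.

```python
# Standard Use Case Points calculation with invented inputs.
UUCW = 2*5 + 3*10 + 1*15   # use cases: 2 simple, 3 average, 1 complex -> 55
UAW = 1*1 + 2*2 + 1*3      # actors: 1 simple, 2 average, 1 complex   -> 8
TCF, ECF = 0.95, 1.05      # technical and environmental complexity factors
UCP = (UUCW + UAW) * TCF * ECF
effort_hours = UCP * 20    # Karner's classic 20 person-hours per UCP
print(round(UCP, 1), round(effort_hours))   # 62.8 UCP, ~1257 hours
```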
A NEW DATA ENCODER AND DECODER SCHEME FOR NETWORK ON CHIP - Editor IJMTER
System-on-chip (SoC) based systems have several disadvantages in power dissipation as
well as clock rate when data is transferred from one subsystem to another on-chip. At the same
time, a higher-speed system does not support a lower-speed bus network for data transfer.
An alternative scheme has been proposed for high speed data transfer, but that scheme is limited to
SoCs. Unlike the SoC, the network-on-chip (NoC) has many advantages for data transfer. It has a
special feature for transferring data on-chip, named the transitional encoder, whose operation is
based on input transitions, and it supports systems operating at higher frequencies. In this
project, a low-power encoding scheme is proposed. The proposed system yields lower dynamic
power dissipation than the existing system due to the reduction of switching activity and coupling
switching activity. Although several factors contribute to power dissipation, only dynamic power
dissipation is considered, as it offers the most meaningful advantage. The proposed system is
synthesized using the Quartus II 9.1 software. The proposed system will also be extended to
inter-PE communication with the help of routers and PEs performing various operations. To
implement this system in real NoCs, the proposed encoders and decoders should be evaluated for
data transfer under regular traffic scenarios.
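The paper's transitional encoder is not detailed here; as a stand-in, this sketch shows classic bus-invert coding, a well-known transition-reducing scheme: invert the word when more than half of its bits would toggle relative to the previous word.

```python
# Bus-invert coding: reduce switching activity on an 8-bit link.
def bus_invert(prev, word, width=8):
    toggles = bin(prev ^ word).count("1")
    if toggles > width // 2:
        return (~word) & ((1 << width) - 1), 1   # inverted data + invert flag
    return word, 0

prev = 0b1010_1010
data, flag = bus_invert(prev, 0b0101_0101)       # all 8 bits would toggle
print(f"{data:08b} invert={flag}")               # sends 10101010, zero toggles
```

The matching decoder simply re-inverts the word whenever the flag bit is set.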
A RESEARCH - DEVELOP AN EFFICIENT ALGORITHM TO RECOGNIZE, SEPARATE AND COUNT ... - Editor IJMTER
Coins are an important part of our life. We use coins in places such as stores, banks, buses and
trains, so there is a basic need for coins to be sorted and counted automatically, which requires
that coins be recognized automatically. We present an automated coin recognition system for the
Indian coins of Rs. 1, 2, 5 and 10 with rotation invariance. We have taken images of both
sides of each coin, so the system is capable of recognizing coins from either side. Features are extracted
from the images using techniques such as the Hough transform and pattern averaging.
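A minimal sketch of the circle-detection step using OpenCV's Hough transform; the file name and parameter values are illustrative assumptions and would need tuning per image set.

```python
import cv2

# Detect circular coin candidates in a grayscale image (hypothetical file).
img = cv2.imread("coin.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)                     # suppress noise before Hough
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=20, maxRadius=120)
if circles is not None:
    for x, y, r in circles[0]:
        print(f"coin candidate at ({x:.0f}, {y:.0f}), radius {r:.0f}px")
```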
Analysis of VoIP Traffic in WiMAX Environment - Editor IJMTER
Worldwide Interoperability for Microwave Access (WiMAX) is currently one of the
hottest technologies in wireless communication. It is a standard based on the IEEE 802.16 wireless
technology that provides very high throughput broadband connections over long distances. In
parallel, Voice over Internet Protocol (VoIP) is a technology that provides access to voice
communication over the Internet Protocol, and it hence becomes an alternative to the public switched
telephone network (PSTN) due to its capability of transmitting voice as packets over IP
networks. A lot of research has been done on analyzing the performance of VoIP traffic over
WiMAX networks. In this paper we review the analyses carried out by several authors for the most
common VoIP codecs, G.711, G.723.1 and G.729, over a WiMAX network using various
service classes. The objective is to compare the results for different types of service classes with
respect to QoS parameters such as throughput, average delay and average jitter.
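As background for the throughput comparisons, here is the back-of-the-envelope per-call bandwidth for G.711 at a 20 ms packetization interval, using the standard RTP/UDP/IPv4 header sizes and excluding layer-2 overhead:

```python
# Per-direction IP bandwidth of a G.711 call at 20 ms packetization.
payload_bytes = 64_000 / 8 * 0.020     # 160 bytes of voice per packet
header_bytes = 12 + 8 + 20             # RTP + UDP + IPv4 = 40 bytes
packets_per_s = 1 / 0.020              # 50 packets per second
bw_kbps = (payload_bytes + header_bytes) * 8 * packets_per_s / 1000
print(bw_kbps)                         # 80.0 kbps per direction
```

The same arithmetic with G.729's 8 kbps payload (20 bytes per packet) gives 24 kbps, showing why per-packet header overhead dominates for low-bitrate codecs.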
A Hybrid Cloud Approach for Secure Authorized De-Duplication - Editor IJMTER
Cloud backup is used for people's personal storage, reducing maintenance effort and managing
structure and storage space. The challenging process is deduplication, in both local and global
backup deduplication. Prior work provides either local storage deduplication or global storage
deduplication for improving storage capacity and processing time. In this paper, the proposed
system is called ALG-Dedupe, an Application-aware Local-Global source Deduplication scheme
that provides efficient deduplication with low system load, a shortened backup window and
increased power efficiency in the user's personal storage. In the proposed system, large data is
partitioned into smaller parts called chunks. Any redundancy the data contains is eliminated
before it is stored in the storage area.
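A minimal sketch of the underlying source-side deduplication mechanism: split the backup stream into chunks, fingerprint each, and store only unseen chunks. This is a simplification; the actual ALG-Dedupe design is application-aware and combines local and global indices.

```python
import hashlib

seen = set()    # index of chunk fingerprints already stored

def backup(data: bytes, chunk_size: int = 4096) -> int:
    """Return how many new chunks this backup actually had to store."""
    stored = 0
    for i in range(0, len(data), chunk_size):
        fp = hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        if fp not in seen:
            seen.add(fp)     # in reality: upload the chunk, then index it
            stored += 1
    return stored

print(backup(b"hello world" * 2000))   # first backup stores its chunks
print(backup(b"hello world" * 2000))   # identical second backup stores none
```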
Aging protocols that could incapacitate the Internet - Editor IJMTER
The biggest threat to the Internet is the fact that it was never really designed; instead, it evolved
in fits and starts, thanks to various protocols that were cobbled together to fulfill the needs of the
moment. For example, the BGP protocol is used by Internet routers to exchange information about
changes to the Internet's network topologies, yet it is among the most fundamentally broken, as
Internet routing information can be poisoned with bogus routing information. Few of these
protocols were designed with security in mind, or, if they were, they sported no more than was
needed to keep out a nosy neighbor, not a malicious attacker. The result is a welter of aging
protocols susceptible to exploits on an Internet scale. Here are six Internet protocols that could
stand to be replaced sooner rather than later or are (mercifully) on the way out.
A Cloud Computing design with Wireless Sensor Networks For Agricultural Appli... - Editor IJMTER
The emergence of precision agriculture has been promoted by numerous developments in
the field of wireless sensor and actuator networks (WSANs). These WSANs offer important data
for gathering, work management, development of crops, and limitation of crop diseases. The goal of
this paper is to introduce cloud computing as a new technique to be used in addition
to WSANs to further enhance their application and benefits in the area of agriculture.
A CAR POOLING MODEL WITH CMGV AND CMGNV STOCHASTIC VEHICLE TRAVEL TIMES - Editor IJMTER
Carpooling (also car-sharing, ride-sharing or lift-sharing) is the sharing of car journeys so
that more than one person travels in a car. It helps to resolve a variety of problems that continue to
plague urban areas, ranging from energy demands and traffic congestion to environmental pollution.
Most existing methods handle stochastic disturbances arising from variations in vehicle travel
times for carpooling, but they do not deal with unmet demand when vehicle demand is uncertain.
To deal with this, the proposed system uses a chance-constrained programming (CCP) formulation
of the problem with stochastic demand and travel time parameters, under mild assumptions on the
distribution of the stochastic parameters, and relates it to a robust optimization approach. We thus
construct a stochastic carpooling model that considers the influence of stochastic travel times,
formulated as an integer multiple-commodity network flow problem. Since real problem sizes can
be large, it can be difficult to find optimal solutions within a reasonable period of time; therefore,
a solution algorithm using a tabu-search heuristic is developed to solve the model.
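A generic tabu-search skeleton of the kind used for such models; the objective and neighborhood below are toy placeholders, not the paper's carpooling formulation.

```python
# Tabu search: greedy local moves with a short memory of recent solutions
# to avoid cycling back to them.
def tabu_search(init, neighbors, cost, iters=100, tenure=7):
    best = cur = init
    tabu = []
    for _ in range(iters):
        candidates = [n for n in neighbors(cur) if n not in tabu]
        if not candidates:
            break
        cur = min(candidates, key=cost)    # best non-tabu move, even if worse
        tabu.append(cur)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost(cur) < cost(best):
            best = cur
    return best

# Toy usage: minimize (x - 3)^2 over the integers, starting at 10.
print(tabu_search(10, lambda x: [x - 1, x + 1], lambda x: (x - 3) ** 2))
```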
Sustainable Construction With Foam Concrete As A Green Building Material - Editor IJMTER
A green building is an environmentally sustainable building, designed, constructed and
operated to minimise its total environmental impact. Carbon dioxide (CO2) is the primary
greenhouse gas emitted through human activities, and it is claimed that 5% of the world's carbon
dioxide emissions are attributable to the cement industry, cement being the vital constituent of
concrete. Due to this significant contribution to environmental pollution, there is a need to find an
optimal solution that still satisfies civil construction needs. Compared with normal concrete bricks
and clay bricks, foam concrete is a new, innovative technology for sustainable building and civil
construction that fulfills the criteria of a green material. This paper concludes that foam concrete
can be an effective sustainable material for construction and also focuses on the cost effectiveness
of using foam concrete as a building material in place of clay brick or other bricks.
USE OF ICT IN EDUCATION ONLINE COMPUTER BASED TEST - Editor IJMTER
A good education system is required for the overall prosperity of a nation. Tremendous
growth in the education sector has made the administration of education institutions complex. Many
studies reveal that the integration of ICT helps to reduce this complexity and enhance the overall
administration of education. This study was undertaken to identify the various functional areas
in which ICT is deployed for information administration in education institutions and to find the
current extent of ICT usage in all the functional areas pertaining to information administration.
The various factors that contribute to these functional areas were identified, and a theoretical
model was derived and validated.
Textual Data Partitioning with Relationship and Discriminative Analysis - Editor IJMTER
Data partitioning methods are used to partition data values by similarity. Similarity
measures are used to estimate transaction relationships. Hierarchical clustering models produce
tree-structured results, while partitional clustering produces results in a grid format. Text documents
are unstructured data values with high dimensional attributes. Document clustering groups unlabeled
text documents into meaningful clusters. Traditional clustering methods require the cluster count (K)
for the document grouping process, and clustering accuracy degrades drastically when the cluster
count is unsuitable.
Textual data elements are divided into two types: discriminative words and non-discriminative
words. Only discriminative words are useful for grouping documents; the involvement of
non-discriminative words confuses the clustering process and leads to a poor clustering solution.
A variational inference algorithm is used to infer the document collection structure and the partition of
document words at the same time. A Dirichlet Process Mixture (DPM) model is used to partition
documents; it exploits both the data likelihood and the clustering property of the Dirichlet Process (DP).
The Dirichlet Process Mixture Model for Feature Partition (DPMFP) is used to discover the latent
cluster structure based on the DPM model, and DPMFP clustering is performed without requiring the
number of clusters as input.
Document labels are used to guide the discriminative word identification process. Concept
relationships are analyzed with Ontology support. A semantic weight model is used for document
similarity analysis. The system improves scalability with the support of labels and concept relations
for the dimensionality reduction process.
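A rough sketch of Dirichlet-process-style document clustering without a fixed K, using scikit-learn's truncated DP mixture as a stand-in for the DPMFP model described above (which additionally partitions words); the documents and dimensions are toy examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import BayesianGaussianMixture

docs = ["stock market falls", "team wins the match",
        "shares rally on earnings", "player scores twice",
        "bond yields climb"]
# TF-IDF features reduced to a dense low-dimensional representation.
X = TruncatedSVD(n_components=2).fit_transform(
        TfidfVectorizer().fit_transform(docs))
dpm = BayesianGaussianMixture(
        n_components=3,                     # upper bound, not a fixed K
        weight_concentration_prior_type="dirichlet_process")
print(dpm.fit_predict(X))   # effective cluster count emerges from the data
```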
Testing of Matrices Multiplication Methods on Different Processors - Editor IJMTER
There are many algorithms for matrix multiplication. The classical algorithm has complexity
O(n^3), though further research has shown that this complexity can be decreased. This paper
focuses on the algorithms and complexity of matrix multiplication methods.
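For reference, the classical O(n^3) algorithm the comparisons start from is the triple nested loop:

```python
# Classical matrix multiplication: three nested loops, O(n^3) scalar products.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):        # i-k-j loop order is cache-friendlier
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```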
Malware is a worldwide pandemic. It is designed to damage computer systems without
the knowledge of the system's owner. Even software from reputable vendors can contain
malicious code that affects the system or leaks information to remote servers. Malware includes
computer viruses, spyware, dishonest adware, rootkits, Trojans, dialers, etc. Malware detectors are
the primary tools of defense against malware, and the quality of such a detector is determined by the
techniques it uses. It is therefore imperative that we study malware detection techniques and
understand their strengths and limitations. This survey examines different types of malware and
malware detection methods.
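The simplest of the detection techniques such a survey covers is signature matching. A toy illustration (the signature database here is a placeholder, seeded with the SHA-256 of the bytes b"test" so the demo fires):

```python
import hashlib

# Hypothetical database of known-malware fingerprints.
signatures = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_malicious(blob: bytes) -> bool:
    """Flag a sample whose hash matches a known signature."""
    return hashlib.sha256(blob).hexdigest() in signatures

print(is_malicious(b"test"))    # True: matches the seeded fingerprint
```

Signature scanning is fast but trivially evaded by polymorphic code, which is why the survey also covers behavioral and heuristic detectors.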
SURVEY OF TRUST BASED BLUETOOTH AUTHENTICATION FOR MOBILE DEVICE - Editor IJMTER
Practical requirements for securely establishing identities between two handheld
devices are an important concern, since an adversary can mount a man-in-the-middle (MITM)
attack to intrude on the protocol. Protocols that employ secret keys require the devices to share
private information in advance, which is not feasible in this scenario. Apart from insecurely
typing passwords into handheld devices or comparing long hexadecimal keys displayed on the
devices' screens, many other human-verifiable protocols have been proposed in the literature to
solve the problem. Unfortunately, most of these schemes do not scale to more users: even when
only three entities attempt to agree on a session key, these protocols need to be rerun three times.
In the existing method, a bipartite and a tripartite authentication protocol are presented using a
temporary confidential channel, and the system is further extended into a transitive authentication
protocol that allows multiple handheld devices to establish a conference key securely and efficiently.
However, this method detects only outsider attacks and does not consider insider attacks. The
proposed method therefore introduces a trust-score based scheme that computes trust values for
the nodes and provides security. The computed trust score has a positive influence on the
confidence with which an entity conducts transactions with a node. In the network, the behavior
of each node is monitored periodically and its trust value is updated; depending on the behavior of
a node in the network, a trust relation is established between two nodes.
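The paper's exact scoring function is not given here; as one plausible shape of such an update, this sketch blends each periodic behavior observation into the score with an exponential moving average, then gates transactions on a threshold.

```python
# Hypothetical trust-score update over periodic behavior observations.
def update_trust(trust, behaved_well, alpha=0.2):
    """Blend the latest observation (1 = good, 0 = bad) into the score."""
    return (1 - alpha) * trust + alpha * (1.0 if behaved_well else 0.0)

trust = 0.5                                  # neutral prior for a new node
for obs in [True, True, False, True]:        # periodic monitoring results
    trust = update_trust(trust, obs)
print(f"trust = {trust:.3f}; transact only if trust > 0.6")
```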
GLAUCOMA is a chronic eye disease that can damage the optic nerve. According to the WHO, it
is the second leading cause of blindness and is predicted to affect around 80 million people by 2020.
Development of the disease leads to loss of vision, which occurs gradually over a long period of
time. Because symptoms only occur when the disease is quite advanced, glaucoma is called the
silent thief of sight. Glaucoma cannot be cured, but its development can be slowed down by
treatment; therefore, detecting glaucoma in time is critical. However, many glaucoma patients are
unaware of the disease until it has reached its advanced stage. In this paper, some manual and
automatic methods for detecting glaucoma are discussed. Manual analysis of the eye is time consuming,
and the accuracy of the parameter measurements varies between clinicians. To overcome
these problems with manual analysis, the objective of this survey is to introduce a method to
automatically analyze ultrasound images of the eye. Automatic analysis of this disease is much
more effective than manual analysis.
Survey: Multipath routing for Wireless Sensor Network - Editor IJMTER
Reliability plays a vital role in some applications of wireless sensor networks, and multipath
routing is one way to increase the probability of reliable delivery; moreover, energy
consumption is a constraint. In this paper, we provide a survey of the state of the art in proposed
multipath routing algorithms for wireless sensor networks. We study each design, analyze its
trade-offs, and give an overview of several representative algorithms.
Step up DC-DC Impedance source network based PMDC Motor Drive - Editor IJMTER
This paper is devoted to a quasi-Z-source network based DC drive. The cascaded
(two-stage) quasi-Z-source network can be derived by adding one diode, one inductor
and two capacitors to the traditional quasi-Z-source inverter. The proposed cascaded qZSI inherits all
the advantages of the traditional solution (voltage boost and buck functions in a single stage,
continuous input current, and improved reliability). Moreover, compared to the conventional qZSI,
the proposed solution reduces the shoot-through duty cycle by over 30% at the same voltage boost
factor. Theoretical analysis of the two-stage qZSI in the shoot-through and non-shoot-through
operating modes is described, and the proposed and traditional qZSI networks are compared. A
prototype of a quasi-Z-source network based DC drive was built to verify the theoretical
assumptions; the experimental results are presented and analyzed.
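For orientation, the standard boost factor of the traditional (single-stage) qZSI as a function of the shoot-through duty cycle D is given below; the cascaded network's expression differs (it is derived in the paper), which is what allows the same boost at a smaller D.

```latex
% Traditional qZSI boost factor (standard result), D = shoot-through duty cycle:
B = \frac{1}{1 - 2D}, \qquad \hat{v}_{dc} = B \, V_{in}
% A topology with a steeper B(D) curve reaches the same output voltage
% with a smaller shoot-through duty cycle D.
```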
SPIRITUAL PERSPECTIVE OF AUROBINDO GHOSH'S PHILOSOPHY IN TODAY'S EDUCATION - Editor IJMTER
The paper reflects on the spiritual philosophy of Aurobindo Ghosh, which is helpful in today's
education. He wrote about spirituality in the 19th century, and it remains a core and vital part
of today's education, very much essential for today's children. Here I present an overview of that
philosophy. The regeneration of those values in today's generation is a great challenge for the
education system. Developing values and spiritual education in the young is my great motive. In
this materialistic world, redefining values among the young is a hard task, but not an impossible one.
Software Quality Analysis Using Mutation Testing Scheme - Editor IJMTER
Software test coverage is used to measure safety assurance; here, safety-critical analysis is
carried out for source code written in the Java language. Testing provides a primary means of
assuring software in safety-critical systems. To demonstrate, particularly to a certification authority, that
sufficient testing has been performed, it is necessary to achieve the test coverage levels recommended or
mandated by safety standards and industry guidelines. Mutation testing provides an alternative or
complementary method of measuring test sufficiency, but it has not been widely adopted in the safety-critical industry. The system provides an empirical evaluation of the application of mutation testing to
airborne software systems that have already satisfied the coverage requirements for certification.
The system applies mutation testing to safety-critical software developed using high-integrity subsets of
C and Ada, identifies the most effective mutant types, and analyzes the root causes of failures in test cases.
Mutation testing can be effective where traditional structural coverage analysis and manual peer
review have failed. The results also show that several testing issues have origins beyond the test activity,
suggesting improvements to the requirements definition and coding process. The system also
examines the relationship between program characteristics and mutant survival and considers how
program size can provide a means of targeting the test areas most likely to contain dormant faults. Industry
feedback is also provided, particularly on how mutation testing can be integrated into a typical
verification life cycle of airborne software. The system also covers the safety and criticality levels of
Java source code.
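A toy illustration of the mutation-testing idea itself: flip an operator in the code under test and check whether the test suite "kills" the mutant. The function and tests are invented for the demonstration.

```python
# Operator-replacement mutation: a surviving mutant means the tests are weak.
original = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"
mutant = original.replace("min", "max", 1)       # min -> max mutant

def run_tests(src):
    ns = {}
    exec(src, ns)                                # define clamp from source
    clamp = ns["clamp"]
    return clamp(5, 0, 10) == 5 and clamp(15, 0, 10) == 10

print("original passes:", run_tests(original))   # True
print("mutant killed:  ", not run_tests(mutant)) # True -> tests detect the fault
```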
Software Defect Prediction Using Local and Global Analysis - Editor IJMTER
Software defect factors are used to measure the quality of software, and software effort
estimation is used to measure the effort required for the software development process. The defect
factor has an impact on software development effort, and software development cost factors are
likewise decided with reference to the defect and effort factors. Software defects are predicted with
reference to module information, and module link information is used in the effort estimation process.
Data mining techniques are used in the software analysis process: clustering techniques for
grouping properties, and rule mining methods for learning rules from clustered data values. The
WHERE clustering scheme and the WHICH rule mining scheme are used in the defect prediction
and effort estimation process, which draws on the module information.
The proposed system is designed to improve the defect prediction and effort estimation process.
The Single Objective Genetic Algorithm (SOGA) is used in the clustering process, and the rule learning
operations are carried out using the Apriori algorithm. The system improves cluster accuracy levels
as well as defect prediction and effort estimation accuracy. The system is
developed using the Java language and an Oracle relational database environment.
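To give a minimal flavor of the rule-mining step, here is a hand-rolled frequent-pair count over module properties, standing in for the Apriori algorithm; the module attributes are invented.

```python
from itertools import combinations
from collections import Counter

# Each module described as a set of property items (hypothetical attributes).
modules = [{"high_loc", "many_authors", "defective"},
           {"high_loc", "defective"},
           {"low_loc", "clean"}]

# Count co-occurring property pairs; frequent pairs seed association rules.
pairs = Counter(p for m in modules for p in combinations(sorted(m), 2))
print([p for p, c in pairs.items() if c >= 2])   # e.g. ('defective', 'high_loc')
```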
Software Cost Estimation Using Clustering and Ranking Scheme - Editor IJMTER
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the software cost values. A variety of
software properties are used in the cost estimation process, including hardware, product, technology
and methodology factors, and the quality of a software cost estimate is measured by its accuracy.
Software cost estimation is carried out using three types of techniques: regression-based models,
analogy-based models and machine learning models. Each model has a set of techniques for the
software cost estimation process; eleven cost estimation techniques under the three categories are
used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product
property values, and the ARFF file is the main input to the system.
The proposed system is designed to perform clustering and ranking of software cost
estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid
estimation mechanism. The system improves the accuracy of the clustering and ranking process and
produces efficient rankings of software cost estimation methods.
Hierarchical Digital Twin of a Naval Power System - Kerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Student information management system project report ii.pdf - Kamal Acharya
Our project explains student management. It mainly covers the various actions related to student details and makes adding, editing, and deleting those details easy. It also provides a less time-consuming process for viewing, adding, editing, and deleting the marks of students.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf - fxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Final project report on grocery store management system.pdf - Kamal Acharya
In today's fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, so that customers who wish to see your business online have instant access to your products or services.
Online Grocery Store is an e-commerce website that retails various grocery products. The project allows viewing the various products available and enables registered users to purchase desired products instantly using the Paytm and UPI payment processors (Instant Pay), or to place an order using the Cash on Delivery (Pay Later) option. It also provides easy access for Administrators and Managers to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, JavaScript), and MySQL relational databases. The objective of this project is to develop a basic shopping cart website for the consumer and to understand the technologies used to build such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.
An Analysis on Query Optimization in Distributed Database
International Journal of Modern Trends in Engineering and Research (IJMTER)
www.ijmter.com
e-ISSN: 2349-9745, p-ISSN: 2393-8161
An Analysis on Query Optimization in Distributed Database
Joshi Janki
R&D Department, Infitrix Software, Delhi
Abstract: The query optimizer is a significant element in today's relational database management systems. It is responsible for translating a user-submitted query, commonly written in a non-procedural language, into an efficient query evaluation program that can be executed against the database. This paper describes the architecture and steps of query processing and illustrates optimization time and memory usage. The key goal of this paper is to explain the basic query optimization process and its architecture.
Keywords – Query Optimization, Distributed Database System, Query Processing
I. INTRODUCTION
Query optimization is a function of most relational database management systems. Generally, the query optimizer cannot be accessed directly by users: once queries are submitted to the database server and parsed by the parser, they are passed to the query optimizer, where optimization takes place. Query results are generated by accessing the relevant database data and manipulating it in a way that yields the requested information.[1] Since database structures are complex, in most cases, and especially for non-trivial queries, the data needed for a query can be collected from a database by accessing it in different ways, through different data structures, and in different orders. The optimizer determines the lowest-cost plan for executing a query. By "lowest-cost plan," we mean an access path to the data that takes the least amount of time.[2]
Figure 1: Query Optimization Concept
The figure above is described as follows:
The Query Parser checks the validity of the query and then translates it into an internal form, usually a relational calculus expression or something equivalent. The Query Optimizer examines all algebraic expressions that are equivalent to the given query and chooses the one that is estimated to be the cheapest. The Code Generator or the Interpreter transforms the access plan generated by the optimizer into calls to the query processor. The Query Processor actually executes the query.[3]
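As a concrete illustration, most modern DBMSs expose the optimizer's chosen plan through an EXPLAIN statement. The following sketch assumes a PostgreSQL-style system and the hypothetical Orders and Customers tables used in the examples of Section III; the exact output format varies by DBMS.
-- Ask the optimizer for its chosen access plan without executing the query
EXPLAIN
Select O.ItemPrice, C.Name
From Orders O, Customers C
Where O.CustomerID = C.CustomerID;
-- A typical plan might report a hash join over sequential scans, or an
-- index-nested-loops join if a suitable index exists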
Queries are posed to a DBMS by interactive users or by programs written in general-purpose programming languages (e.g., Fortran, PL/1) that have queries embedded in them. An interactive (ad hoc) query goes through the entire path shown in Figure 1. An embedded query, on the other hand, goes through the three steps only once, when the program in which it is embedded is compiled. The code produced by the Code Generator is stored in the database and is simply invoked and executed by the Query Processor whenever control reaches that query during program execution (run time). Thus, regardless of the number of times an embedded query needs to be executed, optimization is not repeated until database updates make the access plan invalid (e.g., index deletion) or highly suboptimal (e.g., extensive changes in database contents).[5]
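The compile-once, execute-many behavior of embedded queries has a close analogue in prepared statements. A minimal sketch, assuming PostgreSQL-style PREPARE/EXECUTE syntax and the hypothetical Employees table of Section III:
-- The plan is produced once, at PREPARE time
PREPARE EmpSalary (int) AS
Select salary From Employees Where EmpID = $1;
-- ...and reused on every execution, with no re-optimization
EXECUTE EmpSalary(42);
EXECUTE EmpSalary(7);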
Figure 2: Query Optimizer Architecture [7]
The entire query optimization process can be seen as having two stages: rewriting and planning.
There is only one module in the first stage, the Rewriter, whereas all other modules are in the
second stage.
II. Functionality of Query Optimizer Architecture
Rewriter: This module applies transformations to a given query and produces equivalent queries that are hopefully more efficient, e.g., replacement of views with their definitions, flattening of nested queries, etc. The transformations performed by the Rewriter depend only on the declarative, i.e., static, characteristics of queries and do not take into account the actual query costs for the specific DBMS and database concerned. If the rewriting is known or assumed to always be beneficial, the original query is discarded; otherwise, it is sent to the next stage as well. By the nature of the rewriting transformations, this stage operates at the declarative level.[7]
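For illustration, the sketch below shows both kinds of rewriting on the hypothetical tables of Section III: a view is replaced by its definition, and a nested IN-subquery is flattened into a join. This is a simplification; a real rewriter would use a semijoin to preserve duplicate semantics in the general case.
-- A view over the Orders table
CREATE VIEW BigOrders AS
Select * From Orders Where ItemPrice > 100;
-- Query as submitted by the user
Select C.CustomerID, C.Name
From Customers C
Where C.CustomerID IN (Select CustomerID From BigOrders);
-- After rewriting: the view is expanded and the nested query is flattened
-- into a join (DISTINCT keeps one row per customer, assuming CustomerID
-- is a key of Customers)
Select DISTINCT C.CustomerID, C.Name
From Customers C, Orders O
Where C.CustomerID = O.CustomerID AND O.ItemPrice > 100;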
Planner: This is the main module of the planning stage. It examines all possible execution plans for each query produced in the previous stage and selects the overall cheapest one to be used to generate the answer to the original query. It employs a search strategy, which examines the space of execution plans in a particular fashion. This space is determined by two other modules of the optimizer, the Algebraic Space and the Method-Structure Space. For the most part, these two modules and the search strategy determine the cost, i.e., the running time, of the optimizer itself, which should be as low as possible. The execution plans examined by the Planner are compared based on estimates of their cost so that the cheapest may be chosen. These costs are derived by the last two modules of the optimizer, the Cost Model and the Size-Distribution Estimator.[7]
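To see why the search strategy matters, consider the standard textbook counts (not derived in this paper) of the plan space for a join of n relations: restricting the Planner to left-deep trees versus allowing arbitrary (bushy) trees gives
\[ \#\text{left-deep trees} = n!, \qquad \#\text{bushy trees} = \frac{(2n-2)!}{(n-1)!} \]
For n = 10 relations these are already about $3.6 \times 10^6$ and $1.8 \times 10^{10}$ respectively, before join methods have even been chosen.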
Method-Structure Space: This module determines the implementation choices that exist for the execution of each ordered series of actions specified by the Algebraic Space. This choice is related to the available join methods for each join (e.g., nested loops, merge scan, and hash join), whether supporting data structures are built on the fly, if and when duplicates are eliminated, and other implementation characteristics of this sort, which are predetermined by the DBMS implementation. The choice is also related to the available indices for accessing each relation, which is determined by the physical schema of each database stored in its catalogs. Given an algebraic formula or tree from the Algebraic Space, this module produces all corresponding complete execution plans, which specify the implementation of each algebraic operator and the use of any indices.[7]
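As a small concrete illustration (hypothetical tables and index name), physical design directly enlarges this space: adding an index on a join key makes index-based join plans available that the Planner could not otherwise consider.
-- Before: the join methods for Orders and Customers might be limited to
-- nested loops over full scans, merge scan, or hash join
CREATE INDEX IdxOrdersCustomerID ON Orders (CustomerID);
-- After: the Method-Structure Space also contains plans that probe
-- IdxOrdersCustomerID once per customer row
Select O.ItemPrice, C.Name
From Orders O, Customers C
Where O.CustomerID = C.CustomerID;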
Cost Model: This module specifies the arithmetic formulas that are used to estimate the cost of execution plans. For every different join method, for every different index-type access, and in general for every distinct kind of step that can be found in an execution plan, there is a formula that gives its cost. Given the complexity of many of these steps, most of these formulas are simple approximations of what the system actually does and are based on certain assumptions regarding issues like buffer management, disk-CPU overlap, sequential vs. random I/O, etc. The most important input parameters to a formula are the size of the buffer pool used by the corresponding step, the sizes of relations or indices accessed, and possibly various distributions
of values in these relations. While the first one is determined by the DBMS for each query, the
other two are estimated by the Size-Distribution Estimator.[7]
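As a representative example (a textbook-style approximation, not a formula taken from this paper), the cost of a block-nested-loops join of relations R and S, with $b_R$ and $b_S$ disk pages and $B$ buffer pages, is often estimated as
\[ \mathrm{cost}_{\mathrm{BNL}}(R \bowtie S) \;\approx\; b_R + \left\lceil \frac{b_R}{B-2} \right\rceil \cdot b_S \]
Here $b_R$ and $b_S$ are supplied by the Size-Distribution Estimator, while $B$ is fixed by the DBMS for the query, matching exactly the division of inputs described above.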
Size-Distribution Estimator: This module specifies how the sizes (and possibly frequency
distributions of attribute values) of database relations and indices as well as (sub)query results
are estimated. As mentioned above, these estimates are needed by the Cost Model. The specific
estimation approach adopted in this module also determines the form of statistics that need to be
maintained in the catalogs of each database, if any.[7]
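A common textbook estimate of this kind (again, not from this paper) assumes that attribute values are uniformly distributed: if $V(A,R)$ denotes the number of distinct values of attribute A in relation R, a statistic kept in the catalogs, then the result size of an equality selection is estimated as
\[ |\sigma_{A=c}(R)| \;\approx\; \frac{|R|}{V(A,R)} \]
More accurate estimators maintain histograms of the value distribution instead of a single distinct-value count.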
Algebraic Space: This module determines the action execution orders that are to be considered
by the Planner for each query sent to it. All such series of actions produce the same query
answer, but usually differ in performance. They are usually represented in relational algebra as
formulas or in tree form. Because of the algorithmic nature of the objects generated by this
module and sent to the Planner, the overall planning stage is characterized as operating at the
procedural level.[7]
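Typical members of the Algebraic Space are generated by standard equivalence rules, such as join commutativity and associativity and pushing selections below joins (the last assuming predicate p references only attributes of R):
\[ R \bowtie S \equiv S \bowtie R, \qquad (R \bowtie S) \bowtie T \equiv R \bowtie (S \bowtie T), \qquad \sigma_p(R \bowtie S) \equiv \sigma_p(R) \bowtie S \]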
III. Examples of Optimization Time and Memory
To find the item price and customer name from two tables, Orders and Customers:
Original:
Select O.ItemPrice, C.Name
From Orders O, Customers C
Corrected:
Select O.ItemPrice, C.Name
From Orders O, Customers C
Where O.CustomerID = C.CustomerID
In the first example we see two queries: the original and the corrected one. The original omits the join condition on the key, so the DBMS forms the Cartesian product of the two tables and returns one row for every pairing; for instance, 10,000 orders and 5,000 customers would yield 50,000,000 rows, and the query could take hours to produce a result.[4]
To find employees' salaries based on their IDs:
Original:
For i = 1 to 20000
Select salary From Employees Where EmpID = Parameter(i)
Corrected:
Select salary From Employees Where EmpID >= 1 and EmpID <= 20000
The original query issues 20,000 separate statements, which involves a lot of time and memory consumption and will slow down the entire system; the corrected version retrieves the same rows with a single range query.[8]
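A further hypothetical refinement, not part of the original example: if EmpID is indexed (as it typically is when it is the primary key), the DBMS can answer the corrected query with a single index range scan instead of 20,000 separate lookups.
-- Hypothetical supporting index (often implicit when EmpID is the primary key)
CREATE INDEX IdxEmployeesEmpID ON Employees (EmpID);
-- One statement, one range scan, one round trip
Select salary From Employees Where EmpID >= 1 and EmpID <= 20000;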
IV. CONCLUSION
This paper gives a brief overview of query optimization, its architecture, and the functionality of its modules. It also describes, step by step, how a query flows through these modules during execution. With the help of examples, it illustrates optimization time and memory usage for record extraction.
References
[1] M. M. Astrahan et al. System R: A relational approach to data management. ACM Transactions on Database Systems, 1(2):97–137, June 1976.
[2] G. Antoshenkov. Dynamic query optimization in Rdb/VMS. In Proc. IEEE Int. Conference on Data Engineering, pages 538–547, Vienna, Austria, March 1993.
[3] K. Bennett, M. C. Ferris, and Y. Ioannidis. A genetic algorithm for database query optimization. In Proc. 4th Int. Conference on Genetic Algorithms, pages 400–407, San Diego, CA, July 1991.
[4] P. A. Bernstein, N. Goodman, E. Wong, C. L. Reeve, and J. B. Rothnie. Query processing in a system for distributed databases (SDD-1). ACM TODS, 6(4):602–625, December 1981.
[5] R. Cole and G. Graefe. Optimization of dynamic query evaluation plans. In Proc. ACM-SIGMOD Conference on the Management of Data, pages 150–160, Minneapolis, MN, June 1994.
[6] S. Christodoulakis. Implications of certain assumptions in database performance evaluation. ACM TODS, 9(2):163–186, June 1984.
[7] S. Christodoulakis. On the estimation and use of selectivities in database performance evaluation. Research Report CS-89-24, Dept. of Computer Science, University of Waterloo, June 1989.
[8] http://www.serverwatch.com/tutorials