Despite all preventive measures, risk remains inherent in project development and in enterprise management, and no standard mechanism or methodology exists to assess the risks in a project or in production management; with precautionary steps alone, a manager can only avoid risk to a limited extent. To address this issue, this paper presents a probabilistic risk assessment model for the production of an enterprise. A Multi-Entity Bayesian Network (MEBN), which combines the expressivity of first-order logic with the probabilistic features of Bayesian networks, is used both to represent the requirements of production management and to assess the risks inherent in it. The Bayesian network represents probabilistic uncertainty and supports reasoning over a probabilistic knowledge base, and is used here to represent the probable risks behind each cause of risk. The proposed probabilistic model is discussed with the help of a case study, which predicts risks inherent in the production of an enterprise depending on measures such as labour availability, power backup and transport availability.
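The abstract does not specify how the individual cause probabilities are combined, so the following is only a minimal sketch of one common Bayesian-network combination rule (noisy-OR) for aggregating several causes into a single production-risk probability; the cause names and all numbers are hypothetical illustrations, not values from the paper.

```python
# Hypothetical prior probabilities of each cause occurring, and the
# conditional probability that production risk materialises given that cause.
p_cause = {"labour shortage": 0.2, "power failure": 0.1, "transport delay": 0.3}
p_risk_given = {"labour shortage": 0.7, "power failure": 0.9, "transport delay": 0.5}

def p_production_risk():
    """Noisy-OR combination: risk occurs unless every cause independently
    fails to occur or fails to trigger the risk."""
    p_no_risk = 1.0
    for cause, pc in p_cause.items():
        p_no_risk *= 1.0 - pc * p_risk_given[cause]
    return 1.0 - p_no_risk
```

With the illustrative numbers above, the combined risk probability is 1 - (1 - 0.14)(1 - 0.09)(1 - 0.15) = 0.33479.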
Attack graph based risk assessment and optimisation approach (IJNSA Journal)
Attack graphs are models that offer significant capabilities to analyse security in network systems. An attack graph allows the representation of vulnerabilities, exploits and conditions for each attack in a single unifying model. This paper proposes a methodology to explore the graph using a genetic algorithm (GA). Each attack path is considered as an independent attack scenario from the source of attack to the target. Many such paths form the individuals in the evolutionary GA solution. The population-based strategy of a GA provides a natural way of exploring a large number of possible attack paths to find the paths that are most important. Thus, unlike many other optimisation solutions, a range of solutions can be presented to a user of the methodology.
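The population-based search described above can be sketched as follows. This is not the paper's implementation: the toy attack graph, the edge weights (exploit success probabilities), and the selection scheme are all illustrative assumptions, and path regeneration stands in for the crossover/mutation operators a full GA would use.

```python
import random

# Hypothetical attack graph: edges weighted by exploit success probability.
GRAPH = {
    "src": [("web", 0.8), ("mail", 0.5)],
    "web": [("db", 0.6), ("target", 0.3)],
    "mail": [("target", 0.4)],
    "db": [("target", 0.9)],
}

def random_path(rng):
    """Random walk from the attack source to the target."""
    node, path = "src", ["src"]
    while node != "target":
        node, _ = rng.choice(GRAPH[node])
        path.append(node)
    return tuple(path)

def fitness(path):
    """Path risk: product of exploit success probabilities along the path."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= dict(GRAPH[a])[b]
    return p

def evolve(pop_size=20, generations=30, seed=1):
    """Keep the fitter half each generation (elitism) and refill with fresh
    random paths, a simple stand-in for crossover on path individuals."""
    rng = random.Random(seed)
    pop = [random_path(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: pop_size // 2] + [
            random_path(rng) for _ in range(pop_size - pop_size // 2)
        ]
    # Return distinct paths ranked by risk: a range of solutions, not one.
    return sorted(set(pop), key=fitness, reverse=True)
```

The ranked population it returns illustrates the point made in the abstract: the user sees several high-risk paths, not a single optimum.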
In the present paper, the applicability and capability of AI techniques for effort estimation prediction have been investigated. Neuro-fuzzy models are found to be very robust, characterised by fast computation and capable of handling distorted data; given the non-linearity of the data, they are an efficient quantitative tool for effort estimation. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment. The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are optimally determined using the hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed using the OHLANFIS technique performs better than the standard ANFIS model.
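The Gaussian membership functions mentioned above are standard in ANFIS premise layers; a minimal sketch of the membership degree and of a rule's firing strength (the product of its antecedent memberships) follows. The centre/width values are illustrative, not the parameters the paper's hybrid learning algorithm would find.

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership function: mu(x) = exp(-(x - c)^2 / (2 * sigma^2))."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def firing_strength(inputs, rule):
    """Premise firing strength of one fuzzy rule: the product of the
    membership degrees of each input, given (centre, width) per antecedent."""
    w = 1.0
    for x, (c, sigma) in zip(inputs, rule):
        w *= gaussian_mf(x, c, sigma)
    return w
```

Subtractive clustering would supply the initial (c, sigma) pairs; hybrid learning then tunes them together with the consequent parameters.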
FORMAL MODELING AND VERIFICATION OF MULTI-AGENTS SYSTEM USING WELL-FORMED NETS (csandit)
Multi-agent systems are asynchronous and distributed computer systems. These characteristics also make them discrete-event dynamic systems. It is therefore important to analyze the behavior of such systems to ensure that they terminate correctly and satisfy other important properties. This paper presents a formal modeling and analysis of MAS, based on Well-formed Nets, in order to ensure the absence of any undesired or unexpected behavior. To validate our contribution, we consider the timetable problem, which is a multi-agent resource allocation problem.
Neurocomputing is widely applied in time series analysis; however, the presence of outliers, which commonly occur in time series data, can be harmful to network training. This matters because neural networks automatically discover patterns without prior assumptions and without loss of generality. In principle, the most common training objective for backpropagation algorithms is to minimise the ordinary least squares (OLS) criterion, or more specifically the mean squared error (MSE). However, this criterion is not fully robust when outliers exist in the training data, and it leads to inaccurate forecasts of future values. Therefore, this paper presents a new algorithm that applies the firefly algorithm to the least median of squares estimator (FFA-LMedS) for a BFGS quasi-Newton backpropagation neural network nonlinear autoregressive moving average (BPNN-NARMA) model, to reduce the effect of outliers in time series data. Monthly data of the Malaysian Roof Materials cost index from January 1980 to December 2012 (base year 1980=100), with various levels of outlier contamination, are adopted in this study. It was found that the enhanced BPNN-NARMA models using FFA-LMedS performed very well, with RMSE values close to zero. The findings are expected to help practitioners in Malaysian construction activities handle cost index data accordingly.
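The robustness argument rests on the least median of squares (LMedS) criterion: unlike MSE, the median of the squared residuals ignores a minority of outliers. A minimal random-sampling sketch for a line fit illustrates the idea; the paper's FFA-LMedS optimiser and BPNN-NARMA model are far richer, so this is only the criterion, not the method.

```python
import random
import statistics

def lmeds_line(points, trials=200, seed=0):
    """Least-median-of-squares line fit: among lines through random point
    pairs, keep the one whose MEDIAN squared residual over all points is
    smallest, so a minority of outliers cannot dominate the fit."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: no slope, skip this candidate
        b = (y2 - y1) / (x2 - x1)
        a = y1 - b * x1
        med = statistics.median((y - (a + b * x)) ** 2 for x, y in points)
        if med < best_med:
            best, best_med = (a, b), med
    return best
```

On data lying on y = 2x + 1 with a single gross outlier, the median criterion recovers the true line exactly, which an MSE fit would not.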
Application Development Risk Assessment Model Based on Bayesian Network (TELKOMNIKA JOURNAL)
This paper describes a new risk assessment model for application development and its implementation. The model is developed using a Bayesian network and Boehm's software risk principles. The Bayesian network is created after mapping the top twenty risks in software projects with an interrelationship digraph of risk area categories. The probability of risk on the network is analyzed and validated using both numerical simulation and subjective probability from several experts in the field and a team of application developers. After obtaining the Bayesian network model, risk exposure is calculated using Boehm's risk principles. Finally, the implementation of the proposed model in a government institution is shown as a real case illustration.
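Boehm's risk exposure is the product of the probability of an unsatisfactory outcome and the loss it would cause, RE = P(UO) x L(UO). A minimal sketch of the final step, with hypothetical risk names, probabilities and loss magnitudes standing in for the network's actual outputs:

```python
def risk_exposure(prob, loss):
    """Boehm's risk exposure: RE = P(unsatisfactory outcome) * L(loss)."""
    return prob * loss

# Hypothetical posterior probabilities from the Bayesian network, paired
# with assumed loss magnitudes on a 1-10 scale (illustrative values only).
risks = {
    "personnel shortfall": (0.30, 8),
    "unrealistic schedule": (0.45, 9),
}

# Rank risks by exposure, highest first.
ranked = sorted(risks, key=lambda r: risk_exposure(*risks[r]), reverse=True)
```

The ranking, rather than the raw probabilities, is what a project manager would act on.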
A State-based Model for Runtime Resource Reservation for Component-based Appl... (IDES Editor)
Predictable execution enforcement for applications with highly and arbitrarily fluctuating resource usage requires runtime resource management. Correct runtime predictions regarding the resource usage of individual components allow proper resource reservations to be made, enabling better resource management of component-based applications. This work presents a state-based resource usage model for a component, in which states represent CPU utilization intervals. This resource model is intended for a resource-aware component framework, where it will be used to determine the quality of resource reservation. For this purpose, the model offers two metrics: failure rate, which measures the fraction of the reservation periods for which the reserved budget was insufficient, and resource waste, which measures unused budget. To illustrate the model, we apply it to a family of reservation prediction strategies and validate the outcome by means of a series of experiments in which we measure the resource utilization of two video components. The latter requires a method for monitoring resource states, which is also presented, analyzed and validated in this paper.
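The two metrics are defined precisely enough in the abstract to sketch directly; a fixed budget per period is assumed here for simplicity, and the usage samples are invented for illustration.

```python
def reservation_metrics(used, budget):
    """Failure rate: fraction of reservation periods in which actual usage
    exceeded the reserved budget.  Resource waste: total unused budget
    accumulated over the periods in which the budget sufficed."""
    failures = sum(1 for u in used if u > budget)
    waste = sum(budget - u for u in used if u <= budget)
    return failures / len(used), waste
```

A prediction strategy that raises the budget lowers the failure rate but raises the waste, which is exactly the trade-off the model is meant to expose.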
Practical Guidelines to Improve Defect Prediction Model – A Review (inventionjournals)
Defect prediction models are used to pinpoint risky software modules and understand past pitfalls that lead to defective modules. The predictions and insights derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modeling. A lack of awareness and practical guidelines from previous research can therefore lead to invalid predictions and unreliable insights. Through case studies of systems spanning both proprietary and open-source domains, we find that (1) noise in defect datasets, (2) parameter settings of classification techniques, and (3) model validation techniques have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should carefully select experimental components in order to produce more accurate and reliable defect prediction models.
ENSEMBLE REGRESSION MODELS FOR SOFTWARE DEVELOPMENT EFFORT ESTIMATION: A COMP... (ijseajournal)
As demand for computer software continually increases, software scope and complexity become higher than ever, and the software industry is in real need of accurate estimates for projects under development. Software development effort estimation is one of the main processes in software project management; overestimation and underestimation alike may cause losses to the software industry. This study determines which technique has better effort prediction accuracy and proposes combined techniques that could provide better estimates. Eight different ensemble models for effort estimation were compared with each other based on predictive accuracy, using the Mean Absolute Residual (MAR) criterion and statistical tests. The results indicate that the proposed ensemble models deliver high efficiency in contrast to their counterparts and produce the best responses for software project effort estimation. The proposed ensemble models in this study will therefore help project managers develop quality software.
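The MAR criterion used above is simply the average absolute difference between actual and estimated effort; a one-function sketch, with invented effort values:

```python
def mean_absolute_residual(actual, predicted):
    """MAR: mean of |actual - predicted| over all projects in the sample."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

Unlike MSE, MAR does not square the residuals, so a single badly estimated project cannot dominate the comparison between models.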
An Efficient Mechanism of Handling MANET Routing Attacks using Risk Aware Mit... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Multi dimensional customization modelling based on metagraph for saas multi-t... (csandit)
Software as a Service (SaaS) is a new software delivery model in which pre-built applications are delivered to customers as a service. SaaS providers aim to attract a large number of tenants (users) with minimal system modifications, to achieve economies of scale. To this end, SaaS applications have to be customizable to meet the requirements of each tenant. However, due to the rapid growth of SaaS, applications can have thousands of tenants and a huge number of ways to customize them, and modularizing such customizations remains a highly complex task. Additionally, because tenant requirements vary widely, no single customization model is appropriate for all tenants. In this paper, we propose a multi-dimensional customization model based on metagraphs. The proposed model addresses the modelling of variability among tenants, describes customizations and their relationships, and guarantees the correctness of SaaS customizations made by tenants.
Results of Fitted Neural Network Models on Malaysian Aggregate Dataset (journalBEEI)
This result-based paper presents the best results of the fitted BPNN-NAR and BPNN-NARMA models on the MCCI Aggregate dataset with respect to different error measures. It discusses the performance of the fitted forecasting models by each set of input lags and error lags used, by the different numbers of hidden nodes used, and when combining both inputs and hidden nodes; the consistency of the error measures used; and the overall best fitted forecasting models for the Malaysian aggregate cost indices dataset.
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
In this paper, we introduce the notion of intuitionistic fuzzy semipre generalized connected space, intuitionistic fuzzy semipre generalized super connected space and intuitionistic fuzzy semipre generalized extremally disconnected spaces. We investigate some of their properties.
Wireless data broadcast is an efficient way of disseminating data to users in mobile computing environments. From the server's point of view, how to place the data items on channels is a crucial issue, with the objective of minimizing the average access time and tuning time. Similarly, how to schedule the data retrieval process for a given request at the client side, such that all the requested items can be downloaded in a short time, is also an important problem. In this paper, we investigate multi-item data retrieval scheduling in push-based multichannel broadcast environments. The most important issues in mobile computing are energy efficiency and query response efficiency; however, in data broadcast the objectives of reducing access latency and energy cost can be contradictory. Consequently, we define two new problems: the Minimum Cost Data Retrieval (MCDR) problem and the Large Number Data Retrieval (LNDR) problem. We also develop a heuristic algorithm to download a large number of items efficiently. When there is no replicated item in a broadcast cycle, we show that an optimal retrieval schedule can be obtained in polynomial time.
In this paper, we consider a criminal investigation into the collective guilt of some of the members of a working group. Assuming that the statistics we used are reliable, we present a PageRank model based on mutual information. First, we use the average mutual information between the non-suspicious topics and the suspicious topics to score the topics by degree of suspicion. Second, we build the correlation matrix based on the degree of suspicion and derive the corresponding Markov state transition matrix. Then, we set the initial value for all members of the working group based on the degree of suspicion. Finally, we calculate the suspected degree of each member of the working group. In the small 10-person case, we build the improved PageRank model; by processing the statistics of this case, we obtain a table ranking the members by suspected degree. Comparing with the results given for this case, we find that the two results basically match each other, indicating that the model is feasible. In the current case, we first obtain a ranking list of 15 topics in order of suspicion via the PageRank model based on mutual information. Second, we obtain the stable point of the Markov state transition matrix using the Markov chain. Then, we build the connection matrix based on the degree of suspicion and derive the corresponding Markov state transition matrix. Last, we calculate the suspected degree of 83 candidates. The results show that the suspicious people are at the top of the ranking list while the innocent people are at the bottom, again indicating that the model is feasible. When the suspicious topics and conspirators change, a relatively good result can also be obtained by this model. In the current case, we have evidence to believe that Dolores and Jerome, who are senior managers, are under significant suspicion, and it is recommended that future attention be paid to them.
The PageRank model based on mutual information takes full account of the information flow in the message distribution network. This model can not only handle the statistics used in the conspiracy case but can also be applied to detect infected cells in a biological network. Finally, we present the advantages and disadvantages of this model and directions for improvement.
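The core computation described, PageRank-style power iteration seeded with suspicion scores, can be sketched as below. The 3-node transition matrix and initial scores are invented for illustration; the paper's matrices come from its mutual-information analysis.

```python
def pagerank(matrix, init, damping=0.85, iters=100):
    """Power iteration on a column-stochastic transition matrix
    (matrix[i][j] = probability of moving from node j to node i), using the
    normalised suspicion scores as both the starting and teleport vector."""
    n = len(init)
    total = sum(init)
    v = [x / total for x in init]  # normalise the initial suspicion scores
    teleport = v[:]
    for _ in range(iters):
        v = [
            (1 - damping) * teleport[i]
            + damping * sum(matrix[i][j] * v[j] for j in range(n))
            for i in range(n)
        ]
    return v
```

Because the matrix is column-stochastic and the teleport vector sums to one, the result stays a probability distribution, and the member seeded with the highest suspicion retains the highest stationary score when the transition structure is symmetric.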
Multi-agent systems (autonomous agents, or agents) and knowledge discovery (or data mining) are two active areas in information technology. Bringing these two communities together has unveiled a tremendous potential for new opportunities and wider applications through the synergy of agents and data mining. Multi-agent systems (MAS) often deal with complex applications that require distributed problem solving. In many applications the individual and collective behavior of the agents depends on the data observed from distributed data sources. Data mining technology has emerged for identifying patterns and trends from large quantities of data. The increasing demand to scale up to massive data sets inherently distributed over a network, with limited bandwidth and computational resources available, motivated the development of distributed data mining (DDM). Distributed data mining originated from the need to mine over decentralized data sources. DDM is expected to perform partial analysis of the data at individual sites and then to send the outcomes as partial results to other sites, where they are sometimes required to be aggregated into a global result.
Grid computing is a modularized way of structuring network resources so as to share information and resources and to solve computationally intensive problems. Grid computing is a form of distributed processing that allows a problem to be distributed over multiple computational resources. The formation of the grid determines the computation cost and the performance metrics of the operation to be carried out. Grid computing favours an ad hoc structure formed at the time of a request (it does not hold a fixed infrastructure), termed a virtual organization. This paper presents dynamic source routing for modeling virtual organizations.
Security for Effective Data Storage in Multi CloudsEditor IJCATR
Cloud computing is a technology that uses the internet and central remote servers to maintain data and applications. It allows consumers and businesses to use applications without installation and to access their personal files from any computer with internet access. The technology enables much more efficient computing by centralizing data storage, processing and bandwidth. The use of cloud computing has increased rapidly in many organizations, as it provides many benefits in terms of low cost and accessibility of data. Ensuring security is a major factor in the cloud computing environment, as users often store sensitive information with cloud storage providers, yet these providers may be untrusted. Dealing with "single cloud" providers is predicted to become less popular with customers due to risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards "multi-clouds", in other words "inter-clouds" or "cloud-of-clouds", has emerged recently. This paper surveys recent research related to single- and multi-cloud security and addresses possible solutions. It is found that the use of multi-cloud providers to maintain security has received less attention from the research community than the use of single clouds. This work aims to promote the use of multi-clouds due to their ability to reduce the security risks that affect cloud computing users.
Image enhancement plays an important role in vision applications, and a great deal of work has recently been performed in this field; many techniques have already been proposed for enhancing digital images. This paper presents a comparative analysis of various image enhancement techniques and shows that the fuzzy logic and histogram-based techniques give quite effective results compared with the other available techniques. The paper ends with suitable future directions for further improving the fuzzy-based image enhancement technique. In the proposed technique, images other than low-contrast images are also enhanced by balancing the stretching parameter (K) according to the colour contrast, and the technique is designed to restore edges degraded by contrast enhancement as well.
Random Valued Impulse Noise Elimination using Neural FilterEditor IJCATR
A neural filtering technique is proposed in this paper for restoring images extremely corrupted by random-valued impulse noise. The proposed intelligent filter operates in two stages. In the first stage, the corrupted image is filtered by applying an asymmetric trimmed median filter. The asymmetric trimmed median filtered output image is then suitably combined with a feed-forward neural network in the second stage. The internal parameters of the feed-forward neural network are adaptively optimized by training on three well-known images. The approach is quite effective in eliminating random-valued impulse noise. Simulation results show that the proposed filter is superior in eliminating impulse noise as well as preserving the edges and fine details of digital images, and the results are compared with other existing nonlinear filters.
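The first-stage filter can be sketched roughly as below. The window size and the rule that treats the extremes 0/255 as impulse candidates are illustrative assumptions: the paper targets random-valued noise, for which the candidate-detection rule would be more elaborate, and the second neural stage is omitted here.

```python
import numpy as np

def trimmed_median_filter(img, lo=0.0, hi=255.0, win=3):
    """First-stage sketch: replace likely impulse pixels with the median
    of the trimmed (non-extreme) neighbours in a win x win window."""
    img = np.asarray(img, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] in (lo, hi):                  # impulse candidate
                window = padded[i:i + win, j:j + win].ravel()
                # asymmetric trim: keep only values strictly inside (lo, hi)
                trimmed = window[(window > lo) & (window < hi)]
                if trimmed.size:
                    out[i, j] = np.median(trimmed)
    return out

# Demo: a flat grey patch with a single salt pixel at the centre.
noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 255.0
restored = trimmed_median_filter(noisy)
```

The corrupted pixel is replaced by the median of its eight uncorrupted neighbours, while clean pixels pass through untouched.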
Online Social Network (OSN) sites act as a medium for users to spread their own views, activities and thoughts to their circle of friends. The contents of such networks are spread across the web, so they are hard to police by human decision alone. Currently, OSNs do not provide any mechanism to ensure privacy over the data associated with each user, and because of this, many users lack ownership control. In this paper, we propose AC2P (Activity Control-Access Control Protocol) for controlling information on the web. In addition, a Tag Refinement strategy detects illegal tagging of images and sends notifications about a particular image spreading within different communities/groups. These techniques reduce the risk of information flow and avoid unwanted tagging of images.
Wireless communication is one of the most popular areas of research. A Mobile Ad Hoc Network (MANET) is a network of wireless mobile nodes that is infrastructure-less and self-organizing. With its wireless and distributed nature, it is exposed to several security threats, one of which is the wormhole attack. In this attack, a pair of attackers forms a virtual link, recording and replaying wireless transmissions. This paper presents the types of wormhole attack and surveys different techniques for detecting wormhole attacks in MANETs.
Data mining is an analytic process designed to explore data in search of consistent patterns and systematic relationships between variables, and then to validate the results by applying the patterns found to a new subset of data. It is often described as the process of discovering patterns, correlations, trends or relationships by searching through large amounts of data stored in repositories, databases and data warehouses. Diabetes, often referred to by doctors as diabetes mellitus, describes a group of metabolic diseases in which the person has high blood glucose (blood sugar) [3], either because insulin production is insufficient, or because the body's cells do not respond properly to insulin, or both. This project helps identify whether a person has diabetes or not; if the person is predicted diabetic [4], the project suggests measures for maintaining normal health, and if not, it predicts the risk of becoming diabetic. In this project, a classification algorithm was used to classify the Pima Indian diabetes dataset, and the results were obtained using an Android application.
Assistive Examination System for Visually ImpairedEditor IJCATR
This paper presents the design of a voice-enabled examination system which can be used by visually challenged students. The system uses Text-to-Speech (TTS) and Speech-to-Text (STT) technology. This web-based academic testing software provides an interface for blind students, enhancing their educational experience by giving them a tool to take exams. The system will aid the differently-abled in appearing for online tests and enable them to come on par with other students. It can also be used by students with learning disabilities or by people who wish to take an examination in a combined auditory and visual way.
A Formal Machine Learning or Multi Objective Decision Making System for Deter...Editor IJCATR
Decision-making typically needs mechanisms to compromise among opposing criteria. When multiple objectives are involved in machine learning, a vital step is to relate the weights of the individual objectives to system-level performance. Determining the weights of multiple objectives is an analysis process, and it has often been treated as an optimization problem. However, our preliminary investigation has shown that existing methodologies for managing the weights of multiple objectives have obvious limitations: when the determination of weights is treated as a single optimization problem, a result based on such an optimization is limited, and it can even be unreliable when knowledge about the multiple objectives is incomplete, for example due to poor data integrity. The constraints on the weights are also discussed. Variable weights are natural in decision-making processes, so we aim to develop a systematic methodology for determining variable weights of multiple objectives. The roles of weights in multi-objective decision-making and machine learning are analyzed, and the weights are then determined with the help of a standard neural network.
Risk-Aware Response Mechanism with Extended D-S theoryEditor IJCATR
Mobile Ad hoc Networks (MANETs) have a dynamic network infrastructure and are vulnerable to many types of attacks. Among these, routing attacks receive particular attention because they change the whole topology and cause severe damage to the MANET. Although many intrusion detection systems are available to diminish these critical attacks, existing responses cause unexpected network partitions and additional damage to the network infrastructure, leading to uncertainty in countering routing attacks in MANETs. In this paper, we propose an adaptive risk-aware response mechanism with extended Dempster-Shafer theory to identify routing attacks and malicious nodes in a MANET. Our technique finds malicious nodes using degrees of evidence from expert knowledge and detects the important factors for each node. It blacklists the malicious nodes so that they may not enter the network again.
Information security risk assessment under uncertainty using dynamic bayesian...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Risk Management of Secure Cloud in Higher Educational Institutionijtsrd
Cloud computing is one of the technologies most widely used today in higher educational institutions and other business organizations. It provides many advantages for higher educational institutions by sharing IT services on the cloud. However, a cloud provider needs to manage the risks of the cloud computing environment: to identify, assess and prioritize them in order to mitigate those risks and improve security and confidence in cloud services. Risk assessment, the core of risk management, estimates and prioritizes risks to reduce their impact and maximize the benefits of cloud computing for higher educational institutions. Fuzzy logic is adopted to deal with insufficient information and to estimate the severity and likelihood of each risk mathematically. The proposed framework identifies the security risk factors for a higher educational institution in cloud computing and shows how to measure and evaluate them based on fuzzy logic; it can improve the accuracy and efficiency of cloud security risk assessment on the basis of previous research results. Moe Moe San | Khin May Win "Risk Management of Secure Cloud in Higher Educational Institution" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26638.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-security/26638/risk-management-of-secure-cloud-in-higher-educational-institution/moe-moe-san
A simplified predictive framework for cost evaluation to fault assessment usi...IJECEIAES
Software engineering is an integral part of any software development scheme, which frequently encounters bugs, errors and faults. Predictive evaluation of software faults contributes towards mitigating this challenge to a large extent; however, no benchmarked framework has been reported for this purpose yet. Therefore, this paper introduces a computational framework for a cost evaluation method to facilitate a better form of predictive assessment of software faults. Based on lines of code, the proposed scheme adopts a machine-learning approach to perform predictive analysis of faults. The scheme presents an analytical framework of a correlation-based cost model integrated with multiple standard machine learning (ML) models, e.g., linear regression, support vector regression, and artificial neural networks (ANN). These learning models are trained and executed to predict software faults with higher accuracy. The study assesses the outcomes in detail using error-based performance metrics to determine how well each learning model performs and how accurately it learns, and it also examines the factors contributing to the training loss of the neural networks. The validation result demonstrates that, compared to linear regression and support vector regression, the neural network achieves a significantly lower error score for software fault prediction.
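As a minimal stand-in for the simplest learner the abstract mentions, the sketch below fits fault counts to lines of code by least-squares linear regression and evaluates an error-based metric (MAE). The module data are hypothetical, and this is not the paper's actual cost model.

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares linear regression: predict fault counts from LOC."""
    Xb = np.column_stack([np.ones(len(x)), x])     # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, x):
    Xb = np.column_stack([np.ones(len(x)), x])
    return Xb @ w

def mae(y_true, y_pred):
    """Mean absolute error, one of the usual error-based metrics."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical module data: lines of code vs. observed fault counts.
loc    = np.array([120, 450, 900, 1500, 2200], dtype=float)
faults = np.array([1, 4, 9, 15, 22], dtype=float)
w = fit_linear(loc, faults)
err = mae(faults, predict(w, loc))
```

The fitted slope is positive (more code, more faults), and the residual error gives the baseline against which stronger learners such as SVR or an ANN would be compared.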
ERMOCTAVE: A Risk Management Framework for IT Systems Which Adopt Cloud ComputingDustiBuckner14
future internet
Article
ERMOCTAVE: A Risk Management Framework for IT
Systems Which Adopt Cloud Computing
Masky Mackita 1, Soo-Young Shin 2 and Tae-Young Choe 3,*
1 ING Bank, B-1040 Brussels, Belgium; [email protected]
2 Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea;
[email protected]
3 Department of Computer Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
* Correspondence: [email protected]; Tel.: +82-54-478-7526
Received: 22 June 2019; Accepted: 3 September 2019; Published: 10 September 2019
Abstract: Many companies are adopting cloud computing technology because moving to the cloud has an array of benefits. During the decision process for adopting cloud computing, the importance of risk management is progressively recognized. However, traditional risk management methods cannot be applied directly to cloud computing, where data are transmitted and processed by external providers. When they are applied directly, risk management processes can fail by ignoring the distributed nature of cloud computing and leaving numerous risks unidentified. To address this problem, this paper introduces a new risk management method, Enterprise Risk Management for Operationally Critical Threat, Asset, and Vulnerability Evaluation (ERMOCTAVE), which combines Enterprise Risk Management and Operationally Critical Threat, Asset, and Vulnerability Evaluation for mitigating risks that can arise with cloud computing. ERMOCTAVE composes the two risk management methods by combining each component of one with processes of the other for a comprehensive perception of risks. To explain ERMOCTAVE in detail, a case study scenario is presented in which an Internet seller migrates some modules to the Microsoft Azure cloud. A functionality comparison with the ENISA and Microsoft cloud risk assessments shows that ERMOCTAVE has additional features, such as key objectives and strategies, critical assets, and risk measurement criteria.
Keywords: risk management; ERM; OCTAVE; cloud computing; Microsoft Azure
1. Introduction
Cloud computing is a technology that uses virtualized resources to deliver IT services through the
Internet. It can also be defined as a model that allows network access to a pool of computing resources
such as servers, applications, storage, and services, which can be quickly offered by service providers [1].
One of the properties of the cloud is its distributed nature [2]. Data in cloud environments have become increasingly distributed, moving from a centralized model to a distributed model. That distributed nature causes cloud computing actors to face problems such as loss of data control, difficulty in demonstrating compliance, and additional legal risks as data migrate from one legal jurisdiction to another. An example is Salesforce.com, which suffered a huge outage, locking more than 900,000 subscribers out of important resources needed for business trans ...
ANALYTIC HIERARCHY PROCESS-BASED FUZZY MEASUREMENT TO QUANTIFY VULNERABILITIE...IJCNCJournal
Much research has been conducted to detect vulnerabilities in Web applications; however, none of it has proposed a methodology to measure the vulnerabilities either qualitatively or quantitatively. In this paper, a methodology is proposed to investigate the quantification of vulnerabilities in Web applications. We applied the Goal Question Metric (GQM) methodology to determine all possible security factors and sub-factors of Web applications in the Department of Transportation (DOT) as our proof of concept. We then introduced a Multi-layered Fuzzy Logic (MFL) approach based on the prioritization of the security sub-factors in the Analytic Hierarchy Process (AHP). Using AHP, we weighted each security sub-factor before the quantification process in the fuzzy logic, to handle imprecise crisp-number calculation.
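The AHP weighting step described above can be sketched as the principal-eigenvector computation over a pairwise comparison matrix (Saaty's classic method). The comparison values for the three security sub-factors below are hypothetical, not taken from the paper.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority weights: normalized principal eigenvector of a
    pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)          # principal eigenvalue
    w = np.abs(vecs[:, k].real)       # its eigenvector, made positive
    return w / w.sum()

# Hypothetical comparisons for three security sub-factors:
# authentication is 3x as important as input validation and
# 5x as important as logging; input validation is 2x logging.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
```

The resulting weights sum to one and preserve the stated ordering of importance, ready to scale the fuzzy quantification of each sub-factor.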
IMPLEMENTATION OF RISK ANALYZER MODEL FOR UNDERTAKING THE RISK ANALYSIS OF PR...IJDKP
The RISK ANALYZER model was implemented as a knowledge-based system for undertaking risk analysis of proposed construction projects in a selected domain. The Fuzzy Decision Variables (FDVs) that cause differences between the initial and final contract sums of building projects were identified, the likelihood of occurrence of the risks was determined, and a knowledge-based system to rank the risks was constructed using the JAVA programming language with a graphical user interface. The knowledge-based system is composed of a knowledge base for storing data, an inference engine for controlling and directing the use of knowledge for problem solving, and a user interface that helps the user retrieve, use and alter data in the knowledge base. The developed knowledge-based system was compiled, implemented and validated with data from previously completed projects. A client could utilize the knowledge-based system when undertaking proposed building projects.
Text Mining in Digital Libraries using OKAPI BM25 ModelEditor IJCATR
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not apply relevance ranking to the retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. The Okapi BM25 model was selected because it is a probability-based relevance ranking algorithm. A case study was conducted and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval; relevance-ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean model and the vector space model. Therefore, this paper proposes the use of the Okapi BM25 model, which rewards terms according to their relative frequencies in a document, to improve the performance of text mining in digital libraries.
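A minimal sketch of Okapi BM25 scoring, using the common default parameters k1 = 1.5 and b = 0.75 (not values from the paper) and a toy three-document corpus:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Okapi BM25 relevance score of one tokenized document for a query.

    Rewards term frequency with saturation (k1) and normalizes by
    document length relative to the corpus average (b).
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)            # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        f = tf[t]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["digital", "library", "retrieval"],
        ["library", "catalog"],
        ["cooking", "recipes"]]
q = ["library", "retrieval"]
scores = [bm25_score(q, d, docs) for d in docs]
```

The document matching both query terms ranks first and the unrelated one scores zero, which is exactly the relevance ranking the abstract argues Boolean retrieval lacks.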
Green Computing, eco trends, climate change, e-waste and eco-friendlyEditor IJCATR
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals' lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance they attach to various attributes differs, and a more environment-friendly attitude can be fostered through exposure to educational materials. In this paper, we delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they herald for our country, and propose a series of action steps for further development in these areas. It is possible for Nigeria to obtain an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research, enabling interconnection among moving vehicles and between vehicles and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links, and the cluster arrangement, according to size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks with more complex mobility patterns; because of this mobility, the topology changes very frequently, which raises a number of technical challenges including the stability of the network. There is thus a need for cluster configurations leading to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the more stable configuration in a real road scenario.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation (DG) units are becoming very attractive to researchers these days. In this paper a two-stage approach is used for the allocation and sizing of DGs in a distribution system with a time-varying load model. The strategic placement of DGs can help reduce energy losses and improve the voltage profile. The proposed work discusses time-varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system, and it has been demonstrated on a 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and their results are then converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. There are various commonly used classification methods, and in this study three of them were compared on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective of this study was to create software to classify DM using the tested methods and to compare the three methods on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, the NBC method had average and maximum accuracy of 87.5% and 90%, respectively. Lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
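The comparison criteria named above (accuracy, precision, recall) all derive from the confusion matrix, as in the sketch below; the patient labels and predictions are hypothetical, not data from the study.

```python
def confusion_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision and recall for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    n = len(y_true)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real positives, how many found
    return accuracy, precision, recall

# Hypothetical predictions for 10 patients (1 = diabetic).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
acc, prec, rec = confusion_metrics(y_true, y_pred)
```

Computing the same three numbers per classifier is how the study ranks fuzzy KNN, NBC and C4.5.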
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Study in the Competitive field of Intelligent, and studies in the field of Web Scraping, have a symbiotic relationship mutualism. In the information age today, the website serves as a main source. The research focus is on how to get data from websites and how to slow down the intensity of the download. The problem that arises is the website sources are autonomous so that vulnerable changes the structure of the content at any time. The next problem is the system intrusion detection snort installed on the server to detect bot crawler. So the researchers propose the use of the methods of Mining Data Records and the method of Exponential Smoothing so that adaptive to changes in the structure of the content and do a browse or fetch automatically follow the pattern of the occurrences of the news. The results of the tests, with the threshold 0.3 for MDR and similarity threshold score 0.65 for STM, using recall and precision values produce f-measure average 92.6%. While the results of the tests of the exponential estimation smoothing using ? = 0.5 produces MAE 18.2 datarecord duplicate. It slowed down to 3.6 datarecord from 21.8 datarecord results schedule download/fetch fix in an average time of occurrence news.
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using single ontology. The ontology-based semantic similarity techniques such as structure-based semantic similarity techniques (Path Length Measure, Wu and Palmer’s Measure, and Leacock and Chodorow’s measure), information content-based similarity techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SimDist)) were evaluated relative to human experts’ ratings, and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in single ontology, and demonstrate that SemDist semantic similarity techniques, compared with the existing techniques, gives the best overall results of correlation with experts’ ratings.
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
The techniques and tests are tools used to define how measure the goodness of ontology or its resources. The similarity between biomedical classes/concepts is an important task for the biomedical information extraction and knowledge discovery. However, most of the semantic similarity techniques can be adopted to be used in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we investigate to measure semantic similarity between two terms within single ontology or multiple ontologies in ICD-10 “V1.0” as primary source, and compare my results to human experts score by correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
This is an effective way to improve the storage access performance of small files in Openstack Swift by adding an aggregate storage module. Because Swift will lead to too much disk operation when querying metadata, the transfer performance of plenty of small files is low. In this paper, we propose an aggregated storage strategy (ASS), and implement it in Swift. ASS comprises two parts which include merge storage and index storage. At the first stage, ASS arranges the write request queue in chronological order, and then stores objects in volumes. These volumes are large files that are stored in Swift actually. During the short encounter time, the object-to-volume mapping information is stored in Key-Value store at the second stage. The experimental results show that the ASS can effectively improve Swift's small file transfer performance.
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of government's cash resources rely on government banking arrangements. Nigeria, like many low income countries, employed fragmented systems in handling government receipts and payments. Later in 2016, Nigeria implemented a unified structure as recommended by the IMF, where all government funds are collected in one account would reduce borrowing costs, extend credit and improve government's fiscal policy among other benefits to government. This situation motivated us to embark on this research to design and implement an integrated system for vehicle clearance and registration. This system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among five different level agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web based, Object Oriented Hypermedia Design Methodology (OOHDM) is used. Tools such as Php, JavaScript, css, html, AJAX and other web development technologies were used. The result is a web based system that gives proper information about a vehicle starting from the exact date of importation to registration and renewal of licensing. Vehicle owner information, custom duty information, plate number registration details, etc. will also be efficiently retrieved from the system by any of the agencies without contacting the other agency at any point in time. Also number plate will no longer be the only means of vehicle identification as it is presently the case in Nigeria, because the unified system will automatically generate and assigned a Unique Vehicle Identification Pin Number (UVIPN) on payment of duty in the system to the vehicle and the UVIPN will be linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of good and services. It includes both sales and purchase of items. The project Supermarket Management System is to be developed with the objective of making the system reliable, easier, fast, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in the Wireless Sensor Network (WSN)[1]. The system will not be able to run according to its function without the availability of adequate power units. One of the characteristics of wireless sensor network is Limitation energy[2]. A lot of research has been done to develop strategies to overcome this problem. One of them is clustering technique. The popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3]. In LEACH, clustering techniques are used to determine Cluster Head (CH), which will then be assigned to forward packets to Base Station (BS). In this research, we propose other clustering techniques, which utilize the Social Network Analysis approach theory of Betweeness Centrality (BC) which will then be implemented in the Setup phase. While in the Steady-State phase, one of the heuristic searching algorithms, Modified Bi-Directional A* (MBDA *) is implemented. The experiment was performed deploy 100 nodes statically in the 100x100 area, with one Base Station at coordinates (50,50). To find out the reliability of the system, the experiment to do in 5000 rounds. The performance of the designed routing protocol strategy will be tested based on network lifetime, throughput, and residual energy. The results show that BC-MBDA * is better than LEACH. This is influenced by the ways of working LEACH in determining the CH that is dynamic, which is always changing in every data transmission process. This will result in the use of energy, because they always doing any computation to determine CH in every transmission process. In contrast to BC-MBDA *, CH is statically determined, so it can decrease energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers has thrown up new challenges for legacy; existing networks; and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Network (SDN) which decouples the control plane from the data plane through network vitalizations aims to address these challenges. This paper has explored the SDN architecture and its implementation with the OpenFlow protocol. It has also assessed some of its benefits over traditional network architectures, security concerns and how it can be addressed in future research and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
MATATAG CURRICULUM: ASSESSING THE READINESS OF ELEM. PUBLIC SCHOOL TEACHERS I...NelTorrente
In this research, it concludes that while the readiness of teachers in Caloocan City to implement the MATATAG Curriculum is generally positive, targeted efforts in professional development, resource distribution, support networks, and comprehensive preparation can address the existing gaps and ensure successful curriculum implementation.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...Levi Shapiro
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Normal Labour/ Stages of Labour/ Mechanism of LabourWasim Ak
Normal labor is also termed spontaneous labor, defined as the natural physiological process through which the fetus, placenta, and membranes are expelled from the uterus through the birth canal at term (37 to 42 weeks
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
International Journal of Computer Applications Technology and Research
Volume 2, Issue 3, 237-244, 2013
www.ijcat.com 237
Risk Prediction for Production of an Enterprise
Kumar Ravi
Department of Computer Science
School of Engineering, Pondicherry University
Pondicherry, India
Sheopujan Singh
Department of Mathematics and Computer Application
S. Sinha College
Aurangabad, Bihar, India
Abstract: Despite all preventive measures, considerable risk remains in any project development as well as in enterprise management. No standard mechanism or methodology is available to assess the risks in project or production management; using precautionary steps, a manager can only avoid risks as far as possible. To address this issue, this paper presents a probabilistic risk assessment model for the production of an enterprise. A Multi-Entity Bayesian Network (MEBN) is used to represent the requirements of production management as well as to assess the risks inherent in it, where MEBN combines the expressivity of first-order logic with the probabilistic features of a Bayesian network. A Bayesian network can represent probabilistic uncertainty and support reasoning over a probabilistic knowledge base, which is used here to represent the probable risks behind each cause of a risk. The proposed probabilistic model is discussed with the help of a case study, which predicts the risks inherent in the production of an enterprise depending upon various measures such as labour availability, power backup, and transport availability.
Keywords: Bayesian network, UnBBayes, MEBN, PR-OWL
1. INTRODUCTION
Risk management is particularly important for supply chain management because of the inherent uncertainties in its various stages. Risk arises from loosely defined requirements, under- or over-estimation of the time and resources required for product development, dependence on individual skills, and requirement changes driven by changing customer needs. The manager of an enterprise should be aware of the impact of these risks on the project, the product, and the business in advance, and should have contingency plans so that, if the risks do occur, they can be managed effectively [7]. Although several steps of risk management need to be addressed, as shown in Fig. 1, here risk assessment is considered mainly.
There is no standard mechanism to handle risk, so it can only be minimized using preplanned strategies. Risk is quite uncertain with respect to various attributes such as time, schedule, requirements, human resources, and raw material. So, a probabilistic approach based on a Bayesian network is used in this work to represent these attributes as well as their effects on the incurred risks.
A Bayesian network is a graphical representation of random variables and their conditional dependencies in the form of a directed acyclic graph (DAG). It is widely used in the Semantic Web field to represent probabilistic relationships among components of a knowledge base, where the probabilistic dependence between components has been used for situation awareness, recommender systems, sentiment analysis, and data mining. Well-known Semantic Web (SW) languages such as the Resource Description Framework (RDF), Resource Description Framework Schema (RDF/S), and the Web Ontology Language (OWL), together with their constructs, have been used to construct Bayesian networks. One of them is the Probabilistic Web Ontology Language (PR-OWL), based on MEBN, which represents the uncertainty incurred within a knowledge base system.
Modeling of uncertainty has been widely done using MEBN [2], [9], [14], which can be used to determine the probability of events that are influenced by various variables. Using it, a degree of belief can be specified to weigh favorable and contradicting outcomes for given evidence. It can also be used as a dynamic Bayesian network, in which the degree of belief is revised over time according to the evolving situation, or to observe the effect of different pieces of evidence on propositions. To model the domain, we need to represent it in the form of a Bayesian network, which is the major task in building a well-defined probabilistic model. The generation of a Bayesian network needs the following three major steps [1]:
- First, identification of propositions and evidences.
- Second, identification of random variables for nodes and creation of a dependence graph using them.
- Third, assignment of a Conditional Probability Table (CPT) for each node.
In addition to the tasks listed above, existing knowledge can be incorporated with the help of a domain ontology.
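The three construction steps can be sketched directly in code. The following minimal Python example is purely illustrative: the variable names, prior probabilities, and CPT entries are invented here (they are not taken from the paper's case study), chosen only to mirror the measures named in the abstract. It builds a small network in which labour availability, power backup, and transport availability are parents of a production-risk node, and computes P(Risk = True) by exact enumeration:

```python
from itertools import product

# Step 1/2: propositions, evidence variables, and the dependence graph.
# Labour, Power, and Transport are parents of Risk (hypothetical structure).
parents = {"Labour": [], "Power": [], "Transport": [], "Risk": ["Labour", "Power", "Transport"]}

# Step 3: CPTs. Priors give P(var = True); the Risk CPT maps each parent
# configuration to P(Risk = True | parents). All numbers are invented.
prior = {"Labour": 0.8, "Power": 0.7, "Transport": 0.9}
risk_cpt = {
    # (labour, power, transport) -> P(Risk=True | labour, power, transport)
    (True,  True,  True):  0.05,
    (True,  True,  False): 0.40,
    (True,  False, True):  0.35,
    (True,  False, False): 0.70,
    (False, True,  True):  0.50,
    (False, True,  False): 0.80,
    (False, False, True):  0.75,
    (False, False, False): 0.95,
}

def p_risk(evidence=None):
    """Exact inference by enumeration: P(Risk=True | evidence)."""
    evidence = evidence or {}
    num = den = 0.0
    for l, p, t in product([True, False], repeat=3):
        world = {"Labour": l, "Power": p, "Transport": t}
        if any(world[k] != v for k, v in evidence.items()):
            continue  # world inconsistent with the observed evidence
        w = 1.0
        for var, val in world.items():
            w *= prior[var] if val else 1 - prior[var]
        den += w
        num += w * risk_cpt[(l, p, t)]
    return num / den

print(round(p_risk(), 3))                  # marginal risk under the priors above
print(round(p_risk({"Power": False}), 3))  # risk given that power backup fails
```

Observing evidence (here, the loss of power backup) raises the predicted risk, which is the kind of belief revision the MEBN-based model performs at larger scale.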
This paper is arranged in the following manner. Section 2 presents a survey on risk prediction in various fields such as network security, water management, power supply, and health risks. Some milestones in the creation of Bayesian networks from Semantic Web languages are discussed in Section 3; of these, MEBN is used in this paper to create a probabilistic model. Section 4 describes a case study for modeling and predicting the causes and effects of risks in enterprise production. Conclusions are drawn in Section 5.
2. LITERATURE SURVEY
This paper is mainly motivated by [14], which used MEBN and PR-OWL for risk assessment in the process of security Certification and Accreditation (C&A). Owing to the diverse uncertainties in the field of software performance, the major security requirements and their causal relationships were represented using MEBN, which provides probabilistic, model-driven risk assessment of security requirements.
A Bayesian network and Monte Carlo simulation have been used to predict the thermal overload risk for the next hour from the current weather conditions and power system operating conditions [3]. According to [15], an electricity supply industry faces several kinds of risk: load forecast error, equipment failure, transmission constraints, financial return, risk within contracts, etc. Some of these risks are also considered as part of the proposed model in the form of power backup. Besides Bayesian networks, that work used auto-regressive integrated moving average (ARIMA) models and artificial neural network (ANN) structures.
Caruso (2006) analyzed supplier and consumer risk for daily market-price uncertainties of power and aggregated risk measures for the power supply; a Monte Carlo algorithm was used to verify the results simulated by the OPF-based problem [16]. Fuzzy dominance and similarity analysis have been used to integrate probabilistic health risk assessment into a comprehensive, yet simple, cost-based multi-criteria decision analysis framework [4]. This was done for the management of contaminated groundwater resources, combining health risk assessment and economic analysis through a multi-criteria decision analysis framework. The considered trade-offs are between
population risk and individual risk, between residual risk and the cost of risk reduction, and cost-effectiveness as a justification for remediation.
[Figure 1: Phases of risk management [7] — Risk Identification → Risk Analysis → Risk Planning → Risk Monitoring, producing a list of potential risks, a prioritized risk list, and risk avoidance and contingency plans; the first two phases together constitute Risk Assessment.]
A fuzzy-probabilistic modeling process has been adopted for power system risk assessment, capturing both the randomness and the fuzziness of loads and component outage parameters. It is based on a hybrid of fuzzy sets and Monte Carlo simulation. An actual example using a regional system at the British Columbia Transmission Corporation demonstrates the application of the presented fuzzy-probabilistic model [5].
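As a rough illustration of the Monte Carlo half of such a hybrid (the fuzzy-set part and the actual BCTC system data are omitted; the unit capacities, outage rates, and load level below are entirely invented), one can estimate the probability that available generation falls short of load:

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical generating units: (capacity in MW, forced-outage probability)
units = [(100, 0.05), (150, 0.08), (200, 0.10)]
load = 300  # MW of demand, also invented

def shortfall_probability(trials=100_000):
    """Monte Carlo estimate of P(available capacity < load)."""
    short = 0
    for _ in range(trials):
        # Each unit is independently available with probability 1 - q.
        available = sum(cap for cap, q in units if random.random() > q)
        if available < load:
            short += 1
    return short / trials

est = shortfall_probability()
print(round(est, 3))
```

For this toy system the exact answer can be checked by hand (shortfall occurs when the 200 MW unit fails, or when both smaller units fail together), so the simulation mainly illustrates the sampling mechanics that scale to systems too large to enumerate.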
Risk assessment has also been done for the information security domain, where Bayesian networks have been used as attack graphs and attack trees to assess the cause-consequence relationships between various network states [6]. Security risk assessment and mitigation are considered the two major steps of risk management for IT infrastructure, and a genetic algorithm has been used to refine the outcomes obtained for risk mitigation.
3. BAYESIAN NETWORK AND SEMANTIC WEB FOR UNCERTAINTY REPRESENTATION
Four major works that represent uncertainty on the basis of Bayesian networks are discussed in the following sub-sections; each can be used for risk prediction. These models are related to the following three steps:
1. Identification of propositions and evidences.
2. Identification of random variables for nodes and their representation in the form of a dependence graph.
3. Assignment of a Conditional Probability Table (CPT) for each node.
3.1 BayesOWL
This is one of the first successful approaches to creating a Bayesian network using the terminologies of OWL [11]. For the identification of evidences and propositions, it uses the concepts of the ontology, which are represented as nodes in the underlying network; for the logical relations between concepts, viz. union, intersection, complementation, disjointness, equivalence, and inheritance, it introduces link nodes, which act as bridges between the given nodes.
A node representing a degree of truth in fractional form is considered a random variable.
For the creation of the conditional probability tables, propositional formulas are given for each logical relation, and a CPT is created both for nodes representing concepts of the taxonomy and for link nodes.
BayesOWL considers only the terminologies or vocabularies of the knowledge base and is unable to represent its assertions.
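For instance, the link node for the intersection relation carries a deterministic CPT. The sketch below is a hypothetical illustration of that single case (BayesOWL's propositional formulas also cover union, complementation, and the other relations listed above):

```python
from itertools import product

def intersection_cpt():
    """CPT of a link node C encoding C = A AND B (intersection of concepts).

    The table is deterministic: P(C=True | A, B) is 1 exactly when both
    parent concepts hold, and 0 otherwise.
    """
    return {(a, b): (1.0 if (a and b) else 0.0)
            for a, b in product([True, False], repeat=2)}

cpt = intersection_cpt()
```

Deterministic link-node CPTs like this one let ordinary Bayesian inference respect the logical structure of the ontology's terminology.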
3.2 OntoBayes [12]
Instead of creating a single graph, OntoBayes creates two: one for the
Bayesian network, i.e. the Bayesian graph, and another for the ontology
constructs, i.e. the OWL graph. It considers the triple form of OWL:
subject, predicate, and object (s, p, o). While the OWL graph uses all
three constructs, the Bayesian graph considers only one
predicate of the RDF language, <rdfs:dependsOn>, which represents
the dependence of successor nodes on predecessor nodes.
So, similar to BayesOWL, subjects and objects are used
as propositions and evidences for the Bayesian graph. Predecessor
nodes are represented as random variables, and CPTs are
created according to the specification of each random variable.
OntoBayes can represent only discrete random variables, not Boolean
random variables.
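A minimal sketch of how OntoBayes' single-predicate rule might filter triples into Bayesian-graph edges; the triples themselves are invented for illustration, and only the direction convention (predecessor object to successor subject) is taken from the description above.

```python
# Hypothetical (s, p, o) triples; only <rdfs:dependsOn> contributes
# edges to the Bayesian graph -- all other predicates stay in the OWL graph.
triples = [
    ("Production", "rdfs:dependsOn", "LabourAvailability"),
    ("Production", "rdfs:dependsOn", "PowerBackup"),
    ("LabourAvailability", "rdf:type", "owl:Class"),
]

# Edge direction: predecessor (object) -> successor (subject),
# since the successor depends on the predecessor.
edges = [(o, s) for s, p, o in triples if p == "rdfs:dependsOn"]
print(edges)  # [('LabourAvailability', 'Production'), ('PowerBackup', 'Production')]
```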
3.3 Using Netica API¹
Fenz (2012) used the Netica API¹ and Jena API² to create a security
ontology, where the Netica API is used as a plugin with Protege³
to select components and attributes representing uncertain features of the
ontology. Three basic steps are followed to
create the Bayesian network from the ontology:
(i) the determination of relevant influence factors,
(ii) the determination of relationships between the identified
influence factors, and
(iii) the calculation of the conditional probability tables for each
node in the Bayesian network.
In general, two different methods or a combination of both
are used to construct a Bayesian network: (i) automated construction
of Bayesian networks from existing data, and (ii) the domain expert-
based construction of Bayesian networks covering complex
knowledge domains with insufficient or non-existing empirical data
regarding relevant variables.
This proposal is mainly based on the second approach and
involves both manual and automatic operations. It
proposes a generic method for ontology-based Bayesian network
construction by (i) using ontology classes/individuals to create the
nodes of the Bayesian network, (ii) using ontology properties to link
the Bayesian network nodes, (iii) utilizing the ontological
knowledge base to support the conditional probability table
calculation for each node, and (iv) enriching the Bayesian network
with concrete findings from existing domain knowledge. The
developed method enables the semiautomatic construction and
modification of Bayesian networks based on existing ontologies.
The method is demonstrated on the example of threat probability
determination, which uses a security ontology as its underlying
formal knowledge base.
The method is based on four phases, of which the first is performed
manually and the remaining three automatically using the Netica API:
1. Selection of relevant classes, individuals, and
properties: the domain expert selects the classes, individuals, and
properties that are relevant to the problem and need to be represented
in the Bayesian network.
¹ http://www.norsys.com/netica_api.html
² http://jena.apache.org/
³ http://protege.stanford.edu/
International Journal of Computer Applications Technology and Research, Volume 2, Issue 3, 237 - 217, 2013. www.ijcat.com
2. Creation of the Bayesian network structure
3. Construction of the CPTs
4. Incorporation of existing knowledge facts
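The four phases can be summarised in a small sketch. Everything concrete here (the ontology layout, the class and property names, the uniform CPT initialisation, and the finding) is invented for illustration; it is not Fenz's actual API or security ontology.

```python
# Hedged sketch of the four phases described above (all names invented).

# Phase 1 (manual): expert-selected classes and properties.
selected_classes = ["Threat", "Vulnerability", "Control"]
selected_properties = [("Threat", "influences", "Vulnerability"),
                       ("Control", "influences", "Vulnerability")]

# Phase 2 (automatic): classes become nodes, properties become edges.
nodes = list(selected_classes)
edges = [(src, dst) for src, _, dst in selected_properties]

# Phase 3 (automatic): CPT skeletons sized by the number of parent
# states (two states per node here), to be filled from the knowledge base.
parents = {n: [s for s, d in edges if d == n] for n in nodes}
cpts = {n: {"rows": 2 ** len(parents[n]), "states": ["true", "false"]}
        for n in nodes}

# Phase 4 (automatic): incorporate known facts as hard evidence.
findings = {"Control": "true"}  # e.g. a safeguard is known to be in place

print(parents["Vulnerability"])  # ['Threat', 'Control']
```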
3.4 Multi-Entity Bayesian Network (MEBN) [2]
MEBN combines first-order logic with Bayesian networks to
create dynamic Bayesian networks, which can represent
uncertainty in complex situations and for different purposes
such as situation awareness, prediction, weather forecasting,
battlefield strategy management, etc. MEBN can be used to
create multiple instances of a whole Bayesian graph. It
can generate dynamic Bayesian networks and is able to
represent different types of uncertainty: structural
uncertainty, attribute uncertainty, number uncertainty, type
uncertainty, referential uncertainty, etc.
The components of MEBN are the MEBN Theory (MTheory), MEBN
Fragments (MFrags), context nodes, resident nodes, input nodes,
and probability distribution tables, as shown in Fig. 2.
An MTheory combines a set of MFrags and
represents a whole probabilistic ontology. Each MFrag
contains different types of nodes: context, resident, input,
and ordinary variables. Each MFrag represents a Bayesian
network, and the nodes of one MFrag can be re-used as input
nodes in other MFrags as well.
Figure 3. Uncertainty Reasoning Process for Semantic Technologies [10]
Figure 2. PR-OWL classes in detail [10]
For propositions and evidences, MEBN uses the uncertain
features of the model, which can have different states in either
binary or discrete form.
Instead of having nodes for random variables in only
one form, it uses mainly three forms of node. The first is the
context node, which can specify constraints on the random
variables, using first-order logic formulas, for reasoning or
decision-making in complex situations. The second, and most
important, is the resident node, which holds the different states
of a random variable and is associated with a Conditional
Probability Table (CPT). The last is the input node, which is a
resident node of another MFrag. Context nodes and input nodes
are connected by edges to form a directed acyclic graph (DAG),
where each edge represents the dependence of one node on another.
For the CPT, a probability is first specified for each
state of the random variable; these are then used to create the
joint probability distribution table for the resident node or
random variable. The joint probability distribution table is used
for decision-making.
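The step from per-state CPT entries to a joint distribution can be shown with a toy example. The node names and numbers below are invented for illustration only; an actual MEBN engine would do this over instantiated MFrags.

```python
# Toy example: one parent (Strike) and one resident node
# (LabourAvailability) with a CPT; combine them into a joint
# distribution, then marginalise for a decision.
from itertools import product

P_strike = {"yes": 0.2, "no": 0.8}        # parent's distribution
cpt_labour = {                            # P(LabourAvailability | Strike)
    "yes": {"low": 0.9, "high": 0.1},
    "no":  {"low": 0.2, "high": 0.8},
}

# Joint distribution over (Strike, LabourAvailability).
joint = {(s, l): P_strike[s] * cpt_labour[s][l]
         for s, l in product(P_strike, ["low", "high"])}

# Marginal used for a decision: P(LabourAvailability = "low").
p_low = sum(v for (s, l), v in joint.items() if l == "low")
print(round(p_low, 2))  # 0.34
```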
MEBN is realized in the Probabilistic Web Ontology
Language (PR-OWL) [13] to create probabilistic ontologies.
Modeling of uncertain situations can be done using MEBN and
PR-OWL, which are efficient for the expression of complex
situations. UnBBayes is a graphical user interface developed
in Java that implements PR-OWL to create a probabilistic
ontology [9]. It provides facilities to create and save a
knowledge base as well as to generate the Situation-Specific
Bayesian Network (SSBN). Bayesian inference and the SSBN
inference algorithm are used to find inconsistencies in
the ontology and to estimate the probability of an event.
PR-OWL 1.0 cannot support continuous random
variables, so PR-OWL 2.0 [10] was proposed to overcome the
major disadvantages of PR-OWL 1.0: 1) it cannot provide a
mapping to the properties of OWL, and 2) although it provides
the concept of meta-entities for the definition of complex types, it
does not have type compatibility with OWL. The major classes of
PR-OWL 1.0 are shown in Fig. 2.
4. A CASE STUDY FOR RISK PREDICTION FOR PRODUCTION OF AN ENTERPRISE
Prediction of risk has been done using different techniques [3]
[4] [5] [6] [14] [15]. Prediction is quite uncertain, so we have
come up with a new model using the Uncertainty Reasoning
Process for Semantic Technologies (URP-ST) [10] to predict
the production of an enterprise. Production of an enterprise
depends upon various causes, which are listed in the following
sub-sections.
4.1 Modeling of the System
Modeling of the system includes three steps, as shown in Fig. 3.
4.1.1 Identifying Requirements
Production of an enterprise depends upon several factors,
some of which are listed below as basic
requirements. These requirements are used to build
the probabilistic model that estimates the probability of an effect
on the production rate.
4.1.1.1 Labour availability
The most important requirement for production is human resources,
which is a very uncertain feature for any manufacturing
enterprise due to its dependence on the following attributes.
4.1.1.1.1 Probability of strike by labour
A strike by labour will severely affect the production rate. To
avoid this, the labour force's requirements should be managed
properly.
4.1.1.1.2 Contractual state of staff
Completion of the contracts of major technical staff will affect the
project, because new staff cannot be trained in a short period to
take over a running project.
4.1.1.1.3 Variability of staff joining and resigning
A lack of constant staff support will result in
poor product quality as well as time delays.
4.1.1.1.4 Labour health situation
Water and clean accommodation are major concerns for
good health, but not every enterprise can ensure them, because
they increase expenditure [4].
4.1.1.1.5 Labour union support
A labour union leader may incite the labour force to make unnecessary
demands.
4.1.1.2 Power backup
The availability of power backup depends upon the governmental
supply and the enterprise's own power sources.
4.1.1.2.1 Age of the power generator
If the power generator is very old, it will often need rest and
repair, and it should not carry extra overload.
4.1.1.2.2 Power grid availability
The availability of the power grid and the duration of supply should be
considered in scheduling management [15].
4.1.1.3 Transport availability
For the transportation of raw materials and produced items, the
enterprise needs transport. Some difficulties
may occur in transportation, which are listed below; these factors also
carry some uncertainty in their effect on the production rate.
4.1.1.3.1 Availability of owned vehicles
A larger number of owned vehicles lowers the
probability of the transportation facility being affected.
4.1.1.3.2 Rented vehicle availability
This is more susceptible to the means of
communication.
4.1.1.3.3 Public transport availability
Easy access to public cargo will be cost-effective.
4.1.1.4 Legal impacts
Legal activities may also hamper production; these
may include consumers' claims, copyright conflicts, etc.
4.1.1.4.1 License from the pollution control board
Periodic renewal of the license decreases the
possibility of disturbance.
4.1.1.4.2 License from manufacturing governing bodies
4.1.1.4.3 Enterprise and personnel legal issues
Legal issues may obstruct production.
4.1.1.4.4 Transportation permit
Permits should be available according to the requirements of
communication.
4.1.1.5 Raw material availability
Raw material should be available according to the demand
of supply, but this depends upon the suppliers'
capacities as well as on the financial relationship with the
suppliers.
4.1.1.5.1 Raw Material Suppliers (RMS) availability
Easy availability of RMS will have less effect on
production.
4.1.1.5.2 Communication facility between the production
unit and suppliers or warehouse
This should be trouble-free to minimize the effect on the production
rate.
4.1.1.5.3 Goodwill of the enterprise with RMS
Loss of goodwill is the loss of everything, so trust
should be maintained between the raw material suppliers
and the enterprise.
4.1.1.5.4 Alternatives to RMS
A limited number of RMS will affect production
severely.
4.1.1.6 Market demand
The second most important aspect is the demand for
commodities in the market; without it, production of items
is obviously useless.
4.1.1.6.1 Share market situation
The volume of production will be regulated according
to the share market situation.
4.1.1.6.2 Market agents' performance and dealers' interest
Production will be affected by the interest of
dealers and the performance of market agents.
4.1.1.6.3 Competitors' production quantity, quality, and rate
For an enterprise, considering competitors' market
availability, quality, and rates is one of the most influential
measures.
4.1.1.7 Some other factors
These factors can also affect the output of an enterprise.
4.1.1.7.1 Social impact
In some enterprises, social impact should also be
considered; it depends upon the type of waste
generated by the production unit and the location of the
enterprise, e.g. in a dense or sparse population.
4.1.1.7.2 Share market situation
The share market situation will affect the cost and price
of the product.
4.1.1.7.3 Inflation rate
This is an obvious condition for share market situations.
4.1.1.7.4 Arrangement schedules of machines and labour
Efficient scheduling of machines and labour will
increase the production rate.
4.1.1.7.5 Machinery availability
According to the demand of production, the enterprise
should have enough tools and machinery for a good
production rate.
4.1.1.7.6 Machine fault rate
This depends upon the age of the machinery and the load
on it.
4.1.2 Analysis and Design
The proposed model should be able to identify the total
probable risk upon specifying the probabilities of the various
factors listed in the requirements section. The
relationships among these factors can be seen in the
implementation section. The proposed model should
achieve the following objectives:
1. Calculating the probability of risk of unavailability of
labour and its effect on final production.
2. Calculating the probability of risk of unavailability of
power backup.
3. Calculating the probability of risk of unavailability of
enough transport facilities.
4. Calculating the probability of the effect of legal issues.
5. Calculating the probability of risk of unavailability of
raw materials, etc.
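One simple way to combine such factor-level risks into an overall production risk is a noisy-OR-style aggregation; this is only a hedged sketch with invented probabilities, not the CPTs actually used in the paper's MFrags, which are assigned by the modeler in UnBBayes.

```python
# Illustrative factor risks (invented numbers, not from the paper).
factor_risk = {
    "labour_unavailable":    0.10,
    "power_backup_failure":  0.05,
    "transport_unavailable": 0.08,
    "legal_issue":           0.02,
    "raw_material_shortage": 0.07,
}

# Noisy-OR: production runs smoothly only if every factor is
# trouble-free, assuming the factors act independently.
p_no_risk = 1.0
for p in factor_risk.values():
    p_no_risk *= (1.0 - p)
overall_risk = 1.0 - p_no_risk
print(f"{overall_risk:.1%}")  # 28.3%
```

Note how the combined risk (about 28%) is much larger than any single factor's risk, which is why a model covering all factors at once is useful.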
4.1.3 Implementation
For the implementation, UnBBayes [9] has been
used, which provides a graphical user interface to create
a complete probabilistic ontology on the basis of Multi-
Entity Bayesian Networks. It provides a graphical interface
for each component of MEBN, such as entities, MFrags,
context nodes, resident nodes, input nodes, MTheory, etc.,
which can be dragged and dropped onto the MFrag
area. The identified entities are shown in Fig. 4.
For each of the requirements, a separate MFrag
has been created, but due to space constraints, only a few
of them are shown in Fig. 5. These MFrags can be
classified into two categories:
a) MFrags without dependence on others
i) Strike
ii) StaffContracts
iii) LabourHealth
iv) Variability of Staff
b) MFrags with dependence on other MFrags
i) Labour Availability
ii) riskOnRawMaterialAvailability
iii) Transport Availability
iv) Legal Impacts
v) PowerBackup
vi) Market Demand
4.2 Creating Instances, Findings, and
Defining the Probability Distribution Table for
Each Resident Node
Fig. 6 shows, horizontally, the instances of the Person entity,
the findings of RawMaterialAvailabilities, the findings of
EasyCommunication, and GoodwillStatus.
These findings are used as knowledge bases. A
probability distribution table is specified for each
resident node, where the probability of each state
represents the expected degree of risk for each factor.
The probability distribution table can be manipulated using
first-order logic formulas, as shown in the last snapshot of
Fig. 6.
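The effect of such a first-order formula on a distribution can be illustrated with a toy rule; the quantified condition, supplier findings, and numbers below are invented for the example and are not the formula shown in Fig. 6.

```python
# Hypothetical context-dependent probability assignment in the spirit
# of the first-order formulas above: if any supplier instance has bad
# goodwill, raise the raw-material risk.
suppliers = {"S1": "good", "S2": "bad", "S3": "good"}  # invented findings

def raw_material_risk_cpt(goodwill):
    # Quantified condition over all supplier instances
    # (an existential "any supplier has bad goodwill").
    if any(g == "bad" for g in goodwill.values()):
        return {"high": 0.6, "low": 0.4}
    return {"high": 0.1, "low": 0.9}

print(raw_material_risk_cpt(suppliers)["high"])  # 0.6
```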
4.3 Reasoning with the Knowledge Base
Reasoning is performed to predict the risk, which
generates the situation-specific Bayesian network shown
in Fig. 7.
Figure 4. Identified Entities in the proposed model
Figure 5. Probabilistic Ontology (including the riskOnRawMaterialAvailability MFrag)
Figure 6. Entity Instances, Knowledge Base and CPT with First-Order Logic formula
[Figure 7 depicts the situation-specific Bayesian network: LabourAvailability (with parents StrikeEffect, StaffContractPosition, LabourHealthState, StaffVariability, UnionSupport), PowerBackup (GeneratorAge, GridAvailability), TransportAvailability (Owned, Rented, PublicCargo), and LegalImpacts (ManufacturingLicense, PatentLicense, PollutionBoardLicense, LicenseClearance), all feeding into the Estimated Risk (%) node.]
Figure 7. Situation-Specific Bayesian Network
5. CONCLUSION
This paper has presented a probabilistic model to represent the
probable risk inherent in the major causes affecting the production of an
enterprise. The model can be applied in other domains as
well. MEBN and PR-OWL have been used to realize the
model, which is embedded in the Java open-source software
UnBBayes.
6. ACKNOWLEDGMENTS
We would like to thank Dr. Rommel N. Carvalho and Dr.
Paulo Cesar G. da Costa for giving permission to use
contents from their Ph.D. theses and papers.
7. REFERENCES
Journal Papers:
[1] S. Fenz, An ontology-based approach for constructing Bayesian networks, Data & Knowledge Engineering, 73, 2012, 73-88.
[2] K. B. Laskey, MEBN: A language for first-order Bayesian knowledge bases, Artificial Intelligence, 172 (2008), 140-178.
[3] J. Zhang, J. Pu, J. D. McCalley, H. Stern, and W. A. Gallus, Jr., A Bayesian approach for short-term transmission line thermal overload risk assessment, IEEE Transactions on Power Delivery, Vol. 17, No. 3, July 2002.
[4] I. M. Khadam and J. J. Kaluarachchi, Multi-criteria decision analysis with probabilistic risk assessment for the management of contaminated ground water, Environmental Impact Assessment Review, 23 (2003), 683-721.
[5] W. Li, J. Zhou, K. Xie, and X. Xiong, Power system risk assessment using a hybrid method of fuzzy set and Monte Carlo simulation, IEEE Transactions on Power Systems, Vol. 23, No. 2, May 2008.
[6] N. Poolsappasit, R. Dewri, and I. Ray, Dynamic security risk management using Bayesian attack graphs, IEEE Transactions on Dependable and Secure Computing, Vol. 9, No. 1, January/February 2012.
Books:
[7] I. Sommerville, Software Engineering, 8th Ed., Pearson Education, 2009, p. 129.
Chapters in Books:
[8] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 2nd Ed., Pearson Education, 2003, Chapter 13.
[9] R. Carvalho, K. Laskey, P. Costa, M. Ladeira, L. Santos, and S. Matsumoto, UnBBayes: Modeling uncertainty for plausible reasoning in the Semantic Web, in: Semantic Web, G. Wu (Ed.), ISBN 978-953-7619-54-1, InTech, 2010. Available from: http://www.intechopen.com/books/semanticweb/unbbayes-modeling-uncertainty-for-plausible-reasoning-in-the-semantic-web
Theses:
[10] R. N. Carvalho, Probabilistic Ontology: Representation and Modeling Methodology, Ph.D. thesis, George Mason University, 2011.
[11] Z. Ding, BayesOWL: A Probabilistic Framework for Semantic Web, Doctoral dissertation, Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA, 2005, p. 168.
[12] Y. Yi, A Framework for Decision Support Systems Adapted to Uncertain Knowledge, Ph.D. thesis, Fakultat fur Informatik der Universitat Fridericiana zu Karlsruhe, 2007.
[13] P. C. G. Costa, Bayesian Semantics for the Semantic Web, Ph.D. thesis, George Mason University, 2005.
Proceedings Papers:
[14] S.-W. Lee, Probabilistic risk assessment for security requirements: A preliminary study, Fifth International Conference on Secure Software Integration and Reliability Improvement, 2011.
[15] K. L. Lo and Y. K. Wu, Risk assessment due to local demand forecast uncertainty in the competitive supply industry, IEE Proc.-Gener. Transm. Distrib., Vol. 150, No. 5, September 2003.
[16] E. Caruso, M. Dicorato, A. Minoia, and M. Trovato, Supplier risk analysis in the day-ahead electricity market, IEE Proc.-Gener. Transm. Distrib., Vol. 153, No. 3, May 2006.