Many theoretical works and tools in the epidemiological field reflect the growing emphasis that both public health practitioners and the scientific community place on decision-making tools. Indeed, in epidemiology, modeling tools have proven very important in supporting decision-making. However, the variety and large volume of data, together with the nature of epidemics, lead us to seek solutions that relieve the heavy burden imposed on both experts and developers.
Model validation is one of the important steps of modeling and simulation. It refers to the process of determining how well a model corresponds to the system it is intended to represent. So the question is: what happens if the model is invalid? Do we need to build another one, or just optimize the existing one?
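As a toy illustration of this validation question, one can score a model against observations with a simple error metric and let a tolerance decide between recalibrating and rebuilding. The data, threshold, and verdict strings below are invented for illustration:

```python
import math

def rmse(observed, predicted):
    """Root-mean-square error between observed data and model output."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

# Hypothetical weekly case counts vs. model predictions
observed = [1.0, 2.0, 3.0]
predicted = [1.1, 1.9, 3.2]

error = rmse(observed, predicted)
# A tolerance chosen by the modeler decides between the two options
if error < 0.5:
    verdict = "recalibrate the existing model"
else:
    verdict = "consider building a new model"
```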
Practical Methods To Overcome Sample Size Challenges – nQuery
Watch the video at: https://www.statsols.com/webinars/practical-methods-to-overcome-sample-size-challenges
In this webinar hosted by Ronan Fitzpatrick - Head of Statistics and nQuery Lead Researcher at Statsols - we will examine some of the most common practical challenges you will experience while calculating sample size for your study. These challenges will be split into two categories:
1. Overcoming Sample Size Calculation Challenges
(Survival Analysis Example)
We will examine practical methods to overcome common sample size calculation issues by focusing on one of the more complex areas for sample size determination: survival analysis. We will cover difficulties and potential issues surrounding challenges such as:
Drop Out: How to deal with expected dropouts or censoring. We compare the simple loss-to-follow-up adjustment with integrating a dropout process into the sample size model.
Planning Uncertainty: How best to deal with the inevitable uncertainty at the planning stage? We examine how best to apply sensitivity analysis and Bayesian approaches to explore the uncertainty in your sample size calculations.
Choosing the Effect Size: Various approaches and interpretations exist for how to find the effect size value. We examine those contrasting interpretations, determine the best method, and discuss how to deal with parameterization options.
2. Overcoming Study Design Challenges
(Vaccine Efficacy Example)
The Randomised Controlled Trial (RCT) is considered the gold standard in trial design in drug development. However, there are often practical impediments which mean that adjustments or pragmatic approaches are needed for some trials and studies.
We will examine practical methods to overcome common study design challenges and how these affect your sample size calculations. In this webinar, we will use common issues in vaccine study design to examine difficulties surrounding issues such as:
Case-Control Analysis: We will examine how to deal with study constraints and how to deal with analyses done during an observational study.
Alternative Randomization Methods: How best to address randomization in your vaccine trial design when full randomization is difficult, expensive or impractical. We examine how sample size calculations are affected with cluster or Mendelian randomization.
Rare Events: How does an outcome being rare affect the types of study design and statistical methods chosen in your study?
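The ideas in part 1 can be sketched numerically. The following is a rough illustration (not nQuery's implementation) using Freedman's approximation for the events required by a two-arm log-rank test, together with a simple loss-to-follow-up inflation for dropout; all parameter values and function names are invented:

```python
import math
from statistics import NormalDist

def events_required(hazard_ratio, alpha=0.05, power=0.80):
    """Freedman's approximation: events needed for a two-arm log-rank test
    with 1:1 allocation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    hr = hazard_ratio
    return (z_a + z_b) ** 2 * ((hr + 1) / (hr - 1)) ** 2

def sample_size(hazard_ratio, p_event, dropout=0.0):
    """Translate required events into subjects, inflating for expected dropout."""
    d = events_required(hazard_ratio)
    n = d / p_event                       # subjects needed to observe d events
    return math.ceil(n / (1 - dropout))   # simple loss-to-follow-up inflation
```

For a hazard ratio of 0.5 this gives roughly 71 events; with a 60% event probability and 10% dropout, about 131 subjects in total.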
How to establish and evaluate clinical prediction models – Statswork
A clinical prediction model can be used in various clinical contexts, including screening for asymptomatic illness, forecasting future events such as disease, and assisting doctors in their decision-making and health education. Despite the positive effects of clinical prediction models on practice, prediction modeling is a difficult process that necessitates meticulous statistical analysis and sound clinical judgment. Statswork offers statistical services as per the requirements of the customers. When you order statistical services at Statswork, we promise you the following: always on time, outstanding customer support, and high-quality subject-matter experts.
Read More With Us: https://bit.ly/3dxn32c
Why Statswork?
Plagiarism Free | Unlimited Support | Prompt Turnaround Times | Subject Matter Expertise | Experienced Bio-statisticians & Statisticians | Statistics across Methodologies | Wide Range of Tools & Technologies Supports | Tutoring Services | 24/7 Email Support | Recommended by Universities
Contact Us:
Website: www.statswork.com
Email: info@statswork.com
United Kingdom: 44-1143520021
India: 91-4448137070
WhatsApp: 91-8754446690
A non-technical overview of sample size calculation: why it is necessary, with brief examples of how to approach the problem and why it is useful to think these calculations through.
A comment in Nature, signed by over 800 researchers, called on researchers to rise up against statistical significance. This was followed by a special issue of The American Statistician aimed at halting the use of the term "statistically significant", and new guidelines for statistical reporting in the New England Journal of Medicine. These slides discuss the broader context of the "p-value crisis" and alternatives for communicating the conclusions of statistical analyses.
Target audience: Medical researchers; Scientists involved in conducting or interpreting analyses and communicating the results of scientific research, as well as readers of scientific publications.
Learning objectives:
To understand the context of the reproducibility crisis in medical research.
To learn about problems with p-values and alternatives to report findings.
To understand how (not) to interpret significant and insignificant findings.
To learn how to communicate research findings in a modest, thoughtful, and transparent way.
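One concrete alternative to a bare significant/insignificant verdict is to report the estimate together with its confidence interval. A minimal sketch with invented data, using a normal approximation:

```python
import math
from statistics import NormalDist, mean, stdev

def mean_diff_ci(a, b, level=0.95):
    """Report a mean difference with a (normal-approximation) confidence
    interval instead of a bare significant/insignificant verdict."""
    diff = mean(a) - mean(b)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return diff, (diff - z * se, diff + z * se)

# Invented measurements for two groups
treatment = [5.1, 4.9, 5.6, 5.3, 4.8, 5.2]
control = [4.7, 4.5, 5.0, 4.9, 4.6, 4.8]
diff, (lo, hi) = mean_diff_ci(treatment, control)
```

Reporting "difference 0.40, 95% CI (lo, hi)" communicates both the size and the uncertainty of the effect.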
How to handle discrepancies while you collect data for systematic review – Pubrica
1. Population specification error:
2. Sample error:
3. Selection error:
4. Non-response error:
Continue Reading: https://bit.ly/36i7iYo
For our services: https://pubrica.com/services/research-services/systematic-review/
Why Pubrica:
When you order our services, we promise you the following: plagiarism-free work | always on time | 24/7 customer support | written to international standards | unlimited revision support | medical writing experts | publication support | biostatistical experts | high-quality subject-matter experts.
Contact us:
Web: https://pubrica.com/
Blog: https://pubrica.com/academy/
Email: sales@pubrica.com
WhatsApp : +91 9884350006
United Kingdom: +44-1618186353
An efficient implementation for key management technique using smart card and... – ijctcm
Elliptic curve cryptosystems have become popular because of the reduced number of key bits required in comparison to other cryptosystems. In existing work, ECC techniques are used to encrypt data in order to provide security over a network. ECC satisfies smart card requirements in terms of memory, processing, and cost, and existing ECC cryptographic algorithms work with smart card techniques. Many existing approaches combine smart cards with various techniques and produce efficient results. In this review paper, we describe a smart card technique using the ECIES cryptographic algorithm, that is, key management using smart cards and ECIES. ECC is based on the discrete logarithm problem over points on an elliptic curve, and ECIES is a standard elliptic-curve encryption scheme. The smart card uses the ECIES technique for key management.
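The discrete-logarithm foundation mentioned above can be illustrated with a toy Diffie-Hellman exchange over a tiny curve. This is only a sketch of the key-agreement step that ECIES builds on; the curve, base point, and secrets are illustrative and far too small to be secure:

```python
# Toy elliptic-curve Diffie-Hellman over y^2 = x^3 + 2x + 3 (mod 97).
# Real deployments use standardized curves plus authenticated encryption.
P_MOD, A = 97, 2
G = (3, 6)  # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def ec_add(p, q):
    """Add two points; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, p):
    """Double-and-add scalar multiplication."""
    result, addend = None, p
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

alice_secret, bob_secret = 13, 7
alice_pub = ec_mul(alice_secret, G)
bob_pub = ec_mul(bob_secret, G)
shared_a = ec_mul(alice_secret, bob_pub)   # 13 * (7 * G)
shared_b = ec_mul(bob_secret, alice_pub)   # 7 * (13 * G)
```

Both parties arrive at the same shared point; recovering a secret from a public point is the elliptic-curve discrete logarithm problem.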
Model predictive control (MPC) is an advanced control algorithm that has been very successful in the control industries due to its capability of handling multi-input multi-output (MIMO) systems with physical constraints. In MPC, the control action is obtained by solving a constrained optimization problem at every sample interval to minimize the difference between the predicted outputs and the reference value, using minimum control energy and satisfying the constraints of the physical system. The quadratic programming (QP) problem is solved using the QPKWIK method, which improves on the active set method. This paper considers the system architecture and design for the implementation of online MPC on an FPGA to control a DC motor. The implementation is completed on a Spartan-6 Nexys3 FPGA chip using a simulation environment (the EDK tool), and a comparison between the MPC and PID controllers is also established.
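The receding-horizon principle described above can be illustrated with a deliberately simplified sketch: a scalar plant, a bounded input grid searched by brute force in place of the paper's QPKWIK QP solver, and only the first input of each optimized sequence applied:

```python
from itertools import product

# Minimal receding-horizon sketch of MPC: at each step, search a small grid
# of input sequences for the one minimizing a finite-horizon cost, apply
# only its first move, and repeat. The plant, grid, and weights are invented.
U_GRID = [-1.0, -0.5, 0.0, 0.5, 1.0]   # bounded control: |u| <= 1
HORIZON = 3

def predict_cost(x, us, reference):
    cost = 0.0
    for u in us:
        x = x + u                      # toy plant: x[k+1] = x[k] + u[k]
        cost += (x - reference) ** 2 + 0.1 * u ** 2
    return cost

def mpc_step(x, reference):
    best = min(product(U_GRID, repeat=HORIZON),
               key=lambda us: predict_cost(x, us, reference))
    return best[0]                     # apply only the first input

x, reference = 0.0, 1.0
for _ in range(5):
    x = x + mpc_step(x, reference)
```

The state reaches the reference and then holds it, because any nonzero input would add control cost without reducing tracking error.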
ANTI-SYNCHRONIZATION OF HYPERCHAOTIC PANG AND HYPERCHAOTIC WANG-CHEN SYSTEMS ... – ijctcm
Hyperchaotic systems are chaotic systems having more than one positive Lyapunov exponent and they have
important applications in secure data transmission and communication. This paper applies the active control method for the anti-synchronization of identical and different hyperchaotic Pang systems (2011) and hyperchaotic Wang-Chen systems (2008). The main results are proved with the stability theorems of Lyapunov stability theory, and numerical simulations are plotted using MATLAB to show the anti-synchronization of the hyperchaotic systems addressed in this paper.
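The active control idea can be sketched in one dimension: for a master x' = f(x) and slave y' = f(y) + u, the control u = -f(y) - f(x) - k(y + x) makes the anti-synchronization error e = y + x obey e' = -ke, so e decays to zero. The nonlinearity f and the gains below are stand-ins, not the 4-D hyperchaotic systems of the paper:

```python
# Scalar sketch of anti-synchronization by active control.
def f(x):
    return x * (1.0 - x * x)   # invented stand-in nonlinearity

k, dt = 5.0, 0.001             # control gain, Euler step
x, y = 0.8, 0.3                # master and slave initial states
for _ in range(5000):
    u = -f(y) - f(x) - k * (y + x)   # active control law
    x += dt * f(x)                    # master dynamics
    y += dt * (f(y) + u)              # slave dynamics with control
error = x + y                         # anti-synchronization error e = y + x
```

Substituting the control law into e' = x' + y' leaves exactly e' = -ke, which is the same cancellation-plus-linear-feedback structure used in the paper's Lyapunov proof.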
Comparison between PID controllers for Gryphon robot optimized with neuro-fuz... – ijctcm
In this paper, three intelligent evolutionary optimization approaches to designing a PID controller for a Gryphon robot are presented and compared to the results of an applied neuro-fuzzy system. The three approaches are artificial bee colony (ABC), the shuffled frog leaping (SFL) algorithm, and particle swarm optimization (PSO). The design goal is to minimize the integral absolute error and improve the transient response by minimizing the overshoot, settling time, and rise time of the step response. An objective function of these indexes is defined and minimized by applying the four optimization methods mentioned above. After optimization of the objective function, the optimal parameters for the PID controller are obtained. Simulation results show that the FNN has a remarkable effect on decreasing the settling time and rise time and on eliminating steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing overshoot. PSO, on the other hand, appears to perform well on steady-state error only. In the steady state, all of the methods react robustly to the disturbance, but the FNN shows more stability in the transient response.
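The tuning setup can be sketched as follows, with plain random search standing in for ABC, SFL, and PSO, and a first-order plant standing in for the Gryphon robot; the IAE cost mirrors the objective described above. All gains, ranges, and plant parameters are invented:

```python
import random

def iae(kp, ki, kd, steps=500, dt=0.01):
    """Simulate a first-order plant under PID control for a unit step
    reference and return the integral absolute error (IAE)."""
    y, integral, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)              # toy plant: tau = 1
        prev_err = err
        cost += abs(err) * dt
    return cost

# Plain random search over gain space stands in for ABC/SFL/PSO here.
random.seed(1)
best_gains = min(
    ((random.uniform(0, 10), random.uniform(0, 5), random.uniform(0, 1))
     for _ in range(200)),
    key=lambda g: iae(*g),
)
best_cost = iae(*best_gains)
```

Any of the metaheuristics in the paper slots in where the random search is; only the candidate-generation strategy changes, not the cost function.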
PAINTING TOOL CONTROL AND SCENARIO FOR GONDOLA-TYPED FACADE MA... – ijctcm
We have researched a gondola-typed building façade maintenance robot system. Its main goal is to paint reinforced concrete walls as fast and as widely as possible. We applied a horizontal array of painting spray nozzles with uniform heights. In order to apply them to the gondola robot, a painting scenario is designed. Basically, when the gondola robot goes from the bottom to the top of the wall, wall-shape recognition is executed. While it goes down, the nozzles are turned on and off in accordance with the wall shape.
Mechanization and error analysis of aiding systems in civilian and military v... – ijctcm
In the present scenario, GPS is widely used to provide extremely accurate position information for navigation. Where GPS does not give continuous localization, in environments where signal blockages are present, the Inertial Navigation System (INS) comes into action. Because of the sensors present in the INS and the time-integration process, errors accumulate over time. Hence, an aiding system is integrated with the INS. The aim of this paper is to model VMS and radar and to aid the INS with them in order to overcome its errors. VMS is aided to the INS to achieve acceptable accuracy and ease of implementation, much needed in civilian navigation. Different trajectories are generated to offer solutions in a practical scenario. For highly accurate positioning in military navigation, a reliable aiding system, radar, has been opted for. A Kalman filter is designed and modeled as the integrating element in INS/RADAR to provide an optimal estimate of the navigation solutions. An error analysis has been done for both the INS-aided-VMS and INS-aided-radar systems. The navigation performance of the VMS and radar aiding systems is compared and their merits brought out. We also give readers a more honest insight into the demand for an aiding system in different environments, based on various simulation results.
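The aiding principle can be sketched in one dimension: dead reckoning with a biased velocity drifts without bound, while a Kalman filter fusing noisy position fixes keeps the error bounded. The noise levels and the bias below are invented:

```python
import random

# 1-D sketch of aiding dead reckoning with an absolute position sensor.
random.seed(0)
dt, vel, bias = 0.1, 1.0, 0.05          # true velocity; dead-reckoning bias
q, r = 0.01, 0.25                        # process / measurement noise variances

x_est, p = 0.0, 1.0                      # state estimate and its variance
truth, dead_reckoning = 0.0, 0.0
for _ in range(200):
    truth += vel * dt
    dead_reckoning += (vel + bias) * dt  # drifts without aiding, like an unaided INS
    # predict with the biased velocity, then correct with a position fix
    x_est += (vel + bias) * dt
    p += q
    z = truth + random.gauss(0.0, r ** 0.5)
    k_gain = p / (p + r)
    x_est += k_gain * (z - x_est)
    p *= (1.0 - k_gain)

drift_error = abs(dead_reckoning - truth)   # grows linearly with time
aided_error = abs(x_est - truth)            # stays bounded by the fusion
```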
Automatic classification of Bengali sentences based on sense definitions pres... – ijctcm
Based on the sense definitions of words available in the Bengali WordNet, an attempt is made to classify Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of a particular ambiguous lexical item is collected from the Bengali WordNet. On an experimental basis, we have used the Naive Bayes probabilistic model as a classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render the sentences with different meanings. In our experiment we have achieved around 84% accurate results on sense classification over the total input sentences. We have analyzed the residual sentences that did not comply with our experiment and affected the results, and note that in many cases wrong syntactic structures and sparse semantic information are the main hurdles in the semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.
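A miniature version of the Naive Bayes sense classifier can be sketched as follows; the English "bank" sentences are illustrative stand-ins for the Bengali WordNet data, with Laplace smoothing over per-sense word counts:

```python
import math
from collections import Counter

# Toy training data: sentences labeled with the sense of the ambiguous word.
train = [
    ("the river bank was covered in water", "river"),
    ("fish swim near the muddy bank of the stream", "river"),
    ("the bank approved my loan and deposit", "money"),
    ("she withdrew money from the bank account", "money"),
]

counts = {sense: Counter() for _, sense in train}
priors = Counter(sense for _, sense in train)
for text, sense in train:
    counts[sense].update(text.split())

def classify(text):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense),
    with add-one (Laplace) smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for sense, prior in priors.items():
        total = sum(counts[sense].values())
        score = math.log(prior / len(train))
        for w in text.split():
            score += math.log((counts[sense][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = sense, score
    return best
```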
In this paper, we present a survey on Word Sense Disambiguation (WSD). In nearly all major languages around the world, research in WSD has been conducted to different extents. We survey the different approaches adopted in different research works, the state of the art in performance in this domain, recent works in different Indian languages, and finally work in the Bengali language. We also survey the different competitions in this field and the benchmark results obtained from those competitions.
Dynamic modelling and optimal control scheme of wheel inverted pendulum for mo... – ijctcm
The unstable wheel inverted pendulum is modelled and controlled by deploying Kane's method and an optimal partial-state PID control scheme. A correct derivation of the nonlinear mathematical model of a wheel inverted pendulum is obtained using a proper definition of the geometric context of the active and inertia forces. The model is then decoupled into two linear subsystems, namely the balancing and heading subsystems. Afterward, a partial-state PID controller is proposed and formulated with a quadratic optimal regulation tuning method. This enables the partial-state PID to be optimally tuned and guarantees a satisfactory level of state error and a realistic utilization of torque energy. Simulation and numerical analyses are carried out to analyse the system's stability and to determine the performance of the proposed controller for the mobile wheel inverted pendulum application.
An Approach To Automatic Text Summarization Using Simplified Lesk Algorithm A... – ijctcm
Text summarization is a way to produce a text which contains the significant portion of the information of the original text(s). Different methodologies have been developed so far that depend on several parameters to find the summary, such as the position, format, and type of the sentences in an input text, the formats of different words, the frequency of a particular word in a text, and so on. But these parameters vary across languages and input sources; as a result, the performance of the algorithm is greatly affected. The proposed approach summarizes a text without depending on those parameters. Here, the relevance of the sentences within the text is derived by the Simplified Lesk algorithm and WordNet, an online dictionary. This approach is not only independent of the format of the text and the position of a sentence in the text; since the sentences are first arranged according to their relevance before the summarization process, the percentage of summarization can also be varied according to need. The proposed approach gives around 80% accurate results on 50% summarization of the original text with respect to the manually summarized result, performed on 50 texts of different types and lengths. We have achieved satisfactory results even up to 25% summarization of the original text.
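The relevance-ranking step can be sketched with plain word overlap standing in for the WordNet gloss overlap of the Simplified Lesk algorithm; sentences are scored, sorted by relevance, and the top fraction kept. The example text is invented:

```python
def summarize(sentences, keep=0.5):
    """Score each sentence by the words it shares with the rest of the text
    (a stand-in for Lesk-style gloss overlap), keep the top fraction, and
    return them in their original order."""
    word_sets = [set(s.lower().split()) for s in sentences]
    scores = []
    for i, words in enumerate(word_sets):
        others = set().union(*(w for j, w in enumerate(word_sets) if j != i))
        scores.append((len(words & others), i))
    top = sorted(scores, reverse=True)[: max(1, int(len(sentences) * keep))]
    return [sentences[i] for _, i in sorted(top, key=lambda t: t[1])]

sentences = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "quantum physics is hard",
]
summary = summarize(sentences, keep=0.67)
```

The off-topic sentence shares no words with the others and is dropped, matching the paper's idea that relevance ordering, not position or format, drives the selection.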
In this paper, block-oriented systems with linear parts based on Laguerre functions are used to approximate the dynamics of a cone crusher. An adaptive recursive least squares algorithm is used for identification of the Laguerre model. Various structures of Hammerstein, Wiener, and Hammerstein-Wiener models are tested, and the MATLAB simulation results are compared. The mean square error is used for model validation. It has been found that the Hammerstein-Wiener model with orthonormal basis functions improves the quality of approximation of the plant dynamics; the mean square error for this model is 11% on average throughout the considered range of external disturbance amplitudes. The analysis also showed that the Wiener model cannot provide sufficient approximation accuracy for the cone crusher dynamics: during the process it is unstable due to its high sensitivity to disturbances on the output. The Hammerstein-Wiener model will be used in the design of a nonlinear model predictive control application.
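The recursive least squares estimator at the core of the identification can be sketched in the scalar case, identifying a single gain online; the true gain, input signal, and forgetting factor below are invented, and the full method applies this recursion to Laguerre filter outputs:

```python
# Scalar recursive least squares (RLS): identify theta in y = theta * u.
theta_true = 2.5               # invented "plant" gain to recover
theta_hat, p = 0.0, 100.0      # initial estimate and covariance
lam = 0.99                     # forgetting factor

for k in range(1, 51):
    u = 1.0 if k % 2 else -1.0          # persistently exciting input
    y = theta_true * u                  # noise-free measurement
    gain = p * u / (lam + u * p * u)    # RLS gain
    theta_hat += gain * (y - theta_hat * u)   # update on prediction error
    p = (p - gain * u * p) / lam              # covariance update
```

The estimate converges to the true gain; in the vector case u becomes the regressor of Laguerre filter outputs and p a covariance matrix, but the update has the same shape.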
Robust second order sliding mode control for a quadrotor considering motor dy... – ijctcm
In this paper, a robust second-order sliding mode control (SMC) for controlling a quadrotor with uncertain parameters is presented, based on high-order sliding mode control (HOSMC). A controller based on the HOSMC technique is designed for trajectory tracking of a quadrotor helicopter, taking motor dynamics into consideration. The main subsystems of the quadrotor (i.e., position and attitude) are stabilized using the HOSMC method. The performance and effectiveness of the proposed controller are tested in a simulation study taking into account external disturbances together with the motor dynamics. Simulation results show that the proposed controller efficiently eliminates the disturbance effect on the position and attitude subsystems, so it can be used in real-time applications.
Wall shape recognition using limit switch module – ijctcm
We have researched a gondola-typed robot system for building façade maintenance. Its main application is painting building façades. To apply the robot system to the painting tool, recognition of the building wall shape should come first. In this paper, we propose a limit switch module as a mechanical sensing method. In experiments, we applied the proposed module to a window and an obstacle on the wall, together with an attitude reference sensor (ARS) and a laser height (distance) sensor.
WORK BREAKDOWN STRUCTURE: A TOOL FOR SOFTWARE PROJECT SCOPE VERIFICATION – ijseajournal
Software project scope verification is a very important process in project scope management, and it needs to be performed properly and thoroughly so as to avoid project rework and scope creep. Moreover, software scope verification is crucial in the process of delivering exactly what the customer requested and minimizing project scope changes. A well-defined software scope eases the process of scope verification and contributes to project success. Furthermore, a deliverable-oriented WBS provides a road map to a well-defined software scope of work. It is on this basis that this paper extends the use of the deliverable-oriented WBS to the scope verification process. This paper argues that a deliverable-oriented WBS is a tool for software scope verification.
DESIGN OF A MULTI-AGENT SYSTEM ARCHITECTURE FOR THE SCRUM METHODOLOGY – ijseajournal
The objective of this paper is to design a multi-agent system architecture for the Scrum methodology.
Scrum is an iterative, incremental framework for software development which is flexible, adaptable and
highly productive. An agent is a system situated within and a part of an environment that senses the
environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the
future (Franklin and Graesser, 1996). To our knowledge, this is the first attempt to include software agents in
the Scrum framework. Furthermore, our design covers all the stages of software development. Alternative
approaches were only restricted to the analysis and design phases. This Multi-Agent System (MAS)
Architecture for Scrum acts as a design blueprint and a baseline architecture that can be realised into a
physical implementation by using an appropriate agent development framework. The development of an
experimental prototype for the proposed MAS Architecture is in progress. It is expected that this tool will
provide support to the development team who will no longer be expected to report, update and manage
non-core activities daily.
The main concept of neutrosophy is that any idea has not only a certain degree of truth but also a degree of falsity and indeterminacy in its own right. Although there are many applications of neutrosophy in different disciplines, the incorporation of its logic into education and psychology is rather scarce compared to other fields. In this study, the Satisfaction with Life Scale was converted into neutrosophic form and the results were compared in terms of confirmatory analysis by convolutional neural networks. To sum up, two different formulas are proposed at the end of the study to determine the validity of any scale in terms of neutrosophy. While the Lawshe methodology concentrates on the dominating opinions of experts limited to a one-dimensional data space, in the neutrosophic analysis the options can be placed in a three-dimensional data space. The effect may be negligible for a small number of items and participants, but it may create enormous changes for a large number of items and participants. Secondly, the degree of freedom of the Lawshe technique is only 1, whereas the degree of freedom of the neutrosophic scale is 3, so researchers employ three separate parameters of the 3D space in the neutrosophic scale while being restricted to a 1D space in the Lawshe technique. The third distinction relates to the statistical analysis: the Lawshe approach focuses on the experts' ratio of choices, whereas in neutrosophic logic the importance and correlation level of each item are analysed. The fourth relates to the opinion of experts: the Lawshe technique is based on expert opinions, yet in many ways the word "expert" is not defined. On a neutrosophic scale, however, researchers primarily address actual participants in order to understand whether each item is comprehended, opposed, or imprecise.
In this research, an alternative technique is presented to construct a valid scale, in which the scale is first transformed into a neutrosophic one before being compared using neural networks. Each measuring scale can then be evaluated for how suitable and representative its measurements are for the desired aim, so that its content validity can be assessed.
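For contrast, Lawshe's content validity ratio (the baseline the study compares against) is a one-line formula, while a neutrosophic rating keeps a (truth, indeterminacy, falsity) triple per item; the expert counts and the triple below are invented:

```python
# Lawshe's CVR: (n_e - N/2) / (N/2), where n_e experts out of N rate an
# item "essential". A single number per item, on a one-dimensional scale.
def lawshe_cvr(n_essential, n_experts):
    half = n_experts / 2
    return (n_essential - half) / half

cvr = lawshe_cvr(8, 10)   # 8 of 10 experts rate the item essential

# One illustrative neutrosophic item rating: proportions of respondents
# finding the item essential (T), unclear/indeterminate (I), or not
# essential (F) - three degrees of freedom instead of one.
t, i, f = 0.8, 0.1, 0.1
```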
Review Journal 1 (.docx) – michael591
A simplified mathematical-computational model of the immune response to the yellow fever vaccine
1. This model can be improved if more test subjects are added and more variables and parameters with test data are included. The mathematical process is also always improvable, so if there is an equation that suits these experiments better, it can improve the model and the experiment. Another direction is to improve the qualitative results obtained from the model with additional computational experiments, such as the effects of (a) a booster dose and (b) a reduction in the population of naive CD8+ T cells. A sensitivity analysis will also be performed to identify sensitive parameters and connections between changes in parameter values and computational results.
2. If more cases or experiments were added, the research could be expanded and improved further, but since similar results are achieved by the shorter experiments, we can say this number of experiments was enough; still, there is always room for improvement. The second difference between the two models is that this work reduces the number of equations from 19 to 10. The reduced model considers only the main populations of cells and molecules involved in the response to the vaccine and abstracts some details that are not crucial to represent the behavior of the immune response; for example, the distinct compartments are not represented here. Some populations, such as CD4+ T cells, were also not considered because no experimental data are available to validate the simulations. In the future, more cells or molecules can be included in the model again if their role is important to explain or represent some behavior that the reduced model could not. That was something in the first journal paper which was not satisfactory to me.
3. This model can be applied to numerous medical applications, like cancer immunity and other vaccinations against lethal diseases or viruses that may become curable in the future. The virus cannot proliferate by itself; it needs to infect a cell and use it as a factory for new viruses. This is implicitly considered in the term πv·V, which represents the multiplication of the virus in the body with a production rate πv. The term cv1·V/(cv2 + V) denotes a non-specific viral clearance made by the innate immune system.
4. The main problem to solve is that the author needs authentic calculations to perform the experiment and show its results; for that, programming and developing an algorithm or a program alone are not enough. Proper calculations of the human body and cells, in depth and in every detail, are needed. That is why it was necessary to include these calculations in the immune system model; previously the author was not including proper calculations.
HLT 362 V GCU Quiz 11. When a researcher uses a random samSusanaFurman449
HLT 362 V GCU
Quiz 1
1. When a researcher uses a random sample of 400 to make conclusions about a larger population, this is an example of:
· Descriptive statistics
· Demographics
· Inferential statistics
· Dependent variables
2. If a study is comparing number of falls by age, age is considered what type of variable?
· Interval
· Ordinal
· Ratio
· Nominal
3. Validity is:
· A data item, such as characteristics, numbers, properties, or quantities, that can be measured or counted.
· The extent to which an idea or measurement is well-founded and an accurate representation of the real world.
· A measurement level with equal distances between the points and a zero-starting point.
· Raw unorganized information from which conclusions can be made.
4. Data is defined as:
· A data item, such as characteristics, numbers, properties, or quantities, that can be measured or counted.
· The extent to which an idea or measurement is well-founded and an accurate representation of the real world.
· A measurement level with equal distances between the points and a zero-starting point.
· Raw unorganized information from which conclusions can be made.
5. The average of the collected data is known as:
· Mean
· Median
· Variance
· Range
6. The experimental or predictor variable is an example of:
· Extraneous variable
· Dependent variable
· Independent variable
· Nominal data
7. Level of measurement that defines the relationship between things and assigns an order or ranking to each thing is known as:
· Interval
· Ordinal
· Ratio
· Nominal
8. A variable is considered:
· A data item, such as characteristics, numbers, properties, or quantities, that can be measured or counted.
· A component of mathematics that looks at gathered data.
· Statistics designed to allow the researcher to infer characteristics regarding a population from sample population.
· External and internal influences within a study that can affect the validity and reliability of the outcomes.
9. External and internal influences within a study that can affect the validity and reliability of outcomes is called:
· Continuous variables
· Demographics
· Bias
· Standard deviation
10. The subset of the population to be studied is called:
· Sample
· Variable
· Population
· Demographic
Put the below in your own words into 1-2 paragraphs for the main conclusion and 1-2 paragraphs for the clinical application
Main conclusion:
The following is one example of a main conclusion and clinical applicability to assist you in formulating your take home message for the dissemination assignment. The details in these descriptions are intentionally detailed for your consideration. Do not include this level of detail in the dissemination assignment.
HPV study:
The Healthy People 2020 HPV vaccination goal of 80% of all United States adolescents[KG1] is not being met with current practices (citation). With insufficient vaccination, reduction in HPV-related disease ...
Interprofessional Simulation: An Effective Training Experience for Health Car...Dan Belford
Background
This descriptive study measured the effectiveness of and participants' satisfaction with an interprofessional simulation education workshop as a teaching strategy for health care professionals.
Method
Health care professionals completed a 1-day clinical simulation workshop on interprofessional collaboration, after which they had the opportunity to fill out 4 evaluative instruments
Manuel Cabrera Discussion 7 Manuel M CabreraCOLLAPSETop of .docxalfredacavx97
Manuel Cabrera
Discussion 7: Manuel M Cabrera
COLLAPSE
Top of Form
Discussion 7
Szeto et al. (2010) conducted a pilot study focused on the investigation of the effectiveness of multifaceted ergonomic interventions aimed at community nurses (p. 1022). The results of the pilot study indicate that such interventions improved symptoms and functional outcomes. Pilot studies are typically conducted to evaluate the possibility of a large study and identify complications that may occur. One of the issues that could be considered problematic is that the authors emphasize the importance of statistical significance while overlooking the importance of feasibility. Nevertheless, one may argue that the discussed study is aligned with the definition of a pilot study because it focused on a specific population, and the authors relied on a small sample. Differently put, it would be inappropriate to generalize the results of the study, but it has helped the researchers to assess whether research in this area is feasible. Therefore, a larger study focused on this issue was conducted at a later rate. Szeto et al. (2013) attempted to evaluate the effectiveness of multifaceted ergonomic interventions in four local hospitals aimed at community nurses (p. 414). One of the unique aspects of the study is that the participants involved in the pilot study agreed to continue to participate in research in this area. Therefore, researchers were able to keep track of their progress and evaluated the impact of interventions in the long-term. Moreover, they expanded the explanatory power of the study by increasing the sample size and introducing a self-control group. The results of the study indicate that multifaceted ergonomic interventions designed based on the needs of community nurses decrease symptoms and improve functional outcomes. One has to acknowledge the fact that this study has a set of limitations because researchers focused on local hospitals, and it may be inappropriate to generalize the results. 
Therefore, it would be appropriate to conduct large-scale studies in this area to establish the overall effectiveness of multifaceted ergonomic interventions.
References
Szeto, G. P., Law, K. Y., Lee, E., Lau, T., Chan, S. Y., & Law, S. (2010). Multifaceted ergonomic intervention programme for community nurses: Pilot study. Journal of Advanced Nursing, 66(5), 1022–1034. doi:10.1111/j.1365-2648.2009.05255.x
Szeto, G. P., Wong, T. K., Law, R. K., Lee, E. W., Lau, T., So, B. C., & Law, S. W. (2013). The impact of a multifaceted ergonomic intervention program on promoting occupational health in community nurses. Applied Ergonomics, 44(3), 414–422. doi:10.1016/j.apergo.2012.10.004
Bottom of Form
Euclides Munoz Perez
Discussion # 7
A pilot study is a preliminary study that is done as a pretest for research tools and instruments that will be used in the main study project. It assesses the resources which include the time and costs and forese.
After a long period of stagnancy since its original inception, Ayurveda research has caught up speed in the recent times. The research methodology in general got modernized both in terms of data capturing methods and inferential process. Thereby, we are witnessing more and more sophisticated study designs being employed and more of allopathic parameters being measured in investigations undertaken in Ayurveda. This article attempts to consolidate some of the methodological developments currently being pursued in the domain.
College Writing II Synthesis Essay Assignment Summer Semester 2017.docxclarebernice
College Writing II Synthesis Essay Assignment Summer Semester 2017
Directions:
For this assignment you will be writing a synthesis essay. A synthesis is a combination of two or more summaries and sources. In a synthesis essay you will have three paragraphs, an introduction, a synthesis and a conclusion.
In the introduction you will give background information about your topic. You will also include a thesis statement at the end of the introduction paragraph. The thesis statement should describe the goal of your synthesis. (informative or argumentative)
The second paragraph is the synthesis. You will combine two summaries of two different articles on the same topic. You will follow all summary guidelines for these two paragraphs. The synthesis will most likely either argue or inform the reader about the topic.
The conclusion paragraph should summarize the points of your essay and restate the general ideas.
For this essay you will read two research articles on a similar topic to the previous critical review essay as you can use this research in your inquiry paper. You will summarize both articles in two paragraphs and combine the paragraphs for your synthesis. In the synthesis you must include the main ideas of the articles and the author, title, and general idea in the first sentences.
This essay will be three pages long and the first draft and peer review are due June 15. You must turn them in hardcopy in class so you can do a peer review.
Running head: THESIS DRAFT 1
THESIS DRAFT 3Thesis Draft
Katelyn B. Rhodes
D40375299
DeVry University
Point-of-Care Testing (PoCT) has dramatically taken over the field of clinical laboratory testing since it’s introduction approximately 45 years ago. The technologies utilized in PoCT have been refined to deliver accurate and expedient test results and will become even more sensitive and accurate in order to dominate the field of clinical laboratory testing. Furthermore, there will be a dramatic increase in the volume of clinical testing performed outside of the laboratory. New and emerging PoCT technologies utilize sophisticated molecular techniques such as polymerase chain reaction to aid in the treatment of major health problems worldwide, such as sexually transmitted infections (John & Price, 2014).
Historic Timeline
In the early-to-mid 1990’s, bench top analyzers entered the clinical laboratory scene. These analyzers were much smaller than the conventional analyzers being used, and utilized touch-screen PCs for ease of use. For this reason, they were able to be used closer to the patient’s bedside or outside of the laboratory environment. However, at this point in time, laboratory testing results were stored within the device and would have to then be sent to the main central laboratory for analysis.
Technology in the mid-to-late 1990’s permitted analyzers to be much smaller so that they may be easily carried to the patient’s location. Computers also became more ...
· Reflect on the four peer-reviewed articles you critically apprai.docxVannaJoy20
· Reflect on the four peer-reviewed articles you critically appraised in Module 4, related to your clinical topic of interest and PICOT.
· Reflect on your current healthcare organization and think about potential opportunities for evidence-based change, using your topic of interest and PICOT as the basis for your reflection.
· Consider the best method of disseminating the results of your presentation to an audience.
The Assignment: (Evidence-Based Project)
Part 4: Recommending an Evidence-Based Practice Change
Create an 8- to 9-slide
narrated PowerPoint presentation in which you do the following:
· Briefly describe your healthcare organization, including its culture and readiness for change. (You may opt to keep various elements of this anonymous, such as your company name.)
· Describe the current problem or opportunity for change. Include in this description the circumstances surrounding the need for change, the scope of the issue, the stakeholders involved, and the risks associated with change implementation in general.
· Propose an evidence-based idea for a change in practice using an EBP approach to decision making. Note that you may find further research needs to be conducted if sufficient evidence is not discovered.
· Describe your plan for knowledge transfer of this change, including knowledge creation, dissemination, and organizational adoption and implementation.
· Explain how you would disseminate the results of your project to an audience. Provide a rationale for why you selected this dissemination strategy.
· Describe the measurable outcomes you hope to achieve with the implementation of this evidence-based change.
· Be sure to provide APA citations of the supporting evidence-based peer reviewed articles you selected to support your thinking.
· Add a lessons learned section that includes the following:
· A summary of the critical appraisal of the peer-reviewed articles you previously submitted
· An explanation about what you learned from completing the Evaluation Table within the Critical Appraisal Tool Worksheet Template (1-3 slides)
Zeinab Hazime
Nurs 6052
10/16/2022
Evaluation Table
Use this document to complete the
evaluation table requirement of the Module 4 Assessment,
Evidence-Based Project, Part 3A: Critical Appraisal of Research
Full
APA formatted citation of selected article.
Article #1
Article #2
Article #3
Article #4
Abraham, J., Kitsiou, S., Meng, A., Burton, S., Vatani, H., & Kannampallil, T.
(2020). Effects of CPOE-based medication ordering on outcomes: an overview of systematic reviews.
BMJ Quality & Safety, 29(10), 1-2.
Alanazi, A. (2020). The effect of computerized physician order entry on mortality rates in pediatric and neonatal care setting: Meta-analysis.
Informatics in Medicine
Unlocked, 19, 100308. https.
Scheduling Of Nursing Staff in Hospitals - A Case Studyinventionjournals
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
38 www.e-enm.org
Endocrinol Metab 2016;31:38-44
http://dx.doi.org/10.3803/EnM.2016.31.1.38
pISSN 2093-596X · eISSN 2093-5978
Review
Article
How to Establish Clinical Prediction Models
Yong-ho Lee1, Heejung Bang2, Dae Jung Kim3
1Department of Internal Medicine, Yonsei University College of Medicine, Seoul, Korea; 2Division of Biostatistics, Department
of Public Health Sciences, University of California Davis School of Medicine, Davis, CA, USA; 3Department of Endocrinology
and Metabolism, Ajou University School of Medicine, Suwon, Korea
A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymp-
tomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education.
Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statisti-
cal analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model develop-
ment and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for de-
veloping and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection;
handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods
for developing clinical prediction models with comparable examples from real practice. After model development and vigorous
validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for the use
in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading
to active applications in real clinical practice.
Keywords: Clinical prediction model; Development; Validation; Clinical usefulness
INTRODUCTION
Hippocrates emphasized prognosis as a principal component of
medicine [1]. Nevertheless, current medical investigation
mostly focuses on etiological and therapeutic research, rather
than prognostic methods such as the development of clinical
prediction models. Numerous studies have investigated wheth-
er a single variable (e.g., biomarkers or novel clinicobiochemi-
cal parameters) can predict or is associated with certain out-
comes, whereas establishing clinical prediction models by in-
corporating multiple variables is rather complicated, as it re-
quires a multi-step and multivariable/multifactorial approach to
design and analysis [1].
Clinical prediction models can inform patients and their
physicians or other healthcare providers of the patient’s proba-
bility of having or developing a certain disease and help them
with associated decision-making (e.g., facilitating patient-doc-
tor communication based on more objective information). Ap-
Received: 9 January 2016, Revised: 14 ...
Dynamic drivers of disease in Africa: Integration of participatory researchILRI
Presented by Peter Atkinson, Gianni Lo Iacono, Catherine Grant, Bernard Bett, Vupenyu Dzingirai, Tom Winnebah and other members of the Dynamic Drivers of Disease in Africa Consortium at the EcoHealth 2014 conference, Montreal, Canada, 11-15 August 2014.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
How world-class product teams are winning in the AI era by CEO and Founder, P...
International Journal of Control Theory and Computer Modeling (IJCTCM), Vol. 4, No. 1/2, April 2014
DOI: 10.5121/ijctcm.2014.4203
FROM SIMULATED MODEL BY BIO-PEPA TO NARRATIVE LANGUAGE THROUGH SBML
Dalila Hamami¹ and Baghdad Atmani²
Computer Science Laboratory of Oran (LIO)
¹ Department of Computer Science, Mostaganem University, Algeria
² Oran University, Algeria
ABSTRACT
The growing body of theoretical work and tooling in the epidemiological field reflects the increasing emphasis that both public health practitioners and the scientific community place on decision-support tools. Modeling tools have proven to be very important aids to decision-making in epidemiology. However, the variety and large volume of data, together with the nature of epidemics, lead us to seek solutions that lighten the heavy burden imposed on both domain experts and developers.
One of the most important steps in modeling and simulation is model validation: the process of determining how well a model corresponds to the system it is intended to represent. This raises a question: what happens if the model is invalid? Do we need to build a new one, or can we simply optimize the existing one?
In this paper, we present a new approach that translates an epidemic model written in Bio-PEPA into a narrative language, using SBML as the intermediate representation. Our goal is to allow, on the one hand, epidemiologists to verify and validate the model and, on the other hand, developers to optimize the model in order to obtain a better decision-making model. We also present some preliminary results and some suggestions for improving the simulated model.
KEYWORDS
Epidemiology, simulation, modelling, Bio-PEPA, narrative language, SBML, model validation.
1. INTRODUCTION
In recent years, biotechnology has improved our knowledge of epidemiological pathogens and produced effective ways to fight epidemics. Several outbreaks are currently active, and contributing factors have helped build huge data banks [1]. As a result, the amount of raw data is too large to be analyzed manually, either by domain experts or by the computer scientists who must understand the epidemiological domain.
Given the variety of biological data and the nature of epidemics [2], model validation appears to be the right way to confirm that a model is sound, or to guide its optimization. Several recommendations are presented in the literature [3, 4, 5]; among them, a model under verification should [6]:
• Make biological sense,
• Mimic real life,
• Be fit for the purpose it is designed for,
• Undergo a sensitivity analysis to assess the influence of uncertain parameters on the model outcome.
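The last recommendation can be illustrated with a minimal sketch (not taken from the paper): a one-at-a-time sensitivity analysis on a simple SIR model, measuring how a 10% perturbation of one uncertain parameter shifts the model outcome (here, the epidemic peak). The model, parameter values, and perturbation size are all illustrative assumptions.

```python
def sir_peak_infected(beta, gamma, s0=0.99, i0=0.01, dt=0.01, steps=10000):
    """Integrate a simple SIR model with Euler steps; return the epidemic peak."""
    s, i = s0, i0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

def sensitivity(param_name, base, perturb=0.10):
    """Relative change in the peak when one parameter is perturbed by 10%."""
    ref = sir_peak_infected(**base)
    varied = dict(base)
    varied[param_name] *= 1.0 + perturb
    return (sir_peak_infected(**varied) - ref) / ref

# Illustrative parameter values (assumed, not from the paper)
base = {"beta": 0.5, "gamma": 0.1}
for p in ("beta", "gamma"):
    print(f"{p}: {sensitivity(p, base):+.3f}")
```

A positive value means the peak grows with the parameter (as expected for the transmission rate beta), a negative one that it shrinks (as for the recovery rate gamma); parameters with large magnitudes are the ones whose uncertainty most deserves scrutiny.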
Unfortunately, analysis of these recommendations shows that the adopted methods have been insufficient, and therefore inefficient. We therefore pursue a new approach: an interface between the expert and the computer scientist, so that the latter is no longer required to start from scratch to reach the "perfect" model. This interface converts the model created by the developer into a language the expert can understand, so that the expert can check the model's validity (its settings, rules, constraints, and so on) and the developer can then review and optimize the model, ensuring that any prevention and control measures implemented lead to appropriate treatment.
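The core idea of such an interface can be sketched as follows: walk the SBML representation of the model and emit one narrative sentence per reaction that an epidemiologist can check. The SBML fragment below is a hand-written, simplified excerpt (real SBML carries XML namespaces and kinetic laws), and the sentence template is hypothetical; the paper's actual translation rules may differ.

```python
import xml.etree.ElementTree as ET

# Simplified, namespace-free SBML-like fragment (illustrative only)
SBML = """
<sbml>
  <model id="SIR">
    <listOfReactions>
      <reaction id="infection">
        <listOfReactants>
          <speciesReference species="Susceptible"/>
          <speciesReference species="Infected"/>
        </listOfReactants>
        <listOfProducts>
          <speciesReference species="Infected" stoichiometry="2"/>
        </listOfProducts>
      </reaction>
      <reaction id="recovery">
        <listOfReactants>
          <speciesReference species="Infected"/>
        </listOfReactants>
        <listOfProducts>
          <speciesReference species="Recovered"/>
        </listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>
"""

def narrate(sbml_text):
    """Produce one narrative sentence per reaction in the SBML model."""
    root = ET.fromstring(sbml_text)
    sentences = []
    for rxn in root.iter("reaction"):
        reactants = [s.get("species")
                     for s in rxn.findall("listOfReactants/speciesReference")]
        products = [s.get("species")
                    for s in rxn.findall("listOfProducts/speciesReference")]
        sentences.append(
            f"Reaction '{rxn.get('id')}': {' and '.join(reactants)} "
            f"produce(s) {' and '.join(products)}."
        )
    return sentences

for line in narrate(SBML):
    print(line)
```

Reading "Reaction 'infection': Susceptible and Infected produce(s) Infected." the expert can immediately confirm, or reject, that the encoded rule matches epidemiological reality, without ever looking at the Bio-PEPA source.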
The rest of the paper is organized as follows. Section 2 gives a brief review of epidemiological modeling and explains why we need a narrative language. Section 3 presents existing work on translating narrative language into simulated models and motivates the reverse translation, from simulated model to narrative language. Section 4 describes our model in SBML (via Bio-PEPA) and how to perform its translation into narrative language. Section 5 details the testing and evaluation, and a discussion is presented in Section 6. Section 7 summarizes the work done and offers some suggestions for improving the model.
2. MODEL OPTIMIZATION
Modeling and simulation play a critical role in estimating the potential impact of outbreaks of highly contagious diseases, whether in human populations (e.g. tuberculosis, HIV) or in animal populations (e.g. highly pathogenic avian influenza). Although epidemiological models are nowadays quite refined, the quality of the results of modeling studies still depends on the quality, pertinence and accuracy of the data on which they are based, and on the validity of both the model itself and its conceptual specification. For this reason, these models should be subjected to careful and ongoing evaluation and scrutiny.
Because major epidemiological decisions are made on the basis of the results of modeling studies, it is important to know that these studies are appropriate, accurate and correct. The key step in the evaluation of epidemiological models is therefore model validation, which refers to the process of determining how well a model corresponds to the system it is intended to represent [7]. As defined by Schlesinger [8], model validation is the “substantiation that a ... model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model”.
Reeves et al. [7] review a set of works about validation methods, in which several authors detail taxonomies of the methods used to validate models [9, 3, 4].
Taylor [4] specified that there is no easy test of a model’s validity, and described validation through four notions (methods):
• Valid models should make biological sense: The model should be examined to ensure that all the epidemiological knowledge that influences outbreaks of the disease has been included. In this phase, it is quite useful to produce a detailed document describing the conceptual model, through a series of discussions and evaluations of the details of the model’s operation. Harvey et al. [10] gave great importance to this phase in their study.
International Journal of Control Theory and Computer Modeling (IJCTCM) Vol.4, No.1/2, April 2014
They reviewed all the workshops and meetings held about NAADSM (the North American Animal Disease Spread Model), in particular for Foot-and-Mouth Disease (FMD), which were carried out by experts in the fields of FMD and modeling. Among the suggested improvements were the inclusion of various livestock species, production systems, and a variety of mitigation strategies, as well as extension of the utility of the model to diseases other than FMD. Also, as mentioned by Taylor [4]: “Ferguson and others [11] found that estimating the spatial transmission kernel by retrospectively fitting a model to the epidemic data produced a wider kernel than that derived from the tracing data provided”.
• Valid models should mimic real life: the model gives the same output as the real system over a range of variables. The problem is that all modelling groups claim that their models can reproduce the course of an epidemic with reasonable accuracy. However, the level of proof of validity this provides is weakened by the fact that some of the models were parameterized using statistical methods with the aim of matching the real data, as done by Keeling et al. [12]. Low-Beer et al. [13] studied HIV; throughout the study, it was widely reported that the model produced similar outcomes, thereby supporting its validity. However, not all aspects of the model produced conclusions about control policy that were alike in every detail. The authors stated that their model did not capture the dynamic aspects of sexual mixing and transmission, and should be used only for short-term forecasts of 3-5 years, assumptions that belong to reality but are not yet realized. Skvortsov et al. [6] developed a complex agent-based model to simulate an epidemic outbreak and validated it against a simple mathematical model by comparing the results with an SIR model; the latter is derived assuming uniform mixing among all members of the population, whereas a real population has people with a wide range of contact rates. Taylor [4] concluded that the importance of carefully studying field data from different areas, in order to better understand the relationships between control policies and disease dynamics, cannot be overstressed. Indeed, models can never be substitutes for careful analysis of field data.
• Valid models should be fit for the use they are designed for: the important thing is not the model's validity/invalidity, but its usefulness. Sanson et al. [5] achieved model validation by comparing three scenarios over different control strategies and resource constraints. They showed that each outcome helped in decision making, even if it did not match reality.
• Sensitivity analysis should be carried out to assess the influence of uncertain parameters on the model outcome: this is used particularly when the values assigned to model parameters are uncertain due to a lack of good-quality data. Sensitivity analysis helps to detect parameters to which the model is more sensitive than the real system, revealing that the model is invalid. As specified by Hughes et al. [14]: “in the absence of data from the household survey of tuberculosis disease, the model was validated by comparing its output with TB incidence data for Zimbabwe and characteristics typical of epidemics of any infectious disease”. They noted that in future work they would have to use a model population that is sufficiently large for TB to become endemic, and include the effect of interactions of the model population with a background pool of TB infection. This suggests that the parameters they used were uncertain and that others have to be tested. Waller et al. [15] proposed the use of Monte Carlo hypothesis tests, which compare a single set of outcome data from a real system to multiple model-generated outcome data sets. The drawback of this type of validation was reported in Anderson’s works [16, 17] on outbreaks of FMD: the 2001 outbreak resulted in the infection of over 2,000 herds, while that of 2007 resulted in only eight infected herds; which data set, then, is better for the validation step and consistent for the study?
Summing up, this review leads us to conclude that to optimize a model we need to verify it, which means validating it, and to achieve this we have to take two points into consideration: the inputs and the outputs of the model. The input data are used to extract relevant parameters that will influence model outcomes. The output data (results of a system) are used to provide a basis for comparison with the model's produced outcomes.
In particular cases, we have access to information pertaining to only a single outbreak of disease in a particular set of circumstances, which is insufficient for the study. In other instances, models are developed to explore hypothetical scenarios, where some information is generally available to inform model inputs, but there can be no data on the system outputs. Whatever the form or source of data used to inform models, their correctness and validity should be considered. These observations lead us back to the paramount factor in the study: the expert (epidemiologist).
So, in the presence or absence of input and output data, the expert alone can devise a scheme to monitor the epidemic and validate the results. The only problem is that he is an expert in epidemiology, not in modeling and simulation, and there is no way he could understand the developer's language, however simple it may seem.
To overcome this drawback, narrative languages have emerged, and some works offer an interface to help the expert interact with the developer. The following sections review the principal existing work in this field.
3. FROM NARRATIVE LANGUAGE TO A MODEL
Developing and using a good epidemiological model remains, to this day, a very attractive idea. To achieve it, many researchers struggle between choosing the best tools and methods or undertaking thorough training in the field in question, and often find themselves staggering between the two. Others, however, give little importance to either; rather, they prefer to save their energy and adopt a completely original technique, which is to transform the context expressed by an expert directly into a simulated model, as presented by Georgoulas and Guerriero in 2012 [18] for translating narrative language into a Bio-PEPA formal model.
In 2007 and 2009, Guerriero et al. [19, 20] studied the translation of narrative language into a Beta-binders model and a bio-inspired process calculus. They proposed an approach to biochemical model specification based on a narrative language, where an expert such as a biologist or epidemiologist specifies the model by providing a textual description of the system and its evolution. In this description, the expert specifies the lists composing the model: compartments, entities, and reactions describing the dynamics of the system together with their rate parameters. The authors assumed that it would be better to simplify the communication between experts and developers by providing a simple interface that allows the expert to insert information and the developer to manipulate only the code, without worrying too much about understanding everything. This approach has been called the passage from narrative language to a model. Although this work is regarded as a major opening in the field of modeling, two questions arise: what is a narrative language, and what happens to existing models?
• Narrative language
A narrative language is a formal language that allows experts in general, and biologists in particular, to express a system and its dynamics using terms that are well known and common in their natural language. As defined by Guerriero et al. [20], a model in narrative language is composed of four sections (out of respect for the authors' copyright, we retain the important defining sentences in their entirety): “
• The description of the biological compartments in which the involved entities can be
located during the evolution of the system;
• The description of the entities composing the system;
• The description of the occurring reactions;
• The narrative description of the evolution of the system, i.e. the list of the occurring
events.
A compartment is identified by an integer number; moreover, its name, size, and number of
spatial dimensions can be specified. Compartments could represent cellular or sub-cellular
compartments, but also abstract locations.
A component is identified by its name, and it can be seen as a list of interaction sites. Each site is
defined by a name and a state. Also, the initial quantity/concentration of the component should be
set.
A reaction is identified by an integer number; its type and the reaction rate parameter should also
be specified. A reliability value can be associated to each numerical value (e.g. rate parameters
and initial quantities); it is a percentage value that can be used to distinguish between values that
are certain because obtained from wet experiments, and others which are the result of not verified
assumptions. Modellers can take this information into account during the important step of model
refining by means of parameter space search and sensitivity analysis.
Finally, the evolution of the system is described by means of a narrative of events. This narrative
is a sequence of basic events, each of which is a textual description of a reaction involving at
most two components. Events can be grouped into processes.
An event is identified by an integer number; in addition to the textual semi-formal description of
the event, the identifier of the reaction associated with the event should be specified. The
description of the event is a string of the form if condition then event_descr, with the conditional
part being optional.
A condition is a string of the form component is state, component_site is state, component is in
compartment, etc. Multiple conditions can be specified by separating them with the keyword and.
Event_descr is a string of the form component reaction for monomolecular events, or component reaction component for bimolecular events. Table 1 shows an example of events expressed in narrative language.
Table 1: List of events in narrative language (an extract taken from [20])

Id  Description                                                                  React
    LIF-LIFR binding
1   If LIFR.LIF is not bound and LIF is not bound then LIF binds LIFR on LIF     1
2   If LIFR.LIF is bound and LIF is bound then LIF unbinds LIFR on LIF           2
3   If gp130.LIF is not bound and LIF is not bound then LIF binds gp130 on LIF   3
4   If gp130.LIF is bound and LIF is bound then LIF unbinds gp130 on LIF         4
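The event grammar quoted above (an optional "if condition" clause, with multiple conditions joined by "and", followed by "then" and an event description) is simple enough to sketch mechanically. The following Python fragment is our own illustration of that grammar, not part of the tooling of [20]; the event strings are taken from Table 1.

```python
import re

def parse_event(text):
    """Split a narrative event into its optional conditions and its description.

    Grammar (after Guerriero et al. [20]):
        [if condition [and condition]* then] event_descr
    """
    m = re.match(r"^\s*if\s+(?P<conds>.+?)\s+then\s+(?P<descr>.+)$",
                 text, re.IGNORECASE)
    if m:
        conditions = [c.strip() for c in re.split(r"\s+and\s+", m.group("conds"))]
        return conditions, m.group("descr").strip()
    # No conditional part: the whole string is the event description.
    return [], text.strip()

conds, descr = parse_event(
    "If LIFR.LIF is not bound and LIF is not bound then LIF binds LIFR on LIF")
print(conds)  # → ['LIFR.LIF is not bound', 'LIF is not bound']
print(descr)  # → LIF binds LIFR on LIF
```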
This section helped us to understand the importance of narrative language for modelling. That is why, in the following work, which is largely inspired by Guerriero's work [20], we suggest doing the reverse: keeping the existing model and improving it, which means translating the Bio-PEPA model into a narrative language.
4. FROM MODEL TO A NARRATIVE LANGUAGE
In order to respond to the issues raised in the previous section, and based on the principle defined above, we propose an approach whose aim is to preserve existing models and to optimize them by allowing an incremental model implementation.
An extensive literature search, focused both on methods of modeling with analytical and decision-support tools and on translating models into other specific formats that approach narrative language, allowed us to highlight Bio-PEPA [2, 21], a formal language based on process algebra, recommended for biochemical systems and perfectly suited to the epidemiological field. Moreover, Bio-PEPA is equipped with an extension that allows any Bio-PEPA model to be translated into an XML format better known as SBML (Systems Biology Markup Language).
4.1. Bio-PEPA (Biological Performance Evaluation of Process Algebras)
Bio-PEPA is a tool, method and language based on process algebra. Process algebras are mathematical formalisms used in the analysis of concurrent systems [2, 22, 23], which consist of a set of processes running in parallel that can be independent or share common tasks.
As defined in [21], a Bio-PEPA system is a 7-tuple (V, N, K, FR, Comp, P, Event), where:
• V is a set of locations,
• N is a set of auxiliary information,
• K is a set of parameters,
• FR is a set of functional rates,
• Comp is the set of species,
• P is the model component,
• Event is the set of events.
4.1.1. Characteristics of Bio-PEPA
The main features which are provided in Bio-PEPA are:
• Provides a formal abstraction of biochemical systems and, further, of epidemiological systems.
• Allows expressing any kind of interaction law using functional rates.
• Allows expressing the evolution of species and their interactions.
• Defines syntax and structural semantics based on a formal representation.
• Provides the ability to perform different types of analysis of the model (continuous-time Markov chains, stochastic simulation algorithms, differential equations).
4.1.2. Bio-PEPA Syntax
As defined by [24, 21], the Bio-PEPA syntax is given by:

S ::= (α, k) op S | S + S | C
op ::= ↓ | ↑ | ⊕ | ⊖ | ⊙
P ::= P ⋈ P | S(x)

where S describes a species (a type of individual) and P is the model describing the system and the interactions between species. The term (α, k) op S expresses that the action α, with rate k, is performed by the species S, and "op" defines the role of S: ↓ reactant, ↑ product, ⊕ activator, ⊖ inhibitor, ⊙ generic modifier.
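As a small illustration (not part of the Bio-PEPA toolset), the operator roles above can be encoded as a lookup table used to render a prefix term (α, k) op S as plain text, which is the essence of the narrative translation proposed later in this paper:

```python
# Roles of the Bio-PEPA prefix operators, as listed in the syntax above.
OP_ROLES = {
    "↓": "reactant",
    "↑": "product",
    "⊕": "activator",
    "⊖": "inhibitor",
    "⊙": "generic modifier",
}

def describe_prefix(action, rate, op, species):
    """Render a prefix term (alpha, k) op S as a readable sentence."""
    return f"{species} participates in {action} (rate {rate}) as {OP_ROLES[op]}"

print(describe_prefix("Exposition", 1, "↓", "S"))
# → S participates in Exposition (rate 1) as reactant
```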
4.2. Systems Biology Markup Language (SBML)
SBML (the Systems Biology Markup Language) is a markup language based on XML (the eXtensible Markup Language). In essence, an XML document is divided into hierarchically structured elements starting from a root element. Syntactically, the elements of an XML document are marked in the document itself by pairs of opening and closing tags; each element consists of a name that specifies its type, attributes, and content (elements or text).
SBML is a set of construction elements specific to systems biology, defined in an XML schema, and it has been adapted to epidemiological models. The SBML language is divided into hierarchically structured elements that form the syntax tree of the language, as an XML schema.
As defined in Section 4.1, an epidemiological model is defined in Bio-PEPA by a set of compartments, species and reactions described by rates and parameters. SBML does the same using tags and attributes [25, 26]. Figure 1 shows the general organization of SBML's tags, which are described in the following [27]:
Figure 1. General organization of SBML language.
• Model: An SBML model definition consists of lists of SBML components located inside the tags <model id="My_Model"> … </model>.
• listOfFunctionDefinitions: The mathematical functions that can be used in the other parts of the model are defined in this section.
• listOfUnitDefinitions: These units are used to explicitly specify constants, initial conditions, the symbols in formulas, and the results of formulas.
• listOfCompartments: A compartment is an enclosed space in which the species are located.
• listOfSpecies: Specifies the different entities in the model regardless of their nature; a species type can be specified via listOfSpeciesTypes.
• listOfReactions: Any process whereby a species is transformed or transferred from one compartment to another.
We note that the representation and semantics of mathematical expressions are defined in SBML using MathML.
4.3. Relation of Bio-PEPA to SBML
The principal notions relating Bio-PEPA to SBML are summarized in Table 2 (this table is directly extracted from [2]).
Table 2: Summary of the mapping from SBML to Bio-PEPA (taken from [2])

SBML Element          Corresponding Bio-PEPA component
List of Compartments  Bio-PEPA compartments
List of Species       Species definitions (name, initial concentration and compartment). Step size and level default to 1. Also used in species sequential component definitions.
List of Parameters    Bio-PEPA parameter list. Local parameters renamed to include the reaction name.
List of Reactions     Species component definitions and model component definition.
Kinetic laws          Bio-PEPA functional rates

Table 2 describes the mapping from SBML to Bio-PEPA, where each element in SBML matches an element in Bio-PEPA.
As seen in Figure 1, the SBML schema contains:
• A listOfCompartments section. Each compartment in this list is directly matched to a compartment in Bio-PEPA. Bio-PEPA defines a compartment by its name, matched to the 'id' attribute in SBML, and by its size, mapped to the 'size' attribute in SBML.
• A listOfSpecies section. Each species in this list is directly mapped to a species in Bio-PEPA, defined by its name, initial concentration, enclosing compartment name and the unit of the species concentration. This information is matched, respectively, to the SBML species attributes id, initialConcentration, compartment and unit.
• A listOfParameters section. Each parameter in this list is directly matched to a parameter in Bio-PEPA, defined by a name and a value, which are mapped to the "name" and "value" attributes in SBML.
• The kinetic laws included in listOfReactions. Each kinetic law in this list is directly matched to a functional rate in Bio-PEPA, which expresses it by a name (matched to an integer number in SBML) and an expanded formula in terms of species and parameters, matched to the list of species and parameters used in the MathML markup that expresses the relation between the elements of the formula.
• A listOfReactions section. Each reaction in this list is directly matched to a reaction in Bio-PEPA, which defines the dynamics of species using a set of operations (reactant (↓), product (↑), and generic modifier (⊙)) together with the functional rates defined above. This information is matched in SBML to a listOfReactants, a listOfProducts and a listOfModifiers (respectively).
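The element-by-element mapping just described can be sketched in a few lines. Our implementation uses Java and JDOM; purely as an illustration of the same idea, here is a Python sketch using the standard library's ElementTree, over an abbreviated SBML fragment modeled on Figure 4 (the values are illustrative):

```python
import xml.etree.ElementTree as ET

# Abbreviated SBML fragment in the style of Figure 4 (values illustrative).
SBML = """<sbml xmlns="http://www.sbml.org/sbml/level2/version3" level="2" version="3">
  <model id="vaccination_biopepa">
    <listOfCompartments>
      <compartment id="Age1" size="100000.0"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="S_Age1" name="S" compartment="Age1"/>
    </listOfSpecies>
    <listOfParameters>
      <parameter id="W" value="0.021"/>
    </listOfParameters>
  </model>
</sbml>"""

NS = {"s": "http://www.sbml.org/sbml/level2/version3"}

def to_biopepa_elements(sbml_text):
    """Collect the SBML elements that map one-to-one onto Bio-PEPA components."""
    model = ET.fromstring(sbml_text).find("s:model", NS)
    return {
        "compartments": [(c.get("id"), c.get("size"))
                         for c in model.iterfind(".//s:compartment", NS)],
        "species": [(sp.get("id"), sp.get("compartment"))
                    for sp in model.iterfind(".//s:species", NS)],
        "parameters": [(p.get("id"), p.get("value"))
                       for p in model.iterfind(".//s:parameter", NS)],
    }

print(to_biopepa_elements(SBML))
```

Each entry of the returned dictionary corresponds to one row of Table 2.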
5. IMPLEMENTATION
To implement our approach, we resumed work already started in [21], which reproduced the spread and vaccination protocol of chickenpox in Bio-PEPA, as shown in Figure 2.
Figure 2. Model structure (taken from [28]).
The overall scheme of our approach is defined by three main steps:
• Formulation of the epidemic model in Bio-PEPA: definition of compartments, species and reactions.
• Export of the SBML file.
• Representation in narrative language: analysis of the SBML file, display of a detailed report, and validation by the expert.
5.1. Description of model structure
Our approach, as structured, divides our work into two main stages. The first is to develop a model with Bio-PEPA (formulation of the epidemic model in Bio-PEPA), work that has already been done [21] and that demonstrated the importance of using such a tool.
The second part (exporting the SBML file from Bio-PEPA and representing the SBML text in narrative language) consists of developing a module that translates the Bio-PEPA code into a language understood by the expert, who can then easily check whether the content of the model is adequate for the example and thus validate it.
5.1.1. Chickenpox model in Bio-PEPA
To better explain the modeling process, this section presents the most important parts of the Bio-PEPA code of the chickenpox model [28]. (For clarity of the document, we list only a few parts of the model.)
1. Locations: To express the seven age groups of the model, we represent them as compartments.
location Age1 in world : size = sizeAge, type = compartment;
………
location Age7 in world : size = sizeAge, type = compartment;
2. Functional rates: Describe the interaction laws between compartments.
Exposition = λ . S . I; describes the contact between susceptible (S) and infected (I) individuals with rate λ.
…….
LostVaccin = W . VP; defines the rate of immunity loss (W) of those protected by vaccination (VP).
3. Species: The system entities, expressed by operations describing their evolution.
S = (Exposition,1)↓ S + (Vaccination_1,1)↓ S + (Vaccination_2,1)↓ S; describes what happens to S when the Exposition, Vaccination_1 or Vaccination_2 action is executed.
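Numerically, these functional rates are plain mass-action products. A minimal sketch follows, using the λ ("landa1") and W values that appear in the SBML listing of Figure 4; the population counts are invented here purely for illustration:

```python
# Rate parameters as in the SBML listing (Figure 4); the populations below
# are made up for illustration and are not taken from the chickenpox model.
landa = 0.17241   # contact rate (lambda)
W = 0.021         # rate of immunity loss
S, I, VP = 900.0, 10.0, 50.0   # susceptible, infected, vaccine-protected

exposition = landa * S * I   # Exposition = lambda . S . I
lost_vaccin = W * VP         # LostVaccin = W . VP

print(exposition, lost_vaccin)
```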
Some lines of Bio-PEPA code are shown in Figure 3. Note that even though the Bio-PEPA language is simple and easy for the developer, it remains opaque to the epidemiologist, who cannot verify the validity of the information represented by the developer, as the epidemiologist cannot understand the Bio-PEPA code.
Two important points can be extracted from this figure: on the one hand, the representation of the chickenpox model in Bio-PEPA (right of the figure), and on the other hand, the resulting simulation graph summarizing the status of the various species (left of the figure).
Figure 3. Global view of Bio-PEPA model.
5.1.2. Exporting SBML file from Bio-PEPA
Bio-PEPA provides the ability to export the model as an SBML file. As shown in Figure 4, the resulting text describes all the tags and attributes presented in Section 4.2, corresponding to our chickenpox model. It should be remembered that to study an epidemic, we must take into consideration the environment ("space"), "time", and various other functions. SBML can express perfectly each section describing the elements defined in Bio-PEPA.
5.1.3. The Chickenpox model in narrative language
To work with SBML, we performed a literature search on tools for analyzing and interpreting this type of descriptor, which revealed JDOM (Java Document Object Model) [29], an open-source library for manipulating XML files in Java.
The main features of DOM are:
• The DOM model (unlike another famous API, SAX) is a specification that has its origins in the W3C consortium.
• The DOM model is not only a multi-platform specification, but also a multi-language one (Java, JavaScript, etc.).
• DOM presents documents as a hierarchy of objects, from which more specialized interfaces are implemented: Document, Element, Attribute, Text, etc. With this model, we can treat all DOM components either by their generic type, "Node", or by their specific type ("element", "attribute"); many navigation methods allow moving through the tree without having to worry about the specific type of the component treated.
<?xml version="1.0" encoding="UTF-8"?>
<sbml version="3" level="2" xmlns="http://www.sbml.org/sbml/level2/version3">
  <model id="vaccination_biopepa">
    <listOfCompartmentTypes>
      <compartmentType id="Compartment"/>
      <compartmentType id="Membrane"/>
    </listOfCompartmentTypes>
    <listOfCompartments>
      <compartment id="Age7" outside="world" size="100000.0" compartmentType="Compartment"/>
      ……..
      <compartment id="Age5" outside="world" size="100000.0" compartmentType="Compartment"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="Exp_Age1" hasOnlySubstanceUnits="true" substanceUnits="item" compartment="Age1" name="Exp"/>
      ……
      <species id="VS_Age7" hasOnlySubstanceUnits="true" substanceUnits="item" compartment="Age7" name="VS"/>
    </listOfSpecies>
    <listOfParameters>
      <parameter id="landa1" value="0.17241"/>
      ……..
      <parameter id="W" value="0.021"/>
    </listOfParameters>
    …….
Figure 4. Chickenpox model in SBML.
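The translation step itself amounts to walking the SBML tree and emitting one narrative sentence per element. Our module does this in Java with JDOM; the following Python sketch conveys the same idea with the standard library's ElementTree, and the sentence templates are hypothetical phrasings, not the exact output of our interface:

```python
import xml.etree.ElementTree as ET

NS = "{http://www.sbml.org/sbml/level2/version3}"

def narrate(sbml_text):
    """Emit one narrative sentence per SBML compartment, species and parameter."""
    model = ET.fromstring(sbml_text).find(NS + "model")
    lines = []
    for c in model.iter(NS + "compartment"):
        lines.append(f"There is a location named {c.get('id')} of size {c.get('size')}.")
    for sp in model.iter(NS + "species"):
        lines.append(f"The species {sp.get('name') or sp.get('id')} lives in {sp.get('compartment')}.")
    for p in model.iter(NS + "parameter"):
        lines.append(f"The parameter {p.get('id')} has value {p.get('value')}.")
    return lines

# A reduced version of the Figure 4 listing.
sbml = """<sbml xmlns="http://www.sbml.org/sbml/level2/version3">
  <model id="vaccination_biopepa">
    <listOfCompartments><compartment id="Age1" size="100000.0"/></listOfCompartments>
    <listOfSpecies><species id="S_Age1" name="S" compartment="Age1"/></listOfSpecies>
    <listOfParameters><parameter id="W" value="0.021"/></listOfParameters>
  </model>
</sbml>"""

for line in narrate(sbml):
    print(line)
```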
Figure 5 (a). From SBML code to narrative language.
Figure 5 (b). From SBML code to narrative language.
Figures 5 (a, b) show the interface of our application, which is based on the JDOM model and thus gathers the steps defined above. The white area in the figure corresponds to the loading of the SBML file, while the black area corresponds to the translation and analysis of the SBML into a narrative language understandable by the expert; in this way, the expert has no difficulty in verifying the validity of the model. The user-friendly interface allows him to navigate the various components of the model (species, interaction functions, locations, etc.).
To validate our application, we made a change in the initial Bio-PEPA code, intentionally introducing an error into our model. As shown clearly in Figure 6, on generation of the narrative model the expert can detect the species and reactions that are missing, and can therefore easily report them to us. (The red frame marks the error, caused by the number of missing species.)
Figure 6. Error detection after translating the model into narrative language.
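The missing-species check illustrated in Figure 6 reduces to a set difference between the elements the expert expects and the elements the translated model actually declares. A schematic version follows; the expected and declared lists here are hypothetical, standing in for the expert's knowledge and the parsed SBML respectively:

```python
def missing_elements(expected, declared):
    """Report expected model elements absent from the translated narrative."""
    return sorted(set(expected) - set(declared))

# Hypothetical example: the expert expects a susceptible species for each of
# the seven age groups, but the deliberately broken model declares only five.
expected = [f"S_Age{i}" for i in range(1, 8)]
declared = [f"S_Age{i}" for i in range(1, 6)]
print(missing_elements(expected, declared))  # → ['S_Age6', 'S_Age7']
```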
6. DISCUSSION
This paper surveyed the importance of validation in optimizing simulated models, in particular epidemiological models. It emphasized several works using statistical techniques that yield reproducible, objective, quantitative data about the quality of simulation models. However, our analysis suggests that all these techniques remain insufficient, and that the epidemiologist is the expert of the final validation; hence the proposal to translate the model into a narrative language.
Our translation module has been implemented as an extension of the already developed Bio-PEPA plugin for the Eclipse IDE [30]. Specifically, we added a new menu item that prompts the user to select the output type: a simulation of the Bio-PEPA model, or a translation of the Bio-PEPA model into narrative language. We advise the user to start with the simulation step, to help the expert first validate that the model is adequate to reality; if he then needs to verify it thoroughly, he may perform the second step, the translation.
To validate our module, we chose to focus on the characteristics of the output. Specifically, we wanted to see how easy the output model is to read, how easily the expert can validate a model, and how close the output is to natural language. To underline the importance of our work, we review the principal remark made by those who translated narrative language into Bio-PEPA models. In [18, 19, 20], the authors focused on the same characteristics as us; they specified that, as presented by the Bio-PEPA model, these metrics are obviously very subjective and difficult (if not impossible) to measure. For us, however, this task turns out to be very easy: the expert simply reads the output and compares it with his own knowledge. The only limitation is that, if the expert notes any mistake in the resulting narrative model, he has to go back to the developer, because the developer is the only one able to correct it.
To overcome this drawback, the idea of coupling our module with the models of [18, 19, 20] is very encouraging, with the aim of minimizing the interaction between expert and developer.
7. CONCLUSIONS
Modeling and simulation are very useful for understanding and predicting the dynamics of various biological phenomena. The Bio-PEPA approach seems to be an interesting and powerful way to address such problems. Through its various features, it allows easy development of the computer model and gives biologists a transparent bridge between the real system and the built model, which helps to ensure a faithful representation of the phenomenon studied. Nevertheless, when a new event occurs that has been badly treated by the developer, and therefore omitted, correcting the model is a tedious task for both parties. This is the reason we have introduced a new module (interface), in which the expert can easily detect such an omission and report it to the developer, who can discern the error and quickly locate it in the Bio-PEPA model. In doing so, we concluded that to validate/optimize a simulated model it is better to come back to the expert's knowledge than to try a series of validation methods without knowing exactly where we have to optimize.
As a perspective to strengthen this work, one could attach it to the approach mentioned in Section 2 and thus move toward a cyclical pattern that would not even require the presence of the developer; however, on reflection, what would become of the expert, faced with such a multitude of information? After a brief literature review, integrating this work with the world of data mining appears to be a much better route to fruition.
REFERENCES
[1] Mansoul, A., Atmani, B. (2009). Fouille de données biologiques des règles d’association. In Proceedings of CIIA.
[2] Ciocchetta, F., Ellavarason, K. (2008). An Automatic Mapping from the Systems Biology Markup
Language to the Bio-PEPA Process Algebra.
[3] Sargent, R., (2009). Verification and validation of simulation models. In Rossetti, M.D., Hill, R.R.,
Johansson, B., Dunkin, A., Ingalls, R.G. (Eds.), Proceedings of the 2009 Winter Simulation
Conference, December 13-16 2009, Austin, Texas, Institute of Electrical and Electronics Engineers,
Piscataway, New Jersey, USA.
[4] Taylor, N., (2003). Review of the use of models in informing disease control policy development and
adjustment. A Report for the Department for Environmental, Food, and Rural Affairs, UK. Web page
http://www.defra.gov.uk/science/documents/publications/2003/UseofModelsinDisease-
ControlPolicy.pdf.
[5] Sanson, R.L., Harvey, N., Garner, M.G., Stevenson, M.A., Davies, T.M., Hazelton, M.L., O'Connor, J., Dube, C., Forde-Folle, K.N. and Owen, K. (2011). Foot and mouth disease model verification and 'relative validation' through a formal model comparison. Revue scientifique et technique, International Office of Epizootics, 30(2), pp. 527.
[6] Skvortsov, A.T., Connell, R.B., Dawson, P., Gailis, R., (2007). Epidemic modelling: Validation of
agent-based simulation by using simple mathematical models. MODSIM 2007 International Congress
on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand, pp.
657-662.
[7] Reeves, A., (2012). Construction and evaluation of epidemiologic simulation models for the within-
and among-unit spread and control of infectious diseases of livestock and poultry. Dissertation from
Colorado State University.
[8] Schlesinger, S., (1979). Terminology for model credibility. Simulation 32, 103–104.
[9] Law, A.M., Kelton, W.D., (2000). Simulation Modeling and Analysis, 3rd ed. McGraw-Hill,
Boston, Massachusetts, USA.
[10] Harvey, N., Reeves, A., Schoenbaum, M.A., Zagmutt-Vergara, F.J., Dubé, C., Hill, A.E., Corso, B.A.,
McNab, W.B., Cartwright, C.I., Salman, M.D., (2007). The North American Animal Disease Spread
Model: A simulation model to assist decision making in evaluating animal disease incursions. Prev.
Vet. Med. 82, 176–197.
[11] Ferguson, N.M., Donnelly, C.A., Anderson, R.M., (2001). The foot-and-mouth epidemic in Great
Britain: pattern of spread and impact of interventions. Science 292, 1155–1160.
[12] Keeling, M.J., (2005). Models of foot-and-mouth disease. P. Roy. Soc. B 272, 1195–1202.
[13] Low-Beer, D., Stoneburner, R.L., (1997). An age- and sex-structured HIV
epidemiological model: features and applications. Bulletin of the World Health
Organization, 75(3), pp. 213.
[14] Hughes, G.R., Currie, C.S.M., Corbett, E.L., (2006). Modeling
tuberculosis in areas of high HIV prevalence. In Proceedings of the 2006 Winter Simulation
Conference (WSC '06), pp. 459-465, IEEE.
[15] Waller, L.A., Smith, D., Childs, J.E., Real, L.A., (2003). Monte Carlo assessments of goodness-of-fit
for ecological simulation models. Ecol. Model. 164, 49–63.
[16] Anderson, I., (2002). Foot and mouth disease 2001: lessons to be learned inquiry report. Web page
http://webarchive.nationalarchives.gov.uk/20100807034701/archive.cabinetoffice.gov.uk/fmd/fmd_re
port/documents/index.htm.
[17] Anderson, I., (2008). Foot and mouth disease 2007: a review and lessons learned. Web page
http://webarchive.nationalarchives.gov.uk/20100807034701/archive.cabinetoffice.gov.uk/fmdreview/.
Last accessed June 26, 2011.
[18] Georgoulas, A., Guerriero, M.L., (2012). A software interface between the Narrative Language and
Bio-PEPA, pp. 1–9.
[19] Guerriero, M.L., Dudka, A., Underhill-Day, N., Heath, J.K., Priami, C., (2009). Narrative-based
computational modelling of the Gp130/JAK/STAT signalling pathway. BMC Systems Biology 3, p.
40.
[20] Guerriero, M.L., Heath, J.K., Priami, C., (2007). An Automated Translation from a Narrative
Language for Biological Modelling into Process Algebra. In Proceedings of Computational Methods
in Systems Biology (CMSB'07), LNCS 4695, pp. 136–151. URL
http://www.springerlink.com/content/vt23126776012835/
[21] Hamami, D., Atmani, B., (2012). Modeling the effect of vaccination on varicella using Bio-PEPA.
In Proceedings of IASTED, 783-077, ISBN 978-0-88986-926-4.
[22] Milner, R., (1999). Communicating and Mobile Systems: the π-calculus. Cambridge University Press.
[23] Baeten, J.C.M., (2005). A Brief History of Process Algebras. Theoretical Computer Science, Volume
335, Issues 2-3, pp. 131-146.
[24] Ciocchetta, F. and M. Guerriero (2009). Modelling Biological Compartments in Bio-PEPA, ENTCS
227, pp. 77–95.
[25] Hucka, M., Finney, A., Hoops, S., Keating, S., Le Novère, N., (2007). Systems Biology Markup
Language (SBML) Level 2: Structures and Facilities for Model Definitions. Systems Biology Markup
Language, Release 2.
[26] Hucka, M., Finney, A., Bornstein, B.J., Keating, S.M., Shapiro, B.E., Matthews, J., Kovitz, B.L.,
Schilstra, M.J., Funahashi, A., Doyle, J.C., Kitano, H., (2004). Evolving a Lingua Franca and Associated
Software Infrastructure for Computational Systems Biology: The Systems Biology Markup Language
(SBML) Project. Systems Biology, Volume 1, pp. 41-53.
[27] Beurton-Aimar, M., (2007). Langage de modélisation des réseaux biochimiques, pp. 1–16, ECRIN-Biologie
syst, Chap. 07, p. 7.
[28] Bonmarin, I., Santa-Olalla, P., Lévy-Bruhl, D., (2008). Modélisation de l'impact de la vaccination sur
l'épidémiologie de la varicelle et du zona. Revue d'Epidémiologie et de Santé Publique 56, pp. 323–
331.
[29] Hunter, J., (2002). JDOM Makes XML Easy. Sun's 2002 Worldwide Java Developer Conference.
[30] Bio-PEPA, http://www.biopepa.org/.