A talk at the ESSA Silico Summer School in Wageningen, June 2017. It looks at some of the different purposes for a simulation model, and how complicated one should make one's model.
Screening for disease, or early detection of disease, means detecting a disease at an earlier stage than would usually occur in standard clinical practice.
This means detecting disease at a pre-symptomatic stage, at which point the patient has no clinical complaint (no symptoms or signs) and therefore no reason to seek medical care for the condition.
The rationale is that early detection of disease is beneficial and that intervention at an earlier stage of the disease process is more effective, or easier to implement, than a later intervention.
A cross-sectional study is a descriptive study in which disease and exposure status are measured simultaneously in a given population.
It measures the prevalence of health outcomes (hence it is also called a prevalence study), or determinants of health, or both, in a population at a point in time or over a short period.
When the investigator draws a sample from the study population of interest, examines all the subjects to identify those who have the disease/outcome of interest and those who do not, and at the same time determines whether or not each subject has the suspected cause (exposure), or gives a history of such exposure in the past, the study is called a cross-sectional analytic study.
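As an illustration of the analytic variant just described, the exposure and disease counts from such a survey can be summarised in a 2x2 table and turned into a prevalence, a prevalence ratio and a prevalence odds ratio. This is a minimal sketch with invented counts, not data from any real study:

```python
# Sketch: analysing a cross-sectional analytic study as a 2x2 table
# of exposure status vs. disease status. The counts are made up.

def cross_sectional_summary(a, b, c, d):
    """a: exposed & diseased, b: exposed & healthy,
       c: unexposed & diseased, d: unexposed & healthy."""
    n = a + b + c + d
    prevalence = (a + c) / n                  # overall disease prevalence
    prev_exposed = a / (a + b)                # prevalence among the exposed
    prev_unexposed = c / (c + d)              # prevalence among the unexposed
    prevalence_ratio = prev_exposed / prev_unexposed
    odds_ratio = (a * d) / (b * c)            # prevalence odds ratio
    return prevalence, prevalence_ratio, odds_ratio

prev, pr, por = cross_sectional_summary(a=40, b=60, c=20, d=80)
print(f"prevalence={prev:.2f}, PR={pr:.2f}, POR={por:.2f}")
# prevalence=0.30, PR=2.00, POR=2.67
```

Because exposure and outcome are measured at the same time, these measures describe association only; they cannot by themselves establish that the exposure preceded the disease.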
Lecture for postgraduate and undergraduate students.
Over the past two decades non-communicable diseases have been increasing in both developing and developed countries, as a result of which developing countries are experiencing a double burden of disease.
A talk at ESSA@Work, TUHH (Technical University of Hamburg), 24th Nov 2017.
Abstract: Simulation models can only be justified with respect to the model's purpose or aim. The talk looks at six common purposes for modelling: prediction, explanation, analogy, theoretical exposition, description, and illustration. Each of these is briefly described, with an example and a brief analysis of the risks to achieving it, and hence how it should be demonstrated. The importance of being explicitly clear about the model's purpose is repeatedly emphasised.
Winter is coming! – how to survive the coming critical storm and demonstrate ... - Bruce Edmonds
A talk at the 2014 European Social Simulation Association summer school, at UAB in Barcelona 8th sept 2014
The talk covers some of the symptoms of hype in social simulation and argues that it needs to be more careful and rigorous. In particular, the (current) purpose of a simulation needs to be distinguished as theoretical, explanatory or predictive, each having its own criteria.
Some supporting slides on modelling purposes and pitfalls when using ABM in policy contexts to accompany discussion on Modelling Pitfalls at the ESSA Summer School, Aberdeen, June 2019
Different Modelling Purposes - an 'anti-theoretical' approach - Bruce Edmonds
Models are a tool, not a picture of reality. There are many different uses for models. The intended use of a model - its 'purpose' - affects how it is judged, checked and developed. Much confusion and bad practice in modelling can be attributed to not clearly identifying the intended 'purpose' for a model. Neo-classical Economics is used to illustrate some of these confusions. In some (but not all) uses the model stands in for a theory (at least key aspects of it), but this can happen in different ways and at different levels of abstraction. The talk looks at some of these different ways and advocates a staged, inductive methodology for theory development instead of one that jumps to high generality and simple models which confuse different uses.
A talk given at the Workshop on "From Cases To General Principles - Theory Development Through Agent-Based Modeling" see http://abm-theory.org
The slides from a class on the relationship between formal models and their use, with particular focus on different purposes for modelling and how they can go wrong.
The Post-Truth Drift in Social Simulation - Bruce Edmonds
A talk at the Social Simulation Conference, Dublin, September 2017.
Abstract
The paper identifies a danger in the field of social simulation: a danger of using weasel words to give a false impression to the world about the achievements of our field. Whether this is intentional or unintentional, the effect might be to damage the reputation of the field and impair its development. At the root of this is a need for brutal honesty and openness, something that can be personally difficult and that needs social support. The paper considers some of the subtle ways that this kind of post-truth drift might occur, including: confusion/conflation of modelling purpose, wishing to justify pragmatic limitations in our work, falling back to unvalidated theory, confusing using a model as a way of looking at the world for something more reliable, and seeking protection from critique in vagueness. It calls on social simulation researchers to firmly reject such a drift.
Policy Making using Modelling in a Complex world - Bruce Edmonds
A talk given at the CECAN workshop, London July 2016
Abstract:
The consequences of complexity in the real world are discussed together with some meaningful ways of understanding and managing such situations. The implications of such complexity are that many social systems are fundamentally unpredictable by nature, especially in the presence of structural change (transitions). This implies consequences for the way we model, but also for the way models are used in the policy process.
I discuss the problems arising from a too narrow focus on quantification in managing complex systems, in particular those of optimisation. I criticise some of the approaches that ignore these difficulties and pretend to approximately forecast the impact of policy options using over-simple models. However, lack of predictability does not automatically imply a lack of managerial possibilities. We will discuss how some insights and tools from "Complexity Science" can help with such management. Managing complex systems requires a good understanding of the dynamics of the system in question - to know, before they occur, some of the real possibilities, and to be ready so that they can be reacted to as responsively as possible. Agent-based simulation will be discussed as a tool that is suitable for this task, especially in conjunction with model-informed data visualisation.
Possibilistic prediction and risk analyses
A talk given at the EA annual Conference, Bonn, May 2015
Abstract:
It is in the nature of complex systems that predictions that give a probability are not possible.
Indeed I argue that giving "the most likely" or "rough" prediction is more harmful than useful.
Rather an approach which maps out some of the possible outcomes is outlined.
Agent-based modelling is ideal for producing these - including, crucially, possibilities that could not have been conceived just by thinking about it (due to the fact that events can combine in ways that are more complex than the human brain can cope with directly).
A characterisation of the real future possibilities and their nature allows some positive responses to events:
* putting in place 'early warning indicators' for the emergence of identified possibilities
* contingency planning for when they are indicated.
Such an approach would allow policy makers to better 'drive' their decision making, without abnegating responsibility to experts.
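The possibilistic style of analysis described above can be sketched in a few lines. This toy stochastic model is my own invention, not one from the talk: instead of reporting a single "most likely" forecast, it runs the simulation many times and reports the set of qualitatively distinct outcomes it can reach, which is the input needed for early-warning indicators and contingency planning.

```python
import random

# Toy illustration of possibilistic prediction: map the *set* of
# qualitatively distinct outcomes a stochastic model can produce,
# rather than a single point forecast.

def run_once(rng, steps=50):
    stock = 100.0                        # some quantity of interest
    for _ in range(steps):
        stock += rng.gauss(0, 5)         # random shocks each period
        if stock < 40:
            stock *= 0.8                 # below a threshold, decline accelerates
        if stock <= 0:
            return "collapse"
    return "boom" if stock > 150 else "stable"

rng = random.Random(1)
outcomes = [run_once(rng) for _ in range(1000)]
possible = sorted(set(outcomes))         # the map of real possibilities
print(possible)
```

The qualitative labels ("boom", "collapse", "stable") and thresholds are arbitrary assumptions for the sketch; in a real ABM they would be replaced by substantively meaningful regimes identified from the model's emergent dynamics.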
Towards Integrating Everything (well at least: ABM, data-mining, qual&quant d... - Bruce Edmonds
A talk given at the SKIN3 workshop in Budapest, May 2014 (http://cress.soc.surrey.ac.uk/SKIN/events/third-skin-workshop)
Innovation or other policy-orientated research has tended to take one of two strategies: (a) work with high-level abstractions of macro-level variables or (b) focus on micro-level aspects/areas with simpler mechanisms. Whilst (a) may provide some comfort in the form of forecasts, these are almost useless for policy since they can only be relied upon if nothing much has changed. Although approach (b) may produce some interesting studies which show how complex even small aspects of the involved processes are, with maybe interesting emergent effects, it provides only a small part of the overall picture and little to guide decision making.
Rather, I (with others) suggest a different approach. Instead of aiming to produce some kind of "adequate" theory (usually in the form of a model along with its interpretation), we should aim at integrating different kinds of evidence and finding the best ways to present these to policy makers, in order to help them 'drive' by providing views of what is happening. Thus (1) utilising the greatest possible range of evidence and (2) providing rich, relevant but synthetic views of this evidence to the policy makers. Any projections should be 'possibilistic' rather than 'probabilistic' - showing the different ways in which social processes might unfold, and helping to inform the analysis of risks. The talk looks at some of the ways in which this might be done, to integrate micro-level narrative data, time-series data, survey data, network data and big data using a variety of techniques. In this view, models do not disappear, but rather have a different purpose and hence are developed and checked differently.
This shift will involve a change in attitude and approach from both researchers and those in the policy world. Researchers will have to give up playing for general or abstract theory, satisfying themselves with more gentle and incremental abstraction, whilst also accepting and working with a greater variety of kinds of evidence. They will also have to stop 'conning' the policy world with forecasts, refusing to provide these as more dangerous than helpful. The policy world will have to stop looking for a magic 'crutch' that will reduce uncertainty (or provide justification for chosen policies) and move towards greater openness with both data and models.
A talk at the workshop on "Agent-Based Models in Philosophy: Prospects and Limitations", Ruhr University, Bochum, Germany
Abstract:
ABMs (like other kinds of model) can be used in a purely abstract way, as a kind of thought experiment - a way of thinking about some aspect of the world that is too complicated to hold in our mind (in all its detail). In this way it both informs and complements discursive thought. However there is another set of uses for ABMs - empirical uses - where the mapping between the model and sets of observation-derived data is crucial. For these uses, one has to (a) use the mapping to get from some data to the model, (b) use the model for some inference and (c) use the mapping again to get back to data. This includes both predictive and explanatory uses of ABMs. These are easily distinguishable from abstract uses because there is a fixed and well-defined relationship between the model and the data; this is not flexible on a case by case basis. In these cases the reliability comes from the composite (a)-(b)-(c) mapping, so that simplifying step (b) can be counterproductive if it means weakening steps (a) and (c), because it is the strength of the overall chain that is important. Taking the use of models in quantum mechanics as an example, one can see that sometimes the evolution of the formal models driven by empirical adequacy can be more important than the attendant abstract models used to get a feel for what is happening. Although using ABMs for empirical purposes is more challenging than for purely abstract purposes, they are being increasingly used for empirical explanation rather than thought experiments, and there is no reason to suppose that robust empirical adequacy is unachievable.
A talk given by Eugene Dubossarsky on predictive analytics at the Big Data Analytics meetup in Sydney this month. The talk is available at http://www.youtube.com/watch?v=aG16YSFgtLY
Risk-aware policy evaluation using agent-based simulation - Bruce Edmonds
A talk about the modelling of complex issues of policy relevance. It covers some of the tensions and difficulties, as well as some of the unrealistic expectations of this kind of modelling. Rather, it is suggested that these kinds of model should be used as a kind of risk analysis. Two examples of this are given.
Talk given in Reykjavik at University of Iceland, 30th Nov 2016.
An invited talk at the ExUM workshop at the UMAP 2022 conference.
Abstract:
Explainability has become an important topic both in Data Science and AI in general and in recommender systems in particular, as algorithms have become much less inherently explainable. However, explainability has different interpretations and goals in different fields. For example, interpretability and explainability tools in machine learning are predominantly developed for Data Scientists to understand and scrutinize their models. Current tools are therefore often quite technical and not very ‘user-friendly’. I will illustrate this with our recent work on improving the explainability of model-agnostic tools such as LIME and SHAP. Another stream of research on explainability in the HCI and XAI fields focuses more on users’ needs for explainability, such as contrastive and selective explanations and explanations that fit with the mental models and beliefs of the user. However, how to satisfy those needs is still an open question. Based on recent work in interactive AI and machine learning, I will propose that explainability goes together with interactivity, and will illustrate this with examples from our own work in music genre exploration, which combines visualizations and interactive tools to help users understand and tune our exploration model.
Maximum Likelihood Estimation is an online course offered at Statistics.com. Statistics.com is the leading provider of online education in statistics, and offers over 100 courses in introductory and advanced statistics. Courses typically are taught by leading experts. Some course highlights -
A. Taught by renowned International Faculty (Not self-paced learning)
B. Instructor led and Peer learning
C. Flexible and Convenient schedule
D. Practical Application and Software skills
For more details please contact info@c-elt.com or ourcourses@c-elt.com.
Website: www.india.statistics.com
Staging Model Abstraction – an example about political participation - Bruce Edmonds
A presentation at the workshop on ABM and Theory (From Cases to General Principles), Hannover, July 2019
This reports on work where we started with a complex, but evidence-driven, model, and then modelled that model to understand and abstract from it. As reported in the paper:
Lafuerza LF, Dyson L, Edmonds B, McKane AJ (2016) Staged Models for Interdisciplinary Research. PLoS ONE, 11(6): e0157261. DOI:10.1371/journal.pone.0157261
Mixing fat data, simulation and policy - what could possibly go wrong? - Bruce Edmonds
A talk given at the CECAN workshop on "What Good Data could do for Evaluation" at the Alan Turing Institute, 25th Feb. 2019.
Abstract:
In complex situations (which includes most where humans are involved) it is infeasible to predict the impact of any particular policy (or even what is probable). Randomised Control Trials do not tell one what kinds of situation a policy might work in, or what the enablers and inhibitors of the effectiveness of a policy are. Here I suggest that using 'fat' data and simulation might allow a possibilistic analysis of policy impact - namely an exploration of what could go surprisingly wrong (or indeed right). Whilst this does not allow the optimisation of policy, it does inform the effective monitoring of policy, and basic contingency planning. However, this requires a different approach to policy - from planning and optimisation to an adaptive approach, with richer continual monitoring and a readiness to tune or adapt policy as data comes in. Examples of this are given concerning domestic water consumption (in the main talk), and in supplementary slides: voter turnout and fishing.
Social Context
An invited talk at the 2018 Surrey Sociology Conference, Barnett Hill, Surrey, November 2018.
Although there is much evidence that context is crucial to much human cognition and social behaviour, it remains a difficult area to research. In much social science research it is either by-passed or ignored. In some qualitative research context is almost deified, with any level of generalisation across contexts being left to the reader. At the other extreme, some quantitative research restricts itself to patterns that are generally detectable - that is, the patterns that are left when one aggregates over many different contexts. Context is often used as a 'dustbin concept' to which otherwise unexplained variation is attributed.
This talk looks at some of the ways social context might be actively represented, understood and researched. First the ideas of cognitive and then social context are distinguished. Then some possible approaches to researching this are discussed, including: agent-based simulation, a context-sensitive analysis of narrative data and machine learning.
Using agent-based simulation for socio-ecological uncertainty analysis - Bruce Edmonds
A talk given in the MMU Big Data Centre, 30th October 2018.
Both social and ecological systems can be highly complex, but the interaction between these two worlds - a socio-ecological system (SES) - can add even greater levels of complexity. However, the maintenance of SES is vital to our well-being and the health of the planet. We do not know how such systems work in practice and we lack good data about them (especially on the ecological side), so predicting the effect of any particular policy is infeasible. Here we present an approach which tries to understand some of the ways in which SES may go wrong, by constructing different complex simulation models and analysing the emergent outcomes. These in silico examples can allow for the institution of targeted data-gathering instruments that give the earliest possible warning of deleterious outcomes, and thus allow for timely remedial responses. An example of this approach applied to fisheries is described.
How social simulation could help social science deal with context - Bruce Edmonds
An invited plenary at Social Simulation 2018, Stockholm.
This points out how context-sensitivity is fundamental to much human social behaviour, but largely bypassed or ignored in social science. In more formal social science, it is usual to assume or fit universal models, even if this covers a lot of different contexts. In qualitative social science context is almost deified, and any generalisation across contexts is passed on to those that learn from it. Agent-based modelling allows for context-sensitive models to be developed and hence the role of context explored and better understood. The talk discussed a framework for analysing narrative text using the Context-Scope-Narrative-Elements (CSNE) framework. It also illustrates a cognitive model that allows for context-dependent knowledge to be implemented within an agent in a simulation. The talk ends with a plea to avoid unnecessary or premature summarisation (using averages etc.).
Agent-based modelling, laboratory experiments, and observation in the wild - Bruce Edmonds
An invited talk at the workshop on "Social complexity and laboratory experiments – testing assumptions and predictions of social simulation models with experiments" at Social Simulation 2018, Stockholm
Culture trumps ethnicity! – Intra-generational cultural evolution and ethnoce... - Bruce Edmonds
Essential to understanding the impact of in-group bias on society is the micro-macro link and the complex dynamics involved. Agent-based modelling (ABM) is the only technique that can formally represent this and thus allow for the more rigorous exploration of possible processes and their comparison with observed social phenomena. This talk discusses these issues, providing some examples of some relevant ABMs.
A talk given at the BIGSSS summer school on conflict, Bremen, Jul/Aug 2018.
An Introduction to Agent-Based Modelling - Bruce Edmonds
An introduction to the technique with two example models of in-group bias and voter turnout.
An invited talk at the BIGSSS Summer Schools in Computational Social Science, at the Jacobs Bremen University, July 2018.
Mixing ABM and policy... what could possibly go wrong? - Bruce Edmonds
Invited talk at 19th International Workshop on Multi-Agent Based Simulation at Stockholm on 14th July 2018.
Mixing ABM and Policy ... what could possibly go wrong?
This talk looks at a number of ways in which using ABM in the context of influencing policy can go wrong: during model construction, during model application, and otherwise.
It is related to the book chapter:
Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity - a handbook, 2nd edition. Springer, 801-822.
Socio-Ecological Simulation - a risk-assessment approach - Bruce Edmonds
An invited talk in Tromsoe, 5 June 2018.
Both social and ecological systems are complex, but when they combine (as when human societies farm/hunt) there is a double complexity. This complexity means it is infeasible to predict the outcome of their interaction and unwise to rely on any prediction. An alternative approach is to use complex simulations to try and discover some possible ways that such systems can go wrong. This can reveal risks that other approaches might miss, due to the fact that more of the complexity is included within the model. Once a risk is identified then measures to monitor its emergence can be implemented, allowing the earliest possible warning of this. An example of this approach applied to a fisheries ecosystem is described.
A talk at the workshop on "Thinking toys (or games) for commoning", Basel, Switzerland, 5/6 April.
This describes a simple model of anonymous donation of resources, with minimal group structuring.
An open-access paper on this model is at: http://cfpm.org/discussionpapers/152
The model can be freely downloaded from:
http://openABM.org/model/4744
Drilling down below opinions: how co-evolving beliefs and social structure mi... - Bruce Edmonds
A talk at ODCD2017, Jacobs University, Bremen, July 2017. (http://odcd2017.user.jacobs-university.de/)
The talk looks at an alternative to "linear" models which deal with a Euclidean space of opinions (usually a 1D space). This is a model of belief change, where both social influence and internal consistency of beliefs co-evolve with social structure. Thus it goes beyond most opinion dynamics models in a number of ways: (a) it deals with the beliefs that may underlie measured opinions, (b) the internal coherency among sets of beliefs is important as well as social influence, (c) the social structure co-evolves with belief change and (d) the social structures are complex and continually dynamic. The internal consistency of beliefs is based on Thagard's theory of explanatory coherence, which has some empirical support. The model seems to display some of the tensions and processes that are observed in politics, for example the tension between moderating views so as to connect with the public vs. reinforcing in-group coherency. It displays dynamics that can reflect a number of different courses, including those that result in turning points in opinions.
Modelling Innovation – some options from probabilistic to radical - Bruce Edmonds
A talk on the various kinds of innovation based on Margaret Boden's types of creativity. Given at the European Academy, Ahrweiler, Germany, 31st May 2017.
Co-developing beliefs and social influence networks - Bruce Edmonds
Argues that many social phenomena need ABM models in which both cognitive and social change co-develop
Presented at the AISB workshop in Bath, April 2017 on "The power of Immergence...". See last slide for details of where to get the paper and the model
An invited talk given at the Institute for Research into Superdiversity (IRIS), University of Birmingham, 31st Jan 2017
Abstract:
A simulation is presented to illustrate how the complex patterns of cultural and genetic signals might combine to define what we mean by "groups" of people. In this model both (a) how each individual might define their "in-group" and (b) how each individual behaves to others in 'in' or 'out' groups can evolve over time. Thus groups are not something precisely defined but something that emerges in the simulation. The point is to illustrate the power of simulation techniques to explore such processes in a non-prescriptive way that takes the micro-macro distinction seriously and represents it within complex simulations. In the particular simulation presented, groups defined by culture strongly emerge as dominant, and ethnically defined groups only occur when they are also culturally defined.
Towards Institutional System Farming
A talk at the Lorentz Workshop on "Emerging Institutions: Design or Evolution?" September 2016, Leiden, NL (https://www.lorentzcenter.nl/lc/web/2016/836/info.php3?wsid=836&venue=Oort)
A Model of Social and Cognitive Coherence - Bruce Edmonds
An invited talk at the Workshop on Coherence-Based Approaches to Decision-Making, Cognition and Communication, Berlin, July 2016
Human cognition can be usefully understood as a primarily social set of abilities - its survival benefit comes from our ability to socially organise and hence inhabit a variety of niches. From this point of view any ability makes more sense when put into a social context. This includes our innate ability to judge candidate beliefs in terms of their coherency with our existing beliefs and goals. However, studying cognition in its social context implies high complexity; for this reason I describe an agent-based model of coherency-based belief within a dynamic network of individuals. Here beliefs might be copied (or discarded) by an individual based upon the change in coherence this causes with their other beliefs, but also an individual will change their social connections based upon the coherence of their beliefs with those they socially interact with.
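The core adoption rule of such a coherence-based belief model can be sketched very simply. This is my own toy simplification of the kind of model described, not the actual model: pairwise coherence values between beliefs are given as fixed numbers, and an agent adopts a belief copied from a neighbour only if doing so raises its overall coherency.

```python
import itertools

# Minimal sketch of coherence-based belief adoption. The beliefs "A",
# "B", "C" and their pairwise coherence values (+1 supporting,
# -1 contradicting, 0 neutral) are invented for illustration.

COHERENCE = {
    frozenset({"A", "B"}): 1,    # A and B support each other
    frozenset({"A", "C"}): -1,   # A and C contradict
    frozenset({"B", "C"}): -1,   # B and C contradict
}

def coherency(beliefs):
    """Total pairwise coherence of a set of beliefs."""
    return sum(COHERENCE.get(frozenset(pair), 0)
               for pair in itertools.combinations(beliefs, 2))

def consider(beliefs, candidate):
    """Adopt a candidate belief (copied from a neighbour) only if it
    increases the agent's overall coherency."""
    if coherency(beliefs | {candidate}) > coherency(beliefs):
        return beliefs | {candidate}
    return beliefs

agent = {"A"}
agent = consider(agent, "B")   # coherent with A, so adopted
agent = consider(agent, "C")   # contradicts A and B, so rejected
print(sorted(agent))           # ['A', 'B']
```

In the full model described in the talk the network itself also changes, with agents rewiring connections according to how coherent their beliefs are with their neighbours'; that co-evolution is omitted here.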
A brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... - Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, response to treatments and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enable researchers to probe fast dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization and tumor metastasis in exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) is reported in a wide range of eukaryotes, including worms, insects, mammals and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
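The base-pairing rule in the last property can be illustrated in a few lines of code. The sequences below are invented for illustration, not real lin-4/lin-14 data: a small RNA has a target site on an mRNA if the mRNA contains a stretch complementary to it (here a perfect, siRNA-style match).

```python
# Sketch: does a small RNA have a (perfectly) complementary
# target site on an mRNA? Sequences are invented examples.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Reverse complement of an RNA sequence (A-U and G-C pairing)."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def has_target_site(small_rna, mrna):
    """True if the mRNA contains a site perfectly complementary
    to the small RNA, i.e. a potential silencing target."""
    return reverse_complement(small_rna) in mrna

sirna = "AUGGC"
mrna = "CCCGCCAUCCC"   # contains GCCAU, the reverse complement of AUGGC
print(has_target_site(sirna, mrna))   # True
```

Real miRNA targeting is looser than this: as noted below, miRNAs tolerate mismatches with their targets, whereas siRNAs require near-perfect complementarity.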
Discovery of siRNA?
The first small RNA:
In 1993 Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing of another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi ( non coding RNA)
MiRNA
Length (23-25 nt)
Trans acting
Binds with target MRNA in mismatch
Translation inhibition
Si RNA
Length 21 nt.
Cis acting
Bind with target Mrna in perfect complementary sequence
Piwi-RNA
Length ; 25 to 36 nt.
Expressed in Germ Cells
Regulates trnasposomes activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is large(>500kD) RNA multi- protein Binding complex which triggers MRNA degradation in response to MRNA
Unwinding of double stranded Si RNA by ATP independent Helicase
Active component of RISC is Ago proteins( ENDONUCLEASE) which cleave target MRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN :
1.PAZ(PIWI/Argonaute/ Zwille)- Recognition of target MRNA
2.PIWI (p-element induced wimpy Testis)- breaks Phosphodiester bond of mRNA.)RNAse H activity.
MiRNA:
The Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression .
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people want natural and preventative health solutions more and more, this industry is increasing quickly. Further driving market expansion are product formulation innovations and the use of cutting-edge technology for customized nutrition. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant chances for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
1. Model Purpose and Complexity
Bruce Edmonds
Centre for Policy Modelling
Manchester Metropolitan University
2. A model is not a ‘picture’
• Even if we often think of it in this way…
• …a model is not a picture of its target, not a
simulacrum (well, almost never).
• Rather it is a tool to help deal with it, e.g.:
– To understand it
– To predict it
– To communicate about it
• Just as machines extend our physical
abilities, models extend our mental abilities.
Model Purpose and Complexity, Bruce Edmonds, ESSA Sum Sch. 2017, Wageningen, 2
3. Different tools for different jobs
• A good tool is well designed for its purpose
• Each model is just such a tool
• However, there are many alternative
models for every target, so it is not
obvious which model is good for which
purpose and which target
• So, to be worth bothering other people with
our models, and not to waste their time…
• …our models need to be justified with
respect to a stated purpose, target etc.
4. There are no generic models…
• …yet, and there seems little prospect of
them in the foreseeable future.
• A ‘Darwin’ for the social sciences has not
arrived with an integrative explanatory theory
that connects social phenomena in a way
that checks out against evidence
• Rather, we are in a ‘pre-integrative’ stage –
doing the equivalent of describing finches
on the Galapagos Islands – specific models
for specific purposes
5. Simpler is not more general
• Whilst one can add in specific detail to a
simpler model to fit what is known about a
specific case (making it less general)…
• ...the other way around does not generally
work, making a model simpler usually makes it
less general!
• This is because we do not know which of the
processes are essential to a target, and might
simplify these away
• Imagine removing acceleration from
Newtonian dynamics to just use linear
equations – the resulting approximations would
only work at a few specific points!
6. Using models as an analogy
• Sometimes models are not about the observed
world, but related to our ideas.
• In this case a model is used as a kind of
complicated analogy – a way of thinking
• It does not relate to data but to a natural
language understanding
• The trouble is we apply analogies with great
(unconscious) fluidity, inventing a new way of
relating the analogy to a situation ‘on the fly’
• But this is different from empirical models
7. Models stage understanding
[Diagram: intuitive understanding expressed in normal language relates to
observations of the system of concern via common-sense comparison; models of
the processes in the system relate to data obtained by measuring the system
via scientific comparisons]
8. Different model purposes
• There are many different kinds of ways to
use a model.
• Each such purpose has its own benefits
and dangers...
• ...and different things need checking for
different purposes...
• ...and probably needs to be developed in a
different way.
• The next section looks at these different
model purposes.
10. Motivation
• If you can reliably predict something about
the world, this is undeniably useful…
• ...even if you do not know why your model
predicts (e.g. a black-box model)!
• But it has also become the ‘gold standard’
of science…
• ...because (unlike many of the other
purposes) it is difficult to fudge or fool
yourself about – if it's wrong, this is obvious.
11. Predictive modelling
[Diagram: in the target system, initial conditions lead to outcomes; in the
predictive model, the model set-up leads to the model results; the set-up
corresponds to the initial conditions, and the model results anticipate the
outcomes]
12. What it is
The ability to anticipate unknown data reliably
and to a useful degree of accuracy
• Some idea of the conditions under which it
does this well enough has to be understood
• The data it anticipates has to be unknown to
the modeller when building the model
• What is a useful degree of accuracy depends
on its application
• What is predicted can be: categorical,
probability distributions, ranges, negative
predictions, etc.
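The out-of-sample discipline described in this slide can be sketched in a few lines: build the model only on data that was ‘known’, then measure accuracy on data held back from the modeller. A minimal sketch, with made-up illustrative numbers (not from the talk):

```python
# Minimal sketch of out-of-sample predictive checking.
# All data points here are invented for illustration.

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b on the known (training) data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mean_abs_error(model, xs, ys):
    """Average absolute gap between anticipated and actual values."""
    a, b = model
    return sum(abs(a * x + b - y) for x, y in zip(xs, ys)) / len(xs)

# 'Known' data, available when the model is built...
train_x, train_y = [0, 1, 2, 3, 4], [0.1, 1.9, 4.2, 5.8, 8.1]
model = fit_linear(train_x, train_y)

# ...and 'unknown' data held back; only performance here counts as prediction.
test_x, test_y = [5, 6], [10.2, 11.8]
print("out-of-sample error:", mean_abs_error(model, test_x, test_y))  # ≈ 0.195
```

Whether 0.195 is ‘a useful degree of accuracy’ depends, as the slide says, on the application.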
13. Examples
• The gas laws (temperature is proportional
to pressure at the same volume etc.) predict
future measurements on a gas without any
indication of why this works
• Nate Silver’s team tries to predict the
outcome of sports events and elections
using computational models. These are
usually probabilistic predictions and the
wider predicted distribution of outcomes is
displayed (http://fivethirtyeight.com)
14. Warnings
• There are two different uses of the word
‘predict’: one as above and one to indicate any
calculation made using a model.
• This requires repeated attempts at
anticipating unknown data (and learning
from this), because it is impossible to avoid
‘fitting’ known data (e.g. due to publication bias)
• If the outcome is unknown and can be
unambiguously checked it could be predictive
• Prediction is VERY hard in the social
sciences; it is rarely done
15. Mitigating Measures
• The following are documented:
– what aspects it predicts
– roughly when it predicts well
– what degree of accuracy it predicts with
• That the model has been tested to show
that it predicts on several independent cases
• That the model code is distributed so others
can test it and seek to understand how and
when it predicts
17. Motivation
• When one wants to understand why or how
something happens
• One makes a simulation with the
mechanisms one wants and then shows
that the results fit the observed data
• The intricate workings of the simulation runs
support an explanation of the outcomes in
terms of those mechanisms
18. What it is
Establishing a possible causal chain from a
set-up to its consequences in terms of the
mechanisms of a simulation
• The causation can be deterministic,
possibilistic or probabilistic
• The nature of the set-up determines the
terms that the explanation is expressed in
• Only some aspects of the results will be
relevant to be matched to data
19. Explanatory modelling
[Diagram: built-in mechanisms drive the model’s processes, which produce the
model results; these correspond to the outcomes of the target system, so the
outcomes are explained by the processes]
20. Examples
• The model of a gas with atoms randomly
bumping around explains what happens in
a gas (but does not directly predict the
values)
• Lansing & Kremer’s (1993) model of water
distribution in Bali explained how the
system of water temples acted to enforce
social norms and a complicated series of
negotiations
21. Warnings
• The fit to the target data may be a very
special case, which would limit how well the
explanation carries over to other cases
• The process from mechanisms to outcomes
might be complex and poorly understood.
The explanation should be clearly stated
and tested. Assumptions behind this must
be tested.
• There might well be more than one possible
explanation (and/or model)!
22. Mitigating Measures
• Ensure the built-in mechanisms are
plausible and at the right level
• Be clear which aspects of the output are
considered significant and which are
artifacts of the simulation
• Probe the simulation to find when the
explanation works (noise, assumptions etc)
• Do classical experiments to show your
explanation works for your code
23. Purpose 3: Theory Exposition
24. Motivation
• If one has a system of equations,
sometimes one can analytically solve the
equations to get a general solution
• When this is not possible (almost all
complicated systems) we can calculate
specific examples – to simulate it!
• We aim to sufficiently explore the whole
space of behaviour to understand a
particular set of abstract mechanisms
25. What it is
Discovering then establishing (or refuting)
hypotheses about the general behaviour of a
set of mechanisms
• The hypotheses may need to be discovered
• But crucially showing the hypotheses hold
(or are refuted) by the set of experiments
• The hypotheses need to be quite general for
the exercise to be useful to others
• Does not say anything about the observed
world!
26. Modelling to understand Theory
[Diagram: the model’s processes produce the model results, which support a
hypothesis or general characterisation of the behaviour; the target system is
not directly involved]
27. Examples
• Many economic models are explorations of
sets of abstract mechanisms
• Deffuant, G., et al. (2002) How can
extremism prevail?
jasss.soc.surrey.ac.uk/5/4/1.html
• Edmonds & Hales (2003) Replication…
jasss.soc.surrey.ac.uk/6/4/11.html
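As a concrete sketch of theory exposition, here is a minimal bounded-confidence model in the spirit of the Deffuant et al. paper cited above (the parameter values and the crude cluster count are illustrative assumptions, not taken from that paper): one sweeps the mechanism’s parameter to discover, and then check, hypotheses about its general behaviour, without claiming anything about the observed world.

```python
import random

def bounded_confidence(n=100, threshold=0.2, mu=0.5, steps=20000, seed=1):
    """Minimal bounded-confidence sketch (in the spirit of Deffuant et al.):
    a random pair of agents move their opinions closer, but only if the
    opinions already differ by less than `threshold`."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if abs(opinions[i] - opinions[j]) < threshold:
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

def count_clusters(opinions, gap=0.05):
    """Crude characterisation: opinions separated by more than `gap`
    count as distinct clusters."""
    xs = sorted(opinions)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > gap)

# Explore the parameter space to support (or refute) a hypothesis such as:
# "wide confidence thresholds give consensus, narrow ones give fragmentation".
for t in (0.1, 0.3, 0.5):
    print("threshold", t, "->", count_clusters(bounded_confidence(threshold=t)))
```

The deliverable of such runs is the general characterisation of the mechanism's behaviour, which then needs establishing over an extensive set of experiments, as the next slide warns.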
28. Warnings & Mitigation
• A bug in the code is fatal to this purpose
• One rarely has a general idea of the outcome
behaviour in advance, so the exploration
needs to be extensive
• Clarity about what is claimed, the model
description etc. is very important
• It is tempting to use the model as a way of
thinking about the world, but this is
dangerous!
30. Motivation & What it is
• An idea is new but complex and one wants
to simply illustrate it
• This is a way of communicating through a
single (but maybe complex) example
A behaviour or system is illustrated precisely
using a simulation
• It might be a very special case, no
generality is established
• It might be used as a counter-example
31. Examples
• Sakoda/Schelling’s 2D model of
segregation, which showed that a high level
of racial intolerance was not necessary to
explain patterns of segregation
• Riolo et al. (2001) Evolution of cooperation
without reciprocity, Nature 414:441-443.
• Baum, E. (1996) Toward a model of mind
as a laissez-faire economy of idiots.
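The Sakoda/Schelling example can be illustrated with a deliberately minimal one-dimensional sketch (a simplification of the original two-dimensional model; the grid size, tolerance level and neighbourhood here are illustrative assumptions): agents happily accept being a local minority, and one can then inspect how much clustering nevertheless emerges.

```python
import random

def schelling_1d(n=100, want_same=0.4, steps=3000, seed=2):
    """Minimal 1-D Schelling-style sketch (not the original 2-D model):
    an agent moves to a random empty cell only when fewer than `want_same`
    of its occupied neighbours are its own type -- a fairly tolerant rule."""
    rng = random.Random(seed)
    fifth = n // 5
    cells = ['A'] * (2 * fifth) + ['B'] * (2 * fifth) + [None] * (n - 4 * fifth)
    rng.shuffle(cells)

    def unhappy(i):
        nbrs = [cells[(i + d) % n] for d in (-2, -1, 1, 2) if cells[(i + d) % n]]
        return bool(nbrs) and sum(x == cells[i] for x in nbrs) / len(nbrs) < want_same

    for _ in range(steps):
        i = rng.randrange(n)
        if cells[i] and unhappy(i):
            j = rng.choice([k for k in range(n) if cells[k] is None])
            cells[j], cells[i] = cells[i], None
    return cells

def same_neighbour_fraction(cells):
    """Fraction of adjacent occupied pairs (on a ring) that are the same type;
    higher values mean more segregation."""
    pairs = [(a, b) for a, b in zip(cells, cells[1:] + cells[:1]) if a and b]
    return sum(a == b for a, b in pairs) / len(pairs)

print("same-neighbour fraction:", same_neighbour_fraction(schelling_1d()))
```

On a random mix this fraction sits near 0.5, so values well above that would indicate clustering; the point of the original model was that such clustering emerges even at tolerant thresholds.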
32. Purpose 5: Analogy
33. Motivation & What it is
• Provides a ‘way of thinking about’ stuff
• The model is not (directly) about anything
observed, but about ideas (which, in turn,
may or may not relate to something
observed)
• It can suggest new insights or new future
directions for research
• We need analogies to help us think about
what to do (e.g. what and how to model)
34. An illustration of analogical use
of a model
[Diagram: the model relates to informal ideas, which are then applied – with
great flexibility – to target system 1 or target system 2]
35. Examples
• Axelrod’s Evolution of Cooperation models
(1984 etc.)
• Hammond & Axelrod (2006) The Evolution
of Ethnocentrism. Journal of Conflict
Resolution
• Many economic models which show the
efficiency of markets
• Many ecological models showing how
systems reach an equilibrium
36. Warnings
• When one has played with a model the
whole world looks like that model
• But this does not make it true!
• Such models can be very influential but (as
with the economic models of risk about
lending) can be very misleading
38. Motivation & What it is
• Much science involves a lot of description
(e.g. Darwin drawing Finches)
• Simulations can also be used in this way
This is an attempt to partially represent what
is important about a specific observed case
• It does abstract (as all modelling does) but
cautiously, retaining as much relevant detail
as possible
• Later we might go back to the description
and learn something new from it
39. Examples
• Scott Moss’s (1998) Model of handling
crises in a water pumping station. JASSS
1(4):1, jasss.soc.surrey.ac.uk/1/4/1.html
• Richard Taylor’s (2003) thesis.
http://cfpm.org/cpmrep137.html
• Sukaina Bharwani’s (2004) thesis.
Translating interviews with farmers into a
simulation http://goo.gl/MzJJR7
40. Warnings & Mitigation
• These models should have a good
evidential base – qualitative or quantitative
• Might use expert or stakeholder opinion
• Assumptions and their basis should be very
carefully documented
• They may be complex but have NO level of
generality – they are a particular case
• Further work is needed later for
generalisation or for analysing such
complicated simulations
41. Summary of Modelling Purposes
42. Some common confusions
• Firstly, in many publications researchers do
not make their model’s purpose clear
• So the model is hard to judge properly
• Some have simply not thought about it!
Some common confusions:
• Theory ↔ Analogy
• Illustration ↔ Explanation
• Description ↔ Explanation
• Explanation ↔ Prediction
43. Pragmatics of Model Development
44. Exploratory & Consolidation
• It is common that one does not know clearly
what one wants to program
• In this case you might ‘play’ with models,
exploring the possibilities, getting an idea of
what works, what is possible etc.
• But this does not make a model suitable for
communicating with others
• Rather you then need to re-do the model
properly and justify what it does rigorously
45. How complicated should you
make your model?
• Simpler models are easier for us humans to
deal with. Easier to: program, check,
understand, communicate, analyse etc.
• But if you leave out crucial processes your
model may simply not be adequate to
predict, explain etc. your target
• In general, we have no idea what is
necessary and what is not – this is a
major area of contention and confusion!
• So what to do? What strategy to follow?
46. KISS – keep it simple stupid!
• An engineering approach
• Start with a simple model
and add features only
when the simpler is shown
not to work
• The trouble is that it may be a
combination of mechanisms that is
required – trying them one
at a time might not work
[Diagram: scale from simple to complex]
47. KIDS – keep it descriptive stupid!
• Start with the available
evidence of what is important
• Then explore variations from
there, maybe showing some
are not required
• The trouble is that this makes for
complicated models, so care is needed
in development, and further work is
needed to understand your model
[Diagram: scale from simple to complex]
48. Which is better?
• There is no reason to suppose that simple
models will be adequate for socio-
ecological phenomena
• Limitations on: human understanding, time,
resources, data are inevitable and should
simply be honestly declared
• For each feature the decision whether to
include it or not is important and
needs to be justified
• This depends on the model purpose!
50. Using models together
• Part of the answer seems to be that we have
to use ‘clusters’ of models together
For example:
• A simpler model of a more complex model to
understand it (model chains, meta-modelling)
• Different (but related) models for different
aspects and different purposes
• Simpler models for understanding, with
more complex ones for actual use, checked
against the simpler ones
51. Many models but distinct
purposes
• Use different kinds of model together, but
keep them distinct!
• Each model for a single purpose
• Think how the models relate to each other
• If a model has two purposes, it is better to
think of them as different models (e.g.
modelA-theory and modelA-analogy)
• In presentations/publications you need to
justify a model for each use separately
53. Think!
• Think about what you are trying to do, and
why, as you develop or use your simulations
• Keep the models and purposes distinct but
think how they relate
• Analogies are needed to think, including
what we are going to model and why…
• But the serious models themselves are the
real business; they either demonstrate that
they are adequate to their purpose or they do not
54. Check!
• There are many problems with jumping to
publication as soon as interesting results appear
• But simulations need a lot of work before
they are worth others’ time
• Lots of checking of the code, the results etc.
is necessary
(jasss.soc.surrey.ac.uk/12/1/1.html)
• Lots of documentation of your code, your
results, your experiments, what fails etc. is
necessary
55. Share!
• You are not alone!
• The power of formal models (like computer
simulations) is they can be unambiguously
shared with others who can then run, check,
inspect, change etc.
• Share your models and ideas at an early stage
(e.g. at OpenABM.org), be honest and open –
this is the way to learn
• But distinguish between developing work and
mature work so others can understand how
seriously to take it
56. Beware!
• We, as humans, are very, VERY, VERY good
at deceiving ourselves
• Simulation models make this worse!
• We use models as a way of thinking about
things and then it is hard to think about them in
other ways (including new ways)
• Any way to shake us up and reconsider this is
good – empirical data is a good way of doing
this, critiquing models can also help
• You should be throwing away most of your
simulations, not elaborating bad models
57. The key idea of this talk
[Diagram: a single model of some observed phenomena – the simplistic picture
of a universal modelling approach]
58. The key idea of this talk
[Diagram: some observed phenomena are addressed by several distinct models –
an illustrative model, an explanatory model, a predictive model, a theoretical
model and an analogical model. We build this up ONE model at a time,
justifying each as we do]
59. The End
Bruce Edmonds http://bruce.edmonds.name
Centre for Policy Modelling http://cfpm.org
A paper that goes into 5 of the modelling purposes:
http://cfpm.org/discussionpapers/192
These slides will be available at:
http://slideshare.com/BruceEdmonds