Some supporting slides on modelling purposes and pitfalls when using ABM in policy contexts to accompany discussion on Modelling Pitfalls at the ESSA Summer School, Aberdeen, June 2019
Staging Model Abstraction – an example about political participation – Bruce Edmonds
A presentation at the workshop on ABM and Theory (From Cases to General Principles), Hannover, July 2019
This reports on work where we started with a complex but evidence-driven model, and then modelled that model to understand and abstract from it. As reported in the paper:
Lafuerza LF, Dyson L, Edmonds B, McKane AJ (2016) Staged Models for Interdisciplinary Research. PLoS ONE, 11(6): e0157261. DOI:10.1371/journal.pone.0157261
Towards Integrating Everything (well at least: ABM, data-mining, qual&quant d... – Bruce Edmonds
A talk given at the SKIN3 workshop in Budapest, May 2014 (http://cress.soc.surrey.ac.uk/SKIN/events/third-skin-workshop)
Innovation or other policy-orientated research has tended to take one of two strategies: (a) work with high-level abstractions of macro-level variables or (b) focus on micro-level aspects/areas with simpler mechanisms. Whilst (a) may provide some comfort in the form of forecasts, these are almost useless for policy since they can only be relied upon if nothing much has changed. Although approach (b) may produce some interesting studies which show how complex even small aspects of the involved processes are, with maybe interesting emergent effects, it provides only a small part of the overall picture and little to guide decision making.
Rather, I (with others) suggest a different approach. Instead of aiming to produce some kind of "adequate" theory (usually in the form of a model along with its interpretation), we aim at integrating different kinds of evidence and finding the best ways to present these to policy makers, helping them 'drive' by providing views of what is happening. Thus: (1) utilising the greatest possible range of evidence and (2) providing rich, relevant but synthetic views of this evidence to the policy makers. Any projections should be 'possibilistic' rather than 'probabilistic' - showing the different ways in which social processes might unfold, and helping to inform the analysis of risks. The talk looks at some of the ways in which this might be done, integrating micro-level narrative data, time-series data, survey data, network data and big data using a variety of techniques. In this view, models do not disappear, but rather have a different purpose and hence are developed and checked differently.
This shift will involve a change in attitude and approach from both researchers and those in the policy world. Researchers will have to give up the quest for general or abstract theory, satisfying themselves with more gentle and incremental abstraction, whilst also accepting and working with a greater variety of kinds of evidence. They will also have to stop 'conning' the policy world with forecasts, refusing to provide these on the grounds that they are more dangerous than helpful. The policy world will have to stop looking for a magic 'crutch' that will reduce uncertainty (or provide justification for chosen policies) and move towards greater openness with both data and models.
The Post-Truth Drift in Social Simulation – Bruce Edmonds
A talk at the Social Simulation Conference, Dublin, September 2017.
Abstract
The paper identifies a danger in the field of social simulation: that of using weasel words to give a false impression to the world about the achievements of our field. Whether this is intentional or unintentional, the effect might be to damage the reputation of the field and impair its development. At the root of this is a need for brutal honesty and openness, something that can be personally difficult and that needs social support. The paper considers some of the subtle ways that this kind of post-truth drift might occur, including: confusion/conflation of modelling purpose, wishing to justify pragmatic limitations in our work, falling back on unvalidated theory, mistaking the use of a model as a way of looking at the world for something more reliable, and seeking protection from critique in vagueness. It calls on social simulation researchers to firmly reject such a drift.
Drilling down below opinions: how co-evolving beliefs and social structure mi... – Bruce Edmonds
A talk at ODCD2017, Jacobs University, Bremen, July 2017. (http://odcd2017.user.jacobs-university.de/)
The talk looks at an alternative to "linear" models which deal with a Euclidean space of opinions (usually a 1D space). This is a model of belief change, where both social influence and the internal consistency of beliefs co-evolve with social structure. Thus it goes beyond most opinion dynamics models in a number of ways: (a) it deals with the beliefs that may underlie measured opinions, (b) the internal coherency among sets of beliefs is important as well as social influence, (c) the social structure co-evolves with belief change and (d) the social structures are complex and continually dynamic. The internal consistency of beliefs is based on Thagard's theory of explanatory coherence, which has some empirical support. The model seems to display some of the tensions and processes that are observed in politics, for example the tension between moderating views so as to connect with the public vs. reinforcing in-group coherency. It displays a dynamic that can reflect a number of different courses, including those that result in turning points in opinions.
An invited talk given at the Institute for Research into Superdiversity (IRIS), University of Birmingham, 31st Jan 2017
Abstract:
A simulation to illustrate how the complex patterns of cultural and genetic signals might combine to define what we mean by "groups" of people is presented. In this model both (a) how each individual might define their "in group" and (b) how each individual behaves to others in 'in' or 'out' groups can evolve over time. Thus groups are not something that is precisely defined but is something that emerges in the simulation. The point is to illustrate the power of simulation techniques to explore such processes in a non-prescriptive way that takes the micro-macro distinction seriously and represents them within complex simulations. In the particular simulation presented, groups defined by culture strongly emerge as dominant and ethnically defined groups only occur when they are also culturally defined.
Risk-aware policy evaluation using agent-based simulation – Bruce Edmonds
A talk about the modelling of complex issues of policy relevance. It covers some of the tensions and difficulties, as well as some of the unrealistic expectations of this kind of modelling. Rather, it is suggested that these kinds of model should be used for a kind of risk analysis. Two examples of this are given.
Talk given in Reykjavik at University of Iceland, 30th Nov 2016.
Using Data Integration Models for Understanding Complex Social Systems – Bruce Edmonds
Describing the use of complex, descriptive simulations to integrate the maximum amount of evidence in a staged manner. With an example from the SCID project (http://www.scid-project.org).
Policy Making using Modelling in a Complex World – Bruce Edmonds
A talk given at the CECAN workshop, London July 2016
Abstract:
The consequences of complexity in the real world are discussed together with some meaningful ways of understanding and managing such situations. The implication of such complexity is that many social systems are fundamentally unpredictable by nature, especially in the presence of structural change (transitions). This has consequences for the way we model, but also for the way models are used in the policy process.
I discuss the problems arising from too narrow a focus on quantification in managing complex systems, in particular those of optimisation. I criticise some of the approaches that ignore these difficulties and pretend to approximately forecast the impact of policy options using over-simple models. However, lack of predictability does not automatically imply a lack of managerial possibilities. We will discuss how some insights and tools from "Complexity Science" can help with such management. Managing complex systems requires a good understanding of the dynamics of the system in question - to know, before they occur, some of the real possibilities that might arise, and to be ready so they can be reacted to as responsively as possible. Agent-based simulation will be discussed as a tool that is suitable for this task, especially in conjunction with model-informed data visualisation.
A talk at the ESSA Silico Summer School in Wageningen, June 2017. It looks at some of the different purposes for a simulation model, and how complicated one should make one's model
A talk at ESSA@Work, TUHH (Technical University of Hamburg), 24th Nov 2017.
Abstract: Simulation models can only be justified with respect to the model's purpose or aim. The talk looks at six common purposes for modelling: prediction, explanation, analogy, theoretical exposition, description, and illustration. Each of these is briefly described, with an example and a brief analysis of the risks to achieving it, and hence how it should be demonstrated. The importance of being explicit about the model's purpose is repeatedly emphasised.
A talk at the workshop on "Agent-Based Models in Philosophy: Prospects and Limitations", Ruhr University, Bochum, Germany
Abstract:
ABMs (like other kinds of model) can be used in a purely abstract way, as a kind of thought experiment - a way of thinking about some aspect of the world that is too complicated to hold in our mind (in all its detail). In this way it both informs and complements discursive thought. However, there is another set of uses for ABMs - empirical uses - where the mapping between the model and sets of observation-derived data is crucial. For these uses, one has to (a) use the mapping to get from some data to the model, (b) use the model for some inference and (c) use the mapping again to get back to data. This includes both predictive and explanatory uses of ABMs. These are easily distinguishable from abstract uses because there is a fixed and well-defined relationship between the model and the data; it is not flexible on a case-by-case basis. In these cases the reliability comes from the composite (a)-(b)-(c) mapping, so that simplifying step (b) can be counterproductive if it means weakening steps (a) and (c), because it is the strength of the overall chain that is important. Taking the use of models in quantum mechanics as an example, one can see that sometimes the evolution of formal models driven by empirical adequacy can be more important than the attendant abstract models used to get a feel for what is happening. Although using ABMs for empirical purposes is more challenging than for purely abstract purposes, they are increasingly being used for empirical explanation rather than thought experiments, and there is no reason to suppose that robust empirical adequacy is unachievable.
Winter is coming! – how to survive the coming critical storm and demonstrate ... – Bruce Edmonds
A talk at the 2014 European Social Simulation Association summer school, at UAB in Barcelona 8th sept 2014
The talk covers some of the symptoms of hype in social simulation and argues that the field needs to be more careful and rigorous. In particular, the (current) purpose of a simulation needs to be distinguished as theoretical, explanatory or predictive, each having its own criteria.
Possibilistic prediction and risk analyses
A talk given at the EA annual Conference, Bonn, May 2015
Abstract:
It is in the nature of complex systems that predictions that give a probability are not possible.
Indeed I argue that giving "the most likely" or "rough" prediction is more harmful than useful.
Rather an approach which maps out some of the possible outcomes is outlined.
Agent-based modelling is ideal for producing these - including, crucially, possibilities that could not have been conceived just by thinking about it (due to the fact that events can combine in ways that are more complex than the human brain can cope with directly).
A characterisation of the real future possibilities and their nature allows some positive responses to events:
* putting in place 'early warning indicators' for the emergence of identified possibilities
* contingency planning for when they are indicated.
Such an approach would allow policy makers to better 'drive' their decision making, without abdicating responsibility to experts.
Mixing ABM and policy...what could possibly go wrong? – Bruce Edmonds
Invited talk at 19th International Workshop on Multi-Agent Based Simulation at Stockholm on 14th July 2018.
Mixing ABM and Policy ... what could possibly go wrong?
This talk looks at a number of ways in which using ABM in the context of influencing policy can go wrong: during model construction, in model application, and elsewhere.
It is related to the book chapter:
Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity - a handbook, 2nd edition. Springer, 801-822.
Mixing fat data, simulation and policy - what could possibly go wrong? – Bruce Edmonds
A talk given at the CECAN workshop on "What Good Data could do for Evaluation" at the Alan Turing Institute, 25th Feb. 2019.
Abstract:
In complex situations (which includes most where humans are involved) it is infeasible to predict the impact of any particular policy (or even what is probable). Randomised Control Trials do not tell one what kinds of situation a policy might work in, or what the enablers and inhibitors of its effectiveness are. Here I suggest that using 'fat' data and simulation might allow a possibilistic analysis of policy impact - namely, an exploration of what could go surprisingly wrong (or indeed right). Whilst this does not allow the optimisation of policy, it does inform the effective monitoring of policy, and basic contingency planning. However, this requires a different approach to policy - from planning and optimisation to an adaptive approach, with richer continual monitoring and a readiness to tune or adapt policy as data comes in. Examples of this are given concerning domestic water consumption (in the main talk) and, in supplementary slides, voter turnout and fishing.
The slides from a class on the relationship of formal modelling and their use, with particular focus on different purposes for modelling and how they can go wrong.
An Introduction to Agent-Based Modelling – Bruce Edmonds
An introduction to the technique with two example models of in-group bias and voter turnout.
An invited talk at the BIGSSS Summer Schools in Computational Social Science, at the Jacobs Bremen University, July 2018.
Different Modelling Purposes - an 'anti-theoretical' approach – Bruce Edmonds
Models are a tool, not a picture of reality. There are many different uses for models. The intended use of a model - its 'purpose' - affects how it is judged, checked and developed. Much confusion and bad practice in modelling can be attributed to not clearly identifying the intended 'purpose' for a model. Neo-classical Economics is used to illustrate some of these confusions. In some (but not all) uses the model stands in for a theory (at least key aspects of it), but this can happen in different ways and at different levels of abstraction. The talk looks at some of these different ways and advocates a staged, inductive methodology for theory development instead of one that jumps to high generality and simple models which confuse different uses.
A talk given at the Workshop on "From Cases To General Principles - Theory Development Through Agent-Based Modeling" see http://abm-theory.org
Quality problems are typically not simple: they often involve the complex interaction of several causes. A cause-and-effect diagram helps to define and display the major causes, sub-causes and root causes that influence a process or a characteristic, provides a focus for discussion and consensus, and visualises the possible relationships between causes which may be creating problems or defects.
Agent-Based Modelling and Microsimulation: Ne'er the Twain Shall Meet? – Edmund Chattoe-Brown
This presentation considers the differences in approach between ABM and microsimulation and considers the extent to which the two approaches might be reconciled.
Statistical Calculations 5
Statistical Calculations
Jeffree Doerflinger
Instructor: Margarita Rovira
August 22, 2014
1. For assistance with these calculations, see the Recommended Resources for Week One. Data, even numerically coded variables, can be at one of 4 levels – nominal, ordinal, interval, or ratio. It is important to identify which level a variable is at, as this impacts the kind of analysis we can do with the data. For example, descriptive statistics such as means can only be computed on interval- or ratio-level data. Please list, under each label, the variables in our data set that belong in each group.
Nominal
ID
Gender
Gender 1
Ordinal
Degree
Grade
Midpoint
Interval
Salary
Age
Performance Rating
Service
Ratio
Compa
Raise
2. The first step in analyzing data sets is to find some summary descriptive statistics for key variables. For salary, compa, age, performance rating, and service, find the mean and standard deviation for 3 groups: overall sample, females, and males. You can use either the Data Analysis Descriptive Statistics tool or the Fx =AVERAGE and =STDEV functions. Note: place data to the right; if you use Descriptive Statistics, place that to the right as well:
Salary
Overall = 45
Male = 52
Female = 38
Compa
Overall = 1.062
Male = 1.056
Female = 1.069
Age
Overall = 35.72
Male = 38.92
Female = 32.52
Performance rating
Overall = 85.9
Male = 87.6
Female = 84.2
Service
Overall = 8.96
Male = 10
Female = 7.92
Standard Deviations
Salary
Overall = 19.20140301
Male = 17.77638883
Female = 18.29389698
Compa
Overall = 0.07682507
Male = 0.083789061
Female = 0.070344699
Age
Overall = 8.251258407
Male = 8.38609961
Female = 8.251258407
Performance rating
Overall = 11.41472375
Male = 8.674675786
Female = 13.59227722
Service
Overall = 5.717713258
Male = 6.357410374
Female = 4.906798005
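The means and standard deviations above can be reproduced in a few lines of code as well as with the spreadsheet functions mentioned in the question. A minimal sketch, using a made-up service column since the underlying data set is not reproduced here (Python's `stdev` is the sample standard deviation, matching Excel's `=STDEV`):

```python
from statistics import mean, stdev

# Hypothetical 'Service' column -- the real data set is not shown here
service = [2, 5, 7, 9, 12, 15, 8, 10, 6, 11]

# Mean and sample standard deviation, as asked for in part 2 above
print(round(mean(service), 2))   # 8.5
print(round(stdev(service), 2))  # 3.75
```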
3. What is the probability for a:
a. Randomly selected person being a male in grade E?
= 25/49*12/49
= 0.1249
b. Randomly selected male being in grade E?
= 10/25
= 0.4
c. Why are the results different?
In the first case the person is chosen at random from the whole sample; in the second, the person is chosen at random from the group of males only, so the sample space is different.
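The distinction between the two calculations can be made concrete in code. A sketch using the counts implied by the answers above (49 employees in total, 25 of them male, 12 in grade E, 10 of whom are male; these counts are inferred from the answers, not stated explicitly in the data):

```python
total = 49       # all employees
males = 25       # male employees
grade_e = 12     # employees in grade E
males_in_e = 10  # male employees in grade E (inferred from part b)

# (a) as computed above: P(male) * P(grade E), drawing from the whole sample
p_product = (males / total) * (grade_e / total)

# (b) P(grade E | male): the sample space is restricted to males only
p_conditional = males_in_e / males

print(round(p_product, 4))  # 0.1249
print(p_conditional)        # 0.4
```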
4. For each group (overall, females, and males) find:
a. The value that cuts off the top 1/3 salary in each group.
Overall = 58
Male = 62
Female = 41
b. The z score for each value.
z = (X - μ) / σ
Overall = (58-45)/19.20 = 0.67708
Male = (62-52)/17.77 = 0.5627
Female = (41-38)/18.29 = 0.1640
c. The normal curve probability of exceeding this score.
Overall = 0.2332
Male = 0.2147
Female = 0.1332
d. What is the empirical probability of being at or exceeding this salary value?
Overall = 0.2525
Male = 0.2214
Female = 0.1564
e. The score that cuts off the top 1/3 compa in each group.
Overall = 1.096
Male = 1.086
Female = 1.096
f. The z score for each value.
z = (X - μ) / σ
Overall = (1.096-1.062)/0.07683 = 0.4425
Male = .
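The z-score and tail-probability steps in parts (b) and (c) can be checked in code. A sketch using the overall-salary figures from above (`statistics.NormalDist` requires Python 3.8+; tail values may differ slightly from table lookups because of rounding):

```python
from statistics import NormalDist

x, mu, sigma = 58, 45, 19.20  # overall top-1/3 cutoff, mean, std dev

# z = (X - mu) / sigma, as in part (b)
z = (x - mu) / sigma
print(round(z, 5))            # 0.67708

# Normal-curve probability of exceeding this score, as in part (c)
p_exceed = 1 - NormalDist().cdf(z)
print(round(p_exceed, 4))
```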
Social Context
An invited talk at the 2018 Surrey Sociology Conference, Barnett Hill, Surrey, November 2018.
Although there is much evidence that context is crucial to much human cognition and social behaviour, it remains a difficult area to research. In much social science research it is either by-passed or ignored. In some qualitative research context is almost deified, with any generalisation across contexts being left to the reader. At the other extreme, some quantitative research restricts itself to patterns that are generally detectable - that is, the patterns that are left when one aggregates over many different contexts. Context is often used as a 'dustbin concept' to which otherwise unexplained variation is attributed.
This talk looks at some of the ways social context might be actively represented, understood and researched. Firstly the ideas of cognitive then social context are distinguished. Then some possible approaches to researching this are discussed, including: agent-based simulation, a context-sensitive analysis of narrative data and machine learning.
Using agent-based simulation for socio-ecological uncertainty analysisBruce Edmonds
A talk given in the MMU Big Data Centre, 30th October 2018.
Both social and ecological systems can be highly complex, but the interaction between these two worlds - a socio-ecological system (SES) - can add even greater levels of complexity. However, the maintenance of SES is vital to our well-being and the health of the planet. We do not know how such systems work in practice and we lack good data about them (especially on the ecological side), so predicting the effect of any particular policy is infeasible. Here we present an approach which tries to understand some of the ways in which an SES may go wrong, by constructing different complex simulation models and analysing the emergent outcomes. These in silico examples can allow for the institution of targeted data-gathering instruments that give the earliest possible warning of deleterious outcomes, and thus allow for timely remedial responses. An example of this approach applied to fisheries is described.
A talk at the ESSA Silico Summer School in Wageningen, June 2017. It looks at some of the different purposes for a simulation model, and how complicated one should make one's model
A talk at ESSA@Work, TUHH (Technical University of Hamburg), 24th Nov 2017.
Abstract: Simulation models can only be justified with respect to the models purpose or aim. The talk looks at six common purposes for modelling: prediction, explanation, analogy, theoretical exposition, description, and illustration. Each of these is briefly described, with an example and an brief analysis of the risks to achieving these, and hence how they should be demonstrated. The importance of being explicitly clear about the model purpose is repeatedly emphasised.
A talk at the workshop on "Agent-Based Models in Philosophy: Prospects and Limitations", Rurh University, Bochum, Germany
Abstract:
ABMs (like other kinds of model) can be used in a purely abstract way, as a kind of thought experiment - a way of thinking about some aspect of the world that is too complicated to hold in our mind (in all its detail). In this way it both informs and complements discursive thought. However there is another set of uses for ABMs - empirical uses - where the mapping between the model and sets of observation-derived data are crucial. For these uses, one has to (a) use the mapping to get from some data to the model (b) use the model for some inference and (c) use the mapping again back to data. This includes both predictive and explanatory uses of ABMs. These are easily distinguishable from abstact uses becuase there is a fixed and well-defined relationship between the model and the data, this is not flexible on a case by case basis. In these cases the reliability comes from the composite (a)-(b)-(c) mapping, so that simplifying step (b) can be counterproductive if that means weakening steps (a) and (c) because it is the strength of the overall chain that is important. Taking the use of models in quantum mechanics as an example, one can see that sometimes the evolution of the formal models driven by empirical adequacy can be more important than the attendent abstract models used to get a feel for what is happening. Although using ABM's for empirical purposes is more challenging than for purely abstract purposes, they are being increasingly used for empirical explanation rather than thought experiments, and there is no reason to suppose that robust empirical adequacy is unachievable.
Winter is coming! – how to survive the coming critical storm and demonstrate ... - Bruce Edmonds
A talk at the 2014 European Social Simulation Association summer school, at UAB in Barcelona, 8th September 2014
The talk covers some of the symptoms of hype in social simulation and argues that the field needs to be more careful and rigorous. In particular, the (current) purpose of a simulation needs to be distinguished as theoretical, explanatory or predictive, each having its own criteria.
Possibilistic prediction and risk analyses
A talk given at the EA annual Conference, Bonn, May 2015
Abstract:
It is in the nature of complex systems that predictions that give a probability are not possible.
Indeed I argue that giving "the most likely" or "rough" prediction is more harmful than useful.
Rather an approach which maps out some of the possible outcomes is outlined.
Agent-based modelling is ideal for producing these - including, crucially, possibilities that could not have been conceived just by thinking about it (because events can combine in ways that are more complex than the human brain can cope with directly).
A characterisation of the real future possibilities and their nature allows some positive responses to events:
* putting in place 'early warning indicators' for the emergence of identified possibilities
* contingency planning for when they are indicated.
Such an approach would allow policy makers to better 'drive' their decision making, without abdicating responsibility to experts.
Mixing ABM and policy... what could possibly go wrong? - Bruce Edmonds
Invited talk at 19th International Workshop on Multi-Agent Based Simulation at Stockholm on 14th July 2018.
This talk looks at a number of ways in which using ABM in the context of influencing policy can go wrong: during model construction, during model application, and in other ways.
It is related to the book chapter:
Aodha, L. and Edmonds, B. (2017) Some pitfalls to beware when applying models to issues of policy relevance. In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity - a handbook, 2nd edition. Springer, 801-822.
Mixing fat data, simulation and policy - what could possibly go wrong? - Bruce Edmonds
A talk given at the CECAN workshop on "What Good Data could do for Evaluation" at the Alan Turing Institute, 25th Feb. 2019.
Abstract:
In complex situations (which includes most where humans are involved) it is infeasible to predict the impact of any particular policy (or even to say what is probable). Randomised Controlled Trials do not tell one what kinds of situation a policy might work in, or what the enablers and inhibitors of a policy's effectiveness are. Here I suggest that using 'fat' data and simulation might allow a possibilistic analysis of policy impact - namely, an exploration of what could go surprisingly wrong (or indeed right). Whilst this does not allow the optimisation of policy, it does inform the effective monitoring of policy, and basic contingency planning. However, this requires a different approach to policy - from planning and optimisation to an adaptive approach, with richer continual monitoring and a readiness to tune or adapt policy as data comes in. Examples of this are given concerning domestic water consumption (in the main talk) and, in supplementary slides, voter turnout and fishing.
The slides from a class on the relationship between formal models and their use, with particular focus on different purposes for modelling and how they can go wrong.
An Introduction to Agent-Based Modelling - Bruce Edmonds
An introduction to the technique with two example models of in-group bias and voter turnout.
An invited talk at the BIGSSS Summer Schools in Computational Social Science, at the Jacobs Bremen University, July 2018.
Different Modelling Purposes - an 'anti-theoretical' approach - Bruce Edmonds
Models are a tool, not a picture of reality. There are many different uses for models. The intended use of a model - its 'purpose' - affects how it is judged, checked and developed. Much confusion and bad practice in modelling can be attributed to not clearly identifying the intended 'purpose' for a model. Neo-classical Economics is used to illustrate some of these confusions. In some (but not all) uses the model stands in for a theory (at least key aspects of it), but this can happen in different ways and at different levels of abstraction. The talk looks at some of these different ways and advocates a staged, inductive methodology for theory development instead of one that jumps to high generality and simple models which confuse different uses.
A talk given at the Workshop on "From Cases To General Principles - Theory Development Through Agent-Based Modeling" see http://abm-theory.org
Agent-Based Modelling and Microsimulation: Ne’er the Twain Shall Meet? Edmund Chattoe-Brown
This presentation considers the differences in approach between ABM and microsimulation and considers the extent to which the two approaches might be reconciled.
Social Context
An invited talk at the 2018 Surrey Sociology Conference, Barnett Hill, Surrey, November 2018.
Although there is much evidence that context is crucial to much human cognition and social behaviour, it remains a difficult area to research. In much social science research it is either bypassed or ignored. In some qualitative research, context is almost deified, with any generalisation across contexts being left to the reader. At the other extreme, some quantitative research restricts itself to patterns that are generally detectable - that is, the patterns that are left when one aggregates over many different contexts. Context is often used as a 'dustbin concept' to which otherwise unexplained variation is attributed.
This talk looks at some of the ways social context might be actively represented, understood and researched. First, the ideas of cognitive and then social context are distinguished. Then some possible approaches to researching this are discussed, including: agent-based simulation, a context-sensitive analysis of narrative data, and machine learning.
Using agent-based simulation for socio-ecological uncertainty analysis - Bruce Edmonds
A talk given at the MMU Big Data Centre, 30th October 2018.
Both social and ecological systems can be highly complex, but the interaction between these two worlds - a socio-ecological system (SES) - can add even greater levels of complexity. However, the maintenance of SESs is vital to our well-being and the health of the planet. We do not know how such systems work in practice and we lack good data about them (especially on the ecological side), so predicting the effect of any particular policy is infeasible. Here we present an approach which tries to understand some of the ways in which an SES may go wrong, by constructing different complex simulation models and analysing the emergent outcomes. These in silico examples can allow for the institution of targeted data-gathering instruments that give the earliest possible warning of deleterious outcomes, and thus allow for timely remedial responses. An example of this approach applied to fisheries is described.
How social simulation could help social science deal with context - Bruce Edmonds
An invited plenary at Social Simulation 2018, Stockholm.
This points out how context-sensitivity is fundamental to much human social behaviour, but is largely bypassed or ignored in social science. In more formal social science, it is usual to assume or fit universal models, even if this covers a lot of different contexts. In qualitative social science context is almost deified, and any generalisation across contexts is passed on to those that learn from it. Agent-based modelling allows for context-sensitive models to be developed, and hence the role of context to be explored and better understood. The talk discussed a framework for analysing narrative text using the Context-Scope-Narrative-Elements (CSNE) framework. It also illustrates a cognitive model that allows for context-dependent knowledge to be implemented within an agent in a simulation. The talk ends with a plea to avoid unnecessary or premature summarisation (using averages etc.).
Agent-based modelling, laboratory experiments, and observation in the wild - Bruce Edmonds
An invited talk at the workshop on "Social complexity and laboratory experiments – testing assumptions and predictions of social simulation models with experiments" at Social Simulation 2018, Stockholm
Culture trumps ethnicity! – Intra-generational cultural evolution and ethnoce... - Bruce Edmonds
Essential to understanding the impact of in-group bias on society is the micro-macro link and the complex dynamics involved. Agent-based modelling (ABM) is the only technique that can formally represent this and thus allow for the more rigorous exploration of possible processes and their comparison with observed social phenomena. This talk discusses these issues, providing some examples of relevant ABMs.
A talk given at the BIGSSS summer school on conflict, Bremen, Jul/Aug 2018.
Socio-Ecological Simulation - a risk-assessment approach - Bruce Edmonds
An invited talk in Tromsoe, 5 June 2018.
Both social and ecological systems are complex, but when they combine (as when human societies farm/hunt) there is a double complexity. This complexity means it is infeasible to predict the outcome of their interaction and unwise to rely on any prediction. An alternative approach is to use complex simulations to try and discover some possible ways that such systems can go wrong. This can reveal risks that other approaches might miss, due to the fact that more of the complexity is included within the model. Once a risk is identified then measures to monitor its emergence can be implemented, allowing the earliest possible warning of this. An example of this approach applied to a fisheries ecosystem is described.
A talk at the workshop on "Thinking toys (or games) for commoning", Basel, 5/6 April, Switzerland.
This describes a simple model of anonymous donation of resources, with minimal group structuring.
An open-access paper on this model is at: http://cfpm.org/discussionpapers/152
The model can be freely downloaded from:
http://openABM.org/model/4744
Modelling Innovation – some options from probabilistic to radical - Bruce Edmonds
A talk on the various kinds of innovation based on Margaret Boden's types of creativity. Given at the European Academy, Ahrweiler, Germany, 31st May 2017.
Co-developing beliefs and social influence networks - Bruce Edmonds
Argues that many social phenomena need ABM models in which cognitive and social change co-develop.
Presented at the AISB workshop in Bath, April 2017, on "The power of Immergence...". See the last slide for details of where to get the paper and the model.
Towards Institutional System Farming
A talk at the Lorentz Workshop on "Emerging Institutions: Design or Evolution?" September 2016, Leiden, NL (https://www.lorentzcenter.nl/lc/web/2016/836/info.php3?wsid=836&venue=Oort)
A Model of Social and Cognitive Coherence - Bruce Edmonds
An invited talk at the Workshop on Coherence-Based Approaches to Decision-Making, Cognition and Communication, Berlin, July 2016
Human cognition can be usefully understood as a primarily social set of abilities - its survival benefit comes from our ability to organise socially and hence inhabit a variety of niches. From this point of view, any ability makes more sense when put into a social context. This includes our innate ability to judge candidate beliefs in terms of their coherency with our existing beliefs and goals. However, studying cognition in its social context implies high complexity; for this reason I describe an agent-based model of coherency-based belief within a dynamic network of individuals. Here beliefs might be copied (or discarded) by an individual based upon the change in coherence this causes with its other beliefs, but also an individual will change their social connections based upon the coherence of their beliefs with those they socially interact with.
Staged Models for Interdisciplinary Research - Bruce Edmonds
A talk given at CosyDy, Leeds, 12th May 2016.
The papers can be read at: http://arxiv.org/abs/1604.00903 (this work, soon in PLoSOne) and http://arxiv.org/abs/1508.04024 (the further simplification step, soon in EPJ-B)
The models are at: http://openabm.org/model/4368 and http://openabm.org/model/4686
1. Pitfalls of ABM, Bruce Edmonds, ESSA Summer School, Aberdeen, June 2019. slide 1
the Pitfalls of ABM
– some resources
Bruce Edmonds
Centre for Policy Modelling
Manchester Metropolitan University
Some Different Model Purposes
Purpose 1: Prediction
Motivation
• If you can reliably predict something about the world (that you did not already know), this is undeniably useful…
• ...even if you do not know why your model predicts (e.g. a black-box model)!
• But it has also become the ‘gold standard’ of science…
• ...because (unlike many of the other purposes) it is difficult to fudge or fool yourself about – if it’s wrong, this is obvious.
Predictive modelling
[Diagram: the model set-up corresponds to the target system's initial conditions, and the model results correspond to the target system's outcomes (Hesse 1963)]
What it is
The ability to anticipate unknown data reliably and to a useful degree of accuracy
• Some idea of the conditions in which it does this has to be understood (even if this is vague)
• The data it anticipates has to be unknown to the modeller when using the model
• What is a useful degree of accuracy depends on the purpose for predicting
• What is predicted can be: categorical, probability distributions, ranges, negative predictions, etc.
Examples
• The gas laws (temperature is proportional to pressure at the same volume, etc.) predict future measurements on a gas without any indication of why this works
• Nate Silver’s team tries to predict the outcome of sports events and elections using computational models. These are usually probabilistic predictions, and the predicted distribution of outcomes is displayed (http://fivethirtyeight.com and Silver 2013)
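The gas-law example can be made concrete in code. This is a minimal illustrative sketch, not anything from the talk: the function names and all measurement values are invented. A proportionality constant is fitted on known temperature/pressure data, then used to anticipate a measurement that was not used in fitting, which is what distinguishes prediction from mere curve-fitting of known data.

```python
# Hypothetical sketch of prediction via the gas law P = k * T
# (pressure proportional to absolute temperature at fixed volume).
# All measurement values are made up for illustration.

def fit_constant(temps_k, pressures_kpa):
    """Least-squares fit of k in P = k * T (line through the origin)."""
    num = sum(t * p for t, p in zip(temps_k, pressures_kpa))
    den = sum(t * t for t in temps_k)
    return num / den

def predict_pressure(k, temp_k):
    """Anticipate an as-yet-unmeasured pressure from the fitted constant."""
    return k * temp_k

known_temps = [280.0, 300.0, 320.0]      # past measurements (kelvin)
known_pressures = [93.3, 100.0, 106.7]   # corresponding pressures (kPa)
k = fit_constant(known_temps, known_pressures)

# The prediction proper: a temperature we have no measurement for yet.
print(predict_pressure(k, 350.0))
```

Note that, as the slides stress, the model predicts without saying why the law holds; checking it means comparing the printed value against a genuinely new measurement, not against the data used for fitting.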
Risks and Warnings
• There are two different uses of the word ‘predict’: one as above, and one to indicate any calculation made using a model (the second confuses others)
• This requires repeated attempts at anticipating unknown data (and learning from this)
• ...because it is otherwise impossible to avoid ‘fitting’ known data (due to publication bias etc.)
• If the outcome is unknown and can be unambiguously checked, it could be predictive
• Prediction is VERY hard in the social sciences – for this reason, it is rarely done
Mitigating Measures
• The following are documented:
– what aspects it predicts
– roughly when it predicts well
– what degree of accuracy it predicts with
• Check that the model predicts on several independent cases
• Ensure the program is distributed so others can independently check its predictions
Purpose 2: Explanation
Motivation
• When one wants to understand why or how something happens
• One makes a simulation with the mechanisms one wants and then shows that the results fit the observed data
• The intricate workings of the simulation runs support an explanation of the outcomes in terms of those mechanisms
• The explanation is usually an abstraction of the model workings, so as to be comprehensible to us (e.g. a hypothesis about model behaviour)
What it is
Establishing a possible causal chain from a set-up to its consequences in terms of the mechanisms of a simulation
• The causation can be deterministic, possibilistic or probabilistic
• The nature of the set-up constrains the terms that the explanation is expressed in
• Only some aspects of the results will be relevant to be matched to data
• But how the model maps to data/evidence is explicitly specified
Explanatory modelling
[Diagram: the model's processes stand in for the target system's mechanisms, and the model's results are matched against the target system's outcomes; the outcomes are explained by the processes]
Examples
• The model of a gas with atoms randomly bumping around explains what happens in a gas (but does not directly predict the values)
• Lansing & Kremer’s (1993) model of water distribution in Bali explained how the system of water temples acts to help enforce social norms and facilitate a complicated series of negotiations
Risks and Warnings
• A bug in the code is fatal to this purpose if it could change the outcomes substantially
• The fit to the target data may be a very special case, which would limit the likelihood of the explanation holding over similar cases
• The process from mechanisms to outcomes might be complex and poorly understood. The explanation should be clearly stated and tested. Assumptions behind this must be tested.
• There might well be more than one possible explanation (and/or model)!
Mitigating Measures
• Ensure the built-in mechanisms are plausible and of the right kind to support an explanation
• Be clear which aspects of the output are considered significant (i.e. those that are explained) and which are artifacts of the simulation
• Probe the simulation to find when the explanation works (noise, assumptions etc.)
• Do classical experiments to show your explanation works for your code
Purpose 3: Theory Exposition
Motivation
• If one has a system of equations, sometimes one
can analytically solve the equations to get a
general solution (i.e. a ‘closed form’ solution)
• When this is not possible (the case for almost all
complicated systems) we can calculate specific
examples – i.e. we simulate it!
• Using multiple runs, we aim to sufficiently explore
the whole space of behaviour to understand the
effect of this particular set of abstract mechanisms
• We might approximate these with equations (or a
simpler model) to check this understanding
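As a sketch of what such an exercise looks like (a generic illustration, not taken from the slides), the code below simulates the logistic map, a standard abstract mechanism, at many settings of its control parameter and characterises the long-run behaviour at each.

```python
def logistic_orbit(r, x0=0.2, transient=500, sample=64):
    """Iterate x -> r*x*(1-x), discard a transient, return later values
    rounded so that repeated visits to a cycle collapse together."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(sample):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

def n_distinct(r):
    """Number of distinct long-run values: 1 suggests a fixed point,
    small numbers suggest cycles, many suggest chaotic behaviour."""
    return len(set(logistic_orbit(r)))

# Sweep the control parameter to map out the space of behaviour
for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r={r}: {n_distinct(r)} distinct long-run values")
```

The sweep discovers a hypothesis (one stable value, then cycles, then many scattered values as r grows) that can afterwards be checked against the known analytical results for this map, in the spirit of approximating the simulation with a simpler characterisation.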
Slide 19
What it is
Discovering then establishing (or refuting)
hypotheses about the general behaviour of a set of
mechanisms
• The hypotheses may need to be discovered
• But, crucially, showing the hypotheses hold (or
are refuted) by the set of experiments
• There needs to be a wide (maybe even complete)
exploration of outcomes
• The hypotheses need to be quite general for the
exercise to be useful to others
• Does not say anything about the observed world!
Slide 20
Modelling to understand Theory
[Diagram: the model’s processes produce the model results, from which a hypothesis or general characterisation of behaviour is drawn; the target system is shown apart, with no mapping to it]
Slide 21
Examples
• Many economic models are explorations of sets of
abstract mechanisms
• Deffuant, G., et al. (2002) How can extremism
prevail?
jasss.soc.surrey.ac.uk/5/4/1.html
• Edmonds & Hales (2003) Replication…
jasss.soc.surrey.ac.uk/6/4/11.html
Slide 22
Risks, Warnings & Mitigation
• A bug in the code is dangerous to this purpose
since the explanation might partly be based on an
understanding of what the code was supposed to do
• A general idea of the outcome behaviour is
needed so the exploration needs to be extensive
• The code needs to be available so that people
can test its assumptions etc.
• Clarity about what is claimed, the model
description etc. is very important
Slide 23
Purpose 4: Illustration
Slide 24
Motivation & What it is
• An idea is new but has complex ramifications and
one wants to simply illustrate it
• This is a way of communicating through a single
(but maybe complex) example
A behaviour or system is illustrated precisely using
a simulation in an understandable way
• It might be a very special case, no generality is
established or claimed
• It might be used as a counter-example
• Simpler models are easier to communicate and
hence often make more vivid illustrations
Slide 25
Examples
• Sakoda/Schelling’s 2D model of segregation,
which showed that a high level of racial
intolerance was not necessary to explain patterns
of segregation
• Riolo et al. (2001) Evolution of cooperation
without reciprocity, Nature 414:441-443.
• Baum, E. (1996) Toward a model of mind as a
laissez-faire economy of idiots.
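The illustrative point can be made with a minimal sketch. The following is a hypothetical 1-D toy, not Sakoda’s or Schelling’s actual model: agents on a ring want only one of their two neighbours to share their type, and discontented agents swap to spots whose current neighbours suit them.

```python
import random

def schelling_ring(n=200, want=1, steps=30_000, seed=1):
    """Hypothetical 1-D sketch of the Sakoda/Schelling idea: two types
    of agent on a ring; an agent wanting at least `want` of its 2
    neighbours to match will swap with a randomly chosen agent, but
    only to a spot whose current neighbours suit it."""
    rng = random.Random(seed)
    ring = [rng.choice("AB") for _ in range(n)]

    def content(i, kind):
        alike = sum(ring[(i + d) % n] == kind for d in (-1, 1))
        return alike >= want

    for _ in range(steps):
        i = rng.randrange(n)
        if not content(i, ring[i]):
            j = rng.randrange(n)
            if content(j, ring[i]):        # move only somewhere acceptable
                ring[i], ring[j] = ring[j], ring[i]
    return ring

def like_neighbour_share(ring):
    """Fraction of adjacent pairs of the same type (about 0.5 when mixed)."""
    n = len(ring)
    return sum(ring[i] == ring[(i + 1) % n] for i in range(n)) / n

ring = schelling_ring()
print(f"share of like neighbours: {like_neighbour_share(ring):.2f}")
```

Even this mild preference typically pushes the like-neighbour share well above the roughly 0.5 expected of a random mixture, which is the counter-intuitive point being illustrated.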
Slide 26
Purpose 5: Analogy
Slide 27
Motivation & What it is
• Provides a ‘way of thinking about’ stuff
• The model does not (directly) tell us about
anything observed, but is about ideas (which, in
turn, may or may not relate to something
observed)
• It can suggest new insights – e.g. new
hypotheses or future research directions
• We need analogies to help us think about what to
do (e.g. what and how to model)
• They are unavoidable
• They are very useful, but can also be deceptive
Slide 28
Models as Analogies
[Diagram: models of the processes in the system are related by comparison to an intuitive understanding expressed in normal language, which in turn connects via common sense to observations of the system of concern]
Slide 29
Examples
• Axelrod’s Evolution of Cooperation models (1984
etc.)
• Hammond & Axelrod (2006) The Evolution of
Ethnocentrism. Journal of Conflict Resolution
• Many economic models which show an ‘efficient’
market
• Many ecological models showing how systems
reach an equilibrium
Slide 30
Warnings
• When one has played with a model, the whole
world looks like that model (especially to the
model builder)
• But this does not make the model true!
• Such models can be very influential but (as with
the economic models of risk about lending) can
be very misleading
• At best, they can suggest hypotheses about the
observed world, but they don’t demonstrate
anything
Slide 31
How to decide what a model’s purpose is
Is there a well-defined mapping from model to evidence?
• Yes – the model is empirical:
– Does it describe a particular case? Yes → Descriptive
– If not: does it predict unknown data reliably? Yes → Predictive; No → Explanatory
• No – the model is theoretical or conceptual:
– Does it thoroughly explore the consequences of some mechanisms? Yes → Theory Exposition
– If not: is it a general way of thinking about a set of phenomena? Yes → Analogical; No → Illustration
Slide 32
Some Pitfalls in Model Construction
Slide 33
Modelling Assumptions
• All models are built on assumptions, but…
• They have different origins and reliability, e.g.:
– Empirical evidence
– Other well-defined theory
– Expert Opinion
– Common-sense
– Tradition
– Stuff we had to assume to make the model possible
• Choosing assumptions is part of the art of
simulation but which assumptions are used
should be transparent and one should be honest
about their reliability – plausibility is not enough!
Slide 34
Theoretical Spectacles
• Our conceptions and models constrain:
1. how we look for evidence (e.g. where and what kinds)
2. what kind of models we develop
3. how we evaluate any results
• This is Kuhn’s “Theoretical Spectacles” (1962)
– e.g. continental drift
• This is MUCH stronger for a complex simulation
we have immersed ourselves in
• Try to remember that just because it is useful to
think of the world through our models, this does
not make them valid or reliable
Slide 35
Over-Simplified Models
• Although simple models have many pragmatic
advantages (easier to check, understand etc.)…
• If we have missed out key elements of what is being
modelled it might be completely wrong!
• Playing with simple models to inform formal and
intuitive understanding is an OK scientific practice
• …but it can be dangerous when informing policy
• Simple does not mean it is roughly correct, or more
general or gives us useful intuitions
• We need to accept that many modelling tasks
requested of us by policy makers are not wise to
undertake with restricted amounts of time/data/resources
Slide 36
Underestimating model limitations
• All models have limitations
• They are only good for certain things: a model
that explains well might not predict well
• They may well fail when applied in a different
context than the one they were developed in
• Policy actors often do not want to know about
limitations and caveats
• Not only do we have to be 100% honest about
these limitations, but we also have to ensure that
these limitations are communicated with the
model
Slide 37
Not checking & testing a model
thoroughly
• Doh!
• Sometimes there is not a clear demarcation
between an exploratory phase of model
development and its application to serious
questions (whose answers will impact on others)
• Sometimes an answer is demanded before
thorough testing and checking can be done – “It’s
OK, I just want an approximate answer” :-/
• Sometimes researchers are not honest
• Depends on the potential harm if the model is
relied on (at all) and turns out to be wrong
Slide 38
Some Pitfalls in Model Application
Slide 39
Insufficiently Validated Models
• One cannot rely on a model until it has been
rigorously checked and tested against reality
• Plausibility is nowhere NEAR enough
• This needs to be done on more than one case
• It’s better if this is done independently
• You cannot validate a model using one set of
settings/cases and then rely on it in another
• Validation usually takes a long time
• Iterated development and validation over many
cycles is better than one-off models (for policy)
Slide 40
Promising too much
• Modellers are in a position to see the potential of
their work, and so can tantalise others by
suggesting possible/future uses (e.g. in the
conclusions of papers or grant applications)
• They are tempted to suggest they can ‘predict’,
‘evaluate the impact of alternative polices’ etc.
• Especially with complex situations (that ABM is
useful for) this is simply deceptive
• ‘Giving a prediction to a policy maker is like giving
a sharp knife to a child’
Slide 41
The inherent plausibility of ABMs
• Due to the way ABMs map onto reality in a
common-sense manner (e.g. people → agents)…
• …visualisations of what is happening can be
readily interpreted by non-modellers
• and hence given much greater credence than
they warrant (i.e. more than their validation justifies)
• It is thus relatively easy to persuade using a good
ABM and visualisation
• Only we know how fragile they are, and need to
be especially careful about suggesting otherwise
Slide 42
Model Spread
• One of the big advantages of formal models is that
they can be passed around to be checked, played
with, extended, used etc.
• However once a model is out there, it might get
used for different purposes than intended
• e.g. the Black-Scholes model of derivative pricing
• Try to ensure a released model is packaged with
documentation that warns of its uses and
limitations
Slide 43
Narrowing the evidential base
• The case of the Newfoundland cod indicates how
models can work to constrain the evidence base,
thereby limiting decision making
• If a model is considered authoritative, then the
data it uses and produces can sideline other
sources of evidence
• Using a model rather than measuring lots of stuff
is cheap, but with obvious dangers
• Try to ensure models are used to widen the
possibilities considered, rather than limit them
Slide 44
Other/General Pitfalls
Slide 45
Confusion over model purpose
• A model is not a picture of reality, but a tool
• A tool has a particular purpose
• A tool good for one purpose is probably not good
for another
• These include: prediction, explanation, as an
analogy, an illustration, a description, for theory
exploration, or for mediating between people
• Modellers should be 100% clear under which
purpose their model is to be judged
• Models need to be justified for each purpose
separately
Slide 46
When models are used out of the
context they were designed for
• Context matters!
• In each context there will be many
conditions/assumptions we are not even aware of
• A model designed in one context may fail for
subtle reasons in another (e.g. different ontology)
• Models generally need re-testing, re-validating
and often re-developing in new contexts
Slide 47
What models cannot reasonably do
• Many questions are beyond the realm of models
and modellers, being essentially
– ethical
– political
– social
– semantic
– symbolic
• Applying models to these (outside the walls of
our academic asylum) can confuse and distract
Slide 48
The uncertainty is too great
• The achievable reliability of outcome values is too
low for the purpose
• This can be due to the data or to the model
• Radical uncertainty is when it is not a question of
degree, but the situation might fundamentally
change or be different from the model
• Error estimation is only valid in the absence of
radical uncertainty (and radical uncertainty is
present in almost all ecological, technical or
social situations)
• We just have to be honest about this and not only
present ‘best case’ results
Slide 49
A false sense of security
• If the outcomes of a model give a false sense of
certainty then the model can be worse than
useless; positively damaging to policy
• Better to err on the side of caution and say there
is no good model in this case
• Even if you are optimistic for a particular model
• Distinction here between probabilistic and
possibilistic views
Slide 50
Not more facts, but values!
• Sometimes it is not facts and projections that are
the issue but values
• However good models are, the ‘engineering’
approach to policy (enumerate policies, predict
impact of each, choose best policy) might be
inappropriate
• Modellers caught on the wrong side of history
may be blamed even though they were just doing
the technical parts
Slide 51
The End
Bruce Edmonds: bruce@edmonds.name
Centre for Policy Modelling: http://cfpm.org
A version of these slides will be in the shared dropbox
folder and at:
http://slideshare.com/BruceEdmonds